Annapurna Labs was a startup company acquired by AWS in 2015 and is now fully integrated. If AWS is an infrastructure company, then think of Annapurna Labs as the infrastructure provider of AWS. Our organization covers multiple disciplines including silicon engineering, hardware design and verification, software, and operations. AWS Nitro, ENA, EFA, Graviton and F1 EC2 instances, AWS Neuron, the Inferentia and Trainium ML accelerators, and scalable NVMe storage are some of the products we have delivered over the last few years.

The Product: AWS Machine Learning accelerators are at the forefront of AWS innovation. The Inferentia chip delivers best-in-class ML inference performance at the lowest cost in the cloud. Trainium will deliver best-in-class ML training performance with the most teraflops (TFLOPS) of compute power for ML in the cloud. This is all enabled by a cutting-edge software stack, the AWS Neuron Software Development Kit (SDK), which includes an ML compiler and runtime and integrates natively with popular ML frameworks such as PyTorch, TensorFlow, and MXNet. AWS Neuron and Inferentia are used at scale by customers such as Snap, Autodesk, Amazon Alexa, and Amazon Rekognition, as well as by customers in various other segments.

The Team: The Amazon Annapurna Labs team is responsible for building innovation in silicon and software for AWS customers. We are at the forefront of innovation by combining cloud scale with the world's most talented engineers. Our team covers multiple disciplines including silicon engineering, hardware design and verification, software, and operations. Because of our team's breadth of talent, we have been able to improve AWS cloud infrastructure in networking and security with products such as AWS Nitro, the Elastic Network Adapter (ENA), and the Elastic Fabric Adapter (EFA); in compute with AWS Graviton and the EC2 F1 FPGA instances; in storage with scalable NVMe; and now in AI and machine learning with the AWS Neuron SDK and the Inferentia and Trainium ML accelerators.

You: In this customer-facing role, you will work closely with our Neuron software development team and strategic customers on cutting-edge accelerated machine learning solutions. You will bring your hands-on experience developing and deploying deep learning models, and apply it to integrating our ML accelerator products into large-scale production applications. You will need to be technically capable and credible in your own right to become a trusted advisor for customers developing, deploying, and scaling deep learning applications on AWS ML accelerators. You'll succeed in this position if you enjoy capturing and sharing best practices and insights, and helping shape how AWS ML accelerator technology gets used. You will be a hands-on partner to AWS service teams, technical field communities, sales, marketing, business development, and professional services to drive adoption. You'll leverage your communication skills, and the technical depth behind them, to amplify thought leadership around the AWS Neuron technology stack across the broader AWS field community as well as our customers.

Roles & Responsibilities:
- Design architectures and own Proof of Concept (PoC) solutions for strategic customers, leveraging AWS ML accelerator technologies and the broader set of AWS features and services.
- Drive adoption by taking ownership of technical engagements with ecosystem partners and strategic customers, assisting with the definition and implementation of technical roadmaps and enabling them to deploy successfully on AWS ML accelerators.
- Develop strong partnerships with engineering organizations, serving as the customer advocate to help drive the product roadmap, working backwards from customer feedback.
- Drive thought leadership by crafting and delivering compelling, audience-specific messaging artifacts (product videos, demos, workshops, how-to guides, etc.) that present AWS ML accelerator technology through AWS blogs, reference architectures and solutions, and public-speaking events.
- Capture, implement, and share best-practice knowledge regarding AWS ML accelerators among the AWS technical community.

About Us

Inclusive Team Culture
Here at AWS, we embrace our differences. We are committed to furthering our culture of inclusion. We have ten employee-led affinity groups, reaching 40,000 employees in over 190 chapters globally. We have innovative benefit offerings, and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences. Amazon's culture of inclusion is reinforced within our 14 Leadership Principles, which remind team members to seek diverse perspectives, learn and be curious, and earn trust.

Work/Life Balance
Our team puts a high value on work-life balance. It isn't about how many hours you spend at home or at work; it's about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to lifelong happiness and fulfillment. We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives.

Mentorship & Career Growth
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge sharing and mentorship. Our senior members enjoy one-on-one mentoring. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded engineer and enable them to take on more complex tasks in the future.

We are open to hiring candidates to work out of one of the following locations: Cupertino, CA, USA | Seattle, WA, USA

Basic qualifications
- 8+ years of experience in specific technology domain areas (e.g., software development, cloud computing, systems engineering, infrastructure, security, networking, data & analytics)

Preferred qualifications
- 3+ years of experience in design, implementation, or consulting for applications and infrastructure

Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. For individuals with disabilities who would like to request an accommodation, please visit https://www.amazon.jobs/en/disability/us.

Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $122,900/year in our lowest geographic market up to $239,000/year in our highest geographic market. Pay is based on a number of factors, including market location, and may vary depending on job-related knowledge, skills, and experience.
Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits. This position will remain posted until filled. Applicants should apply via our internal or external career site.