NVIDIA has continuously reinvented itself over two decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing. NVIDIA is a “learning machine” that constantly evolves by adapting to new opportunities that are hard to solve, that only we can tackle, and that matter to the world. This is our life’s work: to amplify human inventiveness and intelligence.
We are seeking an innovative Senior Timing Methodology Engineer to help drive timing sign-off and silicon correlation strategies for the world's leading GPUs and SoCs. This position is a broad opportunity to optimize performance, yield, and reliability through increasingly comprehensive modeling, informative analysis, and automation. This work will influence the entire next-generation computing landscape through critical contributions across NVIDIA's many product lines, ranging from consumer graphics to self-driving cars and the growing domain of artificial intelligence! We have crafted a team of highly motivated people whose mission is to push the frontiers of what is possible today and define the platform for the future of computing. If you are fascinated by the immense precision, craftsmanship, and artistry required to make billions of transistors function on every die at technology nodes as advanced as 3 nm and beyond, this is an ideal role.
What you will be doing:
Research and develop highly scalable simulation tools and methodologies that model silicon behavior, enabling accurate silicon V-F projection and detailed correlation with pre-silicon simulation.
Collaborate with technology leads, circuits and systems teams, VLSI physical design, and timing engineers to define and deploy sophisticated strategies that improve design performance, predictability, and silicon reliability beyond what industry-standard tools can offer.
Understand corner-case timing sign-off risks in the latest 3 nm and more advanced technology nodes, and develop strategies to mitigate and margin for them.
What we need to see:
PhD or master’s degree in Electrical Engineering, Computer Science, or Computer Engineering, or equivalent experience.
6+ years of relevant work experience in timing methodology and/or silicon data analysis and correlation roles.
Deep understanding of silicon data, jitter, FET device models, SPICE, and advanced STA.
Expertise in Python, statistics, data science, and ML modeling is highly preferred.
Strong communication and interpersonal skills.
With competitive salaries and a generous benefits package, NVIDIA is widely considered one of the technology world’s most desirable employers. We welcome you to join a team of some of the most hard-working people in the world, working together to drive rapid growth. Are you passionate about becoming part of a best-in-class team supporting the latest in GPU and AI technology? If so, we want to hear from you!
The base salary range is 156,000 USD to 247,250 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and is proud to be an equal opportunity employer. Because we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.