NVIDIA

Software Engineer, AI Systems - vLLM and MLPerf

In-Office or Remote
2 Locations
Senior level
Design and implement efficient inference systems for generative AI, benchmark performance, optimize system components, and collaborate on model deployment.

We are seeking highly skilled and motivated software engineers to join our vLLM & MLPerf team. You will define and build benchmarks for MLPerf Inference, the industry-leading benchmark suite for system-level inference performance, and contribute to vLLM, optimizing its performance for those benchmarks on bleeding-edge NVIDIA GPUs.

What you’ll be doing:

  • Design and implement highly efficient inference systems for large-scale deployments of generative AI models.

  • Define inference benchmarking methodologies and build tools that will be adopted across the industry (a minimal timing-harness sketch follows this list).

  • Develop, profile, debug, and optimize low-level system components and algorithms to improve throughput and minimize latency for the MLPerf Inference benchmarks on bleeding-edge NVIDIA GPUs.

  • Productionize inference systems with uncompromising software quality.

  • Collaborate with researchers and engineers to productionize innovative model architectures, inference techniques, and quantization methods.

  • Contribute to the design of APIs, abstractions, and UX that make it easier to scale model deployment while maintaining usability and flexibility.

  • Participate in design discussions, code reviews, and technical planning to ensure the product aligns with the business goals.

  • Stay up to date with the latest advances in system-level inference optimization, propose novel research ideas, and translate them into practical, robust systems. Exploration and academic publication are encouraged.
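
As a rough illustration of the benchmarking work described above, the sketch below is a minimal Python timing harness that measures throughput and latency percentiles for a generic generate() callable. It is not NVIDIA's or MLPerf's actual harness (MLPerf Inference submissions are built on the far more involved LoadGen library), and run_benchmark and the stub model call are hypothetical names used only for illustration.

    # Minimal, illustrative latency/throughput harness; not the MLPerf LoadGen API.
    import statistics
    import time

    def run_benchmark(generate, prompts):
        """Time each request and report throughput plus latency percentiles."""
        latencies = []
        start = time.perf_counter()
        for prompt in prompts:
            t0 = time.perf_counter()
            generate(prompt)  # the inference call being measured
            latencies.append(time.perf_counter() - t0)
        elapsed = time.perf_counter() - start
        latencies.sort()
        return {
            "throughput_req_per_s": len(prompts) / elapsed,
            "p50_ms": 1000 * statistics.median(latencies),
            "p99_ms": 1000 * latencies[int(0.99 * (len(latencies) - 1))],
        }

    if __name__ == "__main__":
        # Stub model call so the sketch runs standalone; swap in a real engine.
        print(run_benchmark(lambda p: time.sleep(0.01), ["hello"] * 100))

Real MLPerf Inference benchmarks also sweep load scenarios (Offline, Server, and others) and validate accuracy, but the core loop of issuing requests and aggregating latency statistics is the same idea.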

What we need to see:

  • Bachelor’s, Master’s, or PhD degree in Computer Science/Engineering, Software Engineering, a related field, or equivalent experience.

  • 5+ years of experience in software development, preferably with Python and C++.

  • Deep understanding of deep learning algorithms, distributed systems, parallel computing, and high-performance computing principles.

  • Hands-on experience with ML frameworks (e.g., PyTorch) and inference engines (e.g., vLLM and SGLang); a small vLLM usage sketch follows this list.

  • Experience optimizing compute, memory, and communication performance for deployments of large models.

  • Familiarity with GPU programming, CUDA, NCCL, and performance profiling tools.

  • Ability to work closely with both research and engineering teams, translating state-of-the-art research ideas into concrete designs and robust code, and proposing novel research ideas of your own.

  • Excellent problem-solving skills, with the ability to debug complex systems.

  • A passion for building high-impact software that pushes the boundaries of what’s possible with large-scale AI.
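
For context on the inference-engine experience mentioned above, the snippet below shows vLLM's offline generation API in its simplest form. The model name and sampling settings are arbitrary examples, and the exact API surface may differ across vLLM versions.

    # A small sketch of vLLM's offline generation API; model and settings are examples.
    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")  # any Hugging Face-compatible model ID
    params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    outputs = llm.generate(["The key to fast LLM inference is"], params)
    for out in outputs:
        print(out.outputs[0].text)

Much of the optimization work in this role happens beneath this interface, in scheduling, KV-cache management, kernels, and multi-GPU communication.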

Ways to stand out from the crowd:

  • Background in building and optimizing LLM inference engines such as vLLM and SGLang.

  • Experience building ML compilers such as Triton or Torch Dynamo/Inductor.

  • Experience working with cloud platforms (e.g., AWS, GCP, or Azure), containerization tools (e.g., Docker), and orchestration infrastructures (e.g., Kubernetes, Slurm).

  • Exposure to DevOps practices, CI/CD pipelines, and infrastructure as code.

  • Contributions to open-source projects (please provide a list of the GitHub PRs you submitted).

At NVIDIA, we believe artificial intelligence (AI) will fundamentally transform how people live and work. Our mission is to advance AI research and development to create groundbreaking technologies that enable anyone to harness the power of AI and benefit from its potential. Our team consists of experts in AI, systems, and performance optimization. Our leadership includes world-renowned experts in AI systems who have received multiple academic and industry research awards. If you've hacked the inner workings of PyTorch, written many CUDA/HIP kernels, developed and optimized inference services or training workloads, built and maintained large-scale Kubernetes clusters, or if you simply enjoy solving hard problems, we'd love to see your application!

#LI-Hybrid

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 116,250 CAD - 201,500 CAD for Level 3, and 142,500 CAD - 247,000 CAD for Level 4.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until October 12, 2025.

Top Skills

C++
CUDA
Docker
Kubernetes
NCCL
Python
PyTorch
SGLang
Slurm
vLLM
