
NVIDIA

Senior Systems Software Engineer - Deep Learning Solutions

In-Office or Remote
Hiring Remotely in Toronto, ON
Senior level

NVIDIA is a global leader in physical AI, powering self-driving cars, humanoid robots, intelligent environments, and medical devices. Our software platforms are central to this mission, helping innovators build products that save lives, enhance working conditions, and improve living standards globally. We are hiring a Senior Engineer to join our team as a technical authority in deep learning inference optimization for autonomous vehicles and robotics on edge hardware. This role calls for a hands-on expert who can inspect model architectures down to the operator level, uncover performance bottlenecks through kernel traces, and evaluate how modern architectures (transformers, vision-language models, diffusion/flow matching, state space models) perform on GPUs and SoCs. The work directly advances how autonomous vehicles and robots sense and respond in the real world, with immediate impact.

This group addresses some of the toughest optimization problems in the industry, operating at the crossroads of innovative model architectures, compiler technology, and embedded hardware. We work in close partnership with automotive OEMs, robotics collaborators, and internal hardware teams to expand the limits of what can be achieved on edge devices.

What you'll be doing:

  • Address customer and partner optimization challenges: Engage directly with prominent automotive OEMs and robotics partners to analyze, debug, and improve their deep learning models on NVIDIA platforms. We emphasize delivering solutions rather than just recommendations.

  • Own performance benchmarking: Drive efforts to achieve leading results on MLPerf Edge and industry benchmarks, as well as closed-source engagements with key partners. Define methodology, ensure reproducibility, and turn results into actionable optimization priorities.

  • Evaluate emerging model architectures: Analyze new DL architectures, including vision encoders, multi-modal VLMs, hybrid SSM-Transformer backbones, diffusion/flow matching decoders, and multi-camera tokenizers, for compilation feasibility, memory footprint, and latency on target SOCs.

  • Collaborate across teams: Partner with our compiler, runtime, and hardware teams to connect model-level insight with platform capabilities.

  • Contribute to build reviews and help develop internal roadmap priorities based on real customer workload patterns.

  • Represent NVIDIA externally: Share our deep learning optimization expertise at conferences, webinars, and partner events. Help elevate the broader team by bringing back insights and establishing guidelines.

  • Deliver TensorRT and compiler-stack solutions for edge: Create and deploy inference solutions on Jetson, DRIVE, and GPU + ARM platforms for AV and robotics workloads. Develop Proofs of Readiness (PORs) and work closely with our compiler team on Torch-TRT, MLIR-TRT, and related frameworks to bridge performance gaps.

What we need to see:

  • Master’s degree or equivalent experience in Computer Science, Electrical Engineering, or a related field.

  • 12+ years of industry experience, including over 8 years in deep learning model optimization, inference engineering, or neural network compilation. You should be adept at interpreting and reasoning about model architectures at the operator/kernel level, not just running them.

  • Over 5 years of validated expertise in embedded/edge software, with experience delivering production inference solutions within power-limited, latency-sensitive deployment environments.

  • Deep knowledge of current DL architectures: transformers, attention variants, vision encoders (ViT), multi-modal/vision-language model frameworks, and experience with diffusion models and/or state space models.

  • Expert knowledge of GPU architecture fundamentals, CUDA, and low-level performance optimization using heterogeneous computing. Experience with TensorRT, compiler IRs, or equivalent inference optimization toolchains.

  • Solid understanding of embedded operating system internals (QNX/Linux), memory management, C/C++, and embedded/system software concepts.

  • Background in parallel programming (e.g., CUDA, OpenMP) and experience reasoning about memory hierarchies, data movement, and compute utilization.

  • Demonstrated capability to collaborate directly with external partners and customers in a deep technical role, solving their workload issues, identifying performance problems, and providing solutions within production limitations.

Ways to Stand Out from the Crowd:

  • Experience with ML compiler frameworks (TVM, MLIR, XLA, Triton) or contributing to inference runtime development.

  • Production deployment experience with autonomous vehicle perception or planning stacks, understanding the full pipeline from sensor input through trajectory output.

  • Familiarity with the Physical AI model landscape: VLM + action expert architectures, end-to-end driving models, or robot foundation models.

  • Contributions to MLPerf benchmarks and large-scale industry performance optimization efforts.

  • Experience with automotive safety standards (ISO 26262, SOTIF) and their implications for inference system development.

  • Experience leading technical initiatives across globally distributed engineering teams.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 225,000 CAD - 275,000 CAD.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until March 2, 2026.

This posting is for an existing vacancy. 

NVIDIA uses AI tools in its recruiting processes.

Top Skills

ARM
C
C++
CUDA
Diffusion Models
DRIVE
GPU
Jetson
Linux
MLIR
MLIR-TRT
MLPerf
OpenMP
QNX
State Space Models
TensorRT
Torch-TRT
Transformers
Triton
TVM
Vision Transformer
XLA


