
Luma AI

Lead Infrastructure and Reliability Engineer (Systems & Scale)

Reposted 23 Days Ago
Remote or Hybrid
Hiring Remotely in CA
Senior level
About Luma AI
A new class of intelligence is emerging: systems that understand and generate the world across video, images, audio, and language.

Building multimodal AGI is not just a modeling challenge. It is an infrastructure challenge at the edge of what hardware, software, and organizations can support.

At Luma, we operate rapidly scaling 10k+ GPU fleets, pushing utilization, throughput, and reliability hard enough that yesterday’s solutions break regularly. Researchers depend on this infrastructure to move the frontier forward. Customers depend on it to power real creative work.

Many companies run accelerators. Very few sit directly next to the teams inventing the models that redefine what those accelerators must do.

At Luma, improvements to scheduling, efficiency, and reliability immediately translate into faster research iteration and entirely new product capabilities.

We are still early. The playbook is still being written. A single exceptional engineer can reshape how the company operates.

Where You Come In
Our Infrastructure Engineering team is a systems engineering group with company-level responsibility. At Luma, reliability engineers work directly with the researchers and products pushing the limits of multimodal intelligence.

We operate close to the metal:
  • Kernels
  • Containers
  • Schedulers
  • Networking
  • Storage
  • GPU behavior

But we are also responsible for something bigger:

Turning deep systems knowledge into repeatable, scalable reliability for the entire company. We are hiring a leader who will define that direction. You will be a technical authority, an organizational force multiplier, and a magnet for other great engineers.

What You’ll Own

Reliability of the Frontier
  • Architect and operate large, heterogeneous GPU environments under extreme demand
  • Improve utilization and performance where small gains materially change company outcomes
  • Resolve failures that span hardware, OS, runtimes, and orchestration
  • Eliminate entire classes of instability
  • Build mechanisms that make heroics unnecessary

Scaling Training & Inference
  • Define how infrastructure and workloads evolve as cluster size and concurrency grow
  • Design scheduling, placement, and resource management approaches for increasingly complex jobs
  • Work directly with research to build the systems required for new model capabilities
  • Ensure inference platforms scale rapidly without sacrificing reliability or latency
  • Anticipate where today’s abstractions will fail and redesign ahead of them

Building the Organization
  • Hire and develop exceptional systems and reliability engineers
  • Set the bar for technical depth, judgment, and production ownership
  • Shape architecture early through strong partnerships with research and product
  • Translate reliability constraints into long-term platform strategy

Who You Are

Required:
  • Deep expertise in Linux and distributed systems
  • Experience operating GPU / accelerator clusters in real production environments
  • Strong fluency in Kubernetes and modern open-source infrastructure
  • Comfortable debugging across hardware → kernel → runtime → orchestration
  • You understand how systems behave under contention and at scale
  • You write code and build automation
  • You think in bottlenecks, failure modes, and tradeoffs
  • Engineers trust your judgment, especially when things break

Important: This role requires comfort operating close to upstream and close to the metal. If most of your experience has been inside highly abstracted internal platforms where others owned the underlying machinery, this is unlikely to be a match.

Leadership Expectations
  • You raise reliability standards across the company
  • You influence product and research architecture early
  • You build strong partnerships, not ticket queues
  • You attract and level up exceptional engineers
  • You are curious how models use infrastructure, because improving systems expands what becomes possible

Why This Role Is Special
Most infrastructure roles optimize mature systems. This one helps define how reliability works for a new generation of AI infrastructure.

The decisions you make here will influence:
  • How research progresses
  • How products scale
  • How customers trust us
  • And how the engineering organization grows

If you want to build the reliability foundations of a company operating at the technological frontier, we should talk.
Compensation
The base pay range for this role is $230,000 – $360,000 per year.
About Luma

Luma’s mission is to build unified general intelligence that can generate, understand, and operate in the physical world.

We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, the next step-function change will come from vision. So, we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.

Top Skills

Containers
Distributed Systems
GPU
Kubernetes
Linux
Networking
Orchestration
Storage


