The role involves refining and personalizing Luma's multimodal AI models, improving their controllability and adaptability for creative workflows, while collaborating with cross-functional teams.
About Luma AI
Luma's mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, the next step function change will come from vision. So, we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.
About the Role
This is a foundational opportunity to refine and personalize Luma’s foundation models, building the capabilities and control interfaces that deliver real-world value.
You’ll sit at the intersection of research, product, and partnerships, helping close the gap between state-of-the-art research and production-ready products. Your mission is to make our video foundation models more expressive, controllable, and personalized – solving the “last mile” challenges demanded by top-tier creative workflows.
What You'll Do
You will work as a full-stack applied researcher across modeling, data, systems, and evaluation to adapt models and deploy them to production.
- Controllability and Features: You will leverage a toolkit spanning supervised fine-tuning (SFT), reinforcement learning (RL), personalization, distillation, control adapters, and more to develop and maintain model variants purpose-built for user environments and creative partners.
- Personalization: You will architect the data engine for rapid adaptation, leveraging proprietary, vertical-specific datasets to create specialized fine-tunes and improve future training recipes, ensuring our models rely on data that reflects real-world use cases.
- End-User Quality: You will define and drive end-user quality – setting success metrics, building user-aligned evaluations, and iterating on the model/data/evals loop to meet strict fidelity and reliability targets in specific enterprise verticals.
- Cross-functional Collaboration: Partner closely with Product, Research, and Design to translate creative intent and user feedback into model behavior, intuitive controls, and production-ready capabilities for users and partners.
Who You Are
- Product-Obsessed Researcher/Engineer: You treat end users and partners as collaborators and enjoy solving specific “last mile” problems—not just optimizing public metrics.
- ML Expert: Strong ML fundamentals with deep experience in visual generative models (diffusion/transformer or related architectures). Ideal candidates also have a deep understanding of at least one of the following: fine-tuning, personalization, domain adaptation, data curation, targeted distillation, interpretability, or human-feedback-driven refinement.
- Hands-On Builder: Strong Python and deep learning engineering skills (ideally PyTorch), comfortable moving between research prototypes and production systems.
- Contributions to state-of-the-art models in image/video generation.
- Experience collaborating with creative partners (VFX, animation, film, design tools).
- Track record building workflows/tools that materially improve iteration speed and evaluation rigor.
- Familiarity with large-scale training infrastructure and distributed systems (Ray, Slurm, Kubernetes).
Top Skills: Kubernetes, Python, PyTorch, Ray, Slurm