Cerebras Systems

Lead Product Manager, Inference Cloud

In-Office or Remote
2 Locations
Mid level

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.  

Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

About The Role  

As a Lead Product Manager on our Inference Cloud team, you will define how developers harness the power of Cerebras wafer-scale AI speed to create the next generation of AI applications.

You will be responsible for setting and driving product strategy, roadmap, and requirements for Cerebras’ Inference Cloud API – the front door that delivers Cerebras’ wafer-scale inference speed to developers, enterprises, and platform partners around the world. You will shape how developers interact with the world's fastest GenAI models through our API, lead third-party (3P) integrations with open-source community frameworks, and invent the future of how developers can creatively leverage instant AI speed for their applications.
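For illustration, here is a minimal sketch of what a developer-facing call to such an API could look like, assuming an OpenAI-compatible chat-completions interface; the base URL, model name, and environment variable are placeholders rather than confirmed details of the Cerebras Inference Cloud API.

    # Illustrative sketch only: endpoint URL, model name, and env var are assumptions.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.cerebras.ai/v1",   # assumed OpenAI-compatible endpoint
        api_key=os.environ["CEREBRAS_API_KEY"],  # hypothetical environment variable
    )

    # Streaming makes the speed story tangible: time-to-first-token and tokens/sec.
    stream = client.chat.completions.create(
        model="llama-3.3-70b",                   # placeholder model name
        messages=[{"role": "user", "content": "Explain wafer-scale inference in one sentence."}],
        stream=True,
    )
    for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="", flush=True)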

In your role, you will work closely with developers every single day, growing adoption across start-ups and enterprises alongside our world-class GTM team.  

You should have a proven track record of working with developers as your users, empathizing with and understanding their needs, and building products they love.  

Leveling for this role will depend on the applicant's experience and can be adjusted within the Senior to Principal PM range.

Responsibilities 

  • Define and own the product vision, strategy, and roadmap for the Cerebras Inference API – balancing rapid iteration with long-term platform evolution to build the premier inference offering for the most ambitious and valuable AI applications.
  • Conduct user research and analyze usage data and feedback to uncover insights on user pain points, measure product success, and discover new opportunities. 
  • Drive strategic customer engagements to showcase the value of ultra-fast inference. 
  • Build open-source integrations that amplify the accessibility and capability of our solution. 
  • Lead cross-functional go-to-market execution with Sales, DevRel, Customer Success, and Marketing to deliver seamless user experiences and drive adoption. 
  • Stay on the cutting edge of API, developer-tools, and AI-infrastructure trends to keep Cerebras’ offering best-in-class. 

Skills And Qualifications 

  • 3-8+ years of experience as a Product Manager working on developer-focused SaaS products or cloud platforms. Hands-on experience launching or operating an inference, PaaS, or high-QoS API is a plus (e.g., Amazon Bedrock, Vertex AI).
  • Strong technical background: able to understand API architecture and partner closely with engineers; prior SWE experience is a plus.
  • Strong grasp of GenAI models, their strengths and weaknesses, and their performance and cost drivers (latency, context length, batch size, rate limits); a rough worked example follows this list.
  • Successfully launched zero-to-one products that found distribution or commercial success, with experience gathering feedback and usage data from users and running beta tests.
  • Stellar written/verbal communication; comfortable presenting to execs, customers, and developer communities.
  • Ability to set clear KPIs and make data-driven tradeoffs. Experience instrumenting usage metrics, A/B tests, and setting goals. 
  • Ability to excel amidst ambiguity, and to figure out how to solve complex new problems with simple, elegant solutions.
  • Application-minded and passionate about working with customers to transform the future of AI with order-of-magnitude faster inference speeds.
  • BS/MS in CS, EE, or related; MBA a plus.
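
To make the performance and cost drivers above concrete, here is a rough back-of-envelope model relating per-request latency and cost to output length, decode throughput, and token pricing; every number below is hypothetical and purely illustrative.

    # Illustrative latency/cost model; all figures are hypothetical.
    def estimate_request(prompt_tokens: int, output_tokens: int,
                         ttft_s: float, decode_tok_per_s: float,
                         usd_per_m_input: float, usd_per_m_output: float):
        """Rough end-to-end latency and cost for a single chat completion."""
        latency_s = ttft_s + output_tokens / decode_tok_per_s
        cost_usd = (prompt_tokens * usd_per_m_input +
                    output_tokens * usd_per_m_output) / 1_000_000
        return latency_s, cost_usd

    # e.g. a 2k-token prompt, 500-token answer, 0.2 s time-to-first-token,
    # 1,000 tokens/s decode, and $0.60 / $1.20 per million input/output tokens
    latency, cost = estimate_request(2_000, 500, 0.2, 1_000, 0.60, 1.20)
    print(f"~{latency:.2f} s end-to-end, ~${cost:.4f} per request")

In this toy model, doubling decode throughput halves the generation portion of latency, which is the intuition behind pitching ultra-fast inference for latency-sensitive applications.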

Assets

  • Passion and ability to rapidly prototype new AI use case demos.
  • Experience building products in the AI space.
  • Experience with both consumer and developer audiences.
  • Familiarity with OSS inference stacks (vLLM, SGLang, Dynamo); see the sketch after this list.
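
As a companion to the last item above, here is a minimal sketch of talking to an OSS inference stack, assuming a vLLM server is already running locally with its OpenAI-compatible API on the default port; the model name is a placeholder.

    # Assumes a local vLLM server was started beforehand, e.g.:
    #   vllm serve meta-llama/Llama-3.1-8B-Instruct
    # Model name and port are illustrative defaults, not requirements.
    import requests

    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",
        json={
            "model": "meta-llama/Llama-3.1-8B-Instruct",
            "messages": [{"role": "user", "content": "Say hello in five words."}],
            "max_tokens": 32,
        },
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])
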
Why Join Cerebras

People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:

  1. Build a breakthrough AI platform beyond the constraints of the GPU.
  2. Publish and open source their cutting-edge AI research.
  3. Work on one of the fastest AI supercomputers in the world.
  4. Enjoy job stability with startup vitality.
  5. Enjoy our simple, non-corporate work culture that respects individual beliefs.

Read our blog: Five Reasons to Join Cerebras in 2025.

Apply today and become part of the forefront of groundbreaking advancements in AI!

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.


Top Skills

APIs
Cloud Platforms
GenAI
OSS Inference Stacks
SaaS
