JOB DETAILS

New Grad - ML Stack Optimization Engineer

Company: Cerebras
Location: Toronto
Work Mode: On Site
Posted: April 15, 2026
About The Company
Cerebras Systems delivers the world's fastest AI inference, powering the future of generative AI. We're a team of pioneering computer architects, deep learning researchers, and engineers building a new class of AI supercomputers from the ground up. Our flagship system, the Cerebras CS-3, is powered by the Wafer Scale Engine 3, the world's largest and fastest AI processor. CS-3s cluster effortlessly to create the largest AI supercomputers on Earth while abstracting away the complexity of traditional distributed computing. From sub-second inference speeds to breakthrough training performance, Cerebras makes it easier to build and deploy state-of-the-art AI, from proprietary enterprise models to open-source projects downloaded millions of times.

Here's what makes our platform different:
  • 🔦 Sub-second reasoning – instant intelligence and real-time responsiveness, even at massive scale
  • ⚡ Blazing-fast inference – up to 100x performance gains over traditional AI infrastructure
  • 🧠 Agentic AI in action – models that can plan, act, and adapt autonomously
  • 🌍 Scalable infrastructure – built to move from prototype to global deployment without friction

Cerebras solutions are available in the Cerebras Cloud or on-prem, serving leading enterprises, research labs, and government agencies worldwide.

👉 Learn more: www.cerebras.ai
Join us: https://cerebras.net/careers/
About the Role

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.  

Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of scale, transforming key workloads with ultra-high-speed inference.

Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.

Job Overview

We are seeking a highly skilled Compiler Engineer with a passion for optimizing compiler technologies for AI workloads. You will be an integral part of our software compiler stack team, focusing on enhancing our compiler to fully leverage the unique capabilities of our CS-3 system. Your work will play a critical role in achieving unprecedented levels of performance, efficiency, and scalability for AI applications.

Key Responsibilities
  • Design, develop, and optimize compiler technologies for AI chips using LLVM and MLIR frameworks.
  • Identify and address performance bottlenecks, ensuring optimal resource utilization and execution efficiency.
  • Work with the machine learning team to integrate compiler optimizations with AI frameworks and applications.
  • Contribute to the advancement of compiler technologies by exploring new ideas and approaches.

Qualifications
  • Master’s degree in Computer Science, Electrical Engineering, or a related field required.
  • Proficiency in C/C++ programming and experience with low-level optimization.
  • Proficiency in Python programming.
  • Strong background in optimization techniques, particularly those involving NP-hard problems.
  • Familiarity with any of the following is a plus:
    • The Satisfiability Problem
    • Integer-Linear Programming
    • Constraint Satisfaction Problems
  • Familiarity with MLIR is a plus.
  • Excellent problem-solving skills and a strong analytical mindset.
  • Ability to work in a fast-paced, collaborative environment.
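To make the optimization qualifications above concrete: compiler problems like instruction scheduling and resource assignment are often posed as constraint satisfaction problems of the kind this role mentions. The following is a purely illustrative sketch (not Cerebras code, and a brute-force search rather than a production SAT/ILP solver) of a toy CSP: assign each operation to a resource so that no two conflicting operations share one.

```python
from itertools import product

def solve_assignment(ops, resources, conflicts):
    """Brute-force CSP solver for a toy resource-assignment problem.

    Tries every mapping of ops to resources and returns the first one
    where no conflicting pair shares a resource, or None if the
    instance is unsatisfiable. Exponential in len(ops) -- a real
    compiler would hand this to a SAT/ILP/CP solver instead.
    """
    for assignment in product(resources, repeat=len(ops)):
        mapping = dict(zip(ops, assignment))
        if all(mapping[a] != mapping[b] for a, b in conflicts):
            return mapping
    return None

# Toy instance: three ops, two resources, op0 and op1 conflict.
result = solve_assignment(["op0", "op1", "op2"], ["r0", "r1"],
                          [("op0", "op1")])
print(result)
```

The same structure (variables, finite domains, pairwise constraints) is what an ILP or SAT encoding would express declaratively; brute force is used here only to keep the sketch self-contained.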

What We Offer
  • Competitive salary and benefits package.
  • Opportunities for professional growth and career advancement.
  • A dynamic and innovative work environment.
  • The chance to work on cutting-edge technologies and make a significant impact on the future of AI.

Why Join Cerebras

People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:

  1. Build a breakthrough AI platform beyond the constraints of the GPU.
  2. Publish and open source their cutting-edge AI research.
  3. Work on one of the fastest AI supercomputers in the world.
  4. Enjoy job stability with startup vitality.
  5. Enjoy a simple, non-corporate work culture that respects individual beliefs.

Read our blog: Intern at Cerebras

Apply today and become part of the forefront of groundbreaking advancements in AI!


Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.



Key Skills
C++, C, Python, LLVM, MLIR, Compiler optimization, AI workloads, Performance tuning, NP-hard problems, Integer-linear programming, Constraint satisfaction problems, Problem-solving, Analytical mindset
Categories
Technology, Software, Engineering, Data & Analytics
Benefits
Competitive salary, Benefits package, Professional growth opportunities, Career advancement
Job Information
📋 Core Responsibilities
Design, develop, and optimize compiler technologies for AI chips using LLVM and MLIR frameworks. Collaborate with the machine learning team to integrate compiler optimizations and address performance bottlenecks.
📋 Job Type
Full time
📊 Experience Level
0-2 years
💼 Company Size
845
📊 Visa Sponsorship
No
💼 Language
English
🏢 Working Hours
40 hours