d-Matrix

ML Compiler Software Engineering Technical Lead

Santa Clara, California, United States

Compensation: $196,000 – $300,000
Experience Level: Senior (5 to 8 years), Expert & Leadership (9+ years)
Job Type: Full Time
Visa: Unknown
Industries: AI & Machine Learning, Hardware, Robotics & Automation

Requirements

Candidates should have a BS or MS in Computer Science or an equivalent degree, with at least 10 years of experience in ML compiler development. Experience with AI compiler projects such as TVM, Glow, or MLIR is essential, along with familiarity with the LLVM project. A background in establishing and growing software engineering teams is required, as well as experience leading agile development practices, including coordinating scrums and managing project tasks.

Responsibilities

The ML Compiler Software Engineering Technical Lead will drive the design and implementation of the MLIR-based compiler framework. This includes overseeing the development of a compiler that partitions and maps large-scale NLP models onto a multi-chiplet, parallel processing architecture. The role requires coordinating task scheduling, data movement, and inter-processor synchronization, as well as collaborating with hardware and software architecture, data science, AI kernel, and testing teams to ensure the overall efficiency of the solution.
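To make the partitioning and scheduling responsibilities concrete, here is a minimal, illustrative sketch (not d-Matrix's actual compiler, and all names are hypothetical) of splitting a sequence of model layers across chiplets with a simple cost model, then emitting a schedule with inter-chiplet data-movement steps:

```python
# Illustrative sketch only: contiguous partitioning of model layers
# across chiplets, plus a naive pipeline schedule with synchronization
# (data movement) between consecutive chiplets.

from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    flops: int  # rough cost estimate used for load balancing


def partition(layers, num_chiplets):
    """Greedy contiguous split into num_chiplets groups of roughly
    equal total cost (a stand-in for a real cost model)."""
    total = sum(l.flops for l in layers)
    target = total / num_chiplets
    groups, current, acc = [], [], 0
    for layer in layers:
        current.append(layer)
        acc += layer.flops
        if acc >= target and len(groups) < num_chiplets - 1:
            groups.append(current)
            current, acc = [], 0
    groups.append(current)
    return groups


def schedule(groups):
    """Emit an ordered schedule: each chiplet runs its layer group,
    with an activation transfer between consecutive chiplets."""
    steps = []
    for i, group in enumerate(groups):
        for layer in group:
            steps.append(f"chiplet{i}: run {layer.name}")
        if i < len(groups) - 1:
            steps.append(f"chiplet{i}->chiplet{i+1}: transfer activations")
    return steps


layers = [Layer(f"block{i}", flops=1) for i in range(8)]
groups = partition(layers, num_chiplets=4)
print([len(g) for g in groups])   # -> [2, 2, 2, 2]
print(schedule(groups)[0])        # -> "chiplet0: run block0"
```

A production compiler would of course work on an IR graph (e.g. MLIR) rather than a flat layer list, and would co-optimize partitioning with tiling, padding, and memory placement, but the structure of the problem is the same.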

Skills

MLIR
LLVM
TVM
Glow
Graph Optimization
Compiler Design
Parallel Processing
Agile Development
Scrum
Kanban
Data Reshaping
Padding
Tiling
LLM
PyTorch

d-Matrix

AI compute platform for datacenters

About d-Matrix

d-Matrix focuses on improving the efficiency of AI computing for large datacenter customers. Its main product is the digital in-memory compute (DIMC) engine, which integrates compute directly into programmable memory. This design reduces power consumption and increases data processing speed while preserving accuracy. d-Matrix differentiates itself from competitors by offering a modular, scalable approach built on low-power chiplets that can be tailored to different applications. The company's goal is to provide high-performance, energy-efficient AI inference solutions to large-scale datacenter operators.

Key Metrics

Headquarters: Santa Clara, California
Year Founded: 2019
Total Funding: $149.8M
Company Stage: Series B
Industries: Enterprise Software, AI & Machine Learning
Employees: 201-500

Benefits

Hybrid Work Options

Risks

Competition from Nvidia, AMD, and Intel may pressure d-Matrix's market share.
Complex AI chip design could lead to delays or increased production costs.
Rapid AI innovation may render d-Matrix's technology obsolete if not updated.

Differentiation

d-Matrix's DIMC engine integrates compute into memory, enhancing efficiency and accuracy.
The company offers scalable AI solutions through modular, low-power chiplets.
d-Matrix focuses on brain-inspired AI compute engines for diverse inferencing workloads.

Upsides

Growing demand for energy-efficient AI solutions boosts d-Matrix's low-power chiplets appeal.
Partnerships with companies like Microsoft could lead to strategic alliances.
Increasing adoption of modular AI hardware in data centers benefits d-Matrix's offerings.
