d-Matrix

ML Compiler Engineer, Staff

Toronto, Ontario, Canada

Compensation: Not Specified
Experience Level: Senior (5 to 8 years)
Job Type: Full Time
Visa: Unknown
Industries: AI & Machine Learning, Hardware

Requirements

Candidates must have a Bachelor's degree in Computer Science with at least 7 years of relevant industry experience, or a Master's degree in Computer Science with at least 5 years. Proficiency in delivering production-quality code in modern C++ is required, along with experience in modern compiler infrastructures such as LLVM or MLIR. Familiarity with machine learning frameworks and interfaces such as ONNX, TensorFlow, and PyTorch is essential, as is experience in production compiler development. Preferred qualifications include strong algorithm design skills and experience with relevant open-source ML projects such as Torch-MLIR, ONNX-MLIR, Caffe, or TVM.

Responsibilities

The ML Compiler Engineer will develop the compiler backend, focusing on assigning hardware resources in a spatial architecture to execute low-level instructions. The role involves solving algorithmic compiler problems while learning the intricate details of the underlying hardware and software architectures. The engineer will work on model partitioning, tiling, resource allocation, memory management, scheduling, and optimization for latency, bandwidth, and throughput (tiling is sketched below), collaborating with a team of experienced compiler developers.
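
To make "tiling" concrete: below is a minimal, generic C++ sketch, not d-Matrix's actual toolchain (the kTile constant and matmul_tiled function are illustrative names). It blocks a matrix multiplication so each tile's working set can stay resident in fast local memory, the kind of transformation that trades a little loop overhead for better bandwidth and latency.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical tile size; a real compiler would derive this from the
// target's local memory capacity rather than hard-coding it.
constexpr std::size_t kTile = 32;

// Tiled matrix multiply: C += A * B for n x n row-major matrices.
void matmul_tiled(const std::vector<float>& a, const std::vector<float>& b,
                  std::vector<float>& c, std::size_t n) {
  for (std::size_t i0 = 0; i0 < n; i0 += kTile)
    for (std::size_t j0 = 0; j0 < n; j0 += kTile)
      for (std::size_t k0 = 0; k0 < n; k0 += kTile)
        // Within one tile, the working set (three kTile x kTile blocks)
        // is small enough to stay in fast local memory.
        for (std::size_t i = i0; i < std::min(i0 + kTile, n); ++i)
          for (std::size_t j = j0; j < std::min(j0 + kTile, n); ++j) {
            float acc = c[i * n + j];
            for (std::size_t k = k0; k < std::min(k0 + kTile, n); ++k)
              acc += a[i * n + k] * b[k * n + j];
            c[i * n + j] = acc;
          }
}
```

In a production compiler, tile sizes and loop orders would be chosen per target from the memory hierarchy, often alongside the partitioning, scheduling, and allocation decisions the role describes.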

Skills

C++
LLVM
MLIR
ONNX
TensorFlow
PyTorch
Compiler Development
Algorithm Design
Torch-MLIR
ONNX-MLIR
Caffe
TVM
Model Partitioning
Tiling
Resource Allocation
Memory Management
Scheduling
Optimization

d-Matrix

AI compute platform for datacenters

About d-Matrix

d-Matrix focuses on improving the efficiency of AI computing for large datacenter customers. Its main product is the digital in-memory compute (DIMC) engine, which embeds compute directly within programmable memory. This design reduces power consumption and increases data-processing speed while preserving accuracy. d-Matrix differentiates itself from competitors with a modular, scalable approach built on low-power chiplets that can be tailored to different applications. The company's goal is to provide high-performance, energy-efficient AI inference solutions to large-scale datacenter operators.

Key Metrics

Headquarters: Santa Clara, California
Year Founded: 2019
Total Funding: $149.8M
Company Stage: Series B
Industries: Enterprise Software, AI & Machine Learning
Employees: 201-500

Benefits

Hybrid Work Options

Risks

Competition from Nvidia, AMD, and Intel may pressure d-Matrix's market share.
Complex AI chip design could lead to delays or increased production costs.
Rapid AI innovation may render d-Matrix's technology obsolete if not updated.

Differentiation

d-Matrix's DIMC engine integrates compute into memory, enhancing efficiency and accuracy.
The company offers scalable AI solutions through modular, low-power chiplets.
d-Matrix focuses on brain-inspired AI compute engines for diverse inferencing workloads.

Upsides

Growing demand for energy-efficient AI solutions boosts d-Matrix's low-power chiplets appeal.
Existing partnerships with companies like Microsoft could develop into broader strategic alliances.
Increasing adoption of modular AI hardware in data centers benefits d-Matrix's offerings.
