Senior Staff ML Researcher - LLM Algorithmic Optimization at d-Matrix

Bengaluru, Karnataka, India

Compensation: ₹40,000,000 – ₹60,000,000
Experience Level: Senior (5 to 8 years), Expert & Leadership (9+ years)
Job Type: Full Time
Visa: Unknown
Industries: AI, Technology

Requirements

  • Strong mathematical skills
  • MSc or PhD in math, CS, statistics, physics, or a related STEM field
  • Experience in Python and OOP code design
  • Experience with transformer architecture is advantageous but not mandatory

Responsibilities

  • Invent, design, and implement efficient algorithms for optimizing Large Language Model inference on DNN Accelerators
  • Work as part of a close-knit team of mathematicians, ML researchers, and ML engineers
  • Create and apply advanced algorithmic and numerical techniques to research in the overlap of mathematics, ML, and modern LLM applications

Skills

Key technologies and capabilities for this role

Machine Learning, Python, OOP, Transformer Architecture, LLM Optimization, Algorithm Design, DNN Accelerators, Mathematics

Questions & Answers

Common questions about this position

What is the salary range for this Senior Staff ML Researcher position?

The salary range is ₹40,000,000 – ₹60,000,000.

Is this role remote or hybrid, and what's the location policy?

This is a hybrid role requiring on-site work at the Bengaluru, India office 3 days per week.

What skills and qualifications are required for this role?

Candidates need strong mathematical skills, an MSc or PhD in math, CS, statistics, physics, or a related STEM field, and experience in Python and OOP code design; transformer architecture experience is advantageous but not mandatory.

What is the company culture like at d-Matrix?

The culture emphasizes respect, collaboration, humility, direct communication, inclusivity, and valuing differing perspectives for better solutions.

What makes a strong candidate for this ML Researcher role?

Strong candidates combine humble expertise, kindness, and dedication with a willingness to embrace challenges and learn together every day, and are driven by execution.

d-Matrix

AI compute platform for datacenters

About d-Matrix

d-Matrix focuses on improving the efficiency of AI computing for large datacenter customers. Its main product is the digital in-memory compute (DIMC) engine, which combines computing capabilities directly within programmable memory. This design helps reduce power consumption and enhances data processing speed while ensuring accuracy. d-Matrix differentiates itself from competitors by offering a modular and scalable approach, utilizing low-power chiplets that can be tailored for different applications. The company's goal is to provide high-performance, energy-efficient AI inference solutions to large-scale datacenter operators.

Headquarters: Santa Clara, California
Year Founded: 2019
Total Funding: $149.8M
Company Stage: Series B
Industries: Enterprise Software, AI & Machine Learning
Employees: 201-500

Benefits

Hybrid Work Options

Risks

Competition from Nvidia, AMD, and Intel may pressure d-Matrix's market share.
Complex AI chip design could lead to delays or increased production costs.
Rapid AI innovation may render d-Matrix's technology obsolete if not updated.

Differentiation

d-Matrix's DIMC engine integrates compute into memory, enhancing efficiency and accuracy.
The company offers scalable AI solutions through modular, low-power chiplets.
d-Matrix focuses on brain-inspired AI compute engines for diverse inferencing workloads.

Upsides

Growing demand for energy-efficient AI solutions boosts d-Matrix's low-power chiplets appeal.
Partnerships with companies like Microsoft could lead to strategic alliances.
Increasing adoption of modular AI hardware in data centers benefits d-Matrix's offerings.
