Principal Runtime Systems Engineer at d-Matrix

Santa Clara, California, United States

Compensation: Not Specified
Experience Level: Expert & Leadership (9+ years)
Job Type: Full Time
Visa: Unknown
Industries: AI, Semiconductors, Datacenter

Requirements

  • In-depth knowledge of networking protocols such as TCP/IP, OSPF, BGP, VLANs, and ARP, and how they shape network traffic
  • Understanding of network security, ACLs, and other security mechanisms relevant to network switches
  • Ability to troubleshoot complex network issues and develop solutions
  • Knowledge of cloud computing, SDN, and other emerging technologies
  • Strong understanding of Linux internals and systems programming
  • Proficiency in C/C++ and other relevant programming languages
  • Strong understanding of operating systems, kernel internals, and memory management
  • Bachelor’s in computer engineering or electrical engineering with a minimum of 12 years of industry experience in embedded software development

Responsibilities

  • Architect, document, and develop runtime firmware that executes in various on-chip multi-core CPU subsystems
  • Determine the delivery schedule and ensure the software meets d-Matrix coding and methodology guidelines
  • Collaborate with the hardware team, hardware verification team, and other members of the software team
  • Be largely responsible for all aspects of runtime performance of the silicon product
  • Work on the architecture, development, and validation of the functionality and efficiency of firmware/software executing on a multiprocessor system-on-chip, its low-level drivers, and the systems software that hosts this SoC
  • Provide technical direction and guidance to the team and ensure project alignment with overall engineering strategy
  • Resolve complex technical issues
  • Evaluate new technologies and trends
  • Mentor and train junior engineers

Skills

TCP/IP
OSPF
BGP
VLANs
ARP
ACLs
Network Security
Firmware Development
Low-Level Drivers
Systems Software
Multiprocessor SoC
Multi-Core CPU
Runtime Firmware
Networking Protocols

d-Matrix

AI compute platform for datacenters

About d-Matrix

d-Matrix focuses on improving the efficiency of AI computing for large datacenter customers. Its main product is the digital in-memory compute (DIMC) engine, which integrates compute directly into programmable memory. This design reduces power consumption and increases data processing speed while preserving accuracy. d-Matrix differentiates itself from competitors by offering a modular and scalable approach, using low-power chiplets that can be tailored to different applications. The company's goal is to provide high-performance, energy-efficient AI inference solutions to large-scale datacenter operators.

Headquarters: Santa Clara, California
Year Founded: 2019
Total Funding: $149.8M
Company Stage: Series B
Industries: Enterprise Software, AI & Machine Learning
Employees: 201-500

Benefits

Hybrid Work Options

Risks

Competition from Nvidia, AMD, and Intel may pressure d-Matrix's market share.
Complex AI chip design could lead to delays or increased production costs.
Rapid AI innovation may render d-Matrix's technology obsolete if not updated.

Differentiation

d-Matrix's DIMC engine integrates compute into memory, enhancing efficiency and accuracy.
The company offers scalable AI solutions through modular, low-power chiplets.
d-Matrix focuses on brain-inspired AI compute engines for diverse inferencing workloads.

Upsides

Growing demand for energy-efficient AI solutions boosts d-Matrix's low-power chiplets appeal.
Partnerships with companies like Microsoft could lead to strategic alliances.
Increasing adoption of modular AI hardware in data centers benefits d-Matrix's offerings.
