Senior Runtime Software Engineer at d-Matrix

Sydney, New South Wales, Australia

Compensation: Not Specified
Experience Level: Senior (5 to 8 years)
Job Type: Full Time
Visa: Unknown
Industries: AI, Semiconductor, Hardware

Requirements

  • BS/MS degree in computer science, computer engineering, or a similar field preferred
  • Experience with multi-threaded C programming on multi-core CPUs running an RTOS in both AMP and SMP configurations
  • Understanding of methods used to synchronize many-core and many-CPU architectures
  • Experience managing static resources without an MMU
  • Zephyr OS experience is an advantage (a brief illustrative sketch follows this list)
  • Experience with PIC programming and developing interrupt service routines
  • Knowledge of bootloaders and Linux device drivers is an advantage
  • Ability to interpret HW-centric data sheets and register definitions to determine how to best program the architecture
  • Ability to work with HW design teams at both the early definition phase and during bring-up
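As a purely illustrative sketch of the kind of multi-threaded RTOS and interrupt-handling work the requirements describe (not d-Matrix code), the following minimal Zephyr example shows a worker thread woken by an interrupt service routine. The IRQ line number, priority, stack size, and names are placeholder assumptions.

```c
/* Illustrative only: minimal Zephyr ISR + worker thread pattern. */
#include <zephyr/kernel.h>
#include <zephyr/irq.h>

#define MY_IRQ_LINE  27   /* hypothetical interrupt line, not real hardware */
#define MY_IRQ_PRIO  2    /* hypothetical interrupt priority */

K_SEM_DEFINE(data_ready, 0, 1);   /* signalled from the ISR */

/* ISR: keep it short, just signal the worker thread. */
static void my_isr(const void *arg)
{
    ARG_UNUSED(arg);
    k_sem_give(&data_ready);
}

/* Worker thread: blocks until the ISR reports data is ready. */
static void worker(void *p1, void *p2, void *p3)
{
    ARG_UNUSED(p1); ARG_UNUSED(p2); ARG_UNUSED(p3);
    for (;;) {
        k_sem_take(&data_ready, K_FOREVER);
        /* ... process whatever the hardware produced ... */
    }
}

K_THREAD_DEFINE(worker_tid, 1024, worker, NULL, NULL, NULL, 5, 0, 0);

int main(void)
{
    IRQ_CONNECT(MY_IRQ_LINE, MY_IRQ_PRIO, my_isr, NULL, 0);
    irq_enable(MY_IRQ_LINE);
    return 0;
}
```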

Responsibilities

  • Architect, document, and develop the runtime firmware that executes in the various on-chip multi-core CPU subsystems to control all aspects of the AI subsystems in the chip and maximize hardware utilization
  • Achieve measures of success including maximizing overall AI hardware utilization, minimizing communication bottlenecks, and maximizing on-chip memory utilization
  • Bring the software up on FPGA platforms containing images of the embedded CPU subsystems and debug it using a JTAG-connected IDE
  • Deliver a firmware solution that can be developed and tested ahead of the availability of the AI subsystem hardware
  • Determine the delivery schedule and ensure the software meets d-Matrix coding and methodology guidelines
  • Collaborate with hardware teams to interpret hardware specifications and suggest changes that improve utilization, throughput, and/or reduce power
  • Collaborate with other members of the SW team (AU and US), SW quality & test team (US and India), and HW verification team to assist with SoC-level DV simulations and emulation
  • Develop and debug code on FPGA-based systems containing CPU subsystems and SystemC models of the AI subsystems and SoC
  • Port the software to a “big iron” emulation system (e.g., Veloce, Palladium) containing the final RTL
  • Be closely involved in bringing up the software on the AI subsystem hardware and validating silicon and software performance

Skills

Key technologies and capabilities for this role

PyTorch, Firmware, Runtime Software, Multi-core CPU, Low-level Drivers, System-on-Chip, AI Inference, Sparse Matrix Processing, Dense Matrix Processing, Fixed-point Arithmetic, Floating-point Arithmetic

Questions & Answers

Common questions about this position

Is this position remote or hybrid?

The position is hybrid, requiring onsite work at the Sydney, Australia office 3 days per week.

What is the salary range for this Senior Runtime Software Engineer role?

This information is not specified in the job description.

What key skills are required for this role?

The role requires expertise in architecting and developing runtime firmware for on-chip multi-core CPU subsystems, bringing up software on FPGA platforms with JTAG debugging, and collaborating with hardware and software teams on AI inference processors.

What is the company culture like at d-Matrix?

d-Matrix has a culture of respect and collaboration, valuing humility, direct communication, inclusivity, and diverse perspectives for better solutions.

What makes a strong candidate for this position?

Strong candidates are passionate about tackling challenges, driven by execution, and have experience in runtime firmware for AI hardware, FPGA development, and cross-team collaboration.

d-Matrix

AI compute platform for datacenters

About d-Matrix

d-Matrix focuses on improving the efficiency of AI computing for large datacenter customers. Its main product is the digital in-memory compute (DIMC) engine, which combines computing capabilities directly within programmable memory. This design helps reduce power consumption and enhances data processing speed while ensuring accuracy. d-Matrix differentiates itself from competitors by offering a modular and scalable approach, utilizing low-power chiplets that can be tailored for different applications. The company's goal is to provide high-performance, energy-efficient AI inference solutions to large-scale datacenter operators.

Headquarters: Santa Clara, California
Year Founded: 2019
Total Funding: $149.8M
Company Stage: Series B
Industries: Enterprise Software, AI & Machine Learning
Employees: 201-500

Benefits

Hybrid Work Options

Risks

Competition from Nvidia, AMD, and Intel may pressure d-Matrix's market share.
Complex AI chip design could lead to delays or increased production costs.
Rapid AI innovation may render d-Matrix's technology obsolete if not updated.

Differentiation

d-Matrix's DIMC engine integrates compute into memory, enhancing efficiency and accuracy.
The company offers scalable AI solutions through modular, low-power chiplets.
d-Matrix focuses on brain-inspired AI compute engines for diverse inferencing workloads.

Upsides

Growing demand for energy-efficient AI solutions boosts d-Matrix's low-power chiplets appeal.
Existing partnerships with companies like Microsoft could grow into broader strategic alliances.
Increasing adoption of modular AI hardware in data centers benefits d-Matrix's offerings.
