Platform Validation Engineer, Customer Platforms - Tech Lead/Principal at d-Matrix

Santa Clara, California, United States

Compensation: $175,000 – $260,000
Experience Level: Senior (5 to 8 years), Expert & Leadership (9+ years)
Job Type: Full Time
Visa: Unknown
Industries: AI, Hardware, Technology

Requirements

  • Bachelor’s or Master’s in EE, CS, or related field
  • 5+ years of experience in GPU server platform validation, preferably with PCIe-based hardware
  • Strong understanding of server architecture, Linux environments, and hardware-software interactions (preferred)
  • Experience with test automation (e.g., Python, Bash) and validation tools (preferred)
  • Detail-oriented with strong debugging and documentation skills (preferred)
  • Experience working with ODM and OEM vendors for GPU servers and rack scale solutions (preferred)
  • Hands-on experience with Electrical Validation, Functional Validation, and High-Speed Bus Validation (preferred)
  • Hands-on experience updating firmware on servers and utilizing custom vendor software tools for debug (preferred)
  • Stress testing experience (preferred)
  • Strong debugging skills across hardware (compute and networking) and host/embedded software (preferred)

Responsibilities

  • Develop and execute system-level test plans for platform validation, including stress, thermal, and PCIe interoperability tests
  • Automate test frameworks and validation workflows to improve test coverage and efficiency (see the sketch after this list)
  • Drive root cause analysis and debug of failures in collaboration with hardware, firmware, and software teams
  • Ensure platforms meet internal and external quality criteria for production readiness
  • Document test procedures, results, and validation status across SKUs
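
To make the test-automation and PCIe interoperability items above concrete, here is a minimal sketch, assuming a Linux host and the standard PCI sysfs attributes (current_link_speed, max_link_speed, current_link_width, max_link_width). The script and its names are hypothetical and not part of the posting; it simply flags devices whose negotiated PCIe link is slower or narrower than the advertised maximum, a common first check in platform validation workflows.

#!/usr/bin/env python3
# Illustrative sketch only: a hypothetical helper, not part of the d-Matrix posting.
# Flags PCIe devices whose negotiated link speed or width is below the advertised
# maximum, using the standard Linux PCI sysfs attributes.
from pathlib import Path

SYSFS_PCI = Path("/sys/bus/pci/devices")

def read_attr(dev: Path, name: str):
    try:
        return (dev / name).read_text().strip()
    except OSError:
        return None  # attribute missing (e.g., legacy PCI device) or unreadable

def degraded_links():
    findings = []
    for dev in sorted(SYSFS_PCI.iterdir()):
        cur_speed = read_attr(dev, "current_link_speed")
        max_speed = read_attr(dev, "max_link_speed")
        cur_width = read_attr(dev, "current_link_width")
        max_width = read_attr(dev, "max_link_width")
        if None in (cur_speed, max_speed, cur_width, max_width):
            continue  # skip devices without PCIe link attributes
        if cur_speed != max_speed or cur_width != max_width:
            findings.append(f"{dev.name}: {cur_speed} x{cur_width} "
                            f"(max {max_speed} x{max_width})")
    return findings

if __name__ == "__main__":
    for line in degraded_links():
        print("DEGRADED LINK:", line)

In practice, a check like this would sit inside a larger harness that runs across SKUs, logs results, and is combined with stress and thermal tests, per the responsibilities above.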

Skills

PCIe
Linux
GPU
server architecture
platform validation
test automation
stress testing
thermal testing
root cause analysis
system-level testing
hardware-software interactions

d-Matrix

AI compute platform for datacenters

About d-Matrix

d-Matrix focuses on improving the efficiency of AI computing for large datacenter customers. Its main product is the digital in-memory compute (DIMC) engine, which integrates compute directly into programmable memory. This design reduces power consumption and increases data processing speed while maintaining accuracy. d-Matrix differentiates itself from competitors with a modular, scalable approach built on low-power chiplets that can be tailored to different applications. The company's goal is to provide high-performance, energy-efficient AI inference solutions to large-scale datacenter operators.

Headquarters: Santa Clara, California
Year Founded: 2019
Total Funding: $149.8M
Company Stage: Series B
Industries: Enterprise Software, AI & Machine Learning
Employees: 201-500

Benefits

Hybrid Work Options

Risks

Competition from Nvidia, AMD, and Intel may pressure d-Matrix's market share.
Complex AI chip design could lead to delays or increased production costs.
Rapid AI innovation may render d-Matrix's technology obsolete if not updated.

Differentiation

d-Matrix's DIMC engine integrates compute into memory, enhancing efficiency and accuracy.
The company offers scalable AI solutions through modular, low-power chiplets.
d-Matrix focuses on brain-inspired AI compute engines for diverse inferencing workloads.

Upsides

Growing demand for energy-efficient AI solutions boosts d-Matrix's low-power chiplets appeal.
Partnerships with companies like Microsoft could develop into broader strategic alliances.
Increasing adoption of modular AI hardware in data centers benefits d-Matrix's offerings.
