Principal DRAM Architect – GPU Memory Solutions at NVIDIA

Santa Clara, California, United States

Compensation: Not specified
Experience Level: Expert & Leadership (9+ years)
Job Type: Full Time
Visa: Unknown
Industries: AI, Graphics, Data Center, Automotive

Requirements

  • MS or PhD in Electrical Engineering, Computer Engineering, or Physics (or equivalent experience)
  • 15+ years of experience in DRAM or memory system architecture, with at least 5 years focused on HBM (HBM2/2e/3/3e or next-gen)
  • Expertise in HBM architecture: TSV design, die stacking, interposer/CoWoS integration, refresh schemes, ECC/CRC, pseudo-channels, and thermal/power management
  • Proven participation in JEDEC or equivalent standards organizations, contributing to DRAM or HBM specifications
  • Demonstrated ability to influence DRAM vendor roadmaps, negotiate trade-offs, and enable early silicon validation
  • Strong understanding of I/O and PHY design fundamentals — timing, SI/PI, equalization, jitter budgeting
  • Proven experience balancing system-level trade-offs across performance, bandwidth, power, cost, yield, and reliability
  • Exceptional technical leadership and cross-functional communication skills

Ways to Stand Out

  • Hands-on experience with GDDR6/7 and LPDDR5/6 architectures — including bank management, signaling, power states, and error handling
  • Deep understanding of thermal and mechanical challenges in advanced memory packaging and 3D integration
  • Familiarity with emerging memory technologies (3D DRAM, MRAM, RRAM, or next-gen hybrid memory)
  • Publications, patents, or JEDEC leadership roles demonstrating influence on memory architecture and standards
  • Background in high-bandwidth computing platforms — AI, HPC, or graphics accelerators

Responsibilities

  • Architect next-generation DRAM solutions and NVIDIA-specific implementations — including bank and stack structures, refresh mechanisms, retention schemes, ECC/CRC, power management, and reliability optimization
  • Lead innovation in high-speed memory interfaces, with deep expertise in HBM PHYs (wide I/O, TSV signaling, SI/PI, timing margins) and an understanding of GDDR/LPDDR PHY architectures
  • Collaborate across domains on advanced packaging technologies (TSVs, interposers, CoWoS, hybrid bonding, FOWLP) to optimize DRAM–GPU co-packaging for bandwidth, power, thermal performance, and yield
  • Evaluate emerging DRAM process nodes (sub-1x nm, EUV, new capacitor/dielectric materials) and their impact on density, power, retention, and cost
  • Influence industry direction by working with DRAM vendors and actively contributing to JEDEC committees, driving next-generation memory standards and NVIDIA-specific roadmap alignment
  • Model and quantify system-level trade-offs in bandwidth, latency, power, cost, yield, and thermal behavior to guide architectural decisions
  • Mentor engineers, lead technical reviews, and shape NVIDIA’s long-term memory architecture vision

Skills

DRAM
HBM
GDDR
LPDDR
TSV
HBM PHY
SI/PI
JEDEC
CoWoS
hybrid bonding
EUV
ECC
CRC
refresh management
power management

NVIDIA

Designs GPUs and AI computing solutions

About NVIDIA

NVIDIA designs and manufactures graphics processing units (GPUs) and system-on-a-chip units (SoCs) for various markets, including gaming, professional visualization, data centers, and automotive. Its products include GPUs tailored for gaming and professional use, as well as platforms for artificial intelligence (AI) and high-performance computing (HPC) that serve developers, data scientists, and IT administrators. NVIDIA generates revenue through the sale of hardware, software solutions, and cloud-based services, such as NVIDIA CloudXR and NGC, which support workloads in AI, machine learning, and computer vision. What sets NVIDIA apart from competitors is its strong focus on research and development, which allows it to maintain a leadership position in a competitive market. The company's goal is to drive innovation and provide advanced solutions for a diverse clientele, including gamers, researchers, and enterprises.

Headquarters: Santa Clara, California
Year Founded: 1993
Total Funding: $19.5M
Company Stage: IPO
Industries: Automotive & Transportation, Enterprise Software, AI & Machine Learning, Gaming
Employees: 10,001+

Benefits

Company Equity
401(k) Company Match

Risks

Increased competition from AI startups like xAI could challenge NVIDIA's market position.
Serve Robotics' expansion may divert resources from NVIDIA's core GPU and AI businesses.
Integration of VinBrain may pose challenges and distract from NVIDIA's primary operations.

Differentiation

NVIDIA leads in AI and HPC solutions with cutting-edge GPU technology.
The company excels in diverse markets, including gaming, data centers, and autonomous vehicles.
NVIDIA's cloud services, like CloudXR, offer scalable solutions for AI and machine learning.

Upsides

Acquisition of VinBrain enhances NVIDIA's AI capabilities in the healthcare sector.
Investment in Nebius Group boosts NVIDIA's AI infrastructure and cloud platform offerings.
Serve Robotics' expansion, backed by NVIDIA, highlights growth in autonomous delivery services.
