Manager, HPC Design at Lambda

San Francisco, California, United States

Compensation: $330,000 – $550,000
Experience Level: Senior (5 to 8 years), Expert & Leadership (9+ years)
Job Type: Full Time
Visa: Unknown
Industries: AI, High Performance Computing, Cloud Computing

Requirements

  • 5+ years of experience designing HPC or cloud infrastructure at scale
  • 2+ years in a technical leadership or management role
  • Understanding of the practical application of compute, storage, and network architectures in real-world, large-scale deployments
  • Ability to take an established architectural direction and lead your team in producing high-quality designs that deliver reliably, on time, and within scope
  • Adept at managing tradeoffs and risk in infrastructure delivery—balancing technical ambition with operational realism
  • Experience mentoring senior-level technical contributors and building cohesive, execution-focused teams

Responsibilities

  • Lead a team of system designers responsible for translating architecture into detailed, executable infrastructure designs across compute, storage, and networking
  • Build and mature repeatable processes that turn Lambda’s reference architectures into site and customer-specific deployment plans
  • Own the delivery of infrastructure design packages, ensuring solutions meet functional requirements, budget targets, and delivery timelines
  • Partner closely with architecture, product, engineering, and customer teams to ensure alignment between design execution and platform roadmap
  • Guide the creation and review of design specifications, integration plans, and validation processes for new deployments
  • Mentor and grow a high-performing team of infrastructure designers, focused on disciplined execution and iterative delivery

Skills

HPC
Infrastructure Design
Compute Architecture
Storage Architecture
Networking
Cloud Infrastructure
System Deployment
Deployment Plans
Reference Architectures
Validation Processes

Lambda

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
Reported "existential crisis" behavior in the Hermes 3 model raises concerns about the reliability of Lambda's AI models.

Differentiation

Lambda offers cost-effective Inference API for AI model deployment without infrastructure maintenance.
Nvidia HGX H100 and Quantum-2 InfiniBand Clusters enhance Lambda's AI model training capabilities.
Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
Nvidia HGX H100 clusters provide competitive edge in high-performance AI computing.
Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
