Senior HPC Operations Engineer at Lambda

San Francisco, California, United States

Compensation: $207,000 – $401,000
Experience Level: Senior (5 to 8 years), Expert & Leadership (9+ years)
Job Type: Full Time
Visa: Unknown
Industries: AI, Cloud Computing, High Performance Computing

Requirements

  • Deeply experienced HPC engineer comfortable with logical provisioning of a cluster
  • Strong understanding of HPC/AI architecture, operating systems, firmware, software, and networking
  • 10+ years of experience in deploying and configuring HPC clusters for AI workloads
  • Innate attention to detail
  • Experience with Bright Cluster Manager or similar cluster management tools
  • Expert in configuring and troubleshooting: SFP+ fiber, InfiniBand (IB), and 100 GbE network fabrics; Ethernet, switching, and power infrastructure; GPUDirect, RDMA, NCCL, and Horovod environments; Linux-based compute nodes, firmware updates, and driver installation; SLURM, Kubernetes, or other job-scheduling systems
  • Work well under deadlines and structured project plans, knowing when and how to ask for changes to project timelines
  • Excellent problem solving and troubleshooting skills
  • Flexibility to travel to North American data centers as on-site needs arise or as part of training exercises
  • Able to work independently and as part of a team
  • Comfortable mentoring and supporting junior HPC engineers on cluster deployments

Responsibilities

  • Remotely deploy and configure large-scale HPC clusters for AI workloads, scaling up to many thousands of nodes
  • Remotely install and configure operating systems, firmware, software, and networking on HPC clusters both manually and using automation tools
  • Troubleshoot and resolve HPC cluster issues working closely with physical deployment teams on-site
  • Provide clear and detailed requirements back to other engineering teams on gaps and improvement areas, specifically in the areas of simplification, stability, and operational efficiency
  • Contribute to the creation and maintenance of Standard Operating Procedures (SOPs)
  • Provide regular and well-communicated updates to project leads throughout each deployment
  • Mentor and assist less experienced team members
  • Stay up-to-date on the latest HPC/AI technologies and best practices

Skills

Key technologies and capabilities for this role

HPC, Cluster Management, Bright Cluster Manager, Linux, Networking, Automation, Firmware, AI Workloads, Operating Systems, Troubleshooting, Deployment

Questions & Answers

Common questions about this position

What is the salary range for the Senior HPC Operations Engineer position?

The salary range is $207,000 – $401,000.

Is this role remote or hybrid, and what are the office requirements?

This is a hybrid position requiring presence in the San Francisco/San Jose or Seattle office 4 days per week, with Tuesday designated as the work-from-home day.

What key skills and experience are required for this role?

Candidates need 10+ years of experience deploying and configuring HPC clusters for AI workloads; a strong understanding of HPC/AI architecture including operating systems, firmware, software, and networking; expertise in configuring and troubleshooting SFP+ fiber, InfiniBand, 100 GbE, Ethernet, and SLURM/Kubernetes; and experience with Bright Cluster Manager or similar tools.

What is the team structure and work environment like at Lambda?

Engineering at Lambda builds and scales the cloud offering, including website, APIs, systems, and internal tooling, with a collaborative environment involving close work with physical deployment teams, mentoring junior engineers, and contributing to Standard Operating Procedures.

What makes a strong candidate for this Senior HPC Operations Engineer role?

A strong candidate has 10+ years of deep HPC engineering experience with logical provisioning of large-scale clusters for AI workloads, expertise in troubleshooting complex networking and systems such as InfiniBand and SLURM, excellent problem-solving skills, attention to detail, the ability to work independently and mentor junior engineers, and flexibility for occasional travel to data centers.

Lambda

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
Existential crisis in Hermes 3 model raises concerns about Lambda's AI model reliability.

Differentiation

Lambda offers cost-effective Inference API for AI model deployment without infrastructure maintenance.
Nvidia HGX H100 and Quantum-2 InfiniBand Clusters enhance Lambda's AI model training capabilities.
Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
Nvidia HGX H100 clusters provide competitive edge in high-performance AI computing.
Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
