Engineering Manager, HPC Deployments at Lambda

San Francisco, California, United States

Compensation: $267,000 – $486,000
Experience Level: Senior (5 to 8 years), Expert & Leadership (9+ years)
Job Type: Full Time
Visa: Unknown
Industries: AI, Cloud Computing, High Performance Computing

Requirements

  • Extensive experience in HPC or large-scale infrastructure, including at least 3 years in a leadership or management role
  • Work well under deadlines and structured project plans; able to successfully (and tactfully) negotiate changes to project timelines
  • Have excellent problem-solving and troubleshooting skills
  • Can effectively collaborate with peer engineering managers to coordinate efforts that may affect deployment operations
  • Are comfortable leading and mentoring HPC engineers on cluster deployments as needed
  • Experience building a high-performance team through deliberate hiring, upskilling, planned skills redundancy, performance management, and expectation setting
  • Have flexibility to travel to our North American data centers as on-site needs arise or as part of training exercises

Nice to Have

  • Experience with Linux systems administration, automation, scripting/coding
  • Experience with containerization technologies (Docker, Kubernetes)
  • Experience working with the technologies that underpin our cloud business (GPU acceleration, virtualization, and cloud computing)
  • Experience with machine learning and deep learning frameworks (PyTorch, TensorFlow) and benchmarking tools (DeepSpeed, MLPerf)
  • Soft Skills (customer awareness, diplomacy)
  • Bachelor’s degree

Responsibilities

  • Lead and grow a distributed, top-talent team of HPC engineers responsible for the configuration, validation, and deployment of large-scale GPU clusters
  • Work cross-functionally with teams across the organization to deliver projects and deployments on time, ensuring alignment across stakeholders
  • Identify opportunities for efficiency improvements in the tools, processes, and automation the team relies on day to day
  • Ensure stakeholders have clear visibility into deployment progress, risks, and outcomes
  • Drive outcomes by managing staff allocations, project priorities, deadlines, and deliverables
  • Conduct regular one-on-one meetings, provide constructive feedback, and support career development for team members
  • Stay current on the latest HPC/AI technologies and best practices
  • Participate in the qualification efforts of new technologies for use in our production deployments

Skills

HPC
NVIDIA GPUs
Cluster Deployment
Configuration
Validation
Automation
Metrics
Fleet Engineering
Infrastructure

Lambda

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
Reported "existential crisis" behavior in the Hermes 3 model raises concerns about Lambda's AI model reliability.

Differentiation

Lambda offers cost-effective Inference API for AI model deployment without infrastructure maintenance.
Nvidia HGX H100 and Quantum-2 InfiniBand Clusters enhance Lambda's AI model training capabilities.
Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
Nvidia HGX H100 clusters provide competitive edge in high-performance AI computing.
Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
