Engineering Manager, AI Cloud Platform at Lambda

San Francisco, California, United States

Compensation: $330,000 – $440,000
Experience Level: Senior (5–8 years), Expert & Leadership (9+ years)
Job Type: Full Time
Visa: Unknown
Industries: AI, Cloud Computing

Requirements

  • 5+ years in a full-time management role at a high-growth technology company
  • 10+ years of industry experience in software engineering, with a focus on large-scale distributed systems and backend systems
  • Proven record of building and leading engineering teams that work on mission-critical, high-performance systems
  • Proven track record leading teams that deliver enterprise features or governance platforms
  • Exceptional leadership skills that encompass leading by trust, building empathy with your reports and other teams, and maintaining a sustainable but rapid velocity
  • Demonstrated expertise in managing long-term projects alongside urgent, short-term priorities and incident resolution
  • Extensive experience collaborating with product, sales, and other engineering teams to build cohesive products with a focus on user experience and reliability
  • Ability to understand, review, and structure Python and Go applications
  • Nice to Have: Experience with IAM, authentication/authorization (SSO, RBAC, SCIM), governance tooling, or compliance features
  • Nice to Have: Background building cloud application platforms
  • Nice to Have: Experience managing a remote, distributed team

Responsibilities

  • Lead the AI Cloud Core Platform team of ~6 engineers, with end-to-end ownership of Cloud Platform and governance capabilities
  • Drive execution of roadmap features including cluster lifecycle automation
  • Partner closely with Product and Design to ensure the user experience matches the needs of enterprise customers
  • Balance rapid feature delivery with longer-term investments in scalability, observability, and platform design
  • Hire, mentor, and grow a team of engineers, providing career development and feedback
  • Collaborate with other Lambda teams (Control Plane, Billing, Platform) to ensure smooth, integrated delivery across the stack
  • Contribute to a culture of high performance, documentation, humility, and curiosity
  • Be product-focused in your leadership and execution, always placing the needs of the customer first, with a particular focus on feature velocity, reliability, and security
  • Shape a culture of sustainable, empathetic, and high-velocity engineering, with a deep focus on cross-team collaboration, documentation, and data-driven decision-making

Skills

Engineering Management
Team Leadership
Cloud Platform
Cluster Automation
Scalability
Observability
Platform Design
Hiring
Mentoring
Product Collaboration

Lambda

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201–500

Risks

Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
Reported "existential crisis" outputs from the Hermes 3 model raise concerns about the reliability of Lambda's AI models.

Differentiation

Lambda offers cost-effective Inference API for AI model deployment without infrastructure maintenance.
Nvidia HGX H100 and Quantum-2 InfiniBand Clusters enhance Lambda's AI model training capabilities.
Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
Nvidia HGX H100 clusters provide competitive edge in high-performance AI computing.
Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
