[Remote] Head of Support at Lambda

United States

Compensation: Not Specified
Experience Level: N/A
Job Type: N/A
Visa: Not Specified
Industries: N/A

Requirements

  • 12+ years of experience leading technical support, service delivery, or operations teams, with at least 3 years in a senior leadership role
  • Proven ability to scale support organizations in high-growth, infrastructure, or SaaS environments
  • Strong understanding of GPU/AI infrastructure, data centers, and managed services (Kubernetes, Slurm, HPC)
  • Demonstrated expertise in incident management, escalation handling, and SLA governance
  • Experience reviewing customer contracts and working cross-functionally to ensure support feasibility
  • Executive presence, with the ability to prepare and deliver business reviews and reporting for both customer and internal audiences
  • Skilled at balancing multiple complex projects in a fast-paced, dynamic environment
  • Excellent written and verbal communication, with the ability to influence executives, customers, and technical teams
  • Nice to have: Experience scaling support organizations during hyperscale customer growth
  • Nice to have: Background in designing global 24/7 coverage models and building new business units
  • Nice to have: Ability to shape contract structures, SLAs, and commercial terms to balance customer needs with operational realities
  • Nice to have: Familiarity with white-labeled support partnerships and third-party support delivery
  • Nice to have: Strong executive presence in C-level engagement during escalations and reviews

Responsibilities

  • Lead and scale a multi-tier global support organization, spanning Core and SuperIntelligence Support teams
  • Drive the scaling of SuperIntelligence Support, ensuring processes, tools, and staffing are ready to meet hyperscale customer demands
  • Oversee contract feasibility reviews for new customer deals, ensuring promises made to customers are realistic, executable, and fully supported
  • Own critical incident and escalation management, ensuring clear ownership, structured communication, and timely resolution
  • Establish and track key metrics (SLA adherence, MTTR, CSAT, NPS, time-to-next-action, and cross-divisional performance), and present findings to executives and customers
  • Prepare and deliver Monthly Operations Review (MOR) materials, ensuring leadership has visibility into performance, reliability, and open problems
  • Build and maintain documentation, playbooks, and reporting frameworks that create consistency, transparency, and scalability
  • Partner with Sales, Engineering, Data Center Operations, and Product to ensure support delivery is aligned with both business strategy and customer success
  • Develop and implement a support enablement strategy, expanding scope, offloading work from engineering, and improving first-contact resolution
  • Mentor and grow support leaders and ICs, defining career paths and creating succession plans
  • Balance multiple high-impact projects in a fast-moving environment with shifting priorities, maintaining focus on customer trust and operational excellence

Lambda

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

  • Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
  • AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
  • The reported existential crisis in the Hermes 3 model raises concerns about Lambda's AI model reliability.

Differentiation

  • Lambda offers a cost-effective Inference API for AI model deployment without infrastructure maintenance.
  • Nvidia HGX H100 and Quantum-2 InfiniBand Clusters enhance Lambda's AI model training capabilities.
  • Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

  • The Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
  • Nvidia HGX H100 clusters provide a competitive edge in high-performance AI computing.
  • Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
