[Remote] Manager, Super Intelligence HPC Support at Lambda

United States

Compensation: Not Specified
Experience Level: N/A
Job Type: N/A
Visa: Not Specified
Industries: N/A

Requirements

  • Proven track record leading technical support or engineering teams serving enterprise or hyperscale customers
  • Skilled at managing customer escalations and major incidents with clarity, confidence, and urgency
  • Deep expertise in HPC environments including GPU clusters, InfiniBand/RoCE networks, and Linux system administration
  • Ability to guide engineers through troubleshooting at scale, from orchestration (Slurm/Kubernetes) down to kernel-level debugging
  • Strong leadership presence: able to inspire, set direction, and build a culture of accountability and customer-first execution
  • Excellent communication skills, capable of engaging with both engineers and executive stakeholders
  • Advanced degree in Computer Science, Engineering, or related field (Nice to have)
  • Certifications in HPC, networking, or related technologies (Nice to have)
  • Experience with Slurm, Kubernetes, InfiniBand, and other high-performance interconnects (RoCE, NVLink/NVSwitch) (Nice to have)
  • Background supporting Private Cloud environments or other dedicated enterprise clusters (Nice to have)
  • Experience supporting enterprise AI workloads across startups and Fortune 500 companies (Nice to have)

Responsibilities

  • Build, coach, and mentor a team of Super Intelligence HPC Support Engineers, ensuring technical excellence and strong execution in customer-facing work
  • Take point on high-visibility incidents and escalations with hyperscale customers, ensuring timely, transparent, and high-quality outcomes
  • Represent the needs of Super Intelligence customers in cross-functional discussions, influencing product design and roadmap decisions to improve supportability
  • Guide your team through major incidents, driving consistency in communication, coordination, and resolution under pressure
  • Define and refine support processes, runbooks, and documentation tailored to hyperscale environments
  • Collaborate closely with Product, Engineering, and Data Center teams to ensure Lambda delivers reliable, scalable solutions at the largest scales of deployment
  • Monitor team performance, drive improvements in SLA adherence, response/resolution quality, and customer satisfaction
  • Step in to troubleshoot complex issues and model the standard of excellence expected from your team


About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
Reports of erratic "existential crisis" outputs from the Hermes 3 model raise concerns about the reliability of Lambda-backed AI models.

Differentiation

Lambda offers cost-effective Inference API for AI model deployment without infrastructure maintenance.
Nvidia HGX H100 and Quantum-2 InfiniBand Clusters enhance Lambda's AI model training capabilities.
Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
Nvidia HGX H100 clusters provide competitive edge in high-performance AI computing.
Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
