[Remote] Super Intelligence HPC Support Engineer at Lambda

United States

Compensation: Not Specified
Experience Level: N/A
Job Type: N/A
Visa: Not Specified
Industries: N/A

Requirements

  • 7+ years of experience in HPC or cloud support engineering, with customer-facing responsibilities
  • Proven experience managing large-scale Linux clusters and distributed HPC/AI workloads
  • Deep expertise in orchestration tools such as Kubernetes and/or Slurm
  • Strong knowledge of GPU technologies (CUDA, NCCL, MIG, NVLink, GPUDirect RDMA); an illustrative sketch follows this list
  • Skilled in high-throughput networking (InfiniBand, RoCE) and cluster storage solutions
  • Familiarity with monitoring/logging platforms (Prometheus, Grafana, Datadog)
  • Experience leading incident management and communicating directly with enterprise or hyperscale customers
  • Ability to balance deep technical troubleshooting with clear, concise communication to executives and stakeholders
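
The orchestration and GPU bullets above translate directly into hands-on debugging of distributed jobs. As a minimal illustrative sketch (not part of the role description), the following Python snippet brings up an NCCL process group under PyTorch; it assumes a launcher such as torchrun or a Slurm wrapper has already set RANK, WORLD_SIZE, LOCAL_RANK, MASTER_ADDR, and MASTER_PORT:

    import os
    import torch
    import torch.distributed as dist

    def init_distributed():
        """Join the NCCL process group using env vars set by the launcher.
        Hypothetical helper for illustration only."""
        rank = int(os.environ["RANK"])
        world_size = int(os.environ["WORLD_SIZE"])
        local_rank = int(os.environ.get("LOCAL_RANK", 0))

        # Pin this process to one GPU before initializing NCCL.
        torch.cuda.set_device(local_rank)

        # NCCL is the standard backend for multi-GPU/multi-node collectives.
        dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)

        # Sanity check: an all-reduce across every GPU in the job.
        t = torch.ones(1, device="cuda")
        dist.all_reduce(t)
        assert t.item() == world_size

    if __name__ == "__main__":
        init_distributed()
        dist.destroy_process_group()

When a job hangs at this step, the usual suspects are exactly the technologies listed above: NCCL transport selection, InfiniBand/RoCE fabric health, and GPUDirect RDMA configuration (NCCL_DEBUG=INFO is the standard first diagnostic).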

Responsibilities

  • Act as the primary technical point of escalation for Super Intelligence customers running hyperscale GPU clusters
  • Lead incident response for complex issues, ensuring rapid triage, clear communication, and timely resolution
  • Proactively identify risks in large environments (firmware, performance bottlenecks, orchestration issues) and drive preventative improvements; a monitoring sketch follows this list
  • Partner closely with Lambda Engineering and Product teams to influence roadmap decisions based on real customer needs
  • Contribute to runbooks, best practices, and operational guides tailored for hyperscale environments
  • Train and mentor other support engineers, raising the bar across Lambda’s support organization
  • Participate in a rotating on-call schedule, owning critical incidents and high-priority alerts for SI customers
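
As a flavor of the proactive risk identification mentioned above, here is a minimal sketch that polls a Prometheus server for GPU utilization and flags idle accelerators. The endpoint URL and the DCGM_FI_DEV_GPU_UTIL metric (exported by NVIDIA's DCGM exporter) are assumptions about the environment, not details from this posting:

    import requests

    PROM_URL = "http://prometheus.internal:9090"  # hypothetical endpoint

    def idle_gpus(threshold: float = 5.0):
        """Return (instance, gpu, utilization) tuples for GPUs below the
        utilization threshold. Assumes DCGM_FI_DEV_GPU_UTIL is being scraped."""
        resp = requests.get(
            f"{PROM_URL}/api/v1/query",
            params={"query": "DCGM_FI_DEV_GPU_UTIL"},
            timeout=10,
        )
        resp.raise_for_status()
        results = resp.json()["data"]["result"]
        return [
            (r["metric"].get("instance"), r["metric"].get("gpu"), float(r["value"][1]))
            for r in results
            if float(r["value"][1]) < threshold
        ]

    if __name__ == "__main__":
        for instance, gpu, util in idle_gpus():
            print(f"{instance} gpu{gpu}: {util:.1f}% utilized")

A check like this, wired into Grafana dashboards or alerting rules, is the kind of preventative tooling the runbook work above covers.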


About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, with a focus on large language models and generative AI. Its main product, the AI Developer Cloud, uses NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient, cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies that need large GPU deployments, offering competitive pricing and infrastructure-ownership options through its Lambda Echelon service. It also provides Lambda Stack, a software bundle that simplifies installing and managing AI tooling and is used by more than 50,000 machine learning teams. The company's goal is to support AI development with accessible, efficient cloud GPU services.
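
Taking the advertised $1.99/hour H100 rate at face value, a back-of-envelope calculation shows the monthly cost of a typical 8-GPU node (per-GPU billing and a 730-hour month are assumptions for illustration):

    # Back-of-envelope cluster cost, assuming per-GPU hourly billing.
    H100_HOURLY = 1.99        # USD per GPU-hour (advertised starting rate)
    GPUS_PER_NODE = 8         # typical HGX H100 node (assumption)
    HOURS_PER_MONTH = 730     # average month

    node_month = H100_HOURLY * GPUS_PER_NODE * HOURS_PER_MONTH
    print(f"One 8xH100 node: ${node_month:,.0f}/month")  # ~ $11,622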

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

  • Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
  • AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
  • Reported "existential crisis" outputs from the Hermes 3 model, developed in collaboration with Lambda, raise concerns about model reliability.

Differentiation

  • Lambda offers a cost-effective Inference API for AI model deployment without infrastructure maintenance.
  • NVIDIA HGX H100 and Quantum-2 InfiniBand clusters enhance Lambda's AI model training capabilities.
  • Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

  • Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
  • NVIDIA HGX H100 clusters provide a competitive edge in high-performance AI computing.
  • Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
