[Remote] HPC Support Engineer at Lambda

United States

Compensation: $137,000 – $206,000
Experience Level: Senior (5 to 8 years)
Job Type: Full Time
Visa: Unknown
Industries: AI, HPC, Cloud Computing

Requirements

  • 7+ years in cloud support operations or systems engineering
  • Strong experience with public cloud platforms (AWS, Azure, GCP) or GPU cloud providers
  • Deep understanding of and hands-on experience with Linux (Ubuntu) system administration
  • Proven experience in HPC environments, including Linux cluster administration, with a strong preference for Kubernetes and/or Slurm for cluster orchestration
  • Proficiency with monitoring/logging tools (Prometheus, Grafana, Datadog)
  • Strong skills in log analysis, debugging kernel-level issues, and performance profiling
  • Experience with CUDA, NCCL, NVLink, MIG, GPUDirect RDMA
  • Experience with high throughput networking technologies (IB/RoCE)
  • Experience with virtualization and container (Docker, Kubernetes) technologies
  • Knowledge of distributed AI/ML or HPC workloads
  • Knowledge of TCP/IP, VPN, and firewalls in cloud environments
  • Ability to work independently and mentor junior support engineers
  • Participation in a 24/7 coverage model with one of the specified schedules (Monday-Friday 8AM-5PM PT, Sunday-Wednesday 12PM-11PM PT, or Wednesday-Saturday 12PM-11PM PT)
  • Participation in a rotating on-call schedule for major incidents and customer alerts

Responsibilities

  • Engage directly with customers to deeply understand their challenges, ensuring a personalized and effective support experience
  • Dive into complex software and hardware issues, providing timely and efficient solutions
  • Craft comprehensive documentation of solutions and contribute to enhancing support procedures for continuous improvement
  • Identify common customer pain points and collaborate with engineering teams to develop innovative solutions
  • Collaborate in the development of new and existing products, contributing expertise to deep learning cloud and HPC infrastructure
  • Handle escalations from peers while training and mentoring them
  • Work cross-functionally on projects to create and improve support tooling

Skills

Key technologies and capabilities for this role

AWS, Azure, GCP, Linux, Ubuntu, HPC, Linux Cluster Administration, Systems Engineering, Cloud Support

Questions & Answers

Common questions about this position

What is the salary range for the HPC Support Engineer position?

The salary range is $137,000 – $206,000.

Is this HPC Support Engineer role remote?

Yes, the position is remote.

What are the key required skills for this role?

Key requirements include 7+ years in cloud support operations or systems engineering, strong experience with public cloud platforms (AWS, Azure, GCP) or GPU cloud providers, very strong Linux (Ubuntu) system administration, proven HPC experience with preference for Kubernetes/Slurm, and proficiency with monitoring tools like Prometheus, Grafana, Datadog.

What work schedules are available for this position?

The position is part of a 24/7 coverage model with schedules: Monday - Friday, 8AM - 5PM Pacific Time; Sunday - Wednesday, 12PM - 11PM Pacific Time; or Wednesday - Saturday, 12PM - 11PM Pacific Time. It also includes a rotating on-call schedule for major incidents.

What experience makes a strong candidate for this HPC Support Engineer role?

Candidates with 7+ years in cloud support or systems engineering, strong public cloud and Linux expertise, HPC cluster administration (especially Kubernetes/Slurm), and experience with CUDA, networking technologies, and monitoring tools stand out. Nice-to-haves like Python proficiency, storage technologies, and IaC tools (Terraform, Ansible) are also valued.

Lambda

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
Reported "existential crisis" behavior in the Hermes 3 model raises concerns about Lambda's AI model reliability.

Differentiation

Lambda offers cost-effective Inference API for AI model deployment without infrastructure maintenance.
Nvidia HGX H100 and Quantum-2 InfiniBand Clusters enhance Lambda's AI model training capabilities.
Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
Nvidia HGX H100 clusters provide competitive edge in high-performance AI computing.
Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
