Lambda

Senior Software Engineer - Managed Kubernetes

San Francisco, California, United States

Compensation: $255,000 – $405,000
Experience Level: Senior (5 to 8 years)
Job Type: Full Time
Visa: Unknown
Industries: Artificial Intelligence, Cloud Computing, Software Engineering

Senior Software Engineer - Managed Kubernetes (Mk8s)

Salary: $255K – $405K | Location Type: Hybrid | Employment Type: Full-Time

About Lambda

In 2012, Lambda started with a crew of AI engineers publishing research at top machine-learning conferences. We began as an AI company built by AI engineers. That hasn't changed. Today, we're on a mission to be the world's top AI computing platform. We equip engineers with the tools to deploy AI that is fast, secure, affordable, and built to scale. Whether they need powerhouse GPU hardware on-site or the flexibility of cloud-based solutions, we've got the horsepower to make it happen. Lambda’s AI Cloud has been adopted by the world’s leading companies and research institutions including Anyscale, Rakuten, The AI Institute, and multiple enterprises with over a trillion dollars of market capitalization. Our goal is to make computation as effortless and ubiquitous as electricity. If you'd like to build the world's best deep learning cloud, join us.

Note: This position requires presence in our San Francisco office location 4 days per week; Lambda’s designated work from home day is currently Tuesday.

About the Role

We are seeking a Senior Software Engineer to join our Managed Kubernetes (Mk8s) team. This is a hybrid role that blends deep software engineering capabilities with Site Reliability Engineering (SRE) principles. You will play a crucial role in shaping the architecture, reliability, and automation of our Kubernetes-based infrastructure, which powers mission-critical workloads across our global platform.

What You’ll Do

Software Engineering

  • Design, build, and maintain scalable control plane services, operators, and custom controllers for Kubernetes.
  • Develop automation for cluster lifecycle management (provisioning, upgrades, patching, deletion).
  • Develop internal tools, APIs, and command-line interfaces (CLIs) that enable customers and ML/AI teams to deploy and monitor inference services effectively.
  • Write resilient systems that gracefully handle failure across large-scale distributed environments.
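The operator and custom-controller work above centers on the level-triggered reconcile pattern: observe the current state, compare it to the desired state, and emit the actions needed to converge. A toy sketch of that loop, using invented types rather than a real CRD or controller-runtime, might look like:

```go
package main

import "fmt"

// ClusterState is a toy model of a managed cluster's state.
// The fields are illustrative assumptions; a real operator
// would define a CRD and use client-go/controller-runtime.
type ClusterState struct {
	Nodes   int
	Version string
}

// Reconcile compares desired state against observed state and
// returns the ordered actions needed to converge. This is the
// level-triggered loop at the heart of a custom controller:
// it is driven by state, not by events, so it is safe to rerun.
func Reconcile(desired, observed ClusterState) []string {
	var actions []string
	if observed.Version != desired.Version {
		actions = append(actions, "upgrade control plane to "+desired.Version)
	}
	for n := observed.Nodes; n < desired.Nodes; n++ {
		actions = append(actions, "provision node")
	}
	for n := observed.Nodes; n > desired.Nodes; n-- {
		actions = append(actions, "drain and remove node")
	}
	return actions
}

func main() {
	desired := ClusterState{Nodes: 3, Version: "1.30"}
	observed := ClusterState{Nodes: 1, Version: "1.29"}
	for _, a := range Reconcile(desired, observed) {
		fmt.Println(a)
	}
}
```

Because reconciliation is idempotent, the same function handles provisioning, upgrades, and scale-down alike, which is why the cluster-lifecycle automation described above can be expressed as one loop.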

SRE & Operations

  • Define and implement Service-Level Objectives (SLOs) and Service-Level Indicators (SLIs) for Kubernetes services, workloads, and the platform.
  • Dive into systems at a low level to solve unique cluster problems and write up your findings.
  • Assist customers with high-level Kubernetes questions and integration with applications, storage, and authentication.
  • Assist with initial cluster build-outs and validation to help identify failed hardware before customer delivery.
  • Work closely with our HPC Ops and Datacenter Ops teams on issues that require lower-level expertise or cross-functional solutions.
  • Participate in a well-managed, sustainable on-call rotation.

You Have

  • 6+ years of experience in software engineering or SRE roles, including 3+ years leading large-scale, complex projects or serving as a tech lead.
  • Experience tuning Kubernetes internals and writing operators (CRDs, CSI, CNI, etc.).
  • Strong programming skills in Go and Python; experience with GitOps (e.g., ArgoCD), Helm, and Kubernetes operators.
  • Experience operating Kubernetes clusters in production environments (e.g., EKS, GKE, on-prem).
  • Deep understanding of SRE principles: incident response, chaos engineering, scaling, and reliability.
  • Proficiency in observability tools (Prometheus, Grafana, FluentBit, etc.).
  • Experience with infrastructure-as-code tools (Terraform, Pulumi) and CI/CD pipelines.
  • Solid knowledge of Linux systems, networking, containers, and cloud infrastructure.

Nice to Have

  • Deep Kubernetes expertise.
  • Experience with user-level restrictions and hardening (e.g., AppArmor).
  • Experience with HPC clusters, environments, and tooling.
  • Experience with large-scale AI/ML training clusters.
  • Experience with machine learning/AI frameworks.
  • Expertise with hybrid or multi-cloud Kubernetes environments.
  • Familiarity with GPU, Infiniband, or high-performance computing on K8s.
  • Past contributions to CNCF projects or Kubernetes SIGs.

If you don’t meet all of these requirements but believe you may be a good fit, please still apply and provide a cover letter.

Skills

Kubernetes
Control Plane Services
Operators
Custom Controllers
Automation
Cluster Lifecycle Management
APIs
CLI Development
Distributed Systems
Site Reliability Engineering (SRE)

Lambda

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
Existential crisis in Hermes 3 model raises concerns about Lambda's AI model reliability.

Differentiation

Lambda offers cost-effective Inference API for AI model deployment without infrastructure maintenance.
Nvidia HGX H100 and Quantum-2 InfiniBand Clusters enhance Lambda's AI model training capabilities.
Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
Nvidia HGX H100 clusters provide competitive edge in high-performance AI computing.
Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
