Senior Site Reliability Engineer - Named Accounts at Lambda

Seattle, Washington, United States

Compensation: $240,000 – $425,000
Experience Level: Senior (5 to 8 years)
Job Type: Full Time
Visa Sponsorship: Unknown
Industries: AI, Cloud Computing, Machine Learning

Requirements

  • 6+ years of experience in an SRE, software engineering, or similar role, with deep knowledge of running Linux clusters and systems
  • Strong programming skills in Go and Python
  • Experience with GitOps (e.g., ArgoCD), Helm, and Kubernetes operators
  • Proven experience operating Kubernetes clusters in production environments (on-prem, EKS, GKE, or similar)
  • Hands-on experience with AI/ML workload management tools (Volcano, Kubeflow, or similar)
  • Ability to work independently with limited direction or as part of a team
  • Familiarity with observability tools like Prometheus, Grafana, FluentBit, and CI/CD pipelines
  • Proven experience provisioning Kubernetes using tools such as kubeadm, Cluster API, or similar
  • Excellent communication skills with the ability to translate technical complexity for diverse audiences
  • Executive presence and ability to represent Lambda in customer-facing situations
  • Comfort operating in ambiguous environments with competing priorities
  • Strong bias for action and shipping iteratively

Responsibilities

  • Embed on-site with a named strategic customer, becoming an extension of their team
  • Act as the primary technical liaison between Lambda and the customer organization
  • Navigate ambiguous requirements to identify root problems and define clear technical solutions
  • Drive alignment across internal Lambda teams and customer stakeholders
  • Scope, sequence, and build full-stack solutions that deliver measurable business value
  • Design and implement infrastructure optimizations for AI/ML workloads at scale
  • Debug complex distributed systems issues across the infrastructure stack
  • Ship iteratively and learn fast, adjusting approach based on customer feedback and results
  • Identify reusable patterns from customer engagements that can scale across Lambda's customer base
  • Surface field intelligence that influences Lambda's product roadmap
  • Document and share learnings to elevate the capabilities of the broader team
  • Represent Lambda with executive presence in high-stakes customer interactions

Skills

SRE
Distributed Systems
AI/ML Workloads
Infrastructure Optimization
Full-Stack Solutions
Debugging
Customer Engagement

Lambda

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

  • Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
  • AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
  • The "existential crisis" behavior exhibited by the Hermes 3 model raises concerns about the reliability of Lambda's AI models.

Differentiation

  • Lambda offers a cost-effective Inference API for AI model deployment without infrastructure maintenance.
  • Nvidia HGX H100 and Quantum-2 InfiniBand clusters enhance Lambda's AI model training capabilities.
  • Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

  • The Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
  • Nvidia HGX H100 clusters provide a competitive edge in high-performance AI computing.
  • Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
