Lambda

Senior Site Reliability Engineer - Control Plane

San Francisco, California, United States

Compensation: $245,000 – $385,000
Experience Level: Senior (5–8 years)
Job Type: Full Time
Visa: Unknown
Industries: Artificial Intelligence, Cloud Computing, DevOps

Position Overview

  • Location Type: Hybrid (San Francisco office, 4 days per week; Tuesday is the designated work-from-home day)
  • Job Type: Full-Time
  • Salary: $245K - $385K

Lambda is an AI company built by AI engineers, focused on being the world’s top AI computing platform. We equip engineers with the tools to deploy AI that is fast, secure, affordable, and built to scale. Our AI Cloud has been adopted by leading companies and research institutions. Our goal is to make computation as effortless and ubiquitous as electricity. Join us to build the world’s best deep learning cloud.

Requirements

  • Experience: 5+ years in Site Reliability Engineering or DevOps roles.
  • Cloud Platforms: Strong understanding of cloud platforms (AWS, GCP, Azure) and their core services.
  • Monitoring & Observability: Experience designing and implementing monitoring and observability solutions at scale.
  • Incident Management: Proven track record managing production incidents and driving root cause analysis.
  • IaC & CI/CD: Proficiency with Infrastructure as Code tools and CI/CD pipeline implementation (e.g., Argo, Terraform).
  • Networking: Strong understanding of network architecture, load balancing, and content delivery.
  • Database: Knowledge of database administration and optimization strategies.
  • Coding Skills: Solid coding skills in at least one language (Python, Go, Bash) for automation.

Responsibilities

  • Design and implement cloud-native architectures that deliver 99.99% reliability while balancing performance and cost efficiency.
  • Develop comprehensive monitoring and alerting systems with actionable dashboards for real-time system health visibility.
  • Implement SLIs, SLOs, and SLAs across services and maintain error budgets.
  • Automate deployments using tools like Argo and Terraform.
  • Create robust incident management processes, escalation paths, and documentation.
  • Design and implement disaster recovery solutions with regular testing procedures.
  • Lead post-incident reviews focused on systemic improvements.
  • Champion reliability best practices and system design principles.
  • Build automated, auditable, and compliant processes to improve efficiency and productivity.

Nice to Have (Bonus Skills)

  • Experience with high-throughput, low-latency systems.
  • Knowledge of security best practices and implementing defense-in-depth strategies.
  • Experience with multi-region and distributed systems.

Skills

AWS
GCP
Azure
Monitoring
Observability
Incident Management
IaC
Terraform
Argo
Networking
Load Balancing
Content Delivery
Database Administration
Python
Go
Bash

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

  • Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
  • AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
  • Reported reliability concerns around the Hermes 3 model raise questions about Lambda's AI model offerings.

Differentiation

  • Lambda offers a cost-effective Inference API for AI model deployment without infrastructure maintenance.
  • Nvidia HGX H100 and Quantum-2 InfiniBand clusters enhance Lambda's AI model training capabilities.
  • Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

  • The Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
  • Nvidia HGX H100 clusters provide a competitive edge in high-performance AI computing.
  • Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.