Engineering Manager, Core Services at Lambda

San Francisco, California, United States

Compensation: $297,000 – $495,000
Experience Level: Expert & Leadership (9+ years)
Job Type: Full Time
Visa: Unknown
Industries: Artificial Intelligence, Cloud Computing

Requirements

  • 7+ years of experience in either Release Engineering or Platform Engineering with at least 3 years in a management or lead role
  • Demonstrated experience leading a team of engineers and SREs on complex, cross-functional projects in a fast-paced startup environment
  • Experience managing, monitoring, and scaling CI/CD platforms
  • Deep experience using and operating AWS services
  • Solid background in software engineering and the SDLC
  • Strong project management skills, with experience leading planning, execution, and delivery

Responsibilities

  • Hire, grow, lead, and mentor a team of high-performing platform engineers and SREs
  • Foster a culture of technical excellence, collaboration, and customer service
  • Conduct regular one-on-one meetings, provide constructive feedback, and support career development for team members
  • Drive outcomes by managing project priorities, deadlines, and deliverables
  • Work with the engineering team to drive strategy for internal CI/CD and Cloud services
  • Develop self-service abstractions to make platform tooling easier to adopt and use
  • Lead the broader engineering organization in adopting best practices for CI/CD, Workflow, and Cloud services
  • Manage costs of both vendors and internally developed platforms
  • Lead the team in the continued development of existing CI/CD solutions based on Buildkite and GitHub Actions (see the workflow sketch after this list)
  • Lead the team in expanding the Terraform / Atlantis infrastructure automation platform (see the Atlantis sketch after this list)
  • Guide Lambda engineering in using AWS services in line with technical standards
  • Guide the team in problem identification, requirements gathering, solution ideation, and stakeholder alignment on engineering RFCs
  • Identify gaps in platform engineering posture and drive resolution
  • Lead the team in supporting internal customers from across Lambda engineering
  • Work closely with Lambda product engineering teams on requirements and planning to meet their needs
  • Work to understand the needs of engineering teams and drive Platform solutions towards self-service
  • Manage a short list of vendors that provide SaaS solutions used at Lambda
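
To make the CI/CD and self-service themes above concrete, here is a minimal sketch of a shared GitHub Actions workflow that product teams could call from their own repositories. It is illustrative only, not Lambda's actual tooling; the repository name, file paths, and inputs are hypothetical.

    # .github/workflows/reusable-ci.yml in a hypothetical shared platform repository
    name: reusable-ci
    on:
      workflow_call:
        inputs:
          python-version:
            type: string
            default: "3.12"
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-python@v5
            with:
              python-version: ${{ inputs.python-version }}
          - run: pip install -r requirements.txt   # install project dependencies
          - run: pytest                            # run the test suite

A product repository would then opt in with a short caller workflow, which is what keeps adoption of the platform tooling low-friction:

    # .github/workflows/ci.yml in a product repository (hypothetical org and tag)
    name: ci
    on:
      pull_request:
    jobs:
      ci:
        uses: example-org/platform-workflows/.github/workflows/reusable-ci.yml@v1
        with:
          python-version: "3.11"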

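The Terraform / Atlantis item can be read the same way: Atlantis is driven by a repo-level atlantis.yaml that declares which directories get planned and applied on pull requests. The sketch below uses assumed project and directory names and is not Lambda's actual configuration.

    # atlantis.yaml at the root of a hypothetical Terraform repository
    version: 3
    projects:
      - name: networking
        dir: terraform/networking            # assumed directory layout
        workspace: default
        autoplan:
          when_modified: ["*.tf", "../modules/**/*.tf"]
          enabled: true
        apply_requirements: [approved, mergeable]  # apply only after review approval

With a config like this, Atlantis comments the plan output on pull requests that touch the listed paths and applies only once the requirements are met.
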
Skills

CI/CD
AWS
Release Engineering
Cloud Automation
SRE
Platform Engineering
Artifact Management

Lambda

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
Reported "existential crisis" behavior in the Hermes 3 model raises concerns about Lambda's AI model reliability.

Differentiation

Lambda offers cost-effective Inference API for AI model deployment without infrastructure maintenance.
NVIDIA HGX H100 and Quantum-2 InfiniBand clusters enhance Lambda's AI model training capabilities.
Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
NVIDIA HGX H100 clusters provide a competitive edge in high-performance AI computing.
Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
