Staff Mechanical Controls Engineer at Lambda

San Francisco, California, United States

Compensation: Not Specified
Experience Level: N/A
Job Type: N/A
Visa: Not Specified
Industries: N/A

Requirements

  • Bachelor’s degree in Mechanical Engineering, Controls Engineering, or related field
  • 5+ years of experience in mechanical and controls engineering for data centers or critical infrastructure
  • Experience programming control logic on one or more platforms such as Siemens, Tridium, Distech, StruxureWare, or Alerton
  • Proven experience developing Sequences of Operation (SOO), controls hierarchies, and data pipelines for mechanical systems
  • Experience with communication network protocols: BACnet/IP, Modbus TCP, SNMP, OPC-UA (a protocol-level sketch follows this list)
  • Strong understanding of liquid cooling technologies, thermal management, and mechanical system operations
  • Ability to lead troubleshooting, perform root cause analysis, and resolve controls-related issues in live environments
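
As a rough, protocol-level illustration of the network requirement above (BACnet/IP, Modbus TCP, SNMP, OPC-UA), the sketch below reads a single holding register over raw Modbus TCP using only the Python standard library. The device IP, register address, unit ID, and 0.1 °C scaling are assumptions invented for this example, not values from Lambda's systems; a real integration would take them from the vendor's points list.

```python
import socket
import struct

# Assumed values for illustration only -- a real deployment would pull the
# device address, register map, and scaling from the vendor's points list.
DEVICE_IP = "192.0.2.10"        # placeholder (documentation address)
MODBUS_PORT = 502               # standard Modbus TCP port
CHW_SUPPLY_TEMP_REG = 100       # hypothetical holding register, 0.1 degC/count


def read_holding_register(ip: str, register: int, unit_id: int = 1) -> int:
    """Read one holding register with a raw Modbus TCP request (function 0x03)."""
    # MBAP header (transaction id, protocol id, remaining length, unit id)
    # followed by the PDU (function code, start address, register count).
    request = struct.pack(
        ">HHHBBHH",
        1,         # transaction id
        0,         # protocol id (always 0 for Modbus TCP)
        6,         # byte count of the rest of the frame
        unit_id,   # unit / slave id
        0x03,      # function code: read holding registers
        register,  # starting register address
        1,         # number of registers to read
    )
    with socket.create_connection((ip, MODBUS_PORT), timeout=2.0) as sock:
        sock.sendall(request)
        response = sock.recv(256)
    # Response layout: 7-byte MBAP header, function code, byte count, data.
    if len(response) < 11 or response[7] != 0x03:
        raise RuntimeError(f"unexpected Modbus response: {response.hex()}")
    (value,) = struct.unpack(">H", response[9:11])
    return value


if __name__ == "__main__":
    raw = read_holding_register(DEVICE_IP, CHW_SUPPLY_TEMP_REG)
    print(f"Chilled-water supply temperature: {raw / 10:.1f} degC")
```

In practice this kind of polling usually sits behind a BMS driver (for example a Tridium or Distech integration) rather than hand-rolled sockets; the raw frame is shown only to make the protocol structure visible.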

Responsibilities

  • Architect, develop, and scale mechanical and controls infrastructure for Lambda’s GPU-based data centers
  • Develop end-to-end architectures for mechanical and control systems in high-density AI data centers
  • Write and validate Sequences of Operation (SOO) for mechanical and BMS integration (a simplified staging sketch follows this list)
  • Define controls architecture and hierarchy for mechanical systems, ensuring scalability and resilience
  • Drive standardization of mechanical and control system design across sites and vendors
  • Architect and build BMS/controls platforms from the ground up
  • Develop and manage data pipelines for telemetry, controls feedback loops, and performance optimization
  • Identify, define, and validate BMS and EPMS points lists for monitoring, alarms, and control logic
  • Implement automated control logic and optimization algorithms to improve efficiency, reliability, and uptime
  • Support capacity planning for GPU rack densities of 1 MW and above
  • Standardize points lists, SOO, and mechanical design templates across global sites
  • Provide technical leadership for incident response, root cause analysis, and system improvements
  • Partner with HPC architects, operations, and product engineering teams
  • Serve as a subject-matter expert in mechanical controls and telemetry systems for hyperscale environments
  • Apply a product-development mindset to translate concepts into scalable solutions
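
To make the SOO and automated-control bullets above concrete, here is a minimal sketch of how a staging sequence might be codified for validation. The setpoints, deadband, and alarm limit are invented for illustration and are not Lambda's actual sequences or values.

```python
from dataclasses import dataclass

# Illustrative setpoints only -- real values come from the validated SOO
# and the site-specific mechanical design, not from this sketch.
SUPPLY_SETPOINT_C = 30.0   # assumed liquid-cooling supply target
LAG_STAGE_DELTA_C = 2.0    # enable the lag pump this far above setpoint
HIGH_LIMIT_C = 40.0        # latched high-temperature alarm threshold
DEADBAND_C = 0.5           # hysteresis to prevent short-cycling


@dataclass
class LoopState:
    lead_pump_on: bool = True
    lag_pump_on: bool = False
    high_temp_alarm: bool = False


def evaluate_soo(supply_temp_c: float, state: LoopState) -> LoopState:
    """One evaluation of a simplified staging sequence for a cooling loop."""
    # Stage the lag pump on when supply temperature runs above setpoint plus
    # the staging delta, and stage it off only after recovery past the deadband.
    if supply_temp_c >= SUPPLY_SETPOINT_C + LAG_STAGE_DELTA_C:
        state.lag_pump_on = True
    elif supply_temp_c <= SUPPLY_SETPOINT_C + LAG_STAGE_DELTA_C - DEADBAND_C:
        state.lag_pump_on = False

    # Latch a high-temperature alarm for operator and incident-response review.
    if supply_temp_c >= HIGH_LIMIT_C:
        state.high_temp_alarm = True
    return state


if __name__ == "__main__":
    state = LoopState()
    for temp in (29.5, 31.0, 32.5, 31.4, 41.0):
        state = evaluate_soo(temp, state)
        print(temp, state)
```

Expressing a sequence this way makes it straightforward to unit-test staging and alarm behavior against the written SOO before the logic is ported to the target BMS platform.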

Skills

Lambda

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
Existential crisis in Hermes 3 model raises concerns about Lambda's AI model reliability.

Differentiation

Lambda offers cost-effective Inference API for AI model deployment without infrastructure maintenance.
Nvidia HGX H100 and Quantum-2 InfiniBand Clusters enhance Lambda's AI model training capabilities.
Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
Nvidia HGX H100 clusters provide competitive edge in high-performance AI computing.
Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
