[Remote] AI Infrastructure Deployment Lead at Lambda

United States

Compensation: Not Specified
Experience Level: N/A
Job Type: N/A
Visa: Not Specified
Industries: N/A

Requirements

  • Bachelor’s degree in Computer Engineering, Information Technology, or related field
  • CCNA (Cisco Certified Network Associate) certification (CCNP or equivalent a plus)
  • PMP (Project Management Professional) certification (or equivalent a plus)
  • 5+ years of experience in data center infrastructure deployment or network operations, preferably in AI, HPC, or cloud environments
  • Proven ability to lead complex technical projects and manage multidisciplinary teams
  • Strong understanding of data center network design (Layer 2/3, VLANs, rack elevations, port mapping, InfiniBand technologies)
  • Hands-on expertise in server hardware troubleshooting and rack-level integration

Responsibilities

  • Lead end-to-end deployment of GPU clusters, storage systems, and networking fabric across Lambda’s data centers
  • Design and implement data center network topologies optimized for AI and HPC workloads, including high-speed Ethernet and InfiniBand environments
  • Oversee rack implementation, cabling, and power/cooling validation for optimal efficiency and scalability
  • Collaborate with supply chain, logistics, and operations teams to ensure smooth delivery and installation timelines
  • Implement Layer 2/Layer 3 networks, including VLANs, spine-leaf architecture, and InfiniBand interconnect technology
  • Partner with network architects to ensure redundancy, scalability, and low-latency interconnects for distributed AI workloads
  • Monitor network health, identify bottlenecks, and implement optimizations to maintain peak performance
  • Oversee server hardware troubleshooting, including GPUs, NICs, CPUs, and storage components
  • Lead root-cause analysis for system issues and drive corrective actions in collaboration with vendors and internal hardware teams
  • Develop standard operating procedures (SOPs) for hardware validation, deployment, and maintenance
  • Serve as technical project lead for infrastructure rollouts and cluster expansion projects
  • Coordinate cross-functional teams — networking, facilities, cloud operations, and hardware engineering — to execute deployments on schedule
  • Manage project scope, budgets, risk assessments, and post-deployment reviews
  • Communicate status, challenges, and milestones to leadership with clarity and precision
  • Maintain detailed network topology diagrams, deployment runbooks, and hardware inventories
  • Identify opportunities for process automation and infrastructure standardization across deployments
  • Contribute to Lambda’s internal knowledge base and mentor junior engineers on data center best practices


About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

  • Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
  • AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
  • Existential crisis in the Hermes 3 model raises concerns about Lambda's AI model reliability.

Differentiation

  • Lambda offers a cost-effective Inference API for AI model deployment without infrastructure maintenance.
  • NVIDIA HGX H100 and Quantum-2 InfiniBand clusters enhance Lambda's AI model training capabilities.
  • Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

  • The Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
  • NVIDIA HGX H100 clusters provide a competitive edge in high-performance AI computing.
  • Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
