Staff Network Engineer at Lambda

San Francisco, California, United States

Compensation: $284,000 – $473,000
Experience Level: Senior (5 to 8 years), Expert & Leadership (9+ years)
Job Type: Full Time
Visa Sponsorship: Unknown
Industries: AI, Cloud Computing, Technology

Requirements

  • 15+ years of experience designing and operating production datacenter networks
  • Have led the implementation of large, production-scale networking projects
  • Expert in Clos/spine-and-leaf fabrics, EVPN/VXLAN, ECMP, BGP, and fast-convergence techniques
  • Experience with multi-datacenter, backbone, and hybrid cloud networks
  • Production experience with at least two switch/router vendors (e.g., Arista, Juniper, Cisco, NVIDIA/Mellanox, Cumulus/SONiC)
  • Experience with Next-Generation Firewalls (NGFW) (e.g., FortiGate, Juniper)
  • Experience with load balancers such as F5 or NetScaler
  • Comfortable on the Linux command line, with an understanding of the Linux networking stack
  • Strong automation skills (Python, Ansible) and experience with network APIs

Nice to Have

  • Hands-on experience with HPC/AI networking: RoCEv2 and/or InfiniBand (congestion control, virtual lanes, partitions) and GPUDirect RDMA concepts
  • Experience with DWDM technologies and SD-WAN
  • Understanding of data center power/space/cooling trade-offs and their impact on topology choices
  • Experience with Observability tools like Datadog, Splunk, Grafana, Prometheus
  • Experience automating network configuration within public clouds, with tools like Terraform
  • Have led the implementation of production-scale SDNs in a cloud context (e.g., helped build the infrastructure that powers an AWS VPC-like feature)
  • Deep understanding of the Linux networking stack and its interaction with network virtualization
  • Experience with the SDN ecosystem (e.g., OVS, Neutron, DPDK, Cisco ACI or Nexus Fabric Controller, Arista CVP)
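The automation skills listed above often come down to rendering per-device configuration from a template. A minimal sketch of that idea, using only Python's standard library (all ASNs, device names, and addresses below are hypothetical; real deployments would typically use Jinja2/Ansible fed from a source of truth):

```python
from string import Template

# Hypothetical BGP underlay stanza for a leaf switch in a spine-and-leaf
# fabric; each leaf gets its own ASN and peers with the shared spine ASN.
BGP_TEMPLATE = Template("""\
router bgp $asn
  router-id $router_id
  neighbor SPINES peer group
  neighbor SPINES remote-as $spine_asn
  neighbor $spine1 peer group SPINES
  neighbor $spine2 peer group SPINES
""")

def render_leaf(asn: int, router_id: str, spine_asn: int,
                spine1: str, spine2: str) -> str:
    """Render the BGP config stanza for one leaf switch."""
    return BGP_TEMPLATE.substitute(
        asn=asn, router_id=router_id, spine_asn=spine_asn,
        spine1=spine1, spine2=spine2)

config = render_leaf(65101, "10.0.0.11", 65100, "10.0.1.1", "10.0.1.2")
print(config)
```

Generating configuration this way, rather than editing devices by hand, is what makes a fabric reproducible: the same template plus per-device data yields identical, reviewable output every time.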

Responsibilities

  • Help scale Lambda’s high-performance cloud network
  • Contribute to reproducible automation of network configuration
  • Contribute to the design and development of software-defined networks
  • Help manage spine-and-leaf networks
  • Ensure high availability of the network through monitoring, failover, and redundancy
  • Ensure VMs and clients see predictable network performance through QoS and other applicable technologies
  • Help deploy and maintain network monitoring and management tools

Skills

Spine and Leaf
EVPN
VXLAN
ECMP
BGP
Arista
Juniper
Cisco
NVIDIA/Mellanox
Cumulus/SONiC
FortiGate
F5
NetScaler
Python
Ansible
Linux

Lambda

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
Existential crisis in Hermes 3 model raises concerns about Lambda's AI model reliability.

Differentiation

Lambda offers cost-effective Inference API for AI model deployment without infrastructure maintenance.
Nvidia HGX H100 and Quantum-2 InfiniBand Clusters enhance Lambda's AI model training capabilities.
Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
Nvidia HGX H100 clusters provide competitive edge in high-performance AI computing.
Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
