Lambda

Senior Site Reliability Engineer - Networking

London, England, United Kingdom

Compensation: £93,000 – £144,000
Experience Level: Senior (5 to 8 years)
Job Type: Full Time
Visa: Unknown
Industries: Artificial Intelligence, Cloud Computing, Networking

Job Description: Network Reliability Engineer

Salary: £93K – £144K
Location Type: Remote
Employment Type: Full-Time

Position Overview

Lambda is seeking a skilled Network Reliability Engineer to help scale our high-performance cloud network. As an AI company built by AI engineers, we are on a mission to be the world's top AI computing platform, equipping engineers with the tools to deploy AI that is fast, secure, affordable, and built to scale. You will contribute to the reproducible automation of network configuration and deployments, help implement and operate Software Defined Networks (SDN), and ensure the high availability and predictable performance of our network.

Responsibilities

  • Help scale Lambda’s high-performance cloud network.
  • Contribute to the reproducible automation of network configuration and deployments (see the illustrative sketch after this list).
  • Contribute to the implementation and operations of Software Defined Networks (SDN).
  • Deploy and manage Spine and Leaf networks.
  • Ensure high availability of our network through observability, failover, and redundancy.
  • Ensure clients have predictable networking performance through network engineering and applicable technologies.
  • Deploy and maintain network monitoring and management tools.
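
To make the reproducible-automation responsibility above concrete, here is a minimal Python sketch. It is purely illustrative and not Lambda's actual tooling: the device names, intent structure, and template are invented assumptions. The point is that a device configuration rendered as a pure function of version-controlled intent data is reproducible and easy to diff in review.

```python
# Hypothetical illustration only: hostnames, ASNs, and the template are
# invented for this sketch and are not Lambda's actual tooling.
from jinja2 import Template

# Declarative "intent" describing a leaf switch; in practice this would
# live in version control and be reviewed like any other change.
leaf_intent = {
    "hostname": "leaf-01",
    "asn": 65101,
    "uplinks": [
        {"name": "Ethernet1", "peer_ip": "10.0.0.0", "peer_asn": 65000},
        {"name": "Ethernet2", "peer_ip": "10.0.0.2", "peer_asn": 65000},
    ],
}

CONFIG_TEMPLATE = Template(
    """hostname {{ hostname }}
router bgp {{ asn }}
{%- for link in uplinks %}
  neighbor {{ link.peer_ip }} remote-as {{ link.peer_asn }}
{%- endfor %}
"""
)

def render_config(intent: dict) -> str:
    """Render a device configuration from structured intent data.

    Because the output depends only on the intent, the same input always
    produces the same configuration, which is what makes deployments
    reproducible.
    """
    return CONFIG_TEMPLATE.render(**intent)

if __name__ == "__main__":
    print(render_config(leaf_intent))
```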

Requirements

  • 5+ years of experience as a Software Engineer (SWE), Site Reliability Engineer (SRE), or Network Reliability Engineer (NRE).
  • Experience in the implementation of production-scale networking projects.
  • Experience with on-call duties and incident response management.
  • Experience building and maintaining Software Defined Networks (SDN).
  • Experience with OpenStack, Neutron, or OVN.
  • Comfort on the Linux command line and an understanding of the Linux networking stack.
  • Experience with multi-data center networks and hybrid cloud networks.
  • Python programming experience.
  • Experience with configuration management tools like Ansible.
  • Experience with Git and with CI/CD tools for deployment.
  • Experience operating network environments with GitOps practices (a brief sketch follows this list).
  • Experience with application lifecycle and deployments on Kubernetes.
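
As a rough sketch of the GitOps practice referenced above (the intent schema, file layout, and checks are assumptions invented for illustration, not Lambda's pipeline), a CI job might validate version-controlled intent files before deployment automation applies anything to the network:

```python
# Hypothetical illustration only: the intent schema and file layout are
# invented for this sketch; they are not part of Lambda's stack.
import ipaddress
import sys

import yaml  # PyYAML

def validate_intent(intent: dict) -> list[str]:
    """Return a list of problems found in one device-intent document."""
    problems = []
    if not 64512 <= intent.get("asn", 0) <= 65534:
        problems.append("asn must be a private 16-bit ASN (64512-65534)")
    for link in intent.get("uplinks", []):
        try:
            ipaddress.ip_address(link["peer_ip"])
        except (KeyError, ValueError):
            problems.append(f"uplink {link.get('name', '?')} has an invalid peer_ip")
    return problems

if __name__ == "__main__":
    # In a GitOps workflow, CI runs this against every intent file changed
    # in a merge request; automation only applies configs merged to main.
    failures = 0
    for path in sys.argv[1:]:
        with open(path) as f:
            for problem in validate_intent(yaml.safe_load(f)):
                print(f"{path}: {problem}")
                failures += 1
    sys.exit(1 if failures else 0)
```

The design point is that Git stays the single source of truth: changes are proposed as merge requests, checks like the one above gate them, and reconciliation tooling applies only what has been merged.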

Nice To Have

  • Experience operating production-scale SDNs in a cloud context (e.g., implementing or operating infrastructure powering AWS VPC-like features).
  • Software development experience with C, Go, or Python.
  • Experience automating network configuration within public clouds using tools like Kubernetes, HELM, Terraform, or Ansible.
  • Deep understanding of the Linux networking stack and its interaction with network virtualization, SR-IOV, and DPDK.
  • Understanding of the SDN ecosystem (e.g., OVS, Neutron, VMware NSX, Cisco ACI or Nexus Fabric Controller, Arista CVP).
  • Experience with Spine and Leaf (Clos) network topology.
  • Experience and understanding of BGP EVPN VXLAN networks.
  • Experience building and maintaining multi-data center networks, SD-WAN, or DWDM.
  • Experience with Next-Generation Firewalls (NGFW).

Company Information

Founded in 2012, Lambda has grown to approximately 350 employees (as of 2024) and is expanding rapidly. We are experiencing extremely high demand for our systems and have demonstrated quarter-over-quarter and year-over-year profitability.

What We Offer:

  • Generous cash and equity compensation.
  • Health, dental, and vision coverage.

Our Investors Include: Andra Capital, SGW, Andrej Karpathy, ARK Invest, Fincadia Advisors, G Squared, In-Q-Tel (IQT), KHK & Partners, NVIDIA, Pegatron, Supermicro, Wistron, Wiwynn, US Innovative Technology, Gradient Ventures, Mercato Partners, SVB, 1517, Crescent Cove.

Our Mission: To be the world's top AI computing platform, making computation as effortless and ubiquitous as electricity. Our AI Cloud has been adopted by leading companies and research institutions worldwide. Our research papers have been accepted into top machine learning and graphics conferences, including NeurIPS, ICCV, SIGGRAPH, and TOG.

If you'd like to build the world's best deep learning cloud, join us.

Skills

Networking
Software Defined Networks (SDN)
OpenStack
Neutron
OVN
Linux command line
Linux networking stack
Network automation
Network monitoring
Python
High availability
Failover
Redundancy
Incident response
On-call experience
Multi-data center networks
Hybrid cloud networks

Lambda

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
Existential crisis in Hermes 3 model raises concerns about Lambda's AI model reliability.

Differentiation

Lambda offers cost-effective Inference API for AI model deployment without infrastructure maintenance.
Nvidia HGX H100 and Quantum-2 InfiniBand Clusters enhance Lambda's AI model training capabilities.
Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
Nvidia HGX H100 clusters provide competitive edge in high-performance AI computing.
Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
