Lambda

Senior Cloud Solutions Engineer

United States

Compensation: $249,600 – $374,400
Experience Level: Senior (5 to 8 years)
Job Type: Full Time
Visa: Unknown
Industries: Cloud Computing, AI/ML Services, GPU Hardware & Cloud Infrastructure

Solutions Architect, AI/ML Cloud

Salary: $249.6K – $374.4K OTE
Employment Type: Full-Time
Location Type: Remote

Position Overview

Lambda is the #1 GPU Cloud for ML/AI teams training, fine-tuning, and inferencing AI models. We provide engineers with an easy, secure, and affordable platform to build, test, and deploy AI products at scale. Our product portfolio includes on-prem GPU systems, hosted GPUs across public & private clouds, and managed inference services, serving governments, researchers, startups, and enterprises worldwide. If you're passionate about building the world's best deep learning cloud, join us.

Engineering at Lambda is responsible for building and scaling our cloud offering. Our scope includes the Lambda website, cloud APIs and systems, as well as internal tooling for system deployment, management, and maintenance.

Responsibilities

  • Advocate for Lambda’s Products:
    • Develop and maintain expertise in Lambda’s cloud products and services.
    • Demonstrate Lambda’s software and solutions to customers, partners, and staff.
    • Create field enablement materials for technical audiences, lead workshops, and support product advocacy efforts.
    • Provide technical feedback from customers to Lambda’s product and marketing teams.
  • Own the Technical Side of Lambda’s Sales Process:
    • Partner with Lambda account executives to drive customer adoption and ensure successful delivery.
    • Assess customer needs to deeply understand pain points, bottlenecks, and expected outcomes.
    • Recommend appropriate cloud services and configurations to design cohesive solutions that support customer applications and workflows.
    • Document proposals and designs in formats including, but not limited to, presentations, white papers, visuals, bills of materials, and rack elevations.
  • Demonstrate Expertise on Lambda’s Cloud Infrastructure:
    • Build structured and purposeful learning into your work routine.
    • Develop and support the internal Lambda community as a subject matter expert.
    • Be an expert at deploying AI/ML workloads on Lambda cloud.
    • Stay up to date on the latest deep learning trends and best practices, and experiment with them using internal tools and resources.
  • Develop High-Quality Processes and Documentation:
    • Reinforce Lambda’s culture.
    • Contribute positively throughout the organization.
    • Maintain a high level of agility and responsiveness.
    • Be hyper-focused on customer satisfaction.

Requirements

  • 8+ years of experience designing, deploying, and scaling cloud infrastructure.
  • 4+ years of experience as a solutions architect or in a consultative capacity supporting cloud infrastructure and services.
  • 3+ years of experience working with cloud-based AI/ML services.
  • Deep knowledge of the ML ecosystem, including common models, practical use cases, and supporting tools.
  • Experience building with modern infrastructure tools such as Docker, Kubernetes, Ansible, and Terraform.
  • Deep knowledge of cloud infrastructure, security, networking, and cost optimization techniques.
  • Experience coding in Python, C#, or similar programming languages.
  • Experience developing with NVIDIA’s GPUs.
  • Track record of leading complex technical projects with diverse stakeholders.
  • Demonstrated impact at an organizational/multi-departmental level.
  • Experience leading complex cloud deals, influencing C-level stakeholders, and mentoring junior SEs or architects.
  • Thrive in dynamic settings and embrace radical ownership of initiatives and outcomes.

Nice to Have

  • Experience taking AI/ML use cases from algorithm selection through training, tuning, and inference, including training pipeline design and build.
  • Participation in go-to-market (GTM) initiatives or product launches.
  • Experience working with LLM architectures.
  • Experience working with RESTful APIs and general service-oriented architecture.

About Lambda

Founded in 2012, Lambda has approximately 350 employees (as of 2024) and is growing fast. We offer generous cash &

Skills

Cloud Solutions
AWS/Azure/GCP
AI/ML workloads
GPU computing
System deployment
Customer engagement
Technical presentations
Solution architecture
Deep learning trends
Infrastructure management

Lambda

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
Reported "existential crisis" behavior in the Hermes 3 model raises concerns about the reliability of AI models associated with Lambda.

Differentiation

Lambda offers cost-effective Inference API for AI model deployment without infrastructure maintenance.
Nvidia HGX H100 and Quantum-2 InfiniBand Clusters enhance Lambda's AI model training capabilities.
Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
Nvidia HGX H100 clusters provide competitive edge in high-performance AI computing.
Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
