Lambda

Technical Program Manager - San Jose

San Jose, California, United States

Compensation: $176,500 – $254,000
Experience Level: Mid-level (3 to 4 years), Senior (5 to 8 years)
Job Type: Full Time
Visa: Unknown
Industries: Enterprise Software, AI & Machine Learning

Requirements

Candidates must have 7+ years of experience in program or project management for complex product development programs. A thorough understanding of agile and waterfall management techniques is required, along with a technical background, typically demonstrated by an engineering degree or equivalent technical experience. Excellent leadership and organizational skills are necessary, as is the ability to structure clear internal and external communication. Familiarity with a range of project management tools and the ability to create structure while navigating ambiguity are also essential.

Responsibilities

The Technical Program Manager will manage large-scale deployments of GPU clusters in datacenter colocation facilities across the country. They will work closely with data center engineering and operations teams to ensure infrastructure requirements are properly deployed, and will drive multiple simultaneous projects forward while assessing risks and monitoring tasks. The role involves proactively managing dependencies, anticipating and resolving execution issues, and partnering with cross-functional stakeholders to ensure that products are built correctly, tested properly, deployed on time, and meet stated SLAs. The Technical Program Manager will also manage communication of progress and status with internal stakeholders and customer groups, interact with stakeholders at all levels to resolve technical and scheduling issues, build strong partnerships across Lambda, and contribute to the development of new and existing business opportunities.

Skills

Program Management
Project Management
Agile
Waterfall
Risk Assessment
Stakeholder Management
Technical Communication
Dependency Management

Lambda

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, with a focus on large language models and generative AI. Its main product, the AI Developer Cloud, uses NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies that need extensive GPU deployments, offering competitive pricing and infrastructure ownership options through its Lambda Echelon service. It also provides Lambda Stack, a software solution used by over 50,000 machine learning teams that simplifies the installation and management of AI-related tools. Lambda Labs' goal is to support AI development by providing accessible and efficient cloud GPU services.
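As a rough illustration of the on-demand GPU access and hourly pricing described above, the sketch below queries Lambda's public Cloud API for available instance types and their prices. This is not an official example from the posting: the base URL, bearer-token authentication, the LAMBDA_API_KEY environment variable name, and the response fields (data, instance_type, price_cents_per_hour) are assumptions based on Lambda's documentation at the time of writing and should be verified against the current docs.

```python
# Minimal sketch (assumptions, not an official example): list Lambda's
# on-demand instance types and hourly prices via the public Cloud API.
import os

import requests

API_KEY = os.environ["LAMBDA_API_KEY"]  # hypothetical env var holding your API key
BASE_URL = "https://cloud.lambdalabs.com/api/v1"  # assumed base URL

resp = requests.get(
    f"{BASE_URL}/instance-types",
    headers={"Authorization": f"Bearer {API_KEY}"},  # assumed bearer-token auth
    timeout=30,
)
resp.raise_for_status()

# Iterate over the returned instance types and print an hourly price.
# Field names such as "data", "instance_type", and "price_cents_per_hour"
# are assumptions and may differ from the live API schema.
for name, info in resp.json().get("data", {}).items():
    instance_type = info.get("instance_type", {})
    price_cents = instance_type.get("price_cents_per_hour")
    if price_cents is not None:
        print(f"{name}: ${price_cents / 100:.2f}/hr")
```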

Key Metrics

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
Reported "existential crisis" outputs from the Hermes 3 model raise concerns about Lambda's AI model reliability.

Differentiation

Lambda offers a cost-effective Inference API for AI model deployment without infrastructure maintenance.
NVIDIA HGX H100 and Quantum-2 InfiniBand clusters enhance Lambda's AI model training capabilities.
Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
NVIDIA HGX H100 clusters provide a competitive edge in high-performance AI computing.
Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
