Staff Product Manager - Cloud Storage at Lambda

San Francisco, California, United States

Compensation: $291,000 – $552,000
Experience Level: Senior (5 to 8 years), Expert & Leadership (9+ years)
Job Type: Full Time
Visa Sponsorship: Unknown
Industries: AI, Cloud Computing, Technology

Requirements

  • Bachelor’s degree or foreign equivalent in Computer Science, Electrical Engineering, Computer Engineering, or a closely related technical field
  • Seven (7) years of progressive, post-baccalaureate experience in product management, including at least four (4) years focused specifically on cloud-scale storage or infrastructure platforms
  • Proven expertise in designing and delivering large-scale storage platforms, including block, file, and object architectures, for performance-critical workloads
  • Proven expertise in evaluating and selecting storage technologies through benchmarking of throughput, IOPS, latency, durability, and total cost of ownership
  • Proven expertise in architecting and managing storage solutions …

Responsibilities

  • Define and execute the long-term vision and strategic roadmap for Lambda’s storage platform across cloud and hybrid environments, ensuring it delivers uncompromising performance, scalability, durability, and cost efficiency for the world’s largest AI workloads
  • Lead the evaluation, selection, and seamless integration of advanced storage technologies, spanning block, file, and object architectures, using rigorous benchmarking to optimize IOPS, throughput, latency, and total cost of ownership
  • Translate complex infrastructure capabilities into clear product requirements, precise service-level objectives (SLOs), and measurable performance benchmarks that align with demanding AI and HPC use cases
  • Architect and implement intelligent data tiering strategies (hot, warm, cold) to maximize performance where it matters and drive significant cost savings at scale
  • Collaborate with infrastructure and operations leaders to forecast multi-year capacity growth, design for petabyte-to-exabyte scalability, and ensure consistent performance under peak workloads
  • Define and enforce lifecycle management, replication, and disaster recovery policies that guarantee data integrity, compliance, and near-zero downtime
  • Own the observability and optimization roadmap for the storage platform, deploying advanced telemetry, monitoring, and analytics to proactively detect and remediate bottlenecks before they impact customers
  • Partner closely with engineering to drive continuous performance tuning, eliminate systemic inefficiencies, and ensure the platform remains ahead of industry benchmarks

Skills

Product Management
Cloud Storage
Object Storage
Block Storage
File Systems
Software-Defined Storage
Cloud-Native Storage
AI Infrastructure
Scalability
Storage Architecture

Lambda

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt Financing
Industries: AI & Machine Learning
Employees: 201-500

Risks

Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
Reports of "existential" outputs from the Hermes 3 model raise concerns about the reliability of Lambda's AI model work.

Differentiation

Lambda offers a cost-effective Inference API for deploying AI models without maintaining infrastructure.
NVIDIA HGX H100 and Quantum-2 InfiniBand clusters strengthen Lambda's AI model training capabilities.
Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
NVIDIA HGX H100 clusters provide a competitive edge in high-performance AI computing.
Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
