[Remote] Manager, Support Response at Lambda

United States

Compensation: $160,000 – $240,000
Experience Level: Expert & Leadership (9+ years)
Job Type: Full Time
Visa: Unknown
Industries: AI, Deep Learning, Cloud Computing

Requirements

Understanding of AI development, GPU computing, and cloud computing technologies
2+ years of experience in a technical team leadership or managerial role
Ability to evaluate and optimize support processes and workflows
Experience supporting customers in a Linux environment
Experience building and leading a 24/7 technical support organization or NOC
Experience with monitoring and alerting systems
Flexibility to work nights and weekends as needed, including maintaining an on-call schedule

Responsibilities

Collaborate with the Director of Customer Support to develop and coach the Support Response team, ensuring outstanding customer experiences
Ensure 24/7 monitoring and support of all customer tickets, platforms, and applications
Achieve first-response SLAs for customer cases
Develop and implement standard operating procedures for issue triage, resolution, and escalation
Coordinate with other teams for efficient problem resolution
Participate in and lead training sessions, address roadblocks, and ensure departmental policies reflect best practices
Review, create, and distribute support metrics and assist in developing workflows
Manage 24/7 schedules, participate in hiring, and conduct performance reviews
Actively engage in resolving customer cases

Skills

Customer Support
Technical Support
Team Leadership
Problem Solving
SLA Management
Process Improvement
Customer Satisfaction
Training
Monitoring
Escalation

Lambda

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
Reported "existential crisis" outputs from the Hermes 3 model raise concerns about Lambda's AI model reliability.

Differentiation

Lambda offers a cost-effective Inference API for AI model deployment without infrastructure maintenance.
NVIDIA HGX H100 and Quantum-2 InfiniBand clusters enhance Lambda's AI model training capabilities.
Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
NVIDIA HGX H100 clusters provide a competitive edge in high-performance AI computing.
Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
