Data Center Manager - Quebec, Canada at Lambda

Québec City, Quebec, Canada

Compensation: CA$104,700 – CA$157,300
Experience Level: Senior (5 to 8 years)
Job Type: Full Time
Visa: Unknown
Industries: Data Centers, AI Infrastructure, Cloud Computing

Requirements

  • 5+ years experience with critical infrastructure systems supporting data centers, such as power distribution, air flow management, environmental monitoring, capacity planning, DCIM software, structured cabling, and cable management
  • Basic understanding of Linux administration
  • Experience in setting up networking appliances (Ethernet and InfiniBand) across multiple data center locations
  • Attention to detail and the ability to follow instructions
  • Action-oriented, with a strong willingness to learn
  • Desire to mentor other team members and help them reach their full potential
  • Presence in the Québec City Data Center 5 days per week (on-site, full-time)

Responsibilities

  • Manage and lead a team of data center technicians
  • Maintain high availability, reliability, and security in the data center environment
  • Ensure new server, storage and network infrastructure is properly racked, labeled, cabled, and configured
  • Troubleshoot hardware and software issues in some of the world’s most advanced systems
  • Document data center layout and network topology in DCIM software
  • Work with supply chain & manufacturing teams to ensure timely deployment of systems and project plans for large-scale deployments
  • Assess current and future state data center requirements based on growth plans and technology trends
  • Manage a parts depot inventory and track equipment through the delivery-store-stage-deploy-handoff process in each of our data centers
  • Create installation standards and documentation for placement, labeling, and cabling to drive consistency and discoverability across all data centers
  • Oversee deployments and day-to-day operations of the data center
  • Maintain uptime for assets and infrastructure, and ensure customer SLAs are met
  • Participate in technical discussions and provide expertise on data center integration and deployment strategies
  • Understand the power, cooling, and cabling requirements of the data center space needed to support high-performance infrastructure
  • Work closely with cross-functional teams, including Hardware Engineering, Software Engineering, Supply Chain, Customer Experience, and Sales, to align data center solutions with business goals
  • Ensure the data center complies with Lambda’s standards and policies

Skills

Data Center Management
Team Leadership
Hardware Troubleshooting
Server Racking
Cabling
DCIM Software
Inventory Management
Network Topology
Power Management
Cooling Systems

Lambda

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
Reported "existential crisis" behavior in the Hermes 3 model raises concerns about Lambda's AI model reliability.

Differentiation

Lambda offers cost-effective Inference API for AI model deployment without infrastructure maintenance.
NVIDIA HGX H100 and Quantum-2 InfiniBand clusters enhance Lambda's AI model training capabilities.
Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
NVIDIA HGX H100 clusters provide a competitive edge in high-performance AI computing.
Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
