Senior Data Center Operations Engineer - Quebec, Canada at Lambda

Québec City, Quebec, Canada

Compensation: CA$83,200 – CA$124,800
Experience Level: Senior (5 to 8 years)
Job Type: Full Time
Visa: Unknown
Industries: Data Centers, AI Cloud, Infrastructure

Requirements

  • Strong experience with critical infrastructure systems supporting data centers, such as power distribution, airflow management, environmental monitoring, capacity planning, DCIM software, structured cabling, and cable management
  • Familiarity with carrier DIA circuit testing and turn-up, fiber testing, and troubleshooting
  • Basic knowledge of cable optics and their different use cases
  • Solid understanding of single-phase and three-phase power theory
  • Understanding of PDU load balancing and why it is important (see the worked sketch after this list)
  • Familiarity with multiple cable media types and their uses
  • Knowledge of cold aisle and hot aisle containment
  • Solid understanding of server hardware and the boot process
  • Ability to structure, collaborate on, and iteratively improve complex maintenance MOPs
  • Working with product management, support, and other teams to align operational capabilities with company goals
  • Translating business priorities into technical and operational requirements
  • Supporting cross-functional projects where infrastructure plays a critical role
  • Action-oriented, with a willingness to train junior staff on best practices
  • Willingness to travel for bring-up of new data center locations as needed
  • Presence in Québec City Data Center 5 days per week
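
As a concrete illustration of the single/three-phase power and PDU balancing items above, here is a minimal Python sketch of a per-phase load check on a three-phase PDU. The voltage, breaker rating, derating factor, and per-phase currents are assumed values for illustration only, not facility data.

  # Minimal sketch of a three-phase PDU load-balance check.
  # All circuit values below are illustrative assumptions, not real facility data.

  PHASE_VOLTAGE = 208      # volts, assumed line-to-line voltage
  BREAKER_AMPS = 30        # assumed branch breaker rating
  DERATE = 0.8             # typical 80% continuous-load derating

  # Hypothetical measured current draw per phase, in amps
  phase_amps = {"L1": 18.2, "L2": 24.7, "L3": 12.9}

  limit = BREAKER_AMPS * DERATE
  for phase, amps in phase_amps.items():
      status = "OK" if amps <= limit else "OVERLOADED"
      print(f"{phase}: {amps:.1f} A of {limit:.1f} A allowed -> {status}")

  # Imbalance: deviation of the highest-loaded phase from the average.
  # Keeping this low spreads heat and electrical headroom evenly across phases.
  avg = sum(phase_amps.values()) / len(phase_amps)
  imbalance = (max(phase_amps.values()) - avg) / avg * 100
  print(f"Phase imbalance: {imbalance:.1f}%")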

Responsibilities

  • Ensure new server, storage and network infrastructure is properly racked, labeled, cabled, and configured
  • Troubleshoot hardware and software issues in some of the world’s most advanced GPU and networking systems
  • Document and update data center layout and network topology in DCIM software
  • Work with supply chain & manufacturing teams to ensure timely deployment of systems and project plans for large-scale deployments
  • Manage a parts depot inventory and track equipment through the delivery-store-stage-deploy-handoff process in each of our data centers (see the sketch after this list)
  • Partner with HW Support teams to ensure that data center hardware incidents involving higher-level troubleshooting challenges are resolved and reported on, and that solutions are disseminated to the larger operations organization
  • Work with RMA team to ensure faulty parts are returned and replacements are ordered
  • Follow installation standards and documentation for placement, labeling, and cabling to drive consistency and discoverability across all data centers
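
To make the delivery-store-stage-deploy-handoff tracking above concrete, here is a minimal Python sketch of a part moving through those stages. The Part data model and the serial/model values are hypothetical; in practice this lifecycle would live in DCIM or an asset database rather than a script.

  # Minimal sketch of tracking a part through the depot lifecycle described above.
  # The stage names mirror the posting; the data model itself is an assumption.

  from dataclasses import dataclass, field

  STAGES = ["delivery", "store", "stage", "deploy", "handoff"]

  @dataclass
  class Part:
      serial: str
      model: str
      stage: str = "delivery"
      history: list = field(default_factory=list)

      def advance(self) -> None:
          """Move the part to the next lifecycle stage, recording the transition."""
          i = STAGES.index(self.stage)
          if i == len(STAGES) - 1:
              raise ValueError(f"{self.serial} already handed off")
          self.history.append(self.stage)
          self.stage = STAGES[i + 1]

  # Hypothetical usage: receive a GPU baseboard and walk it to handoff.
  part = Part(serial="SN-0001", model="HGX H100 baseboard")
  while part.stage != "handoff":
      part.advance()
  print(part.serial, "->", part.stage, "via", " -> ".join(part.history))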

Nice to Have

  • 3+ years of experience with critical infrastructure systems supporting data centers, such as power distribution, airflow management, environmental monitoring, capacity planning, DCIM software, structured cabling, and cable management
  • Experience with or knowledge of network topology and configurations, including 400Gb InfiniBand architectures
  • Experience with or knowledge of DDP or SCM cluster storage systems
  • 3+ years working with and reporting from ticketing systems such as JIRA and Zendesk
  • Advanced experience with Linux administration
  • Experience with high-performance compute GPU systems (air- or water-cooled), especially NVIDIA NVL72 (see the health-check sketch after this list)
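
As a rough illustration of the Linux administration and GPU systems items above, here is a minimal sketch that snapshots GPU health by parsing nvidia-smi's CSV output. It assumes an NVIDIA driver is installed on the node; the exact query fields available can vary by driver version.

  # Minimal sketch of a Linux-side GPU health snapshot using nvidia-smi.
  # Assumes an NVIDIA driver is present; query fields vary by driver version.

  import subprocess

  QUERY = "index,name,temperature.gpu,power.draw,utilization.gpu"

  def gpu_snapshot() -> list[dict]:
      """Return one dict per GPU parsed from nvidia-smi's CSV output."""
      out = subprocess.run(
          ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
          capture_output=True, text=True, check=True,
      ).stdout
      fields = QUERY.split(",")
      return [dict(zip(fields, line.split(", "))) for line in out.strip().splitlines()]

  if __name__ == "__main__":
      for gpu in gpu_snapshot():
          print(f"GPU {gpu['index']} ({gpu['name']}): "
                f"{gpu['temperature.gpu']} C, {gpu['power.draw']} W, "
                f"{gpu['utilization.gpu']} % util")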

Skills

Key technologies and capabilities for this role

DCIM software, structured cabling, cable management, power distribution, PDU balancing, carrier DIA circuits, fiber testing, environmental monitoring, capacity planning, server racking, hardware troubleshooting, GPU systems, network topology, inventory management, RMA process

Questions & Answers

Common questions about this position

What is the salary range for this Senior Data Center Operations Engineer position?

The salary range is CA$83,200 – CA$124,800.

Is this role remote or onsite, and what are the location requirements?

This is an onsite position requiring presence in the Québec City Data Center 5 days per week.

What key skills and experiences are required for this role?

Candidates need strong experience with critical infrastructure systems like power distribution, air flow management, environmental monitoring, capacity planning, DCIM software, structured cabling, and cable management. Additional requirements include familiarity with carrier DIA circuits, fiber testing, cable optics, power theories, PDU balancing, server hardware, boot processes, and developing maintenance MOPs.

What is the company culture like at Lambda?

This information is not specified in the job description.

What makes a strong candidate for this position?

Strong candidates have 3+ years of experience with data center critical infrastructure systems and are action-oriented with a willingness to train junior staff, collaborate across teams, and travel as needed for new data center setups.

Lambda

Cloud-based GPU services for AI training

About Lambda

Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.

Headquarters: San Jose, California
Year Founded: 2012
Total Funding: $372.6M
Company Stage: Debt
Industries: AI & Machine Learning
Employees: 201-500

Risks

Nebius' holistic cloud platform challenges Lambda's market share in AI infrastructure.
AWS's 896-core instance may draw customers seeking high-performance cloud solutions.
Reports of "existential crisis" behavior in the Hermes 3 model raise concerns about Lambda's AI model reliability.

Differentiation

Lambda offers cost-effective Inference API for AI model deployment without infrastructure maintenance.
Nvidia HGX H100 and Quantum-2 InfiniBand Clusters enhance Lambda's AI model training capabilities.
Lambda's Hermes 3 collaboration showcases advanced AI model development expertise.

Upsides

Inference API launch attracts enterprises seeking low-cost AI deployment solutions.
Nvidia HGX H100 clusters provide competitive edge in high-performance AI computing.
Strong AI cloud service growth indicates rising demand for Lambda's GPU offerings.
