Software Engineer, ML Infrastructure
Lambda Labs - Full Time
- Junior (1 to 2 years)
Candidates should possess deep expertise in Linux and systems-level engineering, networking, software engineering, distributed systems, storage architecture, and machine learning research. A track record of shipping impactful products and leading large, complex projects is essential. Candidates must thrive in ambiguity, stay ahead of the fast-moving AI/ML ecosystem, and value a high-performance, low-ego team environment.
Engineers will be expected to help build the world's best deep learning cloud: making decisions quickly, aligning teams effectively, and owning cross-team initiatives from conception to production. They should prioritize speed and execution, deliver measurable business results, and foster collaboration and open communication within the team.
Cloud-based GPU services for AI training
Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Their main product, the AI Developer Cloud, utilizes NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient and cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies needing extensive GPU deployments, offering competitive pricing and infrastructure ownership options through their Lambda Echelon service. Additionally, they provide Lambda Stack, a software solution that simplifies the installation and management of AI-related tools for over 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.