Research Internship (Fall 2025)
Cohere - Full Time, Internship
Candidates must be Ph.D. students in Computer Science or a related field with a focus on machine learning. Experience publishing in top-tier machine learning conferences such as NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, or SIGGRAPH is required. Proficiency in PyTorch or a similar framework and strong communication skills are essential. Contributions to open-source projects and experience with generative AI, dataset pipeline design, and training foundation models are preferred.
The Machine Learning Research Intern will develop datasets to support efficient model training, build and improve generative AI models, and set new standards for evaluating generative AI performance. Interns will leverage Lambda's compute resources to produce impactful research through publications and open-source contributions.
Cloud-based GPU services for AI training
Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Its main product, the AI Developer Cloud, uses NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient, cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies that need large-scale GPU deployments, offering competitive pricing and infrastructure ownership options through its Lambda Echelon service. It also provides Lambda Stack, a software solution that simplifies installing and managing AI tools, used by more than 50,000 machine learning teams. The goal of Lambda Labs is to support AI development by providing accessible and efficient cloud GPU services.
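As a rough illustration of the quoted on-demand rate, the sketch below estimates a training job's GPU cost; the $1.99/hour H100 price comes from the listing above, while the workload size (GPU count and hours) is a hypothetical example, not a figure from the source.

```python
# Quoted on-demand rate from the listing: $1.99 per hour per NVIDIA H100 instance.
H100_RATE_USD_PER_HOUR = 1.99

def gpu_cost(num_gpus: int, hours: float, rate: float = H100_RATE_USD_PER_HOUR) -> float:
    """Total on-demand cost in USD for `num_gpus` GPUs running for `hours`."""
    return round(num_gpus * hours * rate, 2)

# Hypothetical example: an 8-GPU node running a 24-hour training job.
print(gpu_cost(8, 24))  # → 382.08
```

Actual bills would also depend on storage, networking, and any reserved-capacity discounts, which the listing does not price.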