LLM Data Researcher
Turing - Full Time
- Junior (1 to 2 years)
Candidates should hold a Bachelor's degree in Computer Science, Electrical Engineering, or a related discipline; a Master's or Ph.D. is preferred. They should have hands-on experience with large language models (LLMs) and deep learning architectures, with a strong understanding of model inference and optimization techniques. Experience landing contributions to major LLM training runs is highly desirable, as is the ability to thoroughly evaluate and improve deep learning architectures in a self-directed fashion.
As a Researcher (Engineer/Scientist), Training Architecture, you will:
- Design, prototype, and scale up new architectures to improve model intelligence
- Execute and analyze experiments autonomously and collaboratively
- Study, debug, and optimize both model quality and computational performance
- Contribute to training and inference infrastructure
- Investigate and implement transformer modifications for efficiency
- Ensure the safe deployment of LLMs in the real world
- Collaborate with the team to produce model artifacts used by the rest of the company
Develops safe and beneficial AI technologies
OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company builds advanced AI models capable of performing a wide range of tasks, from automating processes to enhancing creativity. OpenAI's products, such as Sora, allow users to generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped-profit model, which limits the returns it can distribute and directs excess earnings toward maximizing the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) serves all of humanity.