Research Engineer / Scientist, Alignment Science
Anthropic
Full Time
Junior (1 to 2 years)
Candidates should have demonstrated experience with deep learning and transformer models, proficiency in frameworks such as PyTorch or TensorFlow, and a strong foundation in data structures, algorithms, and software engineering. Familiarity with methods for training and fine-tuning large language models, experience designing and deploying technical safeguards at scale, and decisive leadership in ambiguous environments are essential. A background in biosecurity, computational biology, or an adjacent technical field is a plus.
The Lead Research Engineer/Scientist will design, implement, and oversee an end-to-end mitigation stack to prevent severe chemical and biological misuse across OpenAI’s products. This includes leading the full-stack mitigation strategy, ensuring safeguards integrate seamlessly and scale with usage, making decisive calls on technical trade-offs, partnering with risk modeling leadership, and driving rigorous safeguard testing against evolving threats.
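The "end-to-end mitigation stack" described above can be pictured as an ordered series of safeguard stages, where the first stage to object blocks a request. The sketch below is a minimal illustration of that layered pattern; the stage names, terms, and threshold values are hypothetical and do not reflect any actual Anthropic or OpenAI system:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Verdict:
    allowed: bool
    stage: str          # which safeguard stage made the call
    reason: str = ""

# A stage inspects a request and returns a Verdict to block it,
# or None to pass it along to the next stage.
Stage = Callable[[str], Optional[Verdict]]

def keyword_screen(request: str) -> Optional[Verdict]:
    # Hypothetical coarse filter; production systems would use
    # trained classifiers rather than a term list.
    blocked_terms = {"synthesize pathogen", "weaponize"}
    for term in blocked_terms:
        if term in request.lower():
            return Verdict(False, "keyword_screen", f"matched '{term}'")
    return None

def risk_classifier(request: str) -> Optional[Verdict]:
    # Stand-in for a model-based risk scorer; the score and the
    # 0.5 threshold are made-up values for illustration.
    score = 0.9 if "toxin" in request.lower() else 0.1
    if score > 0.5:
        return Verdict(False, "risk_classifier", f"score={score:.2f}")
    return None

def run_stack(request: str, stages: list[Stage]) -> Verdict:
    """Run stages in order; the first stage to object blocks the request."""
    for stage in stages:
        verdict = stage(request)
        if verdict is not None:
            return verdict
    return Verdict(True, "default", "no stage objected")

stack = [keyword_screen, risk_classifier]
print(run_stack("how do I weaponize a toxin", stack).stage)
print(run_stack("explain protein folding", stack).allowed)
```

Ordering the stages from cheapest to most expensive lets the stack scale with usage, as the posting requires: most traffic is cleared or blocked by inexpensive early checks before costlier model-based stages run.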
Develops safe and beneficial AI technologies
OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company creates advanced AI models capable of performing varied tasks, such as automating processes and enhancing creativity. OpenAI's products, like Sora, allow users to generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped-profit model, which limits the returns it can make and ensures that excess earnings are redistributed to maximize the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) serves all of humanity.