Research Engineer / Scientist, Alignment Science
Anthropic
Full Time
Junior (1 to 2 years)
Common questions about this position
The salary range is $245K - $440K.
The position is hybrid.
Candidates need a Ph.D. or equivalent research experience in computer science, machine learning, or a related field; 2+ years of research engineering experience; proficiency in Python or similar languages; and a background in AI safety or mechanistic interpretability.
The team works in a collaborative, curiosity-driven style and focuses on studying the internal representations of deep learning models to understand their behavior and ensure AI safety.
Strong candidates are excited about Anthropic's mission of building safe and beneficial AI, have deep experience in AI safety or mechanistic interpretability, hold a Ph.D. or equivalent research background, and thrive in collaborative environments working with large-scale AI systems.
Develops safe and beneficial AI technologies
Anthropic is an AI safety and research company that develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company builds advanced AI models, including the Claude family of assistants, which help users with tasks such as writing, analysis, and coding. Unlike many competitors, Anthropic is structured as a public benefit corporation, which commits it to balancing commercial interests with its stated mission. This emphasis on safety research and responsible deployment is central to its goal of ensuring that increasingly capable AI systems remain safe and beneficial for all of humanity.