Research Engineer / Scientist, Alignment Science
Anthropic · Full Time
Junior (1 to 2 years)
Key technologies and capabilities for this role
Common questions about this position
The salary range is $200K - $370K.
Required skills include experience in ML research engineering, ML observability and monitoring, and building large language model-enabled applications, along with a red-teaming mindset and the ability to scope and deliver projects end-to-end in a fast-paced environment.
The Safety Systems team works in a dynamic, fast-paced research environment focused on AI safety, trust, and transparency, and on preparing for catastrophic risks in order to promote positive change.
Strong candidates are passionate about AI safety risks, bring a red-teaming mindset and experience in ML research engineering or related technical domains, and can deliver projects end-to-end in a fast-paced setting; prior red-teaming experience or an understanding of AI's societal impacts is a plus.