Actuary, Analytics/Forecasting
Humana
Full Time
Mid-level (3 to 4 years), Senior (5 to 8 years)
Key technologies and capabilities for this role
Common questions about this position
The listed salary for this position is $325K.
This information is not specified in the job description.
Candidates should understand the risks posed by frontier AI systems and the AI alignment literature; have deep experience in threat modeling, risk analysis, or adversarial thinking; know how AI evaluations work; enjoy multidisciplinary work across technical and policy domains; communicate complex risks clearly; and think in systems, anticipating cascading risks.
The Safety Systems team drives OpenAI's commitment to AI safety, fostering a culture of trust and transparency while ensuring safe deployment of models to benefit society.
Strong candidates have experience threat modeling across AI risk areas such as misuse and misalignment, can connect AI evaluations to safeguards, pair effectively with both technical and policy teams, and explain complex risk rationales compellingly.
Develops safe and beneficial AI technologies
OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company creates advanced AI models capable of performing a wide range of tasks, from automating processes to enhancing creativity. Its products, such as Sora, let users generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped-profit model, which limits the returns investors can earn and directs excess earnings toward maximizing the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) serves all of humanity.