Threat Modeler Lead at OpenAI

San Francisco, California, United States

Compensation: Not Specified
Experience Level: Senior (5 to 8 years), Expert & Leadership (9+ years)
Job Type: Full Time
Visa: Unknown
Industries: AI, Technology

Requirements

  • Understand risks from frontier AI systems and have a strong grasp of AI alignment literature
  • Bring deep experience in threat modeling, risk analysis, or adversarial thinking (e.g., security, national security, or safety)
  • Know how AI evaluations work and connect eval results to both capability testing and safeguard sufficiency
  • Enjoy working across technical and policy domains to drive rigorous, multidisciplinary risk assessments
  • Communicate complex risks clearly and compellingly to both technical and non-technical audiences
  • Think in systems and naturally anticipate second-order and cascading risks

Responsibilities

  • Develop and maintain comprehensive threat models across all misuse areas (bio, cyber, attack planning, etc.)
  • Develop plausible and convincing threat models across loss of control, self-improvement, and other possible alignment risks from frontier AI systems
  • Forecast risks by combining technical foresight, adversarial simulation, and emerging trends
  • Pair closely with technical partners on capability evaluations to ensure these map to and cover the gamut of severe risks differentially enabled by frontier AI systems
  • Pair closely with Bio and Cyber Leads to size the residual risk that remains after designed safeguards and translate threat models into actionable mitigation designs
  • Act as the thought partner and explainer of “why” and “when” for high-investment mitigation efforts—helping stakeholders understand the rationale behind prioritization
  • Serve as the central node connecting technical, governance, and policy perspectives on the prioritization, focus, and rationale of our approach to misuse risk

Skills

Key technologies and capabilities for this role

Threat Modeling, Risk Forecasting, AI Safety, Capability Evaluations, Adversarial Simulation, Bio Risks, Cyber Risks, Alignment Risks, Safeguards Design, Technical Foresight

Questions & Answers

Common questions about this position

What is the salary for the Threat Modeler Lead position?

The salary is $325K.

Is this a remote or on-site role?

This information is not specified in the job description.

What skills are needed to thrive in this role?

Candidates should understand risks from frontier AI systems and AI alignment literature, have deep experience in threat modeling, risk analysis, or adversarial thinking, know how AI evaluations work, enjoy multidisciplinary work across technical and policy domains, communicate complex risks clearly, and think in systems anticipating cascading risks.

What is the team culture like at OpenAI for this role?

The Safety Systems team drives OpenAI's commitment to AI safety, fostering a culture of trust and transparency while ensuring safe deployment of models to benefit society.

What makes a strong candidate for the Threat Modeler Lead role?

Strong candidates have experience in threat modeling across AI risks like misuse and alignment, can connect AI evaluations to safeguards, pair effectively with technical and policy teams, and explain complex risk rationales compellingly.

OpenAI

Develops safe and beneficial AI technologies

About OpenAI

OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company creates advanced AI models capable of performing various tasks, such as automating processes and enhancing creativity. OpenAI's products, like Sora, allow users to generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped profit model, which limits the profits it can make and ensures that excess earnings are redistributed to maximize the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) serves all of humanity.

Headquarters: San Francisco, California
Year Founded: 2015
Total Funding: $18,433.2M
Company Stage: Late VC
Industries: AI & Machine Learning
Employees: 1,001-5,000

Benefits

  • Health insurance
  • Dental and vision insurance
  • Flexible spending account for healthcare and dependent care
  • Mental healthcare service
  • Fertility treatment coverage
  • 401(k) with generous matching
  • 20-week paid parental leave
  • Life insurance (complimentary)
  • AD&D insurance (complimentary)
  • Short-term/long-term disability insurance (complimentary)
  • Optional buy-up life insurance
  • Flexible work hours and unlimited paid time off (we encourage 4+ weeks per year)
  • Annual learning & development stipend
  • Regular team happy hours and outings
  • Daily catered lunch and dinner
  • Travel to domestic conferences

Risks

  • Elon Musk's legal battle may pose financial and reputational challenges for OpenAI.
  • Customizable ChatGPT personas could lead to privacy and ethical concerns.
  • Competitors like Anthropic raising capital may intensify market competition.

Differentiation

  • OpenAI's capped profit model prioritizes ethical AI development over unlimited profit.
  • OpenAI's AI models, like Sora, offer unique video creation from text descriptions.
  • OpenAI's focus on AGI aims to create AI systems smarter than humans.

Upsides

  • OpenAI's $6.6 billion funding boosts its AI research and computational capacity.
  • Customizable ChatGPT personas enhance user engagement and satisfaction.
  • OpenAI's 'Operator' AI agent could revolutionize workforce automation by 2025.
