Senior Researcher — Safety Systems, Misalignment Research at OpenAI

New York, New York, United States

Compensation: $380,000 – $460,000
Experience Level: Senior (5 to 8 years)
Job Type: Full Time
Visa: Unknown
Industries: Artificial Intelligence, Technology

Requirements

  • Passionate about red-teaming and AI safety; thinks about these problems extensively and is aligned with OpenAI's mission to build safe, universally beneficial AGI and with the OpenAI Charter
  • 4+ years of experience in AI red-teaming, security research, adversarial ML, or related safety fields
  • Strong research track record, including publications, open-source projects, or high-impact internal work, demonstrating creativity in uncovering and exploiting system weaknesses
  • Fluent in modern ML/AI techniques and comfortable hacking on large-scale codebases and evaluation infrastructure
  • Ability to communicate clearly with both technical and non-technical audiences, translating complex findings into actionable recommendations
  • Enjoys collaboration and can drive cross-functional projects spanning research, engineering, and policy
  • Holds a Ph.D., master’s degree, or equivalent experience in computer science, machine learning, security, or a related discipline (nice to have but not required)

Responsibilities

  • Design and implement worst-case demonstrations that make AGI alignment risks concrete for stakeholders, focusing on high-stakes failure modes such as deceptive behavior, scheming, reward hacking, deception in reasoning, and power-seeking
  • Develop adversarial and system-level evaluations grounded in those demonstrations, driving adoption across OpenAI
  • Create tools and infrastructure to scale automated red-teaming and stress testing
  • Conduct research on failure modes of alignment techniques and propose improvements
  • Publish influential internal or external papers that shift safety strategy or industry practice, aiming to concretely reduce existential AI risk
  • Partner with engineering, research, policy, and legal teams to integrate findings into product safeguards and governance processes
  • Mentor engineers and researchers, fostering a culture of rigorous, impact-oriented safety work

Skills

Key technologies and capabilities for this role

red-teaming, AI safety, adversarial evaluations, misalignment research, deceptive behavior, scheming, reward hacking, power-seeking, stress testing, alignment research

Questions & Answers

Common questions about this position

What is the salary range for this Senior Researcher position?

The salary range is $380,000 – $460,000.

Is this role remote or hybrid, and what's the location policy?

The position is hybrid.

What key skills or experiences are required for this role?

The role requires passion for red-teaming and AI safety, ability to design and implement worst-case demonstrations, develop adversarial evaluations, conduct research on alignment failure modes, and publish influential papers.

What is the team and culture like at Safety Systems?

Safety Systems is at the forefront of OpenAI's mission to build safe AGI. The misalignment research team focuses on identifying and understanding AGI risks through worst-case demonstrations, adversarial evaluations, stress testing, and alignment research, and fosters a culture of rigorous, impact-oriented safety work.

What makes someone a strong candidate for this Senior Researcher role?

Candidates who are already thinking about AGI misalignment problems night and day, share the mission to build safe AGI, and have expertise in red-teaming, adversarial evaluations, and alignment research will thrive.

OpenAI

Develops safe and beneficial AI technologies

About OpenAI

OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company creates advanced AI models capable of performing various tasks, such as automating processes and enhancing creativity. OpenAI's products, like Sora, allow users to generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped profit model, which limits the profits it can make and ensures that excess earnings are redistributed to maximize the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) serves all of humanity.

Headquarters: San Francisco, California
Year Founded: 2015
Total Funding: $18,433.2M
Company Stage: Late-stage VC
Industries: AI & Machine Learning
Employees: 1,001-5,000

Benefits

Health insurance
Dental and vision insurance
Flexible spending account for healthcare and dependent care
Mental healthcare service
Fertility treatment coverage
401(k) with generous matching
20-week paid parental leave
Life insurance (complimentary)
AD&D insurance (complimentary)
Short-term/long-term disability insurance (complimentary)
Optional buy-up life insurance
Flexible work hours and unlimited paid time off (we encourage 4+ weeks per year)
Annual learning & development stipend
Regular team happy hours and outings
Daily catered lunch and dinner
Travel to domestic conferences

Risks

Elon Musk's legal battle may pose financial and reputational challenges for OpenAI.
Customizable ChatGPT personas could lead to privacy and ethical concerns.
Competitors like Anthropic raising capital may intensify market competition.

Differentiation

OpenAI's capped profit model prioritizes ethical AI development over unlimited profit.
OpenAI's AI models, like Sora, offer unique video creation from text descriptions.
OpenAI's focus on AGI aims to create AI systems smarter than humans.

Upsides

OpenAI's $6.6 billion funding boosts its AI research and computational capacity.
Customizable ChatGPT personas enhance user engagement and satisfaction.
OpenAI's 'Operator' AI agent could revolutionize workforce automation by 2025.
