Research Engineer, Preparedness at OpenAI

San Francisco, California, United States

Compensation: $200,000 – $370,000
Experience Level: Senior (5 to 8 years)
Job Type: Full Time
Visa: Unknown
Industries: Artificial Intelligence, AI Safety

Requirements

  • Passionate and knowledgeable about short-term and long-term AI safety risks
  • Ability to think outside the box, with a robust "red-teaming mindset"
  • Experienced in ML research engineering, ML observability and monitoring, creating large language model-enabled applications, and/or another technical domain applicable to AI risk
  • Ability to operate effectively in a dynamic and extremely fast-paced research environment
  • Ability to scope and deliver projects end-to-end
  • (Optional) First-hand experience in red-teaming systems
  • (Optional) A good understanding of the societal aspects of AI deployment

Responsibilities

  • Identifying emerging AI safety risks and new methodologies for exploring the impact of these risks
  • Building (and continuously refining) evaluations of frontier AI models that assess the extent of identified risks
  • Designing and building scalable systems and processes to support these kinds of evaluations
  • Contributing to the refinement of risk management and the overall development of "best practice" guidelines for AI safety evaluations

Skills

Key technologies and capabilities for this role

ML Research Engineering, ML Observability, Monitoring, LLM Applications, Red Teaming, AI Safety Evaluations, Scalable Systems

Questions & Answers

Common questions about this position

What is the salary range for the Research Engineer, Preparedness position?

The salary range is $200K – $370K.

Is this position remote or does it require office work?

This information is not specified in the job description.

What skills are required for this Research Engineer role?

Required skills include experience in ML research engineering, ML observability and monitoring, building large language model-enabled applications, or another technical domain applicable to AI risk, along with a red-teaming mindset and the ability to scope and deliver projects end-to-end in a fast-paced environment.

What is the company culture like at OpenAI for this team?

The Safety Systems team operates in a dynamic and extremely fast-paced research environment focused on AI safety, trust, and transparency, including anticipating and preparing for catastrophic risks in order to promote positive change.

What makes a strong candidate for this position?

Strong candidates are passionate about AI safety risks, bring a red-teaming mindset, have experience in ML research engineering or a related technical domain, and can scope and deliver projects end-to-end in a fast-paced setting; first-hand red-teaming experience and an understanding of the societal aspects of AI deployment are a plus.

OpenAI

Develops safe and beneficial AI technologies

About OpenAI

OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company creates advanced AI models capable of performing various tasks, such as automating processes and enhancing creativity. OpenAI's products, like Sora, allow users to generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped profit model, which limits the profits it can make and ensures that excess earnings are redistributed to maximize the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) serves all of humanity.

Headquarters: San Francisco, California
Year Founded: 2015
Total Funding: $18,433.2M
Company Stage: Late-stage VC
Industries: AI & Machine Learning
Employees: 1,001-5,000

Benefits

Health insurance
Dental and vision insurance
Flexible spending account for healthcare and dependent care
Mental healthcare service
Fertility treatment coverage
401(k) with generous matching
20-week paid parental leave
Life insurance (complimentary)
AD&D insurance (complimentary)
Short-term/long-term disability insurance (complimentary)
Optional buy-up life insurance
Flexible work hours and unlimited paid time off (we encourage 4+ weeks per year)
Annual learning & development stipend
Regular team happy hours and outings
Daily catered lunch and dinner
Travel to domestic conferences

Risks

Elon Musk's legal battle may pose financial and reputational challenges for OpenAI.
Customizable ChatGPT personas could lead to privacy and ethical concerns.
Competitors like Anthropic raising capital may intensify market competition.

Differentiation

OpenAI's capped profit model prioritizes ethical AI development over unlimited profit.
OpenAI's AI models, like Sora, offer unique video creation from text descriptions.
OpenAI's focus on AGI aims to create AI systems smarter than humans.

Upsides

OpenAI's $6.6 billion funding boosts its AI research and computational capacity.
Customizable ChatGPT personas enhance user engagement and satisfaction.
OpenAI's 'Operator' AI agent could revolutionize workforce automation by 2025.
