[Remote] Protection Scientist Engineer, Intelligence and Investigations at OpenAI

San Francisco, California, United States

Compensation: Not specified
Experience Level: N/A
Job Type: N/A
Visa: Not specified
Industries: N/A

Requirements

  • Have at least 4 years of experience in technical analysis and detection, especially using SQL and Python (a brief sketch of this kind of work follows this list)
  • Have experience in trust and safety and/or have worked closely with policy, enforcement, and engineering teams, and bring an investigative mindset
  • Have experience with basic data engineering, such as building core tables or writing data pipelines in production, and with machine learning principles and execution; basic software development skills are a plus, as this role writes productionised code
  • Have experience scaling and automating processes, especially with language models
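
A minimal, hypothetical sketch of the SQL-plus-Python detection work named above: it queries an assumed usage_events table for per-account violation rates and queues high-rate accounts for review. The table, columns, and threshold are illustrative assumptions, not OpenAI's actual schema or rules.

```python
# Hypothetical detection pass: find accounts whose flagged-event rate
# exceeds a review threshold. Schema and data are illustrative only.
import sqlite3

THRESHOLD = 0.2  # assumed violation-rate cutoff for human review

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE usage_events (
        account_id TEXT,
        flagged    INTEGER  -- 1 if a safety classifier flagged the event
    );
    INSERT INTO usage_events VALUES
        ('acct_1', 0), ('acct_1', 1), ('acct_1', 1),
        ('acct_2', 0), ('acct_2', 0);
""")

rows = conn.execute("""
    SELECT account_id,
           AVG(flagged) AS violation_rate,
           COUNT(*)     AS events
    FROM usage_events
    GROUP BY account_id
""").fetchall()

# Apply the detection rule in Python and queue accounts for review.
review_queue = [
    {"account_id": acct, "violation_rate": rate, "events": n}
    for acct, rate, n in rows
    if rate >= THRESHOLD
]
print(review_queue)  # -> only acct_1 crosses the assumed threshold
```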

Responsibilities

  • Scope and implement abuse monitoring requirements for new product launches
  • Improve processes to sustain monitoring operations for existing products, including developing approaches to automate monitoring subtasks (see the sketch after this list)
  • Prototype detection, review, and enforcement systems for major harms, and mature them into production
  • Work with Product, Policy, Ops, and Investigative teams to understand key risks and how to address them, and with Engineering teams to ensure we have sufficient data and scaled tooling
  • Respond to and investigate critical escalations, especially those not caught by existing safety systems
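
As a rough illustration of automating a monitoring subtask with a language model (referenced above), the sketch below asks a model to triage an abuse report into coarse buckets before a human investigator looks at it. It assumes the openai Python SDK; the model name, labels, and prompt are illustrative, not an actual OpenAI workflow.

```python
# Hypothetical triage step for an abuse-monitoring queue: ask a language
# model for a coarse label, falling back to human review on any ambiguity.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["benign", "needs_review", "urgent_escalation"]

def triage(report_text: str) -> str:
    """Return a single triage label for an abuse report (illustrative)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Classify the abuse report into one of: "
                        f"{', '.join(LABELS)}. Reply with the label only."},
            {"role": "user", "content": report_text},
        ],
    )
    label = response.choices[0].message.content.strip()
    return label if label in LABELS else "needs_review"  # fail safe to human review

if __name__ == "__main__":
    print(triage("User reports repeated attempts to generate harmful instructions."))
```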


About OpenAI

OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company creates advanced AI models capable of performing various tasks, such as automating processes and enhancing creativity. OpenAI's products, like Sora, allow users to generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped profit model, which limits the profits it can make and ensures that excess earnings are redistributed to maximize the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) serves all of humanity.

Headquarters: San Francisco, California
Year Founded: 2015
Total Funding: $18,433.2M
Company Stage: Late VC
Industries: AI & Machine Learning
Employees: 1,001-5,000

Benefits

Health insurance
Dental and vision insurance
Flexible spending account for healthcare and dependent care
Mental healthcare service
Fertility treatment coverage
401(k) with generous matching
20-week paid parental leave
Life insurance (complimentary)
AD&D insurance (complimentary)
Short-term/long-term disability insurance (complimentary)
Optional buy-up life insurance
Flexible work hours and unlimited paid time off (we encourage 4+ weeks per year)
Annual learning & development stipend
Regular team happy hours and outings
Daily catered lunch and dinner
Travel to domestic conferences

Risks

Elon Musk's legal battle may pose financial and reputational challenges for OpenAI.
Customizable ChatGPT personas could lead to privacy and ethical concerns.
Competitors like Anthropic raising capital may intensify market competition.

Differentiation

OpenAI's capped profit model prioritizes ethical AI development over unlimited profit.
OpenAI's AI models, like Sora, offer unique video creation from text descriptions.
OpenAI's focus on AGI aims to create AI systems smarter than humans.

Upsides

OpenAI's $6.6 billion funding boosts its AI research and computational capacity.
Customizable ChatGPT personas enhance user engagement and satisfaction.
OpenAI's 'Operator' AI agent could revolutionize workforce automation by 2025.
