Trust & Safety Agent
WhatNot
Full Time
Entry Level & New Grad, Junior (1 to 2 years)
Common questions about this position
Where is this role based?
This role is based in our London office and includes participation in an on-call rotation.
What experience is required for this role?
Candidates need at least 4 years of experience in technical analysis and detection using SQL and Python; experience in trust and safety or in working with policy and engineering teams; and experience with data engineering, machine learning principles, and scaling processes with language models.
What does the Intelligence and Investigations team do?
The Intelligence and Investigations team identifies and investigates misuse of OpenAI products, enabling partner teams to develop data-backed policies and safety mitigations. It works cross-functionally with product, policy, ops, investigative, and engineering teams.
What makes a strong candidate?
A strong candidate has an investigative mindset; experience in trust and safety; technical skills in SQL, Python, data engineering, and machine learning; and the ability to scale processes with language models and work cross-functionally.
Develops safe and beneficial AI technologies
OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company creates advanced AI models capable of performing various tasks, such as automating processes and enhancing creativity. OpenAI's products, like Sora, allow users to generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped-profit model, which limits the profits it can make and ensures that excess earnings are redistributed to maximize the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) serves all of humanity.