Trust & Safety Agent
Whatnot, Full Time
Entry Level & New Grad, Junior (1 to 2 years)
You will handle and resolve complex user issues involving fraud, impersonation, account access abuse, trust & safety incidents, regulatory inquiries, intellectual property matters, and AI governance. You will also perform risk evaluations and investigations, act as incident manager, interface with cross-functional teams, build tooling and playbooks, contribute to vendor training, and lead cross-functional initiatives.
The User Operations team safeguards products and users from legal risk, regulatory non-compliance, fraud, and abuse, operating at the intersection of operations, compliance, and user trust. The role is embedded within the broader User Operations organization and collaborates cross-functionally with Legal, Policy, Engineering, Product, and external vendors.
A strong candidate is sharp, adaptive, and operations-minded, with sound personal discretion, empathy, and the resilience to handle sensitive content such as harassment, fraud, or regulatory violations.
Develops safe and beneficial AI technologies
OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company creates advanced AI models capable of performing various tasks, such as automating processes and enhancing creativity. OpenAI's products, like Sora, allow users to generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped profit model, which limits the profits it can make and ensures that excess earnings are redistributed to maximize the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) serves all of humanity.