Full Stack Engineer, Trust & Safety
Calendly - Full Time
- Junior (1 to 2 years)
Candidates should have 7+ years of experience in Trust & Safety, Fraud, Risk, or similar operational risk roles, ideally at a tech platform or startup. Strong data fluency, including proficiency in SQL and spreadsheet software, is required. Familiarity with Python, R, or a similar scripting language is a plus, as is experience with LLM prompt engineering or deploying LLMs in operations or moderation workflows.
The Trust & Safety Solutions Analyst will:
- Design and implement scalable, automated Trust & Safety workflows using in-house tools, third-party platforms, and LLM-based automation, building prototypes and scrappy technical solutions that help the operations team move faster and smarter.
- Rapidly analyze large-scale support and incident data to uncover abuse patterns, policy gaps, and emerging threats, and translate these insights into actionable solutions.
- Collaborate with Engineering, Policy, and Ops to build feedback loops that turn frontline signals into model training data and detection heuristics.
- Anticipate and manage safety risks for new product and policy launches.
Develops safe and beneficial AI technologies
OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company builds advanced AI models capable of a wide range of tasks, from automating processes to enhancing creativity. Its products, such as Sora, let users generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped-profit model, which limits investor returns and redirects excess earnings to maximize the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) benefits all of humanity.