Trust and Safety Investigator
Vultr · Full Time
Mid-level (3 to 4 years), Senior (5 to 8 years)
Candidates should have Trust & Safety experience in the child safety space, preferably at a large tech company handling a considerable volume of child safety-related content and content review. Experience in an operations capacity with an understanding of typical Trust & Safety topics and concepts is expected. A clear understanding of legal obligations and reporting flows for submitting CyberTips to the National Center for Missing and Exploited Children (NCMEC) is necessary. Demonstrated ability to operate in an ambiguous and rapidly changing environment is ideal. While not required, software engineering, statistics, machine learning experience, and SQL skills are beneficial.
The Child Safety Enforcement Specialist will perform critical content reviews, train vendors and provide them with deep insight into policies, and maintain quality processes for workflows and automated content moderation. They will actively expand their knowledge of vulnerabilities and mitigation techniques, and collaborate cross-functionally to improve policies, tooling, and processes. This role involves conducting detailed reviews of user-generated content for violations of child safety policies, investigating high-risk user behavior escalations, and determining appropriate enforcement actions. The specialist will identify content requiring mandatory reporting and draft reports for NCMEC, use internal tools to triage and escalate content, and provide feedback to improve detection technologies. They will also partner with legal, policy, product, and engineering teams to inform safety product improvements and enforcement policies.
Develops safe and beneficial AI technologies
OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company creates advanced AI models capable of performing a variety of tasks, such as automating processes and enhancing creativity. OpenAI's products, like Sora, allow users to generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped-profit model, which limits the returns investors can earn and directs excess earnings toward maximizing the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) serves all of humanity.