Content Policy Specialist (1-year contract; 20 hrs/week)
Cohere
Part Time
Mid-level (3 to 4 years)
Candidates should have extensive experience researching LLMs, ML, AI, tech policy, moral reasoning, or classification problems. A deep understanding of ML model policy definition, refinement, and enforcement is required, along with practical knowledge of the operational challenges of enforcing policies with RLHF. The ability to analyze risks and benefits in open-ended problem spaces and to generate solutions for ambiguous problems is also essential.
The Model Policy Manager will design objective, defensible model policies for safe behavior, developing taxonomies that guide data collection, model behavior, and monitoring strategies. The role involves balancing utility maximization against the prevention of catastrophic risks, leading prioritization of safety efforts across new model launches, and weighing technical and business trade-offs. The manager will also develop broad subject matter expertise, collaborate across internal teams, and make confident decisions to ensure groundbreaking technologies do not cause harm.
Develops safe and beneficial AI technologies
OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company creates advanced AI models capable of performing a wide range of tasks, such as automating processes and enhancing creativity. OpenAI's products, like Sora, allow users to generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped-profit model, which limits the returns it can distribute and redirects excess earnings toward maximizing the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) serves all of humanity.