Software Engineer, Safety Processing
Discord - Full Time
- Junior (1 to 2 years)
Candidates should have experience building and running production services in a high-growth environment. Strong debugging skills for live issues, along with experience in content safety, fraud, or abuse, are preferred. Proficiency in Python or another modern language such as C++, Rust, or Go is required, as is an understanding of the trade-offs between capabilities and risks. Candidates should also have strong project management skills and be able to critically assess the risks of new products or features.
The Software Engineer will architect, build, and maintain anti-abuse and content moderation infrastructure. They will collaborate with engineers and researchers to improve the alignment of AI models with human values. The role includes diagnosing and remediating active incidents on the platform and building new tooling and infrastructure to address the root causes of system failures.
Develops safe and beneficial AI technologies
OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company builds advanced AI models capable of performing a wide range of tasks, from automating processes to enhancing creativity. Its products, such as Sora, let users generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped-profit model, which caps the profits it can earn and redistributes excess earnings to maximize the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) benefits all of humanity.