Senior Security Engineer, Application Security
Trail of Bits - Full Time
- Senior (5 to 8 years)
As a Novel Abuse Testing Specialist at OpenAI, you’ll focus on advancing post-launch product testing, ensuring our systems remain resilient against evolving, real-world adversarial threats. You’ll design, execute, and refine innovative adversarial testing and simulation protocols, taking a hands-on red-team approach to simulate threat-actor methodologies and conduct rigorous application testing. The role serves as a critical technical bridge between security, product, and policy teams, applying an attacker mindset and advanced red-team tactics to drive actionable insights and strengthen our defenses. The ideal candidate has a strong background in application security or penetration testing, with hands-on experience in web application security and proficiency in tools such as Burp Suite and Metasploit.
The Intelligence and Investigations team is dedicated to ensuring the safe, responsible deployment of AI by rapidly detecting and mitigating abuse. The team focuses on post-launch product testing and on uncovering novel abuse vectors.
OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company builds advanced AI models capable of a wide range of tasks, from automating processes to enhancing creativity; its products, such as Sora, let users generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped-profit model, which limits the returns it can earn and redistributes excess earnings to maximize the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) benefits all of humanity.