Fraud Model Risk Manager, Machine Learning
Affirm - Full Time
- Junior (1 to 2 years)
Requirements:
- At least 5 years of software engineering experience in backend and data systems
- At least 2 years of experience in fraud or abuse analysis, investigation, and/or operations
- Strong intuition for understanding codebases and the ability to suggest improvements
- A voracious desire to learn and the ability to communicate effectively
- Comfort with ambiguity and rapid change
- Experience with machine learning techniques is a plus
Responsibilities:
- Design and build systems for fraud detection and remediation, balancing fraud loss, implementation cost, and customer experience
- Collaborate closely with finance, security, product, research, and trust & safety operations to combat fraudulent and abusive actors
- Stay current on the latest fraud detection techniques, and apply GPT-4 and future models to strengthen defenses against fraud and abuse
Develops safe and beneficial AI technologies
OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company builds advanced AI models capable of a wide range of tasks, from automating processes to enhancing creativity. Its products, such as Sora, let users generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped-profit model, which limits the profits it can make and ensures that excess earnings are redistributed to maximize the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) benefits all of humanity.