Research Engineer / Scientist, Alignment Science
Anthropic - Full Time
- Junior (1 to 2 years)
Candidates should have a Ph.D. or another advanced degree in computer science, machine learning, or a related field, along with 4+ years of experience in AI safety, particularly in areas such as RLHF, adversarial training, robustness, fairness, and bias. An in-depth understanding of deep learning research and strong engineering skills are required, as is experience with safety work for deploying AI models. A passion for AI safety and alignment with OpenAI's mission and charter are essential.
The Research Engineer/Scientist will conduct state-of-the-art research on AI safety topics, including RLHF and adversarial training, implement new methods in OpenAI's core model training, and launch safety improvements in products. The role involves setting research directions to enhance the safety and robustness of AI systems, coordinating with cross-functional teams to ensure compliance with safety standards, and evaluating model safety to identify risks and propose mitigation strategies.
Develops safe and beneficial AI technologies
OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company creates advanced AI models capable of performing a wide range of tasks, such as automating processes and enhancing creativity. OpenAI's products, like Sora, let users generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped-profit model, which limits the returns it can earn and ensures that excess earnings are redirected to maximize the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) benefits all of humanity.