Research Engineer / Scientist, Alignment Science
Anthropic
- Full Time
- Junior (1 to 2 years)
Candidates should have 3+ years of research experience in AI or a related field, proficiency in Python or a similar programming language, and expertise in AI safety topics such as RLHF, adversarial training, and robustness. A passion for AI safety and socio-technical issues, along with experience in interdisciplinary research, is essential. Candidates should also be comfortable working with large-scale AI systems and multimodal datasets.
The Research Engineer will set research strategies for studying the societal impacts of AI models and tie the findings back into model design. They will build creative methods to enable public input into model values, increase the rigor of external assurances, and help de-risk flagship model deployments in a timely manner.
Develops safe and beneficial AI technologies
OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company builds advanced AI models that can perform a wide range of tasks, from automating processes to enhancing creativity. Its products, such as Sora, let users generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped-profit model, which caps the returns investors can earn and redistributes excess earnings to maximize the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) serves all of humanity.