Research Engineer / Scientist, Alignment Science
Anthropic
- Full Time
- Junior (1 to 2 years)
Candidates should have more than 5 years of research engineering experience and proficiency in Python or a similar programming language. Experience with large-scale AI systems and multimodal datasets is a plus, as is expertise in AI safety topics such as RLHF, adversarial training, robustness, fairness, and bias. Strong enthusiasm for AI safety and a commitment to making advanced AI models safer for real-world applications are essential.
The Research Engineer/Scientist will conduct applied research to improve foundation models' reasoning about human values, ethics, and cultural norms. They will develop and refine AI moderation models to detect and mitigate AI misuse, collaborate with policy researchers to iterate on content policies, contribute to multimodal content analysis research, and improve pipelines for automated data labeling and model training. They will also design and experiment with a red-teaming pipeline to assess the robustness of harm prevention systems.
Develops safe and beneficial AI technologies
OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company builds advanced AI models capable of a wide range of tasks, from automating processes to enhancing creative work. Its products, such as Sora, let users generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped-profit model, which limits the profits it can make and ensures that excess earnings are redistributed to maximize the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) serves all of humanity.