Research Engineer / Scientist, Alignment Science
Anthropic - Full Time
- Junior (1-2 years)
Candidates should have strong machine learning engineering skills and research experience, along with a deep understanding of human-machine interaction challenges. Experience with machine learning frameworks such as PyTorch is essential, and candidates should be comfortable experimenting with large-scale models. An interest or background in cognitive science, computational linguistics, human-computer interaction, or the social sciences is preferred. Candidates must be goal-oriented and motivated by the mission of building safe and beneficial AGI.
As a Research Engineer, you will research and model the mechanisms that create value for people, focusing on explaining and predicting preferences, behaviors, and satisfaction. You will quantify the nuances of human behavior in data-driven systems, design robust evaluations that measure alignment and real-world utility, and develop new human-AI interaction paradigms. You will also evaluate alignment capabilities that are subjective and context-dependent.
Develops safe and beneficial AI technologies
OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company creates advanced AI models capable of performing a wide range of tasks, from automating processes to enhancing creativity. Its products, such as Sora, allow users to generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped-profit model, which limits the returns it can earn and ensures that excess earnings are redistributed to maximize the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) serves all of humanity.