Research Engineer / Scientist, Alignment Science
Anthropic - Full Time
- Junior (1 to 2 years)
Candidates should possess a PhD in Computer Science, Robotics, or a related field with a strong focus on reinforcement learning, or a Master's degree with significant research experience in RL. They should have experience with deep reinforcement learning, particularly in areas such as model-based RL, offline RL, or hierarchical RL. Strong programming skills in Python are essential, along with familiarity with deep learning frameworks like TensorFlow or PyTorch. A solid understanding of machine learning principles and experience with large-scale ML codebases is also required.
As a Research Engineer/Research Scientist, you will advance the frontier of AI alignment and capabilities through cutting-edge RL methods, contributing to the training of intelligent and aligned agents. You will iterate quickly on ideas, drive them to completion, and value principled approaches and simple experiments. You will dive into a large ML codebase to debug and improve it, and conduct research to develop new RL techniques.
Develops safe and beneficial AI technologies
OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company creates advanced AI models capable of performing a wide range of tasks, from automating processes to enhancing creativity. Its products, such as Sora, allow users to generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped-profit model, which limits the returns investors can receive and redistributes excess earnings to maximize the social benefits of AI. This structure reflects its commitment to safety and ethical considerations, central to its mission of ensuring that artificial general intelligence (AGI) serves all of humanity.