Anthropic

Research Scientist, Frontier Red Team (Autonomy)

San Francisco, California, United States

Compensation: $280,000 – $425,000
Experience Level: Junior (1 to 2 years)
Job Type: Full Time
Visa: Unknown
Industries: Artificial Intelligence, Research

Position Overview

  • Location Type: Hybrid (San Francisco or London office)
  • Role: Research Scientist
  • Salary: $280,000 - $425,000 USD

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

We are looking for Research Scientists to develop and productionize advanced autonomy evaluations on our Frontier Red Team. Our goal is to develop and implement a gold standard of advanced autonomy evals to determine the AI Safety Level (ASL) of our models. This has major implications for the way we train, deploy, and secure our models, as detailed in our Responsible Scaling Policy (RSP). We believe that developing autonomy evals is one of the best ways to study increasingly capable and agentic models. If you have thought particularly hard about how models might become agentic, and about the associated risks, and you have built an eval or experiment around it, we’d like to meet you.

Note: We will be prioritizing candidates who can start ASAP. This role may also grow into a people-management position overseeing a few other individual contributors (ICs). If you are interested in people management, please indicate this in your application.

Requirements

  • ML background and experience leading experimental research on LLMs/multimodal models and/or agents
  • Strong Python-based engineering skills
  • Driven to find solutions to ambiguously scoped problems
  • Ability to design and run experiments and iterate quickly to solve machine learning problems
  • Experience training, working with, and prompting models
  • At least a Bachelor’s degree in a related field or equivalent experience

Responsibilities

  • Lead the end-to-end development of autonomy evals and research, including risk and capability modeling, designing, implementing, and regularly running these evals.
  • Quickly iterate on experiments to evaluate autonomous capabilities and forecast future capabilities.
  • Provide technical leadership to Research Engineers to scope and build scalable and secure infrastructure to quickly run large-scale experiments.
  • Communicate the outcomes of the evaluations to relevant Anthropic teams, as well as policy stakeholders and research collaborators, where relevant.
  • Collaborate with other projects on the Frontier Red Team, Alignment, and beyond to improve infrastructure and design safety techniques for autonomous capabilities.

Application Instructions

  • Please note: We are prioritizing candidates who can start ASAP and can be based in either our San Francisco or London office.

Skills

Machine Learning
Large Language Models
Multimodal Models
Python
Experiment Design
Model Prompting
Research
Autonomy Evaluation
Risk and Capability Modeling

Anthropic

Develops reliable and interpretable AI systems

About Anthropic

Anthropic focuses on creating reliable and interpretable AI systems. Its main product, Claude, serves as an AI assistant that can manage tasks for clients across various industries. Claude utilizes advanced techniques in natural language processing, reinforcement learning, and code generation to perform its functions effectively. What sets Anthropic apart from its competitors is its emphasis on making AI systems that are not only powerful but also understandable and controllable by users. The company's goal is to enhance operational efficiency and improve decision-making for its clients through the deployment and licensing of its AI technologies.

Headquarters: San Francisco, California
Year Founded: 2021
Total Funding: $11,482.1M
Company Stage: Growth Equity (VC)
Industries: Enterprise Software, AI & Machine Learning
Employees: 1,001-5,000

Benefits

Flexible Work Hours
Paid Vacation
Parental Leave
Hybrid Work Options
Company Equity

Risks

Ongoing lawsuit with Concord Music Group could lead to financial liabilities.
Technological lag behind competitors like OpenAI may impact market position.
Reliance on substantial funding rounds may indicate financial instability.

Differentiation

Anthropic focuses on AI safety, contrasting with competitors' commercial priorities.
Claude, Anthropic's AI assistant, is designed for tasks of any scale.
Partnerships with tech giants like Panasonic and Amazon enhance Anthropic's strategic positioning.

Upsides

Anthropic's $60 billion valuation reflects strong investor confidence and growth potential.
Collaborations like the Umi app with Panasonic tap into the growing wellness AI market.
Focus on AI safety aligns with increasing industry emphasis on ethical AI development.
