Research Scientist, Frontier Red Team (Autonomy)
Anthropic - Full Time
- Junior (1 to 2 years)
Candidates should possess a strong understanding of AI safety principles and practices, with experience leading and managing complex projects. A background in risk management, regulatory affairs, or a related field is preferred, and the ability to collaborate effectively with diverse teams is essential.
As the External Safety Testing Lead, you will design and oversee the implementation of GDM’s external safety testing program. You will lead GDM’s input into external safety testing requirements from regulators and government bodies, and optimize the program to support the growing needs of the business. You will also carry out cross-industry ‘horizon scanning’ to identify and maintain visibility of current and future external testing requirements, and matrix-manage a cross-functional team to escalate risks and issues to wider stakeholder groups.
Develops artificial general intelligence systems
This company leads in the field of artificial general intelligence (AGI), with notable applications across healthcare, energy management, and biotechnology. Its work on early diagnostic tools for eye diseases, optimization of energy usage in major data centers, and groundbreaking contributions to protein structure prediction underscores its commitment to harnessing AI for diverse practical applications. The company's dedication to pushing the boundaries of AI technology not only propels the industry forward but also creates a dynamic and impactful working environment for its employees.