Anthropic

Staff Infrastructure Engineer, AI Scientist Team

San Francisco, California, United States

Compensation: $180,000 – $250,000
Experience Level: Mid-level (3 to 4 years), Senior (5 to 8 years)
Job Type: Internship
Visa Sponsorship: Unknown
Industries: Artificial Intelligence, Cloud Computing

Requirements

3+ years of highly relevant experience in infrastructure engineering, with demonstrated expertise in large-scale distributed systems
Strong knowledge of performance optimization techniques and system architectures for high-throughput ML workloads
Experience with containerization technologies (Docker, Kubernetes) and orchestration at scale
Proven track record of building large-scale data pipelines and distributed storage systems
Familiarity with language model training, evaluation, and inference is highly encouraged
Experience with GPU/TPU architectures and language model inference optimization is also encouraged

Responsibilities

As a Staff Infrastructure Engineer, you will:

Design and implement large-scale infrastructure systems that support AI scientist training, evaluation, and deployment across distributed environments
Identify and resolve infrastructure bottlenecks that impede progress toward scientific capabilities
Develop robust, reliable evaluation frameworks for measuring progress toward scientific AGI
Build scalable, performant VM/sandboxing/container architectures that safely execute long-horizon AI tasks and scientific workflows (see the sketch after this list)
Collaborate to translate experimental requirements into production-ready infrastructure
Develop large-scale data pipelines that handle advanced language model training requirements
Optimize large-scale training and inference pipelines for stable and efficient reinforcement learning
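For the sandboxing responsibility above, the snippet below is a purely illustrative sketch (not Anthropic's actual stack) of one way to run a task inside a resource-limited, network-isolated container using the Docker SDK for Python; the image name, command, and limits are hypothetical.

import docker

def run_sandboxed_task(image: str, command: list[str], timeout_s: int = 3600) -> str:
    """Run `command` in an isolated container and return its logs (illustrative only)."""
    client = docker.from_env()
    container = client.containers.run(
        image,
        command,
        detach=True,
        network_disabled=True,    # no outbound network from the sandbox
        read_only=True,           # read-only root filesystem
        mem_limit="8g",           # hypothetical memory cap
        nano_cpus=4_000_000_000,  # hypothetical cap of 4 CPUs
    )
    try:
        container.wait(timeout=timeout_s)  # block until the task finishes or times out
        return container.logs().decode()
    finally:
        container.remove(force=True)       # always clean up the sandbox

# Hypothetical usage:
# logs = run_sandboxed_task("scientific-agent:latest", ["python", "run_experiment.py"])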

Skills

Docker
Kubernetes
GPU
TPU
Data Pipelines
Distributed Storage
Performance Optimization
System Architecture
Large-Scale Distributed Systems
VMs
Containerization
Reinforcement Learning
Language Model Training
Language Model Inference

About Anthropic

Anthropic focuses on creating reliable and interpretable AI systems. Its main product, Claude, is an AI assistant that handles tasks for clients across a range of industries, drawing on natural language processing, reinforcement learning, and code generation. What sets Anthropic apart from its competitors is its emphasis on building AI systems that are not only powerful but also understandable and controllable by users. The company aims to improve operational efficiency and decision-making for its clients through the deployment and licensing of its AI technologies.

Key Metrics

Headquarters: San Francisco, California
Year Founded: 2021
Total Funding: $11,482.1M
Company Stage: Growth Equity (VC)
Industries: Enterprise Software, AI & Machine Learning
Employees: 1,001-5,000

Benefits

Flexible Work Hours
Paid Vacation
Parental Leave
Hybrid Work Options
Company Equity

Risks

Ongoing lawsuit with Concord Music Group could lead to financial liabilities.
Technological lag behind competitors like OpenAI may impact market position.
Reliance on substantial funding rounds may indicate financial instability.

Differentiation

Anthropic focuses on AI safety, contrasting with competitors' commercial priorities.
Claude, Anthropic's AI assistant, is designed for tasks of any scale.
Partnerships with tech giants like Panasonic and Amazon enhance Anthropic's strategic positioning.

Upsides

Anthropic's $60 billion valuation reflects strong investor confidence and growth potential.
Collaborations like the Umi app with Panasonic tap into the growing wellness AI market.
Focus on AI safety aligns with increasing industry emphasis on ethical AI development.
