Research Engineer, Privacy at OpenAI

San Francisco, California, United States

Compensation: $380,000 – $460,000
Experience Level: Senior (5 to 8 years), Expert & Leadership (9+ years)
Job Type: Full Time
Visa: Unknown
Industries: Artificial Intelligence, Technology

Requirements

  • Hands-on research or production experience with Privacy-Enhancing Technologies (PETs)
  • Fluency in modern deep-learning stacks (PyTorch/JAX) and ability to turn cutting-edge papers into reliable, well-tested code
  • Experience stress-testing models for private data leakage and explaining complex attack vectors to non-experts
  • Track record of publishing (or implementing) novel privacy or security work, bridging academia and real-world systems
  • Ability to thrive in fast-moving, cross-disciplinary environments, alternating between open-ended research and shipping production features under tight deadlines
  • Strong communication skills, rigorous documentation, and deep care for building AI systems that respect user privacy while advancing capabilities

Responsibilities

  • Design and prototype privacy-preserving machine-learning algorithms (e.g., differential privacy, secure aggregation, federated learning) deployable at OpenAI scale
  • Measure and strengthen model robustness against privacy attacks such as membership inference, model inversion, and data memorization leaks—balancing utility with provable guarantees
  • Develop internal libraries, evaluation suites, and documentation to make cutting-edge privacy techniques accessible to engineering and research teams
  • Lead deep-dive investigations into privacy–performance trade-offs of large models, publishing insights to inform model-training and product-safety decisions
  • Define and codify privacy standards, threat models, and audit procedures guiding the ML lifecycle from dataset curation to post-deployment monitoring
  • Collaborate across Security, Policy, Product, and Legal to translate regulatory requirements into technical safeguards and tooling
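To make the first responsibility concrete: the core step of differentially private training (DP-SGD) is to clip each per-example gradient and add calibrated Gaussian noise before averaging. The sketch below is a simplified illustration of that mechanism only; the function name and parameters are illustrative, not OpenAI's implementation.

```python
import numpy as np

def dp_average(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip each per-example gradient to clip_norm, sum, add Gaussian noise,
    then average -- the core update step of DP-SGD.
    Illustrative sketch, not a production API."""
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise scale is proportional to the sensitivity of the clipped sum,
    # which is bounded by clip_norm.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]  # L2 norms 5.0 and 0.5
avg = dp_average(grads, clip_norm=1.0)
print(avg.shape)  # (2,)
```

Clipping bounds any single example's influence on the update, which is what makes the formal privacy accounting possible.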

Skills

Key technologies and capabilities for this role

differential privacy, federated learning, secure aggregation, membership inference, model inversion, data memorization, data anonymization, machine learning, privacy-preserving algorithms, model robustness
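One of the attack classes listed above, membership inference, can be illustrated with a toy loss-threshold attack: examples the model fits with unusually low loss are guessed to be training-set members. Everything below (function name, threshold, loss values) is a simplified illustration, not a production evaluation suite.

```python
def loss_threshold_attack(member_losses, nonmember_losses, threshold):
    """Guess 'member' whenever the model's loss on an example falls below
    the threshold; report attack accuracy on a balanced evaluation set.
    A toy version of the loss-based membership-inference test."""
    correct = sum(l < threshold for l in member_losses)       # true positives
    correct += sum(l >= threshold for l in nonmember_losses)  # true negatives
    return correct / (len(member_losses) + len(nonmember_losses))

# Toy losses: a memorizing model assigns members much lower loss.
members = [0.01, 0.02, 0.05, 0.03]
nonmembers = [0.9, 1.2, 0.7, 1.5]
acc = loss_threshold_attack(members, nonmembers, threshold=0.5)
print(acc)  # 1.0
```

Attack accuracy near 0.5 on a balanced set suggests little leakage; accuracy near 1.0, as in this contrived example, indicates the model's losses reveal membership.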

Questions & Answers

Common questions about this position

What is the salary range for the Research Engineer, Privacy position?

The salary range is $380K – $460K.

Is this role remote or hybrid, and where is it located?

This is a hybrid position located in San Francisco, with relocation assistance available.

What skills and experience are required for this role?

Candidates need hands-on research or production experience with privacy-enhancing technologies (PETs), fluency in modern deep-learning stacks like PyTorch/JAX, and the ability to turn cutting-edge papers into reliable code. Experience stress-testing models for data leakage, publishing novel privacy work, and thriving in cross-disciplinary environments is also key.

What is the team culture like at OpenAI's Privacy Engineering Team?

The Privacy Engineering Team is committed to integrating privacy as a foundational element in OpenAI's mission to advance AGI safely, focusing on high standards of data privacy and security across all products. They build production services, develop novel techniques, and collaborate cross-functionally in a fast-moving environment.

What makes a strong candidate for this Research Engineer role?

Strong candidates have hands-on experience with PETs like differential privacy and federated learning, proficiency in PyTorch/JAX, a track record of publishing or implementing privacy research, and the ability to thrive in fast-paced, collaborative settings while bridging academia and production systems.

OpenAI

Develops safe and beneficial AI technologies

About OpenAI

OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company creates advanced AI models capable of performing various tasks, such as automating processes and enhancing creativity. OpenAI's products, like Sora, allow users to generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped profit model, which limits the profits it can make and ensures that excess earnings are redistributed to maximize the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) serves all of humanity.

Headquarters: San Francisco, California
Year Founded: 2015
Total Funding: $18,433.2M
Company Stage: Late-stage VC
Industries: AI & Machine Learning
Employees: 1,001–5,000

Benefits

Health insurance
Dental and vision insurance
Flexible spending account for healthcare and dependent care
Mental healthcare service
Fertility treatment coverage
401(k) with generous matching
20-week paid parental leave
Life insurance (complimentary)
AD&D insurance (complimentary)
Short-term/long-term disability insurance (complimentary)
Optional buy-up life insurance
Flexible work hours and unlimited paid time off (we encourage 4+ weeks per year)
Annual learning & development stipend
Regular team happy hours and outings
Daily catered lunch and dinner
Travel to domestic conferences

Risks

Elon Musk's legal battle may pose financial and reputational challenges for OpenAI.
Customizable ChatGPT personas could lead to privacy and ethical concerns.
Competitors like Anthropic raising capital may intensify market competition.

Differentiation

OpenAI's capped profit model prioritizes ethical AI development over unlimited profit.
OpenAI's AI models, like Sora, offer unique video creation from text descriptions.
OpenAI's focus on AGI aims to create AI systems smarter than humans.

Upsides

OpenAI's $6.6 billion funding boosts its AI research and computational capacity.
Customizable ChatGPT personas enhance user engagement and satisfaction.
OpenAI's 'Operator' AI agent could revolutionize workforce automation by 2025.
