Deepgram

Research Scientist - Voice AI Foundations

Remote

Compensation: $150,000 – $220,000
Experience Level: Mid-level (3 to 4 years), Senior (5 to 8 years)
Job Type: Full Time
Visa: Unknown
Industries: AI & Machine Learning, Data & Analytics

Position Overview

  • Location Type: Remote
  • Job Type: Full Time
  • Salary: $150K - $220K

Deepgram is the leading voice AI platform for developers building speech-to-text (STT), text-to-speech (TTS) and full speech-to-speech (STS) offerings. 200,000+ developers build with Deepgram’s voice-native foundational models – accessed through APIs or as self-managed software – due to our unmatched accuracy, latency and pricing. Customers include software companies building voice products, co-sell partners working with large enterprises, and enterprises solving internal voice AI use cases. The company ended 2024 cash-flow positive with 400+ enterprise customers, 3.3x annual usage growth across the past 4 years, over 50,000 years of audio processed and over 1 trillion words transcribed. There is no organization in the world that understands voice better than Deepgram.

The Opportunity

Voice is the most natural modality for human interaction with machines. However, current sequence modeling paradigms based on jointly scaling model and data cannot deliver voice AI capable of universal human interaction. The challenges are rooted in fundamental data problems posed by audio: real-world audio data is scarce and enormously diverse, spanning a vast space of voices, speaking styles, and acoustic conditions. Even if billions of hours of audio were accessible, its inherent high dimensionality creates computational and storage costs that make training and deployment prohibitively expensive at world scale. We believe that entirely new paradigms for audio AI are needed to overcome these challenges and make voice interaction accessible to everyone.

Responsibilities

  • Pioneer the development of Latent Space Models (LSMs), a new approach that aims to solve the fundamental data, scale, and cost challenges associated with building robust, contextualized voice AI.
  • Build next-generation neural audio codecs that achieve extreme, low bit-rate compression and high fidelity reconstruction across a world-scale corpus of general audio.
  • Pioneer steerable generative models that can synthesize the full diversity of human speech from the codec latent representation, from casual conversation to highly emotional expression to complex multi-speaker scenarios with environmental noise and overlapping speech.
  • Develop embedding systems that cleanly factorize the codec latent space into interpretable dimensions of speaker, content, style, environment, and channel effects, enabling precise control over each aspect and the ability to massively amplify an existing seed dataset through “latent recombination”.
  • Leverage latent recombination to generate synthetic audio data at previously impossible scales, unlocking joint model and data scaling paradigms for audio (a toy illustration of latent recombination follows this list).
  • Endeavor to train multimodal speech-to-speech systems that can 1) understand any human irrespective of their demographics, state, or environment and 2) produce empathic, human-like speech.
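To make the “latent recombination” idea above concrete, here is a minimal toy sketch in PyTorch. It assumes codec latents have already been factorized into speaker, content, and style slices and simply mixes slices drawn from different utterances; the dimensions, names, and concatenation-based factorization are illustrative assumptions, not Deepgram's actual model design.

```python
# Toy sketch of latent recombination over factorized codec latents.
# All shapes and the slice-based factorization are hypothetical.
import torch

SPEAKER_DIM, CONTENT_DIM, STYLE_DIM = 64, 192, 32
LATENT_DIM = SPEAKER_DIM + CONTENT_DIM + STYLE_DIM


def split_latent(z: torch.Tensor):
    """Split a (LATENT_DIM,) latent into its assumed speaker/content/style factors."""
    return torch.split(z, [SPEAKER_DIM, CONTENT_DIM, STYLE_DIM], dim=-1)


def recombine(z_a: torch.Tensor, z_b: torch.Tensor, z_c: torch.Tensor) -> torch.Tensor:
    """Assemble a new latent: speaker of A, content of B, style of C."""
    speaker_a, _, _ = split_latent(z_a)
    _, content_b, _ = split_latent(z_b)
    _, _, style_c = split_latent(z_c)
    return torch.cat([speaker_a, content_b, style_c], dim=-1)


if __name__ == "__main__":
    # Three random "utterance" latents stand in for a small seed dataset.
    seed = torch.randn(3, LATENT_DIM)
    # Every (speaker, content, style) combination yields a new synthetic latent,
    # so N seed utterances can be amplified toward N**3 recombinations.
    synthetic = torch.stack(
        [recombine(seed[i], seed[j], seed[k])
         for i in range(3) for j in range(3) for k in range(3)]
    )
    print(synthetic.shape)  # torch.Size([27, 288])
```

In practice the recombined latents would be decoded back to audio by the neural codec, which is what turns this combinatorial amplification into usable synthetic training data.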

Requirements

  • (Details are missing in the provided text)

Application Instructions

  • (Details are missing in the provided text)

Company Information

  • Deepgram is the leading voice AI platform for developers.
  • They have 400+ enterprise customers and 3.3x annual usage growth.
  • Over 50,000 years of audio processed and over 1 trillion words transcribed.

Skills

Latent Space Models
Neural Audio Codecs
Speech-to-Text
Text-to-Speech
Speech-to-Speech
Audio Processing
Machine Learning
Deep Learning
Python
PyTorch
TensorFlow

Deepgram

Speech recognition APIs for audio transcription

About Deepgram

Deepgram specializes in artificial intelligence for speech recognition, offering a set of APIs that developers can use to transcribe and understand audio content. Their technology allows clients, ranging from startups to large organizations like NASA, to process millions of audio minutes daily. Deepgram's APIs are designed to be fast, accurate, scalable, and cost-effective, making them suitable for businesses needing to handle large volumes of audio data. The company operates on a pay-per-use model, where clients are charged based on the amount of audio they transcribe, allowing Deepgram to grow its revenue alongside client usage. With a focus on the high-growth market of speech recognition, Deepgram is positioned for future success.
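As a rough picture of the pay-per-use API described above, the following is a minimal sketch of a transcription request against Deepgram's hosted pre-recorded endpoint. The query parameters, model name, placeholder URL, and response layout shown are assumptions based on typical usage and may differ from the current API reference.

```python
# Minimal sketch: transcribe audio hosted at a URL via Deepgram's /v1/listen endpoint.
import os
import requests

DEEPGRAM_API_KEY = os.environ["DEEPGRAM_API_KEY"]  # per-account key; usage is billed per audio transcribed

response = requests.post(
    "https://api.deepgram.com/v1/listen",
    params={"model": "nova-2", "smart_format": "true"},  # assumed parameter names
    headers={
        "Authorization": f"Token {DEEPGRAM_API_KEY}",
        "Content-Type": "application/json",
    },
    json={"url": "https://example.com/meeting-recording.wav"},  # placeholder audio URL
    timeout=60,
)
response.raise_for_status()

# Assumed response shape: first channel, top alternative.
transcript = response.json()["results"]["channels"][0]["alternatives"][0]["transcript"]
print(transcript)
```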

Headquarters: San Francisco, California
Year Founded: 2015
Total Funding: $100.5M
Company Stage: Series B
Industries: Data & Analytics, AI & Machine Learning
Employees: 51-200

Benefits

Comprehensive Health Plans
FSA Health Matching up to $1,000
Work from Home Ergonomic Stipend
Healthy Food & Snacks in offices
Community Groups
Unlimited Vacation

Risks

Increased competition from open-source solutions like OpenAI's Whisper threatens market share.
Recent layoffs suggest potential financial instability or strategic restructuring challenges.
Integration of Poised may cause disruptions in service or product development.

Differentiation

Deepgram's APIs offer fast, accurate, and scalable speech recognition solutions.
The acquisition of Poised enhances Deepgram's real-time feedback capabilities in virtual meetings.
Aura API provides low-latency, human-like voice models for conversational AI agents.

Upsides

Strategic partnership with Clarifai accelerates AI application development and market expansion.
Aura API positions Deepgram to capitalize on real-time conversational voice AI trends.
Deepgram's technology is used by large enterprises like NASA, indicating strong market trust.
