Research Staff, Data Science
Deepgram
Full Time
Expert & Leadership (9+ years)
Candidates must be currently enrolled in a PhD program in deep learning, computer vision, speech/audio processing, or a related field. They should have a strong understanding of Multimodal LLMs and their architectures, along with at least one year of programming experience in Python and of model building in PyTorch. Experience with audio models such as HuBERT or wav2vec is a plus. Strong skills in exploratory data analysis techniques, such as dimensionality reduction and clustering, are required, as is a publication record at reputable AI research venues such as CVPR or ICLR. Good communication skills and a positive attitude are also essential.
The Applied Scientist Intern will investigate the composition of open-source audio deepfake datasets. They will use Multimodal LLMs to build rich feature representations of audio deepfake data and apply these representations to uncover trends related to bias, fairness, and the difficulty of distinguishing real from fake samples. The intern will also collaborate with scientists and engineers across the organization.
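The exploratory workflow described above can be sketched in miniature: embed each audio clip as a feature vector, reduce the embeddings with dimensionality reduction, and cluster them to surface structure such as real-versus-fake separability. This is a hedged, self-contained illustration only: the "embeddings" below are simulated Gaussian data standing in for representations that, in practice, might come from a HuBERT- or wav2vec-style model, and the PCA and k-means routines are minimal NumPy implementations, not the team's actual tooling.

```python
import numpy as np

def pca(X, n_components=2):
    """Project X onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def kmeans(X, k=2, iters=50, seed=0):
    """Minimal k-means: assign points to nearest centroid, update, repeat."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]  # fancy indexing copies
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

# Simulated "embeddings": two shifted Gaussian clusters standing in for
# real vs. fake audio representations (purely illustrative data).
rng = np.random.default_rng(42)
real = rng.normal(0.0, 1.0, size=(100, 64))
fake = rng.normal(3.0, 1.0, size=(100, 64))
X = np.vstack([real, fake])

Z = pca(X, n_components=2)   # low-dimensional view for exploratory analysis
labels = kmeans(Z, k=2)      # unsupervised split of the embedding space
```

On well-separated clusters like these, the cluster assignments tend to track the real/fake split; on real deepfake datasets, disagreement between clusters and labels is exactly the kind of signal (bias, classification difficulty) the role would investigate.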
Deepfake detection for enterprises and governments
Reality Defender offers deepfake detection solutions to protect enterprises, platforms, and governments from AI-generated threats. Its detection platform scans images, videos, and audio in real time to identify fabricated content, helping to prevent misinformation. The company stands out by providing enterprise-grade services through a subscription model that allows easy integration into existing systems. The goal is to enhance fraud prevention and maintain the authenticity of digital content for clients.