Job Description: Reality Defender
Employment Type
Full-Time
About Reality Defender
Reality Defender provides accurate, multi-modal AI-generated media detection solutions that enable enterprises and governments to identify and prevent fraud, disinformation, and harmful deepfakes in real time. A Y Combinator graduate, a Comcast NBCUniversal LIFT Labs alumnus, and backed by DCVC, Reality Defender is the first company to pioneer multi-modal and multi-model detection of AI-generated media. Our web app and platform-agnostic API, built by our research-forward team, ensure that our customers can swiftly and securely mitigate fraud and cybersecurity risks in real time with a frictionless, robust solution.
Why we stand out:
- Our best-in-class accuracy stems from our singular, research-backed mission and our use of multiple models per modality.
- We can detect AI-generated fraud and disinformation in near real time or real time across all modalities, including audio, video, image, and text.
- Our platform is designed for ease of use, featuring a versatile API that integrates seamlessly with any system, an intuitive drag-and-drop web application for quick ad hoc analysis, and platform-agnostic real-time audio detection tailored for call center deployments.
- We’re privacy-first, ensuring the strongest standards of compliance and keeping customer data out of the training of our detection models.
Responsibilities
- Investigate the composition of open-source audio deepfake datasets.
- Use multimodal LLMs to create rich feature representations of audio deepfake data.
- Use these feature representations to identify trends in the data related to measures of bias, fairness, and how difficult real and fake samples are to distinguish.
- Collaborate with scientists and engineers across the organization.
About You
- Currently enrolled in a PhD program in deep learning, computer vision, speech/audio processing, or a related field.
- Have implemented and/or published peer-reviewed papers in reputable AI research venues such as CVPR, ICLR, or Interspeech.
- Strong skills and intuition with exploratory data analysis techniques such as dimensionality reduction and clustering.
- Experience with multimodal LLMs and an understanding of their architectures.
- Have 1+ years of programming experience in Python and of model building in PyTorch; experience with audio models (e.g., HuBERT, wav2vec) would be a plus but is not essential.
- Team player with a positive attitude and good communication skills.
Salary
- Information not provided.
Location Type
- Information not provided.