AI Engineer & Researcher, Inference
Speechify · Full Time
Junior (1 to 2 years)
San Francisco, California, United States
Candidates should have experience with ML systems and deep learning frameworks such as PyTorch, TensorFlow, or ONNX; familiarity with common LLM architectures and inference optimization techniques; and an understanding of GPU architectures or hands-on experience with GPU kernel programming in CUDA.
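To give a concrete sense of the kind of inference-optimization work this role references, here is a minimal, hypothetical sketch: benchmarking a toy PyTorch module in eager mode against torch.compile. The model, sizes, and iteration counts are illustrative assumptions, not Speechify's actual stack.

```python
# Illustrative only: compare eager PyTorch latency against torch.compile
# on a toy MLP. Shapes and iteration counts are hypothetical.
import time
import torch
import torch.nn as nn

def benchmark(fn, x, warmup=10, iters=50):
    """Return mean latency in milliseconds for fn(x)."""
    for _ in range(warmup):
        fn(x)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        fn(x)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1e3

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(
    nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)
).to(device).eval()
x = torch.randn(8, 1024, device=device)

with torch.no_grad():
    eager_ms = benchmark(model, x)
    compiled = torch.compile(model)  # PyTorch 2.x graph compilation
    compiled_ms = benchmark(compiled, x)

print(f"eager: {eager_ms:.2f} ms  compiled: {compiled_ms:.2f} ms")
```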
The AI Inference Engineer will develop APIs for AI inference used by internal and external customers; benchmark and address bottlenecks throughout the inference stack; improve the reliability and observability of production systems and respond to outages; and explore novel research and implement LLM inference optimizations.
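As a rough illustration of "developing APIs for AI inference," the sketch below shows a minimal request/response endpoint using FastAPI as an assumed framework. The route name, schema, and placeholder model call are hypothetical and do not describe the company's actual service.

```python
# Illustrative only: a minimal inference API sketch. FastAPI is an assumed
# choice; the endpoint, schema, and model call are hypothetical placeholders.
import time

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 64

class GenerateResponse(BaseModel):
    text: str
    latency_ms: float

@app.post("/v1/generate", response_model=GenerateResponse)
def generate(req: GenerateRequest) -> GenerateResponse:
    start = time.perf_counter()
    # Placeholder for a real model call (e.g. a batched LLM inference backend).
    text = req.prompt[: req.max_tokens]
    latency_ms = (time.perf_counter() - start) * 1e3
    return GenerateResponse(text=text, latency_ms=latency_ms)
```

Such a service would typically be served with an ASGI server (for example, `uvicorn module:app`) and instrumented with latency and error metrics to support the reliability and observability responsibilities described above.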
Advanced answer engine providing reliable information
Perplexity AI provides an advanced answer engine that delivers accurate and reliable responses to user queries. The platform uses current sources to ensure the information is both precise and relevant. It caters to a wide audience, including individuals looking for quick answers and businesses needing detailed information. Unlike many competitors, Perplexity AI emphasizes high-quality, source-backed answers, making it a valuable resource for users seeking trustworthy data. The company's goal is to meet the increasing demand for immediate access to reliable information, generating revenue through subscription fees, advertising, and partnerships.