AI Engineer & Researcher, Inference
Speechify · Full Time
Junior (1 to 2 years)
Key technologies and capabilities for this role
Common questions about this position
Candidates need an MSc or PhD in Computer Science, Electrical Engineering, or a related field, at least 3 years of proven experience in deep learning research, and at least one publication at a top-tier AI/ML conference such as NeurIPS, ICLR, or ICML.
Excellent programming skills in Python and deep learning frameworks like PyTorch are required, along with experience in software engineering best practices.
Candidates stand out with hands-on experience in LLM inference optimization (e.g., speculative decoding), work in HPC environments with large-scale GPU clusters, familiarity with serving systems such as vLLM or TensorRT-LLM, or a background in a world-class research group.
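Since the posting highlights speculative decoding as a differentiator, a toy greedy sketch may help clarify the idea: a cheap draft model proposes several tokens, and the expensive target model verifies them, accepting the agreeing prefix. This is a minimal illustration, not the employer's implementation; `target_model` and `draft_model` are hypothetical stand-ins, and real systems verify in one batched forward pass with probabilistic acceptance.

```python
def speculative_decode(target, draft, prompt, k=4, max_new=8):
    """Toy greedy speculative decoding: the draft model proposes k tokens;
    the target model verifies them, keeping the agreeing prefix and
    substituting its own token at the first mismatch."""
    seq = list(prompt)
    while len(seq) - len(prompt) < max_new:
        # Draft model proposes k tokens autoregressively (cheap).
        ctx = list(seq)
        proposal = []
        for _ in range(k):
            t = draft(ctx)
            proposal.append(t)
            ctx.append(t)
        # Target model verifies the proposals.
        for t in proposal:
            if len(seq) - len(prompt) >= max_new:
                break
            expected = target(seq)
            if t == expected:
                seq.append(t)         # accepted "for free"
            else:
                seq.append(expected)  # mismatch: take target's token, re-draft
                break
    return seq


# Hypothetical stand-ins: greedy next-token functions over integer sequences.
def target_model(seq):
    return seq[-1] + 1                           # "true" model counts upward

def draft_model(seq):
    return 0 if seq[-1] == 3 else seq[-1] + 1    # cheap model errs after 3
```

Because rejected proposals are replaced by the target's own token, the output is identical to decoding with the target alone; the speed-up in real systems comes from accepting multiple drafted tokens per target pass.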
This information is not specified in the job description.
Designs GPUs and AI computing solutions
NVIDIA designs and manufactures graphics processing units (GPUs) and system-on-a-chip (SoC) units for markets including gaming, professional visualization, data centers, and automotive. Its products include GPUs tailored for gaming and professional use, as well as platforms for artificial intelligence (AI) and high-performance computing (HPC) that serve developers, data scientists, and IT administrators. NVIDIA generates revenue through the sale of hardware, software solutions, and cloud-based services, such as NVIDIA CloudXR and NGC, which enhance experiences in AI, machine learning, and computer vision. What sets NVIDIA apart from competitors is its strong focus on research and development, which allows it to maintain a leadership position in a competitive market. The company's goal is to drive innovation and provide advanced solutions that meet the needs of a diverse clientele, including gamers, researchers, and enterprises.