Senior Software Engineer - Distributed Inference
NVIDIA - Full-Time
- Senior (5 to 8 years)
Candidates should have expertise in CUDA or OpenCL, with demonstrated experience developing CUDA kernels or equivalent technologies. Proficiency in Python is required for AI and performance-optimization work, along with hands-on knowledge of deep learning frameworks such as PyTorch or TensorFlow. A strong understanding of CPU and GPU architecture is essential for analyzing and optimizing performance at the hardware level.
The engineer will optimize inference engines to improve performance, efficiency, and scalability. They will strengthen scalable AI infrastructure by implementing optimizations that accelerate AI inference, develop and deploy CUDA kernels for deep learning workloads, and conduct performance analysis to identify and resolve bottlenecks. They will also engage with the AI research community, contribute to open-source projects, and improve internal documentation and tooling standards. Close collaboration with AI researchers, engineers, and infrastructure teams is essential to developing cutting-edge solutions.
Utilizes wasted energy for computing power
Crusoe Energy Systems Inc. provides digital infrastructure powered by wasted, stranded, or clean energy sources for high-performance computing and artificial intelligence. The company serves clients in the technology and energy sectors with scalable computing solutions that aim to reduce greenhouse gas emissions and support the transition to cleaner energy. Crusoe's approach converts excess natural gas and renewable energy into computing power, maximizing resource efficiency while minimizing environmental impact. Unlike many competitors, Crusoe specifically targets the intersection of energy and technology, generating revenue by supplying computing resources and technical support to enterprises that need significant computational power for applications such as AI and machine learning.