Principal System Cloud Architect
Groq - Full Time
Expert & Leadership (9+ years)
Groq delivers fast, efficient AI inference. Our LPU-based system powers GroqCloud™, giving businesses and developers the speed and scale they need. Headquartered in Silicon Valley, we are on a mission to make high performance AI compute more accessible and affordable. When real-time AI is within reach, anything is possible. Build fast.
The Principal Systems Architect is responsible for architecting Groq's next-generation hardware platforms to enable state-of-the-art AI/ML workloads and for guiding future hardware systems development of the most advanced AI accelerator on the market. This role solves complex technical problems and leads multidisciplinary team projects focused on delivering rack-scale accelerator solutions.
If this sounds like you, we’d love to hear from you!
Groq is an Equal Opportunity Employer that is committed to inclusion and diversity. Qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, and other protected characteristics. Our goal is to hire and promote an exceptional workforce as diverse as the global populations we serve. Groq values and celebrates diversity in thought, beliefs, talent, expression, and backgrounds. We know that our individual differences make us better.
AI inference technology for scalable solutions
Groq specializes in AI inference technology, providing the Groq LPU™, which is known for its high compute speed, quality, and energy efficiency. The LPU is designed to handle AI processing tasks quickly and effectively, making it suitable for both cloud and on-premises deployments. Unlike many competitors' products, Groq's are designed, fabricated, and assembled in North America, which helps maintain high standards of quality and performance. The company targets clients across a variety of industries that require fast, efficient AI processing capabilities. Groq's goal is to deliver scalable AI inference solutions that meet the growing demand for rapid data processing in the AI and machine learning market.