Groq

Principal Technical Program Manager

Mountain View, California, United States

Compensation: Not Specified
Experience Level: Senior (5 to 8 years), Expert & Leadership (9+ years)
Job Type: Full Time
Visa: Unknown
Industries: Artificial Intelligence, Semiconductors, Hardware, Software

Position Overview

  • Location Type: Hybrid
  • Job Type: Full-time
  • Salary: $226,000 - $336,000 (Base Salary Range)

Groq delivers fast, efficient AI inference. Our LPU-based system powers GroqCloud™, giving businesses and developers the speed and scale they need. Headquartered in Silicon Valley, we are on a mission to make high performance AI compute more accessible and affordable. When real-time AI is within reach, anything is possible. Build fast.

Position: Principal Technical Program Manager

Mission: Lead, plan, and execute HW systems and SW programs at Groq.

Responsibilities

  • Drive and manage end-to-end projects spanning HW design, System SW, the ML Compiler, the Inference Engine, Infrastructure SW, and the larger AI SW stack.
  • Collaborate with Hardware Systems, Systems SW, product management, operations, and other cross-functional teams to ensure alignment on technical specifications, project goals, and deliverables.
  • Provide deep technical guidance on AI chip architectures, FPGA, interconnects, memory hierarchies, low level Embedded SW, ML Compiler, Inference Engine and Infrastructure SW to optimize performance and efficiency in datacenter deployments.
  • Drive communication internal and external to the engineering teams, including leadership reviews, core team meetings, and related forums.
  • Define and track clear goals, priorities, and milestones, ensuring alignment with Groq’s overall corporate goals.
  • Provide programmatic guidance and mentorship, and foster a culture of execution within the System SW, ML Compiler, Networking, and other SW teams.
  • Monitor project execution, identify risks, and proactively implement mitigation strategies.

Requirements

  • 7+ years of technical program management experience in hardware/SW system design or AI/ML SW development.
  • B.S. in engineering, computer science or a related technical discipline, or equivalent experience.
  • Hands-on experience with system board design, system software development, or AI/ML SW development.
  • Demonstrated experience working with external vendors (CMs, etc.), including negotiating deadlines and holding all parties accountable.
  • Understanding of AI software development and the data center SW stack.
  • Experience managing the AI software product cycle from concept to delivery.
  • Understanding of hardware and software integration challenges.
  • Experience with project & program management tools and methodologies (e.g., Agile, Scrum, JIRA).
  • Expertise with schedule tools like MS Project or Smartsheet.
  • Comfortable with ambiguity while actively seeking clarity.

Ways to Stand Out From The Crowd

  • Extensive experience in system design and end-to-end SW development, including AI chip development.
  • Deep knowledge of AI HW systems design, ML Compiler design, Inference Engine design, and Infrastructure SW design.
  • Knowledge of LLVM and compiler architecture.
  • Experience with AI data center and cloud markets, technological and business trends, requirements, and ecosystem partners.

Attributes of a Groqster

  • Humility - Egos are checked at the door
  • Collaborative & Team Savvy - We make up the smartest person in the room, together
  • Growth & Giver Mindset - Learn it all versus know it all, we share knowledge generously
  • Curious & Innovative - Take a creative approach to projects, problems, and design
  • Passion, Grit, & Boldness - no limit thinking, fueling informed risk taking

Application Instructions

  • If this sounds like you, we’d love to hear from you!

Company Information

  • Compensation: At Groq, a competitive base salary is part of our comprehensive compensation package, which includes equity.

Skills

Technical Program Management
HW-SW co-design
AI
ML
FPGA
Embedded SW
ML Compiler
Inference Engine
Datacenter Deployments
Hardware Systems
System SW
Networking

Groq

AI inference technology for scalable solutions

About Groq

Groq specializes in AI inference technology, providing the Groq LPU™, which is known for its high compute speed, quality, and energy efficiency. The Groq LPU™ is designed to handle AI processing tasks quickly and effectively, making it suitable for both cloud and on-premises applications. Unlike many competitors, Groq's products are designed, fabricated, and assembled in North America, which helps maintain high standards of quality and performance. The company targets a variety of clients across different industries that require fast and efficient AI processing capabilities. Groq's goal is to deliver scalable AI inference solutions that meet the growing demands for rapid data processing in the AI and machine learning market.

Headquarters: Mountain View, California
Year Founded: 2016
Total Funding: $1,266.5M
Company Stage: Series D
Industries: AI & Machine Learning
Employees: 201-500

Benefits

Remote Work Options
Company Equity

Risks

Increased competition from SambaNova Systems and Gradio in high-speed AI inference.
Geopolitical risks in the MENA region may affect the Saudi Arabia data center project.
Rapid expansion could strain Groq's operational capabilities and supply chain.

Differentiation

Groq's LPU offers exceptional compute speed and energy efficiency for AI inference.
The company's products are designed and assembled in North America, ensuring high quality.
Groq emphasizes deterministic performance, providing predictable outcomes in AI computations.

Upsides

Groq secured $640M in Series D funding, boosting its expansion capabilities.
Partnership with Aramco Digital aims to build the world's largest inferencing data center.
Integration with Touchcast's Cognitive Caching enhances Groq's hardware for hyper-speed inference.
