Groq

Sr. Director of Hardware Systems Engineering

Palo Alto, California, United States

Compensation: Not Specified
Experience Level: Expert & Leadership (9+ years)
Job Type: Full Time
Visa: Unknown
Industries: Artificial Intelligence, Semiconductors, Hardware

Requirements

Candidates should possess 12-15+ years of progressive experience in hardware systems engineering, with a proven track record of bringing complex hardware systems to market. A BS/MS/PhD in Electrical Engineering, Computer Engineering, or a related field is required. Deep expertise is needed in custom silicon/ASIC/SoC design and integration, including chip-level interfaces, packaging, power delivery and thermal management for the SoC, and integration challenges. Strong knowledge of high-speed digital design, including DDR, PCIe, Ethernet, SerDes, and other critical SoC connectivity interfaces, is essential. Experience with board-level design principles, power integrity, signal integrity, and EMI/EMC, particularly for robust SoC operation, is required. Proven ability in hardware/software co-design and debug is also necessary.

Responsibilities

The Sr. Director of Hardware Systems Engineering will lead the Hardware Systems Engineering team, with responsibility for the product roadmap, vendor selection, and oversight of manufacturing and design houses to ensure the quality and timely delivery of all technologies. This role involves defining and owning the hardware systems engineering roadmap and strategy for the AI inference platform, with a focus on SoC integration and system-level performance. Responsibilities include leading architectural definition and trade-off analysis for complex hardware systems, optimizing for performance, power, cost, and reliability, and managing relationships with key hardware component suppliers and manufacturing partners.

The individual will also recruit, mentor, and lead a high-performing team of hardware systems engineers, foster innovation, and collaborate closely with cross-functional teams such as silicon design, firmware, software, product management, operations, and supply chain. Additionally, they will provide technical leadership and oversight for all phases of hardware system development, lead the design and validation of high-speed interfaces, establish and implement robust reliability and qualification processes, and identify and mitigate technical risks throughout the development lifecycle.

Skills

Hardware Systems Engineering
AI inference
SoC integration
System-level performance
System architecture
Board design
Server integration
Rack-scale deployments
Hardware design
Component selection
Schematic capture
PCB layout
Team leadership
Mentoring
Cross-functional collaboration
Product roadmap
Vendor selection
Manufacturing
Design houses
Quality assurance
Product delivery
Firmware
Software
Product management
Operations
Supply chain

Groq

AI inference technology for scalable solutions

About Groq

Groq specializes in AI inference technology, providing the Groq LPU™, which is known for its high compute speed, quality, and energy efficiency. The Groq LPU™ is designed to handle AI processing tasks quickly and effectively, making it suitable for both cloud and on-premises applications. Unlike many competitors, Groq's products are designed, fabricated, and assembled in North America, which helps maintain high standards of quality and performance. The company targets a variety of clients across different industries that require fast and efficient AI processing capabilities. Groq's goal is to deliver scalable AI inference solutions that meet the growing demands for rapid data processing in the AI and machine learning market.

Headquarters: Mountain View, California
Year Founded: 2016
Total Funding: $1,266.5M
Company Stage: Series D
Industries: AI & Machine Learning
Employees: 201-500

Benefits

Remote Work Options
Company Equity

Risks

Increased competition from SambaNova Systems and Gradio in high-speed AI inference.
Geopolitical risks in the MENA region may affect the Saudi Arabia data center project.
Rapid expansion could strain Groq's operational capabilities and supply chain.

Differentiation

Groq's LPU offers exceptional compute speed and energy efficiency for AI inference.
The company's products are designed and assembled in North America, ensuring high quality.
Groq emphasizes deterministic performance, providing predictable outcomes in AI computations.

Upsides

Groq secured $640M in Series D funding, boosting its expansion capabilities.
Partnership with Aramco Digital aims to build the world's largest inferencing data center.
Integration with Touchcast's Cognitive Caching enhances Groq's hardware for hyper-speed inference.
