\u003C/p>\u003Cp>Our companies’ performance is highly dependent on the performance of our data processing pipelines. You will regularly need to appeal to all aspects of your developers’ skillset to find new and innovative ways to optimize your code. Some examples of the ‘everyday’ challenges you would encounter are and have to implement:\u003C/p>\u003Cul>\u003Cli>\u003Cp>low-latency networking code for fast communication with exchanges\u003C/p>\u003C/li>\u003Cli>\u003Cp>context-switch-free code\u003C/p>\u003C/li>\u003Cli>\u003Cp>design and implement custom data storage structures with minimalistic footprint\u003C/p>\u003C/li>\u003Cli>\u003Cp>data pipelines using a streaming paradigm\u003C/p>\u003C/li>\u003Cli>\u003Cp>complex trading logic for the decision engine with the lowest possible compute time\u003C/p>\u003C/li>\u003Cli>\u003Cp>reimplement existing code using advanced features of the CPU (SIMD)\u003Cbr />\u003C/p>\u003C/li>\u003C/ul>\u003Cp>To accommodate the growth of the company, we spend increasing efforts on the maintainability and manageability of our large, highly optimized and multithreaded codebases while preserving its main purpose (low latency). You will have to maneuver yourself between these worlds to get the best results.\u003C/p>\u003Cp style=\"min-height: 1.7em;\">\u003C/p>\u003Cp>Next to writing code; the Technology team builds and maintains the global (hardware) infrastructure to facilitate the trading. In your job you will be regularly involved in all aspects of the pipeline and different tech stacks; from hardware compositions, network design to data logging pipelines for the traders and quants.\u003Cbr />\u003C/p>\u003Cbr>\u003C/br>\u003Cp>\u003Cstrong>Who are you\u003C/strong>\u003C/p>\u003Cp>Due to the growth of our organization, we are looking for more than one C/C++ Engineer. As the new team member, you bring the following skillset:\u003C/p>\u003Cul>\u003Cli>\u003Cp>extensive and thorough knowledge of C and C++ in Linux\u003C/p>\u003C/li>\u003Cli>\u003Cp>an understanding of – and the ability to verify - the assembly the compiler will produce from the code you write\u003C/p>\u003C/li>\u003Cli>\u003Cp>understanding what (GNU/Linux) system calls you invoke will do and cost\u003C/p>\u003C/li>\u003Cli>\u003Cp>knowledge of x64 hardware and how to use it efficiently\u003C/p>\u003C/li>\u003Cli>\u003Cp>understanding what storage structures to select or implement given their use (time complexity)\u003C/p>\u003C/li>\u003Cli>\u003Cp>ability to work with debuggers and profilers\u003C/p>\u003C/li>\u003Cli>\u003Cp>operative coding (git, documentation)\u003C/p>\u003C/li>\u003Cli>\u003Cp>ability to absorb from and provide knowledge to the team\u003C/p>\u003C/li>\u003Cli>\u003Cp>pro-active, a self-starter, honest, flexible and stress resistant\u003Cbr />\u003C/p>\u003C/li>\u003C/ul>\u003Cp>Next to the above, having a background in one of the specializations are nice to have:\u003C/p>\u003Cul>\u003Cli>\u003Cp>experience with network engineering; either (low latency) networking hardware deployment or networking protocol implementations;\u003C/p>\u003C/li>\u003Cli>\u003Cp>experience with Big Data engineering and knowledge of Big Data best practices and implementations best suited for different use cases.\u003C/p>\u003C/li>\u003C/ul>\u003Cp>\u003Cstrong>\u003Cbr />What we offer\u003C/strong>\u003C/p>\u003Cul>\u003Cli>\u003Cp>Excellent remuneration (including discretionary bonus)\u003C/p>\u003C/li>\u003Cli>\u003Cp>Fun and inspiring work environment\u003C/p>\u003C/li>\u003Cli>\u003Cp>Experienced and knowledgeable colleagues\u003C/p>\u003C/li>\u003Cli>\u003Cp>25 vacation days\u003C/p>\u003C/li>\u003Cli>\u003Cp>Allowance for commuting expenses\u003C/p>\u003C/li>\u003Cli>\u003Cp class=\"MsoListParagraphCxSpLast\">Additional benefits: yearly office trip(s), fitness allowance, Friday afternoon drinks, weekly massages, excellent free lunch\u003C/p>\u003C/li>\u003C/ul>\u003Cp style=\"min-height: 1.7em;\">\u003C/p>\u003Cp>A pre-employment screening is part of our application process.\u003C/p>","https://mathrix.recruitee.com/o/senior-cc-low-latency-engineer-nl",{"id":133,"name":134,"urlSafeSlug":134,"logo":135},[164],{"city":165,"region":165,"country":166},"Utrecht","Netherlands","2022-03-15T00:00:00Z",23,"Candidates should possess extensive knowledge and experience in building, testing, monitoring, and maintaining large-scale parallel applications and databases, with a strong understanding of C and C++ in Linux. They should also have an understanding of assembly code, compiler output, system calls, x64 hardware, and storage structures, along with the ability to work with debuggers and profilers and operate using version control systems.","As a Senior C/C++ Low Latency Engineer at d-Matrix, the focus will be on developing code that facilitates automated trades on various exchanges worldwide, optimizing data processing pipelines, implementing low-latency networking code, designing context-switch-free code, creating custom data storage structures, developing complex trading logic, and re-implementing existing code using advanced CPU features. The role also involves building and maintaining the global (hardware) infrastructure, participating in data logging pipelines, and maneuvering between code optimization and maintainability efforts while preserving low-latency performance.",1,{"employment":173,"compensation":177,"experience":178,"visaSponsorship":183,"location":185,"skills":186,"industries":197},{"type":174},{"id":175,"name":176,"description":19},"7b45c8cf-5aad-4473-9b42-e655134195c8","Full Time",{"minAnnualSalary":19,"maxAnnualSalary":19,"currency":19,"details":19},{"experienceLevels":179},[180],{"id":181,"name":182,"description":19},"9f0ed8d0-b24f-43cb-84c3-62a181e19994","Senior (5 to 8 years)",{"type":184},3,{"type":11},[187,188,189,190,191,192,193,194,195,196],"C","C++","Low Latency Programming","Parallel Applications","Networking","Data Structures","Streaming Data Pipelines","CPU SIMD","Multithreading","Performance Optimization",[198,201,203],{"id":199,"name":200},"00000000-0000-0000-0000-000000000000","Financial Technology",{"id":199,"name":202},"High Frequency Trading",{"id":199,"name":204},"Data Processing",{"id":206,"title":207,"alternativeTitles":208,"slug":224,"jobPostId":206,"description":225,"isReformated":51,"applyUrl":226,"company":134,"companyOption":227,"locations":228,"listingDate":230,"listingSite":231,"isRemote":51,"requirements":232,"responsibilities":233,"status":171,"expiryDate":19,"isGoogleIndexed":51,"summary":234},"fcc145e5-f537-4218-a4ef-4ca2163442ee","Inference Runtime Systems - Software Engineer, Staff",[209,210,211,212,213,214,215,216,217,218,219,220,221,222,223],"Staff Software Engineer, Inference Runtime","Senior Software Engineer, Machine Learning Runtime","C++ Software Engineer, HPC Inference","Distributed Systems Engineer, ML Inference","Software Architect, AI Inference Runtime","Lead Inference Runtime Developer","Staff Engineer, PyTorch Integration","High-Performance Computing Software Engineer, Inference","Machine Learning Systems Engineer, Staff","C++ Developer, Inference Optimization","Software Engineer, Low Latency Inference","Staff ML Runtime Engineer","Senior C++ Engineer, Distributed ML","AI Hardware Software Engineer, Staff","Inference Systems Lead Engineer","inference-runtime-systems-software-engineer-staff-fcc145e5-f537-4218-a4ef-4ca2163442ee","### Position Overview\n- **Location Type:** Hybrid (Onsite at Santa Clara, CA headquarters 3 days per week)\n- **Job Type:** Full-Time\n- **Salary:** $142.5K - $230K\nd-Matrix is focused on unleashing the potential of generative AI. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture is one of respect and collaboration, valuing humility and direct communication. We are seeking individuals passionate about tackling challenges and driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.\n\n### Requirements\n- **Education:**\n - Bachelor’s degree with a minimum of 6+ years of professional experience in software development with a focus on C++.\n - Master’s degree preferred in computer science, Engineering, or a related field with 3+ years of professional experience in software development with a focus on C++.\n- **Experience:**\n - Experience in architecting and building complex software systems.\n - Experience with distributed systems or high-performance computing (HPC) applications.\n - Familiarity with PyTorch internals or similar machine learning frameworks.\n- **Technical Skills:**\n - Strong proficiency in modern C++ (C++11 and above) and Python.\n - Solid understanding of software design patterns and best practices.\n - Experience with parallel and concurrent programming.\n - Proficient in CMake, Pytest, and other development tools.\n - Knowledge of GPU programming and acceleration techniques is a plus.\n\n### Responsibilities\n- **Architect and Develop:** Lead the design and implementation of a high-performance inference runtime that leverages d-Matrix's advanced hardware capabilities.\n- **Integrate Frameworks:** Integrate the inference runtime with PyTorch to enable upstream software capabilities like inference and finetuning.\n- **Collaborate:** Work closely with cross-functional teams including hardware engineers, data scientists, and product managers to define requirements and deliver integrated solutions.\n- **Optimize Performance:** Develop and implement optimization techniques to ensure low latency and high throughput in distributed and HPC environments.\n- **Code Quality:** Ensure the code quality, and performance through rigorous testing and code reviews.\n- **Documentation:** Create technical documentation to support development, deployment, and maintenance activities.\n\n### Company Information\n- **Company:** d-Matrix\n- **Focus:** Unleashing the potential of generative AI.\n- **Culture:** Respectful, collaborative, valuing humility and direct communication.","https://jobs.ashbyhq.com/d-matrix/8dfed088-0b31-4996-a7d6-f27e055b1de2",{"id":133,"name":134,"urlSafeSlug":134,"logo":135},[229],{"city":23,"region":24,"country":25},"2025-03-12T17:08:19.545Z",2,"Candidates should have a Bachelor's degree with a minimum of 6+ years of professional experience in software development focused on C++, or a Master's degree preferred in computer science, engineering, or a related field with 3+ years of relevant experience. A strong background in architecting and building complex software systems is required, along with experience in distributed systems or high-performance computing (HPC) applications. Familiarity with PyTorch internals or similar machine learning frameworks is a significant advantage. Technical skills should include strong proficiency in modern C++ (C++11 and above) and Python, a solid understanding of software design patterns and best practices, experience with parallel and concurrent programming, and proficiency in CMake and Pytest. Knowledge of GPU programming and acceleration techniques is a plus.","The Staff Software Engineer will lead the design and implementation of a high-performance inference runtime that leverages d-Matrix's advanced hardware capabilities. They will integrate the inference runtime with PyTorch to enable upstream software capabilities like inference and finetuning. The role involves collaborating closely with cross-functional teams including hardware engineers, data scientists, and product managers to define requirements and deliver integrated solutions. Additionally, they will develop and implement optimization techniques to ensure low latency and high throughput in distributed and HPC environments, ensure code quality and performance through rigorous testing and code reviews, and create technical documentation to support development, deployment, and maintenance activities.",{"employment":235,"compensation":237,"experience":241,"visaSponsorship":247,"location":248,"skills":249,"industries":260},{"type":236},{"id":175,"name":176,"description":19},{"minAnnualSalary":238,"maxAnnualSalary":239,"currency":240,"details":19},142500,230000,"USD",{"experienceLevels":242},[243,246],{"id":244,"name":245,"description":19},"d9dd41a2-3551-412f-981e-de2bc1e7bb34","Mid-level (3 to 4 years)",{"id":181,"name":182,"description":19},{"type":184},{"type":231},[188,250,251,252,253,254,255,256,257,258,259],"Python","CMake","Pytest","GPU Programming","Distributed Systems","HPC","PyTorch","Software Design Patterns","Parallel Programming","Concurrent Programming",[261,262],{"id":132,"name":131},{"id":263,"name":264},"7cb8f31d-5490-4ab3-a4ee-750ac6d23278","Hardware",{"id":266,"title":267,"alternativeTitles":268,"slug":279,"jobPostId":266,"description":280,"isReformated":51,"applyUrl":281,"company":134,"companyOption":282,"locations":283,"listingDate":285,"listingSite":231,"isRemote":51,"requirements":286,"responsibilities":287,"status":171,"expiryDate":19,"isGoogleIndexed":51,"summary":288},"5ebbc663-d8da-4ba3-92d2-c958d9faa69e","AI Hardware Systems Engineer, Principal",[269,270,271,272,273,274,275,276,277,278],"Principal AI Hardware Engineer","Senior AI Hardware Systems Architect","Lead AI Accelerator Hardware Engineer","Principal GenAI Hardware Development Engineer","Senior AI Systems Hardware Designer","Principal AI Compute Hardware Engineer","Lead AI Inference Hardware Engineer","Principal d-Matrix Hardware Systems Engineer","Senior AI Hardware Integration Engineer","Principal AI Hardware Platform Engineer","ai-hardware-systems-engineer-principal-5ebbc663-d8da-4ba3-92d2-c958d9faa69e","### Position Overview\n- **Location Type:** Hybrid (Onsite 3 days per week in Santa Clara, CA)\n- **Job Type:** FullTime\n- **Salary:** $180K - $280K\nd-Matrix is focused on unleashing the potential of generative AI. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture is one of respect and collaboration, valuing humility and direct communication. We are seeking individuals passionate about tackling challenges and driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.\n\n### Requirements\n- BS EE or CE - MSEE desired\n- 10+ years’ experience in hardware development\n- Experience in deploying products to volume production\n- Hands-on experience in the design, bring up and debug of PCBAs and chassis\n- Experience prototyping, reworking, troubleshooting board & component level issues to resolution\n- Experience using schematic capture and PCB layout tools\n- Working knowledge of Thermal and Mechanical designs\n- Familiarity with signal and power integrity concepts\n\n### Responsibilities\n- Design, develop, and deploy scalable GenAI inference solutions with d-Matrix accelerator silicon.\n- Collaborate with cross-functional teams (chip design, verification, thermal, mechanical, and software engineers) to specify, design, and integrate custom accelerators, processors, memory modules, and other components required for AI workloads.\n- Lead the bring-up of new AI systems, conducting prototype testing and validation.\n- Debug and resolve hardware-software integration issues.\n- Prepare comprehensive documentation and support the transfer of knowledge to internal and external stakeholders.\n- Stay abreast of the latest advancements in GenAI hardware and software technologies and assess their suitability for integration into d-Matrix GenAI inference solutions.\n\n### Application Instructions\n- d-Matrix does not accept resumes or candidate submissions from external agencies.\n- Interested individuals should apply directly through official channels.\n\n### Company Information\n- **Company:** d-Matrix\n- **Culture:** Inclusive, valuing humility and direct communication.\n- **Equal Opportunity Employment:** d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status.","https://jobs.ashbyhq.com/d-matrix/37378272-5b3d-43af-b182-b599d8e1e3fc",{"id":133,"name":134,"urlSafeSlug":134,"logo":135},[284],{"city":23,"region":24,"country":25},"2025-03-12T17:08:19.795Z","Candidates should have a BS in Electrical Engineering or Computer Engineering, with a Master's degree preferred. A minimum of 10 years of experience in hardware development is required, along with experience in deploying products to volume production. Hands-on experience in the design, bring-up, and debugging of PCBAs and chassis is essential, as well as proficiency in using schematic capture and PCB layout tools. Knowledge of thermal and mechanical designs, along with familiarity with signal and power integrity concepts, is also expected.","The AI Hardware Systems Engineer will design, develop, and deploy scalable GenAI inference solutions using d-Matrix accelerator silicon. They will collaborate with cross-functional teams to specify, design, and integrate custom accelerators, processors, and memory modules for AI workloads. The role involves leading the bring-up of new AI systems, conducting prototype testing and validation, and debugging hardware-software integration issues. Additionally, the engineer will prepare comprehensive documentation and stay updated on advancements in GenAI hardware and software technologies.",{"employment":289,"compensation":291,"experience":295,"visaSponsorship":299,"location":300,"skills":301,"industries":312},{"type":290},{"id":175,"name":176,"description":19},{"minAnnualSalary":292,"maxAnnualSalary":293,"currency":240,"details":294},180000,280000,"Base salary with potential bonuses.",{"experienceLevels":296},[297,298],{"id":244,"name":245,"description":19},{"id":181,"name":182,"description":19},{"type":184},{"type":231},[302,303,304,305,306,307,308,309,310,311],"PCBAs","PCB Layout","Schematic Capture","Thermal Design","Mechanical Design","Signal Integrity","Power Integrity","Hardware Development","AI Inference","GenAI",[313,314,315],{"id":132,"name":131},{"id":263,"name":264},{"id":129,"name":128},{"id":317,"title":318,"alternativeTitles":319,"slug":335,"jobPostId":317,"description":336,"isReformated":51,"applyUrl":337,"company":134,"companyOption":338,"locations":339,"listingDate":341,"listingSite":231,"isRemote":51,"requirements":342,"responsibilities":343,"status":171,"expiryDate":19,"isGoogleIndexed":51,"summary":344},"3675bb1d-381a-4ca9-8640-8425321f4b54","Machine Learning Engineer, Staff - Model Factory",[320,321,322,323,324,325,326,327,328,329,330,331,332,333,334],"Staff Machine Learning Engineer","Senior Machine Learning Engineer - Model Deployment","Machine Learning Infrastructure Engineer","MLOps Engineer - Model Factory","AI Deployment Engineer","Senior AI Engineer - Inference Optimization","Machine Learning Systems Engineer","Staff ML Engineer - Productionization","Deep Learning Deployment Specialist","Machine Learning Performance Engineer","AI Model Optimization Engineer","Distributed ML Systems Engineer","ML Inference Engineer","Staff ML Engineer - Scalability","AI Infrastructure Specialist","machine-learning-engineer-staff-model-factory-3675bb1d-381a-4ca9-8640-8425321f4b54","### Position Overview\n- **Location Type:** Hybrid (Onsite at Santa Clara, CA headquarters 3-5 days per week)\n- **Job Type:** Full-Time\n- **Salary:** $155K - $250K\nd-Matrix is a pioneering company focused on unleashing the potential of generative AI. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture values humility, direct communication, and inclusivity. We are seeking individuals passionate about tackling challenges and driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.\n\n### Job Title:\nMachine Learning Engineer - d-Matrix Model Factory\n\n### What You Will Do:\n* Design, build, and optimize machine learning deployment pipelines for large-scale models.\n* Implement and enhance model inference frameworks.\n* Develop automated workflows for model development, experimentation, and deployment.\n* Collaborate with research, architecture, and engineering teams to improve model performance and efficiency.\n* Work with distributed computing frameworks (e.g., PyTorch/XLA, JAX, TensorFlow, Ray) to optimize model parallelism and deployment.\n* Implement scalable KV caching and memory-efficient inference techniques for transformer-based models.\n* Monitor and optimize infrastructure performance across different levels of custom hardware hierarchy - cards, servers, and racks, which are powered by the d-Matrix custom AI chips.\n* Ensure best practices in ML model versioning, evaluation, and monitoring.\n\n### What You Will Bring:\n* **Strong programming skills in Python** and experience with ML frameworks like PyTorch, TensorFlow, or JAX.\n* **Hands-on experience with model optimization, quantization, and inference acceleration.**\n* **Deep understanding of Transformer architectures, attention mechanisms, and distributed inference (Tensor Parallel, Pipeline Parallel, Sequence Parallel).**\n* **Knowledge of quantization (INT8, BF16, FP16) and memory-efficient inference techniques.**\n* **Solid grasp of software engineering best practices, including CI/CD, containerization (Docker, Kubernetes), and MLOps.**\n* **Strong problem-solving skills and ability to work**\n\n### Company Information:\nd-Matrix is a pioneering company specializing in data center AI inferencing solutions. Utilizing innovative in-memory computing techniques, d-Matrix develops cutting-edge hardware and software platforms designed to enhance the efficiency and scalability of generative AI applications. The Model Factory team at d-Matrix is at the heart of cutting-edge AI and ML model development and deployment. We focus on building, optimizing, and deploying large-scale machine learning models with a deep emphasis on efficiency, automation, and scalability for the d-Matrix hardware.","https://jobs.ashbyhq.com/d-matrix/f40a8854-59fb-4f87-84e0-156a4a6a2cea",{"id":133,"name":134,"urlSafeSlug":134,"logo":135},[340],{"city":23,"region":24,"country":25},"2025-03-12T17:08:20.258Z","Candidates must possess strong programming skills in Python and have experience with machine learning frameworks such as PyTorch, TensorFlow, or JAX. Hands-on experience with model optimization, quantization, and inference acceleration is required, along with a deep understanding of Transformer architectures and distributed inference techniques. Knowledge of quantization methods and memory-efficient inference techniques is essential, as well as a solid grasp of software engineering best practices including CI/CD and containerization technologies like Docker and Kubernetes.","The Machine Learning Engineer will design, build, and optimize machine learning deployment pipelines for large-scale models. They will implement and enhance model inference frameworks and develop automated workflows for model development, experimentation, and deployment. Collaboration with research, architecture, and engineering teams to improve model performance and efficiency is expected. The role also involves working with distributed computing frameworks to optimize model parallelism and deployment, implementing scalable KV caching and memory-efficient inference techniques, and monitoring and optimizing infrastructure performance across various levels of custom hardware.",{"employment":345,"compensation":347,"experience":350,"visaSponsorship":354,"location":355,"skills":356,"industries":370},{"type":346},{"id":175,"name":176,"description":19},{"minAnnualSalary":348,"maxAnnualSalary":349,"currency":240,"details":294},155000,250000,{"experienceLevels":351},[352,353],{"id":244,"name":245,"description":19},{"id":181,"name":182,"description":19},{"type":184},{"type":231},[250,256,357,358,359,360,361,362,363,364,365,366,367,368,369],"TensorFlow","JAX","Model Optimization","Quantization","Inference Acceleration","Transformer Architectures","Attention Mechanisms","Distributed Inference","Tensor Parallel","Pipeline Parallel","Sequence Parallel","KV Caching","Ray",[371,372],{"id":132,"name":131},{"id":263,"name":264},{"id":374,"title":375,"alternativeTitles":376,"slug":392,"jobPostId":374,"description":393,"isReformated":51,"applyUrl":394,"company":134,"companyOption":395,"locations":396,"listingDate":398,"listingSite":231,"isRemote":51,"requirements":399,"responsibilities":400,"status":171,"expiryDate":19,"isGoogleIndexed":51,"summary":401},"a5b41a0e-0920-49f0-8165-49e1d1d79bd1","ML Compiler Software Engineering Technical Lead",[377,378,379,380,381,382,383,384,385,386,387,388,389,390,391],"AI Compiler Technical Lead","MLIR Compiler Lead Engineer","LLVM ML Compiler Lead","Technical Lead, Machine Learning Compiler","Lead ML Compiler Engineer","Compiler Engineering Lead (AI/ML)","Senior ML Compiler Architect","AI Compiler Development Lead","Machine Learning Compiler Team Lead","Technical Lead, AI Compiler Framework","Lead Software Engineer, ML Compiler","ML Compiler Project Lead","AI/ML Compiler Technical Manager","Lead Compiler Engineer, NLP Models","ML Compiler Solutions Lead","ml-compiler-software-engineering-technical-lead-a5b41a0e-0920-49f0-8165-49e1d1d79bd1","### Position Overview\n- **Location Type:** Hybrid (Onsite 3 days/week at Santa Clara, CA headquarters)\n- **Job Type:** Full-Time\n- **Salary:** $196K - $300K\nd-Matrix is focused on unleashing the potential of generative AI. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture is one of respect and collaboration, valuing humility and direct communication. We are seeking individuals passionate about tackling challenges and driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.\n\n### Role: MLIR Software Engineering Technical Lead\n\n### What You Will Do:\n- Design and implement the MLIR-based compiler framework.\n- Oversee the development of a compiler that partitions and maps large-scale NLP models to a scalable, multi-chiplet, parallel processing architecture.\n- Coordinate the scheduling of parallel tasks onto processors, data movements, and inter-processor synchronization.\n- Implement graph optimization passes, constant folding, data reshaping, padding, tiling, and backend-specific operations.\n- Support a split offline/online mapping process with just-in-time mapping to chiplets, processors, and DDR memory channels.\n- Collaborate with HW and SW architecture teams, the Pytorch front-end pre-processing team, the data science numerics team, AI kernel team, SW test group, the benchmark group, and simulator/emulation platform development teams.\n\n### Requirements:\n- Minimum: BS / MS\n- Preferred: 10+ years in ML Compiler\n- Experience establishing, growing, and/or developing engineering teams (and software teams in particular).\n- Experience with leading agile development methods (scrums, sprints, Kanban boards).\n\n### Desired Skills & Experience:\n- Familiarity with the TVM, Glow, or MLIR project.\n- Experience with the LLVM project.\n- Experience mapping graph operations to many-core processors (or spatial fabrics).\n- Understanding of trade-offs made by processor architects when implementing accelerators for DNNs, DCNNs, transformer models, and attention mechanisms.\n\n### Company Information:\n- d-Matrix is focused on unleashing the potential of generative AI.\n- Culture: Respect, collaboration, humility, direct communication, inclusivity.\n- Location: Hybrid, working onsite at Santa Clara, CA headquarters 3 days per week.","https://jobs.ashbyhq.com/d-matrix/dcf99b11-86fd-46e1-a019-b1112c42c16e",{"id":133,"name":134,"urlSafeSlug":134,"logo":135},[397],{"city":23,"region":24,"country":25},"2025-03-12T17:08:20.853Z","Candidates should have a BS or MS in Computer Science or equivalent with at least 10 years of experience in ML Compiler. Experience with AI compiler projects such as TVM, Glow, or MLIR is essential, along with familiarity with the LLVM project. A background in establishing and developing engineering teams, particularly in software, is required, as well as experience in leading agile development methods, including coordinating scrums and managing project tasks.","The ML Compiler Software Engineering Technical Lead will drive the design and implementation of the MLIR-based compiler framework. This includes overseeing the development of a compiler that partitions and maps large-scale NLP models to a multi-chiplet, parallel processing architecture. The role requires coordinating task scheduling, data movements, and inter-processor synchronization, as well as collaborating with various teams, including hardware and software architecture, data science, AI kernels, and testing groups to ensure the overall efficiency of the solution.",{"employment":402,"compensation":404,"experience":407,"visaSponsorship":413,"location":414,"skills":415,"industries":431},{"type":403},{"id":175,"name":176,"description":19},{"minAnnualSalary":405,"maxAnnualSalary":406,"currency":240,"details":19},196000,300000,{"experienceLevels":408},[409,410],{"id":181,"name":182,"description":19},{"id":411,"name":412,"description":19},"de6fc08d-b7f4-4514-8560-7f69a3611e1e","Expert & Leadership (9+ years)",{"type":184},{"type":231},[416,417,418,419,420,421,422,423,424,425,426,427,428,429,430],"MLIR","LLVM","TVM","Glow","Graph Optimization","Compiler Design","Parallel Processing","Agile Development","Scrum","Kanban","Data Reshaping","Padding","Tiling","LLM","Pytorch",[432,433,434],{"id":132,"name":131},{"id":263,"name":264},{"id":435,"name":436},"7f054cca-dcb4-431e-9f2a-c2fcf0128ec2","Robotics & Automation",{"id":438,"title":439,"alternativeTitles":440,"slug":456,"jobPostId":438,"description":457,"isReformated":51,"applyUrl":458,"company":134,"companyOption":459,"locations":460,"listingDate":462,"listingSite":231,"isRemote":51,"requirements":463,"responsibilities":464,"status":171,"expiryDate":19,"isGoogleIndexed":51,"summary":465},"ff746839-868f-48da-8874-3e3d4f9bffd9","FPGA Design Engineer, Staff ",[441,442,443,444,445,446,447,448,449,450,451,452,453,454,455],"Senior FPGA Engineer","Staff FPGA Developer","FPGA Architect","Hardware Design Engineer (FPGA)","FPGA Verification Engineer","Embedded FPGA Engineer","RISC-V FPGA Designer","AI Accelerator FPGA Engineer","FPGA Firmware Engineer","Hardware Security FPGA Engineer","FPGA Power Management Engineer","FPGA Integration Engineer","Senior Hardware Engineer (FPGA)","FPGA Solutions Architect","Digital Design Engineer (FPGA)","fpga-design-engineer-staff-ff746839-868f-48da-8874-3e3d4f9bffd9","### Position Overview\n- **Location Type:** Hybrid (Onsite at Santa Clara, CA headquarters 3 days per week)\n- **Job Type:** Full-Time\n- **Salary:** $142.5K - $237.5K\nd-Matrix is focused on unleashing the potential of generative AI. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture is one of respect and collaboration, valuing humility and direct communication. We seek individuals passionate about tackling challenges and driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.\n\n### Requirements\n- Bachelor’s degree in Electrical Engineering, Computer Engineering, or a related field, Master’s degree preferred.\n- Minimum of 5+ years of experience in FPGA design and verification.\n- Expertise in hardware design using Hardware Description Languages (HDLs) like Verilog or VHDL.\n- Familiarity with RISC-V architecture and embedded systems development.\n- Understanding of hardware-software integration concepts.\n- Experience with scripting languages like Python for test automation.\n- Strong analytical and problem-solving skills.\n- Excellent communication, collaboration, and teamwork abilities.\n- Thrive in dynamic environments where innovative problem-solving is key.\n- Experience with industry-standard management protocols (MCTP, PLDM, SPDM).\n- Experience with platform BMC (Baseboard Management Controller).\n- Knowledge of power management techniques (PMBus).\n- Knowledge of hardware security and secure boot concepts.\n- Experience with cloud server architectures and concepts.\n\n### Responsibilities\n- Design and verify FPGA-based solutions for d-Matrix AI inference accelerator management.\n- Define FPGA microarchitecture specifications and collaborate with stakeholders to ensure alignment with project requirements.\n- Develop resilient dual boot architecture for multi-core multi chiplet booting.\n- Design and implement hardware and software modules for platform power management, health monitoring, and telemetry data acquisition.\n- Interface with host server BMC through SMBus mailbox with management protocol overlays such as MCTP, PLDM and SPDM.\n- Integrate RISC-V CPU cores and related firmware into FPGA designs.\n- Develop eFuse controller within the FPGA.\n- Design and integrate a secure boot solution adhering to NIST standards within the FPGA to enable secure booting of d-Matrix accelerator chiplets.\n- Collaborate with cross-functional teams to ensure seamless hardware-software integration and support inference accelerator hardware bring-up and troubleshooting.\n- Author Python scripts for hardware testing and automation.\n\n### Application Instructions\n- (Information not provided in the job description)\n\n### Company Information\n- **Company:** d-Matrix\n- **Focus:** Unleashing the potential of generative AI\n- **Culture:** Respect, collaboration, humility, direct communication, inclusivity\n- **Equal Opportunity Employer:** d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and valued.","https://jobs.ashbyhq.com/d-matrix/88643089-dfb9-4900-bdf2-ed3e2185aa72",{"id":133,"name":134,"urlSafeSlug":134,"logo":135},[461],{"city":23,"region":24,"country":25},"2025-03-12T17:08:21.04Z","Candidates should have a Bachelor's degree in Electrical Engineering, Computer Engineering, or a related field, with a Master's degree preferred. A minimum of 5+ years of experience in FPGA design and verification is required. Expertise in hardware design using Hardware Description Languages (HDLs) like Verilog or VHDL is essential. Familiarity with RISC-V architecture and embedded systems development is needed, along with an understanding of hardware-software integration concepts. Experience with scripting languages like Python for test automation is necessary, as well as knowledge of industry-standard management protocols (MCTP, PLDM, SPDM) and platform BMC. Candidates should also possess strong analytical and problem-solving skills, excellent communication, collaboration, and teamwork abilities, and knowledge of power management techniques (PMBus) and hardware security concepts.","The FPGA Design Engineer will design and verify FPGA-based solutions for AI inference accelerator management and define FPGA microarchitecture specifications while collaborating with stakeholders. They will develop resilient dual boot architecture for multi-core multi-chiplet booting, design and implement hardware and software modules for platform power management, health monitoring, and telemetry data acquisition. The role includes interfacing with host server BMC through SMBus mailbox, integrating RISC-V CPU cores and related firmware into FPGA designs, and developing an eFuse controller within the FPGA. Additionally, they will design and integrate a secure boot solution adhering to NIST standards, collaborate with cross-functional teams for hardware-software integration, and author Python scripts for hardware testing and automation.",{"employment":466,"compensation":468,"experience":470,"visaSponsorship":476,"location":477,"skills":478,"industries":494},{"type":467},{"id":175,"name":176,"description":19},{"minAnnualSalary":238,"maxAnnualSalary":469,"currency":240,"details":294},237500,{"experienceLevels":471},[472,473],{"id":181,"name":182,"description":19},{"id":474,"name":475,"description":19},"6d29fb0e-c389-4488-940b-7ca93a9f10bb","Junior (1 to 2 years)",{"type":184},{"type":231},[479,480,481,250,482,483,484,485,486,487,488,489,490,491,492,493],"Verilog","VHDL","RISC-V","Hardware Description Languages","Hardware-Software Integration","FPGA Design","FPGA Verification","Test Automation","MCTP","PLDM","SPDM","Platform BMC","PMBus","Secure Boot","Cloud Server Architectures",[495,496,497],{"id":132,"name":131},{"id":263,"name":264},{"id":129,"name":128},{"id":499,"title":500,"alternativeTitles":501,"slug":517,"jobPostId":499,"description":518,"isReformated":15,"applyUrl":519,"company":134,"companyOption":520,"locations":521,"listingDate":528,"listingSite":231,"isRemote":51,"requirements":529,"responsibilities":530,"status":171,"expiryDate":19,"isGoogleIndexed":51,"summary":531},"92855e6b-4542-48c8-a8f9-23e1511deec5","AI Hardware Architect",[502,503,504,505,506,507,508,509,510,511,512,513,514,515,516],"AI Chip Architect","Machine Learning Hardware Designer","Deep Learning Accelerator Architect","AI Systems Architect (Hardware Focus)","Hardware Architect for AI/ML","Computer Architect - AI Hardware","SoC Architect for AI","AI Processor Architect","Hardware Design Engineer - AI","ML Hardware Specialist","AI Compute Architect","Hardware Architect, Neural Networks","AI Accelerator Design Architect","Machine Learning Hardware Engineer","AI Hardware Solutions Architect","ai-hardware-architect-92855e6b-4542-48c8-a8f9-23e1511deec5","# AI Hardware Architect\n\n## Position Overview\n* **Location Type:** Hybrid (Santa Clara Headquarters, Seattle Office, Remote)\n* **Job Type:** Full-Time\n* **Salary:** Not specified\n\nAs a leading innovator in generative AI, d-Matrix is seeking a talented AI Hardware Architect to join our Hardware Architecture team. This role focuses on defining the next generation of our AI solutions, encompassing data plane and control plane design within SoCs. You'll be responsible for design space exploration, workload characterization, and ISA design, working closely with software and hardware experts to build architectural solutions from concept to specification.\n\n## Requirements\n### Education\n* Minimum PhD in Computer Science, Engineering, or a related field + 15 or more years of industry experience.\n\n### Technical Skills\n* Strong grasp of Computer Architecture, HW/SW Co-design, digital design, and machine learning fundamentals.\n* Experience with Data parallel architectures and SIMD/Vector extensions.\n* Experience with workload characterization and competitive analysis.\n* Experience with performance modeling, analysis, and correlation (w/ RTL) of GPU/AI Accelerator architectures.\n* Proficient in C/C++ or Python development in a Linux environment and using standard development tools.\n\n### Soft Skills\n* Self-motivated team player with a strong sense of ownership and leadership.\n\n## Responsibilities\n* Design, model, and drive new architectural features for next-generation AI hardware.\n* Evaluate the performance of cutting-edge AI workloads.\n* Collaborate with a team of hardware architects, software (ML, Systems, Compiler), and hardware (mixed signal, DSP, CPU) experts.\n* Build and scale architectural solutions within a tight development window.\n* Contribute to the design space exploration and workload mapping within the SoC.\n* Design and specify AI Accelerators.\n\n## Application Instructions\n* The job description does not provide specific application instructions. Interested candidates should visit d-Matrix’s careers page or contact the company directly to inquire about application procedures.\n\n## Company Information\n* **Company:** d-Matrix\n* **Focus:** Unleashing the potential of generative AI.\n* **Culture:** Values humility, direct communication, inclusivity, and execution.\n* **Equal Opportunity Employer:** d-Matrix is proud to be an equal opportunity workplace and affirmative action employer, committed to fostering an inclusive environment.","https://jobs.ashbyhq.com/d-matrix/165364d7-71d3-41c0-b9d3-6ea752126e78",{"id":133,"name":134,"urlSafeSlug":134,"logo":135},[522,523,526],{"city":23,"region":24,"country":25},{"city":524,"region":525,"country":25},"Seattle","Washington",{"city":19,"region":19,"country":527},"Remote","2025-04-01T01:06:19.64Z","Candidates must possess a minimum of a PhD in Computer Science, Engineering, or a related field, along with a minimum of 15 years of industry experience. Strong knowledge of computer architecture, hardware/software co-design, digital design, and machine learning fundamentals is required, as is experience with data parallel architectures and SIMD/Vector extensions. Proficiency in C/C++ or Python development within a Linux environment and utilizing standard development tools is also necessary.","The AI Hardware Architect will be responsible for design space exploration, workload characterization/mapping, and ISA design spanning the data and control planes within the SoC. They will design, model, and drive new architectural features to support next-generation hardware, evaluate the performance of cutting-edge AI workloads, and work closely with a team of hardware architects, software (ML, Systems, Compiler) and hardware (mixed signal, DSP, CPU) experts to build architectural solutions from strawman to detailed specification.",{"employment":532,"compensation":534,"experience":536,"visaSponsorship":539,"location":540,"skills":541,"industries":554},{"type":533},{"id":175,"name":176,"description":19},{"minAnnualSalary":19,"maxAnnualSalary":19,"currency":19,"details":535},"Salary not specified",{"experienceLevels":537},[538],{"id":411,"name":412,"description":19},{"type":184},{"type":231},[542,543,544,545,546,547,548,549,550,551,552,250,553],"Computer Architecture","HW/SW Co-design","digital design","machine learning fundamentals","Data parallel architectures","SIMD/Vector extensions","workload characterization","performance modeling","RTL analysis","GPU/AI Accelerator architectures","C/C++","Linux development tools",[555,557,559],{"id":199,"name":556},"Semiconductors",{"id":199,"name":558},"Hardware Design",{"id":263,"name":264},{"id":561,"title":562,"alternativeTitles":563,"slug":579,"jobPostId":561,"description":580,"isReformated":15,"applyUrl":581,"company":134,"companyOption":582,"locations":583,"listingDate":585,"listingSite":231,"isRemote":51,"requirements":586,"responsibilities":587,"status":171,"expiryDate":19,"isGoogleIndexed":51,"summary":588},"d5cf7ddd-ed45-48c7-8a74-28461bdcb94e","AI Software Application Engineer, Technical Lead / Principal ",[564,565,566,567,568,569,570,571,572,573,574,575,576,577,578],"Principal AI Inference Engineer","Technical Lead, Generative AI Solutions","Senior AI/ML Field Engineer","AI Software Lead, Datacenter Products","Lead AI Application Engineer","Principal Engineer, AI Deployment","Technical Lead, AI Software Integration","Senior Generative AI Engineer","AI/ML Solutions Architect, Technical Lead","Principal Customer Engineer, AI Inference","Lead AI Software Specialist","Technical Lead, AI Performance Optimization","Senior AI Software Deployment Engineer","Principal AI/ML Application Engineer","Technical Lead, Enterprise AI","ai-software-application-engineer-technical-lead-principal-d5cf7ddd-ed45-48c7-8a74-28461bdcb94e","# AI Software Application Engineer – Technical Lead / Principal\n\n## Position Overview\n- **Location Type:** Hybrid\n- **Job Type:** FullTime\n- **Salary:** $180K - $300K\n\nd-Matrix is focused on unleashing the potential of generative AI to power the transformation of technology. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture is one of respect and collaboration. We value humility and believe in direct communication. Our team is inclusive, and our differing perspectives allow for better solutions. We are seeking individuals passionate about tackling challenges and are driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.\n\n## Requirements\n- 10+ years of experience in customer engineering and field support for enterprise-level AI and datacenter products, with a focus on AI/ML software and generative AI inference.\n- In-depth knowledge and hands-on experience with generative AI inference at scale, including the integration and deployment of AI models in production environments.\n- Experience with automation tools and scripting.\n\n## Responsibilities\n- **Customer Enablement & Support:**\n - Provide expert guidance and support to customers deploying generative AI inference models, including assisting with integration, troubleshooting, and optimizing AI/ML software stacks.\n - Respond promptly to customer queries, perform root cause analysis, and develop timely resolutions for complex issues.\n- **AI/ML Inference Optimization:**\n - Work directly with customers to understand their generative AI inference needs and deliver solutions that maximize performance across their AI workloads.\n - Collaborate with customers to implement best practices for model deployment and inference tuning.\n- **AI/ML Software Stack Installation & Validation:**\n - Lead the installation, configuration, and bring-up of d-Matrix’s AI software stack.\n - Perform functional and performance validation testing, ensuring that generative AI models run efficiently and meet customer expectations.\n- **Collaboration on Technical Collateral:**\n - Partner with internal engineering and product teams to produce developer guides, technical notes, and other supporting materials that facilitate the adoption of our AI/ML solutions by customers.\n\n## Company Information\n- **Company Focus:** Unleashing the potential of generative AI to power the transformation of technology.\n- **Culture:** Respect, collaboration, humility, direct communication, inclusivity.","https://jobs.ashbyhq.com/d-matrix/98aa2c6d-0a8c-40a3-bd8f-c775f343ff8f",{"id":133,"name":134,"urlSafeSlug":134,"logo":135},[584],{"city":23,"region":24,"country":25},"2025-05-10T08:01:50.362Z","Candidates should possess 10+ years of experience in customer engineering and field support for enterprise-level AI and datacenter products, with a focus on AI/ML software and generative AI inference. They require in-depth knowledge and hands-on experience with generative AI inference at scale, including the integration and deployment of AI models in production environments. Strong experience with automation tools and scripting is also necessary.","The AI Software Application Engineer – Technical Lead / Principal will provide expert guidance and support to customers deploying generative AI inference models, assisting with integration, troubleshooting, and optimizing AI/ML software stacks. They will work directly with customers to understand their needs and deliver solutions that maximize performance across their AI workloads, collaborating on technical collateral and leading the installation, configuration, and bring-up of d-Matrix’s AI software stack. Additionally, they will perform functional and performance validation testing and partner with internal engineering and product teams to produce developer guides and technical notes.",{"employment":589,"compensation":591,"experience":593,"visaSponsorship":596,"location":597,"skills":598,"industries":607},{"type":590},{"id":175,"name":176,"description":19},{"minAnnualSalary":292,"maxAnnualSalary":406,"currency":240,"details":592},"Base salary range for the role",{"experienceLevels":594},[595],{"id":411,"name":412,"description":19},{"type":184},{"type":231},[599,600,601,602,603,604,605,606],"AI/ML software","generative AI inference","AI model deployment","automation tools","scripting","troubleshooting","performance optimization","software stack installation",[608,610],{"id":199,"name":609},"Artificial Intelligence",{"id":199,"name":611},"Hardware & Software Innovation",["Reactive",613],{"$ssite-config":614},{"env":615,"name":616,"url":617},"production","nuxt-app","https://jobo.world/",["Set"],["ShallowReactive",620],{"company-d-Matrix":-1,"company-jobs-94e991b9-9262-404b-a288-4ef13a8d1eee-carousel":-1},"/company/d-Matrix",{}]