Solutions Architect, Inference Deployments at NVIDIA

Santa Clara, California, United States

Compensation: Not Specified
Experience Level: Senior (5 to 8 years)
Job Type: Full Time
Visa: Unknown
Industries: AI, Technology

Requirements

  • 5+ Years in Solutions Architecture with a proven track record of moving AI inference from POC to production on Kubernetes
  • Experience architecting GPU allocation using NVIDIA GPU Operator and NVIDIA NIM Operator
  • Ability to troubleshoot complex GPU orchestration issues, optimize with Multi-Instance GPU (MIG), and ensure efficient utilization in Kubernetes environments
  • Proficiency with TensorRT-LLM, Triton, and TensorRT for model optimization and serving
  • A track record of optimizing LLMs for low-latency inference in enterprise environments
  • BS or equivalent experience in CS/Engineering

Ways to stand out

  • Prior experience deploying NVIDIA NIM microservices for multi-model inference
  • Knowledge of serverless inference and FaaS patterns (e.g., Google Cloud Run, AWS Lambda, NVCF) with NVIDIA GPUs
  • NVIDIA Certified AI Engineer or similar
  • Active contributions to Kubernetes SIGs or AI inference projects (e.g., KServe, Dynamo, SGLang or similar)
  • Familiarity with networking concepts that support multi-node inference, such as MPI, LWS, or similar

Responsibilities

  • Help customers craft, deploy, and maintain scalable, GPU-accelerated inference pipelines on Kubernetes for large language models (LLMs) and generative AI workloads
  • Drive performance tuning using TensorRT/TensorRT-LLM, NVIDIA NIM, and Triton Inference Server to improve GPU utilization and model efficiency
  • Collaborate with cross-functional teams (engineering, product) and offer technical mentorship to customers implementing AI at scale
  • Architect zero-downtime deployments, autoscaling (e.g., HPA with custom metrics or equivalent), and integration with cloud-native tools (e.g., OpenTelemetry, Prometheus, Grafana)
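To make the autoscaling responsibility above concrete, here is a minimal sketch of a HorizontalPodAutoscaler (`autoscaling/v2`) manifest that scales an inference deployment on a custom per-pod metric, of the kind typically surfaced by Prometheus Adapter. The deployment name (`triton-server`) and metric name (`inference_queue_depth`) are illustrative placeholders, not names from this posting.

```python
import json

# Sketch of an autoscaling/v2 HorizontalPodAutoscaler that scales an
# inference Deployment on a custom Pods metric. Names marked "hypothetical"
# are assumptions for illustration only.
hpa = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "triton-hpa"},  # hypothetical HPA name
    "spec": {
        "scaleTargetRef": {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "name": "triton-server",  # hypothetical Deployment name
        },
        "minReplicas": 1,
        "maxReplicas": 8,
        "metrics": [
            {
                "type": "Pods",
                "pods": {
                    # hypothetical custom metric exposed via Prometheus Adapter
                    "metric": {"name": "inference_queue_depth"},
                    # scale out when the average queue depth per pod exceeds 10
                    "target": {"type": "AverageValue", "averageValue": "10"},
                },
            }
        ],
    },
}

print(json.dumps(hpa, indent=2))
```

In practice this object would be serialized to YAML and applied with `kubectl apply`; the Pods metric type averages the metric across replicas, which suits queue-depth-style signals better than raw CPU utilization for GPU inference workloads.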

Skills

Key technologies and capabilities for this role

Kubernetes, TensorRT, TensorRT-LLM, NVIDIA NIM, Triton Inference Server, GPU Operator, NVIDIA NIM Operator, Multi-Instance GPU (MIG), HPA, OpenTelemetry, Prometheus, Grafana, LLMs, generative AI

Questions & Answers

Common questions about this position

What is the base salary range for this Solutions Architect position?

The base salary range is 148,000 USD - 235,750 USD, determined by location, experience, and the pay of employees in similar positions. You will also be eligible for equity and benefits.

Is this a remote position, or is there a location requirement?

This information is not specified in the job description.

What key skills and experience are required for this role?

Candidates need 5+ years in Solutions Architecture moving AI inference from POC to production on Kubernetes, experience with GPU allocation using NVIDIA GPU Operator and NIM Operator, troubleshooting GPU orchestration with MIG, and proficiency with TensorRT-LLM, Triton, and TensorRT. A BS or equivalent in CS/Engineering is also required.

What is the team structure and company culture like at NVIDIA for this role?

You'll collaborate closely with cross-functional teams, including engineering, product, DevOps, and customer success, in a diverse work environment. NVIDIA is committed to fostering diversity and is an equal opportunity employer.

What makes a candidate stand out for this Solutions Architect role?

Stand out with prior experience deploying NVIDIA NIM microservices, serverless inference with FaaS patterns like Google Cloud Run or AWS Lambda with NVIDIA GPUs, NVIDIA Certified AI Engineer certification, contributions to Kubernetes SIGs or AI projects like KServe, and familiarity with multi-node networking like MPI.

NVIDIA

Designs GPUs and AI computing solutions

About NVIDIA

NVIDIA designs and manufactures graphics processing units (GPUs) and system on a chip units (SoCs) for various markets, including gaming, professional visualization, data centers, and automotive. Their products include GPUs tailored for gaming and professional use, as well as platforms for artificial intelligence (AI) and high-performance computing (HPC) that cater to developers, data scientists, and IT administrators. NVIDIA generates revenue through the sale of hardware, software solutions, and cloud-based services, such as NVIDIA CloudXR and NGC, which enhance experiences in AI, machine learning, and computer vision. What sets NVIDIA apart from competitors is its strong focus on research and development, allowing it to maintain a leadership position in a competitive market. The company's goal is to drive innovation and provide advanced solutions that meet the needs of a diverse clientele, including gamers, researchers, and enterprises.

Headquarters: Santa Clara, California
Year Founded: 1993
Total Funding: $19.5M
Company Stage: IPO
Industries: Automotive & Transportation, Enterprise Software, AI & Machine Learning, Gaming
Employees: 10,001+

Benefits

Company Equity
401(k) Company Match

Risks

Increased competition from AI startups like xAI could challenge NVIDIA's market position.
Serve Robotics' expansion may divert resources from NVIDIA's core GPU and AI businesses.
Integration of VinBrain may pose challenges and distract from NVIDIA's primary operations.

Differentiation

NVIDIA leads in AI and HPC solutions with cutting-edge GPU technology.
The company excels in diverse markets, including gaming, data centers, and autonomous vehicles.
NVIDIA's cloud services, like CloudXR, offer scalable solutions for AI and machine learning.

Upsides

Acquisition of VinBrain enhances NVIDIA's AI capabilities in the healthcare sector.
Investment in Nebius Group boosts NVIDIA's AI infrastructure and cloud platform offerings.
Serve Robotics' expansion, backed by NVIDIA, highlights growth in autonomous delivery services.
