Head of MLOps at Enterpret

Bengaluru, Karnataka, India

Compensation: Not Specified
Experience Level: Expert & Leadership (9+ years)
Job Type: Full Time
Visa: Unknown
Industries: AI, Technology

Requirements

  • A minimum of 6 years' experience in MLOps and ML infrastructure, ideally with exposure to designing, deploying, and scaling machine learning systems in fast-paced, product-driven environments such as startups or high-growth companies
  • Deep expertise with AWS (SageMaker, EC2, EKS, S3, IAM), infrastructure-as-code (Terraform), and container orchestration (Docker, Kubernetes)
  • Strong Python skills, with bonus points for Go, Bash, or Rust scripting where appropriate
  • Hands-on experience with CI/CD systems like GitHub Actions, ArgoCD, or Jenkins—especially for ML model delivery
  • Proven ability to monitor and maintain production ML systems, including model drift, latency, uptime, and alerting (a minimal drift-check sketch follows this list)
  • Comfort with cloud cost optimization, resource provisioning, and auto-scaling for ML-heavy environments
  • Familiarity with model serving stacks and experimentation tools (MLflow, Langsmith, etc.)
  • Bonus: exposure to GenAI workflows (LangChain, vector DBs, RAG), encoder/LLM model tuning, reinforcement learning
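
To make the drift-monitoring requirement above concrete, here is a minimal Python sketch of the kind of check it implies: comparing a feature's live distribution against its training-time reference with a population stability index. The function names, data, and the 0.2 alert cutoff are illustrative assumptions, not part of the job description.

```python
# Minimal model-drift check sketch: population stability index (PSI)
# between a training-time reference distribution and live traffic.
import numpy as np


def population_stability_index(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """Compare a feature's reference distribution to its live distribution."""
    # Bin edges are derived from the reference (training) data.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    obs_pct = np.histogram(observed, bins=edges)[0] / len(observed)
    # Floor the proportions to avoid division by zero and log(0).
    exp_pct = np.clip(exp_pct, 1e-6, None)
    obs_pct = np.clip(obs_pct, 1e-6, None)
    return float(np.sum((obs_pct - exp_pct) * np.log(obs_pct / exp_pct)))


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    training_scores = rng.normal(0.0, 1.0, 10_000)  # reference distribution
    live_scores = rng.normal(0.3, 1.2, 10_000)      # shifted live traffic
    psi = population_stability_index(training_scores, live_scores)
    # Common rule of thumb: PSI > 0.2 suggests drift worth alerting on.
    print(f"PSI={psi:.3f}", "ALERT" if psi > 0.2 else "ok")
```

In practice a check like this would run on a schedule over logged production features and feed the team's alerting stack (the responsibilities below mention Braintrust-based observability and custom alerts).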

Responsibilities

  • Design and evolve Enterpret's ML platform for training, serving, and retraining our encoders and LLM models using AWS/Terraform/OpenAI/Anthropic
  • Build CI/CD pipelines tailored for ML, including model versioning, testing, canary releases, rollbacks, and gated production deploys (a minimal gating sketch follows this list)
  • Deploy and manage model serving systems for both real-time inference (e.g., tagging support tickets on the fly) and batch pipelines (e.g., analyzing historical product feedback); see the serving sketch after this list
  • Set up observability for model performance and data drift—using Braintrust and custom alerts to catch issues before they affect customers
  • Lead incident response, root cause analysis, and postmortems for ML systems—ensuring uptime for insights that product teams rely on, alongside governance and security
  • Track and optimize cloud usage for ML workflows, making model delivery cost-aware and aligned with product usage
  • Implement governance and security across the stack—owning IAM, data access, auditability, and model explainability where needed
  • Partner with ML and product teams to productionize GenAI and AI models powering our Knowledge Graph and Adaptive Taxonomy engine, tackling problems in retrieval, encoder/LLM fine-tuning, and reinforcement learning
  • Evaluate tools for model registry, feature stores, and orchestration—and build where needed to keep the feedback loop fast
  • Champion best practices in MLOps across the org—mentoring engineers and setting scalable foundations for the future
  • Act as a coach to our team of researchers transitioning into engineering, helping them build the capability to self-serve these tools rather than doing the work for them
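
As a concrete illustration of the "gated production deploys" responsibility above, here is a minimal, dependency-free Python sketch of a promotion gate: a candidate model is promoted only if its evaluation metrics do not regress against the current production baseline. The EvalReport fields, model version names, and thresholds are hypothetical.

```python
# Minimal promotion-gate sketch for gated production deploys.
from dataclasses import dataclass


@dataclass
class EvalReport:
    model_version: str
    f1: float
    p95_latency_ms: float


def should_promote(candidate: EvalReport, baseline: EvalReport,
                   max_f1_drop: float = 0.005, max_latency_ms: float = 250.0) -> bool:
    """Gate: quality must not regress beyond tolerance and latency must stay in budget."""
    quality_ok = candidate.f1 >= baseline.f1 - max_f1_drop
    latency_ok = candidate.p95_latency_ms <= max_latency_ms
    return quality_ok and latency_ok


if __name__ == "__main__":
    baseline = EvalReport("tagger-v41", f1=0.861, p95_latency_ms=180.0)
    candidate = EvalReport("tagger-v42", f1=0.874, p95_latency_ms=205.0)
    # A CI job would exit non-zero here to block the deploy instead of printing.
    print("promote" if should_promote(candidate, baseline) else "block")
```

In a CI pipeline this script would exit non-zero to block the rollout; canary releases and rollbacks would then be driven by the delivery tooling named in the requirements (GitHub Actions, ArgoCD, or Jenkins).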
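
And as a sketch of the real-time serving path (e.g., tagging a support ticket on the fly), the snippet below exposes a stand-in classifier behind a small FastAPI endpoint. FastAPI, the Tagger class, and the /v1/tag route are assumptions for illustration only; the actual stack could just as well be SageMaker endpoints or another serving system from the requirements.

```python
# Minimal real-time inference endpoint sketch (hypothetical names throughout).
from fastapi import FastAPI
from pydantic import BaseModel


class Ticket(BaseModel):
    ticket_id: str
    text: str


class Tagger:
    """Stand-in for a real encoder/LLM-backed classifier."""

    def predict(self, text: str) -> list[str]:
        # Placeholder logic; a real model would run inference here.
        return ["billing"] if "invoice" in text.lower() else ["general"]


app = FastAPI()
model = Tagger()  # in production, loaded from a model registry at startup


@app.post("/v1/tag")
def tag_ticket(ticket: Ticket) -> dict:
    tags = model.predict(ticket.text)
    return {"ticket_id": ticket.ticket_id, "tags": tags}

# Run with: uvicorn main:app --port 8080 (assuming this file is main.py).
# A batch pipeline would reuse Tagger.predict inside a scheduled job instead.
```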

Skills

Key technologies and capabilities for this role

AWS, Terraform, OpenAI, Anthropic, Braintrust, CI/CD, ML Pipelines, Model Versioning, Canary Releases, Rollbacks, Model Serving, Real-time Inference, Batch Pipelines, Observability, Data Drift, LLM Fine-tuning, Prompt Engineering

Questions & Answers

Common questions about this position

What is the salary or compensation for the Head of MLOps role?

This information is not specified in the job description.

Is this Head of MLOps position remote or does it require office work?

This information is not specified in the job description.

What experience and skills are required for the Head of MLOps role?

A minimum of 6 years of experience is required. The role demands expertise in designing ML platforms using AWS, Terraform, OpenAI, and Anthropic, building CI/CD pipelines for ML, model serving, observability, and MLOps best practices.

What is the team structure and reporting line for this role?

You'll work closely with ML researchers, backend engineers, and product teams, and report directly to the CTO. The role involves coaching a team of researchers transitioning into engineering to enable self-service.

What makes a strong candidate for the Head of MLOps position?

Strong candidates have at least 6 years of experience and deep knowledge of MLOps, including LLM fine-tuning, CI/CD for ML, AWS infrastructure, and experience with OpenAI/Anthropic. High ownership, the ability to mentor teams, and a focus on speed, cost, and productionizing AI models are key.

Enterpret

Transforms customer feedback into actionable insights

About Enterpret

Enterpret specializes in turning customer feedback into actionable insights that help product companies grow. The platform uses adaptive AI models to unify and categorize feedback from various sources, allowing teams to gain precise insights that inform product development. It features a user-friendly interface with easy-to-build dashboards and automated summaries, making it accessible for non-technical users. A standout feature is its semantic search, which helps users understand the intent behind customer feedback. Enterpret's Custom Unified Feedback Taxonomy structures unstructured feedback and adapts over time, ensuring insights remain relevant. The company differentiates itself by providing dedicated data auditors and customer success managers to keep the platform effective. The goal is to empower product teams to prioritize and build products that truly resonate with their customers.

Headquarters: San Francisco, California
Year Founded: 2020
Total Funding: $24.4M
Company Stage: Series A
Industries: Data & Analytics, Consumer Software, AI & Machine Learning
Employees: 11-50

Benefits

Health Insurance
Dental Insurance
Vision Insurance
401(k) Retirement Plan
401(k) Company Match
Paid Vacation
Parental Leave

Risks

Increased competition from established players like Medallia and Qualtrics.
Dependence on key clients like Canva, Notion, and Monday.com poses a risk.
Challenges in scaling operations while maintaining service quality post-Series A funding.

Differentiation

Enterpret uses adaptive AI models for precise customer feedback insights.
The platform features a Custom Unified Feedback Taxonomy for structured feedback analysis.
Enterpret offers a user-friendly interface with semantic search for non-technical users.

Upsides

Enterpret raised $20.8M in Series A to scale operations and deploy no-code AI agents.
Integration with LLMs enhances semantic search capabilities for nuanced feedback understanding.
Partnerships with platforms like Census enable seamless data integration and analytics.
