Strong proficiency in Python and ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn)
Hands-on experience with NLP techniques and libraries (e.g., spaCy, Hugging Face Transformers)
Proven experience in fine-tuning LLMs for domain-specific tasks
Familiarity with software testing methodologies and QA lifecycle
Ability to work with structured and unstructured data sources
Excellent problem-solving and communication skills
Preferred Qualifications
LLM Expertise: Experience fine-tuning and deploying large language models (e.g., GPT, LLaMA) for domain-specific tasks such as test case generation, summarization, and anomaly detection
Quality Engineering Knowledge: Familiarity with functional and regression testing principles, test case design, and QA lifecycle
Automation Frameworks: Hands-on experience with tools like Selenium, Playwright, or Cypress, and integrating AI into these frameworks
MLOps & Deployment: Exposure to MLOps practices including model versioning, monitoring, and CI/CD integration using platforms like MLflow, Kubeflow, or Azure ML
Cloud & Infrastructure: Experience working with cloud platforms (Azure) and containerization tools (Docker, Kubernetes) for scalable AI deployment
Data Engineering: Ability to work with large-scale datasets, including preprocessing, feature engineering, and data pipeline development
Security & Compliance Awareness: Understanding of data privacy, model interpretability, and compliance standards relevant to AI in enterprise environments
Collaboration & Communication: Proven ability to work cross-functionally with QA, DevOps, and product teams, and to communicate technical concepts to non-technical stakeholders
Exposure to Gherkin syntax and behavior-driven development (BDD) practices
Responsibilities
Design and implement AI-driven components that support quality engineering across the SDLC
Develop NLP pipelines to extract insights from requirements, user feedback, and test artifacts
Fine-tune and deploy LLMs to support intelligent test generation, summarization, and anomaly detection
Collaborate with QA, DevOps, and product teams to integrate AI tooling into CI/CD pipelines and quality gates
Analyze historical test and defect data to identify patterns and optimize regression test coverage
Maintain and improve model performance through continuous learning and feedback loops