Bachelor’s degree in Computer Science, Engineering, Data Science, or a closely related discipline
2-5 years of professional experience in machine learning, AI engineering, or software development with ML exposure
Proficiency in Python (including pandas, NumPy, and scikit-learn)
Basic understanding of ML concepts and model evaluation techniques
Hands-on experience with AWS (particularly Amazon SageMaker) and an understanding of cloud-based ML workflows
Familiarity with DevOps tools (e.g., Git, Docker) and infrastructure-as-code tools such as AWS CloudFormation or Terraform
Strong analytical thinking, problem-solving aptitude, and clear written and verbal communication
Responsibilities
Assist in designing, developing, and deploying ML models and algorithms under the guidance of senior engineers to address client challenges across a range of sectors
Help implement ML solutions on AWS (with emphasis on Amazon SageMaker)
Contribute to building and maintaining CI/CD pipelines using infrastructure-as-code tools such as AWS CloudFormation and Terraform to automate model training and deployment
Write clean, efficient and modular Python code using libraries such as pandas, NumPy, and scikit-learn to implement ML algorithms and data pipelines
Conduct model evaluations using metrics such as accuracy, precision, and recall
Run experiments, document results, and iterate to improve performance
Work closely with data scientists, ML engineers, and DevOps teams to integrate models into production
Participate in sprint planning, standups, and client calls to deliver clear, concise technical updates
Maintain thorough documentation of data pre-processing steps, model parameters, and deployment workflows
Follow data security best practices and ensure compliance with confidentiality requirements for highly sensitive data
Provide guidance and coaching to junior engineers
Share best practices and contribute to scaling our ML engineering capability