ML Engineer (with Data Science background)

Posted 7 days ago by Initialize

London Area, United Kingdom

Summary: The role of an ML Ops Engineer with a Data Science background involves bridging the gap between Data Scientists and IT DevOps Engineers to translate experimental ML models into scalable applications. The position requires expertise in AWS services, particularly SageMaker, and a solid understanding of Agile software development principles. The engineer will be responsible for automating workflows, packaging models, and deploying them as microservices, while also managing the entire ML lifecycle. The ideal candidate will have strong programming skills and experience in delivering end-to-end data science products.

Key Responsibilities:

  • Serve as the day-to-day liaison between Data Science and DevOps, ensuring effective deployment and integration of AI/ML solutions using AWS services.
  • Assist DevOps engineers with packaging and deploying ML models, helping them understand AI-specific requirements and performance nuances.
  • Design, develop, and deploy standalone and micro-applications to serve AI/ML models, including Hugging Face Transformers and other pre-trained architectures.
  • Build, train, and evaluate ML models using services such as AWS SageMaker, Bedrock, Glue, Athena, Redshift, and RDS.
  • Help create knowledge artefacts for Data Scientists around DevOps and ML Ops.
  • Where required, support Data Scientists hands-on with DevOps engineering issues, package installation, Docker container creation, and ML Ops tooling problems.
  • Develop and expose secure APIs using Apigee, enabling easy access to AI functionality across the organization.
  • Manage the entire ML lifecycle, from training and validation to versioning, deployment, monitoring, and governance.
  • Build automation pipelines and CI/CD integrations for ML projects using tools like Jenkins and Maven.
  • Solve common challenges faced by Data Scientists, such as model reproducibility, deployment portability, and environment standardization.
  • Assist the product owner in defining and implementing the ML Ops roadmap.
  • Support knowledge sharing and mentorship across Data Science teams, promoting a best-practice-first culture.

Key Skills:

  • Degree in computer science, economics, data science, or another technical field (e.g. maths, physics, statistics), or equivalent relevant experience.
  • Strong programming proficiency in Python (or R), with practical experience in machine learning and statistical modelling.
  • Proven experience delivering end-to-end data science products, including both experimentation and deployment.
  • Solid understanding of data cleaning, feature engineering, and model performance evaluation.
  • Demonstrated experience deploying and maintaining AI/ML models in production environments.
  • Hands-on experience with AWS Machine Learning and Data services: SageMaker, Bedrock, Glue, Kendra, Lambda, ECS Fargate, and Redshift.
  • Familiarity with deploying Hugging Face models (e.g. NLP, vision, and generative models) within AWS environments.
  • Ability to develop and host microservices and REST APIs using Flask, FastAPI, or equivalent frameworks.
  • Proficiency with SQL, version control (Git), and working with Jupyter or RStudio environments.
  • Experience integrating with CI/CD pipelines and infrastructure tools like Jenkins, Maven, and Chef.
  • Strong cross-functional collaboration skills and the ability to explain technical concepts to non-technical stakeholders.
  • Ability to work across cloud-based architectures.

Salary (Rate): £536 per day

City: London

Country: United Kingdom

Working Arrangements: London/remote

IR35 Status: undetermined

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

ML Ops engineer with Data Science background (AWS services) - London/remote - £536 per day

We are seeking an ML Ops engineer with experience in data science, DevOps, and AWS SageMaker, along with a solid understanding of Agile software development principles. In this role, you will act as a bridge between Data Scientists and IT DevOps Engineers, helping translate experimental ML models into scalable, production-ready applications. You'll play a critical role in building practical solutions to real-world data science challenges, including automating workflows, packaging models, and deploying them as microservices using AWS services.

The ideal candidate will be adept at developing end-to-end applications to serve AI/ML models, including those from platforms like Hugging Face, and will work with a modern AWS-based toolchain (SageMaker, Fargate, Bedrock).

Your core responsibilities include:

  • Serve as the day-to-day liaison between Data Science and DevOps, ensuring effective deployment and integration of AI/ML solutions using AWS services.
  • Assist DevOps engineers with packaging and deploying ML models, helping them understand AI-specific requirements and performance nuances.
  • Design, develop, and deploy standalone and micro-applications to serve AI/ML models, including Hugging Face Transformers and other pre-trained architectures.
  • Build, train, and evaluate ML models using services such as AWS SageMaker, Bedrock, Glue, Athena, Redshift, and RDS.
  • Help create knowledge artefacts for Data Scientists around DevOps and ML Ops.
  • Where required, support Data Scientists hands-on with DevOps engineering issues, package installation, Docker container creation, and ML Ops tooling problems.
  • Develop and expose secure APIs using Apigee, enabling easy access to AI functionality across the organization.
  • Manage the entire ML lifecycle, from training and validation to versioning, deployment, monitoring, and governance.
  • Build automation pipelines and CI/CD integrations for ML projects using tools like Jenkins and Maven.
  • Solve common challenges faced by Data Scientists, such as model reproducibility, deployment portability, and environment standardization.
  • Assist the product owner in defining and implementing the ML Ops roadmap.
  • Support knowledge sharing and mentorship across Data Science teams, promoting a best-practice-first culture.

What skills are required?

Minimum skills:

  • Degree in computer science, economics, data science, or another technical field (e.g. maths, physics, statistics), or equivalent relevant experience.
  • Strong programming proficiency in Python (or R), with practical experience in machine learning and statistical modelling.
  • Proven experience delivering end-to-end data science products, including both experimentation and deployment.
  • Solid understanding of data cleaning, feature engineering, and model performance evaluation.

Essential skills:

  • Demonstrated experience deploying and maintaining AI/ML models in production environments.
  • Hands-on experience with AWS Machine Learning and Data services: SageMaker, Bedrock, Glue, Kendra, Lambda, ECS Fargate, and Redshift.
  • Familiarity with deploying Hugging Face models (e.g. NLP, vision, and generative models) within AWS environments.
  • Ability to develop and host microservices and REST APIs using Flask, FastAPI, or equivalent frameworks.
  • Proficiency with SQL, version control (Git), and working with Jupyter or RStudio environments.
  • Experience integrating with CI/CD pipelines and infrastructure tools like Jenkins, Maven, and Chef.
  • Strong cross-functional collaboration skills and the ability to explain technical concepts to non-technical stakeholders.
  • Ability to work across cloud-based architectures.

Tools & Technologies:

  • AWS Services: SageMaker, Bedrock, Glue, ECS Fargate, Athena, Kendra, RDS, Redshift, Lambda, CloudWatch
  • Other Tooling: Apigee, Hugging Face, RStudio, Jupyter, Git, Jenkins, Linux
  • Languages & Frameworks: Python, R, Flask, FastAPI, SQL