£700 per day
Inside IR35
Hybrid
London, UK
Summary: The ML Engineer role involves developing and deploying machine learning models, working hybrid from London. The position requires a strong background in Python and relevant machine learning frameworks, with a focus on model interpretability and deployment. The contract runs for 9 months and is classified as inside IR35.
Key Responsibilities:
- Develop and deploy machine learning models in production-ready environments.
- Pre-process data and engineer features for model training.
- Utilize frameworks such as TensorFlow, PyTorch, or Keras for building complex ML models.
- Implement model interpretability and explainability techniques.
- Work with containerization and orchestration tools like Docker and Kubernetes.
Key Skills:
- 3-4 years of relevant experience in machine learning.
- Proficiency in Python and familiarity with popular libraries.
- Knowledge of frameworks for building complex ML models (e.g., TensorFlow, PyTorch, Keras).
- Ability to pre-process data and engineer features.
- Experience with deploying models in production environments.
- Familiarity with model interpretability techniques (e.g., SHAP, LIME).
Salary (Rate): £700 daily
City: London
Country: UK
Working Arrangements: hybrid
IR35 Status: inside IR35
Seniority Level: Mid-Level
Industry: IT
Role: ML Engineer
Location: London (Hybrid)
Duration: 9 Months
Day rate: £600 - £700, inside IR35
Skills & Experience required:
- 3-4 years of relevant experience
Core expected skills (level based on seniority):
- Python proficiency and familiarity with popular libraries
- Knowledge of frameworks for building complex ML models (e.g. TensorFlow, PyTorch, or Keras)
- Ability to pre-process data and engineer features (cleaning, transformation, feature extraction)
- Ability to deploy and serve models in production-ready environments, requiring knowledge of containerisation, orchestration, and model-serving platforms (e.g. Docker, Kubernetes, TensorFlow Serving)
- Familiarity with model interpretability and explainability techniques for interpreting and explaining model results (e.g. SHAP, LIME)