£500 per day
Inside IR35
Remote
London
Summary: The role of ML Engineer involves working with a consultancy that supports the public sector, focusing on building and deploying machine learning models in production environments. The position requires strong programming skills in Python and experience with various machine learning frameworks and tools. The role is remote with occasional travel to London and is classified as inside IR35.
Key Responsibilities:
- Build and deploy machine learning models in a production environment.
- Utilize programming skills and expertise in Python.
- Work with agentic or RAG frameworks like LangChain or LlamaIndex.
- Use tools for Large Language Models via API or locally (e.g., HuggingFace transformers).
- Leverage managed AI services and foundation models from major cloud providers.
- Engage with conversational AI platforms (e.g., Google Dialogflow, Amazon Lex).
- Apply core Python ML libraries and deep learning frameworks.
Key Skills:
- Proven experience in building and deploying machine learning models.
- Strong programming skills in Python.
- Hands-on experience with agentic or RAG frameworks.
- Familiarity with tools for Large Language Models.
- Experience with managed AI services from cloud providers.
- Knowledge of conversational AI platforms.
- Understanding of core Python ML libraries and deep learning frameworks.
Salary (Rate): £400 - £500 per day (depending on experience)
City: London
Country: United Kingdom
Working Arrangements: Remote (occasional London travel)
IR35 Status: Inside IR35
Seniority Level: undetermined
Industry: IT
I am working with a consultancy that supports the public sector, looking for multiple ML Engineers.
- Inside IR35
- £400 - £500 per day (depending on experience level)
- Remote (occasional London travel)
Requirements:
- Proven experience building and deploying machine learning models in a production environment.
- Strong programming skills and deep expertise in Python.
- Hands-on experience building with agentic or RAG (Retrieval-Augmented Generation) frameworks like LangChain or LlamaIndex.
- Familiarity with tools for working with Large Language Models via API or in a local context (e.g. HuggingFace transformers).
- Practical experience using managed AI services and foundation models from a major cloud provider (e.g., Amazon Bedrock, Google Vertex AI, Azure AI Services).
- Experience with a major conversational AI platform (Google Dialogflow, Amazon Lex, Rasa, or similar).
- A solid understanding of core Python ML libraries (Keras, scikit-learn, Pandas) and deep learning frameworks (TensorFlow, PyTorch).
Desirable (but not essential) experience:
- Working with tools/interfaces for AI applications, e.g. the Model Context Protocol (MCP).
- Fine-tuning ML and DL models using tools and techniques such as Axolotl, LoRA, or QLoRA.
- Experience with multi-agent orchestration frameworks (LangGraph, AutoGen, CrewAI).
- Experience with observability and evaluation tools for LLMs such as TruLens or Helicone.
- Experience with AI safety and reliability frameworks like Guardrails AI.