Negotiable
Undetermined
Hybrid
London Area, United Kingdom
Summary: The role of Senior ML Engineer involves designing, building, and operating scalable real-time data pipelines and machine learning platforms on AWS. This contract position requires extensive experience in managing streaming data and implementing MLOps pipelines. The role is hybrid, requiring presence in London two days a week. Candidates should have a strong background in AWS and related technologies.
Key Responsibilities:
- Build and manage real-time streaming pipelines using Kafka and Flink
- Implement micro-batch processing (5-minute, hourly, daily)
- Design and operate S3-based data pipelines and data lakes
- Set up and manage Redis clusters for low-latency data access
- Evaluate and implement MongoDB/Atlas where required
- Build and operate MLOps pipelines using AWS SageMaker (training, deployment, monitoring)
- Productionize ML models built in PyTorch
- Ensure scalability, reliability, and performance of data and ML systems
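To make the micro-batch requirement above concrete, here is a minimal, stdlib-only sketch of aligning event timestamps to fixed micro-batch windows (5-minute, hourly, or daily). The function name and parameters are illustrative, not part of the role description.

```python
from datetime import datetime, timezone

def micro_batch_window(ts: datetime, window_seconds: int = 300) -> datetime:
    """Align an event timestamp to the start of its micro-batch window.

    300 s gives 5-minute batches; pass 3600 for hourly or 86400 for daily.
    """
    epoch = int(ts.timestamp())
    start = epoch - (epoch % window_seconds)
    return datetime.fromtimestamp(start, tz=timezone.utc)

# An event at 12:07:42 UTC falls into the 12:05 five-minute window.
event = datetime(2024, 5, 1, 12, 7, 42, tzinfo=timezone.utc)
print(micro_batch_window(event))  # 2024-05-01 12:05:00+00:00
```

In practice a stream processor such as Flink provides this windowing natively (tumbling windows); the sketch only shows the bucketing arithmetic behind the "5-minute, hourly, daily" cadence.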
Key Skills:
- 2-3+ years hands-on AWS experience
- Kafka, Flink (real-time streaming pipelines)
- AWS S3 data pipelines and data lake design
- Real-time and micro-batch processing
- Redis cluster setup and management
- AWS SageMaker (training, deployment, MLOps)
- PyTorch
- Strong Python skills
Nice to Have:
- MongoDB/MongoDB Atlas
- CI/CD and Infrastructure as Code
- Experience with large-scale distributed systems
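As a hedged illustration of the "S3 data pipelines and data lake design" skill listed above, the sketch below builds a Hive-style date-partitioned S3 object key, a common layout that lets query engines such as Athena or Spark prune partitions by date and hour. The dataset name, key scheme, and file name are hypothetical examples, not details from this posting.

```python
from datetime import datetime, timezone

def s3_partition_key(dataset: str, ts: datetime, filename: str) -> str:
    """Build a Hive-style partitioned S3 key: <dataset>/dt=YYYY-MM-DD/hour=HH/<file>."""
    return f"{dataset}/dt={ts:%Y-%m-%d}/hour={ts:%H}/{filename}"

key = s3_partition_key(
    "clickstream",
    datetime(2024, 5, 1, 12, tzinfo=timezone.utc),
    "part-0000.parquet",
)
print(key)  # clickstream/dt=2024-05-01/hour=12/part-0000.parquet
```

The `dt=`/`hour=` convention is one widely used option; the exact partition granularity would depend on query patterns and data volume.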
Salary (Rate): undetermined
City: London
Country: United Kingdom
Working Arrangements: hybrid
IR35 Status: undetermined
Seniority Level: Senior
Industry: IT
We are looking for a Senior ML Engineer to design, build, and operate scalable real-time data pipelines and ML platforms on AWS. This is a hybrid contract role based in London, UK (two days a week on site). Experience: 10+ years.