£660 per day
Inside
Hybrid
London Area, United Kingdom
Summary: The Data Engineer role in London involves working within a product-led data squad to build and operate data platforms and pipelines for a complex transport network. The position requires strong data engineering skills, particularly in Python and SQL, to support real-time operational decision-making. The role is hybrid, with on-site presence two days a week when required, and is classified as inside IR35. The contract runs for six months at a rate of up to £660 per day.
Key Responsibilities:
- Designing, building, and maintaining batch and streaming data pipelines that power optimisation and ML products.
- Developing robust Python-based data workflows and production-grade ingestion and transformation logic.
- Modelling and optimising data structures to support analytics, reporting, and real-time decision systems.
- Owning end-to-end data workflows: ingestion, transformation, orchestration, and integration into downstream applications.
- Working closely with Embedded Data Scientists, Engineers, and Product teams to ensure data is reliable, timely, and fit for purpose.
Key Skills:
- Strong experience in data engineering within complex, data-rich operational environments.
- Advanced Python for production systems (clean, tested, modular code).
- Expert-level SQL for building performant, reliable data models and pipelines.
- Hands-on with modern data pipeline tooling and workflow orchestration (Dagster/Airflow or similar).
- Experience designing and implementing robust ETL/ELT processes across multiple data sources.
- Solid experience with cloud platforms (AWS preferred) and modern data architectures (data lakes/warehouses).
- Familiarity with CI/CD, Git, automated testing, and infrastructure-as-code practices.
- Exposure to large-scale operational or logistics domains.
Salary (Rate): £660/day
City: London
Country: United Kingdom
Working Arrangements: hybrid
IR35 Status: inside IR35
Seniority Level: undetermined
Industry: IT
Data Engineer
London (Hybrid - 2 days on-site when required) | Up to £660/day Inside IR35 | 6 months

A rare opportunity to join a large-scale, mission-critical operations environment that runs one of the most complex transport networks in the world. You’ll sit within a modern product-led data squad, building and operating industrial-grade data platforms and pipelines that directly power optimisation and machine-learning models used in real-time operational decision-making.
To hear more, get in touch with Connor Smyth at Anson McCade.