Rate: £550 per day
IR35 Status: Outside
Working Arrangements: Undetermined
Location: England, United Kingdom
Summary: This role is for a Senior Data Engineer on a 4-month contract with a leading media organisation, building and optimising the data infrastructure behind machine learning and LLM initiatives. The contractor will work alongside data scientists and ML engineers to enhance data pipelines, develop production-grade ETL/ELT workflows, and ensure data quality and performance in support of advanced analytics. The role suits someone with strong skills in DBT, Airflow, and Databricks who wants to contribute in a fast-paced, innovative environment.
Salary (Rate): £550.00 per day
City: Undetermined
Country: United Kingdom
Working Arrangements: Undetermined
IR35 Status: Outside IR35
Seniority Level: Senior
Industry: IT
DATA ENGINEER - DBT / AIRFLOW / DATABRICKS - 4-MONTH CONTRACT - £450-£550 PER DAY - OUTSIDE IR35
This is an exciting opportunity for a Data Engineer to join a leading media organisation working at the forefront of data innovation. You'll play a key role in designing and building the data infrastructure that supports cutting-edge machine learning and LLM initiatives. Expect to work closely with data scientists, helping to power intelligent content and audience tools across the business.
THE COMPANY
A major player in the media sector, this company is investing heavily in its data capabilities to support everything from real-time analytics to AI-powered content discovery. The environment is collaborative, fast-moving, and focused on innovation. With strong engineering foundations already in place, they're looking for a contractor to accelerate delivery of critical pipelines and platform improvements.
THE ROLE
You'll join a skilled data team to lead the build and optimisation of scalable pipelines using DBT, Airflow, and Databricks. Working alongside data scientists and ML engineers, you'll support everything from raw ingestion to curated layers powering LLMs and advanced analytics.
Your responsibilities will include:
- Building and maintaining production-grade ETL/ELT workflows with DBT and Airflow
- Collaborating with AI/ML teams to support data readiness for experimentation and inference
- Writing clean, modular SQL and Python code for use in Databricks
- Contributing to architectural decisions around pipeline scalability and performance
- Supporting the integration of diverse data sources into the platform
- Ensuring data quality, observability, and cost-efficiency
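To give a flavour of the "data quality" responsibility above, a completeness check of the kind such a pipeline might run could look like the following. This is an illustrative sketch only, not code from the role; the record and field names are hypothetical:

```python
# Minimal data-quality check: flag records missing required fields.
# All field names here are hypothetical examples.

def check_rows(rows, required_fields=("id", "title", "published_at")):
    """Split rows into valid records and error messages based on
    simple completeness checks (field present and non-empty)."""
    valid, errors = [], []
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if not row.get(f)]
        if missing:
            errors.append(f"row {i}: missing {', '.join(missing)}")
        else:
            valid.append(row)
    return valid, errors

rows = [
    {"id": 1, "title": "Article A", "published_at": "2024-05-01"},
    {"id": 2, "title": "", "published_at": "2024-05-02"},  # empty title fails
]
valid, errors = check_rows(rows)
```

In practice checks like this would typically be expressed as DBT tests or Airflow task-level validations rather than standalone functions, but the principle of rejecting and surfacing incomplete records is the same.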
KEY SKILLS AND REQUIREMENTS
- Strong experience with DBT, Airflow, and Databricks
- Advanced SQL and solid Python scripting skills
- Solid understanding of modern data engineering best practices
- Ability to work independently and communicate with technical and non-technical stakeholders
- Experience in fast-paced, data-driven environments
DESIRABLE SKILLS
- Exposure to LLM workflows or vector databases
- Experience in the media, content, or publishing industries
- Familiarity with cloud data platforms (e.g., AWS or Azure)
- Knowledge of MLOps and ML/data science pipelines
HOW TO APPLY
To register your interest, please apply via the link or get in touch with your CV.