Negotiable
Undetermined
Onsite
Manchester, England, United Kingdom
Summary: The Data Engineer is tasked with designing, building, and maintaining scalable data pipelines and models to support analytics and operational use cases. This role emphasizes high-quality data ingestion, transformation, and orchestration across the data platform. The engineer will ensure data integrity and accessibility while collaborating with various stakeholders to meet business requirements. The position requires a strong technical background in data engineering practices and tools.
Salary (Rate): undetermined
City: Manchester
Country: United Kingdom
Working Arrangements: on-site
IR35 Status: undetermined
Seniority Level: undetermined
Industry: IT
Exciting Opportunity for Data Engineers
Role Purpose
The Data Engineer is responsible for designing, building, and maintaining reliable, scalable data pipelines and data models that support analytics, reporting, and operational use cases. The role focuses on high-quality data ingestion, transformation, orchestration, and environment management across the data platform, ensuring data is trusted, accessible, and fit for purpose.
Key Responsibilities
Data Pipelines & Integration
- Design, build, and maintain robust data pipelines for ingesting data from source systems (e.g. operational systems, APIs, files, third-party platforms)
- Implement batch and, where required, near-real-time data ingestion patterns
- Ensure pipelines are resilient, performant, and recoverable, with appropriate error handling and logging

Orchestration & Scheduling
- Define and manage workflow orchestration using scheduling and orchestration tools (e.g. Airflow or equivalent)
- Manage dependencies, retries, alerts, and pipeline monitoring to support reliable data delivery
- Optimise pipeline execution to meet agreed service levels for downstream reporting and analytics

Data Modelling & Transformation
- Design and maintain data models to support reporting, analytics, and operational use cases (e.g. ODS, dimensional, or analytical models)
- Apply best practices for data transformation, naming standards, and model documentation
- Collaborate with analysts and stakeholders to ensure models meet business requirements

Environments & Platform Management
- Work across development, test, and production environments, ensuring safe and controlled deployment of changes
- Support environment configuration, version control, and CI/CD practices for data engineering workloads
- Contribute to platform stability, performance tuning, and cost-effective use of infrastructure

Data Quality & Governance
- Implement basic data quality checks and validation rules within pipelines
- Support data lineage, metadata, and documentation to improve transparency and trust in data
- Work within established data governance, security, and access control frameworks

Collaboration & Delivery
- Work closely with data analysts, architects, and wider technology teams to deliver end-to-end data solutions
- Participate in planning, estimation, and delivery of data engineering work
- Support incident investigation and resolution related to data pipelines and data availability
Essential
- Strong experience building and maintaining data pipelines in a modern data platform
- Solid understanding of data modelling concepts and patterns
- Experience with workflow orchestration and scheduling tools
- Strong capability in SQL and Python
- Experience with Azure cloud-based data platforms such as Azure Synapse and Azure Data Factory
- Experience working across multiple environments with version control
- Good understanding of data quality, reliability, and operational considerations
- Familiarity with CI/CD approaches for data engineering
Desirable
- Experience supporting analytics and reporting use cases in a production environment
- Exposure to regulated or data-sensitive environments
If you're available and can work 3 days onsite in Manchester, please apply with an updated CV.