Summary: The Data Engineer role focuses on developing and deploying data warehouse solutions as part of a Business Data Service Project aimed at replacing existing data warehouses and simplifying reporting. The position requires strong technical skills in Python, PySpark, and SQL, along with experience in cloud data platforms. The role involves ensuring data products are reliable and fit for purpose, with responsibilities spanning the entire data lifecycle, and it requires some onsite presence in Dudley, West Midlands.
Key Responsibilities:
- Develop and deploy ETL/ELT pipelines using PySpark and Python.
- Work with Microsoft Fabric lakehouse or similar cloud data platforms.
- Utilize Jupyter/Fabric Notebooks for data engineering workflows.
- Understand and implement data lakehouse architecture patterns.
- Work with Delta Lake or similar lakehouse storage formats.
- Manipulate, transform, and validate data using SQL.
- Ensure data products are resilient, robust, and reliable.
- Participate in the planning, ingestion, transformation, consolidation, and aggregation of data.
Salary (Rate): £520/day
City: Dudley
Country: United Kingdom
Working Arrangements: hybrid
IR35 Status: inside IR35
Seniority Level: Mid-Level
Industry: IT
Role: Data Engineer (Python, PySpark, SQL)
Day rate: £475-£520 (Inside IR35)
Contract: 6 months initial
We are currently recruiting for a Data Engineer to join a team on a Business Data Service Project, a Data Warehouse Replacement & Report Simplification project. You will be responsible for ensuring that all data products and solutions created in the business insights ecosystem are fit for purpose, resilient, robust and reliable. You will play a pivotal role in building, testing and deploying Data Warehouse solutions, covering the full lifecycle of planning, ingestion, transformation, consolidation and aggregation of data from source to target in the Data Warehouse environment.
Skills and experience required:
- Strong experience developing ETL/ELT pipelines using PySpark and Python
- Hands-on experience with Microsoft Fabric lakehouse or similar cloud data platforms (Azure Synapse Analytics, Databricks)
- Proficiency in working with Jupyter/Fabric Notebooks for data engineering workflows
- Solid understanding of data lakehouse architecture patterns and medallion architecture
- Experience working with Delta Lake or similar lakehouse storage formats
- Strong SQL skills for data manipulation, transformation, and quality validation
This role will require 2-3 days per month onsite in Dudley, West Midlands; please consider this when applying. If you are interested and would like to apply, please click the link for immediate consideration.