£440 per day
Outside IR35
Hybrid
Dudley, UK
Summary: The Data Engineer role focuses on supporting a Material Spend Project by extracting, transforming, and analyzing large datasets from various sources. The position requires expertise in Python, PySpark, and SQL, with a strong emphasis on developing ETL/ELT pipelines and working with cloud data platforms. The role involves a hybrid working arrangement, requiring 2-3 days per month onsite in Dudley, West Midlands. This contract position is outside IR35 and has an initial duration of 6 months.
Key Responsibilities:
- Extract, transform, and analyze large data sets from multiple sources, including API integrations and ServiceNow.
- Develop ETL/ELT pipelines using PySpark and Python.
- Work with Microsoft Fabric lakehouse or similar cloud data platforms.
- Utilize Jupyter/Fabric Notebooks for data engineering workflows.
- Implement data lakehouse architecture patterns and medallion architecture.
- Integrate APIs and work with Delta Lake or similar storage formats.
- Manipulate, transform, and validate data using SQL.
- Support strategic decision-making through actionable insights.
Key Skills:
- Strong experience in developing ETL/ELT pipelines using PySpark and Python.
- Hands-on experience with Microsoft Fabric lakehouse or similar cloud data platforms (Azure Synapse Analytics, Databricks).
- Proficiency in Jupyter/Fabric Notebooks for data engineering workflows.
- Solid understanding of data lakehouse architecture patterns and medallion architecture.
- API integration experience.
- Experience with Delta Lake or similar lakehouse storage formats.
- Strong SQL skills for data manipulation, transformation, and quality validation.
- Previous experience within manufacturing environments is highly desirable.
Salary (Rate): £440 daily
City: Dudley
Country: UK
Working Arrangements: hybrid
IR35 Status: outside IR35
Seniority Level: undetermined
Industry: IT
Role: Data Engineer (Python, PySpark, SQL)
Day rate: £400-£440 per day (Outside IR35)
Contract: 6 months initial
We are seeking a highly skilled Data Engineer to support a Material Spend Project. You will play a crucial role in extracting, transforming, and analysing large data sets from multiple sources, including API integrations and ServiceNow, to drive actionable insights and support strategic decision-making.
This role requires 2-3 days per month onsite in Dudley, West Midlands. Please consider this when applying.
If you are interested in the role and would like to apply, please click the link for immediate consideration.