£580 per day
Outside
Remote
United Kingdom
Summary: The role of Data Engineer focuses on the deployment of D365 F&O, requiring expertise in Azure and Databricks to manage data workloads within a global intelligent data platform. The position involves designing, developing, and maintaining data ingestion processes and transformation pipelines while ensuring data governance and quality. The successful candidate will play a crucial role in a major transformation program, contributing to the organization's technology landscape. This is a remote position, but candidates must be based in the UK.
Key Responsibilities:
- Design, implement, and maintain scalable data pipelines for global enterprise datasets using Azure and Databricks.
- Take end-to-end ownership of application ingestion workloads, ensuring all platform steps/runbooks are adopted.
- Utilize Databricks' medallion architecture to ensure clean, reliable, and organized data flows.
- Ensure strict version control and reproducibility of data transformations using the DBT toolset.
- Develop and maintain ETL processes to transform raw data into structured datasets for analysis and consumption.
- Work within the data governance framework to implement best practices for data quality, accessibility, security, and compliance.
- Collaborate with data stewards to define productized data consumption models/products.
- Ensure master data structures and reference data are correctly augmented for each workload.
- Optimize and troubleshoot data pipelines for high performance and reliability.
- Implement observability, alerting, and monitoring best practices for effective data operations.
- Align workloads to the master data management function for data matching across applications/workloads.
Key Skills:
- Minimum of 5 years of experience in data engineering with a focus on cloud technologies.
- Proven experience with Azure services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Synapse, Azure Blob Storage).
- Extensive experience with Databricks, including development and management of data pipelines.
- Strong proficiency in SQL and reasonable Python skills.
- Experience with data governance, data quality, and data security best practices.
- Familiarity with data-as-a-product concepts and methodologies.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Strong communication and collaboration skills.
- Previous experience in a D365 Data Migration role, particularly in Finance & Operations.
- Extensive knowledge of the Microsoft D365 application stack.
- Proficiency in F&O standards documentation and best practices.
- Good understanding of, and solution design experience with, relevant technologies.
Salary (Rate): £580 daily
City: undetermined
Country: United Kingdom
Working Arrangements: remote
IR35 Status: outside IR35
Seniority Level: undetermined
Industry: IT
Azure/Databricks Data Engineer (D365 F&O Deployment) | £550 - £580 per day | 6-month contract | Outside IR35 | Remote, but must be UK based
Our client, a leader in their field, requires a talented Data Engineer to join their Group technology team, which is responsible for the global intelligent data platform. You will be joining the business at a key moment in their evolution and will make a key and lasting impact on their technology organisation and landscape. Reporting to the Group Director of Data and Architecture, you will be responsible for the data workloads delivered against their Azure/Databricks platform, working within a major transformation programme migrating data to D365 F&O. You will be expected to take a proactive, hands-on approach, and you will be a key contributor to designing, developing, and managing data ingestion processes and transformation pipelines within Azure and Databricks environments. The role involves utilizing Databricks' medallion architecture to create well-defined and governed data consumption models, adopting a data-as-a-product mindset, and implementing key platform governance steps such as master data management and augmentation, governance, observability, and exception/quality reporting.
The ideal candidate will have experience in cloud data engineering, an understanding of Databricks, and a strong proficiency in Azure data services.
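To give a flavour of the medallion pattern referenced above, the following is a minimal, illustrative PySpark/Delta sketch of a bronze-to-gold flow on Databricks. The paths, schemas, table names, and columns are assumptions made for the example only and do not reflect the client's actual platform or runbooks.

```python
# Minimal medallion-layer sketch (PySpark + Delta on Databricks).
# All paths, schemas, and table names below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

# In Databricks notebooks a SparkSession is already provided as `spark`.
spark = SparkSession.builder.getOrCreate()

# Bronze: land raw application extracts as-is, stamped with load metadata.
raw = spark.read.format("json").load("/mnt/landing/vendor_invoices/")
(raw.withColumn("_ingested_at", F.current_timestamp())
    .write.format("delta").mode("append").saveAsTable("bronze.vendor_invoices"))

# Silver: deduplicate, cleanse, and conform types for downstream use.
silver = (spark.read.table("bronze.vendor_invoices")
          .dropDuplicates(["invoice_id"])
          .filter(F.col("invoice_amount").isNotNull())
          .withColumn("invoice_amount", F.col("invoice_amount").cast("decimal(18,2)")))
silver.write.format("delta").mode("overwrite").saveAsTable("silver.vendor_invoices")

# Gold: a productised consumption model, e.g. spend per vendor.
gold = (silver.groupBy("vendor_id")
        .agg(F.sum("invoice_amount").alias("total_spend"),
             F.count("invoice_id").alias("invoice_count")))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.vendor_spend")
```

In the role itself, transformations of this kind would sit behind the platform's runbook steps, governance checks, and the DBT-managed, version-controlled transformations described below.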
YOUR SKILLS:
- Minimum of 5 years of experience in data engineering with a focus on cloud technologies.
- Proven experience with Azure services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Synapse, Azure Blob Storage).
- Extensive experience with Databricks, including the development and management of data pipelines.
- Strong proficiency in SQL and reasonable Python skills.
- Experience with data governance, data quality, and data security best practices.
- Familiarity with data-as-a-product concepts and methodologies.
- Excellent problem-solving skills and ability to work in a fast-paced, dynamic environment.
- Strong communication and collaboration skills.
- Previous experience in a D365 Data Migration role, with experience in Finance & Operations.
- Extensive knowledge of the wider Microsoft D365 application stack.
- Proficiency in F&O standards documentation and best practices.
- Good understanding of, and solution design experience with, relevant technologies, including but not restricted to:
- Microsoft Dynamics Finance and Operations
- Microsoft Power Platform
- Microsoft Collaboration Platforms (Office365, SharePoint, Azure DevOps)
WHAT YOU WILL BE DOING:
- Design, implement, and maintain scalable and efficient data pipelines for ingestion, processing, and storage of global enterprise datasets using Azure and Databricks.
- Take end-to-end ownership of application ingestion workloads, ensuring all platform steps/runbooks are adopted.
- Utilize Databricks' medallion architecture (bronze, silver, gold layers) to ensure clean, reliable, and organized data flows.
- Ensure strict version control and reproducibility of data transformations using the DBT toolset.
- Develop and maintain ETL processes to transform raw data into structured datasets for analysis and consumption.
- Work within our data governance framework, implementing our runbook and best practices to ensure data quality, accessibility, security, and compliance (see the illustrative sketch after this list).
- Collaborate with identified data stewards to define productised data consumption models/products and ensure workload datasets map to the target models.
- Ensure master data structures and reference data are correctly augmented for each workload.
- Optimize and troubleshoot data pipelines to ensure high performance and reliability.
- Use best-practice implementations for observability, alerting, and monitoring to evolve an effective data operations function.
- Align workloads to our master data management function to ensure data can be matched across applications/workloads.
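As a rough illustration of the exception/quality reporting side of these responsibilities, the sketch below flags rule-breaking rows into an exceptions table and gates the pipeline when failures exceed a threshold. The quality rule, threshold, and table names are hypothetical and chosen only to make the example self-contained.

```python
# Illustrative exception/quality reporting step (PySpark + Delta on Databricks).
# The rule, threshold, and table names are assumptions for this sketch only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

silver = spark.read.table("silver.vendor_invoices")

# Flag rows that break simple quality rules rather than silently dropping them.
exceptions = (
    silver.filter(F.col("vendor_id").isNull() | (F.col("invoice_amount") <= 0))
          .withColumn("dq_rule", F.lit("missing_vendor_or_non_positive_amount"))
          .withColumn("dq_checked_at", F.current_timestamp())
)

# Persist exceptions for downstream reporting/alerting, then gate the pipeline.
exceptions.write.format("delta").mode("append").saveAsTable("quality.vendor_invoice_exceptions")

exception_rate = exceptions.count() / max(silver.count(), 1)
if exception_rate > 0.05:  # illustrative threshold
    raise ValueError(f"Data quality threshold breached: {exception_rate:.1%} of rows failed checks")
```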
If your profile matches the above, please send your CV for full details: