£62.50 per hour
London Area, United Kingdom
Summary: The role of Fabric Data Engineer involves delivering high-quality, scalable data solutions within the Microsoft Azure ecosystem. Key responsibilities include building data pipelines, integrating external systems, and designing data flows aligned with the Medallion architecture. The position requires collaboration with various teams to ensure efficient data processing and operational visibility. The engineer will also automate infrastructure provisioning and enhance CI/CD pipelines for deployment across Azure environments.
Key Responsibilities:
- Build and maintain data pipelines using Microsoft Fabric Data Factory, supporting database-based, file-based and API-based ingestion, transformation and orchestration
- Develop scalable data processing logic using PySpark and SQL for both Lakehouse and Warehouse workloads
- Design and implement data solutions aligned to the Medallion architecture (Bronze, Silver, Gold layers) to support analytics and reporting requirements
- Integrate external systems through secure and reliable API calls
- Ensure operational visibility across pipelines, including robust logging, monitoring and error-handling mechanisms
- Work with Azure SQL Database, OneLake and Lakehouse components to deliver efficient ingestion and transformation processes
- Automate infrastructure provisioning and configuration using Bicep (Infrastructure-as-Code)
- Contribute to and enhance CI/CD pipelines to enable automated deployment across Fabric workspaces and Azure environments
- Troubleshoot pipeline failures, resolve performance bottlenecks and address data quality issues
- Collaborate closely with data architects, analysts and engineering teams to deliver high-quality, production-ready data solutions
- Maintain comprehensive technical documentation, pipeline runbooks and governance guidelines
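The Medallion flow named in the responsibilities above (Bronze, Silver, Gold layers) can be sketched as follows. In the role itself this logic would run as PySpark over Lakehouse tables; plain Python dicts and lists stand in for DataFrames here, and every field name (`order_id`, `amount`, `region`) is illustrative rather than taken from any real schema.

```python
# Minimal sketch of Medallion layering (Bronze -> Silver -> Gold).
# In Microsoft Fabric this would be PySpark over Lakehouse tables;
# plain dicts/lists stand in for DataFrames, and all names are hypothetical.

# Bronze: raw records exactly as ingested (may contain nulls and duplicates).
bronze = [
    {"order_id": 1, "amount": "120.50", "region": "UK"},
    {"order_id": 1, "amount": "120.50", "region": "UK"},   # duplicate row
    {"order_id": 2, "amount": None,     "region": "UK"},   # unparseable amount
    {"order_id": 3, "amount": "75.00",  "region": "DE"},
]

def to_silver(rows):
    """Silver: de-duplicate on order_id and enforce types, dropping bad rows."""
    seen, silver = set(), []
    for row in rows:
        if row["order_id"] in seen or row["amount"] is None:
            continue
        seen.add(row["order_id"])
        silver.append({**row, "amount": float(row["amount"])})
    return silver

def to_gold(rows):
    """Gold: aggregate to a reporting-friendly shape (revenue per region)."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
```

The design point the layers encode: Bronze preserves the raw feed for replay, Silver holds cleaned and typed records, and Gold holds the aggregates that analytics and reporting consume.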
Key Skills:
- Experience with Microsoft Fabric Data Factory
- Proficiency in PySpark and SQL
- Knowledge of Medallion architecture
- Experience with API integration
- Familiarity with Azure SQL Database and Lakehouse components
- Understanding of Infrastructure-as-Code using Bicep
- Experience with CI/CD pipelines
- Strong troubleshooting and problem-solving skills
- Ability to maintain technical documentation
- Collaboration skills with cross-functional teams
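The "API integration" and "troubleshooting" skills above typically come down to making external calls resilient. A minimal sketch, assuming a retry-with-backoff pattern: the `fetch` callable, retry counts, and the flaky stub below are all hypothetical, not from any real system.

```python
# Sketch of a resilient API-ingestion call: retry with exponential backoff
# and explicit error handling. The fetch function, retry parameters, and
# the flaky stub are illustrative only.
import time

def call_with_retries(fetch, max_attempts=3, base_delay=0.01):
    """Invoke fetch(); on failure, back off exponentially before retrying."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch()
        except Exception:
            if attempt == max_attempts:
                raise  # surface the final error to pipeline logging/monitoring
            time.sleep(base_delay * 2 ** (attempt - 1))

# Stub standing in for an external API that fails twice, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient upstream failure")
    return {"status": "ok", "rows": 42}

result = call_with_retries(flaky_fetch)
```

Re-raising on the final attempt (rather than swallowing the error) is what gives the monitoring and alerting described in the responsibilities something to act on.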
Salary (Rate): £62.50/hr
City: London Area
Country: United Kingdom
Working Arrangements: Undetermined
IR35 Status: Undetermined
Seniority Level: Undetermined
Industry: IT
We are searching for a hands-on Data Engineer who will deliver high-quality, scalable data solutions across our client's Microsoft Azure ecosystem. This role involves building robust data pipelines within Microsoft Fabric, integrating external systems via APIs, and designing data flows aligned to the Medallion architecture to support analytics, integration, and reporting requirements.