Fabric and Databricks Data Engineer - Outside IR35 - Hybrid

Posted Today by Tenth Revolution Group

£600 per day
Outside IR35
Hybrid
Oxfordshire, UK

Summary: The Fabric and Databricks Data Engineer role involves designing, building, and maintaining scalable analytics and data engineering solutions using Microsoft Fabric and Databricks. The position requires collaboration with data analysts, scientists, and stakeholders to ensure the delivery of reliable data pipelines and models. The role is classified as outside IR35 and offers a hybrid working arrangement. The successful candidate will play a key role in enhancing the organization's data platform capabilities.

Salary (Rate): £600 per day

City: Oxfordshire

Country: UK

Working Arrangements: hybrid

IR35 Status: outside IR35

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

Fabric and Databricks Data Engineer - Outside IR35 - Hybrid

Role Overview
We're looking for a skilled Fabric & Databricks Engineer to design, build, and maintain scalable analytics and data engineering solutions. You'll work at the core of our data platform, enabling analytics, reporting, and advanced data use cases by leveraging Microsoft Fabric and Databricks.

You'll collaborate closely with data analysts, data scientists, and stakeholders to deliver reliable, performant, and secure data pipelines and models.

Key Responsibilities

  • Design, develop, and maintain end-to-end data pipelines using Microsoft Fabric and Databricks

  • Build and optimize Lakehouse architectures using Delta Lake principles

  • Ingest, transform, and curate data from multiple sources (APIs, databases, files, streaming)

  • Develop scalable data transformations using PySpark and Spark SQL (see the sketch after this list)

  • Implement data models optimized for analytics and reporting (e.g. star schemas)

  • Monitor, troubleshoot, and optimize performance and cost of data workloads

  • Apply data quality, validation, and governance best practices

  • Collaborate with analysts and BI teams to enable self-service analytics

  • Contribute to CI/CD pipelines and infrastructure-as-code for data platforms

  • Ensure security, access controls, and compliance across the data estate

  • Document solutions and promote engineering best practices
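For illustration only (not part of the employer's description): a minimal sketch of the kind of PySpark-to-Delta transformation the responsibilities above describe. The storage path, column names, and table name are hypothetical placeholders.

    # Minimal sketch, assuming a Spark environment with Delta Lake available
    # (e.g. a Fabric or Databricks notebook). All paths and names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_curation").getOrCreate()

    # Ingest raw order data landed as JSON files (hypothetical landing path).
    raw = spark.read.json("abfss://landing@storageaccount.dfs.core.windows.net/orders/")

    # Standardise types, derive a date column, and drop obviously bad rows.
    curated = (
        raw
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .withColumn("order_date", F.to_date("order_ts"))
        .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
        .filter(F.col("order_id").isNotNull())
    )

    # Write to a partitioned Delta table in the curated (silver) layer.
    (
        curated.write
        .format("delta")
        .mode("append")
        .partitionBy("order_date")
        .saveAsTable("silver.orders")
    )

In practice a job like this would typically be scheduled from a Fabric pipeline or a Databricks job, with partitioning chosen to keep downstream reporting queries cheap.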

Required Skills & Experience

  • Strong experience with Microsoft Fabric (Lakehouse, Pipelines, Notebooks, Dataflows, OneLake)

  • Hands-on experience with Databricks in production environments

  • Proficiency in PySpark and SQL

  • Solid understanding of data engineering concepts (ETL/ELT, orchestration, partitioning)

  • Experience working with Delta Lake (see the upsert sketch after this list)

  • Familiarity with cloud platforms (Azure preferred)

  • Experience integrating data from relational and non-relational sources

  • Knowledge of data modelling for analytics

  • Experience with version control (Git) and collaborative development workflows
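As a rough illustration of the Delta Lake experience called for above, the sketch below shows an incremental upsert expressed in Spark SQL. The table names (silver.customers, staging.customers_increment) and the key column are hypothetical.

    # Minimal sketch, assuming both tables already exist as Delta tables.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("customer_upsert").getOrCreate()

    # Idempotent upsert: update existing customers, insert new ones.
    spark.sql("""
        MERGE INTO silver.customers AS target
        USING staging.customers_increment AS source
          ON target.customer_id = source.customer_id
        WHEN MATCHED THEN UPDATE SET *
        WHEN NOT MATCHED THEN INSERT *
    """)

MERGE-based upserts like this are the usual way to keep curated tables in sync with incremental loads without reprocessing full history.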

Nice to Have

  • Experience with Power BI and semantic models

  • Exposure to streaming technologies (Kafka, Event Hubs, Spark Structured Streaming); a streaming sketch follows this list

  • Infrastructure-as-code experience (Bicep, Terraform)

  • CI/CD tooling (Azure DevOps, GitHub Actions)

  • Familiarity with data governance and cataloging tools

  • Experience supporting ML or advanced analytics workloads
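As a hedged illustration of the streaming "nice to have", the sketch below lands a Kafka topic into a Delta table with Spark Structured Streaming; Event Hubs can be consumed the same way via its Kafka-compatible endpoint. The broker address, topic, and checkpoint location are placeholders.

    # Minimal sketch, assuming the Kafka connector and Delta Lake are on the cluster.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events_stream").getOrCreate()

    # Read the stream and keep the payload as a string plus the broker timestamp.
    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "order-events")
        .load()
        .select(F.col("value").cast("string").alias("payload"), "timestamp")
    )

    # Land the raw stream into a bronze Delta table with checkpointed, append-only writes.
    query = (
        events.writeStream
        .format("delta")
        .option("checkpointLocation", "/checkpoints/order_events")
        .outputMode("append")
        .toTable("bronze.order_events")
    )
    query.awaitTermination()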

What We're Looking For

  • Strong problem-solving and analytical mindset

  • Ability to work independently and as part of a cross-functional team

  • Clear communication skills and stakeholder awareness

  • Passion for building reliable, scalable data platforms

To apply for this role, please submit your CV or contact Dillon Blackburn (see below).

Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK, offering more opportunities across the country than any other recruitment agency. We're the proud sponsor and supporter of SQLBits, Power Platform World Tour, and the London Fabric User Group. We are the global leaders in Data & AI recruitment.