Summary: The SC Cleared Databricks Data Engineer role involves designing, building, and optimising large-scale data workflows within the Databricks Data Intelligence Platform. It requires expertise in delivering high-performing batch and streaming pipelines using PySpark and Azure services, with a focus on governance and workflow orchestration. The role is a 12-month contract starting January 5th, 2026, requires active SC Clearance, and offers a remote or hybrid working arrangement as agreed.
Salary (Rate): £53.00/hr
City: undetermined
Country: United Kingdom
Working Arrangements: hybrid
IR35 Status: inside IR35
Seniority Level: undetermined
Industry: IT
Job Title: SC Cleared Databricks Data Engineer – Azure Cloud
Contract Type: 12-month contract
Day Rate: Up to £400 a day inside IR35
Location: Remote or hybrid (as agreed)
Start Date: January 5th 2026
Clearance Required: Must hold active SC Clearance
We are seeking an experienced Databricks Data Engineer to design, build, and optimise large-scale data workflows within the Databricks Data Intelligence Platform. The role focuses on delivering high-performing batch and streaming pipelines using PySpark, Delta Lake, and Azure services, with additional emphasis on governance, lineage tracking, and workflow orchestration. Client information remains confidential.
Key Responsibilities
- Build and orchestrate Databricks data pipelines using Notebooks, Jobs, and Workflows
- Optimise Spark and Delta Lake workloads through cluster tuning, adaptive execution, scaling, and caching
- Conduct performance benchmarking and cost optimisation across workloads
- Implement data quality, lineage, and governance practices aligned with Unity Catalog
- Develop PySpark-based ETL and transformation logic using modular, reusable coding standards
- Create and manage Delta Lake tables with ACID compliance, schema evolution, and time travel (see the sketch after this list)
- Integrate Databricks assets with Azure Data Lake Storage, Key Vault, and Azure Functions
- Collaborate with cloud architects, data analysts, and engineering teams on end-to-end workflow design
- Support automated deployment of Databricks artefacts via CI/CD pipelines
- Maintain clear technical documentation covering architecture, performance, and governance configuration
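By way of illustration of the Delta Lake work described above, the minimal sketch below shows an ACID MERGE upsert, schema evolution on write, and a time-travel read in PySpark. It is illustrative only, not part of the role description: the table and column names (bronze_events, silver_events, event_id, ingest_date) are hypothetical, and it assumes a Databricks runtime or a local Spark session with the delta-spark package configured.

```python
# Minimal, illustrative sketch of the Delta Lake patterns named above.
# All table and column names are hypothetical; assumes delta-spark is available.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# 1. ACID upsert: merge the latest batch of source rows into a target table.
updates = spark.read.table("bronze_events").where(F.col("ingest_date") == "2026-01-05")
target = DeltaTable.forName(spark, "silver_events")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.event_id = s.event_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# 2. Schema evolution: allow new upstream columns to be added on append
#    (independent example; the mergeSchema option applies per write).
(
    updates.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .saveAsTable("silver_events")
)

# 3. Time travel: read the table as it was at an earlier version, e.g. for audit.
snapshot = spark.read.option("versionAsOf", 3).table("silver_events")
```

Each numbered step stands alone; in a real pipeline the merge condition, batching, and schema-evolution policy would follow the project's own standards.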
Required Skills and Experience
- Strong experience with the Databricks Data Intelligence Platform
- Hands-on experience with Databricks Jobs and Workflows
- Deep PySpark expertise, including schema management and optimisation
- Strong understanding of Delta Lake architecture and incremental design principles
- Proven Spark performance engineering and cluster tuning capabilities (a tuning sketch follows this list)
- Unity Catalog experience (data lineage, access policies, metadata governance)
- Azure experience across ADLS Gen2, Key Vault, and serverless components
- Familiarity with CI/CD deployment for Databricks
- Solid troubleshooting skills in distributed environments
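As a hedged illustration of the Spark tuning skills listed above, the sketch below sets the common adaptive-execution and shuffle knobs and caches a reused dataset. The specific values and names (silver_events, event_type) are assumptions for the example; on recent Databricks runtimes several of these settings are enabled by default, and real values depend on the workload and cluster.

```python
# Illustrative tuning sketch; values are assumptions, not recommendations.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Adaptive query execution: coalesce shuffle partitions and mitigate skewed joins.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")

# Baseline shuffle parallelism before AQE coalescing takes effect.
spark.conf.set("spark.sql.shuffle.partitions", "200")

# Cache only data that is reused across several downstream actions.
events = spark.read.table("silver_events")  # hypothetical table
hot = events.where("event_type = 'purchase'").cache()
hot.count()  # materialise the cache once; later queries reuse it
```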
Preferred Qualifications
- Experience working across multiple Databricks workspaces and governed catalogs
- Knowledge of Synapse, Power BI, or related Azure analytics services
- Understanding of cost optimisation for data compute workloads
- Strong communication and cross-functional collaboration skills