Summary: The SC Cleared DevOps Engineer (Azure) role involves designing, building, deploying, and operating large-scale data and analytics solutions on the Databricks platform within Azure. The position calls for strong automation, CI/CD, and infrastructure reliability skills, with a focus on supporting high-performing workloads. Active SC Clearance is mandatory. The role is a 12-month contract paying up to £400 per day.
Job Title: SC Cleared DevOps Engineer (Azure)
Contract Type: 12-month contract
Day Rate: Up to £400 per day
IR35 Status: inside IR35
Location: United Kingdom; remote or hybrid (as agreed)
Seniority Level: undetermined
Industry: IT
Start Date: January 2026
Clearance Required: Active SC Clearance (mandatory)
We are seeking an experienced SC Cleared DevOps Engineer with strong Databricks platform experience to design, build, deploy, and operate large-scale data and analytics solutions on the Databricks Data Intelligence Platform within Azure.
This role focuses on automation, CI/CD, infrastructure reliability, security, and cost optimisation, while supporting high-performing batch and streaming workloads built on PySpark and Delta Lake. Client information remains confidential.
Required Skills & Experience
- Proven experience as a DevOps Engineer on Azure
- Strong hands-on experience with the Databricks Data Intelligence Platform
- Experience building and maintaining CI/CD pipelines for cloud and data platforms
- Solid understanding of Spark, PySpark, and Delta Lake from a platform and operational perspective
- Experience with infrastructure-as-code (e.g. Terraform or equivalent)
- Azure experience across ADLS Gen2, Key Vault, managed identities, and serverless services
- Strong troubleshooting skills in distributed, cloud-based environments
Platform Engineering & DevOps
- Design, build, and maintain CI/CD pipelines for Databricks code, jobs, and configuration across environments (see the sketch after this list)
- Automate provisioning and configuration of Databricks and Azure infrastructure using infrastructure-as-code
- Standardise workspace configuration, cluster policies, secrets, libraries, and access controls
- Implement monitoring, logging, and alerting for platform health, job reliability, and pipeline performance
- Drive cost optimisation and FinOps practices through usage analysis and workload benchmarking
- Support production operations, including incident management, root-cause analysis, and runbooks
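By way of illustration of the CI/CD work described above: promoting a Databricks job definition between environments often reduces to a parameterised call to the Jobs API from a pipeline step. This is a minimal sketch, not the client's tooling; the DATABRICKS_HOST/DATABRICKS_TOKEN variables, job name, notebook path, and cluster sizing are all assumptions, and in practice Databricks Asset Bundles or Terraform would usually wrap this call.

```python
# Minimal sketch: create a Databricks job from a CI/CD pipeline step.
# DATABRICKS_HOST and DATABRICKS_TOKEN are assumed to be injected by the
# pipeline (e.g. from Azure Key Vault); all names and paths are illustrative.
import os

import requests

host = os.environ["DATABRICKS_HOST"]  # e.g. https://adb-123.azuredatabricks.net
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

job_spec = {
    "name": "nightly-etl",
    "tasks": [{
        "task_key": "main",
        "notebook_task": {"notebook_path": "/Repos/prod/etl/main"},
        "new_cluster": {
            "spark_version": "15.4.x-scala2.12",
            "node_type_id": "Standard_D4ds_v5",
            "num_workers": 2,
        },
    }],
}

# Jobs API 2.1; an idempotent pipeline would call /jobs/reset for existing jobs.
resp = requests.post(f"{host}/api/2.1/jobs/create", headers=headers, json=job_spec)
resp.raise_for_status()
print("Created job", resp.json()["job_id"])
```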
Databricks & Data Platform Support
- Build and orchestrate Databricks pipelines using Notebooks, Jobs, and Workflows
- Optimise Spark and Delta Lake workloads through cluster tuning, autoscaling, adaptive execution, and caching
- Support development of PySpark-based ETL and streaming workloads
- Manage Delta Lake tables, including schema evolution, ACID compliance, and time travel (see the sketch after this list)
- Implement data governance, lineage, and access controls using Unity Catalog
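As a hedged illustration of the Delta Lake and Unity Catalog work above (paths, table names, and the group principal are hypothetical, not taken from the posting):

```python
# Minimal sketch of routine Delta Lake operations on Databricks; in notebooks
# and jobs the platform provides this session automatically.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
path = "abfss://lake@examplestore.dfs.core.windows.net/silver/orders"

# Schema evolution: new columns in the incoming batch are merged into the
# table schema instead of failing the write.
updates = spark.read.parquet("/tmp/landing/orders")
(updates.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .save(path))

# Time travel: read an earlier version for audit or rollback checks. Each
# Delta commit is ACID, so version 0 is a consistent snapshot.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
print(v0.count())

# Unity Catalog access control is typically applied with SQL grants.
spark.sql("GRANT SELECT ON TABLE main.silver.orders TO `data-analysts`")
```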
Azure Integration & Security
- Integrate Databricks with Azure Data Lake Storage Gen2, Key Vault, and serverless Azure services (see the sketch after this list)
- Enforce security best practices using managed identities, RBAC, and secrets management
- Support secure, compliant deployments aligned with public sector security standards
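One common integration pattern, sketched under assumptions: a Key Vault-backed secret scope (here named kv-scope) already exists, and the storage account and secret names are illustrative. On Unity Catalog workspaces, external locations with managed identities are the more current route.

```python
# Minimal sketch: ADLS Gen2 access via a service principal whose credentials
# live in Azure Key Vault, surfaced through a Databricks secret scope.
# `dbutils` and `spark` are provided by the Databricks runtime.
client_id = dbutils.secrets.get(scope="kv-scope", key="sp-client-id")
client_secret = dbutils.secrets.get(scope="kv-scope", key="sp-client-secret")
tenant_id = dbutils.secrets.get(scope="kv-scope", key="tenant-id")

account = "examplestore.dfs.core.windows.net"
spark.conf.set(f"fs.azure.account.auth.type.{account}", "OAuth")
spark.conf.set(f"fs.azure.account.oauth.provider.type.{account}",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set(f"fs.azure.account.oauth2.client.id.{account}", client_id)
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{account}", client_secret)
spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{account}",
               f"https://login.microsoftonline.com/{tenant_id}/oauth2/token")

df = spark.read.format("delta").load(f"abfss://lake@{account}/bronze/events")
```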
Collaboration & Documentation
- Collaborate with cloud architects, data engineers, and analysts on end-to-end solution design
- Maintain clear technical documentation covering architecture, CI/CD, monitoring, and governance
- Contribute to platform standards, reusable templates, and DevOps best practices
Preferred Qualifications
- Experience supporting multiple Databricks workspaces and governed Unity Catalogs
- Knowledge of Azure analytics services such as Synapse or Power BI
- Experience implementing FinOps/cost governance in cloud environments (see the sketch after this list)
- Background working in regulated or public sector environments
- Strong communication and cross-functional collaboration skills
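For flavour, FinOps work on Databricks often starts from the platform's billing system tables. A rough sketch, assuming the system.billing schema is enabled on the metastore (DBU-to-cost conversion would additionally join system.billing.list_prices):

```python
# Minimal sketch: daily DBU consumption by SKU over the last 30 days, from
# Databricks system tables; `spark` is the platform-provided session.
daily = spark.sql("""
    SELECT usage_date,
           sku_name,
           SUM(usage_quantity) AS dbus
    FROM system.billing.usage
    WHERE usage_date >= date_sub(current_date(), 30)
    GROUP BY usage_date, sku_name
    ORDER BY usage_date, dbus DESC
""")
daily.show(truncate=False)
```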