SC Cleared Python Data Engineer – Azure & PySpark

Posted 4 days ago by Montash

£53 Per hour
Inside IR35
Hybrid
England, United Kingdom

Summary: The SC Cleared Python Data Engineer role focuses on designing, developing, and optimising Azure-based data pipelines using Python and PySpark. The position requires active SC Clearance and involves collaboration with cloud, DevOps, and data science teams in a fast-paced engineering environment. The engineer will be responsible for delivering scalable data processing solutions and for ensuring best practice in cloud security and data governance. The role is a 12-month contract at a competitive day rate.

Key Responsibilities:

  • Develop and maintain ingestion, transformation, and validation pipelines using Python and PySpark
  • Implement unit and BDD testing with Behave, including mocking, patching, and dependency management
  • Design and manage Delta Lake tables, ensuring ACID compliance, schema evolution, and incremental loading
  • Build and maintain containerised applications using Docker for development and deployment
  • Develop configuration-driven, modular, and reusable engineering solutions
  • Integrate Azure services including Azure Functions, Key Vault, and Blob Storage
  • Collaborate with cloud architects, data scientists, and DevOps teams on CI/CD processes and environment configuration
  • Tune and troubleshoot PySpark jobs for performance in production workloads
  • Maintain documentation and follow best practices in cloud security and data governance
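To give a flavour of the testing style named above, here is a minimal sketch of mocking and patching using only Python's standard-library `unittest.mock`. The `fetch_blob` helper and the storage client are hypothetical stand-ins, not the client's actual codebase:

```python
from unittest.mock import MagicMock

# Hypothetical helper: downloads a blob and returns its decoded contents.
def fetch_blob(client, container: str, name: str) -> str:
    blob = client.get_blob(container, name)
    return blob.read().decode("utf-8")

def test_fetch_blob_decodes_payload():
    # MagicMock stands in for a real storage client; no network access needed.
    client = MagicMock()
    client.get_blob.return_value.read.return_value = b"id,value\n1,42\n"

    result = fetch_blob(client, "raw", "orders.csv")

    assert result == "id,value\n1,42\n"
    client.get_blob.assert_called_once_with("raw", "orders.csv")

test_fetch_blob_decodes_payload()
```

The same pattern scales up to patching module-level dependencies with `unittest.mock.patch`, which is what lets pipeline code be unit-tested without live Azure resources.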

Key Skills:

  • Strong Python programming skills with test-driven development
  • Experience writing BDD scenarios and unit tests using Behave or similar tools
  • Skilled in mocking, patching, and dependency injection for Python tests
  • Proficiency in PySpark and distributed data processing
  • Hands-on experience with Delta Lake (transactional guarantees, schema evolution, optimisation)
  • Experience with Docker for development and deployment
  • Familiarity with Azure Functions, Key Vault, Blob Storage or Data Lake Storage Gen2
  • Experience working with configuration-driven systems
  • Exposure to CI/CD tools (Azure DevOps or similar)
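As a sketch of what "configuration-driven" can mean in practice, the following plain-Python example lets a declarative config select and order reusable transformation steps. The step names and config shape are illustrative assumptions, not the client's actual framework:

```python
from typing import Callable

# Registry of reusable transformation steps, keyed by name.
STEPS: dict[str, Callable[[list], list]] = {}

def step(name: str):
    def register(fn):
        STEPS[name] = fn
        return fn
    return register

@step("drop_nulls")
def drop_nulls(rows):
    # Keep only rows where every field has a value.
    return [r for r in rows if all(v is not None for v in r.values())]

@step("uppercase_ids")
def uppercase_ids(rows):
    return [{**r, "id": str(r["id"]).upper()} for r in rows]

def run_pipeline(rows, config):
    # The config, not the code, decides which steps run and in what order.
    for name in config["steps"]:
        rows = STEPS[name](rows)
    return rows

config = {"steps": ["drop_nulls", "uppercase_ids"]}
data = [{"id": "a1", "value": 10}, {"id": "b2", "value": None}]
print(run_pipeline(data, config))  # [{'id': 'A1', 'value': 10}]
```

In a real deployment the config would typically live in version-controlled YAML or JSON per environment, so behaviour changes without code changes.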

Salary (Rate): £53.00/hr

City: undetermined

Country: United Kingdom

Working Arrangements: hybrid

IR35 Status: inside IR35

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

Job Title: SC Cleared Python Data Engineer – Azure & PySpark

Contract Type: 12 Month Contract

Day Rate: Up to £400 a day inside IR35

Location: Remote or hybrid (as agreed)

Start Date: January 5th 2026

Clearance Required: Must hold active SC Clearance

We are seeking an experienced Python Data Engineer to support the design, development, and optimisation of Azure-based data pipelines. The focus of this role is to deliver scalable, test-driven, and configuration-driven data processing solutions using Python, PySpark, Delta Lake, and containerised workloads. This opportunity sits within a fast-paced engineering environment working closely with cloud, DevOps, and data science teams.
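For flavour, here is a minimal plain-Python sketch of watermark-based incremental loading, one of the patterns this kind of pipeline work involves. In the real role this would be a PySpark job merging into a Delta table; all names here are illustrative:

```python
from datetime import datetime, timezone

def incremental_load(source_rows, target_rows, watermark):
    """Append only rows newer than the last processed timestamp, then
    advance the watermark. This mirrors the pattern a PySpark job would
    apply when incrementally merging new data into a Delta table."""
    new_rows = [r for r in source_rows if r["updated_at"] > watermark]
    target_rows.extend(new_rows)
    new_watermark = max((r["updated_at"] for r in new_rows), default=watermark)
    return target_rows, new_watermark

source = [
    {"id": 1, "updated_at": datetime(2026, 1, 5, tzinfo=timezone.utc)},
    {"id": 2, "updated_at": datetime(2026, 1, 6, tzinfo=timezone.utc)},
]
# Only rows strictly newer than the watermark are loaded.
target, wm = incremental_load(source, [], datetime(2026, 1, 5, tzinfo=timezone.utc))
print(len(target), wm.date())  # 1 2026-01-06
```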

Preferred Qualifications

  • Experience working with Databricks or Synapse
  • Knowledge of data governance, security, and best practices in the Azure ecosystem
  • Strong communication and collaboration skills, ideally within distributed teams