Azure Data Platform Specialist

Posted 2 weeks ago by RED Global

Negotiable
Undetermined
Hybrid
London Area, United Kingdom

Summary: The Azure Data Platform Specialist role involves optimizing the existing Apache Airflow environment to enhance reliability, performance, and scalability within a data engineering team. The position requires strong expertise in Azure cloud services and Azure DevOps to design and develop orchestration frameworks for enterprise-scale data pipelines. The role is hybrid, requiring on-site presence in London for 2 to 3 days a week. The contract is for 6 months with potential extensions.

Salary (Rate): undetermined

City: London

Country: United Kingdom

Working Arrangements: hybrid

IR35 Status: undetermined

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

Azure Data Platform Specialist

6+ Month Contract + Potential Extensions

Hybrid Work Model - 2 to 3 Days a Week on Site in London

We are seeking an experienced and highly skilled Azure Data Platform Specialist to join our data engineering team. The primary objective of this role is to fine-tune and optimize our existing Apache Airflow environment, ensuring high reliability, performance, and scalability. The ideal candidate will also bring strong expertise in Azure cloud services and Azure DevOps to design and develop robust orchestration frameworks that support our enterprise-scale data pipelines.

Key Responsibilities:

  • Analyze and optimize the current Apache Airflow environment, identifying performance bottlenecks and implementing best practices for orchestration and scheduling.
  • Design and implement scalable, modular, and reusable DAGs (Directed Acyclic Graphs) to support complex data workflows; a brief sketch follows this list.
  • Collaborate with data engineers and platform teams to integrate Airflow with Azure Data Factory, Azure Databricks, and other Azure-native services.
  • Develop and maintain CI/CD pipelines using Azure DevOps for Airflow DAG deployment, testing, and version control.
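
By way of illustration, here is a minimal sketch of the kind of modular, reusable DAG this role would own, wiring Airflow to Azure Data Factory. It is a hedged example rather than a description of our actual codebase: the pipeline names, connection ID, resource group, and factory name are hypothetical placeholders, and it assumes a recent Airflow 2.x installation with the apache-airflow-providers-microsoft-azure package.

```python
from datetime import datetime

from airflow.decorators import dag, task
from airflow.providers.microsoft.azure.operators.data_factory import (
    AzureDataFactoryRunPipelineOperator,
)

SOURCES = ["sales", "inventory"]  # hypothetical source systems


@dag(
    dag_id="adf_ingest_example",
    schedule="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
    max_active_runs=1,  # throttle concurrent runs during backfills
    default_args={"owner": "data-platform", "retries": 2},
)
def adf_ingest_example():
    @task
    def validate(src: str) -> None:
        # Placeholder data-quality check; real logic would verify row counts etc.
        print(f"validating {src}")

    for source in SOURCES:
        run_adf = AzureDataFactoryRunPipelineOperator(
            task_id=f"run_adf_{source}",
            pipeline_name=f"ingest_{source}",          # hypothetical ADF pipeline
            azure_data_factory_conn_id="adf_default",  # assumed Airflow connection
            resource_group_name="rg-data-platform",    # hypothetical resource group
            factory_name="adf-prod",                   # hypothetical factory name
            wait_for_termination=True,
        )
        run_adf >> validate.override(task_id=f"validate_{source}")(source)


adf_ingest_example()
```

The loop plus .override() keeps the DAG file small while fanning the same validation logic out across sources; onboarding a new source becomes a one-line change.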

Required Skills and Qualifications:

  • Proven experience as an Apache Airflow SME or Lead Developer in a production-grade environment.
  • Strong understanding of Airflow internals, including scheduler, executor types (Celery, Kubernetes), and plugin development.
  • Experience with workload orchestration and autoscaling using KEDA (Kubernetes Event-driven Autoscaling), and familiarity with Celery for distributed task execution and background job processing, particularly in data pipeline or microservices environments; a minimal Celery sketch follows this list.
  • Hands-on experience with Azure cloud services, especially Azure Data Factory, Azure Databricks, Azure Storage, and Azure Synapse.
  • Proficiency in designing and deploying CI/CD pipelines using Azure DevOps (YAML pipelines, release management, artifact handling); a sketch of the kind of DAG test such a pipeline could run also follows this list.
  • Solid programming skills in Python, with experience in writing modular, testable, and reusable code.
  • Familiarity with containerization (Docker) and orchestration (Kubernetes) as they relate to Airflow deployment.
  • Experience with monitoring tools (e.g., Prometheus, Grafana, Azure Monitor) and log aggregation (e.g., ELK, Azure Log Analytics).
  • Strong problem-solving skills and the ability to work independently in a fast-paced, agile environment.
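
For orientation, here is the minimal Celery sketch flagged in the list above. The broker and backend URLs and the task body are assumptions for illustration, not a description of our stack:

```python
from celery import Celery

# Hypothetical broker/backend endpoints; RabbitMQ works the same way.
app = Celery(
    "data_tasks",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)


@app.task(bind=True, max_retries=3, retry_backoff=True)
def ingest_partition(self, partition_date: str) -> str:
    """Load one date partition; retried with exponential backoff on failure."""
    try:
        # ... extract/load work would go here ...
        return f"loaded {partition_date}"
    except Exception as exc:
        raise self.retry(exc=exc)
```

A caller enqueues work with ingest_partition.delay("2024-01-01"); with KEDA, worker replicas can then scale on queue depth rather than CPU.

And a hedged sketch of the kind of DAG-integrity test an Azure DevOps YAML pipeline could run as a pytest step before deploying DAGs. The dags/ folder layout and the owner convention are assumed, not prescribed:

```python
import pytest
from airflow.models import DagBag


@pytest.fixture(scope="session")
def dag_bag() -> DagBag:
    # Assumed repo layout: DAG files live under dags/ at the repo root.
    return DagBag(dag_folder="dags/", include_examples=False)


def test_dags_import_cleanly(dag_bag):
    # DagBag records syntax errors, missing imports, and cycles here.
    assert not dag_bag.import_errors, f"Import failures: {dag_bag.import_errors}"


def test_every_dag_has_an_owner(dag_bag):
    # Hypothetical team convention: each DAG declares an owner in default_args.
    for dag_id, dag in dag_bag.dags.items():
        assert dag.default_args.get("owner"), f"{dag_id} is missing an owner"
```

Wired into the release pipeline, a failing test blocks the DAG artifact from being published, which is the point of running it before deployment.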

If you would like immediate consideration, please send an updated CV and contact details to jcaria@redglobal.com, or reach out to me on LinkedIn, so we can discuss further.