Databricks Pipelines Engineer

Posted 1 day ago by Smartedge Solutions

Negotiable
Undetermined
Remote
United Kingdom

Summary: The role of Databricks Pipelines Engineer involves designing, optimizing, and maintaining scalable Databricks pipelines, focusing on ETL, streaming, and ML workflows. The engineer will also be responsible for performance tuning, monitoring Spark job metrics, and implementing cost-optimization strategies. Collaboration with data scientists and platform teams is essential to enhance query performance on Delta Lake. The position is fully remote, based in the UK.

Salary (Rate): undetermined

City: undetermined

Country: United Kingdom

Working Arrangements: remote

IR35 Status: undetermined

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

Smartedge’s client is looking for a Databricks Pipelines Engineer in London, UK (100% remote).

Job Description:

  • Design, optimize, and maintain scalable Databricks pipelines (ETL, streaming, ML workflows).
  • Perform cluster and job performance tuning: optimize cluster sizing, caching, partitioning, and shuffle management (a configuration sketch follows this list).
  • Monitor Spark job metrics, analyze logs, and identify bottlenecks in data throughput or latency.
  • Implement cost-optimization strategies for Databricks jobs and clusters using autoscaling and job consolidation.
  • Ensure efficient scheduling and orchestration of jobs using Databricks Workflows.
  • Collaborate with data scientists and platform teams to improve query performance on Delta Lake.
  • Develop automated monitoring and alerting mechanisms using Databricks REST APIs, Prometheus, or Azure Monitor.
  • Maintain best practices for data engineering performance, testing, and documentation.
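
As a concrete illustration of the tuning and partitioning work above, here is a minimal PySpark sketch, assuming a Databricks runtime with Delta Lake available; the paths, column names, and configuration values are hypothetical placeholders, not recommendations for this role.

```python
from pyspark.sql import SparkSession

# On Databricks the `spark` session is provided; building one here keeps the sketch self-contained.
spark = SparkSession.builder.appName("pipeline-tuning-sketch").getOrCreate()

# Enable adaptive query execution so Spark can coalesce shuffle partitions at runtime.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")

# A starting point for shuffle parallelism; tune against observed task sizes.
spark.conf.set("spark.sql.shuffle.partitions", "200")

# Hypothetical Delta input path.
events = spark.read.format("delta").load("/mnt/raw/events")

# Cache only when the frame is reused downstream; otherwise it wastes executor memory.
events.cache()

daily = events.groupBy("event_date", "event_type").count()

# Partition the output by a low-cardinality column that downstream queries filter on.
(daily.write.format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .save("/mnt/curated/daily_counts"))
```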

Technical Skills Required

  • Strong proficiency with Databricks, Apache Spark (SQL, PySpark, Scala), and Delta Lake.
  • Deep understanding of Spark execution plans, shuffle optimization, and data partitioning strategies (illustrated in the sketch after this list).
  • Experience tuning job configurations (e.g., memory management, parallelism, caching, adaptive query execution).
  • Familiarity with orchestration tools: Databricks Workflows, Airflow, or Azure Data Factory.
  • Proficiency in Python or Scala for data engineering and pipeline development.
  • Hands-on experience with Azure, AWS, or multi-cloud Databricks deployments.
  • Knowledge of data storage layers (Azure Data Lake Storage, AWS S3) and performance trade-offs.
  • Version control (Git, GitHub Actions, DevOps pipelines) and CI/CD practices for Databricks.
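
A short sketch of the execution-plan and shuffle points above, reusing the `spark` session from the earlier sketch; the table paths and the `customer_id` join key are hypothetical.

```python
from pyspark.sql.functions import broadcast

orders = spark.read.format("delta").load("/mnt/raw/orders")        # hypothetical path
customers = spark.read.format("delta").load("/mnt/raw/customers")  # hypothetical path

# Inspect the physical plan for Exchange (shuffle) and SortMergeJoin operators.
orders.join(customers, "customer_id").explain(mode="formatted")

# Pre-partition both sides on the join key so each shuffles once with matched partitioning.
pre_shuffled = (orders.repartition(200, "customer_id")
                      .join(customers.repartition(200, "customer_id"), "customer_id"))

# For a small dimension table, a broadcast hint removes the shuffle entirely.
broadcast_join = orders.join(broadcast(customers), "customer_id")
```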

Desirable Skills

  • Experience with Databricks monitoring and observability (Ganglia, Prometheus, or Datadog); a minimal polling sketch follows this list.
  • Understanding of serverless Databricks clusters.
  • Familiarity with Terraform and infrastructure as code for Databricks resource management.
  • Exposure to MLflow and model-serving pipelines for end-to-end performance optimization.
  • Experience with Unity Catalog and fine-grained governance controls.
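
To make the monitoring and alerting items concrete, here is a minimal polling sketch against the Databricks Jobs API 2.1, assuming the workspace URL and a personal access token are supplied via environment variables; the job ID and the print-based alert hook are placeholders.

```python
import os
import requests

HOST = os.environ["DATABRICKS_HOST"]    # e.g. an Azure Databricks workspace URL
TOKEN = os.environ["DATABRICKS_TOKEN"]  # personal access token

def failed_runs(job_id):
    """Return recent failed runs for one job via the Jobs API 2.1."""
    resp = requests.get(
        f"{HOST}/api/2.1/jobs/runs/list",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"job_id": job_id, "limit": 25},
        timeout=30,
    )
    resp.raise_for_status()
    runs = resp.json().get("runs", [])
    return [r for r in runs if r.get("state", {}).get("result_state") == "FAILED"]

if __name__ == "__main__":
    for run in failed_runs(job_id=123):  # hypothetical job ID
        # Placeholder alert hook: swap in Prometheus Pushgateway, Azure Monitor, etc.
        print(f"ALERT: run {run['run_id']} failed: {run['state'].get('state_message', '')}")
```

In practice a check like this would run on a schedule and push to whichever alerting backend the platform team standardizes on.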

If this sounds like a role you would be interested in, or if you know someone in this field, connect with me or email me at nagamani.y@smartedgesolutions.co.uk. Alternatively, you can call me on +44 (0)20 3603 8132.