Senior Data Engineer

Posted 1 day ago by Signify Technology

Negotiable
Outside IR35
Hybrid
London Area, United Kingdom

Summary: The role of Senior Data Engineer involves joining a large, community-driven marketplace to modernize its data platform, focusing on building scalable data pipelines and enhancing data quality. The position requires collaboration with backend teams and experience in Scala development. The contract is for 6 months and is classified as outside IR35. The role requires on-site presence twice a week.

Key Responsibilities:

  • Migrating existing SQL-based workflows into Spark ETL pipelines on Databricks
  • Designing and evolving Medallion data models (bronze, silver, gold layers)
  • Building and maintaining Airflow DAGs for orchestration
  • Creating reliable ingestion and transformation pipelines across AWS
  • Translating and integrating backend Scala models into data engineering workflows
  • Improving performance, observability, and reliability of the data platform
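As a rough illustration of the bronze → silver → gold layering described above (all names and fields here are hypothetical; in the actual role these would be Spark DataFrames on Databricks, not in-memory collections):

```scala
// Bronze: raw marketplace events, landed as-is (may contain duplicates and bad rows).
case class BronzeEvent(userId: String, itemId: String, action: String, ts: Long)

object MedallionSketch {
  // Silver: cleaned, deduplicated records — drop malformed rows, remove exact duplicates.
  def toSilver(bronze: Seq[BronzeEvent]): Seq[BronzeEvent] =
    bronze.filter(e => e.userId.nonEmpty && e.itemId.nonEmpty).distinct

  // Gold: a business-level aggregate — purchase counts per item.
  def toGold(silver: Seq[BronzeEvent]): Map[String, Int] =
    silver.filter(_.action == "purchase")
      .groupBy(_.itemId)
      .map { case (itemId, events) => itemId -> events.size }

  def main(args: Array[String]): Unit = {
    val bronze = Seq(
      BronzeEvent("u1", "i1", "purchase", 1L),
      BronzeEvent("u1", "i1", "purchase", 1L), // duplicate, dropped at silver
      BronzeEvent("",   "i2", "view",     2L), // malformed, dropped at silver
      BronzeEvent("u2", "i1", "purchase", 3L)
    )
    println(toGold(toSilver(bronze))) // Map(i1 -> 2)
  }
}
```

The same shape carries over to Spark: each layer is a table, and each step is a narrowing or aggregating transformation from the layer below.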

Key Skills:

  • Experience as a Scala developer
  • Proficiency in Spark ETL pipelines
  • Knowledge of Databricks
  • Experience with Medallion data models
  • Familiarity with Airflow for orchestration
  • Experience with AWS services

Salary (Rate): Negotiable

City: London Area

Country: United Kingdom

Working Arrangements: hybrid

IR35 Status: outside IR35

Seniority Level: Senior

Industry: IT

Detailed Description From Employer:

6-month contract, outside IR35, ASAP start date, two days on site per week.

We’re hiring a Data Engineer to join a large, community-driven marketplace used by millions of buyers and sellers every day. You’ll play a key role in modernising the data platform that supports recommendations, trust and safety, operational insights, and high-volume event processing. The role is heavily focused on building scalable data pipelines, strengthening data quality, and supporting a shift toward a full lakehouse and Medallion architecture. You’ll be working closely with backend teams, so previous experience as a Scala developer is required.
