Senior Data Engineer

Posted 3 days ago by Vallum Associates

Negotiable
Outside IR35
Remote
England, United Kingdom

Summary: The Senior Data Engineer role is a contract position focused on delivering data-rich software and contributing to the architectural design and implementation of data-centric products. It requires strong expertise in AWS, Databricks, and PySpark, particularly for data migration and enterprise-scale data platforms. The role is remote and outside IR35, with an emphasis on building clean, scalable data engineering solutions.

Key Responsibilities:

  • Deliver data-rich software and contribute to architectural design and technical approach.
  • Develop data-centric products from ingest to egress via pipelines, data warehousing, and integrations.
  • Design and implement data migrations at scale.
  • Build serverless data workflows using AWS Lambda.

Key Skills:

  • Strong experience with AWS services such as S3, Lambda, EMR/Glue, Athena, and Redshift.
  • Proficiency in PySpark for data processing and transformation.
  • Experience working on enterprise-scale data platforms.
  • Proactive mindset focused on clean, scalable data engineering solutions.

Salary (Rate): undetermined

City: undetermined

Country: United Kingdom

Working Arrangements: remote

IR35 Status: outside IR35

Seniority Level: Senior

Industry: IT

Detailed Description From Employer:

Senior Data Engineer - Contract - AWS, Databricks, PySpark, Data Migration

6 months rolling contract

Remote based

Outside IR35

Must have strong AWS, data migration, PySpark, and Databricks experience

As a Senior Data Engineer you will deliver data-rich software and contribute to the architectural design, technical approach and implementation mechanisms adopted by the team. You will be directly involved in the development of data-centric products from ingest to egress via pipelines, data warehousing, cataloguing, integrations and other mechanisms.

Senior Data Engineer - What We’re Looking For

  • Experience designing and implementing data migrations at scale
  • Experience working on enterprise-scale data platforms
  • Proficiency in AWS services such as S3, Lambda, EMR/Glue, Athena, and Redshift
  • Strong PySpark skills for data processing and transformation
  • Ability to build serverless data workflows using AWS Lambda
  • A proactive mindset focused on clean, scalable data engineering solutions