Freelance Senior Data Engineer

Posted Today by Publicis Production

Negotiable
Hybrid
London Area, United Kingdom

Summary: The Senior Data Engineer role involves architecting, building, and maintaining scalable cloud-based data solutions, primarily focusing on GCP, Snowflake, and Databricks. The position requires a proactive individual with extensive experience in data engineering, capable of designing data pipelines and ensuring data quality. The candidate will collaborate with various teams to meet data needs and optimize workflows. This role is hybrid, requiring three days on-site and two days remote work.

Key Responsibilities:

  • Architect and maintain robust data pipelines (batch and streaming) integrating internal and external data sources (APIs, structured streaming, message queues, etc.).
  • Collaborate with data analysts, scientists, and software engineers to understand data needs and develop solutions.
  • Understand requirements from operations and product to ensure data and reporting needs are met.
  • Implement data quality checks, data governance practices, and monitoring systems to ensure reliable and trustworthy data.
  • Optimize performance of ETL/ELT workflows and improve infrastructure scalability.

Key Skills:

  • 7+ years of experience in data engineering and solution delivery, with a strong track record of technical leadership.
  • Deep understanding of data modeling, data warehousing concepts, and distributed systems.
  • Excellent problem-solving skills and the ability to independently design, build, and validate output data.
  • Deep proficiency in Python (including PySpark), SQL, and cloud-based data engineering tools.
  • Expertise in multiple cloud platforms (AWS, GCP, or Azure) and managing cloud-based data infrastructure.
  • Strong background in database technologies (SQL Server, Redshift, PostgreSQL, Oracle).
  • Familiarity with machine learning pipelines and MLOps practices.
  • Additional experience with Databricks and specific AWS services such as Glue, S3, and Lambda.
  • Proficient in Git, CI/CD pipelines, and DevOps tools (e.g., Azure DevOps).
  • Hands-on experience with web scraping, REST API integrations, and streaming data pipelines.
  • Knowledge of JavaScript and front-end frameworks (e.g., React).

Salary (Rate): Undetermined

City: London

Country: United Kingdom

Working Arrangements: Hybrid

IR35 Status: Undetermined

Seniority Level: Undetermined

Industry: IT

Detailed Description From Employer:

Senior Data Engineer

Start - ASAP

Duration - 3 months

Rates - TBC

Location - Chancery Lane

Hybrid - 3 days onsite / 2 days remote

We are seeking a proactive and self-motivated Senior Data Engineer with a proven track record of building scalable cloud-based data solutions across multiple cloud platforms to support our work in architecting, building and maintaining the data infrastructure. The initial focus for this role will be GCP; however, experience with Snowflake and Databricks is also required. As a senior member of the data engineering team, you will play a pivotal role in designing scalable data pipelines, optimising data workflows, and ensuring data availability and quality for production technology. The ideal candidate brings deep technical expertise in AWS, GCP and/or Databricks, alongside essential hands-on experience building pipelines in Python, analysing data requirements with SQL, and applying modern data engineering practices. Your ability to work across business and technology functions, drive strategic initiatives, and solve problems independently will be key to success in this role.

Qualifications:

Experience:

  • 7+ years of experience in data engineering and solution delivery, with a strong track record of technical leadership.
  • Deep understanding of data modeling, data warehousing concepts, and distributed systems.
  • Excellent problem-solving skills and the ability to independently design, build, and validate output data.
  • Deep proficiency in Python (including PySpark), SQL, and cloud-based data engineering tools.
  • Expertise in multiple cloud platforms (AWS, GCP, or Azure) and managing cloud-based data infrastructure.
  • Strong background in database technologies (SQL Server, Redshift, PostgreSQL, Oracle).

Desirable Skills:

  • Familiarity with machine learning pipelines and MLOps practices.
  • Additional experience with Databricks and specific AWS services such as Glue, S3, and Lambda.
  • Proficient in Git, CI/CD pipelines, and DevOps tools (e.g., Azure DevOps).
  • Hands-on experience with web scraping, REST API integrations, and streaming data pipelines.
  • Knowledge of JavaScript and front-end frameworks (e.g., React).

Key Responsibilities:

  • Architect and maintain robust data pipelines (batch and streaming) integrating internal and external data sources (APIs, structured streaming, message queues, etc.).
  • Collaborate with data analysts, scientists, and software engineers to understand data needs and develop solutions.
  • Understand requirements from operations and product to ensure data and reporting needs are met.
  • Implement data quality checks, data governance practices, and monitoring systems to ensure reliable and trustworthy data.
  • Optimize performance of ETL/ELT workflows and improve infrastructure scalability.