Data Engineer (GCP)

Posted 6 days ago

Rate: Negotiable
IR35 Status: Outside
Working Arrangement: Hybrid
Location: Greater London

Summary: As a GCP Data Engineer, you will design, build, and operate scalable data pipelines and infrastructure to ensure high-quality data is accessible for analytics and decision-making. Your role involves collaborating with analysts and data scientists to deliver reliable datasets while implementing best practices for data quality and compliance. This position is hybrid, requiring 2-3 days in London, and is classified as outside IR35.

Key Responsibilities:

  • Build and maintain data pipelines for ingestion, transformation, and export across multiple sources and destinations
  • Develop and evolve scalable data architecture to meet business and performance requirements
  • Partner with analysts and data scientists to deliver curated, analysis-ready datasets and enable self-service analytics
  • Implement best practices for data quality, testing, monitoring, lineage, and reliability
  • Optimise workflows for performance, cost, and scalability (e.g., tuning Spark jobs, query optimisation, partitioning strategies)
  • Ensure secure data handling and compliance with relevant data protection standards and internal policies
  • Contribute to documentation, standards, and continuous improvement of the data platform and engineering processes

Key Skills:

  • 5+ years of experience as a Data Engineer, building and maintaining production-grade pipelines and datasets
  • Strong Python and SQL skills with a solid understanding of data structures, performance, and optimisation strategies
  • Familiarity with GCP and its ecosystem: BigQuery, Composer, Dataproc, Cloud Run, Dataplex
  • Hands-on experience with orchestration tools (e.g., Airflow, Dagster, Databricks Workflows) and distributed processing in a cloud environment
  • Experience with analytical data modelling (star and snowflake schemas), data warehousing (DWH), ETL/ELT patterns, and dimensional concepts
  • Experience with data governance concepts: access control, retention, data classification, auditability, and compliance standards
  • Familiarity with CI/CD for data pipelines, IaC (Terraform), and/or DataOps practices
  • Experience building observability for data systems (metrics, alerting, data quality checks, incident response)

Salary (Rate): Negotiable

City: London

Country: United Kingdom

Working Arrangements: hybrid

IR35 Status: outside IR35

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

GCP Data Engineer
Hybrid, 2-3 days - London
Outside IR35

As a Data Engineer, you'll design, build, and operate scalable, reliable data pipelines and data infrastructure. Your work will ensure high-quality data is accessible, trusted, and ready for analytics and data science, powering business insights and decision-making across the company.

What you'll do

  • Build and maintain data pipelines for ingestion, transformation, and export across multiple sources and destinations
  • Develop and evolve scalable data architecture to meet business and performance requirements
  • Partner with analysts and data scientists to deliver curated, analysis-ready datasets and enable self-service analytics
  • Implement best practices for data quality, testing, monitoring, lineage, and reliability
  • Optimise workflows for performance, cost, and scalability (e.g., tuning Spark jobs, query optimisation, partitioning strategies; see the sketch after this list)
  • Ensure secure data handling and compliance with relevant data protection standards and internal policies
  • Contribute to documentation, standards, and continuous improvement of the data platform and engineering processes
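
To make the query-optimisation and partitioning point above concrete, here is a minimal sketch of creating a date-partitioned, clustered BigQuery table with the google-cloud-bigquery Python client. This is an illustration only; the project, dataset, table, and column names are hypothetical and not part of this role's actual stack.

```python
# Minimal sketch: a date-partitioned, clustered BigQuery table created via
# the google-cloud-bigquery client. All names below are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # assumes application-default credentials

table_id = "my-project.analytics.events"  # hypothetical project.dataset.table

schema = [
    bigquery.SchemaField("event_date", "DATE", mode="REQUIRED"),
    bigquery.SchemaField("customer_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("event_type", "STRING"),
]

table = bigquery.Table(table_id, schema=schema)

# Partition by day on event_date so date-filtered queries scan only the
# relevant partitions (lower cost, faster scans).
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_date",
)

# Cluster within each partition to co-locate rows that are commonly
# filtered or joined on these columns.
table.clustering_fields = ["customer_id", "event_type"]

table = client.create_table(table)
print(f"Created {table.full_table_id}")
```

Queries that filter on event_date then prune to the matching partitions, and clustering narrows the scan further for filters on customer_id or event_type; this is the kind of cost and performance lever the bullet above refers to, and the same idea carries over to partitioning strategies in Spark or Hive-style layouts.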

What makes you a great fit

  • 5+ years of experience as a Data Engineer, building and maintaining production-grade pipelines and datasets
  • Strong Python and SQL skills with a solid understanding of data structures, performance, and optimisation strategies
  • Familiarity with GCP and its ecosystem: BigQuery, Composer, Dataproc, Cloud Run, Dataplex
  • Hands-on experience with orchestration tools (e.g., Airflow, Dagster, Databricks Workflows) and distributed processing in a cloud environment (a minimal orchestration sketch follows this list)
  • Experience with analytical data modelling (star and snowflake schemas), data warehousing (DWH), ETL/ELT patterns, and dimensional concepts
  • Experience with data governance concepts: access control, retention, data classification, auditability, and compliance standards
  • Familiarity with CI/CD for data pipelines, IaC (Terraform), and/or DataOps practices
  • Experience building observability for data systems (metrics, alerting, data quality checks, incident response)
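
To make the orchestration and observability expectations concrete, below is a minimal Airflow (2.4+) TaskFlow sketch: a daily DAG with an ingestion step followed by a basic row-count data-quality check that fails the run when the check does not pass. The DAG name, task logic, and threshold are assumptions made for the example, not a description of this team's actual pipelines.

```python
# Minimal sketch of a daily pipeline with a data-quality gate, using the
# Airflow TaskFlow API. Names, logic, and thresholds are illustrative only.
from datetime import datetime

from airflow.decorators import dag, task


@dag(
    dag_id="daily_events_pipeline",  # hypothetical DAG name
    schedule="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
)
def daily_events_pipeline():

    @task
    def ingest() -> int:
        # Placeholder for the real ingestion step (e.g. a load into BigQuery);
        # returns the number of rows loaded for the downstream check.
        rows_loaded = 42_000
        return rows_loaded

    @task
    def quality_check(rows_loaded: int, min_rows: int = 1_000) -> None:
        # Basic volume check: raising fails the task, which is what drives
        # alerting and incident response in most Airflow setups.
        if rows_loaded < min_rows:
            raise ValueError(
                f"Quality check failed: {rows_loaded} rows < {min_rows}"
            )

    quality_check(ingest())


daily_events_pipeline()
```

In practice the ingestion task would typically call BigQuery or Dataproc operators and the check might be delegated to a dedicated data-quality tool, but the overall DAG shape, and the pattern of failing loudly when data looks wrong, stays the same.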