Summary: The Data Engineer role involves architecting and building data pipelines on Google Cloud Platform (GCP) that integrate structured, semi-structured, and unstructured data sources. The position requires designing and optimizing graph database solutions on Neo4j and developing ETL/ELT workflows, as well as supporting large-scale data ingestion and implementing data governance across cross-functional teams and clients in the EMEA region.
Key Responsibilities:
- Architect and build data pipelines on GCP integrating structured, semi-structured, and unstructured data sources.
- Design, implement, and optimize Graph Database solutions using Neo4j, Cypher queries, and GCP integrations.
- Develop ETL/ELT workflows using Dataflow, Pub/Sub, BigQuery, and Cloud Storage (a minimal pipeline sketch follows this list).
- Design graph models for real-world applications such as fraud detection, network analysis, and knowledge graphs.
- Optimize performance of graph queries and design for scalability.
- Support ingestion of large-scale datasets using Apache Beam, Spark, or Kafka into GCP environments.
- Implement metadata management, security, and data governance using Data Catalog and IAM.
- Collaborate with cross-functional teams and clients across EMEA time zones and varied project settings.
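To make the ETL/ELT responsibility above concrete, the following is a minimal sketch of a streaming pipeline written with the Apache Beam Python SDK, reading JSON events from Pub/Sub and writing them to BigQuery via the Dataflow runner. It is illustrative only: the project, region, bucket, topic, and table names are placeholders, not details of this engagement.

import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder options; swap runner to "DirectRunner" for local testing.
options = PipelineOptions(
    streaming=True,
    runner="DataflowRunner",
    project="example-project",
    region="europe-west2",
    temp_location="gs://example-bucket/tmp",
)

with beam.Pipeline(options=options) as p:
    (
        p
        # Read raw event bytes from a (hypothetical) Pub/Sub topic.
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            topic="projects/example-project/topics/events")
        # Decode and parse each message as JSON.
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        # Stream rows into a (hypothetical) BigQuery table.
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "example-project:analytics.events",
            schema="user_id:STRING,event_type:STRING,ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )

The same Beam code runs locally under the DirectRunner or at scale as a Dataflow job, which is what makes it a common choice for the GCP ingestion work described above.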
Key Skills:
- Experience with Google Cloud Platform (GCP).
- Proficiency in graph database solutions, specifically Neo4j and Cypher queries (see the Cypher sketch after this list).
- Knowledge of ETL/ELT workflows and tools like Dataflow, Pub/Sub, BigQuery, and Cloud Storage.
- Ability to design graph models for applications such as fraud detection and network analysis.
- Experience with large-scale data ingestion tools like Apache Beam, Spark, or Kafka.
- Understanding of metadata management, security, and data governance practices.
- Strong collaboration skills for working with cross-functional teams across diverse time zones.
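As an illustration of the Neo4j and Cypher skills above, here is a short sketch using the official neo4j Python driver to flag accounts that share a device with an account already marked fraudulent, a typical graph pattern in fraud detection. The connection URI, credentials, and data model (Account and Device nodes joined by USED relationships) are assumptions made for the example.

from neo4j import GraphDatabase

# Placeholder connection details for a local Neo4j instance.
driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

# Accounts sharing a device with a known-fraudulent account are
# candidates for review -- a classic two-hop graph traversal.
CYPHER = """
MATCH (flagged:Account {fraudulent: true})-[:USED]->(d:Device)
      <-[:USED]-(suspect:Account)
WHERE suspect <> flagged
RETURN DISTINCT suspect.id AS account_id
"""

with driver.session() as session:
    for record in session.run(CYPHER):
        print(record["account_id"])

driver.close()

A traversal like this takes several self-joins in SQL but is a single pattern match in Cypher, which is the usual argument for graph models in fraud and network analysis.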
Salary (Rate): Negotiable
City: undetermined
Country: United Kingdom
Working Arrangements: undetermined
IR35 Status: undetermined
Seniority Level: undetermined
Industry: IT
Thanks & Regards,
Pooja K | Technical Recruiter UK/EU
AmpsTek Services Limited
Mail ID: pooja.k@ampstek.com