Data Engineer

Posted Today by eTeam Workforce Limited

Summary: The Data Engineer role involves building and maintaining systems for data collection, storage, processing, and analysis, ensuring data accuracy, accessibility, and security. The position requires a strong technical background in data architectures and proficiency with Python, PySpark, SQL, and Core Java, with AWS experience a plus. The role is based in Glasgow with a hybrid working arrangement of 2-3 days onsite. The contract is set to last until December 31, 2025.

Key Responsibilities:

  • Building and maintaining data architecture pipelines that enable the transfer and processing of durable, complete and consistent data (a brief sketch of this kind of work follows this list).
  • Designing and implementing data warehouses and data lakes that handle the appropriate data volumes and velocity and adhere to the required security measures.
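
The posting itself includes no code; the sketch below is purely illustrative of the pipeline work described above. It shows a minimal PySpark batch job that deduplicates and validates raw events before landing them, partitioned, in a data lake. All bucket paths, column names, and the dedupe key are assumptions, not details from the posting.

```python
# Hypothetical PySpark batch pipeline: read raw events, enforce
# completeness and consistency, and land them durably in a data lake.
# Paths, columns, and formats are invented for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

# Ingest raw data (placeholder path, not from the posting).
raw = spark.read.json("s3://example-bucket/raw/events/")

clean = (
    raw.dropDuplicates(["event_id"])            # consistent: no duplicate events
       .filter(F.col("event_ts").isNotNull())   # complete: require a timestamp
       .withColumn("event_date", F.to_date("event_ts"))
)

# Durable, partitioned landing zone (Parquet as an assumed format).
(clean.write
      .mode("append")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/events/"))
```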

Key Skills:

  • Experience in Python, PySpark and SQL
  • Experience with AWS is a plus
  • Strong proficiency in Core Java, including Collections, Concurrency, and Memory Management.
  • Proficient in version control systems such as Git, GitLab, or Bitbucket.
  • Strong background in performance tuning, profiling, and resolving production issues in distributed systems (see the brief example after this list).
  • Working knowledge of Agile methodologies and collaborative development practices.
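
As an illustration of the performance-tuning and profiling skills listed above (again hypothetical, not from the posting), the sketch below inspects a Spark physical plan and then reduces shuffle cost on a large join, first by repartitioning both sides on the join key and then by broadcasting a small dimension table. Table paths, names, and the join key are invented.

```python
# Hypothetical tuning example: profile a join plan, then cut shuffle
# cost by repartitioning on the join key or broadcasting a small table.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/curated/orders/")
users = spark.read.parquet("s3://example-bucket/curated/users/")

joined = orders.join(users, on="user_id", how="inner")
joined.explain()  # profile: print the physical plan to spot wide shuffles

# Option 1: co-locate matching rows by repartitioning both sides on the key.
tuned = (orders.repartition(200, "user_id")
               .join(users.repartition(200, "user_id"), on="user_id"))

# Option 2: if users is small, broadcast it to avoid the shuffle entirely.
tuned_broadcast = orders.join(broadcast(users), on="user_id")
```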

Salary (Rate): £358/Day

City: Glasgow

Country: UK

Working Arrangements: hybrid

IR35 Status: inside IR35

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

Role Title: Data Engineer
Location: Glasgow - 2-3 days onsite
Duration: until 31/12/2025

Role Description:
To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes, so that all data is accurate, accessible, and secure.

Accountabilities:

  1. Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete and consistent data.
  2. Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.

Skillset Required:

  1. Experience in Python, PySpark and SQL
  2. Experience with AWS is a plus
  3. Strong proficiency in Core Java, including Collections, Concurrency, and Memory Management.
  4. Proficient in version control systems such as Git, GitLab, or Bitbucket.
  5. Strong background in performance tuning, profiling, and resolving production issues in distributed systems.
  6. Working knowledge of Agile methodologies and collaborative development practices.