Data Engineer

Posted 1 week ago by Access Computer Consulting Plc

£500 per day · Inside IR35 · Hybrid · Glasgow, Scotland, UK

Summary: This Data Engineer role, based in Glasgow, involves developing data pipelines and data warehousing solutions primarily in Python and with cloud services such as Databricks. It requires expertise in ETL processes and proficiency with Snowflake or similar cloud-based data warehousing solutions. The role falls inside IR35, so the contract must be worked through an umbrella company. Working arrangements are hybrid, with three days on-site and two days remote.

Key Responsibilities:

  • Develop data pipelines and data warehousing solutions using Python and relevant libraries.
  • Utilize cloud services, particularly Databricks, for building and managing scalable data pipelines.
  • Design efficient, scalable, and reliable ETL processes in collaboration with cross-functional teams.
  • Develop and deploy ETL jobs that extract and transform data from various sources.
  • Work with Snowflake or similar cloud-based data warehousing solutions.
  • Handle data development in complex environments with large data volumes.

Key Skills:

  • Several years of experience in developing data pipelines and data warehousing solutions.
  • Proficiency in Python and libraries such as Pandas, NumPy, and PySpark.
  • Hands-on experience with cloud services, especially Databricks.
  • Expertise in ETL processes.
  • Experience with Snowflake or similar cloud-based data warehousing solutions.
  • Ability to work in complex data environments with large data volumes.

Salary (Rate): £500 per day

City: Glasgow

Country: UK

Working Arrangements: hybrid

IR35 Status: inside IR35

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

I am recruiting for a Data Engineer to be based in Glasgow three days a week, with two days remote.

The role falls inside IR35 so you will need to work through an umbrella company for the duration of the contract.

You must have several years of experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, PySpark, etc.

You will also have several years of hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines.

ETL process expertise is essential.

Proficiency in working with Snowflake or similar cloud-based data warehousing solutions is also essential.

Experience developing data solutions in highly complex environments with large data volumes is also required.

You will be responsible for collaborating with cross-functional teams to understand data requirements and for designing efficient, scalable, and reliable ETL processes using Python and Databricks.

You will also develop and deploy ETL jobs that extract data from various sources and transform it to meet business needs.
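
By way of illustration only, below is a minimal sketch of the kind of ETL job described above, written with PySpark as it might run on Databricks. This is not the employer's actual pipeline: the source path, column names, and target table (raw_orders.csv, order_date, amount, analytics.orders_clean) are hypothetical placeholders.

    # Minimal PySpark ETL sketch: extract from a file source, apply basic
    # transformations, and load into a managed table. Paths, columns, and
    # table names are illustrative placeholders only.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` is already provided

    # Extract: read raw CSV data from a hypothetical landing location.
    raw = (
        spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("/mnt/landing/raw_orders.csv")
    )

    # Transform: parse dates, drop malformed rows, and derive a business field.
    clean = (
        raw
        .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
        .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
        .withColumn("order_year", F.year("order_date"))
    )

    # Load: write the result to a Delta table for downstream consumers.
    (
        clean.write
        .format("delta")
        .mode("overwrite")
        .saveAsTable("analytics.orders_clean")
    )

In practice a job of this shape would be scheduled as a Databricks job or notebook task, with the transformations driven by the business requirements gathered from the cross-functional teams mentioned above.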

Please apply ASAP if this is of interest.