£350 per day
Inside
Hybrid
Glasgow
Summary: The Data Engineer role involves working in Glasgow with a hybrid arrangement of three days on-site and two days remote. The position requires several years of experience in data engineering and analytics, particularly within the banking and financial services sector. Candidates must be proficient in SQL/PLSQL and have hands-on experience building data pipelines and warehousing solutions using Python and related libraries. The role is classified as inside IR35, so work must be carried out through an umbrella company.
Key Responsibilities:
- Support Software Engineering, Data Engineering, or Data Analytics projects.
- Develop data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, and PySpark.
- Write ad-hoc and complex SQL/PLSQL queries for data analysis.
- Work with structured, semi-structured, and unstructured data, integrating with various data stores (e.g., RDBMS, NoSQL DBs, Document DBs).
- Develop solutions in a hybrid data environment (on-prem and cloud).
Key Skills:
- Experience in Banking/Financial Services.
- Proficiency in SQL/PLSQL.
- Hands-on experience with Python and data libraries (Pandas, NumPy, PySpark).
- Experience in developing data pipelines and data warehousing solutions.
- Ability to work with complex data environments and large data volumes.
Salary (Rate): £350/day
City: Glasgow
Country: United Kingdom
Working Arrangements: hybrid
IR35 Status: inside IR35
Seniority Level: undetermined
Industry: IT
I am recruiting for a Data Engineer to work in Glasgow three days a week, with two days remote. The role falls inside IR35, so you will need to work through an umbrella company. Banking/financial services experience is required. You will have several years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects, along with experience developing data solutions in highly complex environments with large data volumes. You will have SQL/PLSQL experience and the ability to write ad-hoc and complex queries for data analysis, plus experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, and PySpark. You will be able to develop solutions in a hybrid data environment (on-prem and cloud), with hands-on experience building data pipelines for structured, semi-structured, and unstructured data and integrating with their supporting stores (e.g. RDBMS, NoSQL DBs, Document DBs, log files, etc.). Please apply ASAP to find out more!