Rate: Negotiable
Location: London, UK
Summary: We are looking for a Senior Data Engineer with insurance domain experience to enhance our data infrastructure. The candidate should possess strong skills in SAS, AWS, Python, SQL, and PySpark, particularly with Amazon Redshift for data warehousing. Responsibilities include implementing scalable data pipelines and ensuring high data quality and performance. Collaboration with cross-functional teams is essential to deliver effective data solutions for analytics and reporting.
Key Responsibilities:
- Design, develop, and optimize data infrastructure.
- Implement scalable data pipelines and cloud-based data solutions.
- Ensure high data quality, performance, and reliability.
- Design and maintain ETL/ELT processes.
- Optimize data models in Redshift.
- Leverage PySpark for large-scale data processing within AWS environments.
- Collaborate with cross-functional teams to deliver robust data solutions.
Key Skills:
- Strong hands-on experience with SAS, AWS, Python, SQL, and PySpark.
- Experience working with Amazon Redshift for data warehousing solutions.
- Proactive problem-solving skills.
- Solid understanding of data architecture and distributed data processing.
- Knowledge of cloud best practices.
Salary (Rate): Negotiable
City: London
Country: UK
Working Arrangements: undetermined
IR35 Status: undetermined
Seniority Level: Senior
Industry: IT
Must have: insurance domain experience.
We are seeking a skilled and experienced Senior Data Engineer to join our team and support the design, development, and optimisation of our data infrastructure. The ideal candidate will have strong hands-on experience with SAS, AWS, Python, SQL, and PySpark, along with experience working with Amazon Redshift for data warehousing solutions.
They will be responsible for implementing scalable data pipelines, developing cloud-based data solutions, and ensuring high data quality, performance, and reliability. The role involves designing and maintaining ETL/ELT processes, optimising data models in Redshift, and leveraging PySpark for large-scale data processing within AWS environments.
This position requires a proactive problem-solver with a solid understanding of data architecture, distributed data processing, and cloud best practices. You will collaborate closely with cross-functional teams to deliver robust data solutions that power analytics, reporting, and business insights.
Craft & Skills:
- SAS subject-matter expert (SME)
- Solution design