Summary: Join a dynamic Digital Transformation Consultancy as a Data Engineer, where you will design and implement robust ETL pipelines for high-profile government clients. This remote role requires occasional visits to London and offers the chance to work with cutting-edge big data technologies. You will collaborate with data architects and scientists to deliver innovative, data-driven solutions in a fast-paced environment. The position is outside IR35, with a competitive daily rate.
Salary (Rate): £350 per day
City: London
Country: UK
Working Arrangements: Remote
IR35 Status: Outside IR35
Seniority Level: Undetermined
Industry: IT
Data Engineer
Location: Remote, with occasional visits to London
Rate: £350 per day, Outside IR35
Start Date: ASAP
About the Role
Join a dynamic Digital Transformation Consultancy as a Data Engineer and play a pivotal role in delivering innovative, data-driven solutions for high-profile government clients. You'll be responsible for designing and implementing robust ETL pipelines, leveraging cutting-edge big data technologies, and driving excellence in cloud-based data engineering.
This role offers the opportunity to work with leading technologies, collaborate with data architects and scientists, and make a significant impact in a fast-paced, challenging environment.
Key Responsibilities:
- Design, implement, and debug ETL pipelines to process and manage complex datasets.
- Leverage big data tools, including Apache Kafka, Spark, and Airflow, to deliver scalable solutions.
- Collaborate with stakeholders to ensure data quality and alignment with business goals.
- Utilise programming expertise in Python, Scala, and SQL for efficient data processing.
- Build data pipelines using cloud-native services on AWS, including Lambda, Glue, Redshift, and API Gateway.
- Monitor and optimise data solutions using AWS CloudWatch and other tools.
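To illustrate the extract-transform-load pattern at the heart of these responsibilities, here is a minimal sketch in plain Python. The table name, fields, and quality checks are hypothetical, and sqlite3 stands in for a warehouse such as Redshift; a production pipeline would use Spark or Glue as described above.

```python
import sqlite3

def extract(rows):
    """Extract: pull raw records (an in-memory list stands in for a source system)."""
    return rows

def transform(rows):
    """Transform: drop malformed records and normalise field names and types."""
    cleaned = []
    for r in rows:
        if r.get("id") is None or r.get("amount") is None:
            continue  # discard records that fail basic data-quality checks
        cleaned.append((int(r["id"]), float(r["amount"])))
    return cleaned

def load(rows, conn):
    """Load: write cleaned records to the warehouse (sqlite as a stand-in)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER PRIMARY KEY, amount REAL)"
    )
    conn.executemany("INSERT OR REPLACE INTO sales VALUES (?, ?)", rows)
    conn.commit()

raw = [{"id": 1, "amount": "19.99"}, {"id": None, "amount": "5"}, {"id": 2, "amount": 7}]
conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone())
```

The record with a missing `id` is filtered out during the transform step, so only two rows reach the target table.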
What We're Looking For:
- Proven hands-on experience in data engineering projects
- Strong hands-on experience designing, implementing, and debugging ETL pipelines
- Expertise in Python, PySpark, and SQL
- Expertise with Spark and Airflow
- Experience designing data pipelines using cloud-native services on AWS
- Extensive knowledge of AWS services such as API Gateway, Lambda, Redshift, Glue, and CloudWatch
- Infrastructure-as-Code (IaC) experience deploying AWS resources with Terraform
- Hands-on experience setting up CI/CD workflows using GitHub Actions
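As a sketch of the last two requirements combined, a GitHub Actions workflow might apply Terraform-managed AWS resources on each push to main. The workflow name, branch, and steps below are hypothetical, and it assumes AWS credentials are already configured as repository secrets:

```yaml
name: deploy-pipeline  # hypothetical workflow name
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      # Assumes AWS credentials are supplied via repository secrets
      - run: terraform init
      - run: terraform plan -out=tfplan
      - run: terraform apply -auto-approve tfplan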
Why Join Us?
- Be part of a forward-thinking consultancy driving digital transformation for industry leaders.
- Work with the latest big data and cloud technologies.
- Collaborate with a team of skilled professionals in a fast-paced and rewarding environment.
If you're passionate about delivering impactful data solutions and meet the criteria for this role, we'd love to hear from you. Apply today and lead the way in digital transformation!