Summary: The AWS Data Engineer role in London involves designing and developing scalable data pipelines using Python and Apache Spark, and orchestrating workflows with AWS tools. The position requires collaborating with business teams on data-driven solutions and contributing to a lakehouse architecture built on Apache Iceberg. Candidates should have a strong grounding in data engineering principles and be eager to learn the financial indices domain. The role is hybrid and offered on a 12-month fixed-term contract.
Key Responsibilities:
- Designing and developing scalable, testable data pipelines using Python and Apache Spark (see the sketch after this list).
- Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3.
- Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing.
- Contributing to the development of a lakehouse architecture using Apache Iceberg.
- Collaborating with business teams to translate requirements into data-driven solutions.
- Building observability into data flows and implementing basic quality checks.
- Participating in code reviews, pair programming, and architecture discussions.
- Continuously learning about the financial indices domain and sharing insights with the team.
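To make the pipeline work above concrete, here is a minimal PySpark sketch in that spirit. The S3 paths, column names, and the daily index-level aggregation are hypothetical illustrations, not details taken from the actual role.

```python
# Illustrative sketch only - paths, columns, and logic are hypothetical.
from pyspark.sql import DataFrame, SparkSession, Window
from pyspark.sql import functions as F


def daily_index_levels(prices: DataFrame) -> DataFrame:
    """Keep the last trade per index per day as that day's closing level."""
    latest_first = Window.partitionBy("index_id", "trade_date").orderBy(
        F.col("traded_at").desc()
    )
    return (
        prices
        .withColumn("trade_date", F.to_date("traded_at"))
        .withColumn("rank", F.row_number().over(latest_first))
        .filter(F.col("rank") == 1)
        .select("index_id", "trade_date", F.col("price").alias("close_level"))
    )


if __name__ == "__main__":
    spark = SparkSession.builder.appName("daily-index-levels").getOrCreate()
    raw = spark.read.parquet("s3://example-bucket/raw/prices/")  # hypothetical path
    daily_index_levels(raw).write.mode("overwrite").parquet(
        "s3://example-bucket/curated/daily_levels/"
    )
```

Keeping the transformation as a pure DataFrame-to-DataFrame function is what makes the pipeline testable: it can be exercised with pytest against a local SparkSession without touching S3 or Glue.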
Key Skills:
- Expertise in Python, PySpark, and AWS cloud services and components.
- Understanding of data engineering basics: batch processing, schema evolution, and building ETL pipelines.
- Experience with or eagerness to learn Apache Spark for large-scale data processing.
- Familiarity with the AWS data stack (e.g. S3, Glue, Lambda, EMR).
- Ability to write clean, maintainable Python code with type hints, linters, and pytest tests (see the example after this list).
- Enjoyment of learning the business context and working closely with stakeholders.
- Experience working in Agile teams and valuing collaboration.
- Nice-to-haves: Experience with Apache Iceberg, familiarity with CI/CD tools, exposure to data quality frameworks, curiosity about financial markets.
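As a small illustration of the "type hints, linters, and pytest" point above, this is the style of code in question; the `rebase` helper and its test are hypothetical, not part of the role's codebase.

```python
# Illustrative sketch only - the helper and its test are hypothetical.


def rebase(levels: list[float], base: float = 100.0) -> list[float]:
    """Rescale an index series so its first observation equals `base`."""
    if not levels or levels[0] == 0:
        raise ValueError("series must be non-empty with a non-zero first level")
    factor = base / levels[0]
    return [level * factor for level in levels]


def test_rebase_scales_first_level_to_base() -> None:
    assert rebase([50.0, 55.0, 60.0]) == [100.0, 110.0, 120.0]
```

Running `pytest` against this file executes the test as-is; a linter and type checker would round out the toolchain the bullet describes.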
Salary (Rate): Negotiable
City: London
Country: United Kingdom
Working Arrangements: hybrid
IR35 Status: undetermined
Seniority Level: undetermined
Industry: IT
Role – AWS Data Engineer
Location: London, UK
Duration: 12-month FTC
Work Mode: Hybrid
Required skills: Python, PySpark, and AWS cloud services and components
What You'll Bring
Writes clean, maintainable Python code (ideally with type hints, linters, and pytest tests)
Understands data engineering basics: batch processing, schema evolution, and building ETL pipelines (see the sketch after this list)
Has experience with or is eager to learn Apache Spark for large-scale data processing
Is familiar with the AWS data stack (e.g. S3, Glue, Lambda, EMR)
Enjoys learning the business context and working closely with stakeholders
Works well in Agile teams and values collaboration over solo heroics
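On the schema-evolution point above, a minimal sketch of how Spark can absorb additive schema changes when reading a batch of Parquet files; the path is hypothetical.

```python
# Illustrative sketch only - the path is hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("schema-evolution-demo").getOrCreate()

# mergeSchema unions the schemas of all files read; rows from older files
# simply carry nulls in any columns that were added later.
prices = spark.read.option("mergeSchema", "true").parquet(
    "s3://example-bucket/raw/prices/"
)
prices.printSchema()
```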
Nice-to-haves
It’s great (but not required) if you also bring:
Experience with Apache Iceberg or similar table formats (a sketch follows this list)
Familiarity with CI/CD tools like GitLab CI, Jenkins, or GitHub Actions
Exposure to data quality frameworks like Great Expectations or Deequ
Curiosity about financial markets, index data, or investment analytics
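For the Apache Iceberg nice-to-have, a minimal sketch of writing to an Iceberg table with Spark's DataFrameWriterV2 API. The catalog, namespace, and table names are hypothetical, and an Iceberg catalog is assumed to be configured on the session.

```python
# Illustrative sketch only - identifiers are hypothetical, and an Iceberg
# catalog named my_catalog is assumed to be configured on the SparkSession.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("iceberg-demo").getOrCreate()
daily_levels = spark.read.parquet("s3://example-bucket/curated/daily_levels/")

(
    daily_levels.writeTo("my_catalog.indices.daily_levels")
    .using("iceberg")
    .partitionedBy(F.col("trade_date"))
    .createOrReplace()
)
```

The same table identifier can then be read back with `spark.table(...)` or evolved in place, which is the appeal of Iceberg in a lakehouse architecture.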