AWS Data Engineer - (S3/Glue/Redshift/SageMaker/Qlik Replicate/Qlik Compose/Databricks/Informatica/SAS/SSIS/Data Warehousing/Data Lake/Data Modelling/Banking/Fintech)
Posted 4 days ago by GIOS Technology
Negotiable
Undetermined
Hybrid
Birmingham, England, United Kingdom
Summary: The AWS Data Engineer role involves designing, developing, and maintaining scalable data engineering solutions on AWS, with a focus on data pipelines for Data Warehouse and Data Lake architectures. The position requires expertise in ETL/ELT processes and data modelling techniques, along with quality assurance and performance optimisation of those pipelines. The role also supports the industrialisation of machine learning models in enterprise environments. The position is based in Birmingham, UK, with a hybrid working arrangement.
Key Responsibilities:
- Design, develop, and maintain scalable AWS-based data engineering solutions.
- Build and optimize data pipelines supporting Data Warehouse, Data Lake, and Lakehouse architectures.
- Develop robust ETL/ELT processes using modern cloud and BI tooling.
- Implement data modelling techniques (Kimball, Data Vault, Lakehouse) for structured and unstructured datasets (see the sketch after this list).
- Ensure quality assurance, test automation, and performance optimisation within data pipelines.
- Support industrialisation and scaling of machine learning models within enterprise environments.
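As an illustration of the kind of pipeline and modelling work these responsibilities describe, here is a minimal PySpark sketch; the S3 paths, table names, and columns are hypothetical placeholders, not details from the posting:

```python
# Minimal PySpark ETL sketch for an AWS data pipeline:
# raw S3 CSV -> deduplicated, Kimball-style dimension written as Parquet.
# All bucket names, paths, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("dim-customer-load").getOrCreate()

# Extract: read the raw landing-zone extract from S3.
raw = spark.read.option("header", True).csv("s3://example-landing/customers/")

# Transform: keep the latest row per business key (customer_id),
# a typical first step before building a slowly changing dimension.
latest = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
dim_customer = (
    raw.withColumn("updated_at", F.to_timestamp("updated_at"))
       .withColumn("row_num", F.row_number().over(latest))
       .filter(F.col("row_num") == 1)
       .drop("row_num")
)

# Load: write partitioned Parquet to the curated lake zone, ready to be
# catalogued by Glue and queried from Redshift Spectrum.
(dim_customer.write.mode("overwrite")
    .partitionBy("country")
    .parquet("s3://example-curated/dim_customer/"))
```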
Key Skills:
- AWS S3
- AWS Glue
- AWS Redshift
- AWS SageMaker
- SQL
- Python
- PySpark
- Scala
- Qlik Replicate
- Qlik Compose
- Databricks
- Informatica
- SAS
- SSIS
- Data Warehousing
- Data Lake
- Lakehouse
- Kimball
- Data Vault
- Test Automation
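To illustrate the test-automation skill above, a minimal data-quality test sketch, assuming pytest and a local Spark session; the schema and rules are hypothetical examples:

```python
# Sketch of automated data-quality tests for a pipeline output,
# runnable locally with pytest and pyspark. The schema and rules
# are hypothetical examples, not taken from the posting.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    # Local Spark session so the tests can run in CI without a cluster.
    return SparkSession.builder.master("local[1]").appName("dq-tests").getOrCreate()

@pytest.fixture
def dim_customer(spark):
    # In a real pipeline this would read the curated output, e.g.
    # spark.read.parquet("s3://example-curated/dim_customer/").
    return spark.createDataFrame(
        [("C001", "UK"), ("C002", "DE")],
        ["customer_id", "country"],
    )

def test_business_key_is_unique(dim_customer):
    dupes = dim_customer.groupBy("customer_id").count().filter("count > 1")
    assert dupes.count() == 0

def test_business_key_is_not_null(dim_customer):
    assert dim_customer.filter("customer_id IS NULL").count() == 0
```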
Salary (Rate): undetermined
City: Birmingham
Country: United Kingdom
Working Arrangements: hybrid (2 days onsite per week)
IR35 Status: undetermined
Seniority Level: undetermined
Industry: IT