AWS Data Engineer - (Python/PySpark/AWS Services/Unit Testing/CI/CD/GitLab/Banking)
Posted Today by GIOS Technology
Negotiable
Undetermined
Onsite
Glasgow, Scotland, United Kingdom
Summary: The role of AWS Data Engineer involves designing and developing scalable cloud-based data solutions, with a focus on hands-on coding and expertise in AWS services. The ideal candidate will utilize Python and PySpark to build robust data pipelines and support data scientists in operationalizing models. This position requires a strong understanding of cloud-native architectures and infrastructure management. The role is based in Glasgow and requires on-site work 2-3 days per week.
Salary (Rate): undetermined
City: Glasgow
Country: United Kingdom
Working Arrangements: on-site
IR35 Status: undetermined
Seniority Level: undetermined
Industry: IT
I am hiring for an AWS Data Engineer role.
Location: Glasgow
2–3 days per week onsite
Job Description
We are looking for an experienced AWS Data Engineer with strong hands-on coding skills and expertise in designing scalable cloud-based data solutions. The ideal candidate will be proficient in Python, PySpark, and core AWS services, with a strong background in building robust data pipelines and cloud-native architectures.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL workflows using AWS services.
- Implement data processing solutions using PySpark and AWS Glue (see the illustrative sketch after this list).
- Build and manage infrastructure as code using CloudFormation.
- Develop serverless applications using Lambda, Step Functions, and S3.
- Perform data querying and analysis using Athena.
- Support Data Scientists in model operationalization using SageMaker.
- Ensure secure data handling using IAM, KMS, and VPC configurations.
- Containerize applications using ECS.
- Write clean, testable Python code with strong unit testing practices (a test sketch appears after the Key Skills list below).
- Use GitLab for version control and CI/CD.
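To give a flavour of the pipeline work listed above, here is a minimal PySpark sketch of the kind of ETL job this role involves. It is an illustration only, not part of the posting; the bucket paths, column names, and job name are hypothetical placeholders.

```python
# Minimal PySpark ETL sketch; all S3 paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def transform(df):
    """Keep completed orders and stamp each row with the load date."""
    return (
        df.filter(F.col("status") == "COMPLETED")
          .withColumn("load_date", F.current_date())
    )


def run(input_path: str, output_path: str) -> None:
    """Read raw data, apply the transformation, and write the curated output."""
    spark = SparkSession.builder.appName("orders-etl").getOrCreate()
    raw = spark.read.parquet(input_path)
    transform(raw).write.mode("overwrite").parquet(output_path)


if __name__ == "__main__":
    # Hypothetical bucket names for illustration only.
    run("s3://example-raw-bucket/orders/", "s3://example-curated-bucket/orders/")
```

In practice a job like this would typically run as an AWS Glue job or on another managed Spark runtime, with the input and output paths supplied as job parameters rather than hard-coded.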
Key Skills
- Python
- PySpark
- S3
- Lambda
- Glue
- Step Functions
- Athena
- SageMaker
- VPC
- ECS
- IAM
- KMS
- CloudFormation
- GitLab
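As a sketch of the unit-testing practices mentioned in the responsibilities, the hypothetical test below exercises a small transformation with pytest and a local SparkSession, so it can run in a GitLab CI job without any AWS access. The function, fixture, and column names are assumptions for illustration, not part of the posting.

```python
# Hypothetical pytest unit test for a small PySpark transformation,
# runnable locally or in CI without touching AWS.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def add_load_date(df):
    """Transformation under test: stamp each row with the current load date."""
    return df.withColumn("load_date", F.current_date())


@pytest.fixture(scope="module")
def spark():
    session = (
        SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()
    )
    yield session
    session.stop()


def test_add_load_date_adds_column(spark):
    df = spark.createDataFrame([("o-1", "COMPLETED")], ["order_id", "status"])
    result = add_load_date(df)
    assert "load_date" in result.columns
    assert result.count() == 1
```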