AWS Data Engineer - (Python/PySpark/AWS Services/Unit Testing/CI/CD/GitLab/Banking)

Posted 1 day ago by GIOS Technology

Negotiable
Undetermined
Hybrid
Glasgow, Scotland, United Kingdom

Summary: The role of AWS Data Engineer involves designing and developing scalable cloud-based data solutions, primarily using AWS services. The ideal candidate will possess strong coding skills in Python and PySpark, with experience in building data pipelines and cloud-native architectures. This position requires hands-on expertise in various AWS tools and a commitment to writing clean, testable code. The role is based in Glasgow and requires on-site presence for 2-3 days per week.

Salary (Rate): undetermined

City: Glasgow

Country: United Kingdom

Working Arrangements: hybrid

IR35 Status: undetermined

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

I am hiring for an AWS Data Engineer.

Location: Glasgow

On-site 2–3 days per week

Job Description

We are looking for an experienced AWS Data Engineer with strong hands-on coding skills and expertise in designing scalable cloud-based data solutions. The ideal candidate will be proficient in Python, PySpark, and core AWS services, with a strong background in building robust data pipelines and cloud-native architectures.

Key Responsibilities

  • Design, develop, and maintain scalable data pipelines and ETL workflows using AWS services.
  • Implement data processing solutions using PySpark and AWS Glue (see the sketch after this list).
  • Build and manage infrastructure as code using CloudFormation.
  • Develop serverless applications using Lambda, Step Functions, and S3.
  • Perform data querying and analysis using Athena.
  • Support Data Scientists in model operationalization using SageMaker.
  • Ensure secure data handling using IAM, KMS, and VPC configurations.
  • Containerize applications using ECS.
  • Write clean, testable Python code with strong unit testing practices (a short test sketch appears at the end of this posting).
  • Use GitLab for version control and CI/CD.
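
As an illustration of the PySpark and AWS Glue work listed above, here is a minimal sketch of a Glue ETL job that reads raw CSV from S3, applies a simple transformation, and writes partitioned Parquet that Athena can query. The job arguments, paths, and column names (source_path, target_path, amount, transaction_id) are hypothetical examples, not details taken from this role.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Hypothetical job parameters; names are illustrative only.
args = getResolvedOptions(sys.argv, ["JOB_NAME", "source_path", "target_path"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw CSV data from S3 (path supplied as a job argument).
raw_df = (
    spark.read
    .option("header", "true")
    .csv(args["source_path"])
)

# Example transformation: type the amount column, add a load date,
# and drop duplicate records on a hypothetical business key.
clean_df = (
    raw_df
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("load_date", F.current_date())
    .dropDuplicates(["transaction_id"])
)

# Write partitioned Parquet back to S3 for downstream Athena queries.
(
    clean_df.write
    .mode("overwrite")
    .partitionBy("load_date")
    .parquet(args["target_path"])
)

job.commit()
```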

Key Skills

  • Python
  • PySpark
  • S3
  • Lambda
  • Glue
  • Step Functions
  • Athena
  • SageMaker
  • VPC
  • ECS
  • IAM
  • KMS
  • CloudFormation
  • GitLab
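
On the unit testing and GitLab CI/CD points above, the sketch below shows one way to keep PySpark code testable: the transformation is a small, I/O-free function exercised by a pytest test against a local SparkSession. Function and column names here are illustrative, and in a real repository the function and the test would sit in separate modules.

```python
import pytest
from pyspark.sql import DataFrame, SparkSession, functions as F


def add_load_date(df: DataFrame) -> DataFrame:
    """Append a load_date column so downstream tables can be partitioned by it."""
    return df.withColumn("load_date", F.current_date())


@pytest.fixture(scope="session")
def spark():
    # A local Spark session is enough for unit tests; no cluster or AWS access needed.
    return (
        SparkSession.builder
        .master("local[1]")
        .appName("unit-tests")
        .getOrCreate()
    )


def test_add_load_date_appends_column(spark):
    # Build a tiny in-memory DataFrame and check the transformation's contract.
    df = spark.createDataFrame([(1, "a")], ["id", "value"])
    result = add_load_date(df)
    assert "load_date" in result.columns
    assert result.count() == 1
```

A test like this would typically run as a stage in the GitLab CI pipeline (pytest invoked from .gitlab-ci.yml) before Glue job or Lambda code is deployed; the exact pipeline layout is not specified in this posting.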