Data Engineer - AWS/Python/SQL/Spark/DBT.

Posted 1 week ago by Initialize IT on JobServe

£400 per day
Undetermined
Remote
Home based/London, UK
Data Infrastructure Engineer urgently needed by a large broadcasting organisation. AWS/Python/SQL/DBT.

Experience needed

Expertise in the main components of continuous data delivery: the setup, design, and delivery of data pipelines, including testing, deploying, monitoring, and maintaining them.

Expertise in data architecture across Software-as-a-Service, Platform-as-a-Service, Infrastructure-as-a-Service, and cloud productivity suites.

Strong engineering background with experience in Python, SQL, and DBT, or similar frameworks used to ship data processing pipelines at scale.

Demonstrated experience with, and solid knowledge of, AWS services: AWS Glue, AWS EMR, Amazon S3, and Amazon Redshift.

Familiarity with Infrastructure-as-Code (IaC) using Terraform or AWS CloudFormation.

Demonstrated experience in data migration projects.

Ability to work with, communicate effectively with, and influence stakeholders across internal and external engineering teams, product development teams, sales operations teams, and external partners and consumers.

DBT (Data Build Tool) is a popular open-source tool for building and managing data pipelines. It is often used in conjunction with tools like AWS Glue and Redshift to create and maintain data transformations; a minimal model sketch follows.
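As a rough illustration only, the sketch below shows what a basic dbt model can look like. The model, source, and column names (stg_events, raw.events, event_id, and so on) are hypothetical examples, not details taken from this role:

    -- models/staging/stg_events.sql
    -- Illustrative dbt model: standardises raw events landed in the warehouse.
    -- dbt renders the Jinja ({{ ... }}) and runs the resulting SQL.
    {{ config(materialized='view') }}

    select
        event_id,
        user_id,
        lower(event_type) as event_type,
        cast(event_timestamp as timestamp) as event_at
    from {{ source('raw', 'events') }}  -- source declared in a sources .yml file
    where event_id is not null

Running dbt run compiles and executes models like this against the target warehouse (for example Redshift); dbt test then validates them.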

Responsibilities

Collaborate on designing and implementing scalable data solutions using a range of new and emerging technologies from the AWS platform.

Demonstrate AWS data expertise when communicating with stakeholders and translating requirements into technical data solutions.

Manage both real-time and batch data pipelines. Our technology stack includes a wide variety of technologies such as Spark, Kafka, AWS Kinesis, Redshift, and DBT (a batch-model sketch follows this list).

Work independently with minimal supervision in remote team configurations.

Automate existing manual processes.
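To illustrate the batch side of such a stack, the sketch below shows a hypothetical dbt incremental model that processes events already landed in the warehouse (for example streamed in via Kinesis). It reuses the illustrative stg_events model above, and every name in it is an assumption rather than a detail of this role:

    -- models/marts/fct_events.sql
    -- Hypothetical incremental model: each scheduled run processes only
    -- rows newer than the latest event already loaded into the target.
    {{ config(materialized='incremental', unique_key='event_id') }}

    select
        event_id,
        user_id,
        event_type,
        event_at
    from {{ ref('stg_events') }}
    {% if is_incremental() %}
      -- {{ this }} is the existing target table in the warehouse
      where event_at > (select max(event_at) from {{ this }})
    {% endif %}

Genuinely real-time paths in a stack like this would typically sit outside DBT, for example Spark Structured Streaming reading from Kafka or Kinesis, with DBT handling the batch transformations in the warehouse.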