AWS Data Engineer (W2 Contract)

Posted Today by Icon Global Technologies LLC

Negotiable

Remote

Summary: The AWS Data Engineer will build and maintain secure, scalable, and automated data pipelines for data ingestion, transformation, and curation. The role emphasizes implementing CI/CD pipelines, automated testing, and monitoring to ensure the integrity and performance of data products, and requires close collaboration with data modelers and architects to align technical solutions with business needs. The ideal candidate has extensive experience in data engineering and AWS services.

Key Responsibilities:

  • Data Pipelines: Develop configuration-driven ingestion and transformation pipelines using AWS Glue, Lambda, and Redshift.
  • Automation: Implement CI/CD pipelines for automated build, test, and deployment of data products.
  • Testing: Establish automated test frameworks for schema validation, data quality, and performance testing.
  • Monitoring: Set up monitoring and observability dashboards for product-level metrics (CloudWatch, custom metrics).
  • Collaboration: Work with data modelers and architects to ensure alignment with business requirements.

Key Skills:

  • Programming: Proficiency in Python, SQL, and shell scripting.
  • AWS Expertise: Hands-on experience with AWS services (Glue, Redshift, S3, Lambda, CloudWatch).
  • Automation: Experience with CI/CD tools (Terraform, Jenkins, AWS CodePipeline).
  • Data Integration: Strong knowledge of ETL/ELT processes and frameworks.
  • Soft Skills: Strong problem-solving, collaboration, and communication skills.

Salary (Rate): undetermined

City: undetermined

Country: undetermined

Working Arrangements: remote

IR35 Status: undetermined

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:
Qualifications:
  • Bachelor’s degree in Computer Science, Data Engineering, or a related field.
  • 5+ years of experience in data engineering and pipeline development.
  • AWS Certified Data Analytics – Specialty or AWS Certified Developer – Associate certification is a plus.