Data Engineer

Posted 2 days ago

Rate: Negotiable
IR35 Status: Outside IR35
Working Arrangements: Remote
Country: USA

Summary: We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. The ideal candidate will possess strong expertise in ETL processes, data warehousing, cloud platforms, and big data technologies to ensure efficient data flow across systems. This role requires collaboration with various stakeholders to deliver high-quality data solutions while maintaining data integrity and governance.

Key Responsibilities:

  • Design, develop, and maintain data pipelines and ETL workflows to collect, process, and transform large datasets.
  • Collaborate with data analysts, scientists, and business stakeholders to understand data requirements and deliver high-quality data solutions.
  • Implement and optimize data models, data lakes, and data warehouses.
  • Ensure data quality, integrity, and governance through validation, monitoring, and documentation.
  • Work with large-scale structured and unstructured data from multiple sources.
  • Optimize data processing for performance and scalability.
  • Deploy and manage data pipelines using CI/CD practices.
  • Troubleshoot data issues and enhance existing data workflows.

Key Skills:

  • Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field.
  • 9+ years of experience in data engineering or a related role.
  • Strong experience with SQL and database technologies (e.g., Snowflake, Redshift, BigQuery, Synapse, or PostgreSQL).
  • Hands-on experience with ETL tools such as Informatica, Talend, AWS Glue, Azure Data Factory, or dbt.
  • Proficiency in Python, Scala, or Java for data processing.
  • Experience with big data frameworks (e.g., Apache Spark, Hadoop, Kafka).
  • Strong understanding of cloud data platforms (AWS, Azure, or Google Cloud Platform).
  • Familiarity with data warehousing concepts, star/snowflake schemas, and dimensional modeling.
  • Experience with version control (Git) and CI/CD pipelines.
  • Good understanding of data governance, security, and compliance standards.

Salary (Rate): undetermined

City: undetermined

Country: USA

Working Arrangements: remote

IR35 Status: outside IR35

Seniority Level: undetermined

Industry: IT
