Data Engineer

Posted 1 day ago

Negotiable
Outside
Remote
USA

Summary: The Data Engineer role involves building and optimizing data pipelines and architectures for analytics and operational use, requiring expertise in data integration, transformation, and modeling with modern big data and cloud technologies. The position is remote and seeks a candidate with over six years of experience in the field.

Salary (Rate): negotiable

City: undetermined

Country: USA

Working Arrangements: remote

IR35 Status: outside IR35

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

Job Title: Data Engineer
Location: Remote
Experience: 6+ years

Job Description:
We are seeking a skilled Data Engineer to build and optimize data pipelines and architectures for analytics and operational use. The ideal candidate will be experienced in data integration, transformation, and modeling using modern big data and cloud technologies.

Key Responsibilities:

  • Design, build, and maintain scalable data pipelines.
  • Develop ETL/ELT processes to ingest data from various sources.
  • Work with structured and unstructured data.
  • Implement data quality and data governance frameworks.
  • Optimize data systems for performance and reliability.
  • Collaborate with data scientists, analysts, and other engineers.
  • Ensure compliance with data security and privacy standards.
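
To give candidates a concrete sense of the pipeline work above, here is a minimal, illustrative ETL sketch (not part of the posting itself): extract rows from a CSV source, transform them by dropping incomplete records and casting types, and load the result into a database. All names and the sample data are hypothetical, and SQLite stands in for a real warehouse target.

```python
# Illustrative ETL sketch: extract -> transform -> load.
# RAW_CSV, the table schema, and all names are hypothetical examples.
import csv
import io
import sqlite3

RAW_CSV = """order_id,amount,country
1,19.99,US
2,,US
3,42.50,CA
"""

def extract(text):
    """Extract: parse CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows with missing amounts, cast types."""
    return [
        (int(r["order_id"]), float(r["amount"]), r["country"])
        for r in rows
        if r["amount"]
    ]

def load(conn, records):
    """Load: idempotent write into the target table (safe to re-run)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER PRIMARY KEY, amount REAL, country TEXT)"
    )
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(conn, transform(extract(RAW_CSV)))
print(conn.execute("SELECT COUNT(*), ROUND(SUM(amount), 2) FROM orders").fetchone())
# -> (2, 62.49): the row with a missing amount was filtered out
```

In production, the same extract/transform/load steps would typically be wrapped as tasks in an orchestrator such as Apache Airflow rather than run as a script.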

Must-Have Skills:

  • Strong SQL and data modeling experience
  • Python or Scala for data processing
  • ETL and orchestration tools (e.g., Apache Airflow, Informatica, Talend)
  • Big Data frameworks (e.g., Apache Spark, Hadoop)
  • Cloud platforms (AWS, Google Cloud Platform, or Azure), especially services such as S3, Redshift, BigQuery, or Azure Data Lake
  • Data warehouse technologies (Snowflake, Redshift, BigQuery)
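
As an illustration of the SQL and data-modeling skills listed above, the sketch below builds a tiny star schema (a fact table joined to a dimension table) and runs a typical analytical aggregate. The table names and sample data are hypothetical; SQLite is used only so the example is self-contained.

```python
# Hypothetical star-schema sketch: fact_sales keyed to dim_customer,
# queried with the kind of aggregate join common in warehouse work.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    name TEXT,
    region TEXT
);
CREATE TABLE fact_sales (
    sale_id INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    amount REAL
);
INSERT INTO dim_customer VALUES (1, 'Acme', 'US'), (2, 'Globex', 'EU');
INSERT INTO fact_sales VALUES (10, 1, 100.0), (11, 1, 50.0), (12, 2, 75.0);
""")

# Revenue by region: join the fact table to its dimension and aggregate.
rows = conn.execute("""
    SELECT d.region, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_customer d USING (customer_key)
    GROUP BY d.region
    ORDER BY d.region
""").fetchall()
print(rows)  # -> [('EU', 75.0), ('US', 150.0)]
```

The same dimensional pattern carries over directly to Snowflake, Redshift, or BigQuery; only the DDL dialect changes.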

Good-to-Have Skills:

  • Experience with streaming data (Kafka, Kinesis)
  • Version control (Git)
  • Containerization tools (Docker, Kubernetes)
  • Experience with orchestration tools and CI/CD pipelines
  • Data governance tools (e.g., Apache Atlas, Collibra)