Data Engineer

Negotiable
Outside
Remote
USA

Summary: The Data Engineer will collaborate with data architects, analysts, software engineers, and business stakeholders to design, build, and optimize data pipelines and platforms. This role emphasizes maintaining and enhancing data processes with a focus on performance, reliability, and data quality. The ideal candidate will have experience in data engineering and a strong technical background. Responsibilities include developing ETL/ELT pipelines and ensuring data governance and quality standards are met.

Salary (Rate): undetermined

City: undetermined

Country: USA

Working Arrangements: remote

IR35 Status: outside IR35

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

The ideal candidate will work collaboratively with data architects, analysts, software engineers, and business stakeholders to design, build, and optimize data pipelines, data models, and scalable data platforms. This role involves maintaining and enhancing data ingestion, transformation, and storage processes with a strong focus on performance, reliability, and data quality.

Key Responsibilities:

  • Collaborate with architects, analysts, and development teams to understand data requirements and translate them into technical solutions.
  • Design, develop, and maintain scalable ETL/ELT pipelines and data workflows for structured and unstructured data.
  • Build and optimize data models, data lakes, and data warehouses to support reporting, analytics, and machine learning use cases.
  • Troubleshoot and resolve complex data pipeline and performance issues; implement long-term solutions and improvements.
  • Create and maintain technical documentation, data dictionaries, and pipeline monitoring dashboards.
  • Research and integrate new data engineering tools, cloud services, and automation solutions to improve scalability and efficiency.
  • Ensure data security, governance, and quality standards are followed across all data environments.
  • Work closely with platform and DevOps teams to deploy, monitor, and scale data infrastructure in cloud or hybrid environments.
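As a rough illustration of the orchestration behind the pipeline work above, Airflow-style workflows amount to running tasks in dependency order. A minimal sketch using only the Python standard library (the task names are hypothetical, not from this posting):

```python
from graphlib import TopologicalSorter

# Hypothetical daily-pipeline task graph: each task maps to the set of
# upstream tasks it depends on (the shape an Airflow DAG encodes).
tasks = {
    "transform": {"extract"},
    "load_warehouse": {"transform"},
    "data_quality_checks": {"transform"},
    "publish_dashboard": {"load_warehouse", "data_quality_checks"},
}

# static_order() yields a valid execution order: every task appears
# only after all of its dependencies.
order = list(TopologicalSorter(tasks).static_order())
print(order)
```

A real scheduler adds retries, backfills, and monitoring on top of this ordering, which is what tools such as Airflow provide.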

Minimum Requirements:

  • Education: Bachelor's degree in Computer Science, Data Engineering, Information Systems, or related technical field (or equivalent experience).
  • Experience: 1-5 years of hands-on experience designing, building, and supporting data pipelines or data platforms in a production environment.

Technical Skills:

  • Programming/Scripting: Python, SQL, Scala, or Java (Python preferred)
  • Experience with ETL/ELT tools or frameworks (Airflow, DBT, Informatica, Matillion, Glue, etc.)
  • Experience with cloud data services: AWS (Glue, Redshift, S3), Azure (Data Factory, Synapse), or Google Cloud Platform (BigQuery, Dataflow)
  • Strong SQL experience: query optimization, data modeling, stored procedures
  • Experience with streaming or messaging technologies (Kafka, Kinesis, Pub/Sub, etc.)
  • Familiarity with version control (Git) and CI/CD workflows
  • Experience working with relational and/or NoSQL databases (PostgreSQL, MySQL, MongoDB, DynamoDB, etc.)
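To make the ETL/ELT bullets concrete, a toy extract-transform-load pass can be sketched in Python with the standard library (the table and field names are invented for illustration; production pipelines would sit behind a framework such as Airflow or dbt and a cloud warehouse rather than SQLite):

```python
import sqlite3

# Hypothetical extracted feed: raw, string-typed records with one bad row.
RAW_EVENTS = [
    {"user_id": "1", "amount": "19.99", "country": "US"},
    {"user_id": "2", "amount": "oops", "country": "US"},   # malformed
    {"user_id": "3", "amount": "5.00", "country": "CA"},
]

def transform(rows):
    """Type-cast and validate rows, dropping records that fail."""
    clean = []
    for row in rows:
        try:
            clean.append((int(row["user_id"]), float(row["amount"]), row["country"]))
        except (KeyError, ValueError):
            continue  # a real pipeline would route these to a dead-letter table
    return clean

def load(rows, conn):
    """Load validated rows into a warehouse-style SQL table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events (user_id INTEGER, amount REAL, country TEXT)"
    )
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")  # stand-in for Redshift/BigQuery/Snowflake
load(transform(RAW_EVENTS), conn)
total = conn.execute("SELECT SUM(amount) FROM events").fetchone()[0]
print(round(total, 2))
```

The same extract/validate/load shape scales up to the cloud services named above; only the connectors and the orchestration change.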

Preferred Qualifications:

  • Experience with data lake or lakehouse architectures (Delta Lake, Iceberg, Hudi)
  • Experience with Spark, Databricks, Snowflake, or similar platforms
  • Cloud or data certifications (AWS, Azure, Google Cloud Platform, Databricks, Snowflake)
  • Strong analytical and problem-solving skills
  • Ability to work effectively in a collaborative, Agile team environment