Summary: The role of Data Engineer involves developing and optimizing SQL queries, designing foundational datasets, and implementing data-driven solutions using Snowflake and Databricks. The position requires extensive experience in data engineering and analytics, with a focus on creating efficient data pipelines and conducting advanced statistical analysis. The role is fully remote and is intended for W2 candidates only. The position is classified as outside IR35.
Salary (Rate): Negotiable
City: undetermined
Country: USA
Working Arrangements: remote
IR35 Status: outside IR35
Seniority Level: undetermined
Industry: IT
Title: Data Engineer
Duration: 8 months
Location: 100% Remote
Key Responsibilities
- Develop and optimize complex batch and near-real-time SQL queries to meet product requirements and business needs.
- Design and implement core foundational datasets that are reusable, scalable, and performant (see the sketch after this list).
- Architect, implement, deploy, and maintain data-driven solutions in Snowflake and Databricks.
- Develop and manage data pipelines supporting multiple reports, tools, and applications.
- Conduct advanced statistical analysis to yield actionable insights, identify correlations/trends, measure performance, and visualize disparate sources of data.
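To give a concrete flavor of the "foundational dataset" work, here is a minimal Snowflake SQL sketch; the database, table, and column names (raw.orders, analytics.fct_daily_orders, and so on) are illustrative assumptions, not details from this posting:

```sql
-- Minimal sketch of a reusable, performant foundational dataset.
-- All object names here are hypothetical examples.
CREATE OR REPLACE TABLE analytics.fct_daily_orders
    CLUSTER BY (order_date)  -- clustering key keeps date-range scans cheap
AS
SELECT
    order_date,
    region,
    COUNT(*)          AS order_count,
    SUM(order_amount) AS total_revenue
FROM raw.orders
WHERE order_status = 'COMPLETE'
GROUP BY order_date, region;
```

Downstream reports, tools, and applications can then query this one curated table rather than re-aggregating the raw data in every query.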
Required Skills and Experience
- 5+ years in operations, supply chain, data engineering, data analytics, and/or database administration.
- Experience in design, implementation, and optimization in Snowflake or other relational SQL databases.
- Experience with data cleansing, curation, mining, manipulation, and analysis from disparate systems.
- Experience with configuration control (GitHub preferred) and Data DevOps practices using GitHub Actions, Jenkins, or other deployment pipelines that provide Continuous Integration and Continuous Delivery (CI/CD).
- Experience with Data Extract, Transform and Load (ETL) processes using SQL as the foundation (see the sketch after this list).
- Experience with database structures and modeling implementations such as third normal form.
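As a hedged illustration of SQL-based ETL against a normalized target, here is a minimal incremental-load sketch using MERGE; the staging and dimension table names (stg_customer, dim_customer) and their columns are assumptions made for the example:

```sql
-- Minimal sketch of an incremental load (the "T" and "L" of ETL in SQL):
-- upsert a staged extract into a third-normal-form target table.
-- All table and column names are hypothetical.
MERGE INTO dim_customer AS tgt
USING stg_customer AS src
    ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET
    customer_name = src.customer_name,
    updated_at    = CURRENT_TIMESTAMP()
WHEN NOT MATCHED THEN INSERT (customer_id, customer_name, updated_at)
    VALUES (src.customer_id, src.customer_name, CURRENT_TIMESTAMP());
```

Because the target keeps one row per customer_id, with non-key attributes depending only on that key, the pattern stays idempotent: re-running the merge after a failed load does not duplicate rows.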
Preferred Skills
- Experience with Manufacturing, Operations, and/or Supply Chain processes and systems
- Experience using MRP/ERP systems (SAP)
- Experience with Python and related libraries such as Pandas for advanced data analytics
- Experience with schema deployment solutions such as SchemaChange or Liquibase
- Working knowledge of Agile Software development methodologies
- Ability to filter, extract, and analyze information from large, complex datasets
- Strong verbal and written communication skills to collaborate cross-functionally
- Experience with data deployment solutions such as Snowflake Tasks, Matillion, SSIS, Azure Data Factory (ADF) or Alteryx
- Experience with Snowflake Streams, Stages, and Snowpipe for data ingestion (a minimal sketch follows this list)
- Experience with VS Code and GitHub Desktop for integrated development
- Experience with web application relational database models and APIs
- Knowledge of statistical modeling and machine learning methods
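Since several of these preferred items are Snowflake-specific, a minimal continuous-ingestion sketch may help anchor them. The stage URL, warehouse, and all object names are invented for illustration; a real pipe would also need a storage integration and cloud event notifications, and the task must be started with ALTER TASK ... RESUME:

```sql
-- Hypothetical end-to-end ingestion sketch: external Stage -> Snowpipe ->
-- landing table -> Stream -> scheduled Task into a curated table.
-- Every object name and the S3 URL are illustrative assumptions.
CREATE STAGE raw_stage URL = 's3://example-bucket/orders/';

CREATE PIPE orders_pipe AUTO_INGEST = TRUE AS
    COPY INTO raw_orders
    FROM @raw_stage
    FILE_FORMAT = (TYPE = 'CSV');

-- The stream records changes to raw_orders, so the task sees only new rows.
CREATE STREAM raw_orders_stream ON TABLE raw_orders;

CREATE TASK load_orders
    WAREHOUSE = etl_wh
    SCHEDULE = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
AS
    INSERT INTO curated_orders (order_id, order_date, order_amount)
    SELECT order_id, order_date, order_amount
    FROM raw_orders_stream;
```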