Data Engineer with DBT

Posted 2 days ago

Negotiable
Outside
Remote
USA

Summary: The role of Data Engineer with DBT involves developing and maintaining data pipelines for both streaming and batch processing, primarily utilizing Snowflake as the data warehouse. The position requires extensive experience in data engineering, ETL processes, and collaboration with engineering and DevOps teams. The candidate should possess strong coding skills in Python and SQL, along with hands-on experience in various cloud platforms and business intelligence tools. This is a long-term remote position with a focus on large-scale data analytics use cases.

Key Responsibilities:

  • Develop data pipelines for streaming and batch data processing to move data in and out of the Snowflake data warehouse
  • Collaborate with engineering and DevOps team members to implement, test, deploy, and operate data pipelines and ETL solutions
  • Hands-on experience with Snowflake, including schema design, query optimization, and data load techniques
  • Hands-on reporting experience leveraging Business Intelligence tools such as Looker, Qlik, Tableau, Power BI, etc.
  • Experience with DBT, including model development, testing, and documentation
  • Optimize data pipelines, ETL processes, and data integrations for large-scale data analytics use cases
  • Experience in building and maintaining data pipelines and ETL/ELT processes in data-intensive organizations
  • Design, build, and maintain scalable data pipelines using Snowflake and DBT
  • Develop and manage ETL processes to ingest data from various sources into Snowflake
  • Strong coding skills with Python and SQL for manipulating and analyzing data
  • Strong experience with Anaconda and Jupyter Notebooks
  • Hands-on experience with data movement using Snowpipe, SnowSQL, etc. (see the loading sketch after this list)
  • Able to build data integrations and ingestion pipelines for streaming and batch data
  • Experience in designing and implementing operational, production-grade, large-scale data pipelines, ETL/ELT, and data integration solutions
  • Hands-on experience with cloud platforms such as AWS and Google Cloud
  • Proficiency with Kafka, AWS S3, SQS, Lambda, Pub/Sub, AWS DMS, Glue
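For illustration only, here is a minimal sketch of the kind of batch data movement described above: loading a local file into Snowflake through the Python connector, using the same PUT/COPY INTO statements one would otherwise run interactively via SnowSQL. The connection parameters, file path, and table name are hypothetical placeholders, not details from this posting.

    # Minimal sketch: batch-loading a local CSV into Snowflake with the
    # Python connector. All identifiers below are hypothetical.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",      # hypothetical account identifier
        user="etl_user",
        password="***",            # in practice, from a secrets manager
        warehouse="LOAD_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    try:
        cur = conn.cursor()
        # Upload the file to the table stage, then COPY it into the target
        # table -- the same statements you would run through SnowSQL.
        cur.execute("PUT file:///tmp/orders.csv @%ORDERS AUTO_COMPRESS=TRUE")
        cur.execute(
            "COPY INTO ORDERS FROM @%ORDERS "
            "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1) PURGE = TRUE"
        )
    finally:
        conn.close()

In a production pipeline this kind of load would typically be scheduled (or replaced by Snowpipe auto-ingest from cloud storage) rather than run ad hoc.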

Key Skills:

  • 12-14 years of experience in data engineering
  • 10 years of hands-on experience with Snowflake
  • 8 years of experience with Business Intelligence tools
  • Experience with DBT
  • Strong coding skills in Python and SQL
  • Experience with Anaconda and Jupyter Notebook
  • Experience with cloud platforms (AWS, Google Cloud)
  • Proficiency with Kafka, AWS S3, SQS, Lambda, Pub/Sub, AWS DMS, Glue
  • Familiarity with API security frameworks
  • Background in healthcare data is a plus

Salary (Rate): undetermined

City: undetermined

Country: USA

Working Arrangements: remote

IR35 Status: outside IR35

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

Role: Data Engineer with DBT

Duration: Long Term

Location: Remote (EST working hours)

  • 12-14+ years of experience; develop data pipelines for streaming and batch data processing to move data in and out of the Snowflake data warehouse
  • Collaborate with engineering and DevOps team members to implement, test, deploy, and operate data pipelines and ETL solutions
  • 10 years of hands-on experience with Snowflake, including schema design, query optimization, and data load techniques
  • 8 years of hands-on reporting experience leveraging Business Intelligence tools such as Looker, Qlik, Tableau, Power BI, etc.
  • Experience with DBT, including model development, testing, and documentation
  • Optimize data pipelines, ETL processes, and data integrations for large-scale data analytics use cases
  • 7+ years of experience in building and maintaining data pipelines and ETL/ELT processes in data-intensive organizations
  • Design, build, and maintain scalable data pipelines using Snowflake and DBT.
  • Develop and manage ETL processes to ingest data from various sources into Snowflake.
  • Strong coding skills with Python and SQL for manipulating and analyzing data
  • Strong experience with Anaconda and Jupyter Notebooks
  • Hands-on experience with data movement using Snowpipe, SnowSQL, etc.
  • Able to build data integrations and ingestion pipelines for streaming and batch data
  • 5 years of experience designing and implementing operational, production-grade, large-scale data pipelines, ETL/ELT, and data integration solutions
  • Hands-on experience with cloud platforms such as AWS and Google Cloud
  • Proficiency with Kafka, AWS S3, SQS, Lambda, Pub/Sub, AWS DMS, and Glue (a streaming ingestion sketch follows this list)
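As a rough sketch of the streaming side of the role, the snippet below consumes events from a Kafka topic and lands them in S3 as micro-batches, from where Snowpipe or a Glue job could continue the ingestion into Snowflake. The broker, topic, and bucket names are hypothetical placeholders.

    # Minimal sketch of a streaming ingestion step: Kafka -> S3 micro-batches.
    # Broker, topic, and bucket names are hypothetical.
    import time
    import uuid

    import boto3
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "broker:9092",   # hypothetical broker
        "group.id": "orders-to-s3",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["orders"])             # hypothetical topic

    s3 = boto3.client("s3")
    batch, last_flush = [], time.time()

    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is not None and msg.error() is None:
                batch.append(msg.value().decode("utf-8"))
            # Flush a micro-batch every 60 seconds (a real pipeline would
            # also flush on batch size and handle message errors explicitly).
            if batch and time.time() - last_flush > 60:
                key = f"raw/orders/{uuid.uuid4()}.json"
                s3.put_object(Bucket="my-landing-bucket", Key=key,
                              Body="\n".join(batch).encode("utf-8"))
                consumer.commit()
                batch, last_flush = [], time.time()
    finally:
        consumer.close()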

Preferred (Good to Have):

  • Familiarity with API security frameworks, token management, and user access control, including OAuth and JWT (see the token-validation sketch below)
  • Background in healthcare data, especially patient-centric clinical data and provider data, is a plus
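For the API security point above, a minimal, hypothetical sketch of JWT-based access control using PyJWT is shown below; the signing secret, issuer, and scope claim are placeholders and would normally come from the organisation's identity provider.

    # Minimal sketch of JWT-based access control for a data API (PyJWT).
    # Secret, issuer, and claim names are hypothetical.
    import jwt

    SECRET = "shared-signing-secret"   # in practice, from a secrets manager

    def caller_can_read_clinical_data(token: str) -> bool:
        """Validate the bearer token and check a scope claim before serving data."""
        try:
            claims = jwt.decode(
                token,
                SECRET,
                algorithms=["HS256"],
                issuer="https://auth.example.com",
            )
        except jwt.InvalidTokenError:
            return False
        return "clinical:read" in claims.get("scope", "").split()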