Junior Snowflake Data Engineer

Posted 4 days ago

Rate: Negotiable
IR35 Status: Outside
Working Arrangement: Remote
Country: USA

Summary: The role of Junior Snowflake Data Engineer involves designing, building, and optimizing cloud data platforms with a focus on Snowflake, DBT, and Airflow. The candidate will be responsible for developing ETL/ELT pipelines, data modeling, and ensuring data governance and compliance. This position requires collaboration with cross-functional teams to deliver reliable data solutions. The ideal candidate should have a strong technical background and experience in data engineering practices.

Salary (Rate): undetermined

City: undetermined

Country: USA

Working Arrangements: remote

IR35 Status: outside IR35

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

We don't need a senior person here, as the rate offered for this role is not on the higher side.

Job Title: Junior Snowflake Data Engineer

Location: Remote

Required Experience: 6+ years

Duration: 6+ months

We need someone focused on Snowflake and DBT, along with Python and Airflow.

About the Role:

We are seeking a Data Engineer with strong expertise in Snowflake, DBT, and Airflow to design, build, and optimize modern cloud data platforms. The ideal candidate has hands-on experience with ETL/ELT pipelines, data modeling, and orchestration frameworks, and can translate business requirements into scalable, secure, and performant data solutions.

Key Responsibilities:

  • Design and implement end-to-end data pipelines using Snowflake, DBT, Airflow, and Python.
  • Ingest data from diverse sources (transaction systems, APIs, streaming platforms like Kafka) into cloud data platforms (AWS, Azure, Google Cloud Platform).
  • Manage data across bronze, silver, and gold layers, ensuring scalability, lineage, and compliance.
  • Develop and optimize data models (star schema, Data Vault, SCD handling) for analytics and reporting use cases.
  • Build reusable frameworks for data ingestion, transformation, and quality checks.
  • Define and manage Airflow DAGs with dependencies, retries, SLAs, and monitoring (see the sketch after this list).
  • Implement CI/CD pipelines for DBT and Airflow jobs using GitLab/Jenkins.
  • Ensure data governance, masking, encryption, and regulatory compliance (HIPAA, GDPR, SOX, PCI DSS).
  • Collaborate with cross-functional teams (Data Architects, BI Developers, Data Scientists) to deliver business-ready datasets.
  • Troubleshoot pipeline failures, optimize query performance, and ensure 99.9% pipeline reliability.
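To give a concrete sense of the orchestration work described above, here is a minimal sketch of an Airflow DAG with task dependencies, retries, an SLA, and a failure callback, assuming Airflow 2.x. The DAG id, task names, schedule, and DBT selector are hypothetical illustrations, not part of the employer's specification.

```python
# Minimal Airflow 2.x DAG sketch: retries, SLA, failure alerting, and a simple
# ingest -> dbt run -> dbt test dependency chain. All names are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator


def notify_on_failure(context):
    # Placeholder alerting hook; a real pipeline might page on-call or post to Slack.
    print(f"Task {context['task_instance'].task_id} failed")


default_args = {
    "owner": "data-engineering",
    "retries": 2,                               # retry transient failures
    "retry_delay": timedelta(minutes=5),
    "sla": timedelta(hours=1),                  # flag tasks that run too long
    "on_failure_callback": notify_on_failure,
}

with DAG(
    dag_id="daily_sales_elt",                   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    # Land raw data in the bronze layer (e.g. COPY INTO a Snowflake staging table).
    ingest = BashOperator(task_id="ingest_raw", bash_command="echo 'ingest step'")

    # Build silver/gold models with DBT.
    transform = BashOperator(task_id="dbt_run", bash_command="dbt run --select sales")

    # Run DBT tests as a quality gate before downstream consumption.
    test = BashOperator(task_id="dbt_test", bash_command="dbt test --select sales")

    ingest >> transform >> test
```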

Required Skills & Experience:

  • 6+ years of professional experience in Data Engineering.
  • Hands-on expertise with Snowflake (warehousing, task automation, RBAC, query optimization).
  • Proficiency in DBT (models, macros, testing, documentation, dbt Cloud/CLI).
  • Strong experience with Apache Airflow (DAG design, custom operators, retries, SLAs, alerting).
  • Advanced SQL and Python programming skills (a short illustration follows this list).
  • Experience with AWS services (S3, Glue, Lambda, EMR, RDS, Kinesis, MSK).
  • Familiarity with streaming technologies (Kafka, Spark Streaming).
  • Strong understanding of data modeling, ETL/ELT frameworks, and pipeline orchestration.
  • Knowledge of DevOps practices (CI/CD, Terraform, Docker, GitLab/Jenkins).
  • Excellent problem-solving and communication skills; ability to explain technical solutions in business terms.
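As a small illustration of the Snowflake, SQL, and Python skills listed above, the snippet below sketches a reusable null-rate quality check using the snowflake-connector-python package. The connection parameters, the FCT_SALES table, the CUSTOMER_ID column, and the 1% threshold are assumptions made for the example, not details from this posting.

```python
# Sketch of a reusable data-quality check against Snowflake. Assumes the
# snowflake-connector-python package and credentials in environment variables.
import os

import snowflake.connector


def column_null_fraction(conn, table: str, column: str) -> float:
    """Return the fraction of rows where `column` is NULL in `table`.

    Table and column names are assumed to come from trusted configuration,
    since they are interpolated directly into the SQL text.
    """
    cur = conn.cursor()
    try:
        cur.execute(
            f"SELECT COUNT_IF({column} IS NULL) / NULLIF(COUNT(*), 0) FROM {table}"
        )
        (fraction,) = cur.fetchone()
        return float(fraction or 0.0)
    finally:
        cur.close()


if __name__ == "__main__":
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",               # hypothetical warehouse
        database="ANALYTICS",                   # hypothetical database
        schema="GOLD",                          # hypothetical schema
    )
    try:
        null_rate = column_null_fraction(conn, "FCT_SALES", "CUSTOMER_ID")
        # Fail loudly if more than 1% of rows are missing a customer key.
        assert null_rate <= 0.01, f"CUSTOMER_ID null rate too high: {null_rate:.2%}"
    finally:
        conn.close()
```

In practice, a check like this would run as a task in an orchestration DAG (such as the sketch earlier), after the DBT build step, so that failures block downstream publishing.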

Preferred Qualifications:

  • Prior experience in Healthcare, Finance, or Retail domains.
  • Familiarity with Redshift, Databricks, or BigQuery.
  • Exposure to data governance tools (Collibra, Alation).
  • Experience with real-time analytics use cases (fraud detection, patient monitoring, stockout reduction).
  • Certification in Snowflake, AWS, or DBT.