Data Engineer (DBT, Databricks & SQL) :: Remote

Posted Today

Negotiable | Outside IR35 | Remote | USA

Summary: The Data Engineer role focuses on designing, building, and maintaining ETL/ELT pipelines using DBT and Databricks, with a strong emphasis on SQL for data transformations. The position requires collaboration with various stakeholders to ensure data quality and governance while working with structured and semi-structured data. Candidates should have experience with cloud data platforms and orchestration tools, along with a solid understanding of data modeling. This is a remote position based in the USA.

Salary (Rate): undetermined

City: undetermined

Country: USA

Working Arrangements: remote

IR35 Status: outside IR35

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

Job Title: Data Engineer (DBT, Databricks & SQL Expert)
Location: Remote
Key Responsibilities:

  • Design, build, and maintain scalable and efficient ETL/ELT pipelines using DBT and Databricks (a minimal model sketch follows this list).
  • Develop, optimize, and troubleshoot complex SQL queries for data transformations, validations, and reporting.
  • Collaborate with data analysts, data scientists, and business stakeholders to understand data needs.
  • Implement data quality and data governance best practices in pipelines.
  • Work with structured and semi-structured data from multiple sources (e.g., APIs, flat files, cloud storage).
  • Build and maintain data models (star/snowflake schemas) to support analytics and BI tools.
  • Monitor pipeline performance and troubleshoot issues in production environments.
  • Maintain version control, testing, and CI/CD for DBT projects using Git and DevOps pipelines.
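
To give a concrete flavour of the DBT work described above, here is a minimal sketch of a dbt staging model. It is illustrative only: the source, table, and column names (raw.orders, order_id, and so on) are assumptions, not details taken from this posting.

    -- models/staging/stg_orders.sql (hypothetical model)
    -- Reads from a declared dbt source and applies light renaming and
    -- casting, the typical first transformation layer in a dbt project.
    with source as (
        select * from {{ source('raw', 'orders') }}
    ),
    renamed as (
        select
            order_id,
            customer_id,
            cast(order_ts as date) as order_date,
            amount
        from source
    )
    select * from renamed

Downstream models would reference this one with {{ ref('stg_orders') }}, which is how dbt builds the dependency graph that the version control, testing, and CI/CD practices listed above operate over.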

Required Skills & Experience:

  • Proven experience as a Data Engineer.
  • Strong experience with DBT (Cloud or Core) for transformation workflows.
  • Proficiency in SQL: deep understanding of joins, window functions, CTEs, and performance tuning (see the sketch after this list).
  • Hands-on experience with Databricks (Spark, Delta Lake, Notebooks).
  • Experience with at least one cloud data platform: AWS (Redshift), Azure (Synapse), or Google Cloud Platform (BigQuery).
  • Familiarity with data lake and lakehouse architecture.
  • Experience with Git and version control in data projects.
  • Knowledge of orchestration tools such as Airflow, Azure Data Factory, or the dbt Cloud scheduler.
  • Comfortable with Python or PySpark for data manipulation (bonus).
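
As a rough illustration of the SQL depth the role calls for, the sketch below combines a CTE with a window function to compute a per-customer running total. The schema, table, and column names (analytics.fct_orders, amount) are hypothetical.

    -- Illustrative only: all names below are assumptions.
    with daily_totals as (
        select
            customer_id,
            order_date,
            sum(amount) as daily_amount
        from analytics.fct_orders
        group by customer_id, order_date
    )
    select
        customer_id,
        order_date,
        daily_amount,
        -- window function: running total per customer, ordered by date
        sum(daily_amount) over (
            partition by customer_id
            order by order_date
            rows between unbounded preceding and current row
        ) as running_total
    from daily_totals;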