Rate: Negotiable | IR35: Outside | Working Arrangement: Remote | Country: USA
Summary: We are looking for a Senior Data Engineer proficient in modern data stack technologies to design, build, and maintain scalable data transformation pipelines. The role requires expertise in SQL and dbt development, along with orchestration and cloud data engineering practices to ensure high-quality data for analytics and decision-making. Collaboration with cross-functional teams is essential to translate business needs into effective data solutions. The position is remote and classified as outside IR35.
Salary (Rate): Negotiable
City: Sunnyvale
Country: USA
Working Arrangements: remote
IR35 Status: outside IR35
Seniority Level: Senior
Industry: IT
We are seeking a Senior Data Engineer with strong expertise in modern data stack technologies to design, build, and maintain scalable data transformation pipelines. This role blends advanced SQL and dbt development with orchestration and cloud data engineering practices to deliver reliable, high-quality data for analytics and business decision-making.
Job Title: Sr. Data Engineer
Location: Sunnyvale, CA.
Duration: 6 months
Key Responsibilities:
- Design, build, and maintain end-to-end data pipelines using dbt for modular transformations and Apache Airflow for orchestration.
- Integrate data from multiple SQL sources into unified, well-documented data models.
- Develop and optimize data transformations and warehouse structures (e.g., Snowflake, BigQuery, or Redshift) for scalability, performance, and maintainability.
- Implement data engineering best practices including version control (Git), testing, and CI/CD automation.
- Collaborate closely with data analysts, product managers, and software engineers to translate business needs into robust data solutions.
- Leverage dbt's documentation, testing, and lineage features to enhance transparency and data governance.
- Ensure reliability and observability of pipelines through monitoring, alerting, and performance tuning.
- Contribute to data architecture improvements and drive adoption of modern tools and processes within the data ecosystem.
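As a toy illustration of the dependency management an orchestrator such as Airflow provides for pipelines like those above: tasks declare their upstream dependencies and run in topological order. This is a minimal pure-Python sketch, not the Airflow API; the task names and the `run_pipeline` helper are hypothetical.

```python
# Toy sketch of orchestrator-style dependency management: tasks run
# in topological order of their declared upstream dependencies.
# (Illustrative only; not the Airflow API.)
from graphlib import TopologicalSorter


def extract():
    return "raw rows"


def transform():
    return "transformed rows"


def load():
    return "rows loaded to warehouse"


# task name -> (callable, set of upstream task names)
TASKS = {
    "extract": (extract, set()),
    "transform": (transform, {"extract"}),
    "load": (load, {"transform"}),
}


def run_pipeline(tasks):
    """Run each task once, after all of its upstream tasks."""
    order = TopologicalSorter({name: deps for name, (_, deps) in tasks.items()})
    executed = []
    for name in order.static_order():
        tasks[name][0]()
        executed.append(name)
    return executed


print(run_pipeline(TASKS))  # extract runs before transform, transform before load
```

A real Airflow deployment adds scheduling, retries, alerting, and observability on top of this ordering guarantee, which is what the monitoring and performance-tuning responsibilities above refer to.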
Required Skills and Experience:
- Advanced SQL skills with experience writing and optimizing complex queries.
- Hands-on expertise in dbt (Data Build Tool) for building modular, version-controlled, and tested data transformations.
- Experience with Apache Airflow (or equivalent orchestration tools) for scheduling and dependency management.
- Strong understanding of cloud-based data warehouses such as Snowflake, BigQuery, or Amazon Redshift.
- Solid grasp of data modeling concepts (star/snowflake schemas, normalization, incremental models).
- Familiarity with software engineering practices such as Git workflows, CI/CD pipelines, and automated testing.
- Strong analytical and problem-solving skills, with the ability to work both independently and in cross-functional teams.
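To illustrate the incremental-model concept mentioned above: rather than rebuilding a table from scratch, an incremental run transforms only source rows newer than the latest timestamp already loaded. This is a hedged pure-Python sketch of the idea (dbt expresses it in SQL with an `is_incremental()` filter); the row shapes and the `incremental_run` helper are made up for illustration.

```python
# Sketch of an incremental load: append only source rows whose
# updated_at exceeds the target table's high-water mark.
# (Conceptual illustration of dbt-style incremental models.)
def incremental_run(source_rows, target_rows):
    """Return target_rows plus any source rows newer than the high-water mark."""
    high_water = max((r["updated_at"] for r in target_rows), default=0)
    new_rows = [r for r in source_rows if r["updated_at"] > high_water]
    return target_rows + new_rows


source = [
    {"id": 1, "updated_at": 10},
    {"id": 2, "updated_at": 20},
    {"id": 3, "updated_at": 30},
]
target = [
    {"id": 1, "updated_at": 10},
    {"id": 2, "updated_at": 20},
]

print(incremental_run(source, target))  # only the id=3 row is appended
```

The payoff is that each run touches only new data, which is why incremental models matter for the scalability and performance goals listed in the responsibilities.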
Preferred Qualifications:
- Experience in analytics engineering and modern data stack platforms.
- Excellent communication and collaboration skills with both technical and business teams.
We are an Equal Opportunity Employer.