Data Engineer - Databricks/Azure

Posted Today by Acunor Infotech

Negotiable
Undetermined
Remote

Summary: We are seeking a skilled Data Engineer with extensive hands-on experience in Databricks and Azure-based modern data platforms. The role involves designing, building, and optimizing scalable data pipelines, data models, and analytics solutions using tools such as Azure Data Factory and PySpark. This position is remote and contract-based and requires a minimum of 5 years of relevant experience.

Key Responsibilities:

  • Design and develop scalable ETL/ELT pipelines using Databricks and Azure Data Factory
  • Build config-driven and reusable data pipelines for enterprise data platforms
  • Develop workflows using Databricks Workflows for orchestration and automation
  • Write optimized PySpark, Python, and Databricks SQL code for large-scale data processing
  • Implement data models including Star Schema and Snowflake Schema
  • Work with Delta Lake and Iceberg open table formats
  • Support data virtualization and modern data access strategies
  • Optimize data platform performance, scalability, and cost efficiency
  • Collaborate with analytics, BI, and business teams for data delivery
  • Use tools like GitHub Copilot / Databricks Genie for productivity and AI-driven insights
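For context, the "config-driven and reusable data pipelines" responsibility above can be sketched in plain Python. This is a minimal, framework-agnostic illustration, not part of the posting: on Databricks the steps would typically be PySpark DataFrame operations, and all step names, fields, and data here are hypothetical.

```python
# Minimal config-driven pipeline sketch: each pipeline is described by a
# config dict, and a generic runner applies the configured steps in order.

def select_columns(rows, columns):
    # Keep only the configured columns from each record.
    return [{c: r[c] for c in columns} for r in rows]

def filter_rows(rows, column, value):
    # Keep records whose column equals the configured value.
    return [r for r in rows if r[column] == value]

# Registry mapping step names (as used in configs) to implementations.
STEPS = {"select": select_columns, "filter": filter_rows}

def run_pipeline(rows, config):
    # Apply each configured step in order; "op" names the step,
    # the remaining keys are passed through as keyword arguments.
    for step in config["steps"]:
        kwargs = {k: v for k, v in step.items() if k != "op"}
        rows = STEPS[step["op"]](rows, **kwargs)
    return rows

# Hypothetical example data and pipeline config.
data = [
    {"id": 1, "region": "east", "amount": 10},
    {"id": 2, "region": "west", "amount": 20},
]
config = {
    "steps": [
        {"op": "filter", "column": "region", "value": "east"},
        {"op": "select", "columns": ["id", "amount"]},
    ]
}
result = run_pipeline(data, config)  # [{"id": 1, "amount": 10}]
```

The same pattern scales to production pipelines: the config (often JSON/YAML) stays declarative and reusable across datasets, while the runner and step registry are written once.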

Key Skills:

  • Strong hands-on Databricks experience
  • Azure Data Factory expertise
  • PySpark, Python, Databricks SQL
  • Data Modeling (Star / Snowflake Schema)
  • Databricks Workflows
  • Delta Lake / Iceberg
  • Azure Cloud Platform

Salary (Rate): undetermined

City: undetermined

Country: USA

Working Arrangements: remote

IR35 Status: undetermined

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:
Job Title: Data Engineer - Databricks/Azure
Location: USA (Remote)
Type: Contract

Experience: 5+ Years
Preferred Skills
  • Data virtualization tools/frameworks
  • GitHub Copilot / AI-assisted development tools
  • Experience with enterprise analytics platforms