Salary: Negotiable
Salford, England, United Kingdom
Summary: The Senior Azure Data Engineer will be responsible for designing, developing, and maintaining metadata-driven data pipelines using Azure Data Factory and Databricks. This role requires collaboration with cross-functional teams to integrate data solutions into the enterprise architecture while ensuring data quality and compliance. The engineer will also implement CI/CD pipelines and provide technical leadership on assigned projects.
Key Responsibilities:
- Design, develop, and maintain metadata-driven data pipelines using ADF and Databricks.
- Build and implement end-to-end metadata frameworks, ensuring scalability and reusability.
- Optimize data workflows leveraging SparkSQL and Pandas for large-scale data processing.
- Collaborate with cross-functional teams to integrate data solutions into enterprise architecture.
- Implement CI/CD pipelines for automated deployment and testing of data solutions.
- Ensure data quality, governance, and compliance with organizational standards.
- Provide technical leadership and take complete ownership of assigned projects.
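The "metadata-driven" pattern named in the responsibilities above can be sketched in a few lines. This is a minimal, illustrative Python sketch only (every name in it is hypothetical, not taken from the posting): pipeline steps are declared as metadata, so onboarding a new source means adding a config entry rather than writing new pipeline code.

```python
from typing import Callable

# Hypothetical registry of reusable, composable transformations.
# In a real ADF/Databricks setup these would be parameterized notebooks
# or Spark jobs; plain functions keep the sketch self-contained.
TRANSFORMS: dict[str, Callable[[list[dict]], list[dict]]] = {
    "drop_nulls": lambda rows: [
        r for r in rows if all(v is not None for v in r.values())
    ],
    "uppercase_keys": lambda rows: [
        {k.upper(): v for k, v in r.items()} for r in rows
    ],
}

# Metadata describing one pipeline: an ordered list of step names.
# In practice this would live in a control table or JSON config.
PIPELINE_METADATA = {"steps": ["drop_nulls", "uppercase_keys"]}

def run_pipeline(rows: list[dict], metadata: dict) -> list[dict]:
    """Apply each transformation named in the metadata, in order."""
    for step in metadata["steps"]:
        rows = TRANSFORMS[step](rows)
    return rows
```

For example, `run_pipeline([{"id": 1, "name": "a"}, {"id": None, "name": "b"}], PIPELINE_METADATA)` drops the row containing a null and returns `[{"ID": 1, "NAME": "a"}]`; reusability comes from the registry, and scalability from swapping the in-memory functions for distributed Spark equivalents.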
Key Skills:
- Azure Data Factory (ADF): Expertise in building and orchestrating data pipelines.
- Databricks: Hands-on experience with notebooks, clusters, and job scheduling.
- Pandas: Advanced data manipulation and transformation skills.
- SparkSQL: Strong knowledge of distributed data processing and query optimization.
- CI/CD: Experience with tools like Azure DevOps, Git, or similar for automated deployments.
- Metadata-driven architecture: Proven experience in designing and implementing metadata frameworks.
- Programming: Proficiency in Python and/or Scala for data engineering tasks.
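As a concrete reference point for the CI/CD skill above, a deployment pipeline for a data project is typically defined in Azure DevOps YAML. The fragment below is an illustrative sketch only; the stage contents, file names, and test command are assumptions, not details from this posting.

```yaml
# Illustrative Azure DevOps pipeline sketch (hypothetical project layout).
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - script: pip install -r requirements.txt   # hypothetical requirements file
    displayName: Install dependencies
  - script: pytest tests/                     # hypothetical test suite
    displayName: Run unit tests
```

A real pipeline for ADF/Databricks would typically add further stages that publish ARM templates or deploy notebooks and jobs to the target workspace.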
Salary (Rate): Negotiable
City: Salford
Country: United Kingdom
Working Arrangements: undetermined
IR35 Status: undetermined
Seniority Level: undetermined
Industry: IT