Summary: The Senior Data Engineer is responsible for designing and building data foundations and end-to-end solutions that maximize business value from data. The role holder fosters a data-driven culture across the organization, serves as a subject matter expert, and mentors junior engineers. They also play a crucial role in translating the vision and data strategy into actionable IT solutions. Key responsibilities include developing scalable data pipelines and collaborating with cross-functional teams to deliver high-quality data solutions.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Azure Data Factory and Azure Databricks.
- Implement data transformation workflows using PySpark and Delta Live Tables (DLT); a minimal sketch of such a workflow follows this list.
- Manage and govern data assets using Unity Catalog.
- Write efficient and optimized SQL queries for data extraction, transformation, and analysis.
- Collaborate with cross-functional teams to understand data requirements and deliver high-quality solutions.
- Demonstrate strong ownership and accountability in delivering end-to-end data solutions.
- Communicate effectively with stakeholders to gather requirements, provide updates, and manage expectations.
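To illustrate the kind of transformation workflow described above, here is a minimal PySpark/Delta Live Tables sketch. The landing path, table names, and columns are hypothetical placeholders, not details taken from this role:

```python
# Minimal DLT sketch: ingest raw JSON with Auto Loader, then clean it.
# All paths, table names, and columns below are assumed for illustration.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders landed as JSON (path is hypothetical).")
def orders_raw():
    return (
        spark.readStream.format("cloudFiles")   # Auto Loader incremental ingest
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/orders/")           # assumed landing location
    )

@dlt.table(comment="Cleaned orders with a basic quality expectation.")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
def orders_clean():
    return (
        dlt.read_stream("orders_raw")
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .dropDuplicates(["order_id"])
    )
```

The expectation decorator is DLT's row-level quality mechanism: rows failing the condition are dropped and counted in pipeline metrics rather than failing the run.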
Key Skills (Must Haves):
- Proven hands-on experience with Azure Data Factory, Databricks, PySpark, DLT, and Unity Catalog.
- Hands-on Databricks expertise, including Structured Streaming, performance tuning, and cost optimization (a brief streaming sketch follows the skills lists below).
- Strong command of SQL and data modelling concepts.
- Excellent communication and interpersonal skills.
- Ability to manage stakeholders and work collaboratively in a team environment.
- Self-motivated, proactive, and capable of working independently with minimal supervision.
- Strong problem-solving skills and a mindset focused on continuous improvement.
- Experience with several, if not all, of the following: Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest, Qlik.
Key Skills (Pluses):
- Azure certifications (e.g., Azure Data Engineer Associate, Databricks Professional).
- Experience with CI/CD pipelines and DevOps practices in data engineering.
- Familiarity with data governance and security best practices in Azure.
- Experience in project management, including running a Scrum team.
- Exposure to working with external technical ecosystems.
- Experience producing documentation with MkDocs.
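For context on the Structured Streaming and cost-optimization expertise listed above, the following is a minimal, cost-conscious streaming sketch. The Kafka broker, topic, checkpoint path, and target table are assumptions made purely for illustration:

```python
# Minimal Structured Streaming sketch: Kafka source to a Delta table.
# Broker address, topic, paths, and table name are assumed, not real values.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
    .option("subscribe", "events")                     # assumed topic
    .load()
    .select(F.col("value").cast("string").alias("payload"))
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")  # enables exactly-once recovery
    .trigger(availableNow=True)      # process what is available, then stop
    .toTable("main.bronze.events")   # assumed Unity Catalog three-level name
)
query.awaitTermination()
```

The availableNow trigger processes all pending input and then shuts down, which keeps compute costs bounded compared with an always-on stream; an interval trigger can be swapped in where lower latency matters.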
Salary (Rate): Negotiable
City: London Area
Country: United Kingdom
Working Arrangements: undetermined
IR35 Status: undetermined
Seniority Level: undetermined
Industry: IT
Day to Day: The Senior Data Engineer designs and builds data foundations and end-to-end solutions that help the business maximize value from data. The role promotes data-driven thinking across the organization, not just within IT teams but also in the wider business stakeholder community. The Senior Data Engineer is expected to be a subject matter expert who designs and builds data solutions and mentors junior engineers, and is a key driver in translating the vision and data strategy into IT solutions and delivering them.