Summary: The Senior Data Engineer role focuses on using Azure Databricks to build and optimize scalable data pipelines and workflows for a leading insurer. The position requires hands-on experience with Azure data services and strong coding skills in Python or Scala. The successful candidate will tackle complex data engineering challenges while collaborating with cross-functional teams and mentoring junior engineers, and will play an integral part in ensuring data reliability and performance within a technology-driven environment.
Key Responsibilities:
- Develop and maintain end-to-end data pipelines using Azure Databricks, Delta Lake, and other Azure data services.
- Architect and implement scalable, high-performance data solutions to support business needs.
- Apply best practices for data modeling, using both relational and dimensional techniques.
- Act as the technical lead for the organization's cloud-based analytics platform, encompassing the data warehouse, data lake, and operational data stores.
- Partner with cross-functional stakeholders to ensure the platform supports analytics, reporting, and decision-making requirements.
- Guide and mentor junior engineers to foster technical growth within the team.
- Collaborate with business analysts, architects, and other team members to define and deliver data solutions.
- Design and implement ETL/ELT workflows to transform raw data into clean, usable datasets.
- Ensure data quality by implementing validation, monitoring, and governance practices.
- Stay informed on emerging cloud technologies and incorporate them into the organization's data strategy.
Key Skills:
- Deep knowledge of Azure Databricks, Azure Data Factory, Azure Data Lake, Azure SQL, and related Azure services.
- Experience with ETL frameworks and data pipeline orchestration.
- Proficiency in Python and PySpark for developing scalable data solutions.
- Advanced knowledge of SQL for querying and transforming data.
- Experience with distributed computing frameworks such as Apache Spark.
- Experience designing and implementing data models using dimensional and relational approaches.
- Strong knowledge of data transformation techniques for building analytics-ready datasets.
- Experience with DevOps, version control, and CI/CD pipelines.
Qualifications:
- Bachelor's degree in Computer Science, Data Science, or a related field.
- 8+ years of experience in data engineering, with a focus on cloud-based solutions.
- Proven experience delivering enterprise-scale data platforms, including data lakes and warehouses.
- Exposure to end-to-end software development lifecycle (SDLC), from design to deployment.
Salary (Rate): Negotiable
City: Hong Kong
Country: Hong Kong
Working Arrangements: undetermined
IR35 Status: undetermined
Seniority Level: Senior
Industry: IT