Summary: The Lead Azure Data Engineer-S role focuses on designing and implementing data solutions with Microsoft Azure data services. The position requires hands-on experience with the core Azure data tools, proficiency in SQL, and programming experience in Python or Scala. The engineer will also be responsible for data integration and for maintaining efficient data warehousing practices. The role is fully remote, allowing flexible working arrangements.
Key Responsibilities:
- Design and implement data solutions using Microsoft Azure data services.
- Utilize Azure Data Factory, Databricks, Synapse Analytics, and Blob Storage/Data Lake.
- Develop and maintain SQL databases and data warehousing solutions.
- Integrate data using REST APIs and other data integration techniques.
- Implement CI/CD pipelines and adhere to DevOps practices.
- Collaborate with teams to understand data requirements and deliver solutions.
Key Skills:
- Strong experience with Microsoft Azure data services.
- Hands-on experience with Azure Data Factory (ADF), Azure Databricks, Azure Synapse Analytics, and Azure Blob Storage/Data Lake.
- Proficiency in SQL and database design.
- Experience with Python or Scala.
- Knowledge of data warehousing concepts (Star/Snowflake schema).
- Understanding of big data technologies (Spark, Hadoop).
- Experience with CI/CD pipelines and DevOps practices.
- Familiarity with REST APIs and data integration techniques.
Salary (Rate): £57.50 hourly
City: undetermined
Country: undetermined
Working Arrangements: remote
IR35 Status: undetermined
Seniority Level: undetermined
Industry: IT