Rate: Negotiable
IR35 Status: Outside
Location: Remote
Country: USA
Summary: The Azure Data Engineer with Big Data role involves designing and implementing scalable data solutions for investment banking, utilizing Azure technologies. The position requires extensive experience in data processing and big data technologies, with a focus on building an enterprise data platform. The role is remote and is classified as outside IR35.
Key Responsibilities:
- Design and implement scalable data solutions using ADLS, Azure Synapse Analytics, and Azure Databricks.
- Develop the Enterprise Data platform.
- Build Azure Data Lake leveraging Databricks technology.
- Utilize big data technologies for data processing and analytics.
- Build data pipelines and infrastructure.
Key Skills:
- 10+ years of experience in data processing, ETL, and big data technologies.
- 4+ years of experience in Databricks and Python.
- Hands-on programming skills in Spark, Azure Synapse, Data Factory, and Azure Databricks.
- Experience with Hadoop, Java, Spark, Scala, Hive, and NoSQL data stores.
- Ability to build data pipelines and infrastructure.
Salary (Rate): Negotiable
City: undetermined
Country: USA
Working Arrangements: remote
IR35 Status: outside IR35
Seniority Level: undetermined
Industry: IT
Title: Azure Data Engineer with Big Data
Location: Remote
Duration: 6+ Months Contract
Job Description:
- Senior Azure Data Engineer (Investment Banking)
- Design and implement scalable data solutions using ADLS, Azure Synapse Analytics, and Azure Databricks to support high-volume financial data processing and analytics
- Excellent hands-on programming skills in big data analytics across Azure cloud and on-premises technologies: Spark, Python, Azure Synapse, Data Factory, and Azure Databricks.
- The Senior Data Engineer will be responsible for building the Enterprise Data platform.
- Build Azure Data Lake leveraging Databricks technology to consolidate all data across the company and serve the data to various products and services.
- 10+ years of working experience in data processing, ETL, and big data technologies such as Informatica, Hadoop, and Cloudera.
- 4+ years of working experience in Databricks (essential) and Python.
- Experience with the big data tech stack, including Hadoop, Java, Spark, Scala, Hive, and NoSQL data stores.
- Experience building data pipelines and infrastructure.
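The pipeline-building work described above typically means Spark jobs on Databricks consolidating data into the lake. As a rough, hypothetical illustration of the extract-transform-load pattern involved (plain stdlib Python for brevity rather than actual Spark; all names and sample data are invented, not taken from this posting):

```python
import csv
import io

# Hypothetical raw trade records, standing in for files landed in ADLS.
RAW_CSV = """trade_id,desk,notional
T1,rates,1000000
T2,fx,250000
T3,rates,500000
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse the landed CSV into row dicts."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> dict[str, int]:
    """Transform: total notional per desk (what a Spark groupBy/agg does at scale)."""
    totals: dict[str, int] = {}
    for row in rows:
        totals[row["desk"]] = totals.get(row["desk"], 0) + int(row["notional"])
    return totals

def load(totals: dict[str, int]) -> list[tuple[str, int]]:
    """Load: emit sorted records, standing in for a write to a curated lake zone."""
    return sorted(totals.items())

if __name__ == "__main__":
    print(load(transform(extract(RAW_CSV))))  # → [('fx', 250000), ('rates', 1500000)]
```

In a Databricks pipeline the same shape would use Spark DataFrames reading from ADLS and writing Delta tables; the sketch only shows the transformation pattern a candidate would be expected to build at scale.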