Azure/Databricks Engineer @ Remote

Posted 2 days ago

Negotiable | Outside IR35 | Remote | USA

Summary: The Azure/DataBricks Engineer role involves developing data products by translating business requirements into efficient data pipelines within a remote team. The position requires collaboration with product owners and stakeholders to implement data supply chains using Microsoft Azure technologies. Candidates should possess extensive experience in data engineering, particularly within the Azure Databricks ecosystem. The role emphasizes technical expertise, data quality assurance, and adherence to best practices in data management.

Key Responsibilities:

  • Ensure data quality and reliability, orchestrate data pipelines, implement CI/CD, and tune pipeline performance while collaborating with cross-functional teams to integrate data sources.
  • Provide technical expertise and support for Azure Databricks-related projects.
  • Follow technical best practices and coding standards for job orchestration and monitoring, and produce the required artifacts.

Key Skills:

  • Bachelor's degree in computer science, information technology, or a related field.
  • Prior experience with data lakes and data warehouses; working knowledge of dimensional modeling is a must.
  • Advanced understanding of data engineering practices, including ETL/ELT, data integration, reusable data pipelines, and data management methods.
  • Advanced PySpark, Python, Spark SQL, dimensional modeling, and Azure Databricks optimization skills are required.
  • Ability to serve various business stakeholders and internal and external partners to support data demands.
  • Healthcare experience, particularly with a data-centric focus, is a plus.
  • 8+ years of experience in data engineering within the Azure Databricks ecosystem.
  • Strong knowledge of Azure data services, including ADF, Databricks, Delta Lake, Delta Live Tables, Power BI, and Azure SQL Server.
  • Experience with data processing languages such as SQL, Python, PySpark, and Spark SQL.
  • Experience with agile methodologies on Azure DevOps and continuous integration/continuous delivery (CI/CD).
  • Strong problem-solving and analytical skills; excellent communication and collaboration skills.
  • Self-motivated self-starter with a strong ability to multitask across projects and tasks effectively.
  • Ability to work independently and collaborate effectively in a team environment.

Salary (Rate): undetermined

City: undetermined

Country: USA

Working Arrangements: remote

IR35 Status: outside IR35

Seniority Level: undetermined

Industry: IT

Azure/Databricks Engineer

Location: Remote

Duration: 6+ Months

The client is seeking strong data engineers to join their data product development team. The team consists of data engineers who translate business requirements into data products, create efficient and reusable data pipelines, and work with product owners and stakeholders to design, develop, and implement data supply chains in enterprise data warehouse environments using Microsoft Azure, ADF, Python, PySpark, Spark SQL, and Databricks.

Responsibilities:

  • Ensure data quality and reliability, orchestrate data pipelines, implement CI/CD, and tune pipeline performance for efficiency while collaborating with cross-functional teams to integrate data sources.
  • Provide technical expertise and support for Azure Databricks-related projects.
  • Follow technical best practices and coding standards for job orchestration and monitoring, and produce the required artifacts.

Requirements:

  • Bachelor's degree in computer science, information technology, or a related field.
  • Prior experience with data lakes and data warehouses; working knowledge of dimensional modeling is a must.
  • Advanced understanding of data engineering practices, including ETL/ELT, data integration, reusable data pipelines, and data management methods.
  • Advanced PySpark, Python, Spark SQL, dimensional modeling, and Azure Databricks optimization skills are required.
  • Ability to serve various business stakeholders and internal and external partners to support data demands.
  • Healthcare experience, particularly with a data-centric focus, is a plus.
  • 8+ years of experience in data engineering within the Azure Databricks ecosystem.
  • Strong knowledge of Azure data services, including ADF, Databricks, Delta Lake, Delta Live Tables, Power BI, and Azure SQL Server.
  • Experience with data processing languages such as SQL, Python, PySpark, and Spark SQL.
  • Experience with agile methodologies on Azure DevOps and continuous integration/continuous delivery (CI/CD).
  • Strong problem-solving and analytical skills; excellent communication and collaboration skills.
  • Self-motivated self-starter with a strong ability to multitask across projects and tasks effectively.
  • Ability to work independently and collaborate effectively in a team environment.

Best Regards,

Hari Krishna

813-435-5347