Data Engineer

Posted Today

Negotiable | Outside IR35 | Remote | USA

Summary: The Data Engineer role focuses on designing, building, and maintaining data pipelines and architectures, with a particular emphasis on Databricks. The position involves ensuring data integrity, scalability, and compliance to support data-driven decision-making across the organization. The role is contract-based, initially six months with possible extension, and is best suited to candidates with 6-10 years of experience, ideally in the insurance domain. The position is remote and classified as outside IR35.

Salary (Rate): undetermined

City: undetermined

Country: USA

Working Arrangements: remote

IR35 Status: outside IR35

Seniority Level: undetermined

Industry: Other

Detailed Description From Employer:

Title: Data Engineer
Location: Remote
Duration: 6 months, with possible extension
Domain/Industry: Insurance (preferred)
Years of Experience: 6-10
Type: Contract

About the Role: We are looking for a highly skilled Data Engineer with strong Databricks expertise to design, build, and maintain robust data pipelines and architectures. The role involves ensuring data integrity, scalability, and compliance while enabling data-driven decision-making across the organization.

Key Responsibilities:

  • Develop and maintain scalable data pipelines and ETL processes to support diverse business and analytics requirements.
  • Design and optimize data architectures (including data lakes, data warehouses, and other modern data platforms).
  • Manage and enhance data ingestion processes and data cataloging to ensure discoverability and governance.
  • Build, monitor, and troubleshoot data workflows and pipeline performance to ensure reliability and efficiency.
  • Ensure data quality, integrity, security, and compliance with organizational and regulatory standards.
  • Collaborate with cross-functional teams (engineering, analytics, product, and business) to understand data needs and deliver effective solutions.
  • Leverage Databricks to manage large-scale data processing, transformations, and orchestration (an illustrative sketch follows this list).
  • Continuously evaluate and implement improvements for performance optimization and scalability.
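Purely for illustration, here is a minimal PySpark sketch of the kind of Databricks pipeline step these responsibilities describe: ingest raw files, apply basic cleansing, and persist the result as a Delta table. The landing path, table name, and columns (claim_id, claim_amount) are hypothetical, chosen only to fit the insurance domain mentioned in this posting.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists as `spark`; getOrCreate() reuses it.
spark = SparkSession.builder.appName("claims_ingest").getOrCreate()

# Ingest raw claim files (hypothetical landing path).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/raw/claims/")
)

# Basic cleansing: de-duplicate on the business key and normalize types.
cleaned = (
    raw.dropDuplicates(["claim_id"])
    .withColumn("claim_amount", F.col("claim_amount").cast("double"))
    .filter(F.col("claim_amount").isNotNull())
)

# Persist as a managed Delta table for downstream analytics.
cleaned.write.format("delta").mode("overwrite").saveAsTable("curated.claims")
```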


Required Skills & Qualifications:

  • Proven experience as a Data Engineer with a strong background in Databricks.
  • Hands-on experience with building and managing data pipelines and ingestion frameworks.
  • Expertise in ETL development, orchestration, and monitoring.
  • Strong knowledge of data cataloging, governance, and metadata management.
  • Proficiency in SQL, Python, and Spark (a short quality-check sketch follows this list).
  • Experience with data lakes, warehouses, and modern cloud-based data platforms (Azure, AWS, or Google Cloud Platform).
  • Strong problem-solving skills with the ability to troubleshoot complex data workflow issues.
  • Knowledge of data security, compliance, and best practices.
  • Familiarity with BI/analytics tools and stakeholder collaboration.
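As a hedged example of the SQL/Python/Spark proficiency listed above, the snippet below sketches a simple data quality gate over the hypothetical curated.claims table from the previous sketch; the checks and column names are illustrative, not part of the role's actual stack.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical curated table produced by the ingestion sketch above.
df = spark.table("curated.claims")

# Two simple quality gates: no null business keys, no negative amounts.
null_ids = df.filter(F.col("claim_id").isNull()).count()
negatives = df.filter(F.col("claim_amount") < 0).count()

if null_ids or negatives:
    # Failing loudly lets the orchestrator flag the pipeline run.
    raise ValueError(
        f"Quality gate failed: {null_ids} null claim_ids, {negatives} negative amounts"
    )
```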


Preferred Qualifications:

  • Experience with Delta Lake, Lakehouse architectures, and data modeling (see the upsert sketch after this list).
  • Exposure to orchestration tools such as Airflow or Azure Data Factory.
  • Knowledge of CI/CD practices for data pipelines.
  • Relevant certifications in Databricks, Azure, AWS, or Google Cloud Platform.
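Finally, a minimal sketch of a Delta Lake incremental upsert, the sort of Lakehouse pattern the preferred qualifications point to. It assumes the delta-spark package is available (it ships with Databricks runtimes) and reuses the hypothetical tables staging.claims_updates and curated.claims from the sketches above.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical staging table holding the latest change records.
updates = spark.table("staging.claims_updates")

# Upsert changes into the curated Delta table, keyed on claim_id:
# matched rows are updated in place, new rows are inserted.
(
    DeltaTable.forName(spark, "curated.claims")
    .alias("t")
    .merge(updates.alias("u"), "t.claim_id = u.claim_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```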