Data Engineer (Spark, Kafka)

Posted 1 day ago by Stott and May on JobServe

£510 per day
Inside IR35
Hybrid
Windsor, UK

Summary: The Data Engineer role focuses on designing, implementing, and managing Kafka-based data pipelines and messaging solutions. The position requires configuring and maintaining Kafka clusters while ensuring high availability and performance. The role is hybrid, requiring in-office presence once a week in Windsor, and is classified as inside IR35. The contract duration is initially set for six months, starting as soon as possible.

Key Responsibilities:

  • Design, implement, and manage Kafka-based data pipelines and messaging solutions.
  • Configure, deploy, and maintain Kafka clusters, ensuring high availability and scalability.
  • Monitor Kafka performance and troubleshoot issues to minimize downtime and ensure uninterrupted data flow.
  • Develop and maintain Kafka connectors (e.g. JDBC, MongoDB, and S3), along with topics and schemas, to streamline data ingestion from source databases (see the connector sketch after this list).
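
The connector work above typically means registering connector configurations with the Kafka Connect REST API. The snippet below is a minimal sketch of registering a JDBC source connector that streams rows from a database table into a Kafka topic; the connector name, Connect worker address, database URL, credentials, and table are placeholders for illustration, not details from the posting.

    # Minimal sketch: register a hypothetical JDBC source connector with the
    # Kafka Connect REST API. Host, credentials, and table names are placeholders.
    import requests

    CONNECT_URL = "http://localhost:8083/connectors"  # assumed Connect worker address

    connector = {
        "name": "orders-jdbc-source",  # hypothetical connector name
        "config": {
            "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
            "connection.url": "jdbc:postgresql://db-host:5432/sales",  # placeholder database
            "connection.user": "etl_user",
            "connection.password": "change-me",
            "mode": "incrementing",                  # poll new rows by an incrementing key
            "incrementing.column.name": "order_id",
            "table.whitelist": "orders",
            "topic.prefix": "jdbc.",                 # rows land on the "jdbc.orders" topic
            "tasks.max": "1",
        },
    }

    # Submit the configuration to the Connect worker and fail loudly on errors.
    resp = requests.post(CONNECT_URL, json=connector, timeout=10)
    resp.raise_for_status()
    print(resp.json())

The same REST registration step applies to the MongoDB and S3 connectors mentioned above; only the connector class and its connector-specific properties change.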

Key Skills:

  • Data integration, data security, and compliance.
  • Monitoring and managing the performance of data systems, and troubleshooting issues.
  • Strong knowledge of data engineering tools and technologies (e.g., SQL, ETL, data warehousing).
  • Experience with tools such as Azure Data Factory (ADF), Apache Kafka, and Apache Spark SQL.
  • Proficiency in programming languages such as Python and PySpark (see the sketch after this list).
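
As a concrete illustration of the Python/PySpark and Kafka skills listed above, here is a minimal sketch of a Structured Streaming job that reads a Kafka topic and writes it out as Parquet. The broker address, topic name, message schema, and output paths are assumptions for illustration, not details from the posting; running it also requires the Spark–Kafka connector package on the classpath.

    # Minimal sketch: consume a Kafka topic with PySpark Structured Streaming
    # and persist it as Parquet. Brokers, topic, schema, and paths are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("kafka-orders-ingest").getOrCreate()

    # Hypothetical message schema for the assumed "jdbc.orders" topic.
    schema = StructType([
        StructField("order_id", StringType()),
        StructField("customer_id", StringType()),
        StructField("amount", DoubleType()),
    ])

    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")  # assumed broker address
        .option("subscribe", "jdbc.orders")
        .option("startingOffsets", "latest")
        .load()
    )

    # Kafka values arrive as bytes; cast to string and parse the JSON payload.
    orders = (
        raw.selectExpr("CAST(value AS STRING) AS json")
        .select(from_json(col("json"), schema).alias("data"))
        .select("data.*")
    )

    query = (
        orders.writeStream.format("parquet")
        .option("path", "/data/orders")                      # placeholder output path
        .option("checkpointLocation", "/checkpoints/orders")
        .start()
    )
    query.awaitTermination()

The checkpoint location lets the stream resume from its last committed offsets after a restart, which is what keeps downtime from turning into lost or duplicated data.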

Salary (Rate): £500

City: Windsor

Country: UK

Working Arrangements: hybrid

IR35 Status: inside IR35

Seniority Level: undetermined

Industry: IT