Hadoop (Spark/Scala) Developer

Posted 1 week ago by eTeam Workforce Limited

£304 per day
Inside IR35
Hybrid
Northampton - Hybrid (2/3 days per week onsite), UK

Summary: The role of Hadoop (Spark/Scala) Developer involves designing and developing Hadoop-based applications and data pipelines, with a focus on debugging and developing Spark jobs in Scala. The position requires collaboration with data scientists and analysts to meet data needs while ensuring data security and compliance. The developer will also be responsible for optimizing MapReduce jobs and managing HDFS storage. This is a hybrid role based in Northampton, UK, with a pay rate of £304 per day through an FCSA umbrella company.

Key Responsibilities:

  • Design and develop Hadoop-based applications and data pipelines.
  • Build, operate, monitor, and troubleshoot Hadoop clusters.
  • Write scalable ETL processes using tools like Hive, Pig, and Spark.
  • Develop and maintain data ingestion processes using Sqoop, Flume, or Kafka.
  • Optimize MapReduce jobs and manage HDFS storage.
  • Collaborate with data scientists and analysts to support data needs.
  • Ensure data security and compliance with organizational policies.
  • Create and maintain technical documentation and playbooks.
  • Evaluate and integrate cloud-based big data solutions (AWS, GCP, Azure).
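The "scalable ETL processes" responsibility above can be sketched, at its simplest, in plain Scala with no Hadoop or Spark dependency. This is a hypothetical illustration only; the `Record` type, field names, and CSV-like input format are assumptions, not part of the role description.

```scala
// Minimal extract/transform sketch in plain Scala (illustrative names only).
object EtlSketch {
  final case class Record(id: Int, amount: Double)

  // Extract: parse raw comma-delimited lines, silently dropping malformed rows
  def extract(lines: Seq[String]): Seq[Record] =
    lines.flatMap { line =>
      line.split(",") match {
        case Array(id, amount) =>
          scala.util.Try(Record(id.trim.toInt, amount.trim.toDouble)).toOption
        case _ => None
      }
    }

  // Transform: keep positive amounts and total them per id
  def transform(records: Seq[Record]): Map[Int, Double] =
    records.filter(_.amount > 0)
      .groupBy(_.id)
      .view.mapValues(_.map(_.amount).sum)
      .toMap
}
```

In a real pipeline the same extract/transform shape would run over distributed datasets (Spark DataFrames or Hive tables) rather than in-memory collections.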

Key Skills:

  • Proficiency in developing Spark jobs in Scala.
  • Experience in Java big data development.
  • Familiarity with Python is a plus.
  • Knowledge of Hadoop core concepts (MapReduce), Hive, Pig, and HBase.
  • Experience in building and managing Hadoop clusters.
  • Ability to write scalable ETL processes.
  • Experience with data ingestion tools like Sqoop, Flume, or Kafka.
  • Understanding of cloud-based big data solutions (AWS, GCP, Azure).
  • Strong documentation and technical writing skills.
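As a rough illustration of the MapReduce concept the skills list refers to, the classic word count can be expressed in plain Scala collections. This is a conceptual sketch, not Hadoop API code; in Hadoop the shuffle and reduce steps run across a cluster rather than over an in-memory `Seq`.

```scala
// Word count as a map -> shuffle -> reduce pipeline over Scala collections.
object WordCountSketch {
  // Map phase: emit a (word, 1) pair for every word in every line
  def mapPhase(lines: Seq[String]): Seq[(String, Int)] =
    lines.flatMap(_.toLowerCase.split("\\W+"))
      .filter(_.nonEmpty)
      .map(word => (word, 1))

  // Shuffle + reduce phase: group pairs by key and sum the counts
  def reducePhase(pairs: Seq[(String, Int)]): Map[String, Int] =
    pairs.groupBy(_._1).view.mapValues(_.map(_._2).sum).toMap

  def wordCount(lines: Seq[String]): Map[String, Int] =
    reducePhase(mapPhase(lines))
}
```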

Salary (Rate): £304 per day

City: Northampton

Country: UK

Working Arrangements: hybrid

IR35 Status: inside IR35

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

We are a global recruitment specialist that provides support to clients across EMEA, APAC, the US, and Canada. We have an excellent job opportunity for you.

Role Title: Hadoop (Spark/Scala) Developer
Location: Northampton
Duration: until 31/12/2026
Days on site: 2-3
Pay Rate: £304 per day through FCSA umbrella

Role Description:

  • In general, the resource should be able to comfortably debug and develop Spark jobs in Scala.
  • Alternatively, comfort with Java big data development is acceptable.
  • Python is a nice-to-have.
  • Looking for a Hadoop, Spark, Scala/Python developer.
  • Candidates should be conversant with Hadoop core concepts (MapReduce), Hive, Pig, and HBase.

Key Responsibilities:

  • Design and develop Hadoop-based applications and data pipelines.
  • Build, operate, monitor, and troubleshoot Hadoop clusters.
  • Write scalable ETL processes using tools like Hive, Pig, and Spark.
  • Develop and maintain data ingestion processes using Sqoop, Flume, or Kafka.
  • Optimize MapReduce jobs and manage HDFS storage.
  • Collaborate with data scientists and analysts to support data needs.
  • Ensure data security and compliance with organizational policies.
  • Create and maintain technical documentation and playbooks.
  • Evaluate and integrate cloud-based big data solutions (AWS, GCP, Azure).

If you are interested in this position and would like to learn more, please send through your CV and we will get in touch with you as soon as possible. Please note, candidates are often shortlisted within 48 hours.