Summary: The Spark/Scala Developer role focuses on building complex data transformation (ETL) workflows in Spark and Scala, with a strong emphasis on optimizing big data applications for performance and efficiency. The position requires hands-on experience with Hadoop-ecosystem technologies (Hive, Impala, HBase, Kafka) and exceptional analytical and problem-solving skills. The role is based in London on a hybrid arrangement with 2-3 days on-site, and the candidate will be expected to contribute innovative solutions in data processing and distributed computing.
Key Responsibilities:
- Develop complex data transformation workflows (ETL) using Big Data Technologies (an illustrative sketch follows this list).
- Optimize and fine-tune Spark jobs for performance and efficiency.
- Utilize expertise in HIVE, Impala, and HBase.
- Work with Java and distributed computing frameworks.
- Analyze and solve problems with innovative solutions.
- Contribute to external technology projects and open-source initiatives.
- Utilize tools such as Apache Hadoop, Spark, and Kafka.
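For illustration only, here is a minimal sketch of the kind of Spark/Scala ETL workflow the responsibilities above describe. The table name, columns, and output path are hypothetical examples, not details from the posting:

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

// Hedged sketch: "raw_db.sales_events" and its columns are assumed for illustration.
object SalesEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("sales-etl")
      .getOrCreate()

    // Extract: read raw events from a (hypothetical) Hive table.
    val raw = spark.table("raw_db.sales_events")

    // Transform: filter bad records, derive a date column, aggregate by day and region.
    val daily = raw
      .filter(F.col("amount") > 0)
      .withColumn("event_date", F.to_date(F.col("event_ts")))
      .groupBy("event_date", "region")
      .agg(F.sum("amount").as("total_amount"))

    // Load: write the curated result back as date-partitioned Parquet.
    daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("/warehouse/curated/daily_sales")

    spark.stop()
  }
}
```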
Key Skills:
- Expertise in Spark and Scala.
- Experience with complex data transformation workflows (ETL).
- Proficiency in HIVE, Impala, and HBase.
- Hands-on experience in optimizing Spark jobs (see the tuning sketch after this list).
- Knowledge of Java and distributed computing.
- Understanding of Big Data/Hadoop technologies.
- Exceptional analytical and problem-solving skills.
- Experience with tools like Apache Hadoop, Spark, and Kafka.
- Good to have: Contributions to open-source projects.
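As a hedged illustration of the Spark-tuning skills listed above, the sketch below shows a few common levers: shuffle partition sizing, adaptive query execution, Kryo serialization, and caching a reused dataset. The setting values and table name are assumptions; the right choices depend on the actual cluster and data volume:

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

// Hypothetical tuning sketch; values are placeholders, not recommendations.
object TunedJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("tuned-sales-job")
      .config("spark.sql.shuffle.partitions", "400")    // size shuffle width to the data
      .config("spark.sql.adaptive.enabled", "true")     // let AQE coalesce small/skewed partitions
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()

    // Cache a dataset reused across several actions
    // instead of recomputing its lineage each time.
    val cleaned = spark.table("raw_db.sales_events")   // hypothetical table
      .filter(F.col("amount") > 0)
      .cache()

    println(cleaned.count()) // first action materializes the cache

    spark.stop()
  }
}
```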
Salary (Rate): £308 Daily
City: London
Country: UK
Working Arrangements: Hybrid
IR35 Status: Undetermined
Seniority Level: Undetermined
Industry: IT
Role Title: Spark/Scala Developer
Location: London
Days on site: 2-3
Duration: 2 months
Rate: £308 per day
- Expertise in Spark and Scala
- Experience in developing complex data transformation workflows (ETL) using Big Data Technologies
- Strong expertise in HIVE, Impala, and HBase
- Hands-on experience fine-tuning Spark jobs
- Experience with Java and distributed computing
- In-depth comprehension of Big Data/Hadoop technologies, distributed computing, and data processing frameworks
- Exceptional analytical and problem-solving skills, focusing on innovative and efficient solutions
- Demonstrable experience in optimizing and fine-tuning big data applications for heightened performance and efficiency
- Hands-on experience with relevant tools such as Apache Hadoop, Spark, Kafka, and other industry-standard platforms
- Good to have: external technology contributions (noteworthy open-source contributions)