
Sr Big Data Engineer with Hadoop, AWS, Hive, HDFS, PySpark, SQL, EMR, Lambda, Scala, Spark, Python, Kafka - Remote, 12+ years' experience required
Posted 1 week ago
Negotiable
Outside
Remote
USA
Summary: The role of Sr Big Data Engineer requires extensive experience in the Hadoop ecosystem and proficiency in technologies including AWS, Hive, PySpark, and SQL. The candidate should have a strong background in Scala and Python development, along with the ability to write complex SQL queries. This position is remote and demands a minimum of 12 years of relevant experience.
Key Responsibilities:
- Develop and maintain big data solutions using Hadoop ecosystem components.
- Write and optimize Hive and Impala queries.
- Implement complex SQL queries for data manipulation and analysis.
- Utilize AWS services such as Lambda and EMR for data processing.
- Work with streaming technologies including Kinesis and Kafka (see the sketch after this list).
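As an illustration of the streaming and Hadoop work described above, here is a minimal PySpark Structured Streaming sketch that reads from a Kafka topic and persists events to HDFS as Parquet; the broker address, topic name, and paths are placeholders for illustration, not details from this posting.

```python
# Minimal sketch: consume a Kafka topic with PySpark Structured Streaming
# and persist the events to HDFS as Parquet. Broker, topic, and paths are
# hypothetical placeholders, not values taken from this posting.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder
    .appName("kafka-to-hdfs")
    .getOrCreate()
)

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                      # placeholder topic
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "hdfs:///data/events")               # placeholder output path
    .option("checkpointLocation", "hdfs:///checkpoints/events")
    .start()
)

query.awaitTermination()
```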
Key Skills:
- Extensive experience with Hadoop ecosystem components: Hive, PySpark, HDFS, Spark, and Scala.
- Strong programming skills in Scala and Python.
- Proficient in writing complex SQL queries.
- Experience with AWS services including Lambda and EMR (see the sketch after this list).
- Knowledge of data pipelines and cluster management.
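To give a concrete flavor of the AWS side of the role, the sketch below shows a Lambda handler that submits a Spark step to an existing EMR cluster with boto3; the cluster ID, S3 script path, and step arguments are assumptions for illustration only.

```python
# Minimal sketch: an AWS Lambda handler that submits a Spark step to an
# existing EMR cluster via boto3. The cluster ID and S3 script path are
# hypothetical placeholders, not values from this posting.
import boto3

emr = boto3.client("emr")

def lambda_handler(event, context):
    response = emr.add_job_flow_steps(
        JobFlowId="j-XXXXXXXXXXXXX",  # placeholder EMR cluster ID
        Steps=[
            {
                "Name": "daily-aggregation",
                "ActionOnFailure": "CONTINUE",
                "HadoopJarStep": {
                    "Jar": "command-runner.jar",
                    "Args": [
                        "spark-submit",
                        "--deploy-mode", "cluster",
                        "s3://my-bucket/jobs/aggregate.py",  # placeholder script
                    ],
                },
            }
        ],
    )
    # Return the step IDs so callers can track the job's progress
    return {"stepIds": response["StepIds"]}
```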
Salary (Rate): undetermined
City: undetermined
Country: USA
Working Arrangements: remote
IR35 Status: outside IR35
Seniority Level: undetermined
Industry: IT
Sr Big Data Engineer with Hadoop, AWS, Hive, HDFS, PySpark, SQL, EMR, Lambda, Scala, Spark, Python, Kafka
- Experience with Hadoop ecosystem components: Hive, PySpark, HDFS, Spark, Scala, and streaming (Kinesis, Kafka)
- Strong experience in Scala, PySpark, and Python development
- Proficient in writing Hive and Impala queries
- Ability to write complex SQL queries
- Experience with AWS Lambda, EMR, clusters, partitions, and data pipelines (see the partitioned-write sketch after this list)
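For the Hive and partitioning requirements above, a minimal PySpark sketch of running a Hive query and writing the result into a date-partitioned Hive table might look like the following; the table and column names are illustrative assumptions, not details from this posting.

```python
# Minimal sketch: run a Hive query from PySpark and write the result into a
# date-partitioned Hive table. Table and column names are illustrative only.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-partitioned-write")
    .enableHiveSupport()   # use the Hive metastore for table definitions
    .getOrCreate()
)

# Aggregate raw events per user and day (hypothetical source table)
daily = spark.sql("""
    SELECT user_id,
           to_date(event_ts) AS event_date,
           COUNT(*)          AS event_count
    FROM   raw_events
    GROUP  BY user_id, to_date(event_ts)
""")

# Write into a Hive table partitioned by event_date so queries that filter
# on date only scan the relevant partitions
(
    daily.write
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("analytics.daily_user_events")
)
```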