Senior Big Data & DevOps Engineer

Summary: We are seeking a highly experienced Senior Big Data & DevOps Engineer to manage end-to-end data operations for enterprise-scale platforms. The ideal candidate will have 8+ years of experience in Big Data technologies, ETL development, and DevOps automation, with hands-on expertise in HDFS, Hive, Impala, PySpark, Python, Jenkins, and uDeploy. This role is critical in ensuring the stability, scalability, and efficiency of data platforms while enabling smooth development-to-production workflows.
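
For context on the stack named above (HDFS, Hive, Impala, PySpark), below is a minimal sketch of the kind of ETL pipeline this role would operate. It is illustrative only: all paths, database, and table names are assumptions, not details from the posting.

```python
# Minimal PySpark ETL sketch: read raw events from HDFS, apply a light
# transform, and publish to a Hive table that Impala can also query.
# All paths and table names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-events-etl")   # hypothetical job name
    .enableHiveSupport()           # needed to write Hive-managed tables
    .getOrCreate()
)

# Extract: raw JSON events landed on HDFS by an upstream feed (assumed layout).
raw = spark.read.json("hdfs:///data/raw/events/dt=2024-01-01/")

# Transform: basic cleansing plus a derived partition column.
clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_date").isNotNull())
)

# Load: publish as a partitioned Hive table. Note that mode("overwrite")
# replaces the whole table; per-partition overwrite needs dynamic
# partition overwrite enabled in the Spark config.
(clean.write
      .mode("overwrite")
      .format("parquet")
      .partitionBy("event_date")
      .saveAsTable("analytics.events_clean"))   # hypothetical target table

spark.stop()
```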

Key Responsibilities:

  • Manage end-to-end data operations for enterprise-scale platforms.
  • Ensure stability, scalability, and efficiency of data platforms.
  • Enable smooth development-to-production workflows.

Salary (Rate): Negotiable

City: undetermined

Country: USA

Working Arrangements: remote

IR35 Status: outside IR35

Seniority Level: Senior

Industry: IT

Detailed Description From Employer:

Required Qualifications

  • Bachelor's degree in Computer Science, IT, or a related field.
  • 8+ years of experience in Big Data engineering and DevOps practices.
  • Advanced proficiency in HDFS, Hive, Impala, PySpark, Python, and Linux.
  • Hands-on experience with CI/CD tools such as Jenkins and uDeploy (see the test sketch after this list).
  • Strong understanding of ETL development, orchestration, and performance optimization.
  • Experience with ServiceNow for incident/change/problem management.
  • Excellent analytical, troubleshooting, and communication skills.
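
The posting does not specify the pipeline design, but as a sketch of the CI/CD expectation above: a Jenkins stage might run a pytest smoke test against the job's transform logic before uDeploy promotes the build. Module, function, and column names here are assumptions for illustration.

```python
# Minimal CI smoke test a Jenkins stage might run before a uDeploy
# promotion. Uses a local Spark session so the test needs no cluster
# access in CI; names below are illustrative, not from the posting.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    s = (SparkSession.builder
         .master("local[2]")
         .appName("etl-smoke")
         .getOrCreate())
    yield s
    s.stop()

def dedupe_events(df):
    # Stand-in for the real transform; in practice this would be imported
    # from the job's package rather than defined in the test file.
    return df.dropDuplicates(["event_id"])

def test_dedupe_removes_duplicate_event_ids(spark):
    df = spark.createDataFrame(
        [("e1", "click"), ("e1", "click"), ("e2", "view")],
        ["event_id", "event_type"],
    )
    assert dedupe_events(df).count() == 2
```

In a declarative Jenkins pipeline, a test stage running "pytest tests/" would typically gate the subsequent uDeploy deployment step.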

Nice to Have

  • Exposure to cloud-based Big Data platforms (e.g., AWS EMR); see the sketch after this list.
  • Familiarity with containerization (Docker, Kubernetes) and infrastructure automation tools (Ansible, Terraform).
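
As one hypothetical example of the EMR exposure mentioned above, a PySpark job can be submitted as a step to a running cluster with boto3. The cluster ID, region, and script path are placeholders.

```python
# Hypothetical sketch: submitting a PySpark step to an existing EMR
# cluster via boto3. Cluster ID, region, and S3 path are placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",  # placeholder cluster ID
    Steps=[{
        "Name": "daily-events-etl",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            # command-runner.jar is EMR's generic step launcher.
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "--deploy-mode", "cluster",
                     "s3://example-bucket/jobs/daily_events_etl.py"],
        },
    }],
)
print(response["StepIds"])
```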