Data Engineer

Posted Today by Apidel Technologies

Negotiable
Undetermined
Remote

Summary: We are seeking an experienced Data Engineer to design, develop, and optimize scalable data platforms and pipelines that support advanced analytics and enterprise reporting initiatives. The ideal candidate will possess strong expertise in big data technologies, cloud platforms, and distributed data processing frameworks. This role involves building and maintaining ETL/ELT pipelines, optimizing big data workflows, and ensuring data quality and governance. Collaboration with analytics, engineering, and business teams is essential to deliver data-driven solutions.

Key Responsibilities:

  • Build and maintain scalable ETL/ELT pipelines and enterprise data solutions.
  • Design and optimize big data processing workflows using Spark/Hadoop ecosystems.
  • Develop data integration, transformation, and aggregation frameworks.
  • Write complex SQL queries and optimize large-scale data processing jobs.
  • Support cloud-based data lake and warehouse architectures.
  • Collaborate with analytics, engineering, and business teams to deliver data-driven solutions.
  • Ensure data quality, governance, scalability, and disaster recovery readiness.

Key Skills:

  • Strong experience with Python, SQL, Spark, and Hadoop.
  • Experience with Scala and/or Java.
  • Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform.
  • Expertise in distributed data processing and big data technologies.
  • Experience building data solutions for analytics and machine learning initiatives.
  • Familiarity with tools such as Databricks, Airflow, Kafka, Snowflake, or similar.
  • Strong communication and collaboration skills.

Salary (Rate): £53.00 hourly

City: undetermined

Country: undetermined

Working Arrangements: remote

IR35 Status: undetermined

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

Preferred Qualifications:

  • Experience with real-time streaming/data ingestion frameworks.
  • Exposure to modern lakehouse/data warehouse architectures.
  • Experience in enterprise-scale data environments and performance optimization.