Rate: Negotiable
IR35: Inside
Working Arrangement: Remote
Location: London Area, United Kingdom
Summary: This Data Engineer role focuses on building big data pipelines in distributed environments using technologies such as Kafka, Hadoop, and Spark. The position requires a strong background in data governance and data modeling, along with a passion for Continuous Integration and Agile methodologies. The role is fully remote, for an initial duration of 6 months at 37.5 hours per week.
Key Responsibilities:
- Build and maintain big data pipelines in distributed environments using Kafka, Hadoop, Spark, and dbt.
- Embed data governance, quality, lineage, retention, monitoring, and alerting into data pipelines.
- Develop solid data models, including Dimensional and Data Vault models.
- Implement Continuous Integration and Continuous Delivery practices within Agile frameworks.
- Ensure security principles are adhered to and write secure code.
- Work on large-scale, well-governed, and compliant systems.
- Lead, guide, and coach team members both technically and procedurally.
- Apply basic analytics and machine learning concepts as needed.
- Communicate effectively, both verbally and in writing.
- Understand and apply cloud security best practices, particularly in AWS.
Key Skills:
- 5+ years of experience in building big data pipelines.
- Proficiency in Kafka, Hadoop, Spark, and dbt.
- Strong data modeling skills (Dimensional, Data Vault).
- Knowledge of Continuous Integration, Continuous Delivery, and Agile methodologies.
- Understanding of security principles and secure coding practices.
- Experience with large-scale, compliant systems.
- Leadership and coaching abilities.
- Basic analytics and machine learning knowledge.
- Excellent communication skills.
- Experience with AWS or other cloud platforms.
Salary (Rate): 92
City: London Area
Country: United Kingdom
Working Arrangements: remote
IR35 Status: inside IR35
Seniority Level: undetermined
Industry: IT