Data Engineer

Posted 1 day ago by Gravitas Recruitment Group Ltd

£500 Per day
Outside
Hybrid
Leeds, UK

Summary: The role is for a Senior Data Engineer on a hybrid basis in Leeds, UK, for an initial 6-month contract focused on delivering an MS Fabric proof of concept. The position requires expertise in building and optimising data pipelines, supporting data transformation and integration, and adapting to complex client needs. The ideal candidate will possess strong analytical skills and a deep understanding of data engineering practices. This role is classified as outside IR35, offering a competitive daily rate.

Key Responsibilities:

  • Design, develop, and maintain scalable and high-performance data pipelines for structured and unstructured data.
  • Implement data integration, extraction, transformation, and loading processes using Apache Spark and Python.
  • Develop and maintain dataset documentation and data modelling standards.
  • Work with stakeholders to understand business requirements and translate them into technical data solutions.
  • Ensure system performance through query optimisation, partitioning, and indexing strategies.
  • Contribute to the development and deployment of Power BI dashboards and reports, ensuring appropriate data access and Row-Level Security.
  • Follow DevOps and CI/CD practices, maintaining source control using Git and implementing pull-request workflows.

Key Skills:

  • Strong proficiency in SQL, with deep knowledge of indexing, data partitioning, and performance tuning for large datasets.
  • Proven recent experience working with MS Fabric is essential.
  • Proven expertise in Python with a focus on data libraries such as Pandas, PySpark, and PyArrow.
  • Comprehensive experience working with Apache Spark, including structured streaming, batch processing, and Delta Lake architecture.
  • Advanced understanding of Power BI visualisation tools, including DAX, data modelling best practices, and implementation of Row-Level Security.
  • Hands-on experience with cloud platforms, preferably Azure, including Azure Data Factory, Lake Storage Gen2, Synapse Analytics, and Databricks. Knowledge of AWS or GCP is also acceptable.
  • Experience using version control systems such as Git and applying CI/CD pipelines in data engineering projects.

Salary (Rate): £500 per day

City: Leeds

Country: United Kingdom

Working Arrangements: Hybrid

IR35 Status: outside IR35

Seniority Level: Senior

Industry: IT

Detailed Description From Employer:

Location: Hybrid Leeds, United Kingdom

Duration: 6 months

Rate: Up to £500 per day (DOE)

IR35 Status: Outside IR35

We are looking for a highly skilled Senior Data Engineer to join our client's team on an initial 6-month contract to deliver an MS Fabric POC. This position is Outside IR35. You will build and optimise data pipelines and architecture, supporting data transformation, integration, and delivery. This role is ideal for someone with a strong analytical mindset, a deep understanding of data engineering practices, and the ability to adapt to complex client needs.

The key responsibilities and required skills are as listed above.
This is an exciting opportunity to work on cutting-edge data engineering projects with significant impact. If you're a problem solver with a passion for data engineering and want to work in a dynamic and collaborative environment, we'd love to hear from you.