Senior Data Engineer

Posted 3 days ago by Gravitas Recruitment Group Ltd

£550 Per day
Outside IR35
Remote
England, UK

Summary: We are seeking a highly skilled Senior Data Engineer for a 6-month remote contract role in the UK. The position involves building and optimizing data pipelines and architecture, with a focus on data transformation and integration in a consultancy environment. The ideal candidate will possess strong analytical skills and a deep understanding of data engineering practices. This role is classified as outside IR35.

Key Responsibilities:

  • Design, develop and maintain scalable and high-performance data pipelines for structured and unstructured data.
  • Implement data integration, extraction, transformation, and loading processes using Apache Spark and Python.
  • Develop and maintain dataset documentation and data modelling standards.
  • Work with stakeholders to understand business requirements and translate them into technical data solutions.
  • Ensure system performance through query optimisation, partitioning, and indexing strategies.
  • Contribute to the development and deployment of Power BI dashboards and reports, ensuring appropriate data access and Row-Level Security.
  • Follow DevOps and CI/CD practices, maintaining source control using Git and implementing pull-request workflows.

Key Skills:

  • Strong proficiency in SQL, with deep knowledge of indexing, data partitioning, and performance tuning for large datasets.
  • Proven recent experience with Microsoft Fabric (essential).
  • Proven expertise in Python with a focus on data libraries such as Pandas, PySpark, and PyArrow.
  • Comprehensive experience working with Apache Spark, including structured streaming, batch processing, and Delta Lake architecture.
  • Advanced understanding of Power BI visualisation tools, including DAX, data modelling best practices, and implementation of Row-Level Security.
  • Hands-on experience with cloud platforms, preferably Azure, including Azure Data Factory, Azure Data Lake Storage Gen2, Synapse Analytics, and Databricks. Knowledge of AWS or GCP is also acceptable.
  • Experience using version control systems such as Git and applying CI/CD pipelines in data engineering projects.

Salary (Rate): £550 per day

City: undetermined

Country: United Kingdom

Working Arrangements: remote

IR35 Status: outside IR35

Seniority Level: Senior

Industry: IT

Detailed Description From Employer:

Location: Remote, United Kingdom

Duration: 6 months

Rate: Up to £550 per day (DOE)

IR35 Status: Outside IR35

We are looking for a highly skilled Senior Data Engineer to join our client's team for an initial 6-month contract role. This position is Outside IR35. You will be involved in building and optimising data pipelines and architecture, supporting data transformation, integration, and delivery in a consultancy environment. This role is ideal for someone with a strong analytical mindset, a deep understanding of data engineering practices, and an ability to adapt to complex client needs.

The key responsibilities and required skills for this role are listed in full above.
This is an exciting opportunity to work on data engineering projects with significant impact. If you are a problem solver with a passion for data architecture and want to work in a dynamic, collaborative consultancy environment, we would love to hear from you.