(SC cleared) Data Engineer - MS Fabric

Posted 1 day ago by Methods

£550 per day
Inside IR35
Remote
United Kingdom

Summary: The role is for a Data Engineer with strong hands-on experience in Microsoft Fabric, focusing on data engineering tasks such as building and maintaining data pipelines. The position requires proficiency in Python and SQL, along with a good understanding of data modeling and cloud technologies. The contract is remote and requires SC clearance, with an initial duration until March 2026.

Key Responsibilities:

  • Develop and maintain scalable ETL/ELT pipelines using cloud technologies.
  • Utilize Microsoft Fabric, including Spark notebooks, pipelines, and data flows.
  • Transform and model data using strong SQL skills (T-SQL or similar).
  • Implement data engineering best practices, including version control and CI/CD pipelines.
  • Ingest data using REST APIs and SFTP.
  • Create dashboards using Power BI.
  • Maintain data pipelines and ensure data integrity.
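The pipeline responsibilities above boil down to an extract-transform-load step with de-duplication and type enforcement. A minimal sketch in plain Python follows; the record source, field names, and in-memory "warehouse" are all invented for illustration and do not come from the role description:

```python
from datetime import date

# Illustrative raw records, standing in for an API or SFTP extract.
RAW_ROWS = [
    {"order_id": "1001", "amount": "19.99", "order_date": "2025-01-15"},
    {"order_id": "1002", "amount": "5.50", "order_date": "2025-01-16"},
    {"order_id": "1002", "amount": "5.50", "order_date": "2025-01-16"},  # duplicate
]

def transform(rows):
    """Cast types and drop duplicates on the business key."""
    seen, clean = set(), []
    for row in rows:
        key = row["order_id"]
        if key in seen:
            continue  # de-duplicate: keep the first occurrence only
        seen.add(key)
        clean.append({
            "order_id": int(key),
            "amount": float(row["amount"]),
            "order_date": date.fromisoformat(row["order_date"]),
        })
    return clean

def load(rows, sink):
    """Append transformed rows to a sink (stand-in for a lakehouse table)."""
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(RAW_ROWS), warehouse)
print(loaded)  # 2 rows survive after de-duplication
```

In Fabric this logic would typically live in a Spark notebook or dataflow rather than a plain script, but the shape of the step is the same.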

Key Skills:

  • Strong hands-on experience with Microsoft Fabric.
  • Proficient in Python for data engineering (Pandas, PySpark, asyncio).
  • Strong SQL skills (T-SQL or similar).
  • Experience with cloud technologies for ETL/ELT.
  • Good understanding of data modeling (star/snowflake) and data warehousing.
  • Familiarity with version control (Git) and CI/CD pipelines.
  • Knowledge of REST APIs and SFTP data ingestion.
  • Experience with Power BI for dashboarding.
  • Some knowledge of LLMs, especially Azure AI.
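The star-schema modelling mentioned above separates descriptive dimensions from numeric facts joined on a surrogate key. A runnable sketch using Python's built-in sqlite3 module follows; the posting asks for T-SQL, and SQLite is used here only so the example is self-contained, with all table and column names invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: one row per customer, holding descriptive attributes.
cur.execute(
    "CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, region TEXT)"
)
# Fact table: one row per sale, referencing the dimension by surrogate key.
cur.execute("CREATE TABLE fact_sales (customer_key INTEGER, amount REAL)")

cur.executemany(
    "INSERT INTO dim_customer VALUES (?, ?, ?)",
    [(1, "Acme Ltd", "UK"), (2, "Globex", "EU")],
)
cur.executemany(
    "INSERT INTO fact_sales VALUES (?, ?)",
    [(1, 100.0), (1, 50.0), (2, 75.0)],
)

# Typical star-schema query: join facts to a dimension and aggregate.
cur.execute("""
    SELECT d.region, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer d ON d.customer_key = f.customer_key
    GROUP BY d.region
    ORDER BY d.region
""")
totals = cur.fetchall()
print(totals)  # [('EU', 75.0), ('UK', 150.0)]
conn.close()
```

The same join-and-aggregate pattern is what a Power BI semantic model runs against a Fabric warehouse.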

Salary (Rate): £550 daily

City: undetermined

Country: United Kingdom

Working Arrangements: remote

IR35 Status: inside IR35

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

Daily rate: £500 to £550 inside IR35

Must be SC Cleared

Location: Remote - UK based only

Duration: initial contract until end of March 2026

Experience:

  • Strong hands-on experience with Microsoft Fabric (Spark notebooks, pipelines, data flows).
  • Proficient in Python for data engineering (e.g. Pandas, PySpark, asyncio, automation scripts).
  • Strong SQL skills (T-SQL or similar) for transforming and modelling data.
  • Experience building scalable ETL/ELT pipelines using cloud technologies.
  • Good understanding of data modelling (star/snowflake), data warehousing, and modern data lakehouse principles.
  • Familiarity with version control (Git) and CI/CD pipelines.

The main skillset is data engineering, specifically experience in ingesting and maintaining data pipelines. Knowledge of REST APIs and SFTP data ingestion is useful, as is Power BI for dashboarding. Some knowledge of LLMs would be useful, especially Azure AI.
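REST ingestion of the kind described usually means walking a paginated endpoint until the cursor is exhausted, which asyncio handles without blocking. A sketch follows; the paginated API is stubbed in memory (a real pipeline would make HTTP calls, e.g. with aiohttp), and the page structure and field names are invented:

```python
import asyncio

# Stubbed paginated API responses; the "next" field is the page cursor.
PAGES = {
    1: {"items": [{"id": 1}, {"id": 2}], "next": 2},
    2: {"items": [{"id": 3}], "next": None},
}

async def fetch_page(page: int) -> dict:
    """Stand-in for an async HTTP GET against a paginated REST endpoint."""
    await asyncio.sleep(0)  # yield to the event loop, mimicking network I/O
    return PAGES[page]

async def ingest_all() -> list:
    """Follow the 'next' cursor until the API reports no further pages."""
    records, page = [], 1
    while page is not None:
        payload = await fetch_page(page)
        records.extend(payload["items"])
        page = payload["next"]
    return records

records = asyncio.run(ingest_all())
print(len(records))  # 3
```

With a real client, several independent endpoints could be gathered concurrently via `asyncio.gather`, which is the usual reason asyncio appears in ingestion pipelines at all.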