Summary: The role of a PySpark Developer involves joining a new project on a 6-month contract, focusing on building and optimizing data pipelines within the Azure data ecosystem. The ideal candidate will have hands-on experience with PySpark and Databricks, and will be responsible for developing data wrangling solutions and supporting cloud data processing. Strong communication skills and the ability to work autonomously are essential in this fast-paced environment. The position is hybrid, with three days on-site and two days remote.
Key Responsibilities:
- Build and optimize data pipelines using PySpark.
- Develop data wrangling solutions.
- Support large-scale cloud data processing.
- Work with Databricks and the Azure data ecosystem.
- Maintain ingestion pipelines and database integrations.
- Collaborate effectively with stakeholders across multiple time zones.
- Work autonomously and solve problems with minimal guidance.
Key Skills:
- Strong hands-on development experience with PySpark.
- Experience with PySpark DataFrames, partitioning, SparkSQL optimization, and clustering.
- Strong experience with Databricks.
- Knowledge of Azure SQL, Azure Data Factory, and broader Azure Cloud services.
- Experience working with Delta tables, Parquet, and CSV file formats.
- Good knowledge of Azure SQL DB, Storage Accounts, Key Vault, Application Gateways, VNETs, Azure Portal, and Power BI integration.
- Strong communication skills.
Salary (Rate): undetermined
City: London
Country: United Kingdom
Working Arrangements: hybrid
IR35 Status: inside IR35
Seniority Level: undetermined
Industry: IT
PySpark Developer | London | Hybrid | 6 Month Contract | Inside IR35
RED is currently looking for a strong PySpark Developer to join a new project on an initial 6-month contract with an ASAP start. This role is ideal for someone with strong hands-on development experience in PySpark, alongside solid exposure to Databricks and the wider Azure data ecosystem. You will be focused on building and optimising data pipelines, developing data wrangling solutions, and supporting large-scale cloud data processing in a fast-paced delivery environment.
Key skills required:
- Strong hands-on development experience with PySpark
- Experience with PySpark DataFrames, partitioning, SparkSQL optimisation, and clustering
- Strong experience with Databricks
- Knowledge of Azure SQL, Azure Data Factory, and broader Azure Cloud services
- Experience working with Delta tables, Parquet, and CSV file formats
- Experience building and maintaining ingestion pipelines, data wrangling pipelines, and database integrations
- Good knowledge of Azure SQL DB, Storage Accounts, Key Vault, Application Gateways, VNETs, Azure Portal, and Power BI integration
- Strong communication skills and the ability to work effectively with stakeholders across multiple time zones
- Comfortable working autonomously, solving problems, and delivering with minimal direct guidance
Contract details:
- Contract Type: Inside IR35
- Start Date: ASAP
- Duration: 6 Months+
- Location: London
- Work Model: Hybrid – 3 days on-site, 2 days remote
- Workload: Full-time
If you are a strong PySpark Developer with Databricks and Azure experience, please apply with your latest CV.