Summary: This role involves a fully remote contract focused on enhancing Platform Excellence within Databricks for a UK-based client. The position requires promoting best practices by developing reusable patterns, Python SDKs, and frameworks to standardize workflows across teams. The candidate will work at the intersection of software engineering, data engineering, and MLOps to improve the overall developer experience and system performance. Strong collaboration with stakeholders is essential to identify pain points and implement practical solutions.
Salary (Rate): £700 per day
City: Undetermined
Country: England, UK
Working Arrangements: Remote
IR35 Status: Undetermined
Seniority Level: Undetermined
Industry: IT
I'm working with my client on a UK-based, fully remote contract opportunity focused on Platform Excellence within Databricks. This role is all about promoting best-practice use of Databricks across the organisation by building reusable patterns, Python SDKs/libraries, and pragmatic frameworks that teams can adopt quickly and consistently.
The opportunity
You'll sit at the intersection of software engineering, data engineering, and MLOps, helping teams standardise how they build, run, and govern workloads on Databricks - improving developer experience, reliability, scalability, and performance.
Key responsibilities
- Design and build reusable Python packages/SDKs to accelerate delivery across teams (see the illustrative sketch after this list)
- Create best-practice frameworks and reference implementations (patterns, templates, guardrails)
- Drive improvements across the Databricks estate (e.g. jobs/workflows, cluster optimisation, Unity Catalog, MLflow)
- Partner closely with stakeholders to understand pain points and turn them into practical platform solutions
- Produce clear documentation so engineering and data science teams can self-serve and adopt consistently
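To give a flavour of the first responsibility above, here is a minimal, purely illustrative sketch of the kind of internal helper this role would produce - a thin wrapper over the Databricks SDK for Python. The package and function names and the wrapper's behaviour are assumptions for illustration only; just the underlying WorkspaceClient and jobs.run_now calls come from the public databricks-sdk.

```python
# Hypothetical module in an internal package (e.g. acme_dbx/jobs.py).
# Illustrative sketch only - names and behaviour are assumptions, not the
# client's actual codebase.
from typing import Optional

from databricks.sdk import WorkspaceClient


def run_job_and_wait(job_id: int, client: Optional[WorkspaceClient] = None):
    """Trigger a Databricks job run and block until it reaches a terminal state.

    Centralising the raw SDK call behind one function gives every team the
    same auth handling and error surface, instead of per-team scripts.
    """
    client = client or WorkspaceClient()  # resolves standard Databricks auth (env vars/profile)
    # jobs.run_now returns a waiter; .result() polls the run until it finishes.
    return client.jobs.run_now(job_id=job_id).result()
```

Packaged as a versioned wheel with pinned dependencies and clear docs, a handful of helpers like this is what turns "best practice" into something teams can adopt quickly and consistently.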
Priority skills/experience (what my client is looking for)
- Advanced Python (packaging, modular design, dependency management)
- Proven experience building reusable libraries/internal SDKs
- Strong Databricks experience (jobs/workflows, MLflow, Unity Catalog, optimisation)
- Strong software & platform engineering mindset (API/system design, scalability, performance)
- Strong MLOps knowledge (life cycle management, reproducibility, monitoring - sketched briefly after this list)
- Solid data engineering fundamentals (Spark, distributed computing, pipelines)
- Strong documentation and developer enablement focus
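As a brief illustration of the reproducibility point above, the standard MLflow tracking API captures parameters and metrics per run; everything here (experiment path, parameter, metric values) is a placeholder rather than the client's actual setup.

```python
# Illustrative only: standard MLflow tracking calls with placeholder values.
import mlflow

mlflow.set_experiment("/Shared/platform-demo")  # hypothetical experiment path

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("max_depth", 6)   # record config so the run is reproducible
    mlflow.log_metric("rmse", 0.42)    # record outcomes so runs are comparable
```

On Databricks, tracked runs like this sit alongside Unity Catalog governance and job orchestration - exactly the intersection of data engineering and MLOps the role covers.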
Nice to have
- Machine Learning/Deep Learning/GenAI exposure
What makes someone successful here
- Excellent communication and stakeholder engagement
- Self-starter who works well with minimal direction
- Pragmatic delivery mindset (balancing best practice with business reality)
- Patient, coaching-oriented approach and strong problem-solving skills