Summary: The Sr MLOps Engineer role focuses on designing and managing machine learning pipelines and model lifecycles in a cloud-based environment. The position requires expertise in containerization, orchestration, and data architecture, with a strong emphasis on monitoring and quality assurance of deployed models. The engineer will work closely with portfolio companies to ensure effective integration of MLOps infrastructure with existing data flows. This role demands a proactive approach to risk management and adaptability across varying levels of data maturity.
Key Responsibilities:
- Design and orchestrate ML pipelines using tools like MLflow and Kubeflow.
- Deploy models across cloud and hybrid environments, ensuring proper serialization and serving infrastructure.
- Implement scaling strategies for inference workloads and manage feature store and experiment tracking.
- Monitor model performance and establish retraining pipelines for continuous improvement.
- Assess and enhance data architecture to prevent deployment failures.
- Integrate MLOps infrastructure with existing ERP and CRM systems.
- Utilize AI-assisted development practices throughout the software delivery lifecycle.
- Build and deploy AI workflows on major cloud platforms, primarily Azure.
- Design systems for maintainability by future engineering teams.
- Adapt MLOps capabilities to varying levels of data infrastructure maturity.
Key Skills:
- 7+ years of experience in ML engineering or MLOps roles.
- Strong understanding of ML pipeline design and orchestration.
- Proficiency in containerization and orchestration using Docker and Kubernetes.
- Experience with model monitoring and quality assurance techniques.
- Solid data architecture fundamentals.
- Familiarity with enterprise integration patterns.
- Experience with AI-driven development practices.
- Hands-on experience with cloud-based AI workflow implementation.
- Ability to work with companies at various data maturity levels.
- LLM deployment and observability experience preferred.
Salary (Rate): undetermined
City: undetermined
Country: undetermined
Working Arrangements: remote
IR35 Status: undetermined
Seniority Level: undetermined
Industry: IT
Job Title: Sr MLOps Engineer
Location: 100% Remote
Job Summary:
Core Technical Skills
ML Pipeline & Model Lifecycle
- ML pipeline design and orchestration: MLflow, Kubeflow, Azure ML Pipelines
- Model deployment across cloud and hybrid environments, including model serialisation (ONNX, Pickle, TorchScript, SavedModel) and serving infrastructure design. Knowing how to package a model correctly for the target runtime is a baseline expectation, not a bonus
- Model scaling: horizontal and vertical scaling strategies for inference workloads, autoscaling configurations, and load balancing for high-availability model serving across portfolio company environments
- Feature store management and experiment tracking for reproducibility
- LLM-specific operations: prompt versioning, RAG pipeline observability, evaluation frameworks, and fine-tuning pipeline management
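The package-reload-verify loop behind the serialisation expectation above can be sketched with stdlib pickle (ONNX, TorchScript, and SavedModel exports are framework-specific, but follow the same shape: write an artefact, reload it in the serving runtime, check parity). The `LinearModel` class is a hypothetical stand-in for a real trained model:

```python
import os
import pickle
import tempfile

# Hypothetical stand-in for a trained model; a real engagement would
# serialise a scikit-learn, PyTorch, or TensorFlow artefact instead.
class LinearModel:
    def __init__(self, weight, bias):
        self.weight, self.bias = weight, bias

    def predict(self, x):
        return self.weight * x + self.bias

model = LinearModel(weight=2.0, bias=1.0)

# Write the artefact to disk, as a deployment pipeline would.
path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f)

# Reload it and verify prediction parity before promoting to serving.
with open(path, "rb") as f:
    restored = pickle.load(f)

assert restored.predict(3.0) == model.predict(3.0)
```

The parity assertion is the important habit: the reloaded artefact, not the in-memory object, is what the serving runtime will execute.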
Containerisation & Orchestration
- Docker: hands-on experience containerising ML models and pipelines. Must be comfortable writing Dockerfiles, managing images, and designing container-based deployment workflows for model serving
- Kubernetes: working knowledge of deploying and managing containerised workloads in Kubernetes environments. Does not need to be a Kubernetes administrator, but must understand deployments, services, resource limits, and horizontal pod autoscaling as they apply to ML model serving
- Familiarity with Kubernetes-native ML tooling (Kubeflow, KServe, Seldon) is a plus given the Azure-dominant environment across GemSpring's portfolio
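As a sketch of the container-based model-serving workflow described above, a minimal serving Dockerfile might look like this (the base image tag, the `serve.py` entrypoint, the `model.onnx` artefact, and the port are illustrative assumptions, not a prescribed setup):

```dockerfile
# Slim Python base; pin the tag for reproducible builds
FROM python:3.11-slim

WORKDIR /app

# Install pinned serving dependencies first to maximise layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the serialised model artefact and the (hypothetical) serving app
COPY model.onnx serve.py ./

# Expose the serving port and start the inference server
EXPOSE 8080
CMD ["python", "serve.py"]
```

Copying dependencies before application code keeps rebuilds fast when only the model or serving logic changes.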
Model Monitoring & Quality
- Data drift and concept drift detection: identifying when model inputs or outputs have shifted enough to degrade performance
- Prediction quality monitoring and alerting: systems that give portfolio company teams early warning before failures surface in production
- Model retraining pipeline design: structured, automated pathways for updating models as conditions change
Data Architecture Foundations (Critical Requirement)
- Strong data architecture fundamentals. Abhilash Vantaram flagged this explicitly: most AI engagements fail because of improper data architectures. This person must understand data modelling, pipeline design, and storage patterns well enough to assess an existing data environment and identify structural risks before they become deployment failures
Enterprise Integration
- Deep ERP-specific expertise is not required; familiarity with enterprise integration patterns is: understanding what connectors, MCPs, and cloud bridges already exist and leveraging them rather than rebuilding from scratch
- Ability to design MLOps infrastructure that integrates cleanly with existing ERP and CRM data flows across SAP, Salesforce, NetSuite, and equivalent mid-market platforms
- Scope clarity: once AI is in production across multiple portfolio companies simultaneously, the model lifecycle demands a dedicated owner. This role is not the infrastructure engineer; that is the DevOps Engineer. Splitting attention between pipelines and model lifecycle management produces underperformance in both, so the two functions must stay distinct
AI-Driven Development Practices
- AI-assisted SDLC fluency: uses AI tooling as a matter of course across the full software delivery lifecycle: code generation and review (GitHub Copilot, Cursor, Claude), AI-assisted test writing, automated documentation generation, AI-driven PR review, and AI-powered security scanning. These are table-stakes instruments in a 2026 delivery environment, not differentiators. A candidate who still treats them as optional is already behind the pace this team operates at
- Prompting as an engineering discipline: chain-of-thought, few-shot, system-level, role-based, and structured output prompting patterns, with the ability to version, evaluate, and iterate on prompts as first-class engineering artifacts. Ad hoc prompting is not the standard; repeatable, testable, and documented prompt design is. Knows how to evaluate prompt performance systematically rather than informally
- Hands-on cloud-based AI workflow implementation: direct, production-grade experience building and deploying AI workflows on at least one major hyperscaler (Azure AI Services, AWS Bedrock, or Google Vertex AI). Understands managed inference endpoints, AI orchestration services, serverless AI functions, and AI gateway patterns from having built them, not from reading about them. Azure is the dominant environment across GemSpring's portfolio and must be a primary area of fluency; exposure to a second hyperscaler is a strong differentiator given the portfolio's mix of legacy and cloud-native companies
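The "prompts as first-class engineering artifacts" standard above can be sketched as a minimal versioned prompt registry; the template text, version scheme, and hash-based integrity check are illustrative assumptions, not a prescribed tool:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptVersion:
    """A prompt treated as a versioned, content-addressed artifact."""
    name: str
    version: str
    template: str

    @property
    def digest(self):
        # Content hash ties evaluation results to the exact prompt text
        return hashlib.sha256(self.template.encode()).hexdigest()[:12]

    def render(self, **kwargs):
        return self.template.format(**kwargs)

registry = {}

def register(prompt):
    registry[(prompt.name, prompt.version)] = prompt

# Hypothetical summarisation prompt, evolved across two versions
register(PromptVersion("summarise", "1.0", "Summarise the text: {text}"))
register(PromptVersion("summarise", "1.1",
                       "Summarise the text in one sentence, plain English: {text}"))

v11 = registry[("summarise", "1.1")]
rendered = v11.render(text="Quarterly revenue rose 4%.")
# Evaluation runs would log (name, version, digest) alongside scores so
# any quality regression can be traced to an exact prompt revision.
```

Logging the digest next to every evaluation score is what makes prompt iteration testable rather than ad hoc: a regression points back to one exact revision.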
- Design for the team that follows: the portfolio company's engineers will maintain this after the Tiger Team moves on. Every pipeline, monitoring setup, and deployment pattern must be operable by people who weren't part of the original build
- Risk-first orientation: data drift and model degradation are silent failures. The ability to design monitoring systems that surface problems early, before they become visible to the business, is the core value this role provides
- Containerisation as standard practice: Docker is not optional; every model deployment should be containerised from the outset. This person must treat containerised, reproducible deployments as the baseline, not a nice-to-have. Kubernetes knowledge is required to operate and scale those deployments reliably in production
- Adaptability across maturity levels: will work with companies ranging from those with mature data platforms to those with minimal data infrastructure. Must deliver meaningful MLOps capability in both contexts without requiring the company to reach an ideal state first
Experience Baseline
- 7+ years in ML engineering or MLOps roles with demonstrated production model management experience
- LLM deployment and observability experience strongly preferred
- Strong data architecture fundamentals: verifiable, not just stated
Iflowsoft Solutions inc is an Equal Opportunity Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, disability, military status, national origin or any other characteristic protected under federal, state, or applicable local law. We promote and support a diverse workforce at all levels in the company.