MLOps/GenAI Expert - Freelance

Posted 1 week ago by Cognitive Group | Part of the Focus Cloud Group

Negotiable
Undetermined
Hybrid
London Area, United Kingdom

Summary: The role of MLOps/GenAI Expert involves providing technical leadership and architectural guidance for implementing scalable Large Language Model (LLM) solutions, particularly within the Moodle framework. The position requires hands-on expertise in deploying AI and MLOps solutions, with a focus on integrating AI services from Azure and AWS. The ideal candidate will bridge the gap between advanced AI technologies and enterprise integration needs, ensuring best practices are followed throughout the solution lifecycle.

Key Responsibilities:

  • Provide technical leadership and architectural guidance for scalable LLM solutions.
  • Support integration of AI services with Moodle.
  • Collaborate with client’s infrastructure and development teams.
  • Identify and document challenges and trade-offs in integrating LLM services.
  • Ensure MLOps best practices are followed throughout the solution lifecycle.
  • Deliver technical findings, integration challenges, and operational recommendations.
  • Recommend best practices for managing and monitoring LLM services within Moodle.

Key Skills:

  • Strong experience in MLOps.
  • Hands-on expertise in deploying AI and LLM-based solutions.
  • Knowledge of integrating AI services from Azure and AWS.
  • Experience with Moodle framework.
  • Ability to document and articulate technical challenges.
  • Understanding of best practices in AI/ML and cloud architecture.

Salary (Rate): negotiable

City: London

Country: United Kingdom

Working Arrangements: hybrid

IR35 Status: undetermined

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

**Contract project.** London based, start ASAP, hybrid working.

I am looking for a Solution Architect/Designer with strong experience in MLOps and hands-on expertise in deploying real-world AI and LLM-based solutions at scale.

Role Overview:

  • Provide technical leadership and architectural guidance for implementing scalable LLM (Large Language Model) solutions.
  • Support integration of AI services (primarily from Azure and AWS) with Moodle, which serves as the front-end of the current application.
  • Work collaboratively with the client’s infrastructure and development teams, already engaged in the project.
  • Identify, document, and articulate the key challenges, trade-offs, and considerations when integrating and scaling LLM services within the Moodle framework.
  • Ensure that MLOps best practices are considered throughout the solution lifecycle, from development and deployment to monitoring and maintenance.
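
To illustrate the kind of integration the role overview describes, below is a minimal, purely illustrative Python sketch of a thin backend wrapper that a Moodle plugin could call over HTTP to reach a managed LLM endpoint. The endpoint URL, environment variable names, payload shape, and function name are hypothetical placeholders assumed for the example; they are not details taken from this posting or from any specific Azure/AWS product API.

```python
# Hypothetical sketch: a small service-side helper that forwards prompts from the
# Moodle front-end to a cloud-hosted LLM endpoint. All names below are assumptions.
import os

import requests

# Placeholder gateway URL and credential; in practice these would point at the
# chosen Azure/AWS AI service and be injected via the platform's secret management.
LLM_ENDPOINT = os.environ.get("LLM_ENDPOINT", "https://llm-gateway.example.com/v1/chat")
API_KEY = os.environ["LLM_API_KEY"]


def generate_answer(prompt: str, timeout_s: float = 30.0) -> str:
    """Send a prompt to the managed LLM service and return the generated text."""
    response = requests.post(
        LLM_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
        json={"messages": [{"role": "user", "content": prompt}], "max_tokens": 256},
        timeout=timeout_s,
    )
    response.raise_for_status()
    data = response.json()
    # Assumes a chat-completion style response schema; adjust to the real service.
    return data["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(generate_answer("Summarise this course module for a student."))
```

Keeping the LLM call behind a wrapper like this (rather than calling the provider directly from Moodle PHP code) is one common way to centralise credentials, timeouts, and monitoring, which aligns with the MLOps concerns listed above; the actual architecture would be decided with the client's teams.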

Expected Deliverables:

  • Technical findings and observations
  • Integration challenges specific to Moodle and AI backend services
  • Architectural and operational recommendations for deploying and scaling the solution
  • Best practices for managing and monitoring LLM services within a Moodle-based application

This engagement is ideal for someone with a blend of AI/ML, cloud architecture, and MLOps experience who can bridge the gap between cutting-edge AI technologies and real-world enterprise integration requirements.