Lead AI Engineer - AWS Platform

Posted 2 days ago by dotSolved

Remote

Summary: The Lead AI Engineer role focuses on building and scaling production-grade AI solutions on AWS within a regulated enterprise environment. This hands-on position emphasizes the development and operationalization of AI systems rather than pure research. The engineer will be responsible for designing end-to-end AI/ML solutions and ensuring the integration of AI workflows with existing enterprise systems. The role also includes mentoring other engineers and contributing to engineering standards and practices.


Salary (Rate): negotiable

City: undetermined

Country: undetermined

Working Arrangements: remote

IR35 Status: undetermined

Seniority Level: undetermined

Industry: IT

Job Description:

We are modernizing our data and analytics ecosystem by embedding AI and Generative AI across core insurance platforms (Policy, Claims, Billing, and Enterprise systems).

We are hiring a Lead AI Engineer to build and scale production-grade AI solutions on AWS. This role is hands-on and focused on delivering real systems, while helping shape the foundation of our emerging AI platform.

This is not a pure research or modeling role. It is an engineering role focused on building, deploying, and operating AI systems in a regulated enterprise environment.

Accountabilities:

Build AI Systems (Core Responsibility)

  • Design and implement end-to-end AI/ML solutions including LLM-based applications

  • Build RAG pipelines using vector databases and enterprise data sources

  • Build machine learning pipelines that automate model training, validation, monitoring, and retraining

  • Develop APIs and services to operationalize AI capabilities across the organization

Develop Data + AI Pipelines

  • Build ingestion and transformation pipelines for multimodal content and for structured and unstructured data

  • Integrate AI workflows with enterprise systems (policy, claims, billing, etc.)

  • Ensure data quality, traceability, reliability, and governance in all AI pipelines

Operationalize Models (MLOps)

  • Implement CI/CD for AI/ML workflows

  • Deploy, monitor, and maintain models in production

  • Manage model versioning, performance monitoring, and retraining processes

Build on AWS

  • Develop solutions using Amazon SageMaker, AWS Lambda, S3, Glue, EKS, and related services

  • Contribute to the organization's evolving use of Amazon Bedrock

Apply Responsible AI Practices

  • Implement guardrails for LLM-based systems (grounding, validation, safety)

  • Ensure secure handling of sensitive data (PII, financial, etc.)

  • Build systems aligned with enterprise governance and compliance standards

Lead by Doing

  • Provide technical guidance and mentorship to engineers

  • Contribute to engineering standards and reusable patterns

  • Partner with architects and business teams to deliver high-impact use cases

Qualifications:

Required

  • 10+ years in software or data engineering, including 5+ years of AI/ML engineering

  • Hands-on experience building production AI/ML systems

  • Experience with RAG pipelines, LLMs, or NLP-based systems

  • Experience with Amazon Bedrock or similar GenAI platforms

  • Experience with data pipelines and distributed systems

  • Experience deploying and operating systems in AWS

  • Working knowledge of MLOps practices (CI/CD, monitoring, versioning)

Preferred

  • Experience with vector databases (Pinecone, Weaviate, etc.)

  • Experience in regulated industries (insurance, finance, healthcare)

  • Exposure to microservices and containerized environments (Docker, Kubernetes)