Summary: The AI/ML Test Engineer role focuses on ensuring the quality and reliability of AI systems through comprehensive testing strategies for machine learning models and data pipelines. The position requires diagnosing issues, designing test scenarios, and collaborating with data scientists to enhance the AI/ML life cycle. Candidates should possess strong problem-solving skills and a solid understanding of data science and software testing. This role is critical in driving quality assurance for AI/ML products.
Salary (Rate): Negotiable
City: undetermined
Country: UK
Working Arrangements: remote
IR35 Status: inside IR35
Seniority Level: undetermined
Industry: IT
We are seeking a detail-oriented and technically skilled AI/ML Test Engineer to ensure the quality, reliability, and robustness of AI systems. This role involves designing and executing comprehensive test strategies for machine learning models and data pipelines, identifying issues across APIs, datasets, and model outputs, and driving quality throughout the AI/ML life cycle. The ideal candidate will combine strong problem-solving abilities with a deep understanding of data science, software testing, and automation frameworks.
Key Responsibilities:
- Diagnose and debug failures in AI systems, including technical bugs, data quality issues, or model limitations.
- Design and implement creative test scenarios that push the limits of AI/ML models.
- Assess and articulate the impact of model behavior on end-user experience and business objectives.
- Create, manage, and validate large and diverse test datasets, including synthetic and adversarial data.
- Automate model and API testing using frameworks such as pytest, unittest, or behave.
- Collaborate with data scientists and engineers to understand ML pipeline components and potential failure points.
- Evaluate model robustness against noisy, manipulated, or adversarial inputs.
- Perform load and performance testing on deployed ML models to ensure production readiness.
- Conduct UAT (User Acceptance Testing) and manage defect triaging and resolution.
- Create and maintain detailed test plans, test cases, and test scripts.
- Report test coverage, defects, and progress via tools such as Jira and test dashboards.
- Contribute to test strategy and drive improvements in QA processes for AI/ML products.
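To give a flavour of the model and API test automation described above, here is a minimal pytest-style sketch. The `predict` function is a hypothetical stand-in for a deployed model client; real tests would call the actual endpoint or model wrapper.

```python
# Hypothetical sketch: pytest-style checks on a model's output contract.
# `predict` is a stub standing in for a deployed model client.

def predict(features):
    """Stub for a model endpoint: returns a label and a confidence score."""
    score = min(0.99, 0.5 + 0.1 * len(features))
    return {"label": "positive" if score >= 0.5 else "negative", "score": score}

def test_prediction_schema():
    # The response must expose exactly the documented fields.
    out = predict([1.0, 2.0, 3.0])
    assert set(out) == {"label", "score"}

def test_score_is_probability():
    # Confidence scores must stay in [0, 1].
    out = predict([1.0, 2.0, 3.0])
    assert 0.0 <= out["score"] <= 1.0

def test_label_matches_score():
    # The label must be consistent with the score threshold.
    out = predict([0.2])
    expected = "positive" if out["score"] >= 0.5 else "negative"
    assert out["label"] == expected
```

Tests of this shape run unchanged under pytest or unittest-style runners and extend naturally to adversarial and load scenarios.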
Required Skills & Experience:
AI/ML and Data Testing:
- Proficient in identifying data issues, distribution shifts, and dataset bias.
- Understanding of various data formats (CSV, JSON, Parquet) used in ML workflows.
- Familiarity with ML frameworks: TensorFlow, PyTorch, scikit-learn.
- Able to validate outputs against ground truth and expected behavior.
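As an illustration of the distribution-shift work listed here, a rough NumPy sketch of a drift signal: how far a live sample's mean has moved from a reference dataset, in standard-error units. All names and thresholds are illustrative; production drift checks typically use richer statistics.

```python
import numpy as np

def mean_shift_zscore(reference, live):
    """Rough drift signal: distance between live and reference means,
    expressed in reference standard errors."""
    ref = np.asarray(reference, dtype=float)
    cur = np.asarray(live, dtype=float)
    se = ref.std(ddof=1) / np.sqrt(len(cur))
    return abs(cur.mean() - ref.mean()) / se

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # training-time data
stable = rng.normal(loc=0.0, scale=1.0, size=1_000)      # same distribution
shifted = rng.normal(loc=0.5, scale=1.0, size=1_000)     # mean has drifted

# A stable sample stays near the reference; a shifted one stands out.
assert mean_shift_zscore(reference, stable) < 4
assert mean_shift_zscore(reference, shifted) > 10
```

In practice this kind of check runs on scheduled batches of production inputs, with alerts wired to the test dashboards mentioned above.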
Programming & Tools:
- Strong Python skills, especially for test automation, data analysis (Pandas, NumPy), and API testing.
- Familiar with Java for integration and component testing (optional but beneficial).
- Experience with REST API testing tools: Postman, RestAssured.
- Understanding of API documentation standards (Swagger/OpenAPI) and HTTP protocol essentials.
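To show what validating a response against an API contract can look like, a minimal hand-rolled sketch in the spirit of an OpenAPI schema check (the schema and field names are hypothetical; real suites would validate against the actual Swagger/OpenAPI document):

```python
# Illustrative: checking a JSON response body against a minimal,
# hand-written schema, in the spirit of an OpenAPI definition.
schema = {"label": str, "score": float}

def conforms(body, schema):
    """True if the body has exactly the declared fields with the declared types."""
    return set(body) == set(schema) and all(
        isinstance(body[k], t) for k, t in schema.items()
    )

good = {"label": "positive", "score": 0.91}
bad = {"label": "positive"}  # missing the required "score" field

assert conforms(good, schema)
assert not conforms(bad, schema)
```

Tools such as Postman or RestAssured automate the same idea at scale, including status-code and header checks.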
Testing & QA Processes:
- Skilled in test case design, scripting, execution, and the defect life cycle.
- Experience using Python-based test frameworks (pytest, unittest, etc.).
- Exposure to test management tools like Jira and structured reporting.
- Experience in leading QA efforts and coordinating test activities.
Cloud & Deployment:
- Familiarity with cloud-based ML services: AWS SageMaker, Azure ML, GCP Vertex AI, etc.
- Understanding of deployment pipelines and serverless components (e.g., Lambda, Step Functions).
Data Science Collaboration:
- Exposure to Jupyter Notebooks and visualization libraries (e.g., Matplotlib, Seaborn).
- Knowledge of synthetic data generation, data augmentation, and perturbation techniques.
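The perturbation and augmentation techniques mentioned here can be sketched with plain NumPy; the functions below are illustrative examples, not any particular library's API.

```python
import numpy as np

rng = np.random.default_rng(42)

def perturb(batch, noise_scale=0.05):
    """Robustness testing: add small Gaussian noise to every feature."""
    return batch + rng.normal(scale=noise_scale, size=batch.shape)

def augment_flip(batch):
    """Cheap augmentation: reverse the feature order of each row."""
    return batch[:, ::-1]

clean = rng.uniform(0.0, 1.0, size=(8, 4))   # 8 synthetic samples, 4 features
noisy = perturb(clean)
flipped = augment_flip(clean)

assert noisy.shape == clean.shape
assert np.allclose(flipped[:, 0], clean[:, -1])
# The perturbation stays small relative to the noise scale.
assert np.abs(noisy - clean).max() < 0.05 * 6
```

In robustness testing, the perturbed batch is fed back through the model and the outputs are compared against the predictions on the clean batch.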
Preferred Qualifications:
- Bachelor's or Master's in Computer Science, Data Science, Software Engineering, or a related field.
- 3+ years in QA/testing roles, with at least 1-2 years focused on AI/ML systems.
- Certifications in ML, cloud services, or test automation frameworks are a plus.
Soft Skills:
- Strong analytical and communication skills.
- Ability to translate technical findings into business insights.
- Self-starter with an innovative mindset for tackling complex testing challenges.