Negotiable
Undetermined
Remote
London, England, United Kingdom
Summary: The role of Human Data for AI Training (Coding) - Instructor involves teaching coding skills for AI training workflows, focusing on code generation, review, and annotation. The instructor will evaluate learner submissions and help develop curriculum materials while staying updated on AI data quality standards. This position is remote and part-time, catering to learners in LATAM time zones.
Salary (Rate): undetermined
City: London
Country: United Kingdom
Working Arrangements: remote
IR35 Status: undetermined
Seniority Level: undetermined
Industry: IT
About The Role
We're looking for a skilled human data professional with hands-on experience generating coding data for AI training workflows (e.g., SWE-bench-style tasks, agentic tasks) who is ready to share that expertise with others. If you've spent time writing, reviewing, or evaluating code for AI models and are interested in teaching, this role is for you.
What You'll Do
- Deliver live or asynchronous instruction covering code generation, review, and annotation for AI training pipelines
- Teach learners how to write clear, well-structured prompts, responses, rubrics, and trajectories in one or more programming languages (Python, JavaScript, etc.)
- Evaluate and give feedback on learner submissions, focusing on code quality, accuracy, and adherence to annotation guidelines
- Help develop and refine curriculum materials, rubrics, and assessment criteria
- Stay current with evolving AI data quality standards and integrate updates into instruction
What We're Looking For
- Strong programming skills, with Python strongly preferred
- Prior experience in AI data labeling, RLHF workflows, and/or code evaluation for LLMs
- Ability to explain technical concepts clearly to learners of varying skill levels
- Familiarity with prompt engineering and current LLM capabilities
- Detail-oriented with a commitment to data quality and consistency
Nice to Have
- Experience working with platforms like Scale AI, Turing, Mercor, Surge AI, Invisible Technologies, or similar
- Background in computer science education or bootcamp instruction
- Understanding of post-training methodologies and model evaluation
Format & Commitment
This is a remote, part-time position aligned with LATAM time zones.