£36 Per hour
Undetermined
Remote
London Area, United Kingdom
Summary: The role of Senior Code Reviewer for LLM Data Training focuses on evaluating AI-generated JavaScript code responses to ensure quality and adherence to guidelines. The position requires extensive JavaScript expertise and involves reviewing annotator evaluations, validating code snippets, and providing feedback. This role is integral to maintaining high standards in AI model training and offers remote flexibility. Candidates will work within Project Atlas guidelines to ensure evaluation integrity.
Salary (Rate): £36 hourly
City: London Area
Country: United Kingdom
Working Arrangements: Remote
IR35 Status: Undetermined
Seniority Level: Senior
Industry: IT
About the Company
SME is a platform that bridges subject-matter experts with AI projects, enabling them to contribute their knowledge to improve AI models. It offers flexible opportunities to work on tasks like data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.
About the Role
We’re hiring a Code Reviewer with deep JavaScript expertise to review evaluations completed by data annotators assessing AI-generated JavaScript code responses. Your role is to ensure that annotators follow strict quality guidelines related to instruction-following, factual correctness, and code functionality.
Responsibilities
- Review and audit annotator evaluations of AI-generated JavaScript code.
- Assess whether the JavaScript code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.
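To illustrate the validation work above, here is a minimal sketch of the kind of spot check a reviewer might run: a hypothetical AI-generated snippet (`chunkArray` is an invented example, not part of Project Atlas materials) exercised against normal inputs and edge cases before an annotator's "functionally correct" rating is confirmed.

```javascript
// Hypothetical AI-generated snippet under review: split an array into fixed-size chunks.
function chunkArray(arr, size) {
  if (!Array.isArray(arr) || !Number.isInteger(size) || size <= 0) {
    throw new RangeError("size must be a positive integer");
  }
  const chunks = [];
  for (let i = 0; i < arr.length; i += size) {
    chunks.push(arr.slice(i, i + size));
  }
  return chunks;
}

// Reviewer-style checks: run the snippet against typical and edge-case inputs.
console.assert(
  JSON.stringify(chunkArray([1, 2, 3, 4, 5], 2)) === "[[1,2],[3,4],[5]]",
  "uneven trailing chunk should be kept"
);
console.assert(JSON.stringify(chunkArray([], 3)) === "[]", "empty input yields no chunks");

let rejected = false;
try {
  chunkArray([1, 2], 0); // invalid size must not loop forever or return garbage
} catch (e) {
  rejected = e instanceof RangeError;
}
console.assert(rejected, "a size of 0 should be rejected");
```

Executing the snippet in this way, rather than reading it alone, is what catches issues such as infinite loops on a zero chunk size or silently dropped trailing elements.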
Required Qualifications
- 5–7+ years of experience in JavaScript development, QA, or code review.
- Strong knowledge of JavaScript syntax, debugging, edge cases, and testing.
- Comfortable using code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience working with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or Native level.
Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.
Compensation: $25–$45 hourly
Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your JavaScript expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.