Negotiable
Remote
London, England, United Kingdom
Summary: Experienced software engineers will help improve advanced AI systems through human feedback, training large language models in software development practices. Engineers will analyze systems, improve code quality, and tackle complex technical challenges while shaping how AI models evaluate performance and generate outputs. The role suits those with a strong backend engineering background and an interest in AI automation. The work is project-based and fully remote, with a flexible schedule.
Salary (Rate): Negotiable
City: London
Country: United Kingdom
Working Arrangements: remote
IR35 Status: undetermined
Seniority Level: undetermined
Industry: IT
An enterprise client is seeking experienced software engineers to help improve advanced AI systems through human feedback. This work supports leading AI organizations in training large language models to better understand software development practices, debugging, and code quality. The initiative focuses on improving how AI systems write, review, and optimize code in real-world scenarios, and you’ll play a key role in shaping how AI models evaluate performance, detect issues, and generate reliable outputs.
Job Description
This opportunity is ideal for engineers who enjoy analyzing systems, improving code quality, and working on complex technical challenges. You will contribute to AI training projects by evaluating outputs, refining logic, and identifying potential vulnerabilities.
What You'll Do:
- Develop objective, verifiable evaluation criteria (rubrics) for system performance
- Review system logs and execution paths to improve reliability and code quality
- Refactor code and optimize system behavior toward ideal outcomes
- Test systems for vulnerabilities, including data exposure and edge-case failures
- Provide detailed, high-quality feedback on system performance and outputs
Requirements:
- 2+ years of experience in backend engineering, AI automation, or systems integration
- Strong proficiency in at least two programming languages (e.g., Python, JavaScript, Go, Java)
- Experience working with SQL databases
- Proven ability to build and maintain production-grade systems
- Experience working in live (non-mocked) environments with multi-step interactions
- Strong analytical skills and attention to detail
Nice to Haves:
- Experience with multi-stage system workflows and coordination tasks
- Familiarity with integrating tools such as APIs, databases, or external platforms
- Understanding of system vulnerabilities (e.g., privacy leaks, prompt injection, access escalation)
- Experience working with AI systems or agent-based workflows
- Comfort working with persistent state tracking or similar frameworks
Additional Information
- Fully remote and flexible work schedule
- Project-based engagement with no guaranteed hours
- Work on tasks based on availability and project assignment
- Payment is based on completed tasks only
- Must accept project invitations before beginning work
- Freelancers may accept or decline tasks depending on availability
- No guaranteed workload; volume may vary weekly