Remote Senior Software Engineer (LLM) - 34953

Posted 4 days ago by Turing

£80 per hour
Inside IR35
Remote
United Kingdom

Summary: The role of Remote Senior Software Engineer (LLM) at Turing involves collaborating with AI researchers to enhance AI-assisted software development through the evaluation and curation of datasets for Large Language Models (LLMs). The position requires reviewing model-generated code responses and providing detailed evaluations to improve LLM performance on software engineering tasks. Candidates should have extensive experience in software engineering and a strong understanding of code quality and evaluation processes. This is a contractor position with a commitment of approximately 20 hours per week.

Key Responsibilities:

  • Review and compare model-generated code responses using a structured ranking system.
  • Evaluate code diffs for correctness, quality, style, and efficiency (see the screening sketch after this list).
  • Provide clear rationales for ranking decisions.
  • Maintain consistency and objectivity in evaluations.
  • Collaborate with the team to identify edge cases and ambiguities in model behavior.
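
A quick illustration of what "evaluating a diff" can involve in practice: the sketch below flags a few mechanical red flags in a unified diff before the human judgment on correctness, style, and efficiency begins. It is purely illustrative; the function, its heuristics, and its thresholds are assumptions, not part of Turing's stated workflow.

```python
def diff_red_flags(diff_text: str) -> list[str]:
    """Cheap screening checks on a unified diff (illustrative heuristics only).

    The substantive judgment -- correctness, quality, style, efficiency --
    remains a human task; this merely surfaces things worth a closer look.
    """
    lines = diff_text.splitlines()
    # Added lines, excluding the "+++ path" file headers.
    added = [l for l in lines if l.startswith("+") and not l.startswith("+++")]
    flags = []
    if len(added) > 400:
        flags.append("very large change: harder to review, may bundle unrelated edits")
    if any("TODO" in l or "FIXME" in l for l in added):
        flags.append("introduces TODO/FIXME markers")
    if not any("test" in l.lower() for l in lines if l.startswith("+++ ")):
        flags.append("no test files appear to be touched")
    return flags
```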

Key Skills:

  • At least 3 years of experience at top-tier product or research companies.
  • 7+ years of overall professional software engineering experience.
  • Strong fundamentals in software design, coding best practices, and debugging.
  • Excellent ability to assess code quality, correctness, and maintainability.
  • Proficient with code review processes and reading diffs in real-world repositories.
  • Exceptional written communication skills.
  • Prior experience with LLM-generated code or evaluation work is a plus.

Salary (Rate): £80 hourly

City: undetermined

Country: United Kingdom

Working Arrangements: remote

IR35 Status: inside IR35

Seniority Level: Senior

Industry: IT

Detailed Description From Employer:

About Us: Turing is one of the world’s fastest-growing AI companies, pushing the boundaries of AI-assisted software development. Our mission is to empower the next generation of AI systems to reason about and work with real-world software repositories. You’ll be working at the intersection of software engineering, open-source ecosystems, and frontier AI.

Project Overview: We're building high-quality evaluation and training datasets to improve how Large Language Models (LLMs) interact with realistic software engineering tasks. A key focus of this project is curating verifiable software engineering challenges from public GitHub repository histories using a human-in-the-loop process.
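
The posting does not describe the curation pipeline itself, but one common way to make such challenges verifiable is to pair a historical fix commit with the project's own test suite: revert the fix, ask a model to reproduce it, and rerun the tests. A minimal sketch under that assumption (a locally cloned repository; all names and the keyword heuristic are hypothetical):

```python
import subprocess
from dataclasses import dataclass

@dataclass
class CandidateTask:
    """A commit that plausibly fixes a reported issue (hypothetical schema)."""
    sha: str
    subject: str

def candidate_fix_commits(repo_path: str, limit: int = 50) -> list[CandidateTask]:
    """Scan recent history of a cloned repo for commits whose message
    mentions a fix; each is a candidate for a verifiable task (revert the
    commit, have a model re-fix it, confirm the test suite passes again)."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"-{limit}", "-i",
         "--grep=fix", "--pretty=format:%H%x09%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    tasks = []
    for line in out.splitlines():
        if not line:
            continue
        sha, subject = line.split("\t", 1)
        tasks.append(CandidateTask(sha=sha, subject=subject))
    return tasks
```

Keyword matching alone is a noisy signal, which is where the human-in-the-loop review mentioned above would filter and verify the candidates.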

Why This Role Is Unique:

  • Collaborate directly with AI researchers shaping the future of AI-powered software development.
  • Work with high-impact open-source projects and evaluate how LLMs perform on real bugs, issues, and developer tasks.
  • Influence the design of datasets that will train and benchmark next-generation LLMs.

What the day-to-day looks like:

  • Review and compare 3–4 model-generated code responses for each task using a structured ranking system (a sketch of such a ranking record follows this list).
  • Evaluate code diffs for correctness, code quality, style, and efficiency.
  • Provide clear, detailed rationales explaining the reasoning behind each ranking decision.
  • Maintain high consistency and objectivity across evaluations.
  • Collaborate with the team to identify edge cases and ambiguities in model behavior.
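
For concreteness, one plausible shape for the "structured ranking system" mentioned above is sketched below. This is not Turing's actual schema; every class, field, and check is an assumption made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ResponseEvaluation:
    """One reviewer's scores for a single model-generated diff (hypothetical)."""
    response_id: str
    correctness: int  # e.g. 1-5: does the diff actually resolve the task?
    quality: int      # readability, structure, fit with repo conventions
    efficiency: int   # avoids needless work, allocations, or API calls
    notes: str = ""

@dataclass
class TaskRanking:
    """Ranked comparison of the 3-4 responses produced for one task."""
    task_id: str
    ranked_response_ids: list[str] = field(default_factory=list)  # best first
    rationale: str = ""  # the written justification the role centers on

def validate(ranking: TaskRanking, evals: list[ResponseEvaluation]) -> None:
    """Consistency checks before a ranking is submitted."""
    ids = {e.response_id for e in evals}
    if set(ranking.ranked_response_ids) != ids:
        raise ValueError("every response must be ranked exactly once")
    if not ranking.rationale.strip():
        raise ValueError("a ranking without a written rationale is incomplete")
```

Making the rationale a hard requirement in validate mirrors the role's emphasis on clear written reasoning behind each ranking decision.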

Required Skills:

  • At least 3 years of experience at top-tier product or research companies (e.g., Stripe, Datadog, Snowflake, Dropbox, Canva, Shopify, Intuit, PayPal, or research roles at IBM, GE, Honeywell, Schneider, etc.), with 7+ years of overall professional software engineering experience.
  • Strong fundamentals in software design, coding best practices, and debugging.
  • Excellent ability to assess code quality, correctness, and maintainability.
  • Proficient with code review processes and reading diffs in real-world repositories.
  • Exceptional written communication skills to articulate evaluation rationale clearly.
  • Prior experience with LLM-generated code or evaluation work is a plus.

Bonus Points:

  • Experience in LLM research, developer agents, or AI evaluation projects.
  • Background in building or scaling developer tools or automation systems.

Engagement Details:

  • Commitment: ~20 hours/week (partial PST overlap required)
  • Type: Contractor (no medical or paid leave)
  • Duration: 1 month, starting next week; potential extensions based on performance and fit
  • Rate: $40–$100/hour, based on experience and skill level