Python Open Source Contributor | Remote

Posted 1 day ago by Crossing Hurdles

Summary: The role of Python Open Source Contributor involves working with established open-source Python repositories to support AI benchmarking projects. The contributor will evaluate coding agents' performance, assess outputs against technical criteria, and document findings for model improvement. This position requires strong experience in open-source Python development and the ability to work independently in a remote setting.

Salary (Rate): £150.00/hr

City: undetermined

Country: United Kingdom

Working Arrangements: remote

IR35 Status: undetermined

Seniority Level: undetermined

Industry: IT

Detailed Description From Employer:

Position: Python Open Source Contributor

Type: Hourly contract

Compensation: $70–$150/hour

Location: Remote

Commitment: 10–40 hours/week

Role Responsibilities

  • Work with well-known, well-documented open-source Python repositories to support AI benchmarking projects.
  • Evaluate the performance of coding agents across open-ended software engineering tasks.
  • Assess agent outputs against predefined technical and quality criteria.
  • Apply real-world open-source development judgment to identify strengths, weaknesses, and edge cases in AI-generated code.
  • Document findings clearly to support model evaluation and improvement.
  • Collaborate asynchronously with research and engineering teams throughout the project lifecycle.

Requirements

  • Strong experience contributing to or maintaining open-source Python repositories.
  • Deep proficiency in Python and familiarity with common open-source development workflows.
  • Ability to evaluate code quality, correctness, and maintainability.
  • Strong analytical skills with high attention to detail.
  • Clear written communication for documenting evaluations and insights.
  • Comfortable working independently in a remote, project-based environment.

Application Process (Takes 20 Min)

  • Upload resume
  • Interview (15 min)
  • Submit form