Summary: The Software Engineer (C#) role focuses on developing and optimizing high-performance systems for AI infrastructure, specifically data pipelines and evaluation tooling. Candidates will engage in full-stack development, enhance existing C# codebases, and ensure interoperability between the .NET and Python ecosystems. This position offers the opportunity to work on impactful projects in a fully remote setting, with a commitment of 20–40 hours per week.
Salary (Rate): £37.50/hr
City: undetermined
Country: United Kingdom
Working Arrangements: remote
IR35 Status: inside IR35
Seniority Level: Mid-Level
Industry: IT
Software Engineer (C#) — Internal Tooling (AI Infrastructure)
About The Role
What if your C# expertise could directly shape the infrastructure powering the next generation of AI? We're looking for experienced full-stack C# engineers to build and optimize the data pipelines, annotation systems, and evaluation tooling that leading AI labs depend on every day. This isn't maintenance work or low-stakes ticket-churning. You'll be working on real production systems at the frontier of AI development — the kind of infrastructure that determines how AI models are trained, measured, and improved.
Organization: Alignerr
Type: Hourly Contract
Location: Remote
Commitment: 20–40 hours/week
What You'll Do
- Design, build, and optimize high-performance C# systems supporting large-scale AI data pipelines and evaluation workflows
- Develop full-stack tooling and backend services for data annotation, validation, and quality control at scale
- Improve the reliability, performance, and safety of existing C# codebases used in production AI environments
- Bridge the gap between .NET and Python ML ecosystems — invoking models, wrapping native libraries, and enabling smooth interoperability
- Build robust benchmarking harnesses to evaluate system performance and surface edge cases
- Collaborate with data, research, and engineering teams to support model training and evaluation workflows
- Participate in synchronous design reviews to iterate quickly on architecture and implementation decisions
Who You Are
- 3–5+ years of professional experience writing production-grade C#
- Experienced full-stack developer with a strong systems-programming background
- Skilled at interoperability scenarios — calling Python ML models from .NET, wrapping native libraries, crossing runtime boundaries cleanly
- Experienced designing benchmarking and evaluation harnesses for real systems
- Clear, precise written and verbal communicator — you can explain a design decision as well as you can implement one
- Native or fluent English speaker
- Able to commit 20–40 hours per week reliably
Nice to Have
- Prior experience with data annotation platforms, data quality systems, or evaluation pipelines
- Familiarity with AI/ML workflows, model training infrastructure, or benchmarking tooling
- Experience with distributed systems or internal developer tooling
- Background in performance engineering or systems-level optimization
Why Join Us
- Work on cutting-edge AI infrastructure alongside leading research labs and engineering teams
- Fully remote and flexible — structure your hours around your life
- Freelance autonomy with the depth and substance of meaningful, long-term engineering work
- Make a direct, tangible impact on how AI systems are built, evaluated, and improved at scale
- Potential for ongoing work and expanded scope as new projects launch