Summary: The AI Red-Teamer role, offered through referral partner Crossing Hurdles, involves adversarial AI testing: developing exploit scenarios and generating human data to identify vulnerabilities in AI models. The role offers flexible scheduling, can be performed full-time or part-time, and supports a range of AI security projects. Candidates should have prior red-teaming experience or a strong background in adversarial machine learning. This is a remote position with an hourly rate based on experience.
Salary (Rate): $54–$111/hour
City: undetermined
Country: United Kingdom
Working Arrangements: remote
IR35 Status: undetermined
Seniority Level: undetermined
Industry: IT
At Crossing Hurdles, we work as a referral partner: we refer candidates to a partner that collaborates with the world's leading AI research labs to build and train cutting-edge AI models.
Position: AI Red-Teamer — Adversarial AI Testing (Advanced)
Type: Hourly Contract
Compensation: $54–$111/hour
Location: Remote
Duration: Full-time or part-time (10–40 hours per week), flexible
Commitment: Flexible, asynchronous scheduling
Key Responsibilities
- Red-team AI models and agents by crafting jailbreaks, prompt injections, misuse cases, and exploit scenarios
- Generate high-quality human data: annotate AI failures, classify vulnerabilities, and flag systemic risks
- Apply structured approaches using taxonomies, benchmarks, and playbooks to maintain consistency in testing
- Document findings comprehensively to produce reproducible reports, datasets, and attack cases
- Flexibly support multiple projects including LLM jailbreaks and socio-technical abuse testing across different customers
Required Qualifications
- Prior red-teaming experience (AI adversarial work, cybersecurity, or socio-technical probing), or a strong AI background that supports rapid learning
- Expertise in adversarial machine learning, including jailbreak datasets, prompt injection, RLHF/DPO attacks, and model extraction
- Cybersecurity skills such as penetration testing, exploit development, and reverse engineering
- Experience with socio-technical risk areas like harassment, disinformation, or abuse analysis
- Creative probing using psychology, acting, or writing to develop unconventional adversarial methods
Application process (takes about 20 minutes):
- Upload your resume
- Complete an AI interview based on your resume (15 min)
- Submit the form