Summary: The AI Security Analyst role involves evaluating and enhancing the security of AI systems through red-teaming exercises and adversarial prompt crafting. Candidates will document vulnerabilities and collaborate with engineering teams to improve AI safety and robustness. This position is remote and offers flexible hours, making it suitable for security-savvy professionals looking to impact AI safety directly.
Salary (Rate): £75.00/hr
City: London
Country: United Kingdom
Working Arrangements: remote
IR35 Status: undetermined
Seniority Level: undetermined
Industry: IT
About The Role
We're looking for security-savvy professionals to help stress-test, evaluate, and harden AI systems. You'll probe for vulnerabilities, craft adversarial prompts, and provide expert feedback that directly improves AI safety and robustness.
Organization: Alignerr
Type: Hourly Contract
Compensation: $15–$75/hour
Location: Remote
Commitment: 10–40 hours/week
What You'll Do
- Conduct red-teaming exercises to identify security weaknesses in AI systems
- Craft adversarial prompts and edge-case scenarios to test model guardrails
- Evaluate AI outputs for safety, bias, and policy compliance
- Document vulnerabilities, exploits, and unexpected behaviors in structured reports
- Collaborate with engineering teams to recommend mitigations and improvements
- Stay current on emerging AI security threats, jailbreak techniques, and best practices
- Help define and refine security evaluation rubrics and testing protocols
Who You Are
- Strong understanding of cybersecurity concepts, threat modeling, or penetration testing
- Hands-on experience with AI/ML systems, LLMs, or prompt engineering
- Creative and analytical thinker — you enjoy breaking things to make them better
- Excellent written communication and documentation skills
- Comfortable working independently on task-based, asynchronous assignments
- Familiarity with the OpenClaw ecosystem or similar open-source AI platforms is a plus
- Background in infosec, ethical hacking, or AI safety research is a plus but not required
Why Join Us
- Work on the cutting edge of AI security with top research labs
- Directly shape the safety and reliability of AI products used by millions
- Freelance perks: full autonomy, flexible schedule, and global collaboration
- Build expertise in one of the fastest-growing domains in tech
- Potential for ongoing work, expanded scope, and contract extension
Application Process (Takes 10–15 min)
- Submit your resume
- Complete a short screening
- Project matching and onboarding
PS: Our team reviews applications daily. Please complete your application steps to be considered for this opportunity.