Salary (Rate): £40.00/hr
City: London
Country: United Kingdom
Working Arrangements: Remote
IR35 Status: Undetermined
Seniority Level: Undetermined
Industry: IT
Offensive Security Analyst (Structured / Non-Exploit) — AI Training
About The Role
What if your ability to think like an adversary could directly shape how AI understands and reasons about cybersecurity threats? We're looking for Offensive Security Analysts to work with realistic attack scenarios, model how threats move through systems, and help train the next generation of AI systems to reason about risk the way seasoned security professionals do. This is a fully remote, flexible contract role focused on structured adversarial reasoning — not exploit development. If you've spent time in pentesting, red teaming, or a hands-on blue-team role and know how real attacks unfold, your expertise is exactly what we need.
Organization: Alignerr
Type: Hourly Contract
Location: Remote
Commitment: 10–40 hours/week
What You'll Do
- Analyze real-world attack paths, kill chains, and adversary strategies across modern production environments
- Identify and classify weaknesses, misconfigurations, and defensive gaps in realistic scenarios
- Review and evaluate red-team-style scenarios and intrusion narratives for accuracy and completeness
- Generate, label, and validate adversarial reasoning data used to train and evaluate AI systems
- Clearly articulate how attack chains unfold, what impact they carry, and where defenses fail
- Work independently and asynchronously on task-based assignments — fully on your own schedule
Who You Are
- 2+ years of hands-on experience in pentesting, red teaming, or a blue-team role with deep attack knowledge
- You understand how real attacks unfold in production environments — not just in theory
- You can break down complex attack chains into clear, structured reasoning that others can follow
- Detail-oriented and systematic — you notice what others miss and communicate it precisely
- Comfortable working independently without close supervision
- No exploit development experience required for this role
Nice to Have
- Familiarity with frameworks such as MITRE ATT&CK, Cyber Kill Chain, or STRIDE
- Experience writing threat models, red-team reports, or security assessments
- Background in cloud security, network security, or enterprise infrastructure
- Prior exposure to AI systems, security research, or structured data labeling
- Relevant certifications such as OSCP, CEH, GPEN, or equivalent
Why Join Us
- Work directly on frontier AI systems alongside leading research labs
- Fully remote and flexible — work when and where it suits you
- Freelance autonomy with the structure of meaningful, task-based work
- Apply your offensive security expertise to a domain that's shaping the future of technology
- Potential for ongoing work and contract extension as new projects launch