HackerOne has clarified its stance on GenAI after researchers fretted that their submissions were being used to train its models.
A storm erupted on X after the bug bounty platform launched its Agentic PTaaS last month, which it said "delivers continuous security validation by combining autonomous agent execution with elite human expertise."
It said the agents "are trained and refined using proprietary exploit intelligence informed by years of testing real enterprise systems."
However, this prompted researchers to ask exactly where the data used to train the agents came from. "As a former H1 hunter, I hope you haven't used my reports to train your AI agents," said @YShahinzadeh on X.
Another researcher declared: "We're literally training our own replacement." @AegisTrail struck an ominous note: "When white hats feel the legal system is rigged against them, the appeal of the 'dark side' becomes a matter of anger and survival rather than ethics. Just saying."
HackerOne CEO Kara Sprague took to LinkedIn late last week to "address this directly and unambiguously."
She stated: "HackerOne does not train generative AI models, internally or through third-party providers, on researcher submissions or customer confidential data."
Neither, she continued, are researcher submissions used to "train, fine-tune, or otherwise improve generative AI models." And third-party model providers are not permitted to "retain or use researcher or customer data for their own model training."
Hai – HackerOne's agentic AI system – was designed "to help accelerate outcomes, such as validated reports, confirmed fixes, and paid rewards, while preserving the integrity and confidentiality of researcher contributions," she said.
Sprague assured researchers: "You are not inputs to our models... Hai is designed to complement your work, not replace it."
The furor prompted other bug bounty platforms to spell out their positions on researcher data and AI.
Intigriti's founder and CEO, Stijn Jans, said he wanted to be "crystal clear" about its position, telling researchers via LinkedIn: "You own your work."
"We apply AI to create mutual benefit for both customers and researchers, amplifying human creativity so you can continue finding the complex, critical vulnerabilities that models often miss."
"We are evolving our AI capabilities to help researchers bring value faster and to ensure our team triages them with higher speed and accuracy."
Bugcrowd's Ts&Cs state: "We do not allow third parties to train AI, LLM, or generative AI models on customer or researcher data."
At the same time, it holds researchers responsible for their use of GenAI tools. "Using GenAI does not exempt them from strict compliance with platform rules or specific program scopes," while "automated or unverified outputs are not accepted as valid submissions." ®
