If I ever received a vuln report from an AI, or other such glorified spreadsheet, I would promptly dismiss it, then wait for a human to organically discover it on their own before considering that proof of its actual existence.
If the bug was actually legitimate, and was verified, I don't think it's a good idea to just wait until someone actually experiences it.
Of course this depends on the severity of the bug as well. In the case of this article, he was refusing to submit anything until he had actually verified it, but he was definitely using the AI as an origin of discovery.
I would prefer those types of reports over blanket AI vulnerability reports that aren't proven. Discrediting a valid bug because it was not human-generated may lighten your workload, but it comes at the cost of your software's security and reliability.
I agree I would throw out reports that are AI-driven and unproven, but if someone did an actual PoC and demonstrated actual risk, I wouldn't care whether it was originally AI or not. I would just triage it based on severity like normal.
Letting your users get hacked just to own the AIs is certainly a strategy.