The vulnerability is the scary part, not the exploit code. It's like someone saying they can walk through an open door if they're told where it is.
Using your analogy, this is more like telling someone there's an unlocked door and asking them to find it on their own using blueprints.
Not a perfect analogy, but they didn't tell the AI where the vulnerability was in the code. They just gave it the CVE description (which is intentionally vague) and a set of patches from that time period, most of which were irrelevant to the bug.
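So the setup was roughly: hand the model a vague description plus a pile of mostly-irrelevant diffs and ask it to locate the bug. A rough sketch of what that kind of harness could look like (the file paths and the `query_llm` helper here are hypothetical stand-ins; the article doesn't publish its exact tooling):

```python
from pathlib import Path

# Hypothetical inputs: the (intentionally vague) CVE text plus every
# patch from the release window, most of them unrelated to the bug.
cve_description = Path("cve_description.txt").read_text()
patches = sorted(Path("patches").glob("*.diff"))

parts = [
    "One of the patches below fixes the CVE described here.",
    "Identify the vulnerable code and explain the root cause.",
    "",
    "CVE description:",
    cve_description,
]
for patch in patches:
    parts.append(f"--- {patch.name} ---")
    parts.append(patch.read_text())
prompt = "\n".join(parts)

def query_llm(text: str) -> str:
    # Stand-in for whatever model API the researchers actually used.
    raise NotImplementedError("plug in a real model client here")

print(query_llm(prompt))
```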
I'm referencing this:
"It wrote a fuzzer before it was told to compare the diff and extrapolate the answer, implying it didn't know how to get to a solution either."
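For context, a fuzzer at its simplest just throws mutated input at a target and watches for crashes. A minimal sketch of the idea in Python (the `parse_packet` target and the seed bytes are hypothetical stand-ins, not anything from the article):

```python
import random

def mutate(data: bytes, flips: int = 8) -> bytes:
    """Randomly overwrite a few bytes of the seed input."""
    buf = bytearray(data)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def parse_packet(data: bytes) -> None:
    """Hypothetical target: stands in for the real code under test."""
    if len(data) < 4:
        raise ValueError("short packet")
    length = int.from_bytes(data[:4], "big")
    # Trusts the length field without bounds-checking it: indexing
    # past the end raises IndexError, our stand-in for a crash.
    _last_byte = data[3 + length]

seed = b"\x00\x00\x00\x08AAAAAAAA"
for i in range(100_000):
    sample = mutate(seed)
    try:
        parse_packet(sample)
    except ValueError:
        pass  # input rejected cleanly; not interesting
    except IndexError as exc:
        print(f"iteration {i}: crash on input {sample!r}: {exc}")
        break
```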
"So if you give it the neighbourhood of the building with the open door and a photo of the doorway that's open, then drive it to the neighbourhood when it tries to go to the mall (it's seen a lot of open doors there), it can trip and fall right before walking through the door."
That still seems a little hyperbolic, but I see your point.