this post was submitted on 26 May 2025
45 points (95.9% liked)

Programming

20404 readers
169 users here now

Welcome to the main community in programming.dev! Feel free to post anything relating to programming here!

Cross posting is strongly encouraged in the instance. If you feel your post or another person's post makes sense in another community cross post into it.

Hope you enjoy the instance!

Rules

Rules

  • Follow the programming.dev instance rules
  • Keep content related to programming in some way
  • If you're posting long videos try to add in some form of tldr for those who don't want to watch videos

Wormhole

Follow the wormhole through a path of communities [email protected]



founded 2 years ago
MODERATORS
 

Marketers promote AI-assisted developer tools as workhorses that are essential for today’s software engineer. Developer platform GitLab, for instance, claims its Duo chatbot can “instantly generate a to-do list” that eliminates the burden of “wading through weeks of commits.” What these companies don’t say is that these tools are, by temperament if not default, easily tricked by malicious actors into performing hostile actions against their users.

Researchers from security firm Legit on Thursday demonstrated an attack that induced Duo into inserting malicious code into a script it had been instructed to write. The attack could also leak private code and confidential issue data, such as zero-day vulnerability details. All that’s required is for the user to instruct the chatbot to interact with a merge request or similar content from an outside source.

top 1 comments
sorted by: hot top controversial new old
[–] [email protected] 13 points 4 days ago* (last edited 4 days ago)

Before my actual comment, I just want to humorously remark about the group which found and documented this vulnerability, Legit Security. With a name like that, I would inadvertently hang up the phone if I got a call from them haha:

"Hi! This is your SBOM vendor calling. We're Legit.

Me: [hangs up, thinking it's a scam]

Anyway...

In a lot of ways, this is the classic "ignore all prior instructions" type of exploit, but with more steps and is harder to scrub for. Which makes it so troubling that GitLab's AI isn't doing anything akin to data separation when taking instructions vs referencing other data sources. What LegitSecurity revealed really shouldn't have been a surprise to GitLab's developers.

IMO, this class of exploit really shouldn't exist, in the same way that SQL injection attacks shouldn't be happening in 2025 due to a lack of parameterized queries. Am I to believe that AI developers are not developing a cohesive list of best practices, to avoid silly exploits? [rhetorical question]