Marketers promote AI-assisted developer tools as workhorses that are essential for today’s software engineer. Developer platform GitLab, for instance, claims its Duo chatbot can “instantly generate a to-do list” that eliminates the burden of “wading through weeks of commits.” What these companies don’t say is that these tools are, by temperament if not default, easily tricked by malicious actors into performing hostile actions against their users.
Researchers from security firm Legit on Thursday demonstrated an attack that induced Duo into inserting malicious code into a script it had been instructed to write. The attack could also leak private code and confidential issue data, such as zero-day vulnerability details. All that’s required is for the user to instruct the chatbot to interact with a merge request or similar content from an outside source.
Before my actual comment, I just want to humorously remark about the group which found and documented this vulnerability, Legit Security. With a name like that, I would inadvertently hang up the phone if I got a call from them haha:
"Hi! This is your SBOM vendor calling. We’re Legit.
Me: [hangs up, thinking it’s a scam]
Anyway…
In a lot of ways, this is the classic “ignore all prior instructions” type of exploit, but with more steps and is harder to scrub for. Which makes it so troubling that GitLab’s AI isn’t doing anything akin to data separation when taking instructions vs referencing other data sources. What LegitSecurity revealed really shouldn’t have been a surprise to GitLab’s developers.
IMO, this class of exploit really shouldn’t exist, in the same way that SQL injection attacks shouldn’t be happening in 2025 due to a lack of parameterized queries. Am I to believe that AI developers are not developing a cohesive list of best practices, to avoid silly exploits? [rhetorical question]