GitLab’s AI assistant, Duo, has been found to have vulnerabilities that could potentially expose developers to code theft and other security risks. Researchers discovered that Duo’s lack of input scrutiny allowed attackers to manipulate its responses, leading to serious implications for users of the platform.
Key Takeaways
- GitLab’s AI assistant Duo has vulnerabilities that can lead to code theft.
- Attackers can inject malicious prompts through comments, code, and merge requests.
- HTML injection risks were identified, allowing sensitive data to be leaked.
- GitLab has patched some vulnerabilities but others remain unaddressed.
Overview of Duo’s Functionality
Duo is an AI-powered tool integrated into GitLab, designed to assist developers by analyzing code, suggesting changes, and automating various coding tasks. It functions similarly to GitHub’s Copilot, providing a seamless experience for users within the GitLab ecosystem. However, its deep integration across the platform has raised significant security concerns.
Vulnerabilities Discovered
Researchers from Legit Security identified an indirect prompt injection flaw in Duo that could allow attackers to:
- Steal source code from repositories.
- Direct users to malicious websites.
- Inject malware into code suggestions.
The vulnerability stems from Duo’s inability to critically analyze the input it receives, treating all content—whether it be source code, comments, or merge request descriptions—equally. This lack of discrimination means that malicious prompts can be hidden in various components of a project, making it easy for attackers to exploit the system.
The Risks of HTML Injection
One of the most alarming aspects of the vulnerabilities is the potential for HTML injection. Duo formats its responses in Markdown and renders them in real-time, which means that if an attacker successfully injects HTML code, it can be executed by the user’s browser without proper sanitization. This could lead to:
- Unauthorized access to sensitive data.
- Exfiltration of private source code.
- Manipulation of code suggestions to include malicious content.
For instance, researchers demonstrated that they could craft a prompt that, when processed by Duo, would result in the browser executing malicious HTML, thereby leaking sensitive information from private projects.
GitLab’s Response
In response to these findings, GitLab has released a patch to address the HTML rendering vulnerability. However, the company has not fully acknowledged the broader implications of the prompt injection risks, stating that they do not directly lead to unauthorized access or code execution. This stance has drawn criticism from security experts who argue that the potential for data leakage is significant.
Conclusion
The vulnerabilities in GitLab’s Duo highlight the growing risks associated with AI-powered development tools. As these technologies become more integrated into the software development lifecycle, it is crucial for companies to prioritize security and ensure that their tools can effectively manage and mitigate potential threats. Developers using GitLab should remain vigilant and consider the implications of these vulnerabilities on their projects.
Sources
- Vulnerability in GitLab assistant enabled code theft, Techzine Europe.
- GitLab’s AI Assistant Opened Devs to Code Theft, Dark Reading.
- Prompt injection flaws in GitLab Duo highlights risks in AI assistants, CSO Online.


