
Critical RCE Vulnerability in InvokeAI (CVE-2024-12029): Update Immediately to Protect Your System
A severe security vulnerability has been identified in InvokeAI, the popular open-source platform for AI image generation. This critical flaw, tracked as CVE-2024-12029, could allow an attacker to gain complete control over a user’s computer through a specially crafted file. If you use InvokeAI, it is essential to understand this threat and take immediate action to secure your system.
The vulnerability is a form of insecure deserialization, which can lead to Remote Code Execution (RCE). This is one of the most serious types of security flaws, as it effectively hands the keys to your system over to an attacker.
How the Vulnerability Works
In simple terms, the vulnerability lies in how affected versions of InvokeAI load model files. The model-install API endpoint (`/api/v2/models/install`) fetches a checkpoint from a user-supplied URL and loads it with `torch.load`, which relies on Python's `pickle` module to deserialize the data. The file's contents are not validated or restricted before deserialization, and the endpoint is reachable without authentication.
An attacker can exploit this by crafting a malicious `.ckpt`, `.pkl`, or other pickle-based file that contains hidden, executable code. When such a file is installed, whether pushed through the exposed API or loaded by an unsuspecting user, InvokeAI executes the malicious code without any warning. Think of it like opening a digital package that is booby-trapped to run a malicious program as soon as it's unboxed.
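To make the mechanism concrete, here is a minimal, deliberately harmless sketch of how a pickle payload smuggles a function call. Python's `pickle` lets any class define `__reduce__`, which tells the deserializer to call an arbitrary function on load; a real exploit would substitute something like `os.system` for the innocuous builtin used here.

```python
import pickle

class Payload:
    def __reduce__(self):
        # On load, pickle will call len("pwned") -- an attacker would
        # put os.system and a shell command here instead.
        return (len, ("pwned",))

malicious_bytes = pickle.dumps(Payload())

# The victim believes they are "just loading a model", but
# deserialization itself executes the attacker-chosen call:
result = pickle.loads(malicious_bytes)
print(result)  # -> 5, the return value of the injected len() call
```

The key point is that no method of `Payload` is ever invoked explicitly; `pickle.loads` alone triggers the call, which is why loading an untrusted pickle is equivalent to running untrusted code.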
The consequences of this are severe. A successful attack could grant a malicious actor full control over your system, allowing them to:
- Steal sensitive personal data, including passwords, financial information, and private files.
- Install ransomware, malware, or spyware.
- Use your computer as part of a botnet to attack other systems.
- Delete or corrupt your files.
Are You Affected?
This vulnerability affects InvokeAI versions 5.3.1 through 5.4.2.
- Affected Versions: InvokeAI 5.3.1 through 5.4.2.
- Patched Version: InvokeAI 5.4.3 and all subsequent releases.
If you are using an older version of the software, your system is at risk, especially if you download and use custom models, LoRAs, or workflows from online communities.
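If you are unsure which release you are running, a quick check from the same Python environment can tell you. This is a minimal stdlib sketch: the `predates` helper compares only the numeric version components and ignores pre-release suffixes, so treat it as a convenience, not an authoritative audit.

```python
from importlib.metadata import PackageNotFoundError, version

def predates(installed: str, patched: str) -> bool:
    """True if `installed` is older than `patched` (numeric parts only)."""
    def as_tuple(v: str):
        return tuple(int(p) for p in v.split(".")[:3])
    return as_tuple(installed) < as_tuple(patched)

try:
    # Reports whatever InvokeAI build the current environment exposes.
    print("InvokeAI", version("invokeai"), "is installed")
except PackageNotFoundError:
    print("InvokeAI is not installed in this Python environment")
```

Compare the reported version against the patched release named above to decide whether you need to upgrade.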
How to Protect Yourself: Actionable Security Steps
The developers behind InvokeAI have already released a patch to fix this critical issue. Protecting yourself requires immediate and straightforward action.
Update Your Software Immediately: This is the most crucial step. Update your InvokeAI installation to the patched release listed above, or any newer version. The updated release contains the security patch that resolves the insecure deserialization flaw. Do not delay this update.
Exercise Extreme Caution with Downloads: The primary way this vulnerability is exploited is through malicious files shared online. Be incredibly skeptical of custom models, workflows, and other assets downloaded from unverified sources. Only download files from the official InvokeAI repositories or from highly trusted, well-vetted community creators.
Verify Your Sources: Before loading any third-party file into InvokeAI, try to verify its origin. If you downloaded it from a forum or a social media link, understand that the risk is significantly higher. Stick to reputable sources that have a history of providing safe content.
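On the defensive side, loaders can refuse pickles that reference any global object at all, which is the idea behind allowlist-based loading (recent PyTorch offers `torch.load(..., weights_only=True)` in this spirit). The stdlib-only sketch below blocks every global lookup, so plain tensors-as-containers data loads but a code-smuggling pickle fails with an error instead of executing:

```python
import io
import pickle

class NoGlobalsUnpickler(pickle.Unpickler):
    """Unpickler that refuses to resolve any module-level object.

    Malicious pickles work by referencing a callable (os.system,
    subprocess.Popen, ...) as a global; denying all globals defuses
    that entire class of payload.
    """
    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"blocked global {module}.{name} in untrusted pickle"
        )

def safe_loads(data: bytes):
    # Plain containers (dicts, lists, strings, numbers) never need
    # find_class, so benign data still round-trips.
    return NoGlobalsUnpickler(io.BytesIO(data)).load()
```

A production loader would allowlist the handful of types a checkpoint legitimately needs rather than deny everything, but the principle is the same: never let an untrusted file choose which code runs.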
Staying vigilant is key to maintaining your digital security. While the world of AI art is built on community collaboration and sharing, it’s vital to remember that not everyone has good intentions. By updating your software promptly and practicing safe file-handling habits, you can continue to create amazing art with InvokeAI without putting your system at risk.
Source: https://www.offsec.com/blog/cve-2024-12029/