Several major companies have unknowingly incorporated a software package invented by generative AI into their source code. The fraudulent package, made real by a security researcher as part of an experiment, has been downloaded thousands of times by developers following incorrect AI recommendations. Had it contained malware rather than harmless code, the consequences could have been severe.
Alibaba is among the businesses misled by AI into referencing the fake package huggingface-cli in the installation instructions for its GraphTranslator project. Although Hugging Face ships a legitimate command-line tool of the same name, the package named in Alibaba's instructions was a figment of AI imagination, turned real by the security researcher.
The researcher created the fake huggingface-cli to investigate how persistent AI-hallucinated package names are and whether they could be used to distribute malicious code. By posing the same questions to various AI models, the researcher observed that imaginary package names tend to recur across models, opening the door for those names to be exploited in malware distribution campaigns; a sketch of such a probing loop follows.
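To make the experiment concrete, here is a minimal, hypothetical sketch of that probing loop in Python. It is a reconstruction, not the researcher's actual code: the model callables stand in for whatever chat-completion API each AI exposes, and the stubs simply replay the kind of answer the researcher reportedly received.

```python
"""Sketch: ask several AI models the same question and count how often
each suggested package name recurs across their answers. The models here
are stubs; real use would wrap actual chat-completion APIs."""

import re
from collections import Counter
from typing import Callable, Iterable

# Matches package names in "pip install <name>" lines of a model answer.
PIP_INSTALL = re.compile(r"pip install ([A-Za-z0-9_.-]+)")

def extract_packages(answer: str) -> set[str]:
    """Pull the package names a model recommends installing."""
    return set(PIP_INSTALL.findall(answer))

def repeated_suggestions(question: str,
                         models: Iterable[Callable[[str], str]]) -> Counter:
    """Tally how many models suggest each package name for one question."""
    counts: Counter = Counter()
    for ask_model in models:
        counts.update(extract_packages(ask_model(question)))
    return counts

if __name__ == "__main__":
    # Stub "models" that hallucinate the same nonexistent package,
    # mirroring the cross-model repetition the researcher observed.
    stub_a = lambda q: "First run: pip install huggingface-cli"
    stub_b = lambda q: "Install it with pip install huggingface-cli"
    print(repeated_suggestions("How do I upload a model to Hugging Face?",
                               [stub_a, stub_b]))
```

A name that keeps surfacing across models but resolves to nothing on the package index is exactly the kind of candidate an attacker could register first.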
Although no malicious use of this technique has been reported so far, the ease with which fictitious AI-generated package names can be turned into real packages could pose a serious threat to software ecosystems. Developers should therefore verify AI-recommended packages, for instance by confirming they actually exist on the index and have a plausible history, before adding them to a project; one such check is sketched after this paragraph.
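As one illustration of that caution, the sketch below checks whether a package name actually resolves on PyPI before anything is installed. It uses PyPI's public JSON metadata endpoint; the second test name is deliberately bogus, and treating a 404 as "possibly hallucinated" is an illustrative heuristic, not a complete defense.

```python
"""Sketch: pre-install sanity check for an AI-suggested package name,
using PyPI's public JSON API (https://pypi.org/pypi/<name>/json)."""

import json
import urllib.error
import urllib.request

PYPI_URL = "https://pypi.org/pypi/{name}/json"  # public PyPI metadata endpoint

def pypi_metadata(name: str) -> dict | None:
    """Return PyPI metadata for `name`, or None if no such package exists."""
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=10) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:  # nothing registered under this name
            return None
        raise

if __name__ == "__main__":
    # "requests" is a well-known real package; the second name is made up.
    for name in ("requests", "example-hallucinated-pkg-xyz"):
        meta = pypi_metadata(name)
        if meta is None:
            print(f"{name}: not on PyPI -- possibly hallucinated, investigate first")
        else:
            releases = len(meta.get("releases", {}))
            print(f"{name}: exists with {releases} releases")
```

Existence alone proves little on its own: as the experiment showed, an attacker can register a hallucinated name first, so release history, maintainer identity, and download patterns deserve scrutiny as well.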