Many organizations’ ongoing enthusiasm for incorporating artificial intelligence (AI) is leaving them open to sophisticated and carefully planned cyber-attacks. Cybersecurity company Mandiant, a Google subsidiary, has issued an urgent warning for companies to be wary of downloading AI tools from unvetted websites.
“Threat actors haven’t wasted a moment capitalizing on the global fascination with Artificial Intelligence. As AI’s popularity surged over the past couple of years, cybercriminals quickly moved to exploit the widespread excitement. Their actions have fuelled a massive and rapidly expanding campaign centered on fraudulent websites masquerading as cutting-edge AI tools,” says Mandiant.
Since November, Mandiant’s threat defense team has been investigating a cybercrime campaign that weaponizes interest in AI tools, using fake “AI video generator” websites to distribute malware, including Python-based infostealers and several backdoors. Victims are typically directed to these fake websites by malicious social media ads masquerading as legitimate AI video generators such as Luma AI, Canva Dream Lab, and Kling AI, among others.
Mandiant Threat Defense says it has identified thousands of ads that have collectively reached millions of users across social media platforms such as Facebook and LinkedIn. The Google subsidiary also believes similar campaigns may be active on other platforms, as cybercriminals consistently target multiple channels to increase their chances of success.
An “unprecedented shift” in cybercriminal tactics
Mandiant’s warning follows a report from cybersecurity company Morphisec earlier this month. In what Morphisec calls an “unprecedented shift” in tactics, cybercriminals are rapidly weaponizing public enthusiasm for AI to deliver malware. Instead of relying on traditional phishing or cracked-software sites, they build convincing AI-themed platforms to attract users eager for free AI tools for video and image editing. Morphisec also reports that the downloaded malware is bundled with a newly identified infostealer, dubbed Noodlophile Stealer, designed to harvest browser credentials, cryptocurrency wallets, and other sensitive data. In many cases, it also deploys a remote access trojan such as XWorm to establish deeper control over the infected system.
“Unlike older malware campaigns disguised as pirated software or game cheats, this operation targets a newer, more trusting audience: creators and small businesses exploring AI for productivity,” says Morphisec.
While AI can offer organizations genuine benefits and cost savings, the risks associated with deploying it are growing rapidly, particularly for small-to-medium-sized enterprises (SMEs). The danger is especially acute when staff members are lured by social media ads into downloading legitimate-looking AI tools without first clearing them with their IT department.