While Silicon Valley is finding artificial intelligence (AI) a tough sell to businesses and consumers, cybercriminals worldwide have lost little time in adapting the technology to their trade.
The latest rogue AI offering is GhostGPT. According to Abnormal Security, GhostGPT follows hard on the heels of earlier illicit AI offerings: WormGPT, WolfGPT, and EscapeGPT. To test its capabilities, Abnormal Security researchers asked GhostGPT to create a DocuSign phishing email. The chatbot produced a convincing template with ease, demonstrating its ability to trick potential victims.
“GhostGPT is a chatbot specifically designed to cater to cybercriminals. … By eliminating the ethical and safety restrictions typically built into AI models, GhostGPT can provide direct, unfiltered answers to sensitive or harmful queries that would be blocked or flagged by traditional AI systems,” says Abnormal Security.
Cybercrime’s love affair with AI comes at a time when the ardor of businesses and end users for the new technology is rapidly cooling. Businesses cite rising deployment and security costs, while workers are increasingly skeptical that AI will ever live up to the marketing hype from the likes of Google and Microsoft.
Majority of workforce doubt AI will reduce their workload
According to a Job Seeker Insights Survey conducted by Resume Genius, a majority of the workforce, 69 percent, don’t believe AI will boost their job performance, and 62 percent doubt it will reduce their workload. Among other reservations, 33 percent see AI as a security risk.
Microsoft’s latest AI offering, Copilot, has just been introduced into Microsoft 365, following close behind the arrival of Apple Intelligence on iDevices. Neither was rolled out with the consent of end users. Google is also heavily promoting Gemini. But, so far, customer enthusiasm for the over-hyped add-ons that Silicon Valley is pushing remains low.
By contrast, threat actors have enthusiastically embraced the new technology to streamline and speed up their operations. Cybercriminals’ adoption of generative AI tools has driven a substantial rise in impersonation attacks, putting digital twins and face-swapping technologies within criminals’ reach. AI also underpins crimeware sold on a software-as-a-service (SaaS) model, routinely enabling relatively unskilled cybercriminals to produce code calculated to penetrate corporate defenses.
“Take, for example, the AI-powered tool Nebula by BerylliumSec, which is effectively an assistant for hackers, who can interact with the computer using natural language—making it possible for hackers to use it to do the heavy lifting of commands and execution to target vulnerable people and organizations,” says Abnormal Security.
While the legitimate market continues to harbor reservations about Silicon Valley’s generative AI offerings, cybercriminals are constantly finding new ways to deploy the technology.
“The overall popularity of GhostGPT, evidenced by thousands of views on online forums, underscores the growing interest among cybercriminals in leveraging AI tools for more efficient cybercrime,” says Abnormal Security.