Companies using public artificial intelligence (AI) services such as ChatGPT, from Microsoft-backed OpenAI, are at growing risk of exposing confidential data to cybercriminals. According to cybersecurity firm Group-IB’s Hi-Tech Crime Trends Report 2023/2024, more than 130,000 unique hosts with access to OpenAI were compromised between June and October 2023, a 36 percent rise over the first five months of the year.
Companies currently take one of two main approaches to integrating AI into their workflows: using public AI services, or building bespoke proprietary AI systems on top of pre-trained, openly available models. The second approach is by far the safer of the two, since it lets an organization control data exchange with the AI system at every stage and so preserve confidentiality. It is, however, far more expensive and labor-intensive than relying on publicly available AI services, which offer weaker guarantees.
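As a rough illustration of the trade-off, the Python sketch below contrasts the two approaches. The first function calls OpenAI's public chat completions API, so the prompt leaves the corporate network; the second posts to a hypothetical in-house inference server wrapping a pre-trained model (the internal URL and response format are placeholder assumptions), so data stays under the organization's control.

```python
import os
import requests

# Approach 1: public AI service. The prompt, and anything an employee
# pastes into it, leaves the network and is processed on OpenAI's servers.
def ask_public_service(prompt: str) -> str:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Approach 2: bespoke system built on a pre-trained model hosted in-house.
# The URL below is a hypothetical stand-in for an internal inference server;
# prompts and responses never leave the corporate network.
def ask_internal_model(prompt: str) -> str:
    resp = requests.post(
        "https://llm.internal.example.com/v1/generate",  # hypothetical endpoint
        json={"prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]
```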
“When using AI systems, users often enter all sorts of data, including confidential information such as internal source code, financial information, and trade secrets. Users sometimes even enter data intended for authentication in internal systems. This creates risks, especially if the processing servers become targets for threat actors, thereby creating new attack vectors,” warns the report.
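The report does not prescribe a fix, but one common mitigation is to strip obvious secrets from prompts before they reach an external service. The Python sketch below is a minimal, illustrative redaction filter; the patterns are assumptions covering a few well-known credential formats and are nowhere near exhaustive.

```python
import re

# A few common secret formats; illustrative only, not a complete list.
SECRET_PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED_API_KEY]"),          # OpenAI-style keys
    (re.compile(r"(?i)password\s*[:=]\s*\S+"), "password=[REDACTED]"),   # inline passwords
    (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?"
                r"-----END [A-Z ]*PRIVATE KEY-----"),
     "[REDACTED_PRIVATE_KEY]"),                                          # PEM private keys
]

def redact(prompt: str) -> str:
    """Strip obvious credentials from a prompt before sending it to an external AI service."""
    for pattern, replacement in SECRET_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(redact("Debug this: password = hunter2 and key sk-abcdefghijklmnopqrstuv"))
# -> Debug this: password=[REDACTED] and key [REDACTED_API_KEY]
```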
Over 100,000 ChatGPT credentials for sale on the dark web
Group-IB has uncovered more than 100,000 ChatGPT credentials, stolen from compromised devices, for sale on dark web marketplaces. The sharp increase in ChatGPT credentials being offered by cybercriminal gangs is a direct result of an overall rise in the number of hosts infected with information stealers, whose logs are then put up for sale on criminal marketplaces.
Cybercriminals are increasingly targeting devices with access to public AI systems, which gives them the logs detailing employees' communication history with those systems. Threat actors can then trawl these logs for confidential information to use in commercial and industrial espionage, and stolen authentication data enables even more damaging attacks. Gangs can also harvest application source code shared in these conversations to identify vulnerabilities they can later exploit.
Organizations hoping to boost efficiency with public AI services may want to reconsider policies that encourage staff to use tools such as ChatGPT. Those already on that course must now take extra care to secure their systems while educating staff about the inherent dangers.