The increasing use of artificial intelligence (AI) tools by staff ahead of their IT departments’ involvement has given rise to the growing problem of ‘shadow AI’.
“Similar to the early days of cloud adoption, workers are using AI tools before IT departments formally buy them. The result is ‘shadow AI,’ employee usage of AI tools through personal accounts that are not sanctioned by – or even known to – the company,” says a report from Silicon Valley-based data protection company Cyberhaven, How Employees are Leading the Charge in AI Adoption and Putting Company Data at Risk.
Cyberhaven reports that, through the use of unsanctioned AI tools, companies are inadvertently exposing rising volumes of sensitive corporate data, putting it within reach of competitors as well as cybercriminals. In March 2024, 27.4 percent of the corporate data employees copied into AI tools was sensitive, up from 10.7 percent a year earlier. This data frequently includes valuable intellectual property.
R&D data can fall into the wrong hands
“Research and development material is 10.8% of sensitive data flowing to AI. This category includes results from R&D tests that in the wrong hands would provide insights to competitors working on competing products,” says Cyberhaven.
Although OpenAI launched an enterprise version of ChatGPT in August 2023 with enhanced security safeguards, 73.8 percent of workplace ChatGPT accounts are non-corporate ones that lack the security and privacy controls of ChatGPT Enterprise. The percentage is even higher for Google’s Gemini (94.4 percent) and Bard (95.9 percent). ChatGPT, launched in November 2022, is by far the most popular AI tool used by staff, and AI products from OpenAI, Google and Microsoft currently account for 96 percent of AI usage in the workplace.
Techies use unsanctioned AI far more than anyone else
Unsanctioned use of AI tools is not, however, equally widespread across all sectors. Staff in the technology sector use AI far more than their counterparts in other fields: in March 2024, 23.6 percent of technology sector staff put corporate data into an AI tool. By contrast, the figure was only 5.2 percent for employees at media and entertainment firms, 4.7 percent at financial firms, and 2.8 percent in pharmaceuticals and life sciences. Retail and manufacturing lag furthest behind, with just 0.6 percent of staff at manufacturing firms and 0.5 percent of retail employees copying company data into AI tools in March 2024.
The US National Security Agency (NSA) recently released a Cybersecurity Information Sheet (CSI), “Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems,” which addresses the security weaknesses inherent in today’s AI tools and sets out guidelines for preventing malicious actors from exploiting still-maturing AI software.