AI-driven cyber-attacks are becoming a widespread threat, with 87% of security professionals reporting incidents in the past year, according to SoSafe’s latest cybercrime trends report. Despite the growing concern, only 26% of security experts express high confidence in their ability to detect such attacks. The World Economic Forum noted a 223% rise in deepfake-related tools on dark web forums between early 2023 and 2024, further fueling concerns. Fully 91% of experts expect AI-driven attacks to surge over the next three years, and nearly all respondents acknowledge the urgency of improving detection capabilities.
Companies are largely ignorant of the looming threat of artificial intelligence (AI)-driven identity theft, despite the fact that 93 percent of companies surveyed suffered two or more identity-related breaches in 2024. According to leading identity management company CyberArk Software, executives and employees alike are overconfident in their ability to spot ongoing identity theft and subsequent cyber breaches, with over 75 percent of respondents to a recent survey saying they are confident their employees can identify deepfake videos or audio of their leaders. “Employees are [also] largely confident in their ability to identify a deepfake video or audio of the leaders in their organization. Whether we chalk it up to the illusion of control, planning fallacy, or just plain human optimism, this level of systemic confidence is misguided,” warns CyberArk, following a survey of 4,000 US-based employees.
DISA Global Solutions, Inc., a provider of employment screening services, confirmed a data breach impacting over 3.3 million individuals. The breach, which occurred between February 9 and April 22, 2024, gave an unauthorized third party access to names, Social Security numbers, driver’s license details, financial account information, and other sensitive data. While forensic investigators could not confirm the exact extent of the stolen data, the exposure puts affected individuals at risk of identity theft.
Most organizations have no clear idea of the value of the data they hold about their business and their customers. According to technology research and consulting firm Gartner, 30 percent of chief data and analytics officers (CDAOs) say their top challenge is an inability to measure the impact of data, analytics, and AI on business outcomes. Gartner also reports that only 22 percent of organizations surveyed have defined, tracked, and communicated business impact metrics for the bulk of their data and analytics (D&A) use cases. “There is a massive value vibe around data, where many organizations talk about the value of data, desire to be data-driven, etc., but there are few who can substantiate it,” said Michael Gabbard, senior director analyst at Gartner.
On January 31, Texas became the first US state to ban the Chinese-owned generative artificial intelligence (AI) application DeepSeek on state-owned devices and networks. New York swiftly followed suit on February 10, with Virginia imposing its own ban on February 11. The Texas state governor’s office stated: “Texas will not allow the Chinese Communist Party to infiltrate our state’s critical infrastructure through data-harvesting AI and social media apps. State agencies and employees responsible for handling critical infrastructure, intellectual property, and personal information must be protected from malicious espionage operations by the Chinese Communist Party. Texas will continue to protect and defend our state from hostile foreign actors.”
California-based cybersecurity goliath Palo Alto Networks has issued a bullish revenue forecast based on perceived rising global demand for artificial intelligence (AI)-driven security products. “In Q2 [2025], our strong business performance was fuelled by customers adopting technology driven by the imperative of AI, including cloud investment and infrastructure modernization," said CEO Nikesh Arora. “Our growth across regions and demand for our platforms demonstrates our customers' confidence in our approach. It reaffirms our faith in our 2030 plans and our $15 billion next-generation technology annual recurring revenue goal.”
Cybercriminals have been quick to see nefarious possibilities in search engine giant Google’s new Gemini 2.0 AI assistant. According to Google’s own findings, nation-state-backed threat actors are already leveraging Gemini to accelerate their criminal campaigns. The actors are using Gemini 2.0 for “researching potential infrastructure and free hosting providers, reconnaissance on target organizations, research into vulnerabilities, payload development, and assistance with malicious scripting and evasion techniques,” says Google.
While Silicon Valley is finding artificial intelligence (AI) a tough sell to businesses and consumers, cybercriminals worldwide have lost little time in adapting the technology to cybercrime. The latest rogue AI offering is GhostGPT. According to Abnormal Security, GhostGPT follows hard on the heels of earlier illicit AI offerings: WormGPT, WolfGPT, and EscapeGPT. To test its capabilities, Abnormal Security researchers asked GhostGPT to create a DocuSign phishing email. The chatbot produced a convincing template with ease, demonstrating its ability to trick potential victims.
The World Economic Forum (WEF) Global Cybersecurity Outlook 2025 reports that several compounding factors are creating an increasingly complex and risky business environment. These include the growing complexity of supply chains, rising geopolitical tensions, cybercriminals’ increasing use of artificial intelligence (AI), and the entry of traditional organized crime groups into cybercrime. Ransomware remains the top organizational cyber risk year on year, with 45 percent of respondents ranking it as a top concern in this year’s survey. Over half of the large organizations surveyed worldwide, 54 percent, identified supply chain challenges as the greatest barrier to achieving cyber resilience, citing the increasing complexity of supply chains coupled with a lack of visibility and oversight into the security levels of suppliers.
Gartner issued a stern warning this week to organizations across all sectors that the cost of introducing artificial intelligence (AI) to the workplace could easily balloon by 500 to 1,000 percent. Speaking at Gartner's flagship Symposium event in Australia, VP analyst Mary Mesaglio said: “Factors contributing to these inflated costs include vendor price increases and neglecting the expense of utilizing cloud-based resources.”
The 'ShinyHunters' threat actor group posted data from a Ticketmaster breach, potentially belonging to 560 million users, and demanded $500,000 in exchange for the data. Analysts at Vx-Underground examined a sample of the Ticketmaster data and determined that it was authentic, containing entries dating back to 2011.
The increasing use of artificial intelligence (AI) tools by staff ahead of IT departments’ involvement has created the growing problem of ‘shadow AI’. “Similar to the early days of cloud adoption, workers are using AI tools before IT departments formally buy them. The result is ‘shadow AI,’ employee usage of AI tools through personal accounts that are not sanctioned by - or even known to - the company,” says a report from Silicon Valley-based data protection company Cyberhaven, How Employees are Leading the Charge in AI Adoption and Putting Company Data at Risk.