Tag: generative ai

Deepfake Phishing Targets Trump’s Chief of Staff – May 30th

In today's daily roundup: Deepfake Phishing Targets Trump’s Chief of Staff, ConnectWise Breached by Suspected Nation-State Actor, and Unbound Security Raises $4M Seed Funding.

1 Min Read

Companies must identify the value of their data

Most organizations have no clear idea of the value of the data they hold on themselves and their customers. According to technology research and consulting firm Gartner, 30 percent of chief data and analytics officers (CDAOs) say that their top challenge is the inability to measure the impact of data, analytics, and AI on business outcomes. Gartner also reports that only 22 percent of organizations surveyed have defined, tracked, and communicated business impact metrics for the bulk of their data and analytics (D&A) use cases. “There is a massive value vibe around data, where many organizations talk about the value of data, desire to be data-driven, etc., but there are few who can substantiate it,” said Michael Gabbard, senior director analyst at Gartner.

3 Min Read

Toxic warning for China’s DeepSeek AI app

On January 31, Texas became the first US state to ban the Chinese-owned generative artificial intelligence (AI) application DeepSeek on state-owned devices and networks. New York swiftly followed suit on February 10, with Virginia imposing a ban on February 11. The Texas state governor’s office stated: “Texas will not allow the Chinese Communist Party to infiltrate our state’s critical infrastructure through data-harvesting AI and social media apps. State agencies and employees responsible for handling critical infrastructure, intellectual property, and personal information must be protected from malicious espionage operations by the Chinese Communist Party. Texas will continue to protect and defend our state from hostile foreign actors.”

4 Min Read

2025 forecast to be boom year for cybersecurity

California-based cybersecurity goliath Palo Alto Networks has issued a bullish revenue forecast based on a perceived rising global demand for artificial intelligence (AI)-driven security products. “In Q2 [2025], our strong business performance was fuelled by customers adopting technology driven by the imperative of AI, including cloud investment and infrastructure modernization," said CEO Nikesh Arora. “Our growth across regions and demand for our platforms demonstrates our customers' confidence in our approach. It reaffirms our faith in our 2030 plans and our $15 billion next-generation technology annual recurring revenue goal.”

3 Min Read

Cybercriminals weaponize Google AI assistant

Cybercriminals have been quick to see nefarious possibilities in search engine giant Google’s new Gemini 2.0 AI assistant. According to Google’s own findings, nation-state-backed threat actors are already leveraging Gemini to accelerate their criminal campaigns. The actors are using Gemini 2.0 for “researching potential infrastructure and free hosting providers, reconnaissance on target organizations, research into vulnerabilities, payload development, and assistance with malicious scripting and evasion techniques,” says Google.

3 Min Read

WEF predicts perfect storm for cybercrime

The World Economic Forum (WEF) Global Cybersecurity Outlook 2025 reports that several compounding factors are creating an increasingly complex and risky business environment. These include the growing complexity of supply chains, rising geopolitical tensions, cybercriminals’ increasing use of artificial intelligence (AI), and the entry of traditional organized crime groups into cybercrime. Ransomware remains the top organizational cyber risk year on year, with 45 percent of respondents ranking it as a top concern in this year’s survey. Over half of the large organizations surveyed worldwide, 54 percent, identified supply chain challenges as the greatest barrier to achieving cyber resilience, citing the increasing complexity of supply chains coupled with a lack of visibility and oversight into the security levels of suppliers.

3 Min Read

Security minefield ahead for GenAI users

In an exclusive interview with Cyber Intelligence, Gadi Bashvitz, CEO of cybersecurity testing firm Bright Security, warns of the security challenges facing organizations in the wake of the widespread adoption of GenAI.

Cyber Intelligence: Are there any specific dangers of which companies using GenAI to generate new code should be particularly aware?

Gadi Bashvitz: There are multiple considerations here. On one hand, any solution developed leveraging LLMs is prone to LLM-specific vulnerabilities such as Insecure Output Handling and Broken Access Control, and it is critical to make sure organizations are aware of and can detect such vulnerabilities before releasing LLM-based solutions.
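The Insecure Output Handling risk Bashvitz mentions comes down to applications trusting whatever text an LLM returns. The sketch below is a minimal, hypothetical Python illustration of that pattern, not Bright Security’s tooling or any specific product; the function names and the canned model response are invented for demonstration only.

```python
# Minimal sketch of "Insecure Output Handling": treating LLM output as trusted
# content. All names below are hypothetical placeholders for illustration.

import html


def fake_llm_response(prompt: str) -> str:
    # Stand-in for a real model call; attackers can often influence this text
    # via prompt injection or poisoned retrieval data.
    return '<img src=x onerror="alert(document.cookie)">Here is your summary.'


def render_insecure(prompt: str) -> str:
    # Vulnerable pattern: model output is dropped straight into an HTML page,
    # so any markup or script the model emits runs in the user's browser (XSS).
    return f"<div class='answer'>{fake_llm_response(prompt)}</div>"


def render_hardened(prompt: str) -> str:
    # Safer pattern: treat model output like any other untrusted input and
    # escape it before it reaches the DOM; the same principle applies before
    # passing LLM output to shells, SQL queries, or other interpreters.
    return f"<div class='answer'>{html.escape(fake_llm_response(prompt))}</div>"


if __name__ == "__main__":
    print("Insecure:", render_insecure("summarise this ticket"))
    print("Hardened:", render_hardened("summarise this ticket"))
```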

5 Min Read

UK Takes Down Russian Money Laundering Ring – December 5th

The FBI has warned the public about rising fraud schemes using generative artificial intelligence (AI). The Bureau observed that GenAI can be used by hackers to create fraudulent social media accounts, generate fake websites to entice cryptocurrency investors, and build AI chatbots to lure victims into clicking malicious links.

1 Min Read

Generative AI – the current state of play

In an exclusive interview with Cyber Intelligence, Mike Finley, the Co-Founder and CTO of AnswerRocket, a business intelligence platform that deals with big data and AI agents, explains what generative AI can do for companies right now.

Mike Finley: AI is changing faster than people are capable of understanding, so the general misunderstanding of what AI can do is going to be a lasting problem. The fact is that key scientists believe AI is now capable of improving itself, meaning we are at the start of a runaway path forward. At AnswerRocket, our basic DNA is artificial intelligence (AI) to enable business intelligence (BI). This obviously took a new direction with the widespread introduction of generative AI, but our basic approach remains the same.

6 Min Read

Cost of AI could rise tenfold – warns Gartner

Gartner issued a stern warning this week to organizations across all sectors that the cost of introducing artificial intelligence (AI) to the workplace could easily balloon by 500 to 1,000 percent. Speaking at Gartner's flagship Symposium event in Australia, VP analyst Mary Mesaglio said: “Factors contributing to these inflated costs include vendor price increases and neglecting the expense of utilizing cloud-based resources.”

4 Min Read

‘Shadow AI’ is putting companies at risk

The increasing use of artificial intelligence (AI) tools by staff ahead of their IT departments’ involvement has resulted in the growing problem of ‘shadow AI’. “Similar to the early days of cloud adoption, workers are using AI tools before IT departments formally buy them. The result is ‘shadow AI,’ employee usage of AI tools through personal accounts that are not sanctioned by - or even known to - the company,” says Silicon Valley-based data protection company Cyberhaven’s report, How Employees are Leading the Charge in AI Adoption and Putting Company Data at Risk.

4 Min Read

91% of Orgs Report Use of Gen AI for Cybersecurity – May 1st

Splunk has reported that 91% of organizations use generative AI for specific cybersecurity purposes. The report, “State of Security 2024: The Race to Harness AI,” also disclosed that 93% of security leaders said public GenAI was in use across their respective organizations, among other insightful statistics on GenAI's impact on cybersecurity.

1 Min Read