Deepfake Phishing Targets Trump’s Chief of Staff – May 30th
In today’s daily roundup – Deepfake Phishing Targets Trump’s Chief of Staff, ConnectWise Breached by Suspected Nation-State Actor, and Unbound Security Raises $4M Seed Funding.
Most organizations have no clear idea of the value of the data they hold about their own operations and their customers. According to technology research and consulting firm Gartner, 30 percent of chief data and analytics officers (CDAOs) say their top challenge is an inability to measure the impact of data, analytics, and AI on business outcomes. Gartner also reports that only 22 percent of organizations surveyed have defined, tracked, and communicated business impact metrics for the bulk of their data and analytics (D&A) use cases.
“There is a massive value vibe around data, where many organizations talk about the value of data, desire to be data-driven, etc., but there are few who can substantiate it,” said Michael Gabbard, senior director analyst at Gartner.
On January 31, Texas became the first US state to ban the Chinese-owned generative artificial intelligence (AI) application DeepSeek on state-owned devices and networks. New York swiftly followed suit on February 10, with Virginia imposing its own ban on February 11.
The Texas state governor’s office stated: “Texas will not allow the Chinese Communist Party to infiltrate our state’s critical infrastructure through data-harvesting AI and social media apps. State agencies and employees responsible for handling critical infrastructure, intellectual property, and personal information must be protected from malicious espionage operations by the Chinese Communist Party. Texas will continue to protect and defend our state from hostile foreign actors.”
California-based cybersecurity goliath Palo Alto Networks has issued a bullish revenue forecast based on a perceived rising global demand for artificial intelligence (AI)-driven security products.
“In Q2 [2025], our strong business performance was fuelled by customers adopting technology driven by the imperative of AI, including cloud investment and infrastructure modernization,” said CEO Nikesh Arora. “Our growth across regions and demand for our platforms demonstrates our customers’ confidence in our approach. It reaffirms our faith in our 2030 plans and our $15 billion next-generation technology annual recurring revenue goal.”
Cybercriminals have been quick to see nefarious possibilities in search engine giant Google’s new Gemini 2.0 AI assistant. According to Google’s own findings, nation-state-backed threat actors are already leveraging Gemini to accelerate their criminal campaigns.
The actors are using Gemini 2.0 for “researching potential infrastructure and free hosting providers, reconnaissance on target organizations, research into vulnerabilities, payload development, and assistance with malicious scripting and evasion techniques,” says Google.
The World Economic Forum (WEF) Global Cybersecurity Outlook 2025 reports that several compounding factors are creating an increasingly complex and risky business environment. These include the growing complexity of supply chains, rising geopolitical tensions, cybercriminals’ increasing use of artificial intelligence (AI), and the entry of traditional organized crime groups into cybercrime.
Ransomware remains the top organizational cyber risk year on year, with 45 percent of respondents ranking it as a top concern in this year’s survey. Over half of the large organizations surveyed worldwide, 54 percent, identified supply chain risk as the greatest barrier to achieving cyber resilience, citing the increasing complexity of supply chains coupled with a lack of visibility and oversight into the security levels of suppliers.
In an exclusive interview with Cyber Intelligence, Gadi Bashvitz, CEO of cybersecurity testing firm Bright Security, warns of the security challenges facing organizations in the wake of the widespread adoption of GenAI.
Cyber Intelligence: Are there any specific dangers of which companies using GenAI to generate new code should be particularly aware?
Gadi Bashvitz: There are multiple considerations here. On one hand, any solution developed leveraging LLMs is prone to LLM-specific vulnerabilities such as Insecure Output Handling and Broken Access Control, and it is critical to make sure organizations are aware of such vulnerabilities and can detect them before releasing LLM-based solutions.
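Insecure Output Handling, one of the vulnerability classes Bashvitz names, arises when an application treats model output as trusted and passes it straight to a downstream interpreter such as a browser. A minimal illustrative sketch (the function names and payload are hypothetical, not drawn from Bright Security):

```python
import html

def render_unsafe(llm_output: str) -> str:
    # VULNERABLE: model output is interpolated directly into the page,
    # so a prompt-injected response containing markup executes in the
    # victim's browser (cross-site scripting via the LLM).
    return f"<div class='answer'>{llm_output}</div>"

def render_safe(llm_output: str) -> str:
    # Mitigation: treat model output like any other untrusted input and
    # escape it before it reaches an interpreter (browser, shell, SQL).
    return f"<div class='answer'>{html.escape(llm_output)}</div>"

malicious = '<img src=x onerror="steal(document.cookie)">'
print(render_unsafe(malicious))  # payload survives intact
print(render_safe(malicious))    # markup is escaped and inert
```

The same pattern applies wherever LLM output crosses a trust boundary: escape or validate before rendering, executing, or querying with it.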
The FBI has warned the public about rising fraud schemes using generative artificial intelligence. The bureau has observed that criminals can use GenAI to create fraudulent social media accounts, generate fake websites to entice cryptocurrency investors, and build AI chatbots that lure victims into clicking malicious links.
In an exclusive interview with Cyber Intelligence, Mike Finley, the Co-Founder and CTO of AnswerRocket, a business intelligence platform that deals with big data and AI agents, explains what generative AI can do for companies right now.
AI is changing faster than people are capable of understanding. So the general misunderstanding of what AI can do is going to be a lasting problem. The fact is that key scientists believe AI is now capable of improving itself, meaning we are at the start of a runaway path forward. At AnswerRocket, our basic DNA is artificial intelligence (AI) to enable business intelligence (BI). This obviously took a new direction with the widespread introduction of generative AI, but our basic approach remains the same.
Gartner issued a stern warning this week to organizations across all sectors that the cost of introducing artificial intelligence (AI) to the workplace could easily balloon by 500 to 1,000 percent.
Speaking at Gartner’s flagship Symposium event in Australia, VP analyst Mary Mesaglio said: “Factors contributing to these inflated costs include vendor price increases and neglecting the expense of utilizing cloud-based resources.”
The increasing use of artificial intelligence (AI) tools by staff ahead of IT departments’ involvement has given rise to the growing problem of ‘shadow AI’.
“Similar to the early days of cloud adoption, workers are using AI tools before IT departments formally buy them. The result is ‘shadow AI’: employee usage of AI tools through personal accounts that are not sanctioned by – or even known to – the company,” says a report from Silicon Valley-based data protection company Cyberhaven, How Employees are Leading the Charge in AI Adoption and Putting Company Data at Risk.
Splunk reports that 91 percent of organizations are using generative AI for specific cybersecurity purposes.
The report, “State of Security 2024: The Race to Harness AI,” also disclosed that 93 percent of security leaders said public GenAI was in use across their respective organizations, among other statistics on GenAI’s impact on cybersecurity.
Apple has joined Google and Microsoft in launching its own generative artificial intelligence (AI) offering, OpenELM. Apple claims that OpenELM, “a state-of-the-art open language model,” will offer users more accurate and less misleading results than its widely criticized competitors.
“OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy,” says Apple.
Apple claims that OpenELM exhibits a 2.36 percent improvement in accuracy compared to the earlier open language model OLMo, while requiring half as many pre-training tokens. So far, Apple has delayed offering modern AI capabilities on its devices, but the next version of its operating systems is expected to include some unique AI features. The launch of iOS 18 is scheduled for June 10.
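The layer-wise scaling Apple describes means each transformer layer gets its own width settings rather than one uniform configuration: earlier layers receive fewer attention heads and a smaller feed-forward multiplier, later layers more, redistributing the parameter budget across depth. A rough sketch of the idea (the interpolation bounds below are illustrative, not Apple’s published values):

```python
def layerwise_config(num_layers: int,
                     min_heads: int = 4, max_heads: int = 16,
                     min_ffn_mult: float = 1.0, max_ffn_mult: float = 4.0):
    # Linearly interpolate per-layer width parameters from the first layer
    # to the last, instead of using one uniform setting for every layer.
    configs = []
    for i in range(num_layers):
        t = i / max(num_layers - 1, 1)  # 0.0 at first layer, 1.0 at last
        heads = round(min_heads + t * (max_heads - min_heads))
        ffn_mult = round(min_ffn_mult + t * (max_ffn_mult - min_ffn_mult), 2)
        configs.append({"layer": i, "heads": heads, "ffn_mult": ffn_mult})
    return configs

for cfg in layerwise_config(num_layers=8):
    print(cfg)
```

Under this scheme the first layer here would use 4 heads and a 1.0x feed-forward width, the last 16 heads and 4.0x, with intermediate layers interpolated between them.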
Five years after it was first proposed, European Union lawmakers have approved the bloc’s artificial intelligence law, a world first in AI regulation.
Centered around consumer safety, the EU’s AI Act takes a “risk-based approach” to AI-powered products.
Researchers from the Israel Institute of Technology, in collaboration with Intuit and Cornell Tech, developed the “Morris II Worm” to automatically leverage GenAI systems to spread malware and steal data.
The researchers built the worm to demonstrate the dangers of interconnected GenAI systems: its “0-click propagation” delivers payloads without any user interaction, making attacks easier for threat actors.
The reaction of businesses to the introduction of generative AI (GenAI) in the year since the launch of Microsoft-backed ChatGPT is one of increasing suspicion and disappointment.
Over one in four organizations have banned the use of GenAI outright. The majority of companies are now also refusing to trust a technology that has already gained a reputation for making errors and even entirely fabricating information, a failing that is referred to as “hallucinating”.
According to Cisco’s newly-released 2024 Data Privacy Benchmark Study, 68 percent of organizations mistrust GenAI because it gets results wrong and 69 percent also believe it could hurt their company’s legal rights. The study draws on responses from 2,600 privacy and security professionals across 12 geographies.
The New Year is set to start with a call to regulate artificial intelligence (AI) coming from a man whose views are considered by hundreds of millions of people to be infallible. On New Year’s Day, His Holiness Pope Francis is scheduled to issue a stark warning to the governments of the world on the dangers inherent in AI.
On January 1, 2024, His Holiness will announce: “Techno-scientific advances, by making it possible to exercise hitherto unprecedented control over reality, are placing in human hands a vast array of options, including some that may pose a risk to our survival and endanger our common home”.
Having warned that AI is a threat not only to humanity but to the existence of Planet Earth itself, His Holiness will then exhort “the global community of nations” to urgently adopt a binding international treaty to regulate not only the use of AI but also its development.
Next Wednesday will see the last round in a “King Kong meets Godzilla”-style contest between the European Union and the global technology sector over proposed regulations from Brussels to control AI. The opening rounds have been fought by lawyers, lobbyists, and bureaucrats over the monitoring of foundation model AI services such as GPT-4, access to source codes, fines for disobeying the Brussels rulings, and other related topics.
However, EU member states France, Germany, and Italy are known to be opposed to the EU’s proposed rulings and to favor self-legislation by the technology sector, as opposed to being constrained by hard rules dictated by Brussels. French AI company Mistral and Germany’s Aleph Alpha have criticized the EU’s tiered approach to regulating foundation models, defined as those with more than 45 million users.
The verdict on artificial intelligence (AI) from the real experts is finally in; professional cybercriminal fraternities have judged AI to be “overrated, overhyped and redundant,” according to fresh research from cybersecurity firm Sophos.
It has, hitherto, been accepted wisdom in the cybersecurity industry that cybercriminals, free from any regulatory authority or moral scruples, were among the first to harness the awesome power of AI to create bespoke and virtually unstoppable malware. However, having infiltrated the Dark Web forums where top professional cybercriminals discuss their trade, Sophos reports that the cybercrime sector has thoroughly tested the capabilities of AI and found it wanting.
In a startling revelation, Vikas Singla, the former COO of cybersecurity firm Securolytics, confessed to hacking two Georgia hospitals in June 2021 to enhance the company’s profile. Singla disrupted services at Gwinnett Medical Center hospitals, stealing patient data and publicizing the breach on Twitter.
Facing 17 counts of computer damage and one count of information theft, Singla agreed to pay over $817,000 in restitution. Citing his health issues, prosecutors recommended 57 months of probation. The case raises concerns about insider cyber threats jeopardizing public safety and healthcare data.
SlashNext’s “State of Phishing Report for 2023” recorded a 1,265 percent increase in malicious phishing emails since Q4 2022, a rise that correlates with the launch of ChatGPT.
The report also found that an average of 31,000 phishing emails were sent daily over the past year, 68 percent of them text-based Business Email Compromise (BEC) attacks.
US President Joe Biden has issued an executive order aimed at regulating artificial intelligence (AI), urging Congress to pass the necessary legislation as swiftly as possible. The announcement was made only 48 hours before tomorrow’s Global AI Summit in the UK, which US Vice President Kamala Harris will attend. The push to swiftly legislate indicates that the threat of AI is being taken seriously globally, with governments taking a coordinated approach. A mass of legislation and backroom deals with IT companies is surely set to follow.
The UK-hosted Artificial Intelligence (AI) Safety Summit due to take place on Wednesday and Thursday this week, attended by world leaders and AI experts, is set to become the focus of a widening global debate on the dangers of AI. Last Thursday, UK Prime Minister Rishi Sunak set out the agenda for the discussion, coming down heavily on the side of the AI doom-mongers, who once again are warning that AI poses an existential threat to humanity itself.
Google’s Vulnerability Rewards Program (VRP), which rewards researchers who find vulnerabilities in its systems, has been expanded to cover generative AI.
Google explained the expansion of the VRP as a response to the new risks generative AI brings and its magnified implications for traditional digital security.
The healthcare sector is coming under increasingly severe pressure from cyber-attacks. On the heels of news earlier last week that the infamous Lazarus Group is launching a new campaign targeting internet backbone infrastructure and healthcare facilities in the US and Europe comes news of a major attack by the Rhysida ransomware group on Los Angeles-based Prospect Medical Holdings.