November 30, 2025

Blog Post

Cyber Intelligence

AI-powered ransomware fuels cybercrime

Cybercriminals are now weaponizing artificial intelligence (AI) to create potentially devastating off-the-shelf ransomware. Researchers at cybersecurity company ESET have discovered what they call “the first known AI-powered ransomware”. The malware, which ESET has named PromptLock, is designed to exfiltrate, encrypt, and possibly even destroy data, though the data-destruction functionality does not yet appear to have been implemented.

Read More

Criminal use of AI enters new and dangerous phase

Cybercriminals have just added what may be the most dangerous weapon yet to their arsenal of illegal software: a Dark Web version of legitimate artificial intelligence (AI) platforms. Tel Aviv-based network security company Cato Networks has uncovered an emerging criminal platform called Nytheon AI, which it describes as “a fully-fledged illicit AI platform”. While there have been other attempts to offer criminal versions of popular AI models, Nytheon AI is the first truly comprehensive multilingual offering. Threat actors can now use the platform to conduct a variety of attacks, including tailored spear-phishing campaigns, deepfake documents, and polymorphic malware capable of constantly mutating its appearance.

Read More

InfoSecurity Europe 2025 focuses on weaponized AI

InfoSecurity Europe 2025, which begins in London today, Tuesday, June 2nd, will this year be dominated by the rapidly growing threat posed by the weaponization of artificial intelligence (AI).

New to the conference is an AI and cloud security stage, which will exhibit ways organizations can counter the threat posed by AI. AI-driven cybersecurity also dominated the recent RSA conference in San Francisco. As AI’s popularity has surged over the past year, cybercriminals have been quick to exploit the new technology to carry out cyberattacks on an industrial scale.

Read More

AI increasingly used to deliver malware

Many organizations’ ongoing enthusiasm for incorporating artificial intelligence (AI) is leaving them open to sophisticated and carefully planned cyber-attacks. Cybersecurity company Mandiant, a Google subsidiary, has issued an urgent warning for companies to be wary of downloading AI tools from unvetted websites.

Read More

AI system blackmails its creator

Artificial Intelligence (AI) is learning to think like a human. But the critical question now being asked in IT circles is: “What kind of human?”

Claude Opus 4, a groundbreaking new AI system released by AI developer Anthropic on Tuesday, has attempted to blackmail its creators by threatening to expose an alleged extramarital affair. This follows reports of other AI systems, programmed to interact with humans effectively, lying by making up fake information, a phenomenon known to developers as “hallucinating”.

Read More

AI-driven cybersecurity dominates RSA in San Francisco

Artificial Intelligence (AI)-driven cybersecurity is set to dominate RSA, the world’s largest cybersecurity event, which kicked off yesterday in San Francisco and runs from April 28 to May 1.

Networking giant Cisco set the pace with an announcement that it is deepening its partnership with ServiceNow, a leading AI platform for business transformation. It is claimed that the combination of Cisco’s infrastructure and security platforms and ServiceNow’s AI-driven platform and security solutions will unlock mutual customers’ ability to secure and scale their use of AI while decreasing risk and complexity. The first such integration will bring together Cisco’s AI Defense capabilities with ServiceNow SecOps.

Read More

Deepfake news lures new victims

Deepfake videos of TV news presenters are being used to dupe gullible viewers into logging onto illegal gambling sites, where malware is then downloaded onto their devices. News anchors on Sky and other channels appear to quote Apple CEO Tim Cook recommending an app where users can easily get rich by winning vast sums of money. The news reports have been identified as deepfake videos, and thousands of similar deepfakes of journalists have reportedly circulated in the US and the UK.

Read More

Cybersecurity has become an ongoing war

In our business, assessing risk is crucial. The threat landscape is constantly evolving, with cybercriminals introducing new techniques and refining existing ones. And as online connectivity grows, so does every organization’s overall attack surface. Unit 42 conducts ongoing research into the full scope of this ever-expanding attack surface and continually tests existing defenses. They play the role of cybercriminals, acting as white-hat hackers, if you like, in order to detect potential weaknesses. This research is conducted across the board and is also directed at each client’s specific attack surface. And when there is a breach, Unit 42 is there to detect and contain it. They effectively act as wartime consiglieres – remember that the ongoing Russia/Ukraine conflict started in cyberspace. They must also act immediately to mitigate any breach that does occur. Constant research and testing of defenses are vital. We have to be right every time, but the cybercriminal gangs only have to be right once to effect a breach and perform a successful attack.

Read More

From deepfakes to in-person fraudsters

Boeing Employees’ Credit Union (BECU) is a not-for-profit credit union based in Washington, dedicated to improving the financial well-being of its members and communities. It has grown beyond serving Boeing’s employees to more than 1.5 million members and $29 billion in assets. In an exclusive interview, Sean Murphy, Chief Information Security Officer (CISO) at BECU, explains the changing cyber-threats now facing consumers.

The cybersecurity challenges faced by all consumers have escalated with the growth of artificial intelligence (AI). We have witnessed the growing use of botnets, and AI is now at such a stage that it can be used to attempt to gain access to accounts on an individual level. The use of virtual private networks (VPNs) simplifies this process and makes it difficult to track. Remember – while organizations are constantly monitoring for threats and attacks, the cybercriminals only have to get it right one time to cause a highly damaging breach. Advanced persistent threats (APTs) have now become a major ongoing threat. Financial institution employees are the first line of defense against cyber attackers and play a key role in protecting consumers. As such, a robust cybersecurity team and the regular training of employees are crucial.

Read More

Majority of Orgs Hit by AI Cyber-Attacks as Detection Lags – March 7th

AI-driven cyber-attacks are becoming a widespread threat, with 87% of security professionals reporting incidents in the past year, according to SoSafe’s latest cybercrime trends report. Despite the growing concern, only 26% of security experts express high confidence in their ability to detect such attacks.

The World Economic Forum noted a 223% rise in deepfake-related tools on dark web forums between early 2023 and 2024, further fueling concerns. While 91% of experts expect AI-driven attacks to surge over the next three years, nearly all respondents acknowledge the urgency of improving detection capabilities.

Read More

Companies complacent about AI-generated cyber-attacks

Companies are largely ignorant of the looming threat of increased artificial intelligence (AI) identity theft, despite the fact that 93 percent of companies surveyed suffered two or more identity-related breaches in 2024.

According to leading identity management company CyberArk Software, executives and employees alike are overconfident in their ability to spot ongoing ID theft and subsequent cyber breaches, with over 75 percent of respondents to a recent survey saying they are confident their employees can identify deepfake videos or audio of their leaders.

“Employees are [also] largely confident in their ability to identify a deepfake video or audio of the leaders in their organization. Whether we chalk it up to the illusion of control, planning fallacy, or just plain human optimism, this level of systemic confidence is misguided,” warns CyberArk following a survey of 4,000 US-based employees.

Read More

Three million Google Chrome users hacked

Over three million Google Chrome users have been issued a warning concerning 16 browser extensions that have been compromised by hackers. This alarming news comes hard on the heels of reports earlier this month that cybercriminals are also leveraging search engine giant Google’s new Gemini 2.0 artificial intelligence (AI) assistant.

The list of hacked browser extensions includes: Emojis, Video Effects for YouTube, Audio Enhancer, Blipshot, Color Changer for YouTube, Themes for Chrome, YouTube Picture in Pictures, Adblocker for Chrome, Adblock for You, Adblock for Chrome, Nimble Capture, KProxy, Page Refresh, and Wistia Video Downloader.

Read More

Companies must identify the value of their data

Most organizations have no clear idea of the value of the data they hold on themselves and their customers. According to technology research and consulting firm Gartner, 30 percent of chief data and analytics officers (CDAOs) say that their top challenge is the inability to measure data, analytics, and AI’s impact on business outcomes. Gartner also reports that only 22 percent of organizations surveyed have defined, tracked, and communicated business impact metrics for the bulk of their data and analytics (D&A) use cases.

“There is a massive value vibe around data, where many organizations talk about the value of data, desire to be data-driven, etc., but there are few who can substantiate it,” said Michael Gabbard, senior director analyst at Gartner.

Read More

Darcula can suck the blood out of any brand

Cybercrime just got easier. A new artificial intelligence off-the-shelf phishing kit named darcula now enables even inexperienced cyber criminals to impersonate any corporate brand with a complex, customizable campaign. Phishing generally refers to a form of online fraud where attackers attempt to steal sensitive information such as passwords, credit card numbers, or bank account details.

“The criminals at darcula are back for more blood, and they mean business with one of the more impactful innovations in phishing in recent years. The new version of their ‘Phishing-as-a-Service’ (PhaaS) platform, darcula-suite, adds first-of-its-kind personalization capabilities … to allow criminals to build advanced phishing kits that can now target any brand with the click of a button,” says cybersecurity company Netcraft.

Read More

Toxic warning for China’s DeepSeek AI app

On January 31, Texas became the first US state to ban the Chinese-owned generative artificial intelligence (AI) application, DeepSeek, on state-owned devices and networks. New York swiftly followed suit on February 10, with Virginia imposing a ban on February 11.

The Texas state governor’s office stated: “Texas will not allow the Chinese Communist Party to infiltrate our state’s critical infrastructure through data-harvesting AI and social media apps. State agencies and employees responsible for handling critical infrastructure, intellectual property, and personal information must be protected from malicious espionage operations by the Chinese Communist Party. Texas will continue to protect and defend our state from hostile foreign actors.”

Read More

2025 forecast to be boom year for cybersecurity

California-based cybersecurity goliath Palo Alto Networks has issued a bullish revenue forecast based on a perceived rising global demand for artificial intelligence (AI)-driven security products.

“In Q2 [2025], our strong business performance was fuelled by customers adopting technology driven by the imperative of AI, including cloud investment and infrastructure modernization,” said CEO Nikesh Arora. “Our growth across regions and demand for our platforms demonstrates our customers’ confidence in our approach. It reaffirms our faith in our 2030 plans and our $15 billion next-generation technology annual recurring revenue goal.”

Read More

The coming St Valentine’s Day cyber-massacre

This coming Friday is St Valentine’s Day and cybercriminals all over the world are rubbing their hands together with glee at the harvest they intend to reap. Developments in artificial intelligence and the widespread availability of off-the-shelf cybercrime software have enabled a new generation of cyber-scams specifically designed around St Valentine’s Day.

In the recent past, cybercriminals typically used February 14th as an excuse to introduce themselves to lonely people, patiently winning their victims’ trust before cruelly robbing them of their savings.

Read More

Cybercriminals Weaponize Google AI assistant

Cybercriminals have been quick to see nefarious possibilities in search engine giant Google’s new Gemini 2.0 AI assistant. According to Google’s own findings, nation-state-backed threat actors are already leveraging Gemini to accelerate their criminal campaigns.

The actors are using Gemini 2.0 for “researching potential infrastructure and free hosting providers, reconnaissance on target organizations, research into vulnerabilities, payload development, and assistance with malicious scripting and evasion techniques,” says Google.

Read More

“Crazy Evil” Threatens Cryptocurrency Ecosystem

A new and rising threat to decentralized finance has been identified. Threat intelligence researchers at the Insikt Group have uncovered “Crazy Evil,” a rapidly growing Russian crypto-scam gang that targets cryptocurrency users and influencers. According to Insikt Group, over ten active social media scams are linked directly to Crazy Evil, garnering millions of dollars in illicit funds and infiltrating tens of thousands of devices.

Crazy Evil is what is referred to as a “traffer” team, which Insikt describes as “a collective of social engineering specialists tasked with redirecting legitimate traffic to malicious landing pages.” Allegedly operating since 2021 on dark web forums and amassing thousands of followers on their public Telegram channels, Crazy Evil’s primary targets are cryptocurrency users, non-fungible token (NFT) traders and gaming professionals – all of whom often use decentralized platforms with little or no regulatory oversight.

Read More

Shoring up SMEs’ Cyber-Defenses

In an exclusive interview with Cyber Intelligence, CEO and co-founder of cybersecurity firm EyeR, Sean Tsvik, explains what small-to-medium-sized organizations (SMEs) can do to protect their systems and customers’ critical data from increasingly sophisticated cyber-attacks.

They should start by using a managed detection and response (MDR) service. That allows medium-sized organizations to protect themselves against increasingly sophisticated cyber-attacks without paying high salaries to in-house cyber experts. MDR services work out costing only a couple of dollars per endpoint and are by far the best starting point for small-to-medium-sized companies looking to strengthen their cyber defenses. Small organizations can also benefit from moving to the cloud, as this leaves even fewer endpoints to secure.

Read More

Chinese AI offering rattles Big Tech investors

The start of this week saw roughly $1 trillion wiped off leading US tech stocks, following the launch of DeepSeek, a Chinese rival to AI offerings such as OpenAI’s ChatGPT. What has really spooked the markets is that the Chinese artificial intelligence (AI) assistant uses less data and generates lower all-round costs than its current Silicon Valley rivals.

The expense of training and developing DeepSeek’s models is claimed to be only a small fraction of that required for OpenAI’s, putting into question the need to invest in the latest and most powerful AI accelerator chips from Nvidia. At the start of trading this week, shares in Nvidia dropped a full 10 percent, and AI data analytics company Palantir lost seven percent in pre-market trading. Microsoft, Google’s parent company Alphabet, and Meta also experienced drops in their share prices.

Read More

GenAI speeds up cybercrime

While Silicon Valley is finding that artificial intelligence (AI) is proving a tough sell to businesses and consumers, cybercriminals worldwide have lost little time in adapting the technology to cybercrime.

The latest rogue AI offering is GhostGPT. According to Abnormal Security, GhostGPT follows hard on the heels of earlier illicit AI offerings: WormGPT, WolfGPT, and EscapeGPT. To test its capabilities, Abnormal Security researchers asked GhostGPT to create a Docusign phishing email. The chatbot produced a convincing template with ease, demonstrating its ability to trick potential victims.

Read More

WEF predicts perfect storm for cybercrime

The World Economic Forum (WEF) Global Cybersecurity Outlook 2025 reports that several compounding factors are creating an increasingly complex and risky business environment. These include the growing complexity of supply chains, rising geopolitical tensions, cybercriminals’ increasing use of artificial intelligence (AI), and the entry of traditional organized crime groups into cybercrime.

Ransomware remains the top organizational cyber risk year on year, with 45 percent of respondents ranking it as a top concern in this year’s survey. Over half of the large organizations surveyed worldwide, 54 percent, identified supply chain challenges as the greatest barrier to achieving cyber resilience, citing the increasing complexity of supply chains coupled with a lack of visibility and oversight into the security levels of suppliers.

Read More

AI enables ransomware boom

A new ransomware group, named FunkSec, is the latest example of relatively inexperienced cybercriminals using AI to develop weaponized malware. The group claims that over 85 organizations fell victim to its ransomware attacks in December alone, potentially surpassing every other ransomware group in terms of victim numbers.

According to Check Point Research: “FunkSec operators appear to use AI-assisted malware development which can enable even inexperienced actors to quickly produce and refine advanced tools…Presenting itself as a new Ransomware-as-a-Service (RaaS) operation, FunkSec appears to have no known connections to previously identified ransomware gangs.”

Read More

AI gives the game away

The latest threat for companies using large language model (LLM) AI software to replace human staff is the software’s innate gullibility. LLM software can be likened to some cowardly bank clerk in an old Western hold-up who not only willingly opens a back door for the bad guys but also tells them the combination of the safe.

The methods for persuading LLMs into naively disclosing the keys to the corporate kingdom are known as ‘LLM Jailbreak’ techniques. Palo Alto Networks Unit 42 researchers have named one such LLM Jailbreak, “Bad Likert Judge”.

Read More

Security minefield ahead for GenAI users

In an exclusive interview with Cyber Intelligence, Gadi Bashvitz, CEO of cybersecurity testing firm Bright Security, warns of the security challenges facing organizations in the wake of widespread adoption of GenAI.

Cyber Intelligence: Are there any specific dangers of which companies using GenAI to generate new code should be particularly aware?

Gadi Bashvitz: There are multiple considerations here. On one hand, any solution developed leveraging LLMs is prone to LLM-specific vulnerabilities such as Insecure Output Handling and Broken Access Control, and it is critical to make sure organizations are aware of and can detect such vulnerabilities before releasing LLM-based solutions.

Read More

How can companies deal with data overload?

Sanjaya Kumar, MD, is the CEO of cybersecurity company SureShield, Inc. Dr. Kumar has more than 25 years of healthcare compliance, risk management, and security experience. In an exclusive interview with Cyber Intelligence, he outlines the challenge presented by the current environment of data overload and some of the steps organizations should take to mitigate the associated risks.

Read More

Seasonal cybercrime bonanza is under way

Cybercriminals now have an unprecedented array of highly effective custom-made tools designed to defraud online retailers and shoppers during the holiday season.

“As we approach the end of 2024, the upcoming holiday season and events like Thanksgiving, Black Friday, Cyber Monday, and Christmas bring millions of shoppers online with attractive discounts and limited-time offers. They also create ideal conditions for cybercriminals to exploit users and shoppers,” warns threat intelligence firm FortiGuard in its report, Threat Actor Readiness for the Upcoming Holiday Season.

Read More

Generative AI – the current state of play

In an exclusive interview with Cyber Intelligence, Mike Finley, the Co-Founder and CTO of AnswerRocket, a business intelligence platform that deals with big data and AI agents, explains what generative AI can do for companies right now.

AI is changing faster than people are capable of understanding. So the general misunderstanding of what AI can do is going to be a lasting problem. The fact is that key scientists believe AI is now capable of improving itself, meaning we are at the start of a runaway path forward. At AnswerRocket, our basic DNA is artificial intelligence (AI) to enable business intelligence (BI). This obviously took a new direction with the widespread introduction of generative AI, but our basic approach remains the same.

Read More

Big Tech’s rapidly-shrinking green credentials

Big Tech is currently performing a rather awkward fan dance, trying to cover up its rape and pillage of the earth’s more finite resources with its rapidly shrinking green credentials. Silicon Valley’s green credentials may, however, soon vanish altogether under the vast amount of e-waste the rapid rollout of generative artificial intelligence (AI) has already started to generate.

Measures such as the installation of waterless urinals and charging points for e-vehicles for Big Tech staff are merely Silicon Valley window dressing for what has always been an incredibly dirty and polluting industry. Named after the material used to manufacture semiconductors in Intel’s chip fabrication plants, Silicon Valley began with an ugly reputation for allowing vast amounts of toxic chemicals to seep into the local environment, allegedly making their way into the bodies of workers and children. Californian locals ruefully commented that the area should be renamed “Cyanide Valley”, as the notorious poison, which is used in the manufacture of semiconductors, was claimed to have seeped into local soil and water sources.

Read More

Can Microsoft’s new AI Copilot replace human workers?

In a matter of days, Microsoft will unveil the much-heralded new version of its Copilot software to a business world already severely disappointed by Big Tech’s initial AI offerings. It also comes hard on the heels of a stern warning from Gartner to organizations across all sectors that the cost of introducing artificial intelligence (AI) to the workplace could easily balloon by a staggering 500-1,000 percent.

But Microsoft’s current marketing push for its latest AI offering, a souped-up version of its Copilot service, is rapidly gathering momentum, in spite of commercial AI’s dismal performance to date. Microsoft chief executive Satya Nadella is currently touring 39 cities around the world with new products and use cases for AI. He predicts that the performance of AI systems will double approximately every six months, with the AI revolution to be led by the enhanced Copilot, part of the 365 package.

“The question now is how do we transfer this to the real world…Think of Copilot as a user interface for AI,” Nadella told an audience in Berlin.

Read More

AI-Enabled Drones: Navigating the New Frontier of Cyber C-UAS Defense

In the rapidly evolving landscape of cybersecurity, artificial intelligence (AI) has emerged as a transformative force, revolutionizing operational methodologies across various industries. This development is especially evident in the drone sector, where AI integration into unmanned aerial vehicles (UAVs) presents unprecedented opportunities alongside critical security challenges. As AI-enhanced drones are increasingly deployed across diverse sectors—from precision agriculture to military reconnaissance—their growing autonomy and intelligence introduce significant risks, making robust cybersecurity countermeasures essential.

Read More

Rocky start for Big Tech’s AI rollout

Companies are already becoming disenchanted with the initial rollout of Big Tech’s new artificial intelligence (AI) technology. Rapidly diminishing return on investment (ROI) and poor initial outcomes are forcing companies to rethink their earlier strategies, according to a new report from AI data services company, Appen.

“As enterprises gain more AI experience, they are becoming more selective about which projects to pursue, and fewer initiatives are reaching deployment. Appen believes this trend is likely driven by diminishing ROI or the lack of significant outcomes,” says Appen.

Gartner also recently issued a stern warning to organizations across all sectors that the cost of introducing artificial intelligence (AI) to the workplace could easily balloon by a staggering 500-1,000 percent.

Read More

Big tech goes nuclear

America’s leading technology companies are now engaged in their own nuclear power race. Advertising and search giant Google has announced that it has signed the world’s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs), to be developed by Kairos Power.

By investing in its own nuclear energy facilities, Google has joined the ranks of Amazon, Microsoft, and Oracle, all investing heavily in nuclear facilities to power the rollout of new services based around their prematurely launched artificial intelligence (AI) services. According to a recent report from US investment bank Jefferies: “If it feels like Graphics Processing Units (GPUs) are suddenly everywhere, it’s because they are. GPUs drive computation across a wide range of industries and applications, from big data analytics to machine learning [AI].”

Read More