November 30, 2025


Public AI opens doors to cybercrime

Companies using public artificial intelligence (AI) services such as OpenAI’s Microsoft-backed ChatGPT are at growing risk of exposing confidential data to cybercriminals. According to cybersecurity firm Group-IB’s Hi-Tech Crime Trends Report 2023/2024, between June and October of 2023 over 130,000 unique hosts with access to OpenAI were compromised, a 36 percent rise over the first five months of the year.

Companies currently take one of two main approaches to integrating AI into their workflows: using public AI models, or building bespoke proprietary AI systems on top of pre-trained, openly available models. The second approach is by far the safer of the two, as it allows data exchange with the AI system to be controlled at every stage, helping ensure confidentiality. It is, however, far more expensive and labor-intensive than relying on less secure, publicly available AI services.

“Morris II Worm” Built to Target GenAI Systems – March 4th

Researchers from the Israel Institute of Technology and Cornell Tech, in collaboration with Intuit, developed the “Morris II Worm” to automatically leverage GenAI systems to spread malware and steal data.

The researchers built the worm to demonstrate the dangers lurking in GenAI ecosystems: its “0-click propagation” delivers payloads without any user interaction, making attacks far easier for threat actors.

FBI declares cyber-war on China

US Federal Bureau of Investigation (FBI) director Christopher Wray used his keynote speech at the weekend’s Munich Cyber Security Conference, which many regard as the security version of Davos, to effectively declare cyber-war on the People’s Republic of China (PRC).

“Our adversaries have been improving exponentially,” warned Wray. “Chief among those adversaries is the Chinese government…the cyber threat posed by the Chinese government is massive.”

Wray added that China’s hacking program is larger than those of all the other major world nations combined, and that the PRC is using AI technology stolen from the Western powers to vastly amplify the present threat. The FBI director told the world powers assembled in Munich that a new, enhanced level of cooperation between government agencies such as his and the private sector is the only way to counter this new Red Menace.

Deepfake face swaps hijack video meetings

Artificial intelligence (AI) tools such as face swaps are now being used in Mission Impossible-style cyber-enabled financial crimes. The South China Morning Post reports that last month criminals defrauded a multinational Hong Kong firm of HK$200 million (US$26 million) by using deepfake video technology.

The cybercriminal gang initially sent a message to an employee in the finance department of the unnamed company, inviting him to a video conference via a message purporting to be from the organization’s chief financial officer (CFO). While on the video conference, the employee was joined by what looked and sounded sufficiently like his CFO and other colleagues to convince him to make a fraudulent transfer of company funds.

‘Pig Butchering’ crypto-fraudsters net billions

‘Pig Butchering’, a particularly ruthless form of cryptocurrency fraud that originated in China, has grown into a global scourge.

Sha zhu pan, which translates as “pig-butchering”, uses sophisticated fraudulent decentralized finance (DeFi) applications to bypass most of the defenses provided by mobile device vendors. WhatsApp is the preferred platform for targets outside China; Telegram is also used, as is Skype.

According to cybersecurity firm Sophos: “Originating in China at the beginning of the COVID pandemic, ‘pig butchering’ scams have expanded globally ever since, becoming a multi-billion-dollar fraud phenomenon.”

Businesses turn their back on GenAI

Businesses’ reaction to generative AI (GenAI) in the year since the launch of OpenAI’s Microsoft-backed ChatGPT has been one of increasing suspicion and disappointment.

Over one in four organizations have banned the use of GenAI outright. The majority of companies are now also refusing to trust a technology that has already gained a reputation for making errors and even entirely fabricating information, a failing that is referred to as “hallucinating”.

According to Cisco’s newly released 2024 Data Privacy Benchmark Study, 68 percent of organizations mistrust GenAI because it gets results wrong, and 69 percent believe it could hurt their company’s legal rights. The study draws on responses from 2,600 privacy and security professionals across 12 geographies.

Budget shortfalls power cybercrime surge

Over half of all companies worldwide cite inadequate cybersecurity budgets as a key factor behind the dramatic rise in global cybercrime during the first three quarters of 2023.

According to a survey of almost 2,000 cybersecurity practitioners worldwide, undertaken by the Ponemon Institute and commissioned by cybersecurity firm Barracuda: “There are a number of common factors that contribute to organizations’ exposable security postures. These include significant IT security budget shortfalls, a general lack of consistent enterprise-wide security policies and programs, ineffective (or no) incident response plans, and an inability to protect against automated security attacks criminals create using generative AI technology.”

Fifty-five percent of respondents cited inadequate IT security budgets as the chief cause of their growing vulnerability to cyber-attacks. A further 42 percent highlighted inadequate enterprise-wide security policies and programs, while a lack of any inventory of third parties with access to sensitive and confidential data adversely affected 38 percent. Another key factor is a lack of support from senior leadership, with 25 percent of respondents saying their management teams fail to regard cyberattacks as a significant risk.

The UK Warns on AI-Generated Malware from Nation-States – January 25th

According to the UK’s National Cyber Security Centre (NCSC), AI-generated malware built to evade detection could become a serious nation-state threat this year.

The NCSC further stated that, based on its investigations, it believes nation-state groups hold malware repositories large enough to effectively train an AI model, bolstering their ransomware attack capabilities.

JP Morgan Chase Combats 45 Billion Cyber Attacks Daily – January 18th

On Wednesday, January 17th, Mary Callahan Erdoes, head of JPMorgan Chase’s asset and wealth management division, said during the World Economic Forum in Davos that the firm faces a staggering 45 billion breach attempts daily.

Erdoes explained during a panel session that the bank employs more security engineers than Google and Amazon, out of necessity, as threat actors grow “smarter, savvier, quicker, more devious and mischievous.”

77% of CEOs Believe AI More Risk Than Reward in Cyber – January 16th

Despite the hype around AI in cybersecurity, a PwC survey revealed that 77% of CEOs still believe AI increases the risk of breaches rather than bolstering cybersecurity.

The PwC survey polled 4,700 executives globally, the majority of them CEOs. It also found that 63% of respondents regard AI as a misinformation risk, opening the door to legal and reputational damage stemming from generative AI.

Pope calls for global AI regulation in 2024

The New Year is set to start with a call to regulate artificial intelligence (AI) coming from a man whose views are considered by hundreds of millions of people to be infallible. On New Year’s Day, His Holiness Pope Francis is scheduled to issue a stark warning to the governments of the world on the dangers inherent in AI.

On January 1, 2024, His Holiness will announce: “Techno-scientific advances, by making it possible to exercise hitherto unprecedented control over reality, are placing in human hands a vast array of options, including some that may pose a risk to our survival and endanger our common home”. 

Having warned that AI is a threat not only to humanity but to the existence of Planet Earth itself, His Holiness will then exhort “the global community of nations” to urgently adopt a binding international treaty regulating not only the use of AI but also its development.

EU’s planned AI rules meet opposition

Next Wednesday will see the final round in a “King Kong meets Godzilla”-style contest between the European Union and the global technology sector over proposed regulations from Brussels to control AI. The opening rounds have been fought by lawyers, lobbyists, and bureaucrats over the monitoring of foundation-model AI services such as GPT-4, access to source code, fines for breaching the Brussels rules, and other related topics.

However, EU member states France, Germany, and Italy are known to oppose the proposed rules, favoring self-regulation by the technology sector over hard rules dictated by Brussels. French AI company Mistral and Germany’s Aleph Alpha have criticized the EU’s tiered approach to regulating foundation models, defined as those with more than 45 million users.

GE Military Project Hack Sparks National Security Concerns – November 30th

General Electric (GE) has acknowledged the theft of data by threat actor IntelBroker relating to a project involving the US Defense Advanced Research Projects Agency (DARPA), sparking national security concerns.

A GE spokesperson said the company is thoroughly investigating the claims and will take further steps to protect the integrity of its security systems, adding that business operations will not be affected.

AI “overrated and overhyped” say cybercriminals

The verdict on artificial intelligence (AI) from the real experts is finally in: professional cybercriminal fraternities have judged AI to be “overrated, overhyped and redundant,” according to fresh research from cybersecurity firm Sophos.

It has, hitherto, been accepted wisdom in the cybersecurity industry that cybercriminals, free from any regulatory authority or moral scruples, were among the first to harness the awesome power of AI to create bespoke and virtually unstoppable malware. However, having infiltrated the Dark Web forums where top professional cybercriminals discuss their trade, Sophos reports that the cybercrime sector has thoroughly tested the capabilities of AI and found it wanting.

UK and US Develop Global AI Security Guidelines – November 27th

The UK’s National Cyber Security Centre (NCSC), in partnership with the US Cybersecurity and Infrastructure Security Agency (CISA), has launched the ‘Guidelines for Secure AI System Development’.

The guidelines aim to secure AI system development by helping developers make informed cybersecurity decisions at every step of the process. They were also co-signed by 21 other international agencies and ministries from around the world.

Interpol demands global action to tackle cybercrime

Interpol is demanding that the world’s governments and business leaders act together to stem the rapidly rising global tide of cybercrime. Speaking this week at the Global Cybersecurity Forum in Riyadh, Interpol’s assistant director of cybercrime operations, Bernardo Pillot, urged a more collective approach to online threats.

United States to regulate AI

US President Joe Biden has issued an executive order aimed at regulating artificial intelligence (AI), urging Congress to pass the necessary legislation as swiftly as possible. The announcement came just 48 hours before tomorrow’s Global AI Summit in the UK, which US Vice President Kamala Harris will attend. The push to legislate swiftly indicates that the threat of AI is being taken seriously globally, with governments taking a coordinated approach. A mass of legislation and backroom deals with IT companies is surely set to follow.

Global AI summit mired in controversy

The UK-hosted Artificial Intelligence (AI) Safety Summit due to take place on Wednesday and Thursday this week, attended by world leaders and AI experts, is set to become the focus of a widening global debate on the dangers of AI. Last Thursday, UK Prime Minister Rishi Sunak set out the agenda for the discussion, coming down heavily on the side of the AI doom-mongers, who once again are warning that AI poses an existential threat to humanity itself.
