November 30, 2025

Insider attacks rise by over 50 percent

Insider attacks, where staff either deliberately or accidentally compromise an organization’s security, are rising steeply. According to cybersecurity firm Gurucul, almost half of organizations (48 percent) report that insider attacks have become increasingly common over the last 12 months. Just over half (51 percent) experienced six or more such attacks in the past year.

Gurucul’s 2024 Insider Threat report identifies the major causes for the sudden spike in insider attacks: “The top three drivers behind the surge in insider attacks are complex IT environments (39 percent), the adoption of new technologies (37 percent), and inadequate security measures (33 percent).”

Read More

Cybercriminals ramp up AI-driven deepfake scams

Cyber toolkits for threat actors are now harnessing the latest deepfake technology and artificial intelligence (AI) for targeted email attacks, known as ‘spear-phishing.’ According to cloud cybersecurity firm Egress, a staggering 82 percent of phishing toolkits mentioned deepfakes, and 75 percent referenced AI.

The growing threat presented by the use of deepfakes by cybercriminals was highlighted earlier this year at InfoSecurity Europe in London. Widely available toolkits now enable even relatively unskilled hackers to create highly convincing video and audio clips of chief executives (CEOs) and other senior staff members in any specific organization. All the threat actor needs is a short video clip of the person they wish to impersonate. This can easily be copied from a corporate seminar or from a video podcast.

Read More

A Deluge of Powerful Fraud Tactics Is Giving Businesses Trust Issues

It feels like fraudsters are consistently staying one step ahead of us. Back in early 2022, a study found that one out of every four accounts created online was fake—and that number has only gotten worse. The auto lending industry, for example, saw a staggering $7.9 billion in losses due to a 98% spike in synthetic fraud in 2023. And it is not alone in fending off more fraud attempts than ever, as malicious actors turn to generative artificial intelligence to increase both the sophistication and the sheer number of fake accounts trying to bypass verification steps and swindle businesses.

The increase we’ve seen in synthetic identities is causing a new host of problems. Not only are more businesses finding themselves with fake customers in their systems—financial institutions mistakenly giving credit to synthetic identities, colleges and universities grappling with applications from fake students, and more—but some of the measures being taken to tamp down on fraudsters’ relentless advances have had the unfortunate side effect of pushing away legitimate customers.

Read More

Cost of AI could rise tenfold – warns Gartner

Gartner issued a stern warning this week to organizations across all sectors that the cost of introducing artificial intelligence (AI) to the workplace could easily balloon by 500 to 1,000 percent.

Speaking at Gartner’s flagship Symposium event in Australia, VP analyst Mary Mesaglio said: “Factors contributing to these inflated costs include vendor price increases and neglecting the expense of utilizing cloud-based resources.”

Read More

EU AI Act to act as a template for other regions

The European Union (EU) Artificial Intelligence (AI) Act, which came into effect earlier this month, is now set to act as a template for other regions, such as the US. The American government has already drafted an AI Bill of Rights, which aims to create a similar framework regulating AI.

However, while governments are rightly concerned about the personal privacy aspect of the universal adoption of AI, some have a dangerously bullish view of the new technology’s potential. Despite a deluge of hilarious howlers, such as Google’s AI-driven images of African Vikings and American founding fathers, politicians anxious not to be left behind in the tech race swallowed Silicon Valley’s AI hype hook, line, and sinker.

Read More

America’s enemies strive to sway the US presidential election

Nations hostile to America, primarily Russia and China, are currently doubling down on their efforts to influence the outcome of the upcoming US elections. So far, their efforts appear to be directed at preventing Donald Trump from winning a second term as president, possibly fearing a Republican victory could herald the US taking a tougher stance on international affairs.

According to an extensive nine-page Microsoft threat intelligence report: “Foreign malign influence concerning the 2024 US election started off slowly but has steadily picked up pace over the last six months due initially to Russian operations, but more recently from Iranian activity.”

Read More

Sharp rise in blindside cyber-attacks

More than four in five cybersecurity professionals (83 percent) report having suffered a cyber incident requiring immediate attention despite having threat-based detection and response security measures in place, according to a survey conducted by cybersecurity firm Criticalstart, the 2024 Cyber Risk Landscape Peer Report – a 21 percent increase from 2023.

Criticalstart also reports a sharp rise in the cost of data breaches. The average cost of a data breach reached an all-time high of $4.45 million in 2023 – a 15 percent increase over the past three years. Organizations with under 500 employees reported an average breach impact increase from $2.92 million to $3.31 million – a rise of 13.4 percent.

Read More

Deepfakes set to deceive at DEF CON

It looks as if deepfakes will be the hot topic at the big international hacker conference DEF CON in Las Vegas next week, just as they took center stage at InfoSecurity Europe in London in June.

Visitors to DEF CON’s Artificial Intelligence (AI) village will be encouraged to create their own highly professional deepfake videos of fellow conference attendees by cybersecurity company Bishop Fox’s red team. The purpose is to educate conference-goers about the growing dangers now posed to all organizations by deepfake calls purporting to come from senior executives or highly trusted members of staff.

Read More

‘Shadow IT’ poses a rapidly growing risk

The use of ‘shadow IT’, where staff purchase software without the approval of their IT department, is still on the rise. Despite being acutely aware of the cyber risks involved, three-quarters of security professionals admit to having used unsanctioned off-the-shelf software-as-a-service (SaaS) applications in the last year.

According to a survey of over 250 global security professionals carried out by cybersecurity firm Next DLP, 73 percent admitted to using SaaS applications, with over half of the respondents naming data loss (65 percent), lack of visibility and control (62 percent) and data breaches (52 percent) as the chief risks inherent in using unauthorized tools. One in ten also admitted they were certain their organization had suffered a data breach or data loss as a result.

Read More

Exclusive: Expanding AI data centers have become tempting targets

Big Tech’s rapidly expanding server farms are becoming increasingly tempting targets for ransomware gangs. In their Gadarene rush to be first with AI-based services, companies such as Google and Microsoft are not only abandoning any previous pretenses about reducing their greenhouse emissions and energy consumption, they are also inadvertently building prime targets for organized cybercriminals and nation-state threat actors.

The online industry’s vast data centers and server farms run on similar operational technology (OT) systems to other industrial facilities. Originally designed to run offline, these systems are notoriously difficult to secure, particularly when they need to interface with newer information technology (IT) systems.

Read More

Exclusive: Deepfakes being used to manipulate share prices

Cash-rich cybercriminals are learning that the easiest way to make money on the stock markets while laundering cash at the same time is to use deepfake videos to impact share prices, albeit temporarily.

According to Tim Grieveson, Senior Vice President of Global Cyber Risk, BitSight: “Using video and audio deepfakes to manipulate share prices for financial gain is definitely happening, but is something no one is currently talking about.”

“Using a deepfake to announce a takeover could, for instance, drive up a stock in which the threat actor owns shares. Alternatively, a negative announcement such as a dire profits warning could be used to lower the share price so that the threat actor could buy the shares at a knock-down price, only to sell them again when the profits warning was seen to be fake,” adds Grieveson.

Read More

AI-engineered email attacks are on the rise

Email scams aimed at business users are becoming increasingly sophisticated and increasingly tough to detect. Threat actors are now using artificial intelligence to research their targets in advance of an attack, a process known as ‘social engineering.’

Phishing attacks and email scams that appear to come from a trusted source make up 35.5% of all socially engineered threats, according to a report from cybersecurity firm Barracuda: Top Email Threats and Trends. Although these types of attacks have been around for some time, cybercriminals have recently devised ingenious new methods to avoid detection and being blocked by email-scanning technologies.

Read More

Musk deems “Apple Intelligence” offering insecure

Bereft of fresh ideas or new products, Apple’s main offering at its long-awaited annual Worldwide Developers Conference in Cupertino, California, is a cobbled-together artificial intelligence (AI) service.

While AI may be Silicon Valley’s latest buzzword and marketing tool, “Apple Intelligence,” as Apple AI is branded, is already attracting heavy criticism – even from other tech giants. By pairing Microsoft-backed OpenAI’s ChatGPT with Apple’s voice-activated assistant, Siri, Apple hopes to make AI mainstream. But its critics say that all Apple has done is create a cybersecurity nightmare for corporations while sounding a death knell for the personal privacy of Apple users.

“It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!… Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river,” says Elon Musk, Tesla and SpaceX founder and the owner of X Corp, formerly Twitter.

Read More

InfoSecurity Europe 2024 – Was It All Worth It?

As the stands were being packed away on the show floor at the end of the InfoSecurity Europe 2024 conference in London this week (June 4-6), it was time for exhibitors and attendees to take stock of the three-day event. The mood among exhibitors was distinctly upbeat compared to last year’s event, which was still overshadowed by two long years of lockdown.

“It was great to be among people two years post-pandemic and to be able to see the whites of their eyes and the smiles on their faces. In an industry as serious as cybersecurity, it is also important to have face-to-face moments of levity and bonhomie,” said Matt Butterworth, senior account manager at data erasure specialist Blancco Technologies.

Neal Smyth, of managed cloud and cybersecurity company Ekco, commented: “Our presentation was oversubscribed with standing-room only. As well as generating leads, we had more customers coming to the stand this year. For example, a representative of a government department simply turned up and asked us to tender. I also hear that other exhibitors were seeing more potential customers attending InfoSecurity this year.”

Read More

New cyber threat from North Korea

Microsoft has identified a new North Korean threat actor, Moonstone Sleet. Also known as Storm-1789, Moonstone Sleet has set up fake companies and job opportunities to engage with potential targets and has even created a fully functioning computer game designed to trap the unwary.

The potentially hostile nation-state of North Korea has long been suspected of resorting to cybercrime, targeting the West to fund its military build-up and commit ongoing cyber espionage against countries such as the US and the UK. But Moonstone Sleet is taking cyber-attacks on the West to new levels of sophistication, posing a threat to all organizations.

Microsoft says Moonstone Sleet “uses both a combination of many tried-and-true techniques used by other North Korean threat actors and unique attack methodologies to target companies for its financial and cyberespionage objectives.”

Read More

‘Shadow AI’ is putting companies at risk

The increasing use of artificial intelligence (AI) tools by staff ahead of IT departments’ involvement has resulted in the growing problem of ‘shadow AI’.

“Similar to the early days of cloud adoption, workers are using AI tools before IT departments formally buy them. The result is “shadow AI,” employee usage of AI tools through personal accounts that are not sanctioned by – or even known to – the company,” says Silicon Valley-based data protection company Cyberhaven’s report: How Employees are Leading the Charge in AI Adoption and Putting Company Data at Risk.

Read More

ID security acquisition to spark M&A growth

Identity security company CyberArk has announced that it is acquiring machine identity management specialist Venafi for US $1.54 billion from software-focused investor Thoma Bravo, which already manages US$138 billion in assets.

The acquisition is being seen by some market sources as the start of more highly focused, acquisition-driven growth in the increasingly sharply defined and specialized cybersecurity sector. The logic behind the Venafi acquisition is clear. According to CyberArk, the number of machine identities is rapidly outpacing the number of human identities, with more than 40 machine identities for every human identity. By adding Venafi’s machine identity management to its dominant identity security position, CyberArk expects to expand its total addressable market by almost US$10 billion to around US$60 billion.

Read More

Cybercrime continues to cold-shoulder AI

Organized cybercriminals continue to give artificial intelligence (AI) the cold shoulder. New research from US telecoms conglomerate Verizon confirms a report in November from cybersecurity firm Sophos revealing that cybercriminals judged AI to be “overrated, overhyped and redundant.”

According to Verizon’s 2024 Data Breach Investigations Report: “We did keep an eye out for any indications of the use of the emerging field of generative artificial intelligence (GenAI) in attacks and the potential effects of those technologies, but nothing materialized in the incident data we collected globally…The number of mentions of GenAI terms alongside traditional attack types and vectors such as “phishing,” “malware,” “vulnerability,” and “ransomware” was shockingly low, barely breaching 100 cumulative mentions over the past two years.”

Read More

AI is fueling China’s cyber war against the US

Once again, China is harnessing new Western technology to attack and undermine the US at home and overseas. According to a new report from Microsoft, this time, China is using AI-generated fake social media accounts to influence the outcome of the upcoming US presidential elections.

The report, Same targets, new playbooks: East Asia threat actors employ unique methods, details China’s recent attempts to discredit the US government, including misinformation regarding the Kentucky train derailment in November, the Maui wildfires in August, the disposal of Japanese nuclear wastewater, and illegal drug use in the US, as well as efforts to exacerbate growing racial tensions across the country.

Read More

Cisco bets the farm on Splunk

Cisco’s US$28 billion acquisition of cybersecurity firm Splunk is the largest acquisition in the networking giant’s history. It is now being seen as a clear signpost for the future value of cybersecurity companies worldwide.

The price paid for the 20-year-old San Francisco company represented around 14 percent of Cisco’s US$198 billion market capitalization. The $28 billion acquisition was closed within only six months, at a time when many large mergers are being blocked or delayed by regulators.

“We will revolutionize the way our customers leverage data to connect and protect every aspect of their organization as we help power and protect the AI revolution,” said Cisco CEO Chuck Robbins.

Read More

OpenAI’s voice cloning raises security concerns

OpenAI, the maker of Microsoft-backed consumer-facing artificial intelligence (AI) service ChatGPT, may have scored something of an own goal with the unveiling of Voice Engine, billed as “a model for creating custom voices”.

While OpenAI’s blog on Friday highlights the legitimate use of voice cloning, sometimes referred to as ‘deepfake voice’, such as providing reading assistance to non-readers and children, its widespread availability could soon metamorphose into a cybersecurity nightmare.

Deepfake voice and video software is already being used by cybercriminals to mimic the voices of senior executives to commit financial fraud and other crimes. But the widespread availability and marketing of deepfake voice software is now set to make cybercrime a virtual cottage industry where any number can play, opening the floodgates to a whole new generation of cybercriminals, terrorists, pranksters, and disgruntled employees.

Read More

‘INC Ransom’ Group Threatens to Release NHS Data – March 28th

The ‘INC Ransom’ ransomware group publicly threatened to release three terabytes of sensitive NHS Scotland patient and staff data, after publishing a smaller sample of the data to demonstrate that the threat is credible.

NHS Dumfries and Galloway’s efforts to prevent the attack from being repeated are underway in collaboration with Police Scotland and the National Cyber Security Centre (NCSC).

Read More

UN drafts US-led AI resolution

The United Nations has drafted a resolution aimed at bringing the rest of the world in line with existing US artificial intelligence (AI) security guidelines. The resolution follows guidance already developed by the US Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre (NCSC).

Both emphasize the importance of “secure-by-design” and “secure-by-default” principles for AI systems. The UN Assembly called on all Member States and stakeholders “to refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law.” The Assembly added that the same rights that people have offline must also be protected online throughout the life cycle of artificial intelligence systems.

Read More

Rise in Tax-Related Phishing Scams Detected – March 22nd

Microsoft’s Threat Intelligence arm issued a warning on the rise of new, sophisticated tax phishing scams that could lead to stolen personal and financial data.

These tax-related phishing scams are initiated by impersonating trusted employers, tax agencies, and payment processors. Victims click on a malicious attachment, which leads to a believable landing page designed to capture sensitive information.

Read More

SEC fines companies $400k for over-hyping AI

Ever since the launch of the deeply flawed Microsoft-backed public-facing artificial intelligence (AI) service ChatGPT at the end of 2022, AI has been used to power a whole range of services. But the days of marketing and PR departments simply attaching the words “AI-driven” to over-hype any digital offering in the hope of attracting investors and customers are now hopefully coming to an end.

Earlier this week, the US Securities and Exchange Commission (SEC) fined two investment advisers, Delphia (USA) Inc. and Global Predictions Inc., a total of US$400,000 between them. The SEC’s order against Global Predictions alleged that the San Francisco-based firm made false and misleading claims in 2023 on its website and on social media about its purported use of AI. The order against Toronto-based Delphia alleged that the firm had made false and misleading statements in its SEC filings, in a press release, and on its website regarding its purported use of AI and machine learning.

Read More

Employee mistrust of workplace AI is growing

Amid widespread speculation that artificial intelligence (AI) will make most of today’s jobs redundant and even replace humanity itself, the UK’s Institute for the Future of Work has taken a more pragmatic approach.

Its study on the impact of modern technologies on almost 5,000 workers highlights employee concerns about the adverse effect AI is already having on their day-to-day work lives. While the majority of those surveyed believed that older technologies such as laptops and smartphones generally improve their quality of life, the same is not true of AI.

Read More

Ransomware alert for US critical infrastructure

The US Federal Bureau of Investigation (FBI) and the US Cybersecurity and Infrastructure Security Agency (CISA) have jointly issued a stark warning. The Phobos ransomware-as-a-service (RaaS) model is now being widely used by threat actors of all kinds to attack a wide variety of critical infrastructure across America.

“Since May 2019, Phobos ransomware targeted municipal and county governments, emergency services, education, public healthcare, and other critical infrastructure entities,” says the joint cybersecurity advisory document.

Phobos RaaS is particularly dangerous because it is off-the-shelf software that can be deployed by even relatively unskilled threat actors in conjunction with other widely available tools such as Smokeloader, Cobalt Strike, and Bloodhound. These tools are all easily accessible and simple to use in various operating environments, making Phobos the obvious go-to choice for a wide variety of threat actors.

Read More