Deepfake videos of TV news presenters are being used to dupe gullible viewers into logging on to illegal gambling sites, where malware is then downloaded onto their devices. In the fabricated clips, news anchors on Sky and other channels appear to quote Apple CEO Tim Cook recommending an app that supposedly lets users get rich by winning vast sums of money. The reports have been identified as deepfakes, and thousands of similar deepfake videos of journalists are reported to have circulated in the US and the UK.
Companies are largely ignorant of the looming threat of AI-driven identity theft, despite the fact that 93 percent of companies surveyed suffered two or more identity-related breaches in 2024. According to leading identity management company CyberArk Software, executives and employees alike are overconfident in their ability to spot identity theft and the cyber breaches that follow, with over 75 percent of respondents to a recent survey saying they are confident their employees can identify deepfake videos or audio of their leaders. “Employees are [also] largely confident in their ability to identify a deepfake video or audio of the leaders in their organization. Whether we chalk it up to the illusion of control, planning fallacy, or just plain human optimism, this level of systemic confidence is misguided,” warns CyberArk, following a survey of 4,000 US-based employees.
In an exclusive interview with Cyber Intelligence, Patrick Harding, chief product architect at digital identity security company Ping Identity, outlines the growing threat of identity theft and fraud, explaining how it evolved and what can be done to counter it. As Harding puts it, everybody is forced into digital transactions and relationships, and identity management is fundamental to knowing who you are interacting with. The problem goes back to the beginning of the internet in the 1990s and a cartoon of a dog in front of a computer with the caption, “On the internet, no one knows you’re a dog!” That illustrates the core problem of identifying online users and customers. How rigorously identity is verified largely depends on the sensitivity of the activity concerned: there is a big difference between buying a pair of jeans online and opening a bank account. In each case there is a series of verification steps, which for financial services could include requesting passport ID.
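To make Harding's point about risk-tiered verification concrete, the following is a minimal, purely illustrative Python sketch of step-up identity checks; the tier names, required checks, and function names are hypothetical, not Ping Identity's product or API.

```python
# Illustrative sketch only: identity checks scale with transaction sensitivity.
# All names (TRANSACTION_TIERS, verify_identity, etc.) are hypothetical.

TRANSACTION_TIERS = {
    "retail_purchase": ["email_verification"],                        # buying jeans online
    "account_recovery": ["email_verification", "sms_otp"],
    "open_bank_account": ["email_verification", "sms_otp",
                          "government_id_check", "liveness_check"],   # e.g. passport ID
}

def required_checks(transaction_type: str) -> list[str]:
    """Return the identity checks demanded by a transaction's sensitivity tier."""
    return TRANSACTION_TIERS.get(transaction_type, ["email_verification"])

def verify_identity(transaction_type: str, completed_checks: set[str]) -> bool:
    """Allow the transaction only if every required check has been passed."""
    return all(check in completed_checks for check in required_checks(transaction_type))

print(verify_identity("retail_purchase", {"email_verification"}))               # True
print(verify_identity("open_bank_account", {"email_verification", "sms_otp"}))  # False
```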
Insider attacks, where staff either deliberately or accidentally compromise an organization’s security, are rising steeply. According to cybersecurity firm Gurucul, almost half of organizations (48 percent) report that insider attacks have become increasingly common over the last 12 months, and just over half (51 percent) experienced six or more such attacks in the past year. Gurucul’s 2024 Insider Threat Report identifies the major causes of the sudden spike in insider attacks: “The top three drivers behind the surge in insider attacks are complex IT environments (39 percent), the adoption of new technologies (37 percent), and inadequate security measures (33 percent).”
Cyber toolkits for threat actors are now harnessing the latest deepfake technology and artificial intelligence (AI) for targeted email attacks, known as ‘spear-phishing.’ According to cloud cybersecurity firm Egress, a staggering 82 percent of phishing toolkits mentioned deepfakes, and 75 percent referenced AI. The growing threat posed by cybercriminals’ use of deepfakes was highlighted earlier this year at InfoSecurity Europe in London. Widely available toolkits now enable even relatively unskilled hackers to create highly convincing video and audio clips of chief executives (CEOs) and other senior staff at any given organization. All the threat actor needs is a short video clip of the person they wish to impersonate, which can easily be copied from a corporate seminar or a video podcast.
It looks as if deepfakes will be the hot topic at the big international hacker conference DEF CON in Las Vegas next week, just as they took center stage at InfoSecurity Europe in London in June. Visitors to DEF CON’s Artificial Intelligence (AI) Village will be encouraged by cybersecurity company Bishop Fox’s red team to create their own highly professional deepfake videos of fellow conference attendees. The purpose is to educate conference-goers about the growing danger posed to all organizations by deepfake calls purporting to come from senior executives or highly trusted members of staff.
Cash-rich cybercriminals are learning that the easiest way to make money on the stock markets while laundering cash at the same time is to use deepfake videos to move share prices, albeit temporarily. According to Tim Grieveson, Senior Vice President of Global Cyber Risk at BitSight: “Using video and audio deepfakes to manipulate share prices for financial gain is definitely happening, but is something no one is currently talking about.” “Using a deepfake to announce a takeover could, for instance, drive up a stock in which the threat actor owns shares. Alternatively, a negative announcement such as a dire profits warning could be used to lower the share price so that the threat actor could buy the shares at a knock-down price, only to sell them again when the profits warning was seen to be fake,” adds Grieveson.
The firm that lost $25 million to deepfake video scammers in Hong Kong earlier this year has been revealed to be UK-based engineering firm Ove Arup, known for world landmarks including the Sydney Opera House. The company employs roughly 18,000 people worldwide and has annual revenues of over £2 billion. In early February of this year, Cyber Intelligence reported that a then-unidentified firm in Hong Kong had been defrauded of roughly US$25 million by criminals using deepfake video technology to pose as the company’s chief financial officer (CFO) and other trusted colleagues. Unaware of how sophisticated even off-the-shelf deepfake video has become, the targeted staff member was completely duped: during an entirely fake but highly convincing video conference, he reasonably assumed he was speaking to his CFO and made the $25 million transfer as instructed. When the attack was originally reported, the Hong Kong police gave a stark warning.