InfoSecurity Europe 2025, which opens in London today, Tuesday, June 3rd, will this year be dominated by the rapidly growing threat posed by the weaponization of artificial intelligence (AI). New to the conference is an AI and cloud security stage, which will showcase ways organizations can counter AI-driven threats. AI-driven cybersecurity also dominated the recent RSA Conference in San Francisco. Over the past 12 months, as the global fascination with AI has surged, cybercriminals haven’t wasted a moment exploiting the technology to carry out cyberattacks on an industrial scale.
Deepfake videos of TV news presenters are being used to dupe viewers into logging onto illegal gambling sites, where malware is then downloaded onto their devices. In the clips, news anchors on Sky and other channels appear to quote Apple CEO Tim Cook recommending an app through which users can supposedly win vast sums of money. The news reports have been identified as deepfake videos, and it has been further revealed that thousands of similar deepfakes of journalists have circulated in the US and the UK.
Cyber toolkits for threat actors now harness the latest deepfake technology and artificial intelligence (AI) for targeted email attacks, known as ‘spear-phishing.’ According to email security firm Egress, a staggering 82 percent of phishing toolkits mentioned deepfakes, and 75 percent referenced AI. The growing criminal use of deepfakes was highlighted earlier this year at InfoSecurity Europe in London. Widely available toolkits now enable even relatively unskilled hackers to create highly convincing video and audio clips of CEOs and other senior staff at a specific organization. All the threat actor needs is a short video clip of the person they wish to impersonate, easily copied from a corporate seminar or a video podcast.
It looks as if deepfakes will be the hot topic at the big international hacker conference DEF CON in Las Vegas next week, just as they took center stage at InfoSecurity Europe in London in June. Visitors to DEF CON’s Artificial Intelligence (AI) Village will be invited by cybersecurity company Bishop Fox’s red team to create their own highly professional deepfake videos of fellow conference attendees. The purpose is to educate conference-goers about the growing danger posed to all organizations by deepfake calls purporting to come from senior executives or highly trusted members of staff.
Bereft of fresh ideas or new products, Apple’s main offering at its long-awaited annual Worldwide Developers Conference in Cupertino, California, is a cobbled-together artificial intelligence (AI) offering. While AI may be Silicon Valley’s latest buzzword and marketing tool, “Apple Intelligence,” as Apple’s AI is branded, is already attracting heavy criticism – even from other tech giants. By pairing Microsoft-backed OpenAI’s ChatGPT with its voice-activated assistant, Siri, Apple hopes to make AI mainstream. But critics say that all Apple has done is create a cybersecurity nightmare for corporations while sounding a death knell for the personal privacy of its users. “It's patently absurd that Apple isn't smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!... Apple has no clue what's actually going on once they hand your data over to OpenAI. They're selling you down the river,” says Elon Musk, CEO of Tesla, founder of SpaceX, and owner of X Corp, formerly Twitter.
As the stands came down on the show floor at the end of the InfoSecurity Europe 2024 conference in London this week (June 4-6), it was time for exhibitors and attendees to take stock of the three-day event. The mood among exhibitors as they packed everything into cardboard boxes was distinctly upbeat compared to last year’s event, which was still overshadowed by two long years of lockdown.

“It was great to be among people two years post-pandemic and to be able to see the whites of their eyes and the smiles on their faces. In an industry as serious as cybersecurity, it is also important to have face-to-face moments of levity and bonhomie,” said Matt Butterworth, senior account manager at data erasure specialist Blancco Technologies.

Neal Smyth, of managed cloud and cybersecurity company Ekco, commented: “Our presentation was oversubscribed, with standing room only. As well as generating leads, we had more customers coming to the stand this year. For example, a representative of a government department simply turned up and asked us to tender. I also hear that other exhibitors were seeing more potential customers attending InfoSecurity this year.”
InfoSecurity Europe, widely acknowledged as the chief global challenger to RSA in the US, kicked off with a keynote speech and panel discussion on “Mapping the Deepfake Landscape.” Broadcaster and researcher Henry Ajder cited numerous examples of the increasing sophistication of malicious deepfakes. The most striking was a fake image purporting to show an explosion near the Pentagon, shared by multiple verified Twitter accounts last year, which caused a brief dip in US stock markets. “Threat actors are starting to explore the possibility of using deepfakes to move share prices with fake podcasts and video interviews with C-suite executives of listed companies. Even if the fake is quickly spotted and squashed and the company’s shares are only impacted for 10 minutes, the threat actor can make a huge profit by speculating on the movement of a specific stock,” says Tim Grieveson, senior vice president of global cyber risk at cybersecurity firm BitSight, which in 2021 received $250 million in funding from financial services giant Moody’s.
The SonicWall Capture Labs team has reported on threat actors developing malicious fake Android apps that impersonate Google, Instagram, Snapchat, WhatsApp, and X. Once downloaded by victims and granted the permissions they request, the illegitimate apps set about stealing sensitive data from the device, such as contacts, text messages, call logs, and passwords.
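The pattern described above is simple: the fake app requests broad permissions, then harvests data through them. As a defensive illustration only (not SonicWall’s tooling; the package names and permission list below are illustrative assumptions), here is a minimal Python sketch that flags apps requesting high-risk Android permissions, assuming you have already dumped each app’s requested permissions, for example via `adb shell dumpsys package`:

```python
# Flag apps that request high-risk Android permissions.
# The risk list and the example package data are illustrative, not exhaustive.

HIGH_RISK = {
    "android.permission.READ_CONTACTS",
    "android.permission.READ_SMS",
    "android.permission.READ_CALL_LOG",
}

def risky_permissions(requested):
    """Return the sorted subset of requested permissions considered high-risk."""
    return sorted(HIGH_RISK.intersection(requested))

# Hypothetical permissions dumped per package, e.g. via adb.
apps = {
    "com.example.fakechat": [
        "android.permission.INTERNET",
        "android.permission.READ_SMS",
        "android.permission.READ_CONTACTS",
    ],
    "com.example.benignapp": ["android.permission.INTERNET"],
}

for pkg, perms in apps.items():
    flagged = risky_permissions(perms)
    if flagged:
        print(f"{pkg}: review before installing -> {', '.join(flagged)}")
```

A permission request is not proof of malice, of course – a real messaging app legitimately reads contacts – so a flag here is a prompt for scrutiny, not a verdict.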