In an exclusive interview with Cyber Intelligence, Andy Sheldon, North American VP of US-based ID fraud specialist Deduce, reveals how banks are struggling to combat the armies of fake account holders now being generated with artificial intelligence (AI).
Cyber Intelligence: Can you explain how cybercriminals are using AI to attack financial institutions?
Andy Sheldon: Bad actors were early adopters of AI, using it to scale up their attacks and create what are, quite literally, armies of realistic fake identities that open bank accounts and can defraud the banks on an industrial scale.
Cyber Intelligence: Do we have any idea of the scale of the threat?
Andy Sheldon: The growth of online identity fraud is accelerating, and financial services and fintech companies are struggling to contain the threat. Our recent survey of 500 US financial services fraud and risk professionals reveals that 87 percent of companies have extended credit to fake online customers. While it is not uncommon for financial companies to extend credit through a marketing campaign or promotion, it is troubling that, among organizations that have extended credit to a bogus online customer, 61 percent offered that credit without even asking the fake account holder to fill in an application.
Cyber Intelligence: How much does this type of account fraud cost the banks?
Andy Sheldon: Good question – 20 percent of respondents to our recent survey estimate the average monetary loss per incident at between $50,000 and $100,000, while 23 percent put it at more than $100,000 per incident.
Cyber Intelligence: How difficult is it to scam financial institutions by opening a fake bank account?
Andy Sheldon: You need only four things: a social security number, a first name, a last name, and a date of birth. These are constants, whereas people continually change addresses. Wholesale purchases of stolen IDs and email addresses on the Dark Web enable cybercriminals to use AI to conduct their fake-account scams on an industrial scale. You can, for example, buy 1,000 fake email addresses geolocated in the [San Francisco] Bay Area for a few hundred dollars. The next stage is to establish a credible fake identity for each of the many thousands of fake accounts being marshaled for the attack.
Cyber Intelligence: How do the scammers build up a convincing enough customer profile for so many fake accounts?
Andy Sheldon: AI trawls the internet, signing up for newsletters, making online purchases, joining discussion groups and forums, and doing anything else it can to establish a credible online identity for each of the accounts. Organized cybercriminals now spend up to two or three years building fake identities. This is a slight twist on the kind of ‘pig-butchering’ scam that has now been around for a while.
Cyber Intelligence: Pig butchering?
Andy Sheldon: In this context, ‘pig butchering’ refers to a type of socially engineered scam in which the target is ‘fattened up’ with faked data prior to the kill. In the case of fake bank accounts, this means using AI to trawl the internet looking for opportunities to build a human-like online profile. The fraudsters generally begin by applying for a third-party sub-prime loan – say $300-$600 – and paying it back over six months. That alone might raise the fake account’s credit score from around 640 to roughly 730.
Cyber Intelligence: But don’t the banks request photo ID in the form of a driving license or a passport?
Andy Sheldon: If photo ID is required, it is very easy to buy extremely convincing digital fake passports and drivers’ licenses. The Onlyfakes website, which the authorities shut down last year, was only one example of an online service offering fake passports and driver’s licenses from any one of 80 countries for $15 per faked document. Of course, in reality, this type of fake passport is only a digital picture. Generating large numbers of genuinely convincing physical passports for countries such as the US and the UK would involve the theft of large quantities of highly guarded special papers and would, therefore, be prohibitively expensive for online scammers.
Cyber Intelligence: How then can the banks ensure that the photo ID is that of a real individual and not a fake account?
Andy Sheldon: That can be difficult. Even when a bank decides to conduct a video interview to make sure the face of the account holder matches that on the photo ID, the fraudsters now employ freely-available deepfake video interview technology to thwart this final verification.
Cyber Intelligence: In the UK, some banks are considering re-opening branches in order to meet customers face-to-face. Do you think the current scale of fake-account fraud will encourage others to follow suit?
Andy Sheldon: This is a very contentious issue at the moment, as physical meetings with customers are a high-cost alternative to managing and monitoring bank accounts online. A workaround has, however, emerged in the US, which currently has a large pool of traveling notaries, largely a legacy of the COVID lockdowns. Some banks are now starting to hire traveling notaries to visit account holders’ addresses simply to verify their photo ID in person.
Cyber Intelligence: Are there any new technologies in the pipeline that would enable banks to automatically identify the rapidly growing number of fake accounts from those of legitimate customers?
Andy Sheldon: At Deduce, we have been countering stolen and synthetic ID theft for five years and have built a massive identity graph that sees 1.5 billion authenticated activities daily. On top of that graph, we have developed AI that can identify fake identities online by continuously monitoring the behavior of all identities, not just the suspicious ones. This approach consistently surfaces anomalies that can be used to pinpoint fake accounts. It is very hard for a software program to convincingly pass as a human interacting from a specific location. Most people, for example, do not go online to check their bank account every Tuesday at precisely 7:15 am, whereas an AI-driven fake account might do exactly that. As always, it is a question of staying one step ahead of the fraudsters.
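[Editor's note: the kind of machine-like regularity Sheldon describes can be illustrated with a short sketch. The snippet below is purely illustrative and is not Deduce's system; the function name, thresholds, and input format are assumptions. It flags an account whose logins cluster on one or two weekdays at almost exactly the same minute, which is behavior a real customer rarely shows.]

```python
# Illustrative sketch only: flag accounts whose login times are suspiciously
# regular (e.g. every Tuesday at exactly 07:15). All names and thresholds
# are hypothetical, not Deduce's actual detection logic.
from datetime import datetime, timedelta
from statistics import pstdev

def looks_scripted(login_times: list[datetime],
                   max_weekdays: int = 2,
                   max_minute_stddev: float = 5.0) -> bool:
    """Return True if an account's activity looks scripted rather than human."""
    if len(login_times) < 10:          # too little history to judge
        return False
    weekdays = {t.weekday() for t in login_times}
    minutes_of_day = [t.hour * 60 + t.minute for t in login_times]
    # Humans log in on varied days and at varied times; a bot tends to hit
    # the same weekday at almost the same minute, every single time.
    too_few_days = len(weekdays) <= max_weekdays
    too_punctual = pstdev(minutes_of_day) <= max_minute_stddev
    return too_few_days and too_punctual

# Example: an account that "checks its balance" every Tuesday at 07:15.
base = datetime(2024, 1, 2, 7, 15)     # a Tuesday
bot_like = [base + timedelta(weeks=i) for i in range(12)]
print(looks_scripted(bot_like))        # True
```

In practice, a signal like this would be only one feature among many drawn from an identity graph, but it shows why rigidly periodic behavior stands out against the noisy patterns of genuine customers.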
Cyber Intelligence: Thank you.