
It feels like fraudsters are consistently staying one step ahead of us. Back in early 2022, a study found that one in four new online accounts was fake, and that number has only gotten worse. The auto lending industry, for example, saw a staggering $7.9 billion in losses as synthetic fraud spiked 98% in 2023. It is not alone: malicious actors are turning to generative artificial intelligence to increase both the sophistication and the sheer number of fake accounts trying to bypass verification steps and swindle businesses.
The rise in synthetic identities is creating a host of new problems. More businesses are finding fake customers in their systems: financial institutions mistakenly extending credit to synthetic identities, colleges and universities grappling with applications from fake students, and more. Worse, some of the measures taken to tamp down fraudsters' relentless advances have had the unfortunate side effect of pushing away legitimate customers.
SuperSynthetic Identities On the Rise
While synthetic identities have been a thorn in the side of security and risk management professionals for years, "SuperSynthetic" identities, which have risen to prominence thanks to advances in AI, are even more formidable. These identities are fully automated, but rather than brute-forcing their way through defenses and playing to the law of large numbers (try enough times and eventually one gets through), they go for the long con.
A SuperSynthetic identity slowly makes small, humanlike transactions, such as deposits and account balance checks, over the course of weeks or even months. Then, once that low-key activity has failed to set off any alarms in a bank's defenses (and it is usually financial institutions that are the targets), the fraudster applies for a line of credit, with the odds of approval boosted by the slow drip of legitimate-seeming activity. Once that line is extended, the damage is done, and the money might as well be considered lost.
To borrow a metaphor from classic science fiction, and a recent pair of movies, think of the way Dune's warriors use blades rather than firearms. Their advanced shields repel anything that comes at them at high speed, but a blade that crosses the threshold slowly enough avoids detection and can pierce the opponent. When security systems are geared toward detecting fake identities created en masse and thrown at a company, the subtler, slower frauds can sneak in undetected.
Are Banks and Businesses Overcorrecting?
In response to the increased activity and sophistication of recent synthetic fraud attempts, many organizations have greatly heightened their security measures—sometimes to the detriment of their customers.
In theory, it seems like a good idea to leave no stone unturned and turn the sensitivity up, flagging every user action that is even slightly suspicious. In practice, you end up with a good number of false positives and annoyed customers. Plenty of things that normal people do, such as using a VPN, going by a nickname (Sam instead of Samantha), or checking an account while traveling somewhere new, read as suspicious to security systems tuned to flag anything outside the routine.
As a result, these businesses have to spend time and resources manually reviewing instances that should have coasted through without issue. They may ask the user to clear additional identity verification hurdles, and there are few ways to chase off customers more effectively than making them wait or work harder for a process they have come to expect to be quick and seamless.
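To make the false-positive problem concrete, here is a minimal sketch of the kind of over-sensitive, all-or-nothing rule set described above. The rule names and fields are hypothetical, not any real vendor's logic; the point is that escalating whenever any single signal deviates from the routine flags perfectly ordinary customers.

```python
# Minimal sketch (hypothetical rules and field names, not any vendor's actual logic):
# an over-sensitive policy that sends a login to manual review if *any* single
# signal deviates from the routine.

def overly_strict_review(event: dict) -> bool:
    """Return True if the event should be escalated to manual review."""
    return (
        event.get("uses_vpn", False)                               # many legitimate users browse via VPN
        or event.get("name_entered") != event.get("name_on_file")  # "Sam" vs. "Samantha"
        or event.get("country") != event.get("home_country")       # checking an account while traveling
    )

# A perfectly ordinary traveler trips all three checks:
event = {
    "uses_vpn": True,
    "name_on_file": "Samantha Lee",
    "name_entered": "Sam Lee",
    "country": "FR",
    "home_country": "US",
}
print(overly_strict_review(event))  # True -> false positive, manual review, annoyed customer
```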
In this way, increased fraud hurts businesses on a second front, impacting customer relationships and causing some users to abandon the company or avoid getting on board in the first place.
Who Can You Trust?
Financial institutions and other industries must start fighting fraudsters on the axis of trust. If they can quickly confirm that a user is likely to be who they claim to be, they can spare that user any delays or additional security steps, which is the most beneficial path forward from a customer service perspective. If, on the other hand, the user or applicant is not trustworthy, the additional hoops can be laid out. This way, overall security is not weakened, and there is no tradeoff in smoothing the path for real customers.
It all comes down to how trust is built into security protocols and how reliable those trust assessments are. Preemptively assessing identities by tapping into anonymized identity networks that review a user's actions, rather than simply their credentials, is one way to gauge trustworthiness and tailor an application or checkout process accordingly, without the user ever realizing what is happening on the backend. "Trust signal" factors such as a user's geolocation, the recency and frequency of their activity, whether they use VPNs or travel often, and their activity on other in-network sites and apps should all be considered and combined into a single, clear picture of trustworthiness.
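As a rough illustration of the idea, the sketch below combines a handful of trust signals into a single score and uses that score to decide how much friction to apply. The signal names, weights, and thresholds are hypothetical assumptions for illustration only; a real identity network would weigh far more factors and calibrate them against observed fraud.

```python
# Minimal sketch, assuming hypothetical signal names, weights, and thresholds.
# The idea: combine "trust signals" into one score, then tailor the amount of
# friction instead of treating every anomaly as fraud.

from dataclasses import dataclass

@dataclass
class TrustSignals:
    geolocation_matches_history: bool  # has this user been seen in this region before?
    days_since_last_activity: int      # recency
    weekly_activity_count: int         # frequency
    uses_vpn: bool                     # common among legitimate users too
    in_network_sites_seen_on: int      # activity across other in-network sites and apps

def trust_score(s: TrustSignals) -> float:
    """Combine signals into a 0-1 trust score (illustrative weights only)."""
    score = 0.0
    score += 0.30 if s.geolocation_matches_history else 0.0
    score += 0.25 if s.days_since_last_activity <= 30 else 0.0
    score += 0.15 if s.weekly_activity_count >= 3 else 0.0
    score += 0.05 if s.uses_vpn else 0.10            # VPN use lowers, but does not zero out, trust
    score += min(s.in_network_sites_seen_on, 4) * 0.05  # up to 0.20 for broad network activity
    return round(score, 2)

def friction_level(score: float) -> str:
    """Tailor the flow to the score rather than challenging everyone."""
    if score >= 0.75:
        return "frictionless"       # no extra steps
    if score >= 0.45:
        return "light_challenge"    # e.g., a one-time passcode
    return "full_verification"      # document check or manual review

signals = TrustSignals(True, 12, 5, True, 3)
score = trust_score(signals)
print(score, friction_level(score))  # 0.9 frictionless
```

The design point is that highly trusted users see no extra steps at all, while lower scores trigger progressively heavier verification rather than a blanket lockout.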
With these considerations tracked, financial institutions and other businesses fighting off fraud can avoid tuning their security systems to Fort Knox levels that dissuade real users who are just trying to check their account balance.
Deduce uncovers stolen and AI-driven synthetic identities that bypass incumbent fraud detection systems. Deduce tracks almost 200 million identities multiple times every week, monitoring 1.5 billion online activities daily from 150,000 websites and apps. Multi-layered digital forensics reduce friction and boost customer acquisition performance while capturing incremental fraud.