Deep within the encrypted bowels of the dark Web, beyond the reach of regular search engines, hackers and cybercriminals are brazenly trading a new breed of digital fakes. Yet unlike AI-generated deepfake audio and video—which embarrass the likes of politicians and celebrities by making them appear to say or do things they never would—these imitators are aimed squarely at relieving us of our hard-earned cash.
These highly detailed fake user profiles, known as digital doppelgängers, convincingly mimic many facets of our digital device IDs, along with many of our tell-tale online behaviors when transacting and e-shopping. The result: credit card fraudsters can use the doppelgängers to attempt to evade the machine-learning-based anomaly-detecting antifraud measures upon which banks and payment service providers have come to rely.
It is proving to be big criminal business: many tens of thousands of doppelgängers are now being sold on the dark Web. With corporate data breaches fueling further construction of what market analyst Juniper Research calls “synthetic identities,” Juniper estimates online payment fraud losses will jump to $48 billion by 2023, more than double the $22 billion lost in 2018.
The existence of a doppelgänger dark market was first discovered in February 2019 by security researcher Sergey Lozhkin and his colleagues at Kaspersky Lab, the Moscow-based security software house. His team was carrying out its regular threat analyses of several underground forums, “when we discovered a private forum where Russian cybercriminals were hosting information about something called the Genesis Store,” Lozhkin says.
Figure. The home page of Genesis Market, a referral market focused on scam prevention for both vendors and buyers. As the site says, “Our mission is to create a market where scams are not be tolerated from neither vendors nor buyers. Period.”
Fraud-on-Demand
When the security researchers gained access to it, Genesis turned out to be an invitation-only, crime-as-a-service, identity-theft e-shop containing sophisticated doppelgänger datasets mimicking 60,000 people, in many cases including their stolen logins and passwords for online shops and payment service providers. Each identity was for sale at prices ranging from $5 to $200, depending on the amount of useful credit-card-hacking data it contained. Once launched in a browser, each doppelgänger could then be used for fraud.
What is going on here, it turns out, is that one of the major pillars of latter-day antifraud technology is being turned against itself. To detect fraudulent transactions in real time, credit card companies, banks, and payment processors use commercial machine learning (ML) anomaly-detection software, which determines whether the dataset covering the devices and behaviors of the user attempting a transaction is close enough to a previously authenticated template that digitally describes the legitimate user.
This authenticated template is known in the payments industry as the user’s “digital mask.” Such masks have two major components: a device fingerprint, which includes factors like your commonly used IP addresses, firmware version, installed plugins, time zone, screen resolution, preferred window sizes, GPU type, OS version, and browser cookies; and a behavioral profile, based on factors such as how ‘clicky’ you are when using a mouse or touchscreen, how long you typically spend at an e-store, your usual items of interest, how much you usually spend, and whether you tend to buy digital or physical goods.
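To make this concrete, here is a minimal Python sketch of a digital mask and a naive similarity test against a live session. The field names, weights, and scoring formula are illustrative assumptions only; production systems use far richer features and proprietary models.

```python
# A minimal sketch of a "digital mask" and a naive similarity check.
# All field names, weights, and thresholds are invented for illustration;
# no real payment processor is known to use this exact scheme.
from dataclasses import dataclass

@dataclass
class DigitalMask:
    fingerprint: dict   # stable device traits: OS, GPU, time zone, ...
    avg_spend: float    # rolling average of the user's past spend

def mask_similarity(mask: DigitalMask, session: dict) -> float:
    """Return a 0..1 score for how closely a live session matches the mask."""
    # Device component: fraction of fingerprint attributes that match exactly.
    fp = session["fingerprint"]
    fp_score = sum(mask.fingerprint.get(k) == v for k, v in fp.items()) / len(fp)
    # Behavioral component: penalize spend far from the user's average.
    ratio = session["spend"] / mask.avg_spend
    behavior_score = min(ratio, 1.0 / ratio)   # 1.0 when spend is typical
    return 0.6 * fp_score + 0.4 * behavior_score

mask = DigitalMask(
    fingerprint={"os": "Windows 10", "tz": "UTC+1",
                 "gpu": "GTX 1060", "screen": "1920x1080"},
    avg_spend=45.0,
)
# A session from an unusual time zone, spending 20x this user's norm:
session = {"fingerprint": {"os": "Windows 10", "tz": "UTC+8",
                           "gpu": "GTX 1060", "screen": "1920x1080"},
           "spend": 900.0}
print(f"similarity = {mask_similarity(mask, session):.2f}")  # low -> flag
```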
However, here’s the thing: hackers can penetrate payment systems and steal copies of those masks. Alternatively, and more likely, they can plant malware on poorly defended computers and smartphones that transmits device and behavioral data back to them over a botnet, letting them build their own competing masks from scratch. They then launch those masks through a customized browser the Genesis gangsters call Tenebris, connecting via an IP-address-mimicking proxy to make transactions that fool the fraud detection systems.
Lozhkin says the Genesis dark market has effectively turned the decades-old practice of credit card fraud (also known as ‘carding’) into a new, highly targeted, industrial-scale criminal activity. It’s one slick high-tech operation: for instance, the Kaspersky team found the fraudsters had written algorithms that automatically price each doppelgänger, based on its fraudulent earnings potential.
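Kaspersky has not published the pricing algorithm itself, so the sketch below is purely a hypothetical illustration of how a value-based pricing rule might weigh a profile’s contents. The weights and field names are invented; only the $5 floor and $200 ceiling come from the observed price range.

```python
# Hypothetical illustration only: Kaspersky has not disclosed the actual
# Genesis pricing algorithm. Weights and field names are invented; the
# $5 floor and $200 ceiling match the price range the researchers observed.
def price_profile(profile: dict) -> float:
    value = 5.0                                              # observed floor
    value += 25.0 * len(profile.get("payment_logins", []))   # bank/e-shop credentials
    value += 2.0 * len(profile.get("other_logins", []))      # low-value accounts
    if profile.get("behavioral_data"):                       # behavior data makes
        value += 20.0                                        # the mask more convincing
    return min(value, 200.0)                                 # observed ceiling

print(price_profile({"payment_logins": ["examplebank.com", "e-shop.example"],
                     "other_logins": ["forum.example"],
                     "behavioral_data": True}))              # -> 77.0
```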
Because the Genesis Store uses botnets to harvest the data from which its fake masks are built, criminals can target specific victims directly. “A search panel lets users search for specific bots, website logins and passwords, the victim’s country, operating system, date the profile first appeared on the market—everything is searchable,” Kaspersky Lab says in an analysis of the Genesis offerings posted on its Securelist blog.
It appears to be popular. “We are seeing monthly increases in the numbers of [data-stealing] bots that are being sold on the market, and also in the numbers of cybercriminals that want to buy stolen information, so it’s still growing,” says Lozhkin.
Combating the Threat
On the front line of the antifraud industry, doppelgängers are indeed being seen as a latter-day adversary. “Our customers continue to see fraud attacks of many different types, including those using doppelgängers, and the landscape of these attacks changes daily,” says David Excell, founder of Featurespace, one of the leading antifraud anomaly detection technology providers, with bases in Cambridge, U.K. and Atlanta, GA.
To fight digital payment fraud, Featurespace has developed a probabilistic machine learning platform that alerts finance firms to fraud attempts. Called the Adaptive Real-time Change Identifier (ARIC), it lets companies “construct their own doppelgänger for each consumer, in the form of an individual behavioral profile,” says Excell. “Using these profiles, we’re able to identify if the interaction for a customer is normal, based on how they’ve behaved in the past. Typically, a fraudster is revealed when they attempt to monetize their attack, because at that point, their behaviors aren’t matching the ones we expect to see from the actual customer.”
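Featurespace does not disclose how ARIC works internally, so the following is only a generic sketch of the underlying idea: model each customer’s past behavior individually and flag new transactions that are improbable under that model. The log-normal spend model and the z > 3 threshold are assumptions made for illustration.

```python
# Generic sketch of per-customer probabilistic profiling, NOT ARIC itself:
# model past transaction amounts as log-normal and flag improbable amounts.
import math
import statistics

def fit_profile(past_amounts):
    """Fit a log-normal spend profile; assumes at least two past amounts."""
    logs = [math.log(a) for a in past_amounts]
    return {"mu": statistics.mean(logs), "sigma": max(statistics.stdev(logs), 0.1)}

def anomaly_score(profile, amount):
    """Standard deviations between a new amount and the customer's norm."""
    return abs(math.log(amount) - profile["mu"]) / profile["sigma"]

history = [32.50, 48.00, 41.20, 55.75, 38.90]   # this customer's usual spend
profile = fit_profile(history)

for amount in (44.00, 980.00):
    z = anomaly_score(profile, amount)
    verdict = "flag for step-up authentication" if z > 3.0 else "allow"
    print(f"${amount:>8.2f}: z = {z:4.1f} -> {verdict}")
```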
Like its rivals in the anomaly detection arena, Featurespace does not reveal how its proprietary algorithms spot that mismatch. However, Excell says, defending against digital fraud is about far more than the ‘secret sauce’ behind its ever-changing algorithms. “Protecting a financial institution from fraud is no easy task and relies on a combination of data, technology and processes,” he says.
One measure finance firms can take in the war against fraud is to stop running their business units as standalone, disconnected operations. “Financial institutions tend to operate multiple business units in silos, making it difficult to join systems together. This is one of the weaknesses that fraudsters often try to exploit,” says Excell.
By linking the data sources in such units together and having oversight of customer interactions across many channels, Excell says, “financial institutions can build individual behavioral profiles in real time to spot the unusual or incorrectly mimicked behavior of the fraudster.” As an example, he notes the channels a bank would need to integrate include traffic on its online service and mobile app, transactions undertaken in branches, and those made over the phone.
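At the data level, that kind of integration amounts to merging per-channel event feeds into one chronological stream per customer. The snippet below sketches this with Python’s standard library; the channel names and event fields are invented for illustration.

```python
# Sketch of cross-channel event fusion: merge already-sorted per-channel
# feeds into one chronological timeline a behavioral profile can consume.
# Channel names, timestamps, and event types are illustrative assumptions.
import heapq

web    = [(1, "cust42", "web",    "login"), (5, "cust42", "web", "add_payee")]
mobile = [(3, "cust42", "mobile", "balance_check")]
phone  = [(4, "cust42", "phone",  "password_reset")]

# heapq.merge streams the sorted feeds into one timeline ordered by timestamp.
for ts, cust, channel, event in heapq.merge(web, mobile, phone):
    print(ts, cust, channel, event)
```

A profile built on the merged stream can then spot cross-channel patterns, such as a phone-channel password reset followed minutes later by a new payee added on the web, that no single silo would ever see.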
It’s the sunk costs in older technology, such as the kit and code in those silos, that are leaving some firms behind in the cyber arms race against fraudsters, says Ian Thornton-Trump, an IT security analyst at cybersecurity insurer and underwriter AmTrust Financial Services in New York City. “The current cybersecurity problem has little to do with security controls or their effectiveness: the arch nemesis of cybersecurity is network complexity and technological debt,” he says.
The problem, he says, is that while physical network complexity has changed little, logical network complexity has gone through the roof with the addition of low-cost, off-premise, cloud-hosted software-as-a-service (SaaS), platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS) facilities to those physical networks. Understanding that complexity, adopting a roadmap that reduces it, and removing the least secure parts (probably the old legacy systems) is one key to enhancing security, Thornton-Trump says. “If you know what ‘normal’ is, and what systems you need to protect, this would make anomaly detection far more effective.”
Bounty Hunters vs. Fraudsters
Still another way to defeat doppelgänger fraudsters is to harness the skills of crowdsourced ethical hackers, says Laurie Mercer, a security engineer with London-based HackerOne, a bug bounty firm that offers cash rewards to the more than 400,000 ethical hackers it has registered on its books. By probing corporate systems, they can help firms find the kind of vulnerabilities that let criminals plant their data-stealing bots.
“Online payment systems and e-commerce brands have sophisticated technology in place to prevent fraud, but humans can often outwit these technical controls. And, it takes a human to outwit a human,” Mercer says, suggesting his organization’s hacker army is the best way to unleash human intelligence against the fraudsters. He’s not alone: Goldman Sachs, the U.K. challenger bank Starling, and the Germany-based direct bank N26 are all financial organizations working with white-hat hackers to secure their digital assets. “Anyone who finds a security vulnerability on these companies’ assets can report them to the company, potentially earning a bounty,” Mercer says.
The human factor matters at another level, too, says Amanda Widdowson, cybersecurity champion at the Chartered Institute of Ergonomics & Human Factors in London. She notes the Genesis Store’s doppelgängers are more of a threat if they include login/password pairs, something people themselves have the power to keep from spilling to fraudsters.
“The temptation is to use the same, easy-to-remember and so easy-to-guess password for multiple applications. The danger is, if one application is compromised, our other applications are also at risk. What’s needed is more investment in alternative methods of user validation such as biometrics—face, iris, and fingerprint recognition—to reduce reliance on limited human memory capacity,” Widdowson says.
Kaspersky Lab agrees, urging that people use strong passwords, biometrics, and multi-factor authentication to keep the bots out. “People should be practicing safe cybersecurity habits, implementing the same methods they would to prevent any malware infection on their personal devices,” says Lozhkin.
Most importantly, he says, since the doppelgängers were constructed via botnets, “strong cooperation and quick information sharing between cybersecurity vendors and law enforcement agencies around the world will be key to a fast shutdown of such services.”
For the future, however, the banking and finance industry’s move to voice-based account management services—following on the success of voice assistants like Amazon’s Alexa, Google Home, and Apple’s Siri—may end up making those services vulnerable to attacks via deepfake audio. That could add a voiceprint component to digital doppelgängers, with serious ramifications for such services.
Indeed, the threat that fake AI-generated voices pose to humans, rather than to voice assistants, is already more than apparent. In September, it emerged that the CEO of a U.K.-based energy company had been convinced he was talking on the phone to his boss in Germany, who asked him to make a $220,000 cash transfer to Hungary, which he did. However, his supposed superior was actually a criminal using highly convincing voice-synthesis software that had been trained to mimic every aspect of the boss’s voice, from his tonality to his German accent. The money has not been recovered.
Such attacks will force changes on the payments industry. “Since the attack surface of deepfakes is primarily banking information and money transfer functions, I believe we will see mandatory holds for cash amounts over certain levels, with additional authorizations required to complete money transfers,” says Thornton-Trump.
“Although this may impede business agility, the risk of being victimized by a multi-thousand- or multimillion-dollar fraud exceeds the inconvenience.”
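As a toy illustration of the kind of rule Thornton-Trump envisions, a hold-and-authorize gate on large transfers might look like the following; the thresholds are entirely invented.

```python
# Toy illustration of a mandatory-hold rule; thresholds are invented
# assumptions, not an actual industry standard or regulation.
HOLD_THRESHOLD = 10_000.00   # hypothetical: transfers above this are gated
HOLD_HOURS = 24              # hypothetical cooling-off period

def route_transfer(amount: float, extra_authorization: bool) -> str:
    if amount <= HOLD_THRESHOLD:
        return "release immediately"
    if not extra_authorization:
        return "reject: additional authorization required"
    return f"hold for {HOLD_HOURS}h, then release"

# The $220,000 deepfake-voice transfer described above would have been gated:
print(route_transfer(220_000.00, extra_authorization=False))
```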
Further Reading

Losses from Online Payment Fraud to More than Double by 2023, Reaching $48 Billion Annually, Juniper Research, Nov. 20, 2018, https://www.juniperresearch.com/press/press-releases/losses-from-online-payment-fraud

Digital Doppelgangers: Cybercriminals cash out money using stolen digital identities, Kaspersky Lab Securelist blog, April 9, 2019, https://securelist.com/digital-doppelgangers/90378/

Cimpanu, C., Cybercrime market selling full digital fingerprints of over 60,000 users, ZDNet, April 9, 2019, https://www.zdnet.com/article/cybercrime-market-selling-full-digital-fingerprints-of-over-60000-users/

Providing a unique behavioral analytics approach to prevent fraud attacks, Featurespace, 50 Smartest Companies of the Year 2017, The Silicon Review, http://bit.ly/2lcD8lE

Marks, P., Bounties Mount for Bugs, ACM News, Aug. 23, 2018, https://cacm.acm.org/news/230582-bounties-mount-for-bugs/fulltext

Statt, N., Thieves are now using AI deepfakes to trick companies into sending them money, The Verge, Sept. 5, 2019, http://bit.ly/2mKoD8O