Glitch in the Matrix: How Asia Cracks Down on Evolving Fraud
- By Winston Thomas
- January 31, 2024
Gone are the days when Asia was solely viewed as a breeding ground for scams. Now, the tables have turned. The breakneck economic growth that fueled the region's rise has created a paradox: while regulations mandate basic anti-fraud measures, growth remains king. Companies often view security as a compliance checkbox, leaving gaping vulnerabilities for tech-savvy predators.
So, it’s no surprise that the region, fragmented by diverse regulations, porous borders, and a patchwork of law enforcement systems, has become a fraudster’s paradise. And it’s not just about stolen credit card numbers, either. We're talking about large-scale identity theft, corporate espionage, and even human trafficking fueled by online scams. The victims? Not just naive tourists but everyday citizens and businesses caught in the crossfire of this digital gold rush.
So, can Asia weave a stronger cybersecurity net without stifling its entrepreneurial spirit? And how can AI help? The future of Asia's digital landscape hinges on these questions, and the stakes couldn't be higher.
Boom breeds bust
Imagine: millions of people across Asia, newly minted converts to the cashless gospel, tapping phones and scanning QR codes with carefree abandon. It’s a digital nirvana. Well, not quite.
Lurking in the shadows are cybercriminals exploiting a system built for speed, not security. This is the dark side of Asia's digital payments boom, and the stakes are getting higher with every transaction.
At the heart of this frenzy lies a seemingly innocuous feature: irrevocable settlements. Unlike traditional methods where funds are "reserved" before the final transfer, new payment platforms move money instantly and irreversibly, explains Ian Holmes, director and global lead for enterprise fraud solutions at SAS.
Take PromptPay in Thailand as an example. “The money is moved irrevocably between accounts within 15 seconds,” explains Holmes. “That includes all the technical network-time confirmation that you have enough money in your account to make the payment and the fraud checks.”
Once the money's gone, it's gone, leaving victims scrambling in the dust. "The problem," says Holmes, "then becomes detecting fraud in real time.” Asia's fragmented business landscape, with diverse regulations and tech setups, makes such an effort a nightmare.
Enter the banks, strong-armed by regulators, to become the frontline defense. “Regulators are really forcing banks and financial institutions to have real-time fraud detection to ensure that they are the best place to stop the funds before they actually leave these accounts,” says Holmes.
Today’s banks have digital moats patrolled by AI sentries scanning for suspicious activity. But building these fortresses takes time and resources. Besides, some banks are more legacy medieval watchtowers than high-tech strongholds.
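Conceptually, the real-time check Holmes describes means a bank must derive features and score a payment inside the settlement window, before the money becomes irrevocable. Here is a minimal, purely illustrative Python sketch; the velocity features, weights, and latency budget are hypothetical stand-ins for a bank's trained model, not any real platform's rules:

```python
# Illustrative real-time transaction scoring: derive simple velocity
# features from recent account history, score, and decide within a
# strict latency budget. All thresholds here are made-up examples.
import time

def velocity_features(history, tx):
    """Derive toy velocity features from the account's recent activity."""
    recent = [t for t in history if tx["ts"] - t["ts"] < 3600]  # last hour
    avg = sum(t["amount"] for t in recent) / len(recent) if recent else tx["amount"]
    return {
        "tx_count_1h": len(recent),
        "amount_ratio": tx["amount"] / avg,          # vs. recent average
        "new_payee": tx["payee"] not in {t["payee"] for t in history},
    }

def score(features):
    """Toy additive risk score standing in for a trained model."""
    s = 0.0
    s += 0.3 if features["tx_count_1h"] > 5 else 0.0   # burst of payments
    s += 0.4 if features["amount_ratio"] > 10 else 0.0  # unusually large
    s += 0.3 if features["new_payee"] else 0.0          # first-time payee
    return s

def decide(history, tx, threshold=0.6, budget_ms=200):
    """Approve, or hold for review, inside the settlement window."""
    start = time.monotonic()
    risk = score(velocity_features(history, tx))
    elapsed_ms = (time.monotonic() - start) * 1000
    assert elapsed_ms < budget_ms, "scoring must fit the settlement window"
    return "hold" if risk >= threshold else "approve"
```

The key design point is that everything happens synchronously on the payment path: there is no batch job to fall back on once settlement is final.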
When two moats merge: AML meets fraud
In reality, banks have two mighty digital moats guarding against financial foes. One, vigilant against fraud, the other a bulwark against the insidious flow of dirty money.
For years, these moats stood separate, each manned by specialized forces. But as the financial landscape evolves, a daring question arises: can they be merged, creating an impregnable fortress against a new breed of enemy?
The explosive rise of digital payments and cryptocurrencies drives many to take this question seriously. “Besides, there are a lot of issues with fraud leading into terrorism financing and things like that,” says Holmes. “This means that the regulators must be able to stop the fraud in the first place to help prevent money laundering and corruption.”
But obstacles lurk within these very walls. Fraud detection is inherently customer-centric, requiring interaction with, and explanations to, those defrauded. AML, on the other hand, operates in the shadows, its investigations cloaked in secrecy. The last thing you want to do is give a potential fraudster a hint of an ongoing investigation. Finding common ground between these two activities can be tricky.
It's early days, admits Holmes. But what’s clear is that not addressing the potential merging of these two separate processes is simply not an option. Banks need to explore it, not just for efficiency but for the very security of the financial system.
The mules in the machine
Remember the Wild West, where shady figures lurked at the edges of town, their roles murky, their motives suspect? Today's online frontier harbors a similar breed: the money mule.
There are countless unsuspecting individuals who are unwittingly carrying out criminals' dirty work—transferring stolen funds and opening accounts under false pretenses. It's a disturbing twist on the drug mule trope played out in the digital sphere.
But identifying these mules is a delicate dance. Falsely accusing someone of fraud can be disastrous for their finances and the bank's reputation. It's not like we can just block every suspicious account, says Holmes. Imagine the chaos if someone's legitimate windfall triggered suspicion.
Now, enter the chilling prospect of synthetic identities. In countries with lax controls, criminals can easily fabricate entire digital personas with fake IDs and online footprints. These "ghost accounts" blur the lines, making it nearly impossible to distinguish a human mule from an AI puppet.
“It's a massive issue in many geographies,” warns Holmes. “In many countries, there is weak governance of people's identities, both physical and digital. Then it's very difficult, and that's where synthetic identity comes in, because it's hard to triangulate them.”
“Which is why proof of life becomes key,” he continues. “We need to know that there is a person at the end of a device, not a bot trained to be human-like.”
The federated future of fraud detection
The battle against financial fraud is an ever-evolving arms race. AI has emerged as a powerful weapon in this high-stakes game, but the fight is far from over.
Traditional AI models for fraud detection often rely on narrow use cases and extensive manual feature engineering, a time-consuming and resource-intensive process.
Generative AI (GenAI) offers a better alternative. GenAI can significantly reduce the workload associated with feature engineering by automatically generating realistic data, allowing banks to fine-tune their models and identify new fraud patterns more efficiently.
However, GenAI is a double-edged sword. Just as banks are leveraging this technology, so too are fraudsters. Using generative adversarial networks (GANs), they can create synthetic data to "game" the anti-fraud systems, making their attacks more sophisticated and harder to detect.
In this cat-and-mouse game, speed is not on the banks’ side. “Fraudsters are utilizing GANs much faster than the industry, to be honest," says Holmes. They can scrape public data to create deepfakes, perfectly timed scams that exploit personal information and special occasions like birthdays.
So, how can banks stay ahead of the curve? One answer is creating a network of banks, each with unique data and AI models, collaborating to fight fraud collectively. This is the concept of federated learning.
"For example, the SAS platform being able to process multiple models in combination really helps you to have that baseline, long-term model as well as fill gaps in your current machine learning models," explains Holmes.
Federated learning holds immense promise for the future of fraud detection. By harnessing the collective power of multiple AI models and data sets, banks can create a more robust and resilient defense against ever-evolving threats.
But like any shiny new gadget, federated learning isn't without its kinks. Data privacy and security within the federated network are paramount concerns. Think of it like a high-stakes poker game, where each player keeps their cards hidden but contributes to the pot (the training data) to build a better hand (the fraud detection model). Trust is critical, and ensuring no one peeks at each other's cards or marks the deck is essential.
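The poker analogy maps onto the standard federated-averaging loop: each participant trains on its own hidden data, and only the resulting model weights travel to a coordinator, which averages them into a shared model. A toy Python sketch, using synthetic data and a simple logistic model rather than any real bank's system or SAS's implementation:

```python
# Toy federated averaging (FedAvg): three "banks" train locally on
# private synthetic data and share only model weights, never raw
# transactions. Purely illustrative.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One bank's local step: logistic regression via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted fraud probability
        w -= lr * X.T @ (p - y) / len(y)        # gradient step on local data
    return w

def federated_round(global_w, bank_datasets):
    """Coordinator averages the banks' weights, weighted by dataset size."""
    updates = [local_update(global_w, X, y) for X, y in bank_datasets]
    sizes = np.array([len(y) for _, y in bank_datasets], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])                  # hidden "fraud pattern"
banks = []
for _ in range(3):                              # three banks, disjoint data
    X = rng.normal(size=(200, 2))
    y = (X @ true_w > 0).astype(float)
    banks.append((X, y))

w = np.zeros(2)
for _ in range(10):                             # ten federation rounds
    w = federated_round(w, banks)
```

After a few rounds, the shared weights recover the pattern no single bank could see in full, while each bank's transaction data never leaves its own premises.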
Federated learning is still a young buck; its full potential is yet to be unlocked. But the early signs are promising, hinting at a future where AI-powered banks outwit even the craftiest fraudsters. Now, wouldn’t that be nice for a change?
Image credit: iStockphoto/tonefotografia
Winston Thomas
Winston Thomas is the editor-in-chief of CDOTrends. He likes to piece together the weird and wonderful tech puzzle for readers and identify groundbreaking business models led by tech while waiting for the singularity.