How ChatGPT Could Worsen the ‘Scam Pandemic’
- By CK Leo, FICO
- May 08, 2023
Scammers are increasingly using generative AI technology to perpetrate their schemes, and there are mounting concerns about how this could worsen the scam pandemic. In 2020, scammers used generative AI to mimic a company director's voice and duped a bank manager in Hong Kong into authorizing the transfer of HKD35 million. Since then, the technology has only grown more convincing, enabling scammers to generate human-like text in seconds and avoid the spelling and grammatical errors that have long been hallmarks of scam messages.
In 2023, ChatGPT captured the public's imagination with its ability to generate highly realistic and coherent responses to prompts ranging from creative writing to technical assistance. Unfortunately, scammers also saw the potential and began exploring ways to use the AI chatbot to generate malware. ChatGPT's remarkable capacity to mimic the language style of specific organizations and institutions enables scammers to create highly detailed and realistic copy for scam messages or fake websites. Generating scam messages has become a low-effort, high-reward endeavor, making phishing and impersonation scams even harder to detect.
The rise of generative AI looks set to bolster the already effective toolset employed by scammers. In Singapore, the number of reported scam cases rose by 32.6% in 2022 compared to 2021, with losses totaling SGD660 million. In Australia, consumers lost a record of more than AUD3.1 billion to scammers in the past year, an increase of more than 80% from the year before, according to the Australian Competition and Consumer Commission (ACCC). Although not all scams directly involve AI, the increasing frequency of scams is worrisome, and scammers' broadening access to AI technology raises concerns about the public's vulnerability.
To combat the ease with which fraud and scams can be automated using precision tools such as ChatGPT, banks and financial institutions must match the pace of technology and evolve the protections they offer consumers. By leveraging AI and machine learning, such as sophisticated behavioral analytics and the resulting fraud and scam detection scores, banks can significantly improve detection accuracy and react in real time to the increasing volume of fraud and scam cases facilitated by the criminal use of ChatGPT.
While ChatGPT does have restrictions in place against generating malicious content, savvy scammers can easily bypass them. The Singapore Police Force has started monitoring ChatGPT-related crimes. Last month, the policing organization Europol issued an international advisory about the potential criminal use of ChatGPT and other large language models. Similarly, the U.S. Federal Trade Commission has been sharing its concerns about generative AI's capacity to deceive the public.
This should come as no surprise, as criminals are constantly on the lookout for new methods and tools for their schemes.
This should sound alarm bells for banks and financial institutions in the region, especially as digital banking services grow in popularity and consumers become keen adopters of real-time payments. Now that the line between poorly generated scam messages and expertly generated ones has blurred, customers are less likely to spot fraudulent messages, and reliance on behavioral analytics to detect changes in payment behavior will become far more important.
How banks can outsmart AI-powered scammers
With scam methods evolving at such a rapid pace, consumer education will be essential. By regularly communicating with customers, banks can provide useful advice on scams and fraud prevention, along with practical checks individuals can follow to protect themselves. This is especially pertinent for AI-powered scams, such as those using generative AI, where the playbook against such threats will need to be refreshed. It will be crucial for banks to encourage customers to keep their contact information updated so that they can receive the latest fraud or scam alerts.
Banks must also increase their use of real-time fraud and scam detection models to stop payments from leaving accounts as a result of all-too-convincing ChatGPT-powered social engineering attacks on customers. The proliferation of mobile payment apps and new open banking standards in the wake of the pandemic has fueled the growth of real-time payment scams. We have all heard of approaches like the "hey mum, I lost my phone" message that tricks people into sending money to an account controlled by scammers.
Targeted profiling of customer behavior can now spot scams and has yielded impressive results: FICO has found that this approach detects 50% more scam transactions. Banks can use such models to learn typical customer behaviors and flag anything suspicious, such as adding a new payee and preparing to send them a large amount of money.
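To illustrate the idea (this is a minimal sketch, not FICO's actual scoring models), behavioral profiling boils down to comparing a new payment against a customer's own history. The hypothetical rule below flags a transfer when it goes to a never-before-seen payee and its amount sits far outside the customer's usual range; production systems would use many more behavioral signals and learned models rather than a single threshold.

```python
# Illustrative behavioral-profiling check (assumed, simplified rule):
# flag a payment when the payee is new to this customer AND the amount
# is several standard deviations above the customer's historical norm.
from statistics import mean, stdev

def is_suspicious(history, payee, amount, z_threshold=3.0):
    """history: list of (payee, amount) tuples for this customer's past payments."""
    known_payees = {p for p, _ in history}
    amounts = [a for _, a in history]
    new_payee = payee not in known_payees

    if len(amounts) < 2:
        # Too little history to estimate a norm; treat new payees as risky.
        return new_payee

    mu, sigma = mean(amounts), stdev(amounts)
    unusually_large = sigma > 0 and (amount - mu) / sigma > z_threshold
    return new_payee and unusually_large

history = [("grocer", 85.0), ("landlord", 1200.0), ("grocer", 92.0),
           ("utility", 140.0), ("landlord", 1200.0)]

print(is_suspicious(history, "landlord", 1150.0))      # known payee, typical size -> False
print(is_suspicious(history, "unknown-acct", 9500.0))  # new payee, outsized amount -> True
```

Even a crude rule like this shows why the "new payee plus large amount" pattern mentioned above is a useful signal: both conditions together are rare in legitimate behavior but common in authorized-push-payment scams.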
By leveraging AI, machine learning, and education, banks can better prevent AI-powered fraud and scams and protect their customers from financial threats such as account takeovers and imposter scams.
CK Leo, FICO’s lead for fraud, security and financial crime in Asia Pacific, wrote this article.
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends. Image credit: iStockphoto/napong rattanaraktiya