Fighting Online Fraud in Banking With Privacy-First Collaborative AI
- By Karen Kim, Human Managed
- December 08, 2024
Online fraud, costing ASEAN economies billions of dollars annually, is a major security crisis.
Traditional banks and fintech companies have faced significant threats in recent years due to fast-evolving fraud techniques. Recent studies show that the average cost of a data breach in ASEAN reached an all-time high of USD3.33 million in 2024. Efforts from regulators and businesses show the rising importance of effective fraud management.
On October 24, 2024, the Monetary Authority of Singapore announced the Shared Responsibility Framework, which assigns duties to financial institutions (FIs) and telecommunication companies (telcos) to mitigate phishing scams and requires them to compensate affected scam victims where those duties are breached; it comes into effect on December 16, 2024. Recent reports in Malaysia reference “the establishment of a fraud intelligence network across Asean banks to enable real-time sharing of data on fraudulent activities, enhance threat detection, and foster a unified response to cyber risks, in line with the Asean Cybersecurity Cooperation Strategy (2021-2025).” A proactive, self-initiated partnership between Globe Telecom and the Bankers Association of the Philippines is also helping reduce financial scams.
So, how can we access better intelligence to fight online fraud in ASEAN? At Human Managed, we believe that the answer lies with the collaborative machine learning technology of Federated Learning and privacy preservation techniques.
But let’s first have a look at why fraud management needs a different approach in the first place.
Effective fraud management needs to overcome issues of data, privacy and cost
Over years of co-creating data-centric use cases with enterprise customers, we have learned that intelligence is particularly effective and trustable for outcomes when it has three defining qualities: traceable (based on verifiable sources and knowledge bases), timely (served at the right time for decision making) and fresh (generated from a recent collection of events and analysis).
However, three key factors limit organizations' use of AI to build systems for continuous fraud management: data quality, privacy and the cost of training large language models (LLMs).
Reusable, scalable and adaptable AI requires quality data.
A recent survey of 600 data leaders shows that “quality of data” is the top data-related obstacle to the adoption of generative AI and large language models (cited by 42%), followed by data privacy and protection (40%). Researchers also predict that if current LLM training trends continue, we may run out of available datasets between 2026 and 2032.
Data privacy is business critical.
Enterprises are concerned about the loss of data privacy and the misuse of their data. According to Cisco’s 2024 Data Privacy Benchmark study, data privacy is a critical element and enabler of customer trust: 94% of organizations report that customers would not purchase from them if they did not protect their data properly. There is also a clear understanding that promoting privacy is good business, with 95% of respondents saying benefits exceed costs and the average organization realizing a 1.6x return on its investment.
Specialized technologies are required to reduce large language model training costs.
Current estimates highlight that some of the biggest models cost USD100 million to train; the next generation could cost USD1 billion and the following iteration USD10 billion. To make Gen-AI economically viable, the industry is trending towards innovations in specialized technologies, from chips to software and a creative mix of models to reduce processing time and costs.
Collaborative Machine Learning Needed For Better Intelligence
While blockchain and cryptocurrencies have sparked regional interest, regulators in ASEAN remain cautious. Thailand’s Project Inthanon explored the use of blockchain for interbank payments in 2018, while Singapore’s Payment Services Act (PSA), enacted in 2019, provided regulatory guidance for digital token services. These technologies, along with tokenization for authentication, will be on the rise for building fraud management strategies. However, technologies are still needed to address the primary requirement: better-quality data and better-quality AI models.
For continuous and consistent fraud intelligence, a wide variety of data needs to be processed, analyzed and applied to well-defined problems, then distributed to the right channels at the right time. At Human Managed, our journey of building better intelligence has led us to the emerging solution of federated learning (FL) combined with privacy preservation techniques, which allows models to be trained across decentralized devices while keeping the data localized and secure: this is Privacy-First, Collaborative AI.
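To make the mechanics concrete, here is a minimal sketch of the federated averaging (FedAvg) idea in Python. The two simulated banks, the logistic regression model and all parameter values are illustrative assumptions for exposition, not a depiction of any production system.

```python
import numpy as np

# Minimal sketch of federated averaging (FedAvg), the core idea behind
# federated learning: each participant trains on its own data locally and
# shares only model parameters, never raw records. The model, data, and
# two-bank setup below are illustrative assumptions.

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training step: logistic regression via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid scores
        grad = X.T @ (preds - y) / len(y)          # logistic loss gradient
        w -= lr * grad
    return w                                       # only weights leave the client

def federated_average(client_weights, client_sizes):
    """Server-side aggregation: average client models, weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two simulated banks with private transaction features (random stand-ins).
rng = np.random.default_rng(0)
banks = [(rng.normal(size=(100, 4)), rng.integers(0, 2, 100)) for _ in range(2)]
global_w = np.zeros(4)

for _ in range(10):                                # communication rounds
    updates = [local_update(global_w, X, y) for X, y in banks]
    global_w = federated_average(updates, [len(y) for _, y in banks])

print("global fraud-model weights:", global_w)
```

Each round, participants send back only updated weights; the coordinating server never sees a single transaction record, which is what makes the approach privacy-first by construction.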
Fraud Management Use Cases For Privacy-First, Collaborative AI
Human Managed recently launched a whitepaper on federated learning that includes industry expert insights on fraud management. Aloysius Chong Kin Faa, head of fraud & projects at PayNet, shares the intent behind the recently launched National Fraud Portal in Malaysia for fraud response and proactive detection for the financial ecosystem, along with use cases for eKYC and credit risk scoring.
He says, “While we don’t have a federated learning (FL) use case in production today, we are exploring its potential as a more secure and collaborative approach to fraud detection within our ecosystem.”
Centralized Intelligence for Fraud Management
As a primarily B2B organization, PayNet facilitates the exchange of data and intelligence at a sector level, covering both payments and other value-added services (e.g., fraud intelligence sharing), via the newly launched National Fraud Portal (NFP). The NFP was co-designed by Bank Negara Malaysia (BNM), PayNet and other financial institutions to strengthen the capabilities of the National Scam Response Centre (NSRC).
The NFP was developed to facilitate collaboration and data sharing across financial institutions to combat online financial fraud and scams. Today, the system serves as a centralized channel for handling incidents received by the NSRC call center and financial institutions' customer complaint channels. It also utilizes models to trace and intercept victims' funds for recovery. This enables a comprehensive and continuous collection of standardized fraud labels for the development of fraud detection models, which is a crucial next step in our strategy to proactively combat fraud alongside our ecosystem players.
AI and ML will be essential, as manual processes cannot effectively manage fraud monitoring, detection and investigation at current payment volumes. However, given that most contextual customer data resides in silos within the financial institutions, exploring FL applications for collaborative model development with our participants is a potential alternative to centralized data sharing and fraud modeling.
Separately, PayNet has been exploring other value-added services with fintech providers that may involve AI and ML models with FL applications (e.g., alternate credit scoring, digital ID, etc.).
Electronic Know Your Customer (eKYC) Solution
Providers have successfully demonstrated FL in this space, especially in cross-device applications. For example, in an eKYC facial recognition and proofing solution, sensitive biometric data such as facial images, videos or fingerprints is not shared directly with the user's bank. Instead, the model is deployed locally on the user's device, and only the model learnings are sent back to the bank's central system. This ensures the user's data remains secure on the device. Such use cases are more feasible, given that a single organization can deploy and maintain a common model architecture.
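As a sketch of that flow, the snippet below shows what the device-side step might look like: raw biometrics stay on the phone, and only a small, noise-perturbed weight update travels to the bank. The linear matching model and the Gaussian noise (a simple stand-in for a formal differential privacy mechanism) are assumptions for illustration, not any vendor's actual product.

```python
import numpy as np

def device_side_ekyc_update(local_weights, face_embeddings, labels,
                            lr=0.05, noise_scale=0.01):
    """Runs on the user's phone. Raw biometrics (face_embeddings) never leave
    the device; only a noised weight delta is returned to the bank's server.
    The linear model and Gaussian noise are illustrative assumptions."""
    w = local_weights.copy()
    preds = 1.0 / (1.0 + np.exp(-face_embeddings @ w))   # match/no-match score
    grad = face_embeddings.T @ (preds - labels) / len(labels)
    w -= lr * grad
    delta = w - local_weights
    # Differential-privacy-style perturbation: the server sees only a noised update.
    return delta + np.random.normal(0, noise_scale, delta.shape)

# What actually crosses the network: a small vector, not images or video.
rng = np.random.default_rng(1)
update = device_side_ekyc_update(np.zeros(128),
                                 rng.normal(size=(20, 128)),   # stand-in embeddings
                                 rng.integers(0, 2, 20))       # stand-in labels
print("payload sent to bank:", update.shape)   # (128,) floats only
```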
Credit Risk Scoring
Credit risk scoring is potentially low-hanging fruit for FL adoption. All banks manage credit risk, aim to minimize bad loans, and would stand to benefit from shared insight on risky or fraudulent loan applications. Additionally, regulatory standards and centralized credit intelligence agency data for customer credit assessments have led to more standardized datasets, making feature-set convergence more achievable. While individual banks' credit assessment models may differ, these are generally less complex models, enabling banks to more easily develop and adopt an FL global model as an alternate credit risk scoring reference alongside their internal models.
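The sketch below illustrates how a bank might consult such an FL global model as a secondary reference next to its own score. All weights and feature values here are hypothetical stand-ins; in practice the global weights would come from rounds of federated averaging across participating banks, as sketched earlier.

```python
import numpy as np

def score(weights, applicant):
    """Logistic credit-risk score in [0, 1]; higher means riskier (assumed convention)."""
    return 1.0 / (1.0 + np.exp(-applicant @ weights))

# Hypothetical weights: the bank's internal model vs. the FL-trained global model.
internal_w = np.array([0.8, -0.3, 1.1, 0.2])
global_w   = np.array([0.6, -0.1, 1.4, 0.5])

applicant = np.array([1.2, 0.4, 0.9, -0.2])    # standardized credit features
internal_score = score(internal_w, applicant)
global_score   = score(global_w, applicant)

# Use the global model as a reference: flag applications where the shared
# ecosystem view diverges sharply from the bank's own assessment.
if abs(internal_score - global_score) > 0.2:
    print(f"review: internal={internal_score:.2f}, global={global_score:.2f}")
else:
    print(f"scores agree: internal={internal_score:.2f}, global={global_score:.2f}")
```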
Incentive Structures for Collaborative Fraud Management
Experts also believe that while the technology exists for federated learning to be put into action easily, bigger socio-economic challenges lie in participation incentives, centralized stewardship and regulatory know-how. In the previously mentioned whitepaper, Rishu Saxena, principal specialist, AI/ML strategy, APJ at Snowflake, says,
“There are use cases for federated learning, but industries are decentralized, and enterprises work in such silos that there is no overarching point of view on common problems. The real issue is not building a centralized system of intelligence; the real issue is intelligence sharing across an industry.” Saxena poses two questions that need to be addressed for collaborative AI to be successful in ASEAN.
- What is the incentive to cooperate? Participating organizations need to see a viable economic incentive for an FL system to work. At the moment, every enterprise has its own model training system with its own data structures and formats. Reconfiguring datasets to match a standardized schema, or running an FL system alongside existing systems, involves cost and effort. Since this technology is still in its infancy, organizations may not be interested in experimenting with it, even though, intuitively, collective intelligence would make sense for risk assessment.
- Who owns the centralized intelligence system? If organizations were to cooperate in a federated learning network, who owns the centralized model and intelligence? Most probably, FL could work in highly regulated sectors such as banking and healthcare, where a central authority, such as a central bank or a central medical authority, could own and lead the federated learning platform. For such authorities, the incentive is better outcomes for the citizens of the country. Such a central authority could eventually mandate organizations in the regulated sector to participate. Hence, ownership and regulation by a central authority may be needed.
Scaling Federated Learning in ASEAN with Modular Technologies
After identifying business requirements, organizations must decide how to implement, operationalize and scale federated learning use cases. The trend towards smaller and specialized technologies, from chips to software, each suited to a different type of problem, drastically reduces processing time and costs.
One way of addressing communication, computation, and data and model heterogeneity challenges is via modular architectural data platforms. The key benefits such a modular system needs to deliver are interoperability, agility, relevance and privacy for all data being processed, whether for one or multiple participants on the platform.
At Human Managed, we have developed a Collective Intelligence platform, hm.works, that delivers AI-native solutions for enterprises' cyber, digital and risk problems.
This platform is a modular collection of 14 functions and 92 microservices abstracted into infrastructure, software, data, and AI stacks. It integrates data from any source and develops AI models for business context and specific use cases. Through federated learning and AI-powered apps, the HM collective intelligence platform can build a distributed intelligence sharing system for organizations that will ensure that:
- relevant data with the appropriate business context is processed for each participant via context models
- data structures are organized for interoperability and real-time exchange of information across different systems via STIX v2.1 standards (see the sketch after this list)
- detections and insights for individual enterprises are agile and fast via Nano Models (e.g., cyber incidents and threat intel)
- privacy across multiple platform members is preserved due to collaborative training of machine learning models without actual sharing of raw data via distributed federated learning.
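To ground the STIX v2.1 point above, here is a minimal example using the open-source stix2 Python library: a fraud detection packaged as a standard Indicator object that any STIX-aware participant system can ingest without custom parsing. The URL, labels and description are invented for illustration, and this is not a depiction of hm.works internals.

```python
# Illustration of STIX v2.1 interoperability, using the open-source
# `stix2` library (pip install stix2). Objects and values are hypothetical.
from stix2 import Indicator, Bundle

# A fraud detection expressed as a STIX 2.1 Indicator object.
phishing_indicator = Indicator(
    name="Phishing URL observed in banking scam",
    description="URL seen in SMS phishing targeting retail bank customers",
    pattern="[url:value = 'http://secure-bank-login.example.com']",
    pattern_type="stix",
    labels=["phishing"],
)

# Bundles package objects for exchange between participant systems.
bundle = Bundle(phishing_indicator)
print(bundle.serialize(pretty=True))
```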
Conclusion: Better Intelligence for Fraud Management is Collective
Collective intelligence for building risk assessment frameworks across the region will be critical to unlocking the multiplier impact of the ASEAN digital economy. Digital financial services in Southeast Asia are at an inflection point: they are projected to generate revenues of USD38 billion, accounting for 11 percent of the total financial services industry, with digital payments expected to exceed USD1 trillion by 2025. Banks and financial services providers are increasingly seeking advanced machine learning and AI solutions to tap into this potential.
Effective and trustable outcomes for data-centric use cases rely on intelligence that is traceable, timely and fresh. Federated learning with privacy protection techniques delivered via modular technologies has emerged as a disruptive and innovative way of scaling collective intelligence. However, while technology continues to improve, the real challenge is adopting a mindset of collective intelligence and implementing a practice of intelligence sharing amongst diverse and distributed entities.
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends.
Karen Kim, Human Managed
Karen Kim is the chief executive officer of Human Managed.