Why 2022 Can Be the Year Financial Services Providers Embrace Ethical AI
- By Scott Zoldi, FICO
- September 14, 2022
Nearly two years after a global pandemic sent most banking customers online, the majority of financial institutions appear to be embracing digital transformation. In the Asia Pacific (APAC) region, the digital banking landscape is in an exciting stage of growth, especially in Southeast Asia. But many still have a long way to go when it comes to adding AI to key areas of their business.
A FICO poll from 2019 found that 91 percent of APAC banks felt they lagged behind banks in the US and Europe when it came to implementing AI. Key reasons given included a lack of available talent, the use of legacy systems, and cost. However, I believe that one explanation for the lag in uptake is many banks’ reluctance to use artificial intelligence (AI) and machine learning technologies. Key to this is a skepticism that has developed out of AI bias.
AI has become deeply mistrusted even among many of the workers who deploy it, with research finding that 61 percent of knowledge workers believe the data that feeds AI is biased. According to FICO research, 93 percent of C-level analytic and data executives said that “ensuring AI is used responsibly and ethically in business context is a huge, but critical task.”
Of course, even with these concerns, there is considerable pressure in Asia, as in other parts of the world, for financial institutions to compete — a dynamic that intensifies the "digitize or die" mentality, even if it means some imperfect solutions are put into production.
Organizations of all sizes can embrace ethical AI
Even if there is some reticence, ignoring AI isn’t a feasible avoidance strategy because it’s already being embraced by the business world at large. Plus, AI and machine learning present the best possible solution to a problem encountered by many financial institutions: After implementing anytime, anywhere digital access – and collecting the high volume of customer data it produces – they often realize they’re not actually leveraging this data appropriately to serve customers better than before.
This ability to deliver personalized experiences and serve customers better is only possible when customer data can be analyzed and leveraged through the responsible application of explainable, ethical AI and machine learning.
It’s therefore unsurprising that spending on AI systems in the APAC region is expected to increase from USD17.6 billion in 2022 to around USD32 billion in 2025 as businesses invest in AI to improve customer insights, increase efficiency, and accelerate decision making.
The impact of a mismatch between increased digital access and customers’ unmet needs can be seen in a recent FICO study, which found that while most customers in APAC were highly satisfied with their main banking providers, about 35 percent said they had opened a new banking account or taken up a new product elsewhere. The growing appetite for digital banking services has also led to as many as 72 percent of retail banking consumers choosing a fintech product despite having the option to use their main bank’s services.
The importance of responsible AI is a message that is spreading throughout the APAC region as central monetary bodies such as the Monetary Authority of Singapore look into establishing a methodology to determine the degree of transparency required to explain and interpret the predictions of machine learning models.
The solution is for financial institutions of all sizes to implement AI that is explainable, ethical and responsible, incorporating interpretable, auditable and humble techniques that will make AI a reliable, safe, mainstream business technology.
Why Ethics by Design is the solution
September 15, 2021 saw a major step toward a global standard for Responsible AI with the release of the IEEE 7000-2021 Standard. It provides businesses (including financial services providers) with an ethical framework for implementing artificial intelligence and machine learning by establishing standards for:
- The quality of data used in the AI system;
- The selection processes feeding the AI;
- Algorithm design;
- The evolution of the AI’s logic;
- The AI’s transparency.
As the Chief Analytics Officer at one of the world’s foremost developers of AI decisioning systems, I have been advocating Ethics by Design as the standard in AI modeling for years. The framework established by IEEE 7000 is long overdue. As it solidifies into broad adoption, I see three new, complementary branches of AI becoming mainstream:
- Interpretable AI focuses on machine learning algorithms that specify which machine learning models are interpretable versus those that are explainable. Explainable AI applies algorithms to machine learning models post-hoc to infer what behaviors drove an outcome (typically a score), whereas Interpretable AI specifies machine learning models that provide an irrefutable view into the latent features that actually produced the score. This is an important differentiation; interpretable machine learning allows for exact explanations (versus inferences) and, more importantly, this deep knowledge of specific latent features allows us to ensure the AI model can be tested for ethical treatment.
- Auditable AI produces a trail of details about itself, including variables, data, transformations, and model processes including algorithm design, machine learning and model logic, making it easier to audit (hence the name). Addressing the transparency requirement of the IEEE 7000 standard, Auditable AI is backed by firmly established model development governance frameworks such as blockchain.
- Humble AI is artificial intelligence that knows if it is unsure of the right answer. Humble AI uses uncertainty measures such as a numeric uncertainty score to measure a model’s confidence in its own decisioning, ultimately providing researchers with more confidence in decisions produced.
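The idea behind Humble AI — a model that flags when it is unsure of the right answer — can be sketched in a few lines. This is an illustrative example only, not FICO's implementation; the function name, thresholds, and the shape of the uncertainty score are hypothetical assumptions:

```python
# Hypothetical sketch of a "humble" decision step: the model reports an
# uncertainty score alongside its prediction, and any decision whose
# uncertainty exceeds a tolerance is routed to a fallback (e.g. human
# review) rather than being acted on automatically.

def humble_decision(score: float, uncertainty: float,
                    approve_threshold: float = 0.7,
                    max_uncertainty: float = 0.2) -> str:
    """Return an automated decision only when the model is confident."""
    if uncertainty > max_uncertainty:
        return "refer_to_human"  # the model "knows it is unsure"
    return "approve" if score >= approve_threshold else "decline"

print(humble_decision(0.91, 0.05))  # confident, high score -> approve
print(humble_decision(0.91, 0.35))  # high score but uncertain -> refer_to_human
print(humble_decision(0.40, 0.05))  # confident, low score -> decline
```

In practice the uncertainty measure would come from the model itself (for example, calibrated probabilities or the spread of scores across an ensemble), but the principle is the same: low-confidence decisions are surfaced rather than silently acted upon.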
When implemented properly, Interpretable AI, Auditable AI and Humble AI are symbiotic: Interpretable AI takes the guesswork out of what is driving the machine learning, for explainability and ethics; Auditable AI records a model’s strengths, weaknesses, and transparency during the development stage; and together they establish the criteria and uncertainty measures assessed by Humble AI.
Together, Interpretable AI, Auditable AI and Humble AI provide financial services institutions and their customers with not only a greater sense of trust in the tools and technologies driving digital transformation, but also the benefits those tools can provide.
This post was written by Scott Zoldi, the chief analytics officer of FICO. The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends.
Image credit: iStockphoto/tumsasedgars