The XAI Revolution: The Race to Explain the Unexplainable
- By Winston Thomas
- May 06, 2024
The AI world is abuzz with a new battle cry: Explainability. Consumers want it, regulators are sniffing around, and governments are eyeing it suspiciously. The days of the inscrutable AI “black box” are numbered. Or are they?
Here’s the rub: explaining how an AI model does its thing is tricky. So, is there a better way to become explainable, or will our sudden demand for XAI chain those fancy neural networks before they spread their wings?
Explaining the XAI craze
In early February this year, Forrester published a report, “The State Of Explainable AI, 2024.” It showed that explainability, or explainable AI (XAI), is seeping into the corporate consciousness.
The big numbers show it: 55% of business and technology professionals who see XAI as more than three-letter IT jargon say their organizations are on the XAI train. Another 13% plan to hop aboard in the next 12 months.
Brandon Purcell, Forrester’s vice president and principal analyst, who authored the report, says the push stems partly from existing regulations created long before GenAI entered the public consciousness.
“For instance, in the U.S., there’s the Fair Lending Act. You need to be able to show your work when you decide, as a bank, whether or not to extend someone's credit. If you’re denying a loan, you have to have good reason to show that you’re not doing it in a biased way against any sort of protected groups,” he explains.
Banks have to show how they decide who gets a loan, and proving they aren’t discriminating is a lot more complex when the decision-maker is an algorithm. Hence, the sudden push for XAI.
Looming AI-specific regulations are fueling the XAI fire too, notably the new E.U. Artificial Intelligence Act, which carries steep fines and regulates AI by use case.
Purcell also sees an internal reason for the hype: closing the trust gap between AI investment and skeptical users. Projects get derailed when the folks using the AI don’t understand it and, therefore, don’t trust it. Plus, hallucinations and inherent bias have many users questioning predictive scores.
“They’re worried that the information being provided is inaccurate. If they have to make a critical decision based upon it, they might not use it,” says Purcell.
This need for trust is a significant reason companies take XAI seriously.
The lost benefit
Purcell thinks companies are missing out on another major benefit, though.
A lot of predictive AI work stops at the output scores. But XAI helps data science teams and business users go deeper, revealing why, for example, a customer might leave in the first place. That insight is far more valuable at a systemic level than any individual score.
“When people build and use predictive models, they get very focused on the propensity scores,” says Purcell.
XAI can dig deeper into the different interaction points, signals, and triggers that lead to the churn. “Those things can be much more valuable on a systemic level than that actual output that only impacts that one customer,” says Purcell.
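To make that concrete, here is a minimal sketch of the idea in Python, using the open-source SHAP library on a hypothetical churn model. The dataset, feature names, and model choice are all illustrative assumptions, not anything from Forrester’s research:

```python
# A minimal sketch: moving from propensity scores to churn drivers with SHAP.
# The dataset, feature names, and model are hypothetical, for illustration only.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1_000
X = pd.DataFrame({
    "tenure_months": rng.integers(1, 72, n),
    "support_tickets": rng.poisson(2, n),
    "monthly_spend": rng.normal(60.0, 15.0, n),
})
# Synthetic label: short tenure plus many support tickets drives churn.
y = ((X["tenure_months"] < 12) & (X["support_tickets"] > 3)).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# The usual stopping point: a propensity score per customer.
propensity = model.predict_proba(X)[:, 1]

# The extra step: per-customer SHAP attributions explaining each score.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape (n_customers, n_features)

# Averaging absolute attributions yields the systemic view Purcell describes:
# which signals drive churn across the whole customer base.
drivers = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(drivers.sort_values(ascending=False))
```

In this toy setup, tenure and support tickets should surface as the systemic churn drivers, which is exactly the kind of insight Purcell argues gets lost when teams stop at the score.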
So why aren’t companies demanding their data science teams embrace XAI for this purpose? It’s a question of framing.
“The reason is that when a predictive analytics project gets initiated, most of the time they use the propensity score to create an audience and reengage, let’s say, if it’s a customer churn model,” says Purcell.
“And so everyone gets laser-focused on the model’s prediction, and explainability and the insights within it become an afterthought.”
Purcell thinks companies should demand that their data science teams, the ones building the AI models, use XAI in this way. After all, data scientists can get tunnel vision with so many demands on them.
XAI isn’t simple, and LLMs make it worse
Here’s where it gets messy. XAI makes sense to number crunchers, but to a marketer looking at a SHAP (Shapley additive explanations) analysis, it might as well be Martian.
Purcell sees companies needing that “translator,” someone who bridges the geek-speak and the business needs.
“And that’s what’s missing in a lot of companies, that translation layer. Whether it’s a data storyteller or a dedicated resource who understands that business well and the context enough to illuminate what’s most interesting in the PDP [partial dependence plot] or SHAP values,” says Purcell.
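For readers who have never seen one, here is a minimal sketch of how such a plot is produced with scikit-learn, on the same kind of hypothetical churn data as above. Every name in it is an illustrative assumption:

```python
# A minimal sketch: producing the PDP Purcell refers to with scikit-learn.
# Data, model, and feature names are hypothetical, for illustration only.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "tenure_months": rng.integers(1, 72, 500),
    "support_tickets": rng.poisson(2, 500),
})
y = ((X["tenure_months"] < 12) & (X["support_tickets"] > 3)).astype(int)
model = GradientBoostingClassifier().fit(X, y)

# A PDP shows how the predicted churn probability moves, on average,
# as one feature varies. The "translation layer" turns the curve into a
# business sentence, e.g. "churn risk drops sharply past 12 months' tenure."
PartialDependenceDisplay.from_estimator(
    model, X, features=["tenure_months", "support_tickets"]
)
plt.savefig("churn_pdp.png")
```

The chart is the easy part; the translator Purcell describes is the person who turns that curve into a sentence the business can act on.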
This will be more important as companies continue to play with open-source and commercial large language models. Today, there’s virtually zero explainability for those.
“There are different ways like chain-of-thought reasoning. But explainability wouldn’t hold up in court. So if you’re using an LLM, you have opacity,” says Purcell.
It’s like adding a super-brain to your operations, except that super-brain has all kinds of baked-in problems you can’t untangle. Deploy an LLM, and you inherit every weakness and bias that comes with it.
This is one reason why data science teams are beginning to talk about large language model operations (LLMOps) and want to add XAI elements.
He is blunt about the risk of model behavior drifting from what its operators intend: “Companies need to realize that that misalignment is dangerous and also inevitable. This is what our research shows. And so companies need effective guardrails that not just look to stop it from happening but mitigate the impact when it does.”
Explainability = Eye of the beholder
XAI sounds great. But the whole conversation is built on an assumption: everyone agrees on what ‘explainable’ means. The fact is, not everyone does.
“I like to say explainability is in the eye of the beholder. Because it depends on the audience,” says Purcell.
What satisfies a regulator won’t mean squat to a customer who got denied a loan. And what a data scientist considers an explanation isn’t in the language the C-suite speaks.
Even the scope differs: a regulator may want a global explanation of how the model behaves across all its decisions. “Then there’s a local explanation. A customer wants to know why he or she didn’t get the loan,” says Purcell.
You also need to decide the language in which the explanation is conveyed, “which is hugely persona dependent.”
“There’s a lot of room for personalization of explanations,” adds Purcell.
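As a sketch of what a local explanation looks like in practice, here is hypothetical code that attributes a single applicant’s score to individual features with SHAP. The lending features and model are invented for illustration, and the plain-language translation at the end is the persona-dependent part Purcell mentions:

```python
# A minimal sketch: a "local" explanation for one hypothetical loan applicant.
# Features, data, and model are invented for illustration; this is not a
# real lending model and not anything described in Forrester's report.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
n = 800
X = pd.DataFrame({
    "income_k": rng.normal(55.0, 20.0, n),
    "debt_ratio": rng.uniform(0.0, 1.0, n),
    "late_payments": rng.poisson(1, n),
})
# Synthetic label: decent income and a low debt ratio drive approval.
approved = ((X["income_k"] > 40) & (X["debt_ratio"] < 0.5)).astype(int)
model = GradientBoostingClassifier().fit(X, approved)

# Pick one applicant and attribute their score to individual features.
applicant = X.iloc[[0]]
explainer = shap.TreeExplainer(model)
contrib = explainer.shap_values(applicant)[0]  # one row of attributions

# The persona-dependent part: translating numbers into plain language,
# e.g. "your debt-to-income ratio lowered the approval score the most."
print(pd.Series(contrib, index=X.columns).sort_values())
```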
The takeaway? Before we put all our faith in explainability, we have to get honest about who needs the explanation and what it even looks like. Maybe this is where you should start your XAI adventure after all.
Image credit: iStockphoto/Ole_CNX
Winston Thomas
Winston Thomas is the editor-in-chief of CDOTrends. He likes to piece together the weird and wondrous tech puzzle for readers and identify groundbreaking business models led by tech while waiting for the singularity.