Explaining Explainable AI

Image credit: iStockphoto/Besjunior

Explainable AI (XAI) has long been a fringe discipline in the broader world of AI and machine learning. It exists because many machine-learning models are either opaque or so convoluted that they defy human understanding. But why is it such a hot topic today?

AI systems making inexplicable decisions are your governance, regulatory, and compliance colleagues’ worst nightmare. But aside from this, there are other compelling reasons for shining a light into the inner workings of AI. For one, as more and more companies adopt AI, they find that the business stakeholders who will rely on AI in their workflows won’t trust its decisions unless they have at least a general understanding of how those decisions were made. Opaque AI also hides “second-order insights,” such as nonintuitive correlations that emerge from the inner workings of a machine-learning model.

Explainable AI Is Not One-Dimensional

There are many different flavors of explainable AI and a whole host of related techniques. Determining the right approach depends on whether:

  • Your use case requires complete transparency or if interpretability is sufficient. Use transparent approaches for high-risk and highly regulated use cases. For less risky use cases where explainability is still important, consider an interpretability technique such as LIME or SHAP, which produces post-hoc explanations of an opaque model’s predictions.
  • Your stakeholders require global or local explanations. Some stakeholders, such as regulators, may want to understand how the entire model operates — a global explanation. Other stakeholders, such as your end customers, may want local explanations that clarify how the system made the decision that impacted them. Tailor your explanations to the technical acuity of your stakeholders. Not everyone’s a data scientist.
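To make the local-explanation idea concrete, here is a minimal sketch of a LIME-style local surrogate. It is illustrative only, not the LIME library itself: the `black_box` model, the perturbation scale, and the Gaussian proximity kernel are all assumptions chosen for the example. The idea is to sample points around one instance, query the opaque model, and fit a weighted linear model whose coefficients serve as local feature attributions.

```python
import numpy as np

# Hypothetical opaque model: we can only query its predictions.
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def local_surrogate(x0, n_samples=500, scale=0.5, seed=0):
    """LIME-style sketch: fit a weighted linear model around x0."""
    rng = np.random.default_rng(seed)
    # Perturb the instance we want to explain.
    X = x0 + rng.normal(0.0, scale, size=(n_samples, len(x0)))
    y = black_box(X)
    # Weight each sample by its proximity to x0 (Gaussian kernel),
    # so the surrogate is faithful near the instance being explained.
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * scale ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X])
    W = np.diag(w)
    coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return coef[1:]  # local feature attributions (intercept dropped)

x0 = np.array([0.0, 1.0])
attributions = local_surrogate(x0)
# Near x0 the true local slopes are cos(0) = 1 and 2 * 1.0 = 2,
# so the attributions should land roughly near [1, 2].
print(attributions)
```

Note that these attributions are local: they describe the model’s behavior near this one instance, which is exactly what an affected end customer needs, whereas a regulator asking for a global explanation would need a different tool.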

To learn more about the different approaches to explainable AI and best practices for applying them, please see my recent piece on the subject. Also, if you’re in the market for a solution to help ensure that your AI systems are explainable, please see my recent report, “New Tech: Responsible AI Solutions, Q4 2020” (client access is required for the research featured here).

The original article by Brandon Purcell, principal analyst at Forrester, is here.

The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends.