Explainable AI (XAI) has long been a fringe discipline in the broader world of AI and machine learning. It exists because many machine-learning models are either opaque or so convoluted that they defy human understanding. But why is it such a hot topic today?
AI systems making inexplicable decisions are your governance, regulatory, and compliance colleagues’ worst nightmare. But aside from this, there are other compelling reasons for shining a light into the inner workings of AI. For one, as more and more companies adopt AI, they find that the business stakeholders who will rely on AI in their workflows won’t trust its decisions unless they have at least a general understanding of how those decisions were made. Opaque AI also obscures “second-order insights,” such as the nonintuitive correlations that emerge from the inner workings of a machine-learning model.
Explainable AI Is Not One-Dimensional
There are many different flavors of explainable AI and a whole host of related techniques. Determining the right approach depends on whether:
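To make the idea of an explainability technique concrete, here is a minimal sketch of one common model-agnostic approach: permutation feature importance. The dataset, model, and parameter choices below are illustrative assumptions, not drawn from the article, and other techniques (e.g., SHAP values or interpretable surrogate models) would serve equally well.

```python
# A hedged sketch of permutation feature importance using scikit-learn.
# All data here is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: 5 features, only 3 of which are informative.
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

Techniques like this explain an otherwise opaque model from the outside, which is one reason the choice of approach depends on the model and the audience for the explanation.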
To learn more about the different approaches to explainable AI and best practices for applying them, please see my recent piece on the subject. Also, if you’re in the market for a solution to help ensure that your AI systems are explainable, please see my recent report, “New Tech: Responsible AI Solutions, Q4 2020” (client access is required for the research featured here).
The original article by Brandon Purcell, principal analyst at Forrester, is here.
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends. Image credit: iStockphoto/Besjunior