
Explainable AI: Taking the algorithm out of the black box

Jul 2, 2020 / Technology

A 2020 report from the World Economic Forum and the University of Cambridge found that nearly two-thirds of financial services leaders expect to broadly adopt AI within the next two years, compared with just 16 percent today.

As technical teams at banks of all sizes evaluate how and where to apply AI in their IT infrastructure, they must consider this: What are the risks associated with ceding control of sensitive and important decision-making to an AI application?

While AI algorithms can handle enormous volumes of information, their inner workings are typically unknown and unexplainable. The result is essentially a “black box” that sometimes produces bias in its pursuit of accuracy. In sensitive applications such as banking, this trade-off is undesirable: any increase in prediction error or bias could make it more difficult for certain segments of the population to get access to credit and mortgages.

A foundational fix is needed to deal with the risk of bias. “Explainable AI,” which can offer both accuracy and transparency, can help bankers address these concerns.

Take the recent example of the Apple Card, managed by Goldman Sachs. What started as a tweet thread alleging gender bias (including from Apple co-founder Steve Wozniak and his spouse) quickly became a brand-damaging spectacle. A number of women reported receiving significantly lower credit limits than their male spouses, even when all of their other input factors were the same or, in some cases, stronger. Apple ended up with a black eye, and a regulator opened an investigation into Goldman Sachs and its algorithmic prediction practices.

How could this issue have been avoided, or at least handled better? Explainable AI provides that foundational fix I mentioned earlier. It combines both accuracy and transparency in a way that reduces the risks of deploying AI solutions in the banking industry.

To better understand how this aligns with classic development practices, let’s look at the high-level lifecycle of an Explainable AI application:

  1. Examine the data for any potential bias.
  2. Check individual model predictions for a deeper understanding of model behavior.
  3. Validate every iteration of the model for bias or suspicious performance issues.
  4. Keep track of all models being deployed to production.
  5. Monitor models in production for both performance and fairness.

The data used to train AI is critical: if the data is flawed to begin with, that flaw permeates everything the algorithm does going forward. Therefore, we need a way to check for bias and other issues in both data and models at every stage of the AI lifecycle. We also need human oversight throughout the training process; without it, we may end up building a black-box AI application.
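
To make the first of those checks concrete, here is a minimal sketch in Python of a pre-training bias check on the raw data. It assumes a pandas DataFrame with hypothetical gender and approved columns, and the four-fifths threshold is a common rule of thumb rather than anything prescribed here.

```python
import pandas as pd

def disparate_impact_check(df: pd.DataFrame, group_col: str, label_col: str,
                           threshold: float = 0.8) -> dict:
    """Compare favorable-outcome rates across groups in the training data.

    Flags the dataset if any group's rate falls below `threshold` times the
    best-off group's rate (the common "four-fifths" rule of thumb).
    """
    rates = df.groupby(group_col)[label_col].mean()   # approval rate per group
    ratio = float(rates.min() / rates.max())          # worst-off vs. best-off group
    return {
        "rates_by_group": {k: float(v) for k, v in rates.items()},
        "disparate_impact_ratio": round(ratio, 3),
        "flagged": ratio < threshold,
    }

# Hypothetical loan-application training data.
data = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [ 1,   0,   0,   0,   1,   1,   1,   0 ],
})
print(disparate_impact_check(data, group_col="gender", label_col="approved"))
# The 0.25 vs. 0.75 approval rates yield a ratio of 0.333, so the data is flagged for review.
```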

In the Apple Card example, the issue might have been avoided if humans had visibility into every stage of the AI lifecycle. During validation, they could have seen how the model behaved when a single input factor, such as gender, was isolated and compared against the global dataset. They also could have had the ability to override the algorithm’s prediction if they felt it was unfair or incorrect.
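
One way validation can surface that kind of behavior is a counterfactual test: hold every other input constant, flip the isolated factor, and compare the two predictions. The sketch below is a hypothetical illustration; the toy_scorer, its feature names and its deliberately biased gender term are invented purely to show what such a check would flag.

```python
def counterfactual_check(predict_fn, applicant: dict, factor: str, alternative_value) -> dict:
    """Flip one input factor, hold everything else constant, and compare the scores."""
    flipped = dict(applicant, **{factor: alternative_value})
    original_score = predict_fn(applicant)
    flipped_score = predict_fn(flipped)
    return {
        "original": original_score,
        "flipped": flipped_score,
        "gap": abs(original_score - flipped_score),
    }

# Hypothetical credit-limit scorer with a deliberate gender effect, purely for illustration.
def toy_scorer(a: dict) -> float:
    base = 0.1 * a["income"] / 12 + 20 * (a["credit_score"] - 600)
    return base * (0.5 if a["gender"] == "F" else 1.0)

applicant = {"income": 120_000, "credit_score": 740, "gender": "F"}
result = counterfactual_check(toy_scorer, applicant, factor="gender", alternative_value="M")
print(result)  # a large gap, with everything else equal, is what a reviewer would escalate or override
```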

As you plan for and embark on your AI projects, here are some key guidelines for infusing visibility and insight into the final product.

First, make building Explainable AI a priority. This means thinking from the outset about the ethics of your AI, who is involved in developing your applications and how to tackle bias.

Second, infuse explainability across your AI development process. This means having the visibility and transparency into how results are produced, so that you can correct course as needed.
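
One common way to get that per-prediction visibility (an option, not something this piece prescribes) is per-feature attributions. The sketch below assumes the open-source shap and scikit-learn packages and a purely synthetic dataset.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Purely synthetic stand-in for an approval model and its training data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # e.g. income, credit score, utilization
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic approval labels

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Attribute a single applicant's prediction to the individual input features,
# producing the kind of per-prediction record a reviewer can inspect and log.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X[:1])
print(attributions)
```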

Next, ensure that your AI models are continuously monitored once they are deployed. Models can encounter data in production that differs sharply from what they were trained on, so continuous monitoring is essential to catch model decay and to keep outliers and data drift from skewing their decision-making.
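
Drift monitoring is often implemented by comparing the distribution of a live feature against the one the model was trained on. Below is a minimal sketch of a population stability index (PSI) check using only NumPy; the credit-score numbers are synthetic, and the 0.2 alert threshold is a common rule of thumb rather than a fixed standard.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """Measure how far a production feature has drifted from its training distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) for empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(1)
training_scores = rng.normal(680, 50, size=10_000)    # credit scores seen at training time
production_scores = rng.normal(650, 60, size=2_000)   # scores arriving in production

psi = population_stability_index(training_scores, production_scores)
print(f"PSI = {psi:.3f}")            # > 0.2 is a common signal of significant drift
if psi > 0.2:
    print("Alert: feature drift detected -- trigger review or retraining.")
```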

Lastly, implement an AI governance process. This means developing a framework where you can track and manage your models, validate your models for fairness and bias on a regular basis, ensure humans are in the loop to approve or override sensitive decisions, and continuously monitor and improve your models. With these approaches, you can build AI with trust, visibility and insights.
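
A lightweight starting point for such a framework is a registry that records, for every model version, its fairness validation results and the human who approved it. The sketch below is a hypothetical, in-memory illustration rather than a prescription of any particular governance tool; the field names and example values are invented.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, Optional

@dataclass
class ModelRecord:
    """One governed model version: what was validated, by whom, and when."""
    name: str
    version: str
    fairness_metrics: dict                # e.g. the validation results for this version
    approved_by: Optional[str] = None     # the human in the loop for sensitive decisions
    registered_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def approve(self, reviewer: str) -> None:
        self.approved_by = reviewer

    @property
    def deployable(self) -> bool:
        # Block deployment until fairness checks pass and a human has signed off.
        return self.approved_by is not None and not self.fairness_metrics.get("flagged", True)

registry: Dict[str, ModelRecord] = {}

record = ModelRecord(name="credit-limit-model", version="1.4.2",
                     fairness_metrics={"disparate_impact_ratio": 0.91, "flagged": False})
registry[f"{record.name}:{record.version}"] = record

print(record.deployable)                  # False -- no human approval yet
record.approve("risk-officer@bank.example")
print(record.deployable)                  # True -- validated and approved
```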


Krishna Gade is the co-founder and CEO of Fiddler Labs.