What is Explainable Artificial Intelligence (XAI)? A guide to the basics


Artificial intelligence algorithms can make decisions and take actions on their own. Depending on the industry, the stakes of those decisions can be higher or lower. Where the stakes are high, the question of what lies behind a particular algorithm’s decision is asked more and more often. The problem is that we often don’t know how to answer it. This is where XAI, or Explainable Artificial Intelligence, comes to the rescue. Let’s take a look at what is involved in trying to understand how algorithms make key decisions.

What is explainable artificial intelligence (XAI)?

Explainable artificial intelligence refers to the processes and techniques that enable people to understand how artificial intelligence makes specific decisions. Why is this important?

Suppose a hospital implements a new algorithm that can detect lesions in diagnostic images. The program then analyzes the patient’s past test results and draws up a treatment plan. At this point a question arises: on what exact basis did the program make this decision and not another?

The problem is that today we often do not know. Why? Because more and more AI models are so-called black boxes: models whose structure and mode of operation are so complex that we are unable to understand them. In practice, this means we don’t know why a particular model made a particular decision. Returning to our example: the doctor may not understand why the algorithm drew up one treatment plan rather than another. In that case, can we expect them to take responsibility for such a decision? And if not them, then who?

Of course, the use of AI in health care is an extreme example. But it shows how important it is for us to know exactly what the specific decisions of individual artificial intelligence algorithms are based on.

Developments in the field of explainable artificial intelligence

Despite appearances, explainable artificial intelligence is not a new issue at all. The first mentions of the subject date back to the 1970s, although the term XAI was not yet in use at the time. They concerned so-called abductive reasoning in expert systems, including systems for medical diagnosis.

With the development of artificial intelligence and the possibilities of its implementation, the development of XAI has also accelerated. Since the decisions made by computer programs have an increasing impact on our lives, we increasingly demand explanations of the basis on which they are made. 

Therefore, modern XAI specialists are working to solve the following problems:

  • lack of transparency around artificial intelligence/machine learning models,
  • a lack of trust in programs using artificial intelligence, both among those who use them on a daily basis and among the public,
  • concerns about unintended but potentially negative or unethical decisions made by artificial intelligence,
  • the inability to maintain oversight and control over the decisions and actions of artificial intelligence. 

In other words, XAI specialists aim to open the “black boxes.” Without this, it will be impossible to use artificial intelligence algorithms, for example, in disease diagnosis or autonomous vehicles.

Principles and examples of explainable artificial intelligence

What conditions must a computer program meet to be considered XAI? This is captured by the four principles of explainable artificial intelligence formulated by NIST:

  • Explanation – the program must provide evidence or reasons for the decisions it makes,
  • Meaningful – the program must provide explanations that are understandable to the target audience,
  • Explanation Accuracy – the explanation provided must correctly reflect the reasons for generating a given result,
  • Knowledge Limits – the program operates only under the conditions for which it was designed and only when it has reached sufficient confidence in its output.

Importantly, individual models can be considered more or less explainable, depending on how many types of explanations they are able to offer us.  

Let’s add that explainability can be built into a given machine learning algorithm or added as a separate layer on top of it. In the former case, we are talking about so-called glass boxes (or white boxes), i.e. models whose results are explained on the basis of the model itself. Black-box models, on the other hand, are explained using methods that are independent of the model (model-agnostic).
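To make the model-agnostic idea concrete, one of the simplest such techniques is permutation importance: treat the model as an opaque prediction function, shuffle one input feature at a time, and measure how much the predictions degrade. The sketch below is a minimal illustration in plain Python; the `black_box_predict` function is a hypothetical stand-in for any real model whose internals we cannot inspect.

```python
import random

# Hypothetical "black box": a hand-rolled model whose internals we
# pretend not to know. It predicts 1 when a hidden weighted sum of the
# three input features is positive, else 0.
def black_box_predict(row):
    hidden_weights = [2.0, 0.1, -1.5]
    return 1 if sum(w * x for w, x in zip(hidden_weights, row)) > 0 else 0

# Tiny synthetic dataset (3 features per row); labels come from the
# model itself, so baseline accuracy is 100%.
X = [[1, 5, 0], [-1, 4, 1], [2, 3, -1], [-2, 2, 2], [1, 1, 1], [-1, 0, -2]]
y = [black_box_predict(row) for row in X]

def accuracy(data, labels):
    hits = sum(black_box_predict(r) == t for r, t in zip(data, labels))
    return hits / len(labels)

def permutation_importance(data, labels, n_repeats=20, seed=0):
    """Model-agnostic explanation: shuffle one feature column at a time
    and report the average drop in accuracy it causes."""
    rng = random.Random(seed)
    baseline = accuracy(data, labels)
    importances = []
    for col in range(len(data[0])):
        total_drop = 0.0
        for _ in range(n_repeats):
            shuffled = [row[col] for row in data]
            rng.shuffle(shuffled)
            perturbed = [row[:col] + [v] + row[col + 1:]
                         for row, v in zip(data, shuffled)]
            total_drop += baseline - accuracy(perturbed, labels)
        importances.append(total_drop / n_repeats)
    return importances

print(permutation_importance(X, y))
```

Because the technique only calls the model’s prediction function, it works the same way for a neural network, a gradient-boosted ensemble, or any other black box: a large accuracy drop for a feature suggests the model relies on it heavily.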

Implementation and use of explainable artificial intelligence

As we have already mentioned, explainable artificial intelligence is especially important in health care. Thanks to XAI, we will be able to understand why a particular diagnosis was made and on what basis a particular treatment plan for a patient was developed. 

In addition, XAI will prove itself in such cases as:

  • defense – XAI models can be used to explain decisions made, for example, by autonomous vehicles on the battlefield,
  • autonomous vehicles – XAI models can explain why a vehicle behaved a certain way in a given situation,
  • finance – XAI models can explain why a loan application was granted or denied,
  • financial fraud detection – XAI models can explain why a particular transaction was flagged as suspicious.

Summary

Until we understand how individual AI algorithms make decisions, the possibilities for using artificial intelligence will remain limited. The question, however, is whether we will ever be able to understand what goes on inside the “black box.” After all, we do not fully comprehend what lies behind human decisions either. Will understanding AI algorithms prove any easier?

Sources: 

https://nvlpubs.nist.gov/nistpubs/ir/2021/NIST.IR.8312.pdf

https://gcmori.medium.com/what-is-explainable-ai-xai-and-why-you-should-care-e0fd4663beac