June 11, 2021

What is Explainable AI and why does it matter

In the last decade, the use of artificial intelligence has made significant progress, and applications powered by AI have appeared in our daily lives. At the same time, concerns about understanding how AI algorithms make decisions have emerged. At DAIN Studios, we have studied how artificial intelligence can be ‘explained’. In this and the following articles, we open up what Explainable AI (XAI) is and what we can expect from it.

What is XAI?

Artificial Intelligence (AI) supported decision making is increasing rapidly across industries as well as in public services, raising societal demand for more transparency about how algorithms make decisions. For those affected by AI-assisted decisions, trust is crucial to any AI-based system’s success. Like electricity and automation in earlier industrial revolutions, AI has become a general-purpose technology, which further increases the need for trust. Explaining how AI makes decisions is therefore fundamental to ensuring the fair and prosperous development of this technology and its impact on society.

From 2010 onwards, the use cases of the modern era of AI were facilitated by scalable business solutions. Microsoft’s Xbox Kinect brought machine vision to our homes, Apple’s Siri taught us to talk to phones and DeepMind’s AlphaGo amazed us by beating the world’s best Go players. These are all early, technology-centric use cases. One could also consider them showcases of initial AI capabilities that are now taken for granted.

Now in 2021, almost every business and organization either depends on, has deployed, or has started building AI-powered solutions that use AI-assisted decision making. At face value, AI solutions such as facial recognition, voice recognition and predictive analytics allow businesses and organizations to rapidly profile users and predict their desires with uncanny accuracy. As such, these technologies are creating high-value data assets and AI portfolios for companies that monetize data and AI. However, the implications of these use cases are barely understood, raising ethical questions and driving a strong need for transparency in AI-assisted decisions. Explaining how AI makes its decisions and understanding the direct and indirect effects of those decisions is the driving motivation of our work.

Explainable Artificial Intelligence (XAI) is the field of study which focuses on improving the transparency in AI-assisted decision making. This clarity is reached by revealing what is inside the “Black Box” and explaining the processes and data that lead to AI-assisted decisions.

What is there to explain?

With the emergence of everyday AI use cases that make simple and complex decisions accessible to anyone with a digital device, domain experts and the general public have started to ask questions about decisions made by machines. Without understanding the basis for their decisions, it is difficult for us humans to trust and accept the machines. In many cases, we tend to trust humans more than machines when it comes to making decisions, even though both may suffer from similar decision bias. In several cases, however, algorithms have been shown to be significantly skewed, leading to bad decisions.

In addition to decision bias, the model the machine uses to make a decision may be very complex. This complexity may stem from a complicated data set used for training the model, or from the dynamics of the model constantly adapting itself as new input data is provided. These factors make manual verification of the system’s correctness difficult, if not impossible.

Concerns and questions regarding algorithm behaviour are most prominent when the machine is making decisions that have a significant impact on human lives. Such decisions arise, for example, when AI is diagnosing a disease or deciding who is eligible for, or prioritized to receive, certain services or benefits.

Explainability becomes even more important when algorithms produce racist or sexist outcomes (the algorithm itself may have no explicit, built-in unfairness, but the preference may have been learned from the training data). A recent article in the journal Nature explores how an algorithm widely used in US hospitals to allocate healthcare to patients has been systematically discriminating against black people.

There are many examples from insurance premiums to decisions on bank loans that show how AI-assisted decisions require transparency and explainability.

From these concerns, several questions arise. Specifically:

  • What kind of data does the system learn from?
  • How accurate is the output?
  • How did the algorithm come to this decision/recommendation?
  • Why weren’t the other alternatives chosen?

A more complete list of questions can be found e.g. in this paper. Some of these questions are easier to answer than others. For example, accuracy and error rates are often reported together with AI results, so they are quite straightforward to compute with a test set. But understanding the decisions of deep learning models containing millions of parameters that are connected in nontrivial ways is a much harder task.
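As a minimal sketch of the “easy” part, assuming a scikit-learn workflow (the sample data set and random-forest model below are purely illustrative), accuracy and error rate on a held-out test set can be computed like this:

```python
# Illustrative only: compute accuracy and error rate on a held-out test set.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Split the data into a training set and a test set the model never sees during training.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"accuracy={accuracy:.3f}, error rate={1.0 - accuracy:.3f}")
```

Answering the last two questions in the list, in contrast, is exactly what XAI methods aim to do.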

It might be tempting to say that wherever we have deployed machine learning models, XAI will also be important. This is a truism, since XAI methods can always be used as debugging tools for machine learning systems. Nevertheless, in some areas there is a higher potential and urgency for explainability than in others. To limit the scope of this article series, we focus on XAI use cases in medical imaging.

AI in Computer Vision

Over the past few years, developments in several areas have enhanced our ability to use and apply Computer Vision AI. The most crucial changes have been the falling cost of storage, the availability of computing power and the commoditization of image-capturing devices such as mobile phones. These advancements in enabling technologies allow us to solve more complex problems faster, with more complex models.

Looking into the future, companies like Google and Amazon are proposing and developing accessible Computer Vision development applications, with almost no code needed. These types of developments are going to increase the accessibility and use cases of Computer Vision AI.

While Computer Vision AI is progressing technically, in medical imaging the technology is still mostly deployed only through pilots. Scalability is hampered by a limited understanding of the output from the algorithms; explainable AI may have the potential to change this.

An XAI example in the Computer Vision domain

At DAIN Studios, we developed a computer vision explainable AI called Naama to experiment with Computer Vision and XAI. Essentially, it combines face detection, face recognition and emotion classifiers. In addition, Naama contains an Explainable AI layer which shows the part of the image used for classifying the emotional state of the person. Running on a Graphics Processing Unit (GPU), all of this works in near real-time and allows several people to be analysed at the same time.

The Naama system detects faces and shows the name, gender and emotion.
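As a rough sketch of how such a pipeline can be assembled (the Haar-cascade detector, 48×48 input size, model file name `emotion_model.h5` and label order below are assumptions for illustration, not the actual Naama implementation), the per-frame flow looks roughly like this:

```python
# Hedged sketch: detect faces in a frame, then classify each face's emotion.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprise"]  # categories used in this article

face_detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
emotion_model = load_model("emotion_model.h5")  # hypothetical pre-trained emotion classifier

def analyse_frame(frame):
    """Return (bounding box, emotion label, confidence) for every detected face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = emotion_model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        results.append(((x, y, w, h), EMOTIONS[int(np.argmax(probs))], float(probs.max())))
    return results
```

Running the detector and classifier on a GPU is what makes it feasible to process a video stream with several faces in near real-time.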

The models that are used in the Naama API for facial emotion recognition have been pre-trained using a generic data set and retrained for this specific purpose using transfer learning. In the system, we use the code from an open code repository as the basis for facial emotion recognition.
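A minimal transfer-learning sketch, assuming a Keras workflow, is shown below; MobileNetV2 stands in for whichever backbone the open-source repository actually uses, and the five output classes match the emotion categories described next.

```python
# Hedged sketch of transfer learning: freeze a generically pre-trained backbone and
# retrain only a small classification head on the emotion data set.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(96, 96, 3), pooling="avg")
base.trainable = False  # keep the generic features learned from the original data set

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(5, activation="softmax"),  # angry, happy, neutral, sad, surprise
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(emotion_train_ds, validation_data=emotion_val_ds, epochs=10)  # domain-specific data
```

Freezing the backbone means only the small head has to be learned from the comparatively small emotion data set, which is the main point of transfer learning here.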

The system is trained to detect consenting individuals, showing their name and real-time facial emotional state based on several categories, e.g. angry, happy, neutral, sad and surprise. Each emotional state is presented with its average confidence, while in another layer Naama’s algorithm is unpacked to show the facial points that led to the allocation of emotional categories.

The grey box on the right illustrates what area of the image was most significant in detecting the emotion.
The grey layer is an example of an explainable AI solution and illustrates a way to communicate the significance of pixels to the user. The darker the pixel in this layer, the more significant that area is for detecting the person’s sentiment.
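One common way to produce such a pixel-significance layer is occlusion sensitivity: mask one patch of the image at a time and measure how much the emotion score drops. The article does not specify which attribution method Naama uses, so the sketch below is just one illustrative approach, and `model` is assumed to be a Keras-style classifier returning class probabilities.

```python
# Hedged sketch: occlusion-based pixel-significance map for a single image.
import numpy as np

def occlusion_map(model, image, target_class, patch=8):
    """Return a heat map; larger values mark areas more important for the prediction."""
    baseline = model.predict(image[np.newaxis], verbose=0)[0][target_class]
    height, width = image.shape[:2]
    heat = np.zeros((height, width))
    for top in range(0, height, patch):
        for left in range(0, width, patch):
            occluded = image.copy()
            occluded[top:top + patch, left:left + patch] = 0.0  # grey out one patch
            score = model.predict(occluded[np.newaxis], verbose=0)[0][target_class]
            heat[top:top + patch, left:left + patch] = baseline - score  # bigger drop = more important
    return heat
```

Rendering such a heat map as a grey overlay is one way to communicate pixel significance to the user, similar in spirit to Naama’s grey layer.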

Next article in the XAI series

In this article, we covered what XAI is and why it is an important research topic. The next article in this XAI series dives into XAI methods in practice, focusing on medical imaging.

References & more

Reach out to us if you want to learn more about how we can help you on your data journey, with AI or XAI.

Details

Title: How to Make Artificial Intelligence More Transparent?
Author: DAIN Studios, Data & AI Strategy Consultancy
Updated on November 23, 2023