
Explainable AI: Do we trust AI enough to make decisions for us?

Image by Phonlamai Photo/Shutterstock.com

AI has permeated every aspect of our everyday lives and expanded into various industries, causing different levels of disruption. Today, AI-informed decisions are no longer confined to tech hubs and data science experts. Artificial intelligence is increasingly empowering an array of stakeholders and functions, including business leaders, managers, executives, government entities and customers. But as more business decisions are influenced by AI, questions and requirements for greater oversight, transparency and ethics in how those decisions are made keep multiplying, leading to rising demand for explainable AI, or XAI.

After all, if we entrust AI with our life and business decisions, we have to be sure its reasoning is accountable and ethical. Businesses have to be able to explain the logic behind their decisions, because those decisions directly influence company revenue. The demand for AI explainability has grown even louder after a surge of criticism over critical failures such as bias in recruiting and credit scoring, racial discrimination in facial recognition software, unfair criminal risk assessment and autonomous cars involved in accidents.

The black box problem – Can it be turned into a glass box?

While the discussion about explainable AI is not new and goes back several decades, it re-emerged with greater intensity in 2019 with Google’s announcement of its new set of XAI tools for developers. AI models have represented, and for many people still represent, a “black box” that relies on millions or billions of complex, interwoven parameters to deliver outcomes we are expected to trust and act upon, even when they seem wrong or counter-intuitive at first. Deep learning models, or neural networks, are a good example of the “black box”: they are trained on large datasets and can output highly accurate predictions, yet they remain incomprehensible to humans, who can’t grasp the complex internal workings, features and data representations the models used to deliver those outcomes.

These outcomes may have a far-reaching and intense impact, leading to louder demand for XAI. However, experts note that some models, like decision trees and Bayesian classifiers, are easier to interpret than the deep learning models used in image recognition and NLP. It’s also important to mention that there’s a trade-off between accuracy and explainability, and that not all bias is negative: some bias can be leveraged to make more accurate predictions. Fortunately, explainable AI can help us understand whether a model uses good or bad bias to make a decision, and which factors are essential when it does so.
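To make that contrast concrete, here is a minimal sketch in Python with scikit-learn, using entirely synthetic data and made-up feature names rather than any system discussed in this article: a shallow decision tree whose full decision logic can be printed and read line by line, next to a boosted ensemble whose aggregate feature importances are only a coarse window into which inputs drive its decisions.

```python
# A minimal sketch contrasting an inherently interpretable model with a
# black-box one on the same tabular task. Data and feature names are
# synthetic placeholders, not taken from any real application.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for tabular data such as credit-scoring features.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# A shallow decision tree: its entire decision logic can be printed and read.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# A boosted ensemble is far harder to inspect directly; aggregate feature
# importances are one coarse indication of which inputs drive its decisions.
blackbox = GradientBoostingClassifier(random_state=0).fit(X, y)
for name, importance in sorted(zip(feature_names, blackbox.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```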

“There is a lot of talk about AI in the light of ethics, accountability, explainability, which was previously the domain of the humanities in academia. It’s a bit of a novelty for the tech community to be so heavily focused on ethics. But AI, as a label for so many technologies, is so transformative that we need to give thought to ethical implications of it,” stated Patrick Couch, Business Developer AI & IoT at IBM, during a panel on How to Build Human Centered and Explainable AI Models and Products at the Data Innovation Summit 2020.

When it comes to human-centric AI, the imperative is to make sure that the powerful capabilities being promoted are also understood by users, so that the technology serves the right purpose, added Patrick. Explainability and ethics are challenges for AI, and they are tied to the data required to get the magic out of the technology, he explained.

“Over the years, we’ve seen a tremendous amount of funny, weird, sad, tragic examples of AI applications gone wrong,” said Patrick. When IBM was faced with the challenge of bias in facial recognition software, they immediately jumped to solving it by acknowledging and mitigating the bias in data sets for their AI capability.

AI’s ability not only to make predictions, but also to explain why it made them, is especially important in healthcare, where a wrong prediction can cost a human life. “Being able to explain why you propose a certain medication or treatment is key in medical care,” emphasised Stefan Vlachos, Head of the Center for Innovation at Karolinska University Hospital and Board Member of “The Innovation Leaders”, in his AIAW Podcast discussion. “If we are to build trust between man and machine, you have to be able to backtrack AI’s suggestions and question how it got to them,” he added.

Image by Connect world/Shutterstock.com

Trust built on explainability, not understandability

When it comes to AI in medical diagnosis, experts hold it to a higher standard: they expect it to explain the reasons for its decisions, demand to see how the model works, and ask for insight and introspection into the parameters and into how the model came up with its solution. Suppose the roles were reversed and a human doctor gave the same diagnosis. In that case, people wouldn’t demand to backtrack the decision and look into the doctor’s neurons to see how they came to that conclusion, stated Anders Arpteg, Head of Research at Peltarion.

We often hear people say, ‘I don’t trust AI models because I can’t understand them.’ This is a scary statement because it’s like saying to a person, ‘I don’t trust you because I don’t understand you,’ instead of, ‘If you explain your decision, I would trust you even though I don’t understand you,’ stated Anders.

Image by everything possible/Shutterstock.com

The AI explainability question comes down to what it means to be fundamentally human and how we build trust as humans. Does building trust in AI mean understanding all models and their parameters? Considering AI’s expansion into all spheres of life and the different people from innumerable backgrounds and fields it touches, that would be impossible.

“When we talk about explainable AI, it does not have to be about understanding the intricacies of the entire model but understanding what factors can influence the output of that model. There is a significant difference between understanding how a model works and understanding why it gives a particular result,” states Dr Shou-De Lin, Chief Machine Learning Scientist at Appier for Campaign Asia.

“I think you need to be accustomed to it. Just like any relationship, you need to work together for a while and get to know each other to see that you are on the same path,” asserted Stefan Vlachos.

“Trust is built because of explainability, but not because of understandability – these two are very different things,” emphasised Anders Arpteg. 

This was the point Cassie Kozyrkov, Chief Decision Scientist at Google, also made in her article on why Explainable AI won’t deliver. What Cassie has so well explained is that we can’t expect to have a simple answer to how an AI model made the decision, because it was built to solve a complex problem with a complex solution – a solution that is so entangled that it eludes the capacity of our human mind. 

As Cassie brilliantly explained, “AI is all about automating the ineffable, but don’t expect the ineffable to be easy to wrap your head around.” She is by no means saying that interpretability, transparency, and explainability aren’t important. But instead, their place is in analytics, she added. It all comes down to the purpose of the AI model: research or business goal.

“Much of the confusion comes from not knowing which AI business you’re in. Arguments that are appropriate for researchers (the mechanics of how something works – interpretability) make little sense for those who apply AI (performance),” explains Cassie.

We again come to the trade-off, which in Cassie’s example is between interpretability and performance. When the model is so simple that we can understand it, it can’t solve complex problems. But if we need a model to solve a really complex task, we shouldn’t limit it to only what our minds can wrap around. So how can we trust that our complex models are working? By carefully testing our system, making sure that it works as it is supposed to — this is how we gain trust in it, adds Cassie.
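As a rough illustration of that “trust through testing” idea, here is a minimal sketch in Python. It is our own example, not Cassie’s: the dataset is synthetic, the model is arbitrary, and the 0.85 acceptance bar is an assumed, illustrative threshold. The point is that the model is judged purely on held-out data against a pre-agreed bar, without anyone looking inside it.

```python
# A minimal sketch of gaining trust by testing rather than by inspecting
# internals. The dataset, model and 0.85 acceptance bar are illustrative
# assumptions, not anything prescribed in the article.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for whatever tabular decision problem the model serves.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# The "black box" we want to trust without fully understanding it.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Acceptance test on data the model has never seen.
acc = accuracy_score(y_test, model.predict(X_test))
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out accuracy = {acc:.3f}, ROC AUC = {auc:.3f}")

# A pre-agreed bar stands in for "works as it is supposed to".
if acc < 0.85:
    print("Model fails the agreed acceptance bar; do not deploy.")
```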

Cassie also points out that we are holding AI to superhuman standards. “If you require an interpretation of how a person came to a decision at the model level, you should only be satisfied with an answer in terms of electrical signals and neurotransmitters moving from brain cell to brain cell. Do any of your friends describe the reason they ordered coffee instead of tea in terms of chemicals and synapses? Of course not,” she explains.

Just like humans make up an oversimplified reason that fits their inputs and outputs (decisions), we can have the same level of model explainability in the input and output data, and this is where analytics comes into play. “Explainability provides a cartoon sketch of a why, but it doesn’t provide the how of decision-making,” adds Cassie.

Undeniably, with the advancement of AI, humanity has turned a new page of solutions so complicated they lie beyond our understanding, and, as both Stefan Vlachos and Cassie contend, it’s a reality we should get accustomed to.

Image by Peter Pieras from Pixabay 

Are we holding AI to superhuman standards? 

Whether we already trust AI, or we need a “trial period” to get accustomed to it and start trusting it, there is no doubt that XAI should guide the direction of AI development.

Technology is quickly catching up with the increased demand for explainable AI, and we are seeing different solutions proposed by researchers. There is already a new wave of AI explainability that introduces attribution-based explainability, which can be used for text, tabular and time-series data. The latest type being developed, however, is more of a generative kind, revealed Anders Arpteg.

Instead of attributing the decision to parts of the input data, this type of explainable AI enables the model to explain itself in natural language (English or Swedish, for example), spelling out why it recommends a certain action and referencing the text it used as input. With the latest breakthroughs, we can simply ask a question and get an answer from the AI explaining itself in natural language.
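To give a flavour of the earlier, attribution-based approach, here is a minimal sketch of one common technique, gradient-times-input saliency. It is our assumed illustration, not Peltarion’s actual method: a tiny toy network and a random input, where the attribution scores indicate which input features pushed that particular output towards its prediction.

```python
# A minimal gradient-times-input attribution sketch. The tiny network and
# random input are placeholders; the attribution scores indicate which
# input features most influenced this particular output.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

x = torch.randn(1, 8, requires_grad=True)   # one tabular example
score = model(x).squeeze()                  # the model's raw output
score.backward()                            # gradient of the output w.r.t. the input

# Gradient * input: a simple attribution of the score to each feature.
attribution = (x.grad * x).detach().squeeze()
for i, value in enumerate(attribution.tolist()):
    print(f"feature_{i}: {value:+.4f}")
```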

Another way of building more explainable models is to use proxy models, which are more explainable, to mimic the behaviour of deep learning models, suggests Dr Shou-De Lin. Alternatively, he proposes making models more explainable by design, using fewer parameters in neural networks, which may deliver similar accuracy with less complexity and therefore a more explainable model.
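Here is a minimal sketch of that proxy-model (surrogate) idea, with the details assumed for illustration rather than drawn from Dr Lin’s own work: a small, readable decision tree is fitted to mimic the predictions of a black-box ensemble, and its fidelity to the black box is measured before anyone relies on its explanation.

```python
# A minimal proxy-model (surrogate) sketch: fit a small, readable decision
# tree to mimic a black-box model's predictions, then inspect the tree.
# Data, models and depth are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, n_informative=4,
                           random_state=0)

# The opaque model we actually deploy.
blackbox = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The proxy is trained on the black box's *predictions*, not the true labels,
# so it approximates the black box's behaviour rather than the task itself.
proxy = DecisionTreeClassifier(max_depth=3, random_state=0)
proxy.fit(X, blackbox.predict(X))

# Fidelity: how often the readable proxy agrees with the black box.
fidelity = accuracy_score(blackbox.predict(X), proxy.predict(X))
print(f"proxy fidelity to black box: {fidelity:.2%}")
print(export_text(proxy, feature_names=[f"x{i}" for i in range(X.shape[1])]))
```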

Still others question whether, given that humans themselves are often poor at explaining their own decisions and would fail the explainability test, explainability can be set as a standard for AI at all. But at least we humans have the ability to try, and we do our best to explain our choices, whereas deep learning can’t do this yet. Experts therefore suggest that the direction deep learning, and AI in general, should head in is working towards identifying which input data triggers a system’s decisions, however imperfect those explanations may be.
