Hyperight

How to build interpretable machine learning models

Machine learning models have become smarter and more accurate at making predictions. However, most of them remain “black boxes”: even when their predictions are correct, we can’t explain or understand how the model arrived at a particular decision.

But as AI and machine learning become an increasingly large part of our lives and businesses, our need to interpret machine learning models – so that we can trust them and deploy them with confidence – is also growing.

Josefin Rosén, Principal Advisor Advanced Analytics and Artificial Intelligence at SAS Institute, will deliver a presentation at the Online Data Innovation Summit together with Wendy Czika, Senior Manager Analytics R&D at SAS Institute, on the topic Automation in SAS Visual Data Mining and Machine Learning, on the Machine and Deep Learning Stage.

Josefin and Wendy will explain how automated machine learning can help every data scientist, from the novice to the most experienced practitioner, enabling you to focus on solving the problem at hand (a rough, generic code sketch of the idea follows the list):

1) You can choose to have features automatically constructed or to automate the process of algorithm selection and hyperparameter tuning by using dedicated Model Studio nodes in the pipeline that represents your machine learning process.

2) You can build on or edit a pipeline that includes these nodes, inserting your domain expertise into the process.

3) You can ask the software to automatically build an entire pipeline that includes various feature engineering steps and predictive models, optimized for your specific data according to the assessment criterion of your choice.
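
Model Studio exposes these steps as visual pipeline nodes rather than code, but the underlying idea – chaining feature engineering, algorithm selection and hyperparameter tuning into one pipeline optimized for an assessment criterion of your choice – can be sketched generically. The Python example below is a minimal illustration using scikit-learn on synthetic data; it is not the SAS API, and the algorithms and parameter grids are made-up placeholders.

```python
# Illustrative sketch only: a generic automated pipeline in scikit-learn,
# not SAS Model Studio. Data, algorithms and grids are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One pipeline = a feature engineering step plus a swappable model step.
pipe = Pipeline([
    ("features", StandardScaler()),        # stand-in for automated feature construction
    ("model", RandomForestClassifier()),   # placeholder; replaced by the search below
])

# Search over algorithms and their hyperparameters at the same time,
# scored by the assessment criterion of your choice (here: AUC).
param_grid = [
    {"model": [RandomForestClassifier(random_state=0)],
     "model__n_estimators": [100, 300],
     "model__max_depth": [None, 10]},
    {"model": [GradientBoostingClassifier(random_state=0)],
     "model__learning_rate": [0.05, 0.1],
     "model__n_estimators": [100, 200]},
]

search = GridSearchCV(pipe, param_grid, scoring="roc_auc", cv=5)
search.fit(X_train, y_train)
print("Best pipeline:", search.best_params_)
print("Test AUC:", search.score(X_test, y_test))
```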

Before their Data Innovation Summit session, we talked to Josefin about one crucial topic in machine learning – building interpretable machine learning models – and what having an ethical AI framework means.

Hyperight: Hi Josefin, we are happy to have you at the 5th edition of the Data Innovation Summit. To start off, please introduce yourself to our readers and tell us a bit about yourself.

Josefin Rosén, Principal Advisor Advanced Analytics and Artificial Intelligence at SAS Institute

Josefin Rosén: Hi. Thank you, I am very happy to join! I work as Principal Advisor Advanced Analytics and Artificial Intelligence at SAS Institute, where, on a daily basis, I discuss how to efficiently operationalize analytics with customers from a variety of industries.

As SAS’s spokesperson for AI, I also often write and speak at events and in the media about the challenges and issues, as well as the possibilities, related to the advance of AI – especially the importance of sustainable and responsible AI.

I also hold a PhD in Computational Chemistry (Dept of Medicinal Chemistry) from Uppsala University, awarded in 2009.

When I’m not working, I love going for a long run and spending time with my husband and two daughters.

Hyperight: Why is it essential for machine learning models to be explainable and interpretable, and what are the implications if they aren’t?

Josefin Rosén: Explainability is important to build trust in AI. How could you, for instance, take responsibility for something if you can’t explain or understand it?

To feel safe and to trust something, we need to understand how it works. That is in our human nature. In addition, regulations such as GDPR can require that you be able to explain, for example, your automated decisions on request.

Machine learning algorithms are by definition complex and not very good at revealing their inner secrets. They are usually referred to as a “black box”. For example, a deep learning algorithm can easily be defined by millions of parameters, but a long list of millions of parameters is not a suitable explanation. To shine a light on the black box, we need toolkits for interpretability.
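
As an illustration of what such an interpretability toolkit does, the sketch below applies one widely used model-agnostic technique – permutation importance – to a stand-in black-box model. This is a generic scikit-learn example on synthetic data, not a specific SAS capability.

```python
# Minimal illustration of one model-agnostic interpretability technique:
# permutation importance. The data and model are hypothetical stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time on held-out data and measure how much the
# score drops: a large drop means the model relies heavily on that feature.
result = permutation_importance(black_box, X_test, y_test,
                                scoring="roc_auc", n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```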

Machine learning algorithms learn all they know from the data they are fed: training data that was put together, handled and often labelled by a human team. This, in turn, means that our human bias is easily transferred to the AI system. The technology we have today – computing power, distributed calculations and the like – enables potentially thousands or even millions of decisions every minute, which means that bias can quickly be amplified. Things could very quickly go very wrong. In order to detect bias early, we need to make sure that we have tools and ways to make machine learning explainable and interpretable.
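
One very small, hypothetical example of the kind of check this calls for: comparing a model’s positive-prediction rates across groups defined by a sensitive attribute (the demographic parity difference). Real bias audits go much further, but even this simple comparison can surface a problem early.

```python
# Toy bias check on synthetic data: compare positive-prediction rates
# across groups (demographic parity difference). Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=10_000)               # hypothetical sensitive attribute
y_pred = rng.random(10_000) < (0.30 + 0.15 * group)   # predictions skewed toward group 1

rate_0 = y_pred[group == 0].mean()
rate_1 = y_pred[group == 1].mean()
print(f"positive rate, group 0: {rate_0:.3f}")
print(f"positive rate, group 1: {rate_1:.3f}")
print(f"demographic parity difference: {abs(rate_0 - rate_1):.3f}")
# A clear gap on a decision that should not depend on the group is a signal
# to go back and examine the training data and model before deployment.
```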

Of course, different situations put different demands on the level of explainability, and it is naturally more urgent when the outcome has the potential to affect us on a personal level.

Hyperight: As AI capabilities develop, government leaders, business leaders and academics are more interested than ever in the ethics of AI, emphasising the importance of having a strong ethical framework surrounding its use. But few really have the answer to what developing ethical and responsible AI means. What should a solid, responsible AI framework entail?

Josefin Rosén: When we invest in AI, we also have a responsibility to ensure that it is ethical. Today, technology is a bit ahead of regulation, which means that we must all take responsibility. We can’t blame an algorithm if something goes wrong.

First of all, we must ensure that someone is responsible for each step of the AI lifecycle – from data to decision, as well as the feedback loops. We also need transparency in every step of the AI lifecycle, including explainability of the black-box models. Being aware of the bias problem is central and a first step towards avoiding it.

It is also important to remember that automation and autonomy are not the same thing. Even with self-learning systems, you can’t just leave them alone and expect them to behave on their own. You must monitor the systems over the whole lifecycle, from data selection to the output or action that comes out on the other side, to make sure that the system is doing what was intended – and not just every now and then, but continuously. A diverse team is valuable here, since seeing things from different points of view not only helps reflect different perspectives in the data but also makes it more likely that bias is identified.
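
To make the monitoring point concrete, here is a minimal, hypothetical sketch of one small piece of such ongoing surveillance: checking whether the scores a deployed model currently sees have drifted away from what it saw at training time, using a population stability index (PSI). This is a generic illustration, not a SAS tool, and the thresholds in the final comment are only a common rule of thumb.

```python
# Sketch of one small piece of continuous monitoring: a population stability
# index (PSI) comparing the training-time distribution of a value with what
# the live system currently sees. All data here is synthetic.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between two samples; larger values indicate larger drift."""
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    actual = np.clip(actual, edges[0], edges[-1])  # keep live values inside the bins
    e_frac = np.clip(np.histogram(expected, edges)[0] / len(expected), 1e-6, None)
    a_frac = np.clip(np.histogram(actual, edges)[0] / len(actual), 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
training_scores = rng.normal(0.0, 1.0, 50_000)  # distribution at training time
live_scores = rng.normal(0.3, 1.2, 5_000)       # what production sees today

print(f"PSI = {psi(training_scores, live_scores):.3f}")
# Common rule of thumb: < 0.1 stable, 0.1-0.25 keep watching, > 0.25 investigate.
```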

There is, of course, software for this, and you can also use algorithms to review other algorithms. But preferably, man and machine should work together. I wrote an article in Ny Teknik a while ago where I proposed that we may need a whole new professional role – the AI auditor.

Finally, every organisation needs an AI policy. There are many available guidelines for responsible AI. Personally, I really recommend the EU’s “Ethics Guidelines for Trustworthy AI”. In addition to seven requirements, it also contains a checklist with questions that provide guidance on how each requirement can be implemented in practice.

I recommend that everyone who wants a kick-start to their responsible AI framework or AI policy go through this list, tick off what they already have in place, and become aware of what they don’t have in place – i.e. where they need to start.

Hyperight: And lastly, what is your outlook for the future of AI and machine learning? Can we expect a future in which AI is fully explainable and trustworthy?

Josefin Rosén: Yes… and no. Technically, yes – we are basically there already. But no – there will always be people and parties who, for different reasons, want to keep their AI a non-transparent secret.

