Health Care and the Promise of Explainable Artificial Intelligence (XAI)


By Tina Wallman,
Sr. Director of Strategic Initiatives, Optum

There has recently been a lot of talk around explainable AI (XAI). How can we open up the 'black box' around AI to better understand what decisions it is making and what data it is using to make those decisions? And how can we help people become more confident in AI's ability to make decisions?

One area where XAI holds a lot of promise is health care. The question becomes: is the promise of XAI going to follow a hype cycle similar to AI's? AI has been around for over 50 years, with at least three periods when it was supposedly going to take over the world: the 1960s, the 1990s, and now. Will it take 5 years, 10 years, or longer for explainable AI (XAI) to be adopted?

Health care has to change and explainable AI (XAI) might just be the push the ecosystem needs to transform itself.

Pathology, radiology, and dermatology have all seen advancements in AI. Studies have shown great strides with AI models; in some, AI is better at detecting melanomas than dermatologists. But just because a model is better at detecting something doesn't mean it is ready to make decisions autonomously.

These are business and technical challenges that need to be reviewed and understood on a case-by-case basis. Not every model that gets created will need additional rigor around how it came to its decision. But if the goal is to allow a system to make decisions without human involvement, you need to fully understand that system first.

This is where the breakdown starts to happen. Technical teams love experimenting with different technologies and generally look to fully automate their models. Business owners aren't looking for full automation of models, because they won't blindly trust a model. They don't have confidence in how the model is working, and for them transparency is non-negotiable. The regulatory constraints that health care has to operate within provide an opportunity for explainable AI.

This is where business and technical stakeholders all have to be on the same page. Business owners have to work with their technical resources to define which parts of the organization they are comfortable having systems make decisions in autonomously. In which areas should systems' decisions be reviewed and verified by stakeholders? Do business owners and technical owners understand the risks around the systems being created?

The health care ecosystem could be greatly advanced by XAI. With academic institutions like the University of Toronto making strides and technology companies like Google continuing to do research around XAI, how can health care leverage these advancements to transform itself? Technical and business owners will all need to start by working in smaller, less complex areas to test their hypotheses. Business owners will need to push their technical teams from the beginning to design models that can be explained, and will need to start asking more questions about the regulatory and financial risks of those models.

The hope for XAI is that future models will be able to share the features they used to make their decisions. Two open source projects support this: 1) Local Interpretable Model-agnostic Explanations (LIME), which looks to help explain what AI models are doing, and 2) SHapley Additive exPlanations (SHAP), a unified approach to explaining the output of any machine learning model. Both of these projects have repositories on GitHub that technical teams can use to start creating explainable AI (XAI) models.
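To make the core idea concrete, here is a toy sketch of the perturbation principle behind model-agnostic explainers like LIME and SHAP: treat the model as a black box, nudge one input feature at a time, and see how much the output moves. Everything below (the model, the feature values, the function names) is hypothetical for illustration; a real project would use the LIME or SHAP libraries themselves.

```python
def black_box_model(x):
    """Stand-in for an opaque model, e.g. a risk score.
    Its internal weights are hidden from the explainer."""
    return 3.0 * x[0] + 0.0 * x[1] - 2.0 * x[2]

def local_attributions(model, x, eps=1e-4):
    """Estimate each feature's local influence by perturbing it
    slightly and measuring how much the model's output changes."""
    base = model(x)
    attributions = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += eps
        attributions.append((model(perturbed) - base) / eps)
    return attributions

# Hypothetical feature values for a single case
case = [0.7, 0.2, 0.5]
print(local_attributions(black_box_model, case))
```

The output shows that the first feature pushes the score up strongly, the second has no influence, and the third pushes it down. That per-feature story, rather than a bare prediction, is what lets a clinician or business owner decide whether to trust the model.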
