
7 Practical Tips for Implementing Predictive Models in Healthcare


By Mark Weisman,
CMIO, Peninsula Regional Medical Center

If you are fortunate enough to have a brilliant biomedical informaticist hanging around your hospital, then understanding and implementing predictive models in your care delivery systems is second nature to you. The rest of us mere mortals face the daunting task of getting these tools adopted by the nurses and doctors at the bedside. Don't for a second doubt the importance of these models in helping you run your business; the health systems that can effectively deploy predictive and prescriptive models will outperform their competitors. Other industries figured this out over a decade ago (Davenport, 2006), and healthcare will be no different. Whether you are deeply committed to population health models and trying to focus resources on the patients who will need help the most, or churning in the fee-for-service model and needing to understand who is most likely to purchase services at your facility, you need your people to use these models. So here are 7 pearls of wisdom, from a CMIO who has walked this road already, to help you get predictive analytics in place at your hospital.

1. Putting models into operation

Carefully consider how many models you can put into place at one time. The vendors will come with lists of models that will cure every pain point your organization has ever experienced, but even if they are perfect models, you shouldn't underestimate the resources required to get the end users to adopt something new. Can you really institute the new readmission prevention model, the asthma prediction model, the one-year MI risk model, and the no-show risk model all at the same time? Probably not, at least not with broad success. Putting a tool like this into practice requires a lot of one-on-one interaction with the physician champions and the resistant naysayers. Pace yourself.

2. Build versus buy decisions

Think twice if you are considering building your own models rather than buying them. If correctly implemented, you will be using this tool for a long time, so it is tempting to compare the cost of doing it yourself against paying a vendor forever. To build a good model you need three things: a rich data set, scary-smart people, and a high degree of risk tolerance. Small facilities won't have the data, so stop right there. Mid-sized to large organizations can get the data, but they also need the smart people, who can be found but are not cheap. Adding expensive FTEs who do not directly generate revenue can be a tough sell to your colleagues in the C-suite. If you clear the first two hurdles, it is risk tolerance that usually discourages organizations from building models. Will your CFO tolerate a multi-year adventure in discovery with no guarantee of a return on investment? If your organization is similar to the ones I have been in, the desired time to breakeven is 3 minutes, and an ROI that takes three years is dead on arrival. Unless you are a large system with academic resources, buying the models will likely be your best answer.

3. Overcome resistance

Be prepared for "Nurse X can do a better job of predicting than your model just by looking at the patient." That may be true. Nurse X has 20+ years of experience and is considered one of the best in the field. You can respond by asking how they plan to scale that knowledge across the organization. Will the nurse with two years of experience do as well? Can they look at and weigh hundreds of data points in seconds? Can they be everywhere decisions are being made, at the exact moment they're needed? Clearly, the answer is no, and that is why we need the tools.

4. Improve chances for adoption

Don't run two models that look at the same issue simultaneously. This is most commonly seen when a system that uses LACE+ adopts a new readmission model and keeps both active. Invariably, someone will say that model A is always right and model B is horrible, so we aren't going to use it. Someone else will come to the opposite conclusion, and then you have people talking about the same issue but using different numbers. The operational confusion that ensues will distract from your true objective. Run one model silently in the background if you want to compare performance, but don't show the end user two models at once.
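If you do want to benchmark a challenger against an incumbent like LACE+, a common pattern is "shadow mode": score every encounter with both models, but surface only one. Here is a minimal sketch of that idea in Python; the function, the audit log, and the model callables are hypothetical stand-ins, not any particular vendor's API.

```python
# A minimal sketch of running a challenger model "silently in the background."
# Both models score every encounter, but only the incumbent's score reaches
# the end user; the challenger's score is logged for offline comparison.
# All names here (score_encounter, audit_log, the model callables) are
# hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class ShadowRecord:
    encounter_id: str
    displayed_score: float   # what the clinician sees
    shadow_score: float      # logged only, never displayed

audit_log: list[ShadowRecord] = []

def score_encounter(encounter: dict, incumbent, challenger) -> float:
    """Score with both models; display one, silently log the other."""
    displayed = incumbent(encounter)    # e.g., the existing LACE+-style score
    shadow = challenger(encounter)      # the candidate replacement
    audit_log.append(ShadowRecord(encounter["id"], displayed, shadow))
    return displayed  # only this number appears in the clinician's worklist
```

Once enough outcomes accrue, the logged pairs can be compared against observed readmissions offline, and the better model promoted, without ever showing clinicians two competing numbers.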

5. Careful planning pays off

Spend a lot of time upfront evaluating where these tools need to appear in the workflow to be effective. Does putting the readmission risk score in the discharge navigator give the teams enough lead time to act? Does putting the score on the hospitalists' rounding list capture their attention? You can have a fantastic model that is poorly positioned and derive no value from it.

6. Align with strategic objectives

You want to make sure you have a model that fits your organizational priorities and is not just someone's pet project. We put a readmission model into the middle of nursing workflows in the belief that nurses would use it to help gather readmission risk factors. It wasn't well received. The nurses said they ignored the numbers because readmissions weren't an operational priority for them; at the time, they were focused on CAUTIs and CLABSIs. Once our readmission rate went up and the organization's focus returned to the issue, we were better able to implement the model into workflows. Timing is everything.

7. Transparency resolves the black box

Fight the "black box" concern with transparency. If providers do not understand how the model works, which factors are in it, and what weight each variable carries, they will not trust it and won't use it to make decisions. I sat across from a physician champion and presented this scenario: "It's 2 a.m. and the nurse pages you because the tool that predicts clinical deterioration on the floor has crossed a certain threshold. Would you come in to see the patient?" When she said no, I knew we had a problem. We chose a different model, one with multiple peer-reviewed articles documenting its effectiveness, and then she said yes. That peer-reviewed model was one she could understand and trust.
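One practical way to address the transparency question is to prefer models whose variables and weights can simply be printed and reviewed with clinicians. The sketch below uses invented features and toy data (it is an illustration, not the deterioration model described above) to show how a logistic regression exposes every factor and its weight:

```python
# Sketch: an interpretable readmission model whose inputs and weights can be
# shown to clinicians. Features, data, and coefficients are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["length_of_stay", "ed_visits_6mo", "charlson_index", "age"]

# Toy extract standing in for real encounter data (hypothetical values).
X = np.array([
    [2,  0, 1, 54],
    [9,  3, 4, 78],
    [1,  0, 0, 41],
    [12, 2, 5, 83],
    [4,  1, 2, 66],
    [7,  4, 3, 72],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = readmitted within 30 days

model = LogisticRegression(max_iter=1000).fit(X, y)

# The transparency step: every variable and its weight is open for review.
for name, weight in zip(features, model.coef_[0]):
    print(f"{name:>16}: {weight:+.3f}")
```

In practice you would standardize the inputs so the weights are comparable across variables, but the point stands: there is no factor in such a model that cannot be named, questioned, and defended at 2 a.m.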

My advice is to move into predictive modeling deliberately, with well-thought-out implementation plans, high-level support, and broad participation by the end users in the selection process. The time spent upfront will more than pay off over time.

