Introduction
So you’ve developed (or are planning to develop) a predictive model and want to integrate it with Epic -- awesome! You can likely get the data you need to train and validate the model from one of our data warehouses (the RD, SD, Clarity, Caboodle, the EDW, etc.), but the real fun comes when you integrate it into clinical care. That points to the first and most important consideration: your goal isn’t just to “integrate the model into Epic” -- it’s to find a care process (or processes) that you want to inform or influence with your model. Figuring this out is the hardest and most important step. In the video below, Dr. Adam Wright discusses some questions to guide your thinking, including what clinical outcome or process you want to influence, who the potential recipient of your intervention may be, when decisions related to your model are being made, what information is needed, and what actions the recipient may take as a result. Dr. Wright also discusses "back-end" options (how the clinical data will create a prediction or inference) and "front-end" options (how that prediction or inference will be displayed to users).
Cognitive Computing with Nebula
One of the newest ways to implement predictive models in Epic is the cloud computing platform Nebula. In the video below, Dr. Wright introduces the Cognitive Computing Platform within Nebula, which allows for localization of Epic-released models or implementation of custom predictive models in the cloud. This platform has numerous advantages, including real-time integration with the EHR and compatibility with non-Epic data science tooling. Some additional back-end options are also listed below.
Conceptual Example
In the video below, Dr. Wright introduces a conceptual example for the purposes of this lesson, which is a model for predicting which hospitalized patients will be discharged to a skilled nursing or rehabilitation facility for sub-acute care. He also outlines the general steps for implementing the model which will be discussed in subsequent videos.
Train and Validate the Model
In the video below, Dr. Wright demonstrates how to train and validate the example predictive model introduced in the prior video using the Databricks environment. Databricks allows users to seamlessly blend SQL queries against the Clarity data warehouse with Python or R code used to train the model. Dr. Wright shows a Clarity query which pulls all of the patients who have been discharged from the hospital, along with their discharge disposition, to determine whether they went to a skilled nursing facility or a Vanderbilt or external rehabilitation facility. The query returns one row per patient, with numeric codes for age, gender, and discharge location. Without having to download any files, Databricks then allows him to train a predictive model on the retrieved data simply by entering a few lines of R code for a logistic regression. He then uses the resulting formula to estimate his own probability of being discharged to another facility, and generates an ROC curve to see how accurate the model is.
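If it helps to see the shape of this step in code, here is a minimal sketch of what such a Databricks notebook might look like, written in Python rather than the R shown in the video. The table and column names (clarity_discharges, age, gender_code, discharge_disposition) are hypothetical stand-ins for the actual Clarity query.

```python
# Minimal sketch of the training step; assumes a Databricks notebook where `spark` is predefined.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

# One row per discharged patient from a hypothetical Clarity-derived view:
# age in years, a numeric gender code, and whether the discharge was to a SNF/rehab facility.
df = spark.sql("""
    SELECT age, gender_code,
           CASE WHEN discharge_disposition IN ('SNF', 'REHAB') THEN 1 ELSE 0 END AS snf_or_rehab
    FROM clarity_discharges
""").toPandas()

X = df[["age", "gender_code"]]
y = df["snf_or_rehab"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Logistic regression, the same model family fit in the video.
model = LogisticRegression()
model.fit(X_train, y_train)

# Evaluate discrimination on held-out patients with AUC and an ROC curve.
probs = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, probs))
fpr, tpr, thresholds = roc_curve(y_test, probs)
```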
Implement the Model Within Epic
In the video linked below, Dr. Wright first discusses the use of Reporting Workbench in Epic to create a report in the testing environment (not real patients) that pulls the relevant features for the model (patient name, age, gender). Epic can run the report as a batch job every 15 minutes; the results are packaged as a dataframe and sent to a program he'll write. He then shows the program he wrote in Jupyter, which first unpacks the Reporting Workbench report data, makes predictions using the logistic regression formula from the last video, and formats the results before packing and returning the output. Dr. Wright then shows how the model file can be uploaded to the Nebula cloud within Epic using the Predictive Model Admin. The uploaded model can then be tested on one patient at a time, or on many patients at once by adding an autogenerated column to the Reporting Workbench report that displays the predictive score for each. Once the scores are filed, they can be used in many different contexts, which are discussed in the next video.
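The exact payload format Epic sends to the program is defined by the cognitive computing framework and isn't reproduced here; the sketch below only illustrates the general unpack-predict-pack structure described in the video, with hypothetical field names and made-up coefficients.

```python
import numpy as np
import pandas as pd

# Illustrative coefficients standing in for the logistic regression trained earlier.
INTERCEPT = -3.0
COEF_AGE = 0.04
COEF_GENDER = 0.2

def score_patients(report_rows):
    """Unpack Reporting Workbench rows, apply the model, and pack up one score per patient.

    `report_rows` is assumed to be a list of dicts with hypothetical keys
    'patient_id', 'age', and 'gender_code'; the real structure is dictated by
    Epic's cognitive computing framework.
    """
    df = pd.DataFrame(report_rows)

    # Logistic regression: p = 1 / (1 + exp(-(intercept + b1*age + b2*gender)))
    linear = INTERCEPT + COEF_AGE * df["age"] + COEF_GENDER * df["gender_code"]
    df["snf_rehab_probability"] = 1 / (1 + np.exp(-linear))

    # Return results in a shape the batch job can file back to Epic as scores.
    return df[["patient_id", "snf_rehab_probability"]].to_dict(orient="records")
```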
You have many options when it comes to integrating your model in Epic, which vary in complexity, performance, what data they are able to access and what kinds of inferences they can make. Some additional options are listed below:
Convert the model to Boolean logic: If you can convert your model to a decision tree, truth table, or Boolean logic statement (these are all equivalent), you can run it using Epic's built-in rule and OPA logic evaluation functions. This is usually feasible as long as your model doesn't use a large number of features; a sketch of extracting rule-style logic from a model appears after this list.
Use a scoring system: Epic has several frameworks for calculating point-based scores, and many regressions can be implemented as point-based scoring systems (see the sketch after this list). These scores can be calculated in real time or in batch.
Run the model outside of Epic and write scores back: You could set up a daily job that extracts data from a data warehouse (e.g. the EDW or Clarity), runs model inferences, and then writes scores back to Epic. These scores can be stored in flowsheets, SmartData elements, or RDI data, and can be written using web services or imported in bulk using Clarity DataLink.
Integrate the model using PMML (Predictive Model Markup Language): PMML is an XML-based standard for representing predictive models. Epic can import PMML models, and currently supports Naive Bayes, decision trees, random forests, and certain regression models (see the export sketch after this list). After you import the PMML model, you map its features to Epic data elements and then configure a batch job to evaluate the model on a specified set of patients (for example, all patients currently in the ICU, or everyone with an appointment scheduled tomorrow) at a predetermined interval (often every 15 minutes).
Use Nebula and Slate: If you need more flexibility than you can achieve with PMML, Epic offers a custom Docker image called Slate. This lets you write a custom Python program that can implement almost any type of model. The image is deployed to Nebula, Epic's Azure-based cloud. You then create a Reporting Workbench report which defines a population of patients you want to do inference on and the features you want to use in your model. Using a batch job, Epic then periodically (e.g. every 15 minutes) invokes your Python script and writes model scores back to Epic.
Use a SMART on FHIR app: SMART uses FHIR to access patient data. This solution pairs a front-end option (embedding a SMART on FHIR app) with a FHIR-based back-end approach.
Use Epic's App Market APIs: In addition to the SMART on FHIR standard, Epic offers a large number of proprietary APIs for reading and writing data that can be used to support model integration.
Use HL7 interfaces: Epic supports many HL7 standards that let you subscribe to data and events (for example, you could get a message whenever a patient is admitted or a lab result is filed). You can subscribe to these messages, make model inferences, and write results back (a small message-parsing sketch appears after this list). At VUMC, we use Tibco as our service bus for routing messages such as these, and have used this approach for many NLP use cases.
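To make the Boolean logic option more concrete, here is a small sketch (continuing the hypothetical features from the training example above) that fits a shallow decision tree and prints its branches as nested if/else rules, which could then be rebuilt with Epic's rule tools.

```python
# Illustrative way to get rule-style logic out of a model: fit a shallow decision tree on the
# same hypothetical features and print its branches as nested if/else rules.
from sklearn.tree import DecisionTreeClassifier, export_text

tree = DecisionTreeClassifier(max_depth=3)
tree.fit(X_train, y_train)  # X_train/y_train from the training sketch above

# Prints branches such as "age <= 72.5 ... class: 0", which map onto
# Boolean rule logic that Epic's rule editor can express.
print(export_text(tree, feature_names=["age", "gender_code"]))
```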
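Similarly, for the point-based scoring option, the sketch below shows one common way to turn regression coefficients into whole-number points; the coefficients and features here are made up for illustration.

```python
# Illustrative conversion of logistic regression coefficients into an integer point scheme
# that could be rebuilt with Epic's scoring tools. Coefficients are made up.
coefficients = {"age_per_decade": 0.45, "male_gender": 0.20, "prior_snf_stay": 1.10}

# Scale so the smallest coefficient maps to roughly 1 point, then round to whole points.
scale = 1 / min(abs(c) for c in coefficients.values())
points = {feature: round(coef * scale) for feature, coef in coefficients.items()}
print(points)  # e.g. {'age_per_decade': 2, 'male_gender': 1, 'prior_snf_stay': 6}

def total_score(age_years, is_male, prior_snf_stay):
    """Total points for one patient; a cutoff on this total approximates the regression."""
    return (points["age_per_decade"] * (age_years // 10)
            + points["male_gender"] * int(is_male)
            + points["prior_snf_stay"] * int(prior_snf_stay))
```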
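For the PMML option, open-source exporters such as the sklearn2pmml package (for scikit-learn models; R has analogous tools) can write the PMML file you would then import into Epic. A minimal sketch, assuming the logistic regression from the training example and a local Java runtime (which sklearn2pmml requires):

```python
# Hypothetical export of the trained logistic regression to PMML using the sklearn2pmml
# package (pip install sklearn2pmml). Requires Java on the machine doing the export.
from sklearn.linear_model import LogisticRegression
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

pipeline = PMMLPipeline([("classifier", LogisticRegression())])
pipeline.fit(X_train, y_train)  # X_train/y_train from the training sketch above

# Writes an XML file that can then be imported into Epic and mapped to Epic data elements.
sklearn2pmml(pipeline, "snf_rehab_model.pmml", with_repr=True)
```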
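And for the HL7 interface option, the sketch below uses the open-source python-hl7 package to parse a made-up ADT (admit) message of the kind an interface engine such as Tibco would deliver; the message content and field choices are illustrative only.

```python
# Illustrative HL7 v2 handling with the python-hl7 package (pip install hl7). In practice
# these messages arrive from the interface engine rather than as a hard-coded string.
import hl7

# A minimal, made-up ADT^A01 (admit) message; segments are separated by carriage returns.
message = "\r".join([
    "MSH|^~\\&|EPIC|VUMC|MODEL|VUMC|202401011200||ADT^A01|12345|P|2.3",
    "PID|1||MRN12345^^^MRN||DOE^JANE||19600101|F",
    "PV1|1|I|ICU^101^A",
])

parsed = hl7.parse(message)
pid = parsed.segment("PID")
print("MRN:", pid[3], "Name:", pid[5])  # pull identifiers, then run inference and write back
```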
Integrate the Score in Epic
In the video linked below, Dr. Wright goes over some potential applications of the predictive scores within Epic, including Storyboards, patient lists, BestPractice Advisories, and SmartText. More examples are also listed below. The takeaway is that the Nebula machine learning work really only has to be done once; after that, traditional Epic build capabilities allow the model scores to be deeply integrated into basically every facet of the EHR.
Front-end Options within Epic
OurPractice Advisories (OPAs): OPAs are advisory messages that appear within workflows in Epic. Some of them are interruptive (pop-ups) and others are non-interruptive. OPAs can look at most data and most types of model inferences, they are actionable, and we have a fair amount of control over how they appear. You can also use Care Paths for more complex sequential pathways built on the OPA framework.
Patient lists: Patient lists are used for many purposes in Epic, including showing all the patients in a particular unit or being followed by a particular specialty. You can add a column to a patient list that shows a model inference. You can also configure a pop-up that appears when you hover over entries in the column, and configure what happens when you click them. Many other things, like the ED trackboard, OR schedule, and clinic schedules, can also show these columns. It's also possible to create a list based on a model inference (i.e. a list of all patients with a high score on your model).
Reporting Workbench: Like a patient list, a Reporting Workbench report can show a list of patients matching certain criteria (including model scores) and can also show the same columns as a patient list. Reporting Workbench reports often enable bulk actions, such as sending a message to or placing an order for many patients at once.
Storyboard: The storyboard is the patient information column on the left of most screens in Epic. It shows things like the patient's name, age, and allergies. It can also show model scores and flags.
SmartSet and Order Set restrictors: SmartSets and order sets are used in Epic to place orders, and are often organized by condition (for example, sepsis orders, adult admission orders, hip replacement orders). You can show or hide guidance text and orderable items based on model scores using restrictors.
Health Maintenance: Health maintenance is Epic's system for tracking screening and preventative measures, especially those used in primary care. You can use model scores to control which health maintenance items apply to a patient.
Registries: Registries are Epic's population health framework. They feature both inclusion criteria (to decide which patients are in the registry population) and metrics (calculated data elements about patients in the registry). Model scores can be implemented as registry metrics and also used to control who's included in a registry.
SmartText and SmartLinks: Users can use SmartText and SmartLinks to add custom data to their notes. It's possible to create a SmartText or SmartLink that shows a model score, the data elements used to calculate it, and conditional text such as interpretations or recommendations. SmartText and SmartLinks can also be used in other Epic tools, for example in OPAs.
Print groups and reports: Epic has several reusable components for displaying data to users, including print groups, accordion reports, timeline reports, Synopsis, and radar dashboards. All of these can be configured to show model scores.
Active Guidelines: Active Guidelines is an Epic framework for integrating external web applications in Epic. You can build a custom web app that can access context, including the identity of the patient and user, as well as some clinical data. This app can be embedded in Epic and can even be used to place orders.
SMART on FHIR applications: SMART on FHIR is an open standard for integrating web apps into any EHR, including Epic. It has capabilities similar to Active Guidelines.
In Basket: You can use model scores to send In Basket messages to Epic users. It's possible to apply logic to decide who gets messages for a given patient, and in some cases, it can be possible to take actions from the message.
MyChart: MyChart is Epic's patient portal. It's possible to show model scores, inferences, or suggestions within MyChart using several different approaches.
Options outside Epic:
Email: You can email clinicians or staff based on model scores.
Paging or Mobile Heartbeat: You can use VUMC's Alerts and Notifications Framework to send pages or Mobile Heartbeat messages to users who have those tools enabled.
StarPanel: You can use StarPanel to show model scores.
Build an app or tool outside of Epic: You can build external tools, such as standalone web apps, Tableau dashboards, PowerBI dashboards, or SSRS reports.