AI Model Identifies Patients at High Risk of In-Hospital Mortality
FEBRUARY 11, 2020
Nathan Brajer, BS
Nathan Brajer, BS, an analyst at Duke Institute for Health Innovation, and colleagues sought to prospectively and externally validate an artificial intelligence (AI) model to predict in-hospital mortality for all adult patients at the time of hospital admission. They learned that the model demonstrated good discrimination in identifying at-risk patients.
Brajer and the team designed the model to use commonly available electronic health record (EHR) data so it could be implemented at a system level.
The investigators used EHR data from 43,180 hospitalizations, representing 31,003 unique adult patients admitted to a quaternary academic hospital (hospital A) from October 2014 through December 2015. These data formed the training and validation cohort.
Nearly 200 model features were built from 57 EHR data elements, including patient demographic characteristics (5 data elements), laboratory test results (33 data elements), vital signs (9 data elements), and medication administrations (10 data elements), all recorded between the patient's presentation at the hospital and the time of admission.
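One common way a few dozen raw EHR data elements expand into a few hundred model features is by summarizing each element per encounter (for example, minimum, maximum, and mean of each lab value before admission). The sketch below illustrates that idea with pandas; the table, element names, and choice of summary statistics are hypothetical, not the study's actual pipeline.

```python
import pandas as pd

# Hypothetical long-format table of pre-admission lab results:
# one row per (encounter, data element) measurement.
labs = pd.DataFrame({
    "encounter_id": [1, 1, 1, 2, 2],
    "element": ["sodium", "sodium", "creatinine", "sodium", "creatinine"],
    "value": [138.0, 141.0, 1.1, 132.0, 2.3],
})

# Summarize each raw element per encounter; expanding each element
# into several statistics is how ~57 elements can yield ~200 features.
features = (
    labs.groupby(["encounter_id", "element"])["value"]
    .agg(["min", "max", "mean"])
    .unstack("element")
)
# Flatten the (statistic, element) column MultiIndex into flat names.
features.columns = [f"{elem}_{stat}" for stat, elem in features.columns]
```

The result is one row per encounter with columns like `sodium_min` and `creatinine_mean`, ready to feed to a model.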
Brajer and colleagues randomly selected 75% of the hospital encounters for model training and held out the remaining 25% for testing.
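A 75/25 random split at the encounter level can be sketched as follows with scikit-learn. The feature matrix and labels here are simulated stand-ins, and stratifying on the rare mortality outcome is an assumption on my part (the article does not say whether the split was stratified), included so the toy example keeps positives in both partitions.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-ins for the encounter-level feature matrix and labels;
# the real study used ~200 features derived from EHR data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = rng.binomial(1, 0.03, size=1000)  # ~3% event rate, as in the training cohort

# Hold out 25% of encounters for testing; stratify so the rare
# outcome appears in both splits (an assumption, not the study's stated method).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)
```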
A separate sample of 16,122 hospitalizations, representing 13,094 unique adult patients admitted to hospital A from March through August 2018, helped the investigators assess temporal generalizability. The team assessed external generalizability using 2 additional samples from different community-based hospitals (B and C): hospital B contributed a cohort of 6586 hospitalizations representing 5613 unique adult patients, and hospital C contributed a cohort of 4086 hospitalizations representing 3428 unique adult patients.
The investigators integrated the model into the EHR system and prospectively validated it at hospital A.
Overall, the training and the retrospective and prospective validations included 75,247 hospitalizations (median patient age, 59.5 years; 45.9% involving male patients). In-hospital mortality occurred in 2021 hospitalizations (2.7%). In-hospital mortality rates were 3.0% for the training cohort; 2.7% for the retrospective validation at hospital A; 1.8% for the retrospective validation at hospital B; 2.1% for the retrospective validation at hospital C; and 1.6% for the prospective validation cohort.
The area under the receiver operating characteristic curve (AUROC) for the retrospective validations was 0.87 (95% CI, 0.83-0.89) for the 25% held-out test portion of the original training data set (retrospective evaluation, hospital A, 2014-2015); 0.85 (95% CI, 0.83-0.87) for a temporal validation cohort (retrospective evaluation, hospital A, 2018); 0.89 (95% CI, 0.86-0.92) for an external temporal and geographic validation cohort (retrospective evaluation, hospital B, 2018); and 0.84 (95% CI, 0.80-0.89) for another external temporal and geographic validation cohort (retrospective evaluation, hospital C, 2018).
The AUROC for the prospective validation at hospital A in 2019 was 0.86 (95% CI, 0.83-0.90).
The areas under the precision-recall curves were 0.29 (95% CI, 0.25-0.37); 0.17 (95% CI, 0.13-0.22); 0.22 (95% CI, 0.14-0.31); 0.13 (95% CI, 0.08-0.21); and 0.14 (95% CI, 0.09-0.21), respectively.
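For readers unfamiliar with these metrics: AUROC measures how well the model's risk scores rank patients who die above those who survive, while the area under the precision-recall curve is more sensitive to performance on the rare positive class, which is why its values are much lower at a 2-3% event rate. A minimal sketch of computing both on simulated data, with a percentile bootstrap for the confidence interval (the bootstrap is one common approach and an assumption here, not the study's stated CI method):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)

# Simulated validation set: rare outcome (~3%) and a noisy risk score
# that is higher on average for positives, standing in for model output.
y_true = rng.binomial(1, 0.03, size=5000)
scores = rng.normal(loc=y_true * 1.5, scale=1.0)

auroc = roc_auc_score(y_true, scores)
auprc = average_precision_score(y_true, scores)  # area under precision-recall curve

# Percentile bootstrap for a 95% CI on the AUROC (an illustrative
# choice; the article does not specify how CIs were computed).
boot = []
for _ in range(200):
    idx = rng.integers(0, len(y_true), len(y_true))
    if y_true[idx].sum() == 0:
        continue  # a resample needs at least one positive to score
    boot.append(roc_auc_score(y_true[idx], scores[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

Note that even a model with a high AUROC can have a modest AUPRC when the outcome is rare, which matches the pattern in the reported results.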
The findings support the conclusion that machine learning models can be used to predict in-hospital mortality and that such software can be implemented on live EHR data. The benefit-to-cost ratio of deploying the AI in clinical settings could continue to improve as more commonly available EHR data elements are used.
Additional research would help the investigators understand how to effectively implement the models into the clinical workflow, identify opportunities to scale, and quantify the impact on clinical and operational outcomes.
The study, “Prospective and External Evaluation of a Machine Learning Model to Predict In-Hospital Mortality of Adults at Time of Admission,” was published online in JAMA Network Open.