EHRs Fail to Detect 1 in 3 Harmful Drug Interactions, Medication Errors
MAY 29, 2020
David Classen, MD
The findings from David Classen, MD, and colleagues highlighted potentially life-threatening flaws in a technology system commonly used in hospitals nationwide: the team found that these systems consistently failed to detect errors that could injure or kill patients, and that wide variation in safety performance persists across a large sample of hospitals and EHR vendors.
Classen and the investigators designed a computerized physician order entry (CPOE) EHR evaluation tool to collect data from a sample of hospitals over a 10-year period. They aimed to evaluate the overall safety performance of EHRs in preventing potential adverse drug events, to assess hospital EHR safety performance for specific subcategories of potential adverse drug events, and to examine the association between EHR vendor and performance on the Leapfrog CPOE EHR evaluation tool.
The CPOE tool, designed by the investigators, was included as part of the Leapfrog Hospital Survey—a free, annual survey given to hospitals across the US. The tool was a simulation: it used test patients and medication orders based on real-world cases to mimic the experience of a physician writing orders for actual patients, thereby evaluating the safety performance of the EHR.
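In broad strokes, such a simulation presents the EHR with orders that each target one contraindication category and records whether the expected alert fires. A minimal sketch of that scoring idea (the test cases, category names aside, and the `alert_fired` interface are invented for illustration, not the Leapfrog tool's actual design):

```python
from collections import defaultdict

# Hypothetical test orders; the category names mirror those reported
# in the study, but the individual cases are invented.
test_cases = [
    {"category": "drug-allergy", "drug": "penicillin", "patient": "allergic-to-penicillin"},
    {"category": "drug-age", "drug": "diphenhydramine", "patient": "age-87"},
    {"category": "drug-diagnosis", "drug": "metformin", "patient": "renal-failure"},
]

def score_ehr(alert_fired, cases):
    """Per-category percentage of simulated orders that triggered an alert."""
    hits, totals = defaultdict(int), defaultdict(int)
    for case in cases:
        totals[case["category"]] += 1
        if alert_fired(case["drug"], case["patient"]):
            hits[case["category"]] += 1
    return {cat: 100.0 * hits[cat] / totals[cat] for cat in totals}

# A toy "EHR" that only catches allergy conflicts scores 100% on the
# drug-allergy category and 0% on the others.
demo = score_ehr(lambda drug, patient: "allergic" in patient, test_cases)
```

The point of the per-category breakdown is exactly what the study reports: an EHR can be near-perfect on one alert type (drug-allergy) while missing most cases of another (drug-diagnosis).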
The sample included hospitals that took the survey, including the evaluation tool, in at least 1 year from 2009 to 2018. If a hospital took the test more than once in a single year, the team kept the highest-scoring attempt.
Self-reported data were used to determine each hospital's EHR vendor. Vendors with >100 observations were kept as separate vendors, while those with <100 observations were grouped together as "other."
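The two sample-construction rules—keep the best score per hospital-year, then pool low-frequency vendors—can be sketched in a few lines of Python (records and vendor names are hypothetical; the study's actual data pipeline is not published in this article):

```python
from collections import Counter

# Hypothetical records: (hospital_id, year, vendor, overall_score)
records = [
    ("H1", 2018, "VendorA", 62.0),
    ("H1", 2018, "VendorA", 71.5),   # retaken in the same year
    ("H2", 2018, "SmallVendor", 55.0),
    ("H3", 2017, "VendorA", 68.0),
]

# Rule 1: if a hospital took the test more than once in a year,
# keep only the highest-scoring attempt.
best = {}
for hosp, year, vendor, score in records:
    key = (hosp, year)
    if key not in best or score > best[key][1]:
        best[key] = (vendor, score)

# Rule 2: vendors above an observation threshold stay named; the rest
# are pooled into an "other" category. The study used 100 observations;
# the threshold is lowered here so the toy data shows both outcomes.
THRESHOLD = 1
counts = Counter(vendor for vendor, _ in best.values())
labeled = {
    key: ("other" if counts[vendor] <= THRESHOLD else vendor, score)
    for key, (vendor, score) in best.items()
}
```

Keeping the highest-scoring attempt means the reported results are, if anything, a best-case view of each hospital's EHR performance.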
The overall sample included data from 2314 hospitals with at least 1 year of test results, totaling 8657 unique hospital-year observations. A majority of the observations were from medium-sized hospitals with 100-399 beds (51.2%), followed by large hospitals (31.5%) and small hospitals (17.3%).
The mean total CPOE EHR assessment score increased from 53.9% in 2009 to 65.6% in 2018. The mean score for categories representing basic clinical decision support tools rose from 69.8% in 2009 to 85.6% in 2018.
The highest-performing category in every year was drug-allergy contraindications, with the score increasing from 92.9% in 2009 to 98.4% in 2018. The lowest category was drug-diagnosis contraindications, with a mean score of 20.4% in 2009 and 33.2% in 2018. The greatest improvement was seen in the drug-age contraindications category, which went from 17.7% in 2011, when it was added to the test, to 33.2% in 2018. The least improved category was drug-allergy contraindications, which had started near ceiling.
There were 30 different EHRs used across the hospitals, of which 8 had >100 observations and were identified separately as vendors A-H. Vendor A was the largest and had the highest overall score (67.4%) for the full study period.
The investigators used multivariable regression analysis and controlled for observable hospital characteristics such as size, teaching status, ownership model, system membership, and location. Three vendors had significantly higher assessment scores: vendor A (β=11.26; 95% CI, 8.10-14.42; P <.001), vendor G (β=5.49; 95% CI, 0.77-7.81; P=.02), and vendor C (β=3.57; 95% CI, 0.32-6.81; P=.03).
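This kind of vendor comparison amounts to a linear regression of the assessment score on vendor indicator variables plus hospital controls, where each β is a vendor's estimated effect relative to a reference group. A minimal sketch on synthetic data (every number below is illustrative, not the study's; the "true" effects are planted so the fit can recover them):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic hospital-year observations (illustrative only).
vendor = rng.integers(0, 3, size=n)      # 0 = reference vendor, then 1 and 2
beds = rng.integers(50, 600, size=n)     # hospital-size control
teaching = rng.integers(0, 2, size=n)    # teaching-status control

# Assumed "true" vendor effects on the score, in percentage points.
true_beta = np.array([0.0, 11.0, 5.5])
score = (55.0 + true_beta[vendor]
         + 0.005 * beds + 2.0 * teaching
         + rng.normal(0.0, 3.0, size=n))

# Design matrix: intercept, vendor dummies (reference omitted), controls.
X = np.column_stack([
    np.ones(n),
    (vendor == 1).astype(float),
    (vendor == 2).astype(float),
    beds.astype(float),
    teaching.astype(float),
])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
# coef[1] and coef[2] estimate the vendor effects relative to the
# reference vendor, analogous to the study's reported β values.
```

Omitting one vendor as the reference is what makes each β interpretable as a difference from that baseline rather than an absolute score.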
Overall, in 2009, EHRs correctly issued warnings or alerts about potential medication problems 54% of the time; by 2018, they detected about 66% of such errors.
“These systems meet the most basic safety standards less than 70% of the time,” Classen and the team concluded. “These systems have only modestly increased their safety during a 10-year period, leaving critical deficiencies in these systems to detect and prevent critical safety issues.”
The study, “National Trends in the Safety Performance of Electronic Health Record Systems From 2009 to 2018,” was published online in JAMA Network Open.