
Closing the Electronic Health Record Usability Gap

By Raj M. Ratwani, PhD, Christine A. Sinsky, MD, and Edward R. Melnick, MD, MHS

Two recent studies on the usability of electronic health records (EHRs) use the same standardized satisfaction metric, but come to very different conclusions.

One study, using data generated by vendors, found satisfaction to exceed the average benchmark for products across industries (a “C” grade), while the other study, using data from a representative sample of practicing physicians, reports satisfaction as failing (an “F” grade). In this piece, we explore the likely reasons for this discrepancy and propose a path forward.

Introduction

Electronic health records (EHRs) offer tremendous benefits for patient safety, communication, and care coordination. However, the usability of EHRs, defined as the extent to which this technology can be used efficiently, effectively, and satisfactorily, continues to frustrate physicians. Poor EHR usability contributes to physician burnout and can result in medical errors that may harm patients.

Usability satisfaction is commonly measured with the System Usability Scale (SUS), a validated 10-question instrument administered to usability test participants after they complete tasks on the product being tested. SUS scores range from 0 to 100, with higher scores indicating greater usability satisfaction. A score of 68 is considered the average benchmark across industries, while a score of 80 is considered above average. The SUS provides a quick and reliable way to assess usability.
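For readers curious how ten questionnaire ratings become a 0-100 score, the sketch below implements the standard published SUS scoring rule (odd-numbered, positively worded items contribute their rating minus 1; even-numbered, negatively worded items contribute 5 minus their rating; the sum is scaled by 2.5). The example responses are hypothetical.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 ratings.

    Standard SUS scoring: odd-numbered (positively worded) items
    contribute (rating - 1); even-numbered (negatively worded) items
    contribute (5 - rating). The resulting 0-40 sum is scaled to 0-100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten ratings, each between 1 and 5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 is item 1 (odd-numbered)
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Hypothetical respondent, mostly satisfied with the product:
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # 80.0 -- above average
```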

Figure 1. EHR usability reality gap. Figure adapted from: Melnick et al. The Association Between Perceived Electronic Health Record Usability and Professional Burnout Among US Physicians. Mayo Clin Proc. 2020;95(3):476-487. With permission from Taylor & Francis publishing, License Number 4785430742054.

The Office of the National Coordinator for Health Information Technology (ONC), part of the Department of Health and Human Services, created a certification program to promote the standards, security, and functionality of EHRs. As part of this program, EHR vendors must measure satisfaction using the SUS or a similar measure for certain capabilities as part of their usability testing. The average vendor-reported SUS score across 27 of the most widely used EHR products was 75 (Figure 1). These scores did not meaningfully increase over time (SUS 73.2 in 2014 vs. 75.0 in 2015), suggesting that despite greater attention to EHR usability, satisfaction may not be improving.

In contrast, a study of a nationally representative sample of 870 physicians asked to broadly reflect on the EHR they currently use found an average SUS score of 45.9, which is considered an “F” grade.

EHR Usability Testing Challenges Contributing to the Reality Gap

First, the vendor-reported SUS scores are based on testing conducted by the vendor (or a designated contractor) using self-created test case scenarios in a controlled environment.

Closer inspection of these test cases has shown they do not resemble actual clinical scenarios and, therefore, do not provide rigorous usability testing conditions. Further, the testing environment does not accurately mimic the actual environment in which EHRs are used, lacking variables such as task interruptions, increased noise levels, and other potential stressors. Consequently, the high satisfaction scores from certification testing likely reflect EHR interaction at a very basic level that fails to account for the clinical context and its effect on the actual EHR user experience.

Second, the implemented EHR is often configured and customized in ways that make it dramatically different from the vendor product tested during certification. As a result, the certification testing may not accurately reflect the product as used by frontline clinicians to treat patients. The usability satisfaction scores based on implemented products reflect the actual set-up of those products and, therefore, may be a better representation of the actual usability of current EHR products.

Finally, research examining the backgrounds of participants in certification testing has shown that some EHR vendors use non-clinical participants to usability test clinical EHR capabilities. Usability test participants should represent the end-user population: when clinical functions intended for use by physicians or nurses are being tested, the participants should have that background. Using participants who lack the appropriate background may lead to erroneous usability test results.

Closing the Reality Gap

The EHR usability reality gap is a result of certification testing not accurately representing actual use of the EHR by clinicians in their clinical environment. Until the ONC, EHR vendors, and other stakeholders take action to make certification testing better resemble actual use, the reality gap will persist, with physicians continuing to slog through EHR work they find clunky, and poor usability continuing to put patients at risk.

The three challenges contributing to the reality gap should be addressed with a two-pronged approach: immediate actions paired with long-term optimization of certification policy. In the short term, the test cases used during vendor testing should better resemble actual clinical scenarios, and the testing environment should more closely approximate the noisy, interruption-rich, multitasking care context. The product tested should represent what is actually implemented in the care setting. Usability test participants should represent the intended users and have varying levels of comfort with technology, to prevent samples that over-represent individuals with extensive technology expertise and experience.

The long-term solution requires optimizing certification policy, which can be achieved through the 21st Century Cures Act and the ONC's proposed rules on real-world testing of EHRs.

While real-world testing currently focuses on the interoperability of implemented EHRs, it could be extended to require usability testing of implemented EHRs as well. Such testing could be performed in partnership between the EHR vendors and the healthcare facilities adopting their technology. Because certification would depend on successfully addressing identified issues, this approach could bring greater transparency to usability challenges and incentivize both stakeholders to address them. While this proposal would increase the volume of testing performed, the resources required could be shared between vendors and healthcare facilities.

Ultimately, certification testing is useful only if the product being tested adequately represents actual clinical use; otherwise it is artificial. Requiring usability testing of implemented products would help close the EHR usability reality gap.
