The Impact of Alert Fatigue in Healthcare

8th Feb 2020

The 1990s brought the rapid digitalization of society: the share of the US population with internet access increased by 58 percent between December 1998 and August 2000. Computerization rapidly changed how the country communicated, received news, and shopped. It was inevitable that the healthcare sector would be next to adopt information technology, and in 2001 the Institute of Medicine (IOM) endorsed the importance of digitalizing healthcare. According to the IOM, “information technology must play a central role in the redesign of the health care system if a substantial improvement in health care quality is to be achieved…”[1] The enactment of the Affordable Care Act (ACA) in March 2010 once more promoted the development of electronic health records (EHRs) to improve the quality of health care. Support for health information technology was deemed a necessary condition for many of the ACA’s initiatives, and EHR implementation expanded greatly throughout the health care system.[2]


The introduction of computerization into any arena produces benefits, and in health care the rapid digitalization has been advantageous to both patients and providers. Health care workers no longer had to decipher physician handwriting that looked like hieroglyphics. Clinical decision support would alert providers to drug allergies and interactions, barcode scanning would eliminate medication administration errors, and pharmacy robots would select the correct medication to dispense to the patient.

However, the integration of technology into medicine has also endured many rough patches. Newer and more sophisticated programs in the clinical environment and provider workflows have yielded great benefits for quality and patient safety, but as with any new process implementation, there have been unintended and unanticipated consequences. Designs and products promoted to improve patient safety have created new types of threats and medication errors. Technologies such as computerized provider order entry (CPOE), smart-pump medication delivery devices, and automated dispensing cabinets (ADCs) are programmed to warn clinicians when unsafe situations arise and interventions need to be made. However, on any given day there are a staggering number of alerts, as well as multiple different alert-generating devices.[3] In one study, CPOE generated warnings for physicians on 3-6 percent of all orders entered. This translates into dozens of warnings per day, and it does not even account for the warnings generated for pharmacists, who often receive the vast majority of alerts.[4]

The term alert fatigue describes how busy health care clinicians become overwhelmed by and desensitized to computer-generated safety alerts and, as a result, ignore or fail to respond appropriately to the warnings. Alert fatigue occurs when too many red flags are triggered across the applications in which patient information is entered. It can be compared to the law of diminishing returns in economics, to the saturation point in chemistry, and to Shakespeare’s As You Like It: “Why then, can one desire too much of a good thing?”[5]

Unfortunately, the vast majority of alerts generated by these systems do not warrant action and deserve to be ignored. The threat to patient safety arises when, because of the sheer number of alerts, critical alerts that warn of impending serious harm are ignored as well. Alert fatigue is now recognized as a major unintended consequence of health care computerization. The ECRI Institute, a nonprofit medical safety organization, listed alert fatigue as a top technology hazard.[6] The proliferation of alerts in electronic health records and supporting technologies was intended to improve patient safety but has paradoxically increased the likelihood that patients will be harmed.[7]

Clinical alerts have been shown to reduce adverse effects of medications, resulting in fewer deaths, disabilities, and hospitalizations and subsequently lowering health care costs. They are not always beneficial, however, and patient harm can occur when low-value or false-positive alerts appear. In one study, 331 alerts were needed to prevent one adverse drug event.[8]

Consider the case of a teenager who received 38 times the normal dose of an antibiotic because the relevant clinical alert was overshadowed by a large number of clinically insignificant alerts. Pablo Garcia, 16 years old, had a rare genetic disease known as NEMO syndrome, which leads to a lifetime of frequent infections and bowel inflammation. In July 2013, he was admitted to UCSF Medical Center to undergo a routine colonoscopy. He received a bowel prep regimen that night as well as his evening medications, which included steroids and antibiotics. Soon after, he began complaining of numbness and tingling, and the on-call physician was summoned. It was then discovered that Pablo had received 38.5 tablets of Septra, 37.5 more than he should have, and the next morning Pablo suffered a grand mal seizure due to the overdose.

This error began with a physician entering an order into the EHR upon patient admission. Pablo took one tablet of Septra DS twice daily as prophylaxis for his skin infections. This dose was appropriate for him, and the physician wanted to continue it upon admission. UCSF Medical Center used Epic as its electronic health record, and while dose limits can be programmed into the EHR, UCSF had decided not to set them because it was a teaching hospital: it treated many patients with rare diseases who were also on research protocols, and for these patients high or unusual doses of medications would often be acceptable. Additionally, for pediatric patients the informatics committee had decided to require weight-based dosing for every patient under 40 kg. If dose rounding was necessary to conform with commercially available strengths and the change in dose was greater than 5% of the calculated dose, the policy required the pharmacist to contact the physician to confirm the conversion was acceptable. Pablo weighed 38.6 kg, so the calculated dose of Septra was 5 mg/kg of trimethoprim, or 193 mg.
The closest commercially available dose to 193 mg was 160 mg, or one tablet of Septra DS. When the physician chose to continue with this dose, she assumed she was ordering one tablet. The order then populated the pharmacist’s verification queue as 193 mg, which was 17% greater than the available 160 mg tablet and prompted the pharmacist to contact the physician per policy. The physician then attempted to reorder the dose as 160 mg but accidentally entered it as 160 mg/kg, because that was how the default dosing was set up in Epic based on this patient’s weight. The dose was therefore calculated as 6,160 mg, or 38.5 tablets. The physician signed the order, and an alert fired warning her of the overdose, but she took no action.
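The chain of calculations behind this error can be sketched in a few lines. This is a hypothetical illustration of the rounding policy and the mg versus mg/kg confusion described above; the function names and structure are invented, not UCSF’s or Epic’s actual logic.

```python
# Illustrative sketch of the weight-based dosing policy described above.
# All names and the code structure are hypothetical.

def total_dose_mg(dose: float, weight_kg: float, per_kg: bool) -> float:
    """Return the total dose in mg; per_kg mirrors the mg vs. mg/kg unit field."""
    return dose * weight_kg if per_kg else dose

def round_to_half_tablets(dose_mg: float, tablet_mg: float) -> float:
    """Round a dose to the nearest half tablet of the available strength."""
    return round(dose_mg / tablet_mg * 2) / 2

def needs_pharmacist_call(calculated_mg: float, rounded_mg: float,
                          threshold_pct: float = 5.0) -> bool:
    """Per the policy, flag the order when rounding shifts the dose by more than 5%."""
    return abs(calculated_mg - rounded_mg) / calculated_mg * 100 > threshold_pct

TABLET_MG = 160.0   # trimethoprim content of one Septra DS tablet
WEIGHT_KG = 38.6    # Pablo's weight, just under the 40 kg weight-based cutoff

# Intended order: 5 mg/kg -> ~193 mg -> 1 tablet; a ~17% change,
# so the pharmacist must call the physician.
intended = total_dose_mg(5, WEIGHT_KG, per_kg=True)
tablets = round_to_half_tablets(intended, TABLET_MG)           # 1.0 tablet
print(needs_pharmacist_call(intended, tablets * TABLET_MG))    # True

# The re-entry error: 160 interpreted as mg/kg instead of mg.
erroneous = total_dose_mg(160, WEIGHT_KG, per_kg=True)         # ~6,176 mg
print(round_to_half_tablets(erroneous, TABLET_MG))             # 38.5 tablets
```

Note that the erroneous order looks plausible at every mechanical step: the arithmetic is self-consistent, and only the unit on the entered value is wrong.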

UCSF Medical Center had chosen to disable thousands of alerts built into the Epic database; despite this, of the 350,000 medication orders processed per month, pharmacists still received pop-up alerts on half. In the course of one month, physicians received 17,000 alerts. In this case, the physician assumed that the alert she was receiving was yet another one with no clinical significance, so she paid it no attention. Senior residents at the medical center were known to advise juniors to simply ignore all alerts and pop-ups, so the physician felt quite comfortable doing so. The order then made its way back to the pharmacist, who saw only the 160, which was what he expected since it was what he had communicated to the physician. Of note, the 6,160 mg dose ordered looked deceivingly similar to 160 mg, and while a massive overdose had just been ordered, the alert that fired in Epic for both the physician and the pharmacist looked like any other alert; nothing highlighted the severity of the overdose. The pharmacy satellite where the pharmacist was stationed was cramped and very busy. He was constantly interrupted by phone calls and continually answering the door for nurses coming to pick up medications. It was not unusual to be interrupted six or seven times while working on any given patient’s medication order, so it was no surprise that he, too, overrode the alert. Each interruption in the medication process is known to increase the likelihood of an error. Once the order was verified by the pharmacist, the pharmacy robot dutifully prepared the dose and carried out the dispensing process as it was designed to do.
The nurse who administered the dose was unfamiliar with the unit, inexperienced, and afraid to speak up, and the barcode medication administration and eMAR systems falsely reassured her. And so Pablo Garcia received 38.5 tablets of Septra.

James Reason’s Swiss cheese model of error holds that all complex organizations harbor many latent errors, or mistakes waiting to happen. On most occasions, the errors are caught in time, and even if the first layer of protection is breached, the second or third layer catches them. When all the layers are breached, the Swiss cheese holes have aligned, the system has failed, and an error results. After the 1999 IOM report estimating that nearly 100,000 patients in the United States die each year due to medical errors, a massive patient safety movement was launched, and computerization was touted as a promising fix. Computerization in health care did solve many problems, such as illegible handwriting and misplaced decimals, but it also added complexity, new challenges, and new hazards. Pablo Garcia survived the massive Septra overdose, but only after a failure of multiple layers aligned the holes perfectly. Reason also holds that most errors are made by competent people, and that fortifying the system by adding layers or shrinking the holes is far more productive than blaming individuals.

After this incident, one of the steps UCSF Medical Center took to prevent an error of this nature from recurring was to tackle alert fatigue by forming a committee to review alerts; two years later, only 30 percent of alerts had been removed. These are the kinds of issues the health care industry faces: finding a balance between reducing excessive alerts and not removing the alerts that genuinely protect patient safety.[9]

The tragic death of a 12-year-old child with congenital long QT syndrome in 2015 highlights the need for clinical decision support systems and brings to light the failure of one hospital that chose to disable alerts. A physician prescribed Zithromax for the child to treat otitis media and sinusitis using the hospital’s EHR system. Zithromax has been associated with QT interval prolongation, and after taking the medication for four days the child developed torsades de pointes and died despite efforts to save her. Citing alert fatigue, the hospital had disabled its drug-disease interaction alerts, so no warning fired for the physician or the pharmacist.[10]

CPOE alerts represent a small fraction of the alerts that health care workers receive on any given day. A 2011 investigation by the Boston Globe found that between January 2005 and June 2010, 216 deaths in the US were attributed to alarm malfunction or alarm fatigue.[11] In early 2013, Barbara Drew, a UCSF researcher, set out to quantify the problem of alarm fatigue in UCSF Medical Center’s 66-bed ICU. A total of 381,560 audible alerts fired during a 31-day period, or roughly 187 alarms per bed per day. Of the alerts that fired for arrhythmias, 89 percent were found to be false positives.[12]
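The headline rate follows directly from the study’s own figures; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the UCSF ICU alarm figures cited above.
total_audible_alarms = 381_560   # audible alerts during the study period
icu_beds = 66
study_days = 31

per_bed_per_day = total_audible_alarms / (icu_beds * study_days)
print(f"{per_bed_per_day:.1f} alarms per bed per day")  # prints 186.5, i.e. the ~187/bed/day reported
```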

Studies on alert fatigue consistently show three main findings. First, alerts are only modestly effective at best. Second, alert fatigue is common: clinicians override the vast majority of alerts, including those deemed critical and capable of causing serious harm. Clinicians have been found to override alerts between 49 and 96 percent of the time.[13] Third, the more alerts clinicians receive, the greater the potential for alert fatigue. While this finding is intuitive, it implies that the consequences of alert fatigue are likely to grow over time, and alert fatigue has accordingly become a high-profile patient safety issue. In April 2015, The Joint Commission released a sentinel event alert calling on health care organizations to pay close attention to information technology as a safety issue; to mitigate alert fatigue, it recommended improving the culture of safety by creating a shared sense of responsibility between platform developers and end users.[3]

Busy clinicians rely on equipment and technology to carry out the life-saving interventions they are trained to perform, and it is assumed that technology will improve patient outcomes; however, interactions between machines and the people who rely on them sometimes increase the risk of a disastrous occurrence. Human factors engineering is the discipline that attempts to identify and address these issues. It takes into account human strengths and limitations in the design of interactive systems involving people and technology to ensure safety and effectiveness, focuses on how systems work in actual practice, and attempts to optimize safety and minimize the risk of errors in complex environments. Human factors engineering has been used to improve safety in industries such as aviation and automobile manufacturing, but its application to health care is relatively recent.[14]

Solving alert fatigue will also require a marriage between informatics and human factors engineering, because the fundamental problem arises both from the technology itself and from the interaction of busy humans with that technology. Pablo Garcia was nearly killed by a medication error, and this error demonstrates that solutions spanning computerized systems and human factors need to be broadly based and aligned. When the staff involved in this incident were questioned, one issue that emerged was that the clinicians trusted the computers more than they trusted themselves. As the computers demonstrate their accuracy and trustworthiness, that bias grows.[9]

The aviation industry provides an example of how human factors engineering has improved safety: like medicine, aviation requires professionals to perform tasks in high-stakes environments, and the industry’s approach has been learned from tragedies. Aviation has prioritized warnings and cockpit alerts and has worked very hard to avoid the false positives that cause pilots to tune out. Alerts are separated into a hierarchy of significance. Red lights, flashes, voice alerts, and stick shaking indicate an impending stall; action must be taken immediately to prevent the plane from falling out of the sky. The next level consists of warnings that require immediate attention but do not directly threaten the flight path: red lights and a voice alarm, but no stick shaking; there are about 40 of these. Of note, the color red is used only for high-level warnings. The next level is a caution, covering about 150 situations; a caution requires immediate awareness but not instant action, its lights and text are amber, and there is only one visual alert. The final level is an advisory, where no action is required but the pilot should be aware of the condition; advisories are amber text messages. For every kind of alert, a checklist matched to the triggered problem automatically pops up on a central screen to help guide the crew to a solution. Boeing relies on a team of experts to make these judgment calls and, unlike health care, resists the urge to warn the flight crew about everything. Because of this process, fewer than 10 percent of flights have any alerts triggered.[9]

The use of human factors engineering and deep attention to the experience of the end user has thus far been lacking in health care technology design. While many steps can be taken to address alert fatigue, system developers have been reluctant to remove alerts for fear of litigation and of being held liable should patients be harmed in the absence of a warning.[15] While program developers have been slow to address the excessive number of alerts, the software does include functionality that allows hospitals and health systems to turn alerts off.

One step that can be taken to address the sheer number of alerts is increasing their specificity and eliminating inconsequential ones. Task forces consisting of informatics personnel, administrators, and end users must be established, which takes time, money, and manpower. The team should review why alerts are firing and how they can be tuned. Group Health Cooperative of South Central Wisconsin (GHC-SCW) undertook such an initiative, employing a holistic strategy that leveraged industry-accepted metrics and clinical staff input. GHC-SCW implemented a filtering strategy that reduced the number of alerts firing, so that more relevant alerts were delivered to providers. Within 60 days, providers were taking action on nearly twice as many alerts as before, rather than overriding them.[16]

While reducing or eliminating alerts is important, it is not the only strategy for combating this complex issue. Targeted alerts based on patient characteristics and indicators such as laboratory or test results should also be developed. Epic Systems is working on software that might target alerts based on patients’ health conditions by incorporating more parameters and filters into the data. For example, renal function test results could be incorporated into the alert system so that alerts for nephrotoxic medications trigger only for patients at higher risk. Another example is a cancer patient who needs higher doses of pain medications than normally recommended; a smart system would differentiate this patient from one who does not need high doses. Similarly, when a patient has a flagged penicillin allergy but has taken a cephalosporin in the past without issue, the clinical decision support system could recognize this history and refrain from firing another alert when a subsequent cephalosporin order is entered. Such changes could limit distractions so that clinicians focus on the alerts that matter for each specific patient.
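The two targeting ideas above can be expressed as simple patient-context rules. This is an illustrative sketch only; the record fields, thresholds, and rule names are hypothetical examples, not Epic’s actual implementation.

```python
# Illustrative sketch of context-aware ("targeted") alert filtering.
# Fields, thresholds, and rule names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Patient:
    egfr: float                               # renal function, mL/min/1.73 m^2
    allergies: set = field(default_factory=set)
    tolerated_drugs: set = field(default_factory=set)

def fire_nephrotoxicity_alert(patient: Patient, egfr_cutoff: float = 60.0) -> bool:
    """Fire the nephrotoxic-drug alert only when renal function is reduced."""
    return patient.egfr < egfr_cutoff

def fire_cross_allergy_alert(patient: Patient, ordered: str, flagged: str) -> bool:
    """Suppress a cross-allergy alert if the patient has already
    tolerated the ordered drug (e.g. a cephalosporin despite a
    flagged penicillin allergy)."""
    return flagged in patient.allergies and ordered not in patient.tolerated_drugs

pt = Patient(egfr=85.0, allergies={"penicillin"}, tolerated_drugs={"cefazolin"})
print(fire_nephrotoxicity_alert(pt))                         # False: normal renal function
print(fire_cross_allergy_alert(pt, "cefazolin", "penicillin"))  # False: previously tolerated
```

The design choice is the point: the alert logic consumes patient data that already exists in the record, so fewer, more relevant alerts fire without removing any rule outright.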

A third mechanism for managing alert fatigue is a tiered alert system, in which warnings are presented differently according to their severity and clinical consequence, similar to how aviation handles cockpit alerts. Only the most severe interactions would require hard stops and interruptions. Anecdotal experience suggests that how alerts are presented can have a major impact on compliance rates, but few studies have compared presentation styles. In 2005, Partners HealthCare performed a retrospective analysis of data on hospitalized patients at two academic medical centers over a one-year period. Both inpatient CPOE systems used the same alert service, but one displayed alerts by severity level using a tiered presentation while the other did not. In the tiered system, the alerting modules required a response from the clinician only for severe interactions; less serious ones were presented in a non-interruptive fashion. The tiered system had three levels of alerts. Level 1 alerts are the most serious and are considered life-threatening; they are set up as hard stops, and the clinician must either cancel the order being entered or discontinue the pre-existing order. Level 2 alerts are less serious but still require action: the clinician must either discontinue one of the drugs or select an override reason. Override reasons are presented as a pick list of the most frequent reasons; multiple reasons may be selected, and free text can be added if no suitable reason is listed. Level 3, the least serious, contains the largest proportion of alerts; these are presented as information only, require no action of any kind, and need no keystroke because the presentation uses the available screen space.
A total of 71,350 alerts were reviewed, of which approximately 39,000 occurred at the non-tiered site and 32,000 at the tiered site. Compliance at the tiered site was significantly higher, with 100% of the most severe alerts accepted versus 34% at the non-tiered site. The moderately severe (Level 2) alerts were also more likely to be accepted at the tiered site.[17]
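The three-tier logic described above can be sketched as a small dispatch routine. The level definitions follow the study’s description; the code structure and return strings are purely illustrative.

```python
# Minimal sketch of the three-tier alert handling described above.
# Level semantics follow the study; everything else is illustrative.
from enum import Enum
from typing import Optional

class Tier(Enum):
    LEVEL_1 = 1   # life-threatening: hard stop
    LEVEL_2 = 2   # serious: requires an override reason or discontinuation
    LEVEL_3 = 3   # informational: displayed passively, no action required

def handle_alert(tier: Tier, override_reason: Optional[str] = None) -> str:
    if tier is Tier.LEVEL_1:
        # Hard stop: cancel the new order or discontinue the existing one.
        return "blocked: cancel or discontinue required"
    if tier is Tier.LEVEL_2:
        # Interruptive, but the clinician may proceed with a documented reason.
        if override_reason is None:
            return "blocked: override reason required"
        return f"proceed: overridden ({override_reason})"
    # Level 3: non-interruptive, shown in available screen space.
    return "proceed: informational only"

print(handle_alert(Tier.LEVEL_1))                        # blocked: cancel or discontinue required
print(handle_alert(Tier.LEVEL_2, "patient tolerating"))  # proceed: overridden (patient tolerating)
print(handle_alert(Tier.LEVEL_3))                        # proceed: informational only
```

Note that only Level 1 is an unconditional interruption; this is the property the study credits for the tiered site’s higher compliance.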


This is one type of tiered alert system. Other tiered approaches use color coding to differentiate alert types: red for severe warnings and interruptive hard stops, yellow for moderate and less severe ones. After Pablo Garcia’s Septra overdose, aside from forming a committee to review alerts, UCSF Medical Center blocked any attempt to prescribe more than nine pills in a single dose. Even this can become complex and have unintended consequences, especially in an age of drug shortages, when sometimes the only strengths available are low doses and more units may legitimately be needed to make up the prescribed dose.

Alert fatigue and the other unintended consequences of health care computerization have only recently been recognized but have become high-profile patient safety issues. There is intense interest in developing specific methods to combat alert fatigue but no consensus on the way forward. UCSF Medical Center formed a committee to review all of its alerts and, after two years, had succeeded in removing only about 30 percent of them from the system. The sophisticated analytics are not yet available, even from the software developer, Epic. It will be imperative for clinicians and developers to work together to tackle these issues. Clinicians must be willing to provide meaningful input so that developers can design technologies with the functionality that clinical decision support requires.

Beyond informatics principles, solving alert fatigue will require applying human factors engineering principles to alert design, because the problem arises not just from the technology but from human interaction with it. In the case of Pablo Garcia, the pharmacist’s error was owed in part to the working conditions in the pharmacy and the number of interruptions he experienced. Making health care safer also requires that organizations develop a just culture of safety, making it possible for every employee to speak up and question when they feel unsure, without fear of reprimand when something goes wrong. Additionally, it is imperative that we not over-trust technology and assume it is correct; critical thinking must continue to exist and thrive.

Alert fatigue is a complex and growing health care safety concern that will continue to worsen as reliance on computerization increases. While computerization has made many things in health care safer, one of the biggest unintended consequences of digitalization is alert fatigue. The issue is complex, and while system developers can tailor alerts to an extent, they are reluctant to do so because of liability concerns. One potential solution is stronger governmental regulation and guidelines permitting the minimization of alert fatigue and improving the safety performance of decision support systems. Since there is no clear consensus from a national perspective, there are steps hospitals and health systems can take to tailor their own systems and increase compliance: increasing alert specificity and removing inconsequential alerts, targeting alerts to patient characteristics and disease states, tiering alerts by severity, and applying interruptive alerts only at high severity levels. In addition to these informatics fixes, the health care industry, in collaboration with system developers, must apply human factors engineering principles to the design of alerts. A multi-faceted approach is required to tackle this complex and significant patient safety issue.

References:


[1] Committee on Quality of Health Care in America, Institute of Medicine. Crossing the quality chasm: A new health system for the 21st century. National Academies Press: 2001, 147.

[2] Fontenot, S. The Affordable Care Act and Electronic Health Records. PEJ. November 2013:72-76. https://sarahfontenot.com/wp-content/uploads/2015/04/5-Dec-2013-Will-EHRs-Improve-Quality-Article.pdf. Accessed December 4, 2018.

[3] Alert Fatigue. AHRQ Patient Safety Network. https://psnet.ahrq.gov/primers/primer/28/alert-fatigue. August 2018. Accessed December 4, 2018.

[4] Isaac, T et al. Overrides of Medication Alerts in Ambulatory Care. Arch Intern Med. 2009;169(3):305-311. 

[5] Cash, J. Alert Fatigue. American Journal of Health-System Pharmacy December 2009, 66 (23) 2098-2101. http://www.ajhp.org/content/66/23/2098. Accessed December 6, 2018.

[6]  ECRI Institute. Top 10 health technology hazards for 2015: a report from Health Devices. November 2014. https://www.ecri.org/Documents/White_papers/Top_10_2015.pdf. Accessed December 4, 2018.

[7] Ash JS, et al. The extent and importance of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc. 2007; 14:415-423.

[8] Rush, J, et al. Improving Patient Safety by Combating Alert Fatigue. J Grad Med Educ. 2016 Oct; 8(4): 620–621. Accessed December 4, 2018.

[9] Wachter R. The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age. New York, NY: McGraw-Hill; 2015.

[10] The Absence of a Drug-Disease Interaction Alert Leads to a Child’s Death. ISMP. May 21, 2015. https://www.ismp.org/resources/absence-drug-disease-interaction-alert-leads-childs-death. Accessed December 8, 2018.

[11] Kowalczyk, L. No Easy Solutions for Alarm Fatigue. Boston Globe. February 14, 2011. http://archive.boston.com/lifestyle/health/articles/2011/02/14/no_easy_solutions_for_alarm_fatigue/?page=full. Accessed December 8, 2018.

[12] Drew, B., et al. Insights into the Problem of Alarm Fatigue with Physiologic Monitor Devices: A Comprehensive Observational Study of Consecutive Intensive Care Unit Patients. PLOS. October 22, 2014. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0110274. Accessed December 8, 2018.

[13] Van der Sijs H, et al. Overriding of drug safety alerts in computer physician order entry. J Am Med Inform Assoc. 2006; 13:138–47.

[14] Human Factors Engineering. AHRQ Patient Safety Network. https://psnet.ahrq.gov/primers/primer/20/Human-Factors-Engineering. August 2018. Accessed December 7, 2018.

[15] Kesselheim, A, et al. Clinical Decision Support Systems Could Be Modified To Reduce ‘Alert Fatigue’ While Still Minimizing The Risk Of Litigation. Health Aff (Millwood). 2011; 30:2310-2317.

The 1990’s brought on the rapid development of digitalization within society; the share of the US population with internet access increased by 58 percent between December 1998 and August 2000. Computerization rapidly brought on changes to how the country communicated, received news and even shopped. It was inevitable that the healthcare sector would be the next to implement information technology and in 2001, the Institute of Medicine(IOM) further endorsed the importance of digitalizing healthcare. According to IOM, “information technology must play a central role in the redesign of the health care system if a substantial improvement in health care quality is to be achieved…”[1] The enactment of the Affordable Care Act (ACA) in March 2010, once more promoted the development of electronic health records to improve the quality of health care. Support for health information technology was deemed a necessary condition for many of ACA’s initiatives, and EHR implementation greatly expanded throughout the health care system. [2]

The introduction of computerization into any arena has produced benefits, and with health care the rapidly increasing digitalization has been advantageous to both patients and providers. No longer did health care workers have to decipher physician handwriting that looked like Egyptian hieroglyphics. Clinical decision support would alert providers to drug allergies and interactions, barcode scanning would eliminate medication administration errors and pharmacy robots would select the correct medication to be dispensed to the patient.

However, the integration of technology into medicine has also endured many rough patches along the way. The introduction of newer and more sophisticated programs to the clinical environment and provider workflows have provided great benefits towards improving quality and patient safety but as with any new process implementation, there have been unintended and unanticipated consequences. Designs and products that were promoted to improve patient safety have resulted in new types of threats and medication errors. Technologies such as computerized provider order entry (CPOE), smart pump medication delivery devices and automated dispensing cabinets (ADC) are programmed to provide warnings to clinicians when unsafe situations arise, and interventions need to be made. However, on any given day there are a staggering number of alerts as well as multiple different alert-generating devices.[3] In one study, CPOE generated warnings for physicians on 3-6 percent of all orders that were entered. This translates into dozens of warnings per day and doesn’t even account for the number of warnings that are generated for pharmacists, who often receive the vast majority of alerts.[4]

The term alert fatigue describes how busy health care clinicians become overwhelmed and desensitized to computer generated safety alerts and as a result, ignore or fail to respond appropriately to the warnings. Alert fatigue occurs when too many red flags are triggered across applications in which patient information is entered and can be compared to the law of diminishing returns in economics, to the saturation point in chemistry, and to Shakespeare’s As You Like It, “Why then, can one desire too much of a good thing?”[5]

Unfortunately, the vast number of alerts that are generated by systems do not warrant action and deserve to be ignored. The threat to patient safety occurs when because of the high number of alerts, critical alerts that warn of impending serious harm are also ignored. Alert fatigue is now recognized as a major unintended consequence of health care computerization. The ECRI Institute, a nonprofit medical safety organization, listed alert fatigue as a top technology hazard.[6] The proliferation of alerts in electronic health records and supporting technologies was intended to improve patient safety but has resulted in a paradoxical increase in the likelihood that patients will be harmed. [7]

While clinical alerts have been shown to reduce adverse effects of medications, resulting in fewer deaths, disabilities and hospitalizations and subsequently lowering health care costs, they are not always beneficial, and patient harm can occur when low-value or false-positive alerts appear. In one study, 331 alerts were needed to prevent one adverse drug event.[8]

Consider the case of a teenager who received 38 times the normal dose of an antibiotic because the critical alert was overshadowed by a large number of clinically insignificant ones. Pablo Garcia, 16 years old, had a rare genetic disease known as NEMO syndrome, which leads to a lifetime of frequent infections and bowel inflammation. In July 2013, he was admitted to UCSF Medical Center to undergo a routine colonoscopy. He received a bowel prep regimen that night as well as his evening medications, which included steroids and antibiotics. Soon after, he began complaining of numbness and tingling, and the on-call physician was summoned. It was then discovered that Pablo had received 38.5 tablets of Septra, 37.5 more than he should have, and the next morning Pablo suffered a grand mal seizure due to the overdose. This error began with a physician entering an order into the EHR upon patient admission. Pablo took one tablet of Septra DS twice daily as prophylaxis for his skin infections. This dose was appropriate for him, and the physician wanted to continue it upon admission. UCSF Medical Center used Epic as its electronic health record, and while dose limits can be programmed into the EHR, UCSF had decided not to set them because it was a teaching hospital. The hospital treated many patients with rare diseases who were also on research protocols, and in these patients high or unusual doses of medications would usually have been acceptable. Additionally, for pediatric patients the informatics committee had decided to require weight-based dosing for every patient under 40 kg. If dose rounding was necessary to conform with commercially available strengths and the change in dose was greater than 5% of the calculated dose, the policy stated that the pharmacist would contact the physician to ensure the conversion was acceptable. Pablo weighed 38.6 kg, so the calculated dose of Septra was 5 mg/kg of trimethoprim, or 193 mg.
The closest commercially available dose to 193 mg was 160 mg, or one tablet of Septra DS. When the physician chose to continue with this dose, she was under the assumption that she was ordering one tablet. The order then populated the pharmacist verification queue as 193 mg, a 17% deviation from the available 160 mg tablet, which prompted the pharmacist to contact the physician per policy. The physician then attempted to reorder the dose as 160 mg but accidentally entered it as 160 mg/kg, because that was how the default dosing was set up in Epic based on this patient's weight. The dose was thereby calculated as 6,160 mg, or 38.5 tablets. The physician proceeded to sign the order, and an alert fired warning her of the overdose, but she took no action.
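The weight-based dosing policy and the 5% rounding check described above can be sketched as a short calculation. This is a minimal illustration in Python; the function names are hypothetical, and only the 5 mg/kg dose, the 38.6 kg weight, the 160 mg tablet strength and the 5% policy threshold come from the account above:

```python
# Hypothetical sketch of the weight-based dosing and 5% rounding policy,
# using the figures from the Pablo Garcia case.

TABLET_STRENGTH_MG = 160   # one Septra DS tablet (trimethoprim component)
ROUNDING_THRESHOLD = 0.05  # policy: >5% change requires physician contact

def weight_based_dose(weight_kg: float, mg_per_kg: float) -> float:
    """Calculated dose for a pediatric patient under 40 kg."""
    return weight_kg * mg_per_kg

def rounding_deviation(calculated_mg: float, rounded_mg: float) -> float:
    """Fractional change introduced by rounding to a commercial strength."""
    return abs(calculated_mg - rounded_mg) / calculated_mg

calculated = weight_based_dose(38.6, 5.0)                       # 193.0 mg
deviation = rounding_deviation(calculated, TABLET_STRENGTH_MG)  # ~0.171

if deviation > ROUNDING_THRESHOLD:
    print(f"Contact physician: {deviation:.0%} deviation exceeds policy limit")
```

Running the check on Pablo's order flags a 17% deviation, which is why the policy obliged the pharmacist to call the physician.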

UCSF Medical Center had chosen to disable thousands of alerts built into the Epic database, yet pharmacists still received pop-up alerts on half of the 350,000 medication orders processed per month. In the course of one month, physicians received 17,000 alerts. In this case, the physician assumed that the alert she was receiving was yet another one with no clinical significance and paid no attention to it. Additionally, the senior residents at the medical center were known to advise juniors to simply ignore all alerts and pop-ups, so the physician felt quite comfortable dismissing it. The order then made its way back to the pharmacist, who saw only the 160, which is what he was expecting since this is what he had communicated to the physician. Of note, the 6,160 mg dose ordered looked deceivingly similar to 160 mg, and while a massive overdose had just been ordered, the alert that fired in Epic for both the physician and the pharmacist looked like any other alert; nothing highlighted the severity of the overdose. The pharmacy satellite where the pharmacist was stationed was cramped and very busy. He was constantly being interrupted by phone calls and continually answering the door for nurses coming to pick up medications. It was not unusual to be interrupted six or seven times while working on any given patient's order, so it was no surprise that he, too, overrode the alert. Every interruption in the medication process sharply increases the likelihood of an error. Once the order was verified by the pharmacist, the pharmacy robot dutifully prepared the dose and carried out the dispensing process exactly as it was designed to do.
The nurse who administered the dose was unfamiliar with the unit, inexperienced and afraid to speak up; the barcode medication administration technology and eMAR system falsely reassured her, and so Pablo Garcia received 38.5 tablets of Septra.

James Reason’s Swiss cheese model of error holds that all complex organizations harbor many latent errors, or mistakes waiting to happen. On most occasions the errors are caught in time, and even if the first layer of protection is breached, the second or third layer will catch them. When all the layers are breached, the Swiss cheese holes have aligned, the system has failed, and an error results. After the 1999 IOM report estimating that nearly 100,000 patients in the United States die each year due to medical errors, a massive patient safety movement was launched, and computerization was touted as a promising fix. Computerization in health care did solve many problems, such as illegible handwriting and misplaced decimals, but it also added new complexities, challenges and hazards. Pablo Garcia survived the massive Septra overdose, but a failure of multiple layers had aligned the holes perfectly. Reason also observes that most errors are committed by competent people, and that fortifying the system by adding layers or shrinking the holes is far more productive than blaming individuals.

After this incident, one of the steps that UCSF Medical Center took to prevent an error of this nature from recurring was to tackle alert fatigue by forming a committee to review alerts; two years later, only about 30 percent of alerts had been removed. These are the kinds of issues the health care industry faces: finding a balance between too many alerts and removing alerts that have the potential to benefit patient safety.[9]

The tragic death of a 12-year-old child with congenital long QT syndrome in 2015 highlights the need for clinical decision support systems and brings to light the failure of one hospital that chose to disable alerts. A physician prescribed Zithromax for this child to treat otitis media and sinusitis using the hospital’s EHR system. Zithromax has been associated with QT interval prolongation, and after taking the medication for four days, the child developed torsades de pointes and died despite efforts to save her. Citing alert fatigue, the hospital had disabled drug-disease state alerts from firing, so no warning appeared for the physician or the pharmacist.[10]

CPOE alerts represent a small fraction of the alerts that health care workers receive on any given day. A 2011 investigation by the Boston Globe found that between January 2005 and June 2010, 216 deaths in the US were attributed to alarm malfunction or alarm fatigue.[11] In early 2013, Barbara Drew, a UCSF researcher, set out to quantify the problem of alarm fatigue in the UCSF Medical Center's 66-bed ICU. A total of 381,560 audible alarms fired during a 31-day period, or 187 alarms per bed per day. Of the alarms that fired for arrhythmias, 89 percent were found to be false positives.[12]

Studies on alert fatigue consistently show three main findings. First, alerts are only modestly effective at best. Second, alert fatigue is common, and clinicians generally override the vast majority of alerts, including those deemed critical and having the potential to cause serious harm. According to a prominent Harvard Medical School professor, clinicians override alerts between 49 and 96 percent of the time.[13] Third, the more alerts clinicians receive, the higher the potential for alert fatigue. While this finding is intuitive, the consequences of alert fatigue are likely to increase over time, and alert fatigue has therefore become a high-profile patient safety issue. In April 2015, The Joint Commission released a sentinel event alert calling for health care organizations to pay close attention to information technology as a safety issue; to mitigate alert fatigue, it recommended improving the culture of safety by creating a shared sense of responsibility between platform developers and end users.3

Busy clinicians rely on equipment and technology to carry out the life-saving interventions that they are trained to do, and it is assumed that technology will improve patient outcomes, but interactions between machines and the people who rely on them sometimes increase the risk of a disastrous occurrence. Human factors engineering is the discipline that attempts to identify and address these issues; it takes into account human strengths and limitations in the design of interactive systems involving people and technology to ensure safety and effectiveness. It focuses on how systems work in actual practice and attempts to optimize safety and minimize the risk of error in complex environments. Human factors engineering has been used to improve safety in industries such as aviation and automobile manufacturing, but its application to health care is relatively recent.[14]

Solving alert fatigue will also require a marriage between informatics and human factors engineering, because the fundamental problem arises from both the technology itself and the interaction of busy humans with that technology. Pablo Garcia was nearly killed by a medication error, and this error demonstrates that solutions to computerized systems and human factors need to be broadly based and aligned. When staff involved in this incident were questioned, one of the issues discovered was that the clinicians trusted the computers more than they trusted themselves. As the computers demonstrate their accuracy and trustworthiness over time, this bias grows.9

The aviation industry provides an example of how human factors engineering has improved safety because, like medicine, aviation requires professionals to perform tasks in high-stakes environments, and its approach has been learned from tragedies. The industry has taken steps to prioritize warnings and cockpit alerts and has worked very hard to avoid false positives that could cause pilots to tune out. Alerts are separated by a hierarchy of significance. Red lights, flashes, voice alerts and stick shaking indicate an impending stall; action must be taken immediately to prevent the plane from falling out of the sky. The next level is warnings that require immediate attention but do not directly threaten the flight path: red lights and a voice alarm but no stick shaking. There are about 40 of these, and the color red is used only for high-level warnings. The next level is a caution, of which there are about 150. A caution requires immediate awareness but not instant action; the lights and text are amber, and there is only one visual alert. The final level is an advisory, where no action is required but the pilot should be aware of it; advisory alerts are amber text messages. For every kind of alert, a checklist matched to the triggered problem automatically pops up on a central screen to help guide the crew to a solution. Boeing relies on a team of experts to make the judgment calls and, unlike health care, resists the urge to warn the flight crew about everything. Because of this process, fewer than 10 percent of flights have any alerts triggered.9

The use of human factors engineering and deep attention to the experience of the end user has thus far been lacking in health care technology design. While many steps can be taken to address alert fatigue, it is important to point out that system developers have been reluctant to remove alerts for fear of litigation and of being held liable in the event that patients are harmed in the absence of a warning.[15] While the program developers have been slow to address the excessive number of alerts, functionality is present in the software that allows hospitals and health systems to turn alerts off.

One step that can be taken to address the sheer number of alerts is increasing specificity and eliminating inconsequential alerts. Task forces consisting of informatics personnel, administrators and end users must be established, and this takes time, money and manpower. The team should review why alerts are firing and how they can be tweaked. To address ongoing issues, Group Health Cooperative of South Central Wisconsin (GHC-SCW) undertook such an initiative, a holistic strategy that leveraged industry-accepted metrics and clinical staff input. They were able to implement a filtering strategy that reduced the number of alerts firing, which resulted in more relevant alerts being delivered to providers. Within 60 days, they noticed that providers, instead of overriding alerts, were taking action on nearly twice as many alerts as before.[16]

While reducing or eliminating the number of alerts is important, it is not the only strategy that should be used to combat this complex issue. Targeted alerts based on patient characteristics and indicators such as lab or test results should also be developed. Epic Systems is working on software that would target alerts based on patients' health conditions by building more parameters and filters into the data: for example, incorporating renal function test results into the alert system so that alerts for nephrotoxic medications are triggered only for patients at higher risk. Another example where a targeted alert system would be beneficial is a cancer patient who needs higher doses of pain medications than recommended; a smart system would be able to differentiate this patient from one who does not need high doses. Similarly, when a patient has a flagged penicillin allergy but has taken a cephalosporin in the past without issue, the clinical decision support system could recognize this and not fire another alert when a subsequent order for a cephalosporin is entered. Such changes could limit distractions so that clinicians focus on the alerts that matter for a specific patient.
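A patient-targeted filter of this kind might look roughly like the following sketch. The eGFR cutoff, the drug list and the function name are illustrative assumptions for demonstration, not Epic's actual implementation:

```python
# Illustrative sketch of a targeted alert filter: fire a nephrotoxicity
# warning only when the patient's renal function is actually impaired.
# The threshold and drug list below are assumptions, not clinical guidance.

NEPHROTOXIC_DRUGS = {"gentamicin", "vancomycin", "ibuprofen"}
EGFR_ALERT_THRESHOLD = 60  # mL/min/1.73 m^2; illustrative cutoff

def should_fire_nephrotoxicity_alert(drug: str, egfr: float) -> bool:
    """Suppress the alert for patients with adequate renal function."""
    return drug.lower() in NEPHROTOXIC_DRUGS and egfr < EGFR_ALERT_THRESHOLD

# A patient with normal renal function generates no alert...
print(should_fire_nephrotoxicity_alert("Gentamicin", egfr=95))  # False
# ...while the same order for an at-risk patient does.
print(should_fire_nephrotoxicity_alert("Gentamicin", egfr=42))  # True
```

The point of the design is that the same order produces an interruption only for the patient in whom it is actually dangerous.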

A third mechanism by which alert fatigue could be better managed is a tiered alert system. Warnings would be presented in different ways according to their severity and clinical consequence, similar to how aviation handles cockpit alerts, and only the most severe interactions would require hard stops and interruptions. Anecdotal experience suggests that how alerts are presented can have a major impact on compliance rates, but there are few studies comparing presentation methods. In 2005, Partners Health Care performed a retrospective analysis of data on hospitalized patients at two academic medical centers over a one-year period. Both inpatient CPOE systems used the same alert service, but one displayed alerts by severity level using a tiered presentation while the other did not. In the tiered system, the alerting modules required a clinician response only for severe interactions, while less serious ones were presented in a non-interruptive fashion. The tiered system had three levels of alerts. Level 1 alerts are the most serious and are considered life-threatening; they are set up as hard stops, and the clinician is required either to cancel the order being entered or to discontinue the pre-existing order. Level 2 alerts are less serious but still require action: the clinician must either discontinue one of the drugs or select an override reason. Override reasons are set up as a pick list of the most frequent reasons; multiple reasons may be selected, and free text can be added if no suitable reason is listed. The largest proportion of alerts falls in Level 3, which is also the least serious; these alerts are presented as information only, require no action of any kind from the clinician, and need no keystroke because the presentation uses the available screen.
A total of 71,350 alerts were reviewed, of which approximately 39,000 occurred at the non-tiered site and 32,000 at the tiered site. Compliance at the tiered site was significantly higher, with 100% of the most severe alerts accepted, versus only 34% at the non-tiered site. The moderately severe, or Level 2, alerts were also more likely to be accepted at the tiered site.[17]
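The three-tier behavior described in the Partners study could be modeled roughly as follows. The enum, function and return strings are assumptions reconstructed from the description above, not the actual system:

```python
# Rough sketch of tiered alert handling: hard stop, override-with-reason,
# and passive information, as described for the tiered CPOE system.
from enum import Enum
from typing import Optional

class AlertTier(Enum):
    LEVEL_1 = 1  # life-threatening: hard stop, no override possible
    LEVEL_2 = 2  # serious: override allowed, but a reason must be recorded
    LEVEL_3 = 3  # informational: displayed passively, no action required

def handle_alert(tier: AlertTier, override_reason: Optional[str] = None) -> str:
    """Dispatch on tier, mirroring the behavior described in the study."""
    if tier is AlertTier.LEVEL_1:
        return "hard stop: cancel new order or discontinue existing order"
    if tier is AlertTier.LEVEL_2:
        if not override_reason:
            return "blocked: discontinue a drug or select an override reason"
        return f"override recorded: {override_reason}"
    return "informational only: no keystroke required"
```

In this scheme, only Level 1 interrupts the clinician unconditionally; Level 2 demands a documented decision, and Level 3 never interrupts at all, which is what drove the higher compliance rates at the tiered site.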

This is one type of tiered alert system. Others use color coding to differentiate alert types: red for severe warnings and interruptive hard stops, yellow for moderate and less severe ones. After Pablo Garcia's Septra overdose, aside from forming a committee to review alerts, UCSF Medical Center blocked any attempt to prescribe more than nine pills in a single dose. Even this can become complex and have unintended consequences, especially in an age of drug shortages, when sometimes the only strengths available are low doses and more units may be needed to make up the prescribed dose.
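A cap like UCSF's nine-pill limit is conceptually simple. The sketch below, with hypothetical names, shows both the guard and the drug-shortage caveat noted above:

```python
# Sketch of a maximum-units-per-dose guard like the nine-pill cap UCSF
# added after the Septra overdose. Names and structure are hypothetical.

MAX_UNITS_PER_DOSE = 9  # tablets

def units_for_dose(dose_mg: float, unit_strength_mg: float) -> float:
    """Number of tablets needed to make up the ordered dose."""
    return dose_mg / unit_strength_mg

def exceeds_unit_cap(dose_mg: float, unit_strength_mg: float) -> bool:
    return units_for_dose(dose_mg, unit_strength_mg) > MAX_UNITS_PER_DOSE

print(exceeds_unit_cap(6160, 160))  # True  (38.5 tablets: blocked)
print(exceeds_unit_cap(160, 160))   # False (one tablet: allowed)
# Caveat from the text: during shortages only low strengths may be stocked,
# so a legitimate dose could require more units and trip this guard.
```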

Alert fatigue and the other unintended consequences of health care computerization have only recently been recognized but have become high-profile patient safety issues. There is intense interest in developing specific methods to combat alert fatigue but no consensus on how to pave the way forward. UCSF Medical Center formed a committee to review all of its alerts and, after two years, had succeeded in removing only about 30 percent of them from the system. The sophisticated analytics are not yet there, even from the software developer, Epic. It will be imperative for clinicians and developers to work together to tackle these issues. Clinicians must be willing to provide meaningful input so that developers can design technologies with the functionality that clinical decision support requires.

Aside from the required informatics principles, solving alert fatigue will require applying human factors engineering principles to the design of alerts, because the problem arises not just from the technology but from human interaction with it. In the case of Pablo Garcia, the pharmacist's error was owed in part to the working conditions in the pharmacy and the number of interruptions he was experiencing. Making health care safer also requires that organizations develop a just culture of safety, making it possible for every employee to speak up or ask questions when unsure, without fear of reprimand when something goes wrong. Additionally, it is imperative that we not over-trust the technology and assume it is correct; critical thinking must continue to exist and thrive.

Alert fatigue is a complex and growing health care safety concern that will only intensify as reliance on computerization increases. While computerization has made many things in health care safer, one of its biggest unintended consequences is alert fatigue. The issue is complex, and while system developers can tailor alerts to an extent, they are reluctant to do so due to liability concerns. A potential solution is stronger governmental regulation and guidelines that would allow alert fatigue to be minimized and the safety performance of decision support systems to be improved. Since there is no clear consensus from a national perspective, there are steps that hospitals and health systems can take to tailor their systems and increase compliance: increasing alert specificity and removing inconsequential alerts, targeting alerts to patient characteristics and disease states, tiering alerts by severity, and reserving interruptive alerts for the highest severity levels. In addition to these informatics fixes, the health care industry, in collaboration with system developers, must apply human factors engineering principles to the design of alerts. A multi-faceted approach is required to tackle this complex and significant patient safety issue.

References:


[1] Committee on Quality of Health Care in America, Institute of Medicine. Crossing the quality chasm: A new health system for the 21st century. National Academies Press: 2001, 147.

[2] Fontenot, S. The Affordable Care Act and Electronic Health Records. PEJ. November 2013:72-76. https://sarahfontenot.com/wp-content/uploads/2015/04/5-Dec-2013-Will-EHRs-Improve-Quality-Article.pdf. Accessed December 4, 2018.

[3] Alert Fatigue. AHRQ Patient Safety Network. https://psnet.ahrq.gov/primers/primer/28/alert-fatigue. August 2018. Accessed December 4, 2018.

[4] Isaac, T et al. Overrides of Medication Alerts in Ambulatory Care. Arch Intern Med. 2009;169(3):305-311. 

[5] Cash, J. Alert Fatigue. American Journal of Health-System Pharmacy December 2009, 66 (23) 2098-2101. http://www.ajhp.org/content/66/23/2098. Accessed December 6, 2018.

[6] ECRI Institute. Top 10 health technology hazards for 2015: a report from Health Devices. November 2014. https://www.ecri.org/Documents/White_papers/Top_10_2015.pdf. Accessed December 4, 2018.

[7] Ash JS, et al. The extent and importance of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc. 2007; 14:415-423.

[8] Rush, J, et al. Improving Patient Safety by Combating Alert Fatigue. J Grad Med Educ. 2016 Oct; 8(4): 620–621. Accessed December 4, 2018.

[9] Wachter R. The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age. New York, NY: McGraw-Hill; 2015.

[10] The Absence of a Drug-Disease Interaction Alert Leads to a Child’s Death. ISMP. May 21, 2015. https://www.ismp.org/resources/absence-drug-disease-interaction-alert-leads-childs-death. Accessed December 8, 2018.

[11] Kowalczyk, L. No Easy Solutions for Alarm Fatigue. Boston Globe. February 14, 2011. http://archive.boston.com/lifestyle/health/articles/2011/02/14/no_easy_solutions_for_alarm_fatigue/?page=full. Accessed December 8, 2018.

[12] Drew, B., et al. Insights into the Problem of Alarm Fatigue with Physiologic Monitor Devices: A Comprehensive Observational Study of Consecutive Intensive Care Unit Patients. PLOS. October 22, 2014. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0110274. Accessed December 8, 2018.

[13] Van der Sijs H, et al. Overriding of drug safety alerts in computer physician order entry. J Am Med Inform Assoc. 2006; 13:138–47.

[14] Human Factors Engineering. AHRQ Patient Safety Network. https://psnet.ahrq.gov/primers/primer/20/Human-Factors-Engineering. August 2018. Accessed December 7, 2018.

[15] Kesselheim, A, et al. Clinical Decision Support Systems Could Be Modified To Reduce ‘Alert Fatigue’ While Still Minimizing The Risk Of Litigation. Health Aff (Millwood). 2011; 30:2310-2317.

[16] Presti, C. Combatting Alert Fatigue: Holistically Reducing Noise at the Point of Care. Pharmacy Times. August 24, 2015. https://www.pharmacytimes.com/publications/directions-in-pharmacy/2015/august2015/combatting-alert-fatigue-holistically-reducing-noise-at-the-point-of-care. Accessed December 6, 2018.

[17] Paterno MD, et al. Tiering drug–drug interaction alerts by severity increases compliance rates. J Am Med Inform Assoc. 2009; 16:40-46.
