A review of the reasons for, and the methodology behind, incident reporting continues. We welcome your thoughts and reference summaries as well.
Dr. R Lawton from the School of Psychology at the University of Leeds and colleagues looked to better understand “the willingness of healthcare professionals (doctors, nurses, and midwives) to report colleagues to a superior member of staff following an adverse incident or near miss.” They also explored “the difference in reporting of events involving three kinds of behavior defined by (James Reason)–compliance with a protocol, violation of a protocol, an improvisation where no protocol exists.” Lawton theorized that the culture of medicine, along with the increasing fear of litigation, would likely constrain healthcare providers from reporting.
And that is almost exactly what they found, as results showed:
- Doctors were less likely to report a colleague across the board, even when a colleague deliberately went around protocols.
- Nurses were the group most likely to report if there was a bad outcome for the patient.
- Protocol violations were reported most frequently, regardless of outcome.
The authors speculate that doctors’ unwillingness, relative to their nursing and midwifery colleagues, to report violations of protocol may reflect the perception that protocols “by many in the medical community (are viewed) as a threat to their professional autonomy,” and that doctors’ reluctance to report a colleague reflects “a professional culture in which what may be seen as whistle blowing is taboo.” Either way, the authors conclude that culture change within the NHS may first have to occur in order for incident reporting to deliver its true benefits.
The introduction of Evans and colleagues’ paper on incident reporting gives an excellent overview of the benefits these reports provide:
Incident reporting captures more contextual information about incidents and, when actively promoted within the clinical setting, it can detect more preventable adverse events than medical record review at a fraction of the cost. Near misses are rarely documented in medical records, yet occur more frequently than adverse events and provide valuable lessons in recovery mechanisms without the detrimental consequences of an adverse event. The subjective data provided by incident reporting enable hypothesis building and preventative strategies to be developed and tested. (See article for references).
An anonymous survey, modified from Vincent et al (J Eval Clin Pract 1999), was given to participants (186 doctors and 587 nurses, both groups with >70% response rates), asking the following:
- Do you know if your hospital has an incident reporting system?
- If yes, do you know what form to use to submit a report?
- If yes, do you know how to access the reporting system?
- If yes, do you know how to submit a report?
- How often do you report each of 11 iatrogenic injuries (listed in Figure 2), and how often should these injuries be reported?
- Nineteen possible barriers to reporting (listed in Table 2) were evaluated using a Likert scale.
Results indicated that:
- Doctors and nurses were equally aware of an incident reporting system at their institutions, but nurses were significantly more likely to have filed a report (89.2% v 64.6%, p<0.001). This may have to do with the fact that nurses were also significantly more likely to know how to locate, and what to do with, a report.
- Senior doctors were significantly less likely than their younger colleagues to have submitted a report.
- Both doctors and nurses completed reports most often for falls and least often for pressure sores.
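As a side note, the significance of a reporting-rate difference like the one above can be checked with a standard two-proportion z-test. Here is a minimal sketch in Python, with counts back-calculated from the published percentages; the rounding is my own assumption, not the paper’s raw data:

```python
from math import sqrt, erfc

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test using a pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail probability
    return z, p_value

# Approximate counts reconstructed from the reported percentages:
nurses_filed = round(0.892 * 587)   # nurses who had filed a report
doctors_filed = round(0.646 * 186)  # doctors who had filed a report
z, p = two_proportion_z(nurses_filed, 587, doctors_filed, 186)
print(f"z = {z:.2f}, p = {p:.2g}")  # p is far below 0.001
```

The result is consistent with the p<0.001 the authors report, even with the counts only approximated from the percentages.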
Perceived barriers to reporting for doctors were: 1) Lack of feedback 2) Form took too long to complete and 3) Incident was perceived as too trivial. Barriers for nurses were: 1) Lack of feedback 2) Belief that there was no point in reporting near misses and 3) Forgetting to report when the ward was busy.
Of note in this study was that poor reporting had less to do with the cultural environment and more to do with the functionality of the reporting system. The authors did note, however, that the “poor reporting practices by doctors…probably reflects the prevailing deeply entrenched belief in medicine that only bad doctors make mistakes.” The authors conclude by highlighting the importance of sharing with staff the changes that are implemented as a result of the incidents reported.
Ravi Mahajan from the Division of Anaesthesia and Intensive Care at Queen’s Medical Centre in Nottingham, UK reviews how high reliability organizations, such as aviation and the rail industry, have long used incident reporting as a learning tool for improvement, yet that same well-documented tool has not caught on in healthcare. According to Mahajan, the main reason to report incidents to improve patient safety:
…is the belief that safety can be improved by learning from incidents and near misses, rather than pretending that they have not happened.(5)
He states that leadership, larger governing bodies of healthcare and consumers are calling for the information incident reports provide in order to “better understand error and their contributing factors.” Mahajan highlights the World Health Organization (WHO) as having outlined guidelines for implementing effective reporting systems. Additional information on the WHO reporting guidelines can be found here.
What Mahajan also includes in his review is the need for a human factors approach to the analysis of medical errors, one that considers the human component within the larger context of the health system. Instead of taking the quick and easy summation of an error as “someone’s fault,” a human factors approach considers all the events leading up to the error through a much larger lens, including Reason’s “active and latent failures” mentioned earlier. All of this, he says, is incorporated into the framework for analyzing critical incidents suggested by Vincent et al (BMJ 1998;316), which takes into consideration the socio-technical pyramid discussed by Hurst and Ratcliffe, adapted to the clinical setting, and provides a structured approach to a meaningful root cause analysis of the error. The framework can be found in Table 1 of Mahajan’s review.
Without meaningful feedback, however, Mahajan and others point out, the reports and their analysis are meaningless.
As we’ve mentioned, the road to high reliability starts with the formation of a just culture that supports the reporting of unsafe conditions, near misses and adverse events, in order to uncover those conditions within a system that make it prone to harm. It’s a simple statement–one that makes intuitive sense–so why, then, has a reporting culture eluded medicine? The following authors weigh in on the how, what and why of incident reporting to show that any related growing pains are well worth the struggle in the best interest of our patients.
Please share references and information that will help raise our collective knowledge, and provide a road map for others seeking to build a reporting culture en route to high reliability. Our patients are depending on us to take this journey–
In this 2002 paper, Lucian Leape MD reiterates the recommendation of the Institute of Medicine’s To Err Is Human report, calling for the then-controversial expansion of reporting around serious adverse events and medical errors. He also highlights that in order to reduce the frequency of harm befalling patients, a greater understanding of the harm and its causes is needed “for the development of more effective methods of prevention (as) it seems evident that improved reporting of accidents and serious errors that do not cause harm (“close calls”) must be an essential part of any strategy to reduce injuries.”
Leape describes the primary purpose of reporting these events as learning from them, and the only way to learn is to first be aware the problem exists. Additional reasons to develop a robust internal reporting system, according to Leape, include:
- Allows for monitoring of progress
- Allows lessons to be shared so others can avoid similar mishaps
- Holds everyone accountable
Table 2 in his report lists the characteristics of a successful reporting system along with an explanation. In brief, those characteristics are: 1) Non-punitive 2) Confidential 3) Independent 4) Expert analysis 5) Timely 6) Systems-oriented 7) Responsive
James Reason has been mentioned more than once on this blog because of our focus on becoming a high reliability organization. Reason’s work on just culture and his in-depth research examining a person versus system approach to understanding medical error reinforce the need for a reporting culture in order to achieve high reliability. Reason writes:
Effective risk management depends crucially on establishing a reporting culture.(3) Without a detailed analysis of mishaps, incidents, near misses, and “free lessons,” we have no way of uncovering recurrent error traps or of knowing where the “edge” is until we fall over it…
…Trust is a key element of a reporting culture and this, in turn, requires the existence of a just culture–one possessing a collective understanding of where the line should be drawn between blameless and blameworthy actions.(5)
Reason’s explanation of a just culture is one in which error reporting is handled in a non-punitive manner, looking to understand active failures and latent conditions within a systems context. However, he recognizes that within the system, each individual remains accountable for their actions. In a high reliability organization, every individual is reminded of the value of incident reporting as the focus is put upon intentionally looking for anything that could result in harm.
Charles Vincent raises the point that incident reporting is only as effective as the measurement and patient safety programs that result from gathering the reports. As many agree, one of the reasons physicians give for failing to report is that having taken the time and emotional energy to do so, the report then sits without response or action. Vincent editorializes that:
…a functioning reporting system should no longer be equated with meaningful patient safety activity. Organisations must move towards active measurement and improvement programmes on a scale commensurate with the human and economic costs of unsafe, poor quality care.
The follow-up on each report is reinforcement for the next incident to be reported. And it must be meaningful, productive feedback that rewards those who take the time, and stick their necks out, to share information.
As we have touched on previously, high reliability organizations are built upon a foundation with a just culture as the framework. That framework is comprised of a number of components, with incident reporting playing an integral role. Without a thorough understanding of a health system’s strengths and weaknesses, achieving high reliability will be challenging, according to the experts. To better understand the areas of potential weakness, every unsafe condition, near miss or harm event needs to be reported and analyzed in order to find the places where, as James Reason advises, the holes in the Swiss cheese are likely to line up and cause greater harm. For those unfamiliar with Reason’s work, Bob Wachter MD provides good background, describing him as the “intellectual father of the patient safety field” in his post, “James Reason and the foundation of patient safety” on KevinMD.
One reason given by healthcare providers for not reporting, is that they are unsure of what to report. As an organization moves into incident reporting as a system, it’s important to define expectations around what should be reported, and to convey the importance of reporting as well as the mechanics of how to generate a report. While definitions for unsafe conditions, near misses and patient harm may be subtly different from health system to health system, a “common formats” definition for patient safety terminology was developed by AHRQ to level the playing field. Those definitions related to our discussion are:
- Unsafe condition: Any circumstance that increases the probability of a patient safety event; includes a defective or deficient input to (or) environment of a care process that increases the risk of an unsafe act, care process failure or error, or patient safety event. An unsafe condition does not involve an identifiable patient.
- Near miss: An event that did not reach a patient. For example: discovery of a dispensing error by a nurse as part of the process of administering the medication to a patient (which if not discovered would have become an incident); discovery of a mislabeled specimen in a laboratory (which if not discovered might subsequently have resulted in an incident).
- Patient harm: Physical or psychological injury (including increased anxiety), inconvenience (such as prolonged treatment), monetary loss, and/or social impact, etc. suffered by a person.
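To make the distinctions above concrete, the three categories can be sketched as a simple triage rule. This is a minimal illustration of the definitions, not AHRQ’s own schema; all names here are hypothetical:

```python
from enum import Enum

class EventType(Enum):
    UNSAFE_CONDITION = "unsafe condition"  # no identifiable patient involved
    NEAR_MISS = "near miss"                # error caught before reaching a patient
    PATIENT_HARM = "patient harm"          # reached the patient and caused injury/impact

def classify(patient_identifiable: bool, reached_patient: bool, harmed: bool) -> EventType:
    """Rough triage consistent with the AHRQ-style definitions above."""
    if not patient_identifiable:
        return EventType.UNSAFE_CONDITION
    if not reached_patient:
        return EventType.NEAR_MISS
    if harmed:
        return EventType.PATIENT_HARM
    # An event that reached the patient without harm falls outside
    # these three categories (a "no-harm incident").
    raise ValueError("reached the patient without harm; not one of the three categories")

# Example: a mislabeled specimen caught in the lab before any result was reported
print(classify(patient_identifiable=True, reached_patient=False, harmed=False))
```

The dispensing-error and mislabeled-specimen examples in the near-miss definition both follow this pattern: a patient was identifiable, but the error was intercepted before it reached them.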
Why should we report these incidents? Because every report submitted provides an opportunity to improve an aspect of patient care that could otherwise cause greater harm in the future. High reliability organizations are mindful and working toward improvement every minute of every day. As such, they look for the opportunities to improve that these reports provide. It is important that everyone in the health system–from environmental services to CEO–understands they can (and should) play a role in improving patient care by submitting an incident report in real time when they see, or are part of, any of the events mentioned above. Having everyone in the organization with a like mindset increases the chance a potential harm will be caught before it reaches the patient. And by rewarding and celebrating those who submit reports, providing that opportunity to improve, a just culture is further solidified.
In my last post, I mentioned the intervention we designed at the University of Illinois Medical Center to increase incident reporting by resident anesthesiology physicians. In the two-year retrospective analysis we used as our baseline measure, we found that residents reported 0 adverse events (AEs) per quarter. At the end of the intervention period (7 quarters after study completion), we found that number had increased to 30 reports per quarter.
What changed in the residents, other than the fact that we required they submit an AE into the incident reporting system? And equally, what changed in us and our system through this study? To begin with, the residents were exposed to an educational program that:
- Defined AEs, medical error, serious error, minor error and near miss.
- Discussed ACGME core competencies in relation to reporting of harm events.
- Discussed and clearly communicated the mechanics of filing a report.
- Provided 24 hour access to a consult service.
- Discussed how their report would be followed up, and consistently adhered to this commitment.
- Provided an educational manual/reference tool.
- Included regular conferences every 3-4 weeks for: a) Review of educational material; b) An opportunity for discussion around the AEs that were reported in an aggregated, de-identified manner; and c) Process improvements that came from their reports and feedback.
- Included support of the Department of Safety & Risk Management which provided “near-immediate feedback to residents upon receipt of their reports”. When possible, the residents were included in the root cause analysis of the event, or the quality improvement the team put together to address the near miss or unsafe condition.
The increase in resident reporting was very encouraging. But what truly tested our just culture was that the reports in one three-month period shed light upon the fact that more than 50% of the procedure-related incidents reported that quarter were associated with a lack of attending physician supervision. No one knew who the reporting residents were in these cases (except our safety department team), and no one knew which attending physicians the residents felt had not supervised them adequately, because the data was always shared in an aggregated, de-identified manner. The purpose was to learn and improve, not to finger-point or blame. Instead of arguing with the data, our department rallied around it, and improved our own system of being there at the times residents reported feeling we had not been. Not only were the residents exposed to the educational messaging of the intervention, but perhaps just as important, they also experienced firsthand the just culture that engenders a reporting culture, one that persists after a study is complete and the researchers are no longer measuring.
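The aggregated, de-identified sharing described above can be illustrated with a small sketch. The records and field names here are hypothetical, not the study’s actual data; the point is that reporter, attending, and patient identifiers are never part of what gets shared:

```python
from collections import Counter

# Hypothetical de-identified report records: (quarter, category, contributing_factor).
# Note what is absent: no reporter name, no attending name, no patient identifiers.
reports = [
    ("Q3", "procedure", "lack of supervision"),
    ("Q3", "procedure", "lack of supervision"),
    ("Q3", "procedure", "equipment failure"),
    ("Q3", "medication", "labeling"),
]

def quarterly_summary(reports, quarter, category):
    """Share only aggregate proportions of contributing factors for one quarter."""
    factors = Counter(
        factor for q, cat, factor in reports if q == quarter and cat == category
    )
    total = sum(factors.values())
    return {factor: count / total for factor, count in factors.items()}

summary = quarterly_summary(reports, "Q3", "procedure")
print(summary)  # lack of supervision accounts for more than half of procedure reports
```

Sharing only proportions per factor, per quarter, is what lets a finding like “more than 50% involved lack of supervision” surface without exposing any individual.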
Without a culture in place to reward, support and model incident reporting that effectively addresses the incidents and devises solutions to problems in real-time, results like those found in this study will not last. It would be interesting to give the survey now, to the same group of residents–some who have moved on to other institutions–and see what their attitudes and beliefs around reporting are today.
These same messages are shared with our students in Telluride, during the Student Summer Camps, and they have embraced the idea that the reporting of unsafe conditions, near misses and harm is a good thing. See the numerous posts on the Transparent Health blog that give evidence of a fresh culture emerging.
It is up to us, as role models and educators, to ensure this is the culture that takes hold in medicine moving forward.