For the study, University of Utah researchers and colleagues analyzed the records of nearly 800 patients admitted in October 2004 at three large U.S. medical centers with strong patient safety initiatives (McKinney, Modern Healthcare, 4/7).
A “key challenge” to measuring and reducing adverse events in hospitals “has been agreeing on a yardstick measuring the safety of care in hospitals,” according to the study authors. Prior research has indicated that automated measurements of patient safety are not specific enough to accurately identify adverse events.
In this study, researchers compared the Institute for Healthcare Improvement’s Global Trigger Tool — a new systematic review of patient charts that is conducted by two or three hospital employees — with the Agency for Healthcare Research and Quality’s Patient Safety Indicators and hospitals’ self-reporting systems (Classen et al., Health Affairs, April 2011).
Using all three measurement methods, the researchers identified 393 adverse events, occurring in approximately one-third of admissions.
The IHI tool detected 354 events — 90% of the total and 10 times as many as the AHRQ indicators, which detected only 35 events, or 9% of the total. Meanwhile, hospitals' voluntary reporting systems identified only four events, or 1% of the total.
The study’s authors suggest that adverse-event estimates that rely on the use of AHRQ indicators or voluntary reporting systems “may be seriously misjudging actual performance” (Modern Healthcare, 4/7). The researchers also note that more widespread adoption of electronic health records could “facilitate the use of multiple parallel adverse event detection methods.”
Despite the high number of adverse events identified by the IHI tool, the study notes that the “true rates are likely to be higher still” (Health Affairs, April 2011).
Health Affairs Editor-in-Chief Susan Dentzer said, "It's clear we still have a great deal of work to do in order to achieve a health care system that is consistently high-quality" (Steenhuysen, Reuters).