
McDonald KM, Romano PS, Geppert J, et al. Measures of Patient Safety Based on Hospital Administrative Data - The Patient Safety Indicators. Rockville (MD): Agency for Healthcare Research and Quality (US); 2002 Aug. (Technical Reviews, No. 5.)

Measures of Patient Safety Based on Hospital Administrative Data - The Patient Safety Indicators.

Summary

Introduction

The longstanding cornerstone of medicine, “first, do no harm,” exists because of the fragility of life and health during medical care encounters, and it represents the medical profession's understanding that patient safety has always been an important part of quality health care. Recently, however, concerns and evidence have mounted that the complexities of the health care system can cause patient deaths and significant unintended adverse effects. With a major national interest in addressing patient safety issues, a wide spectrum of individuals and organizations are working toward developing methods and systems to detect, characterize, and report potentially preventable adverse events. These activities are crucial precursors to prioritizing areas for action and to studying the effects of approaches to reduce sources of medical error.

As part of this activity, the Evidence-based Practice Center (EPC) at the University of California San Francisco and Stanford University (UCSF-Stanford), with collaboration from the University of California Davis, was commissioned by the Agency for Healthcare Research and Quality (AHRQ) to review and improve the evidence base related to potential patient safety indicators (PSIs) that can be developed from routinely collected administrative data. For the purposes of this report, PSIs refer to measures that screen for potential problems that patients experience resulting from exposure to the health care system, and that are likely amenable to prevention by changes at the level of the system.

Reporting the Evidence

The primary goal of this report is to document the evidence from a variety of sources on potential measures of patient safety suitable for use with hospital discharge abstract data. The approach to identifying and evaluating PSIs presented in this report serves as the basis for development of a third module for the AHRQ QI tool set (referred to as the HCUP II in previous work by the UCSF-Stanford EPC reporting on the research underpinning the refinement of the initial AHRQ HCUP QIs, available on AHRQ's web site at http://www.ahrq.gov/data/hcup/qirefine.htm). This third module, the Patient Safety Indicators (PSIs), focuses on potentially preventable instances of harm to patients, such as surgical complications and other iatrogenic events. The two other modules are the Prevention Quality Indicators, based on hospital admissions that might have been avoided through high-quality outpatient care, and the Inpatient Quality Indicators, consisting of inpatient mortality; utilization of procedures for which there are questions of overuse, underuse, or misuse; and volume of procedures for which higher volume is consistently associated with lower mortality.

Purpose of the PSIs

Like the companion AHRQ Quality Indicators (QIs) screening tool set refined by the UCSF-Stanford EPC, the PSIs are a starting point for further analysis to reduce preventable errors through system or process changes. Additionally, these measures are likely to support the public mandate for aggregate statistical reporting to monitor trends over time, as planned for the National Quality Report.

Scope of the Project

This report reviews previous studies and presents new empirical evidence for identifying potential patient safety problems based on one potentially important source of data: computerized hospital discharge abstracts from the AHRQ Healthcare Cost and Utilization Project (HCUP). The measures considered therefore needed to be defined using variables that are available in most state-level hospital administrative data. Data elements in these datasets include International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) discharge diagnosis and procedure codes; dates of admission, discharge, and major procedures; age; gender; and diagnosis related group (DRG). Data from outside the hospital stay (e.g., post-hospital mortality or readmissions) were not used because most state databases do not accommodate linkages between datasets. The HCUP State Inpatient Databases (SID) are an example of such a common-denominator hospital discharge dataset and were used for the development of the AHRQ PSIs reported here. The PSIs presented in this report therefore relate to inpatient care and to adverse events that have either a high likelihood or at least a reasonable possibility of being iatrogenic. These two constraints, the data source and the location of care, guided the development and evaluation of a promising set of patient safety indicators.
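
To make the screening approach concrete, the sketch below shows, in Python, how discharge abstracts of the kind described above might be screened for a candidate indicator and summarized as hospital level rates. It is an illustration only: the record field names (hospital_id, dx_codes, drg) are assumptions, and the single ICD-9-CM code used as the event set is illustrative rather than an official AHRQ PSI definition.

```python
# Illustrative sketch only: field names and the event code set are assumptions,
# not the official AHRQ PSI specifications.
from collections import defaultdict

# ICD-9-CM diagnosis code used purely for illustration
# (998.4, foreign body accidentally left during a procedure), stored without the decimal.
CANDIDATE_EVENT_CODES = {"9984"}

def hospital_rates(discharges):
    """Return candidate-event rates per 1,000 discharges at risk, by hospital.

    Each discharge record is assumed to be a dict with:
      hospital_id : str            hospital identifier
      dx_codes    : list of str    ICD-9-CM diagnosis codes (no decimal points)
      drg         : int            diagnosis related group
    """
    events = defaultdict(int)
    at_risk = defaultdict(int)
    for rec in discharges:
        at_risk[rec["hospital_id"]] += 1              # denominator: discharges at risk
        if CANDIDATE_EVENT_CODES & set(rec["dx_codes"]):
            events[rec["hospital_id"]] += 1           # numerator: flagged discharges
    return {h: 1000.0 * events[h] / at_risk[h] for h in at_risk}

if __name__ == "__main__":
    sample = [
        {"hospital_id": "A", "dx_codes": ["41401", "9984"], "drg": 110},
        {"hospital_id": "A", "dx_codes": ["41401"], "drg": 110},
        {"hospital_id": "B", "dx_codes": ["486"], "drg": 89},
    ]
    print(hospital_rates(sample))  # {'A': 500.0, 'B': 0.0}
```

A full implementation would also apply each indicator's exclusion criteria and risk-group restrictions; this sketch illustrates only the numerator and denominator screening step.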

Following from these constraints, the PSIs by necessity capture adverse events that may be, but are not necessarily, related to medical care. They do not capture “near misses” or other undocumented adverse events. They also do not include adverse events related to a number of important patient safety concerns that cannot be reliably specified using ICD-9-CM, the official coding system for diagnoses and procedures associated with hospital utilization in the United States. Based on previous validation work and the limitations inherent in the data source, PSIs derived from discharge data capture a mixture of adverse events, including those that are almost certainly preventable and those that current best practices and error-mitigating systems of care have not been able to prevent. Nevertheless, the evidence presented supports their promise as a low-cost screen for potential quality concerns, to be followed by further investigation with additional data gathering.

Methodology

Following the previous refinement of quality indicators described in a companion technical report from the EPC, published by AHRQ, an evaluation framework for validity testing (i.e., face validity, precision, minimum bias, and construct validity) was applied to each candidate PSI. Specifically, a four-pronged strategy to collect validation data and descriptive information included two approaches from the previous work: a background literature review and empirical analyses of the candidate PSIs using the HCUP SID. In addition, expert coders from the American Health Information Management Association (AHIMA) were consulted, and clinical panel reviews of potential indicators were conducted using a process adapted from the RAND Corporation and University of California Los Angeles (RAND/UCLA) Appropriateness Method.

Evidence from these four sources was used to modify and select the most promising indicators for use as a screening tool to provide an accessible and low-cost approach to identifying potential problems in the quality of care related to patient safety. The methods applied provide baseline information on the ability of a fairly broad range of discharge-based PSIs to identify systematic differences across hospitals, and potentially to monitor trends on a national or regional basis.

Results

A review of previously reported measures in the literature (e.g., the Complications Screening Program by Iezzoni et al. and the Patient Safety Indicators by Miller et al.) and of medical coding manuals identified over 200 ICD-9-CM codes representing potential patient safety problems. Most of these codes were grouped into clinically meaningful indicators, based either on previous indicator definitions or on clinical and coding expertise. Several potential PSIs were eliminated based on a review of the published evidence related to their validity. Because of the limited validation literature available on PSIs and on the complications indicators from which many PSIs were derived, the research team conducted a clinical panel review process to assess the face validity of the 34 most promising PSIs and to guide refinements to their initial definitions. Clinicians' responses to a questionnaire for each indicator (from physicians in a number of specialties, nurses, and pharmacists), augmented by coding review and initial empirical testing, provided the basis for selecting the indicators expected to be most useful for screening for potentially preventable adverse events. Tables 1S and 2S summarize the strength of the evidence literature, the definitions, and the key findings for the set of 20 hospital level PSIs that are recommended for implementation as the initial AHRQ PSI set (designated Accepted indicators).

Table 1S. Strength of Evidence Literature for PSIs.

Table 2S. Summary of Evidence for Accepted Hospital Level PSIs.

Several accepted patient safety indicators were also modified into area level indicators, which are designed to assess the total incidence of the adverse event within geographic areas. For example, the transfusion reaction indicator can be specified at both the hospital and area level. A transfusion reaction that occurs after discharge from a hospitalization would result in a readmission; the area level indicator includes these cases, while the hospital level indicator counts only those transfusion reactions that occur during the same hospitalization that exposed the patient to this risk. The six hospital level indicators that have area level analogs are Iatrogenic Pneumothorax, Transfusion Reaction, Infection Due to Medical Care, Wound Dehiscence, Foreign Body Left in During Procedure, and Technical Difficulty with Medical Care.
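
As a simplified illustration of the hospital level versus area level distinction, the sketch below counts, for the hospital level rate, only events coded in the same discharge record as the exposing procedure, while the area level rate counts every discharge in the area that carries the event code, including readmissions. The field names, code values, and population denominator are assumptions for illustration; the official specifications define the exact numerator, denominator, and exclusion codes.

```python
# Simplified illustration of hospital level vs. area level counting.
# Field names, code values, and denominators are assumptions, not the official AHRQ PSI specs.

EVENT_DX = "9996"       # illustrative transfusion-reaction diagnosis code (no decimal point)
EXPOSING_PROC = "9904"  # illustrative transfusion procedure code

def hospital_level_rate(discharges, hospital_id):
    """Events coded in the same stay as the exposing procedure, per exposed stay."""
    exposed = [d for d in discharges
               if d["hospital_id"] == hospital_id and EXPOSING_PROC in d["proc_codes"]]
    flagged = [d for d in exposed if EVENT_DX in d["dx_codes"]]
    return len(flagged) / len(exposed) if exposed else 0.0

def area_level_rate(discharges, area_id, area_population):
    """All discharges in the area carrying the event code, including readmissions
    for reactions that appeared only after discharge from the exposing stay."""
    flagged = [d for d in discharges
               if d["area_id"] == area_id and EVENT_DX in d["dx_codes"]]
    return len(flagged) / area_population
```

A readmission in which the reaction is coded carries the event diagnosis but not the exposing procedure, so it contributes to the area level numerator but not to the hospital level numerator, matching the distinction described above.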

In addition to the accepted PSIs, another 17 indicators show promise but have more serious limitations. These were designated “experimental” and examined empirically. They performed somewhat less well empirically than the accepted indicators, and the concerns raised about various aspects of these indicators during the clinical panel discussions limit their potential usefulness. However, with possible further refinements to the underlying coding of data and to the indicator definitions, these indicators have the potential to measure what they purport to identify. For example, Reopening of Surgical Wound, while conceptually a useful PSI, requires further information to exclude planned cases (e.g., staged operations) and requires coding changes in order to capture only similarly serious reopening procedures.

Conclusions

This project took a four-pronged approach to the identification, development, and evaluation of PSIs that included use of the literature, clinician panels, expert coders, and empirical analyses. For the best-performing subset of PSIs, this project has demonstrated that rates of adverse events differ substantially and significantly across hospitals. The literature review and the findings from the clinical panels, combined with data analysis, provide evidence to suggest that a number of discharge-based PSIs may be useful screens for organizations, purchasers, and policymakers to identify safety problems at the hospital level, as well as to document systematic area level differences in patient safety problems.

Few adverse events captured by administrative data are unambiguous enough to allow a great deal of certainty that every case identified reflects medical error. Most adverse events identified by the PSIs have a variety of potential causes in addition to medical error, including underlying patient health and factors that do not vary systematically. Clinician panelists rated only two of the accepted indicators as very likely to reflect medical error: Transfusion Reaction and Foreign Body Left in During Procedure. As expected for indicators of this case-finding type, these two indicators proved to be very rare, with fewer than 1 event per 10,000 cases at risk. All other accepted indicators identify adverse events that represent a spectrum of likelihood of reflecting either medical error or potentially preventable complications of care, but they cannot be expected to identify only cases in these categories.

Potential Uses of PSIs

Because the PSIs are intended for use as an initial, efficient screen to target areas for further data exploration, the primary goal is to find indicators that guide those interested in quality improvement and patient safety to systematic differences between hospitals or geographic areas. These systematic differences may relate to underlying processes or structures that an organization could change to improve patient care and safety; the underlying problems may be attributable to human error on the part of physicians or nurses, or to system deficiencies. On the other hand, the systematic differences will sometimes correspond to coding practices, patient characteristics not captured by administrative data, or other factors, and these will be dead ends to some degree. In applying these PSIs, users will be determining how well patient safety problems are identified at the level of groups of patients. By sharing experiences with these PSIs, researchers and health care practitioners will build on the information highlighted in this report about each indicator, as well as about the set of PSIs as a whole.

At the national or state level, these indicators could be used to monitor the frequency of potential patient safety problems, to determine whether rates are increasing or decreasing over time, and to explore large variations among settings of care. While the indicators were primarily developed at the hospital level, some were also implemented as analogous area level measures, and analyses show that the area level versions do identify additional cases in which care was received at one institution and the potentially iatrogenic complication was addressed at another hospital. Clearly, the locus of control and the ability to study the potential underlying causes of an adverse event are more straightforward for the hospital level PSIs. However, trends over time in area rates, as well as aggregations of the hospital level rates, are likely to reveal points of leverage outside of individual institutions. No measure is perfect, and each is suited to its designed purpose; methods of aggregating across groups of PSIs still need to be tested. This report provides the background for “safe” use of a tool that has the potential to guide prevention of medical error, reduction of potentially preventable complications, and quality improvement in general. Table 3S provides examples of potential uses and potentially inappropriate uses.

Table 3S. Use of patient safety indicators.

Limitations and Future Research

Many important concerns, such as adverse drug events, cannot currently be monitored well using administrative data. Just as administrative data limited the specific indicators chosen, the use of administrative data tends to favor particular types of indicators. The PSIs evaluated in this report contain a large proportion of surgical indicators, rather than medical or psychiatric ones. Medical complications are often difficult to distinguish from comorbidities that are present on admission. In addition, medical populations tend to be more heterogeneous than surgical populations, especially elective surgical populations, making it difficult to account for case-mix. Panelists often noted that indicators were more applicable to patient safety when limited to elective surgical admissions.

The initial validation evaluations reviewed and performed for the PSIs leave substantial room for further research with detailed chart data and other data sources. Future validation work should focus on the sensitivity and specificity of these indicators in detecting the occurrence of a complication; the extent to which failures in processes of care at the system or individual level are detected using these indicators; the relationship of these indicators with other measures of quality, such as mortality; and further explorations of bias and risk adjustment.

Enhancements to administrative data are worth exploring in the context of further validation studies that use data from other sources. For example, as with other quality indicators, the addition of timing variables may prove particularly useful for identifying whether a complication was present on admission or occurred during the hospitalization. While some complications that are present on admission may indeed reflect adverse events of care during a previous hospitalization or outpatient care, many may reflect comorbidities rather than complications. A second enhancement, linking hospital data over time and with outpatient data and other hospitalizations, would allow inclusion of complications that occur after discharge and would likely increase the sensitivity of the PSIs.
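
As an illustration of how a timing enhancement could be used, the sketch below assumes a hypothetical per-diagnosis present-on-admission (POA) flag, an enhancement discussed above rather than a field available in the discharge data used for this report. With such a flag, a diagnosis coded as present on admission would be treated as a comorbidity and excluded from an indicator's numerator.

```python
# Hypothetical sketch: assumes each diagnosis carries a present-on-admission (POA) flag,
# which is an enhancement discussed in the text, not a standard field in these data.

CANDIDATE_EVENT_CODES = {"9984"}  # illustrative code set, not an official PSI definition

def arose_during_stay(record):
    """Flag a discharge only if a candidate event code is present and NOT coded as POA."""
    for dx, poa in zip(record["dx_codes"], record["poa_flags"]):
        if dx in CANDIDATE_EVENT_CODES and poa == "N":  # "N" = not present on admission
            return True
    return False

example = {"dx_codes": ["41401", "9984"], "poa_flags": ["Y", "N"]}
print(arose_during_stay(example))  # True: the event arose during the hospitalization
```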

The current development and evaluation effort will best be augmented by a continuous communication loop between users of these measures, researchers interested in improving these measures, and policy makers with influence over the resources aimed at data collection and patient safety measurement.
