Unplanned Readmissions after Hospital Discharge among Heart Failure Patients At Risk for 30-Day Readmission Using an Administrative Dataset and “Off the Shelf” Readmission Models
J F Reed, J L Bokovoy, K R Doram
Citation
J F Reed, J L Bokovoy, K R Doram. Unplanned Readmissions after Hospital Discharge among Heart Failure Patients At Risk for 30-Day Readmission Using an Administrative Dataset and “Off the Shelf” Readmission Models. The Internet Journal of Cardiovascular Research. 2014 Volume 9 Number 1.
Abstract
Background: Readmission of patients who were recently discharged after hospitalization with heart failure (HF) represents an important, expensive, and often preventable adverse outcome. The risk of readmission may be modified by the quality and type of care provided to these patients. Reducing readmission rates is the joint responsibility of hospitals and clinicians. Measuring readmission will create incentives to invest in interventions that improve hospital care, better assess patients’ readiness for discharge, and facilitate transitions to outpatient status. This measure is also responsive to the recent call by the Medicare Payment Advisory Commission to develop readmission measures, with HF as a priority condition.
Objective: Unplanned hospital readmission has emerged as a major CMS focus of quality improvement and payment reform. Coupled with national initiatives, Adventist Health West chose unplanned readmissions following an index hospitalization for HF as a major system-wide initiative for CY 2013.
Methods: Given several competing heart failure readmission models, our strategy was to use the best features of three of them – LACE, Hasan and PARR – as the basic building blocks for a better predictive readmission model.
Results: ROC curves and C-statistics for the five models were computed using the combined data from the Adventist Hospitals and individually for each entity. Overall, the Hasan, PARR, and AH models (C-statistics of 0.802, 0.821 and 0.846, respectively) were superior to both the CMS and LACE prediction models (C-statistics of 0.749 and 0.547, respectively).
Conclusion: Using “Off the Shelf” HF readmission models as a guide, a useful readmission model may be derived which, in this case, is slightly superior to competing readmission models.
Introduction
Unplanned hospital readmissions within 30 days of a prior hospitalization for heart failure (HF) are common, expensive and often preventable. These unplanned readmissions are recognized as a marker of hospital-level quality and efficiency of care and a significant contributor to rising healthcare costs. Since heart failure is the leading cause of hospitalization among patients over the age of 65 years, the magnitude of unplanned readmissions within 30 days is enormous. Nearly one fifth of Medicare fee-for-service enrollees discharged from acute care hospitals are readmitted within 30 days, incurring additional costs of US$17.4 billion annually (1). While it is unclear whether such readmissions are entirely preventable, there is evidence that targeted interventions initiated before and/or shortly after discharge can decrease the likelihood of readmission by 25% to 45% (2-7).
Readmission rates are influenced by the quality of inpatient and outpatient care, the availability and use of effective disease management programs, and the bed capacity of the local health care system. Some of the variation in readmissions may be attributable to delivery system characteristics (8). Interventions during and after a hospitalization can also be effective in reducing readmission rates in geriatric populations (3, 6) and for elderly HF patients (2, 4, 9-12). Tracking readmissions also emphasizes improvement in care transitions and care coordination. Although discharge planning is required by Medicare as a condition of participation for hospitals, transitional care focuses more broadly on “hand-offs” of care from one setting to another, and may have implications for quality and costs (Coleman, 2005). Despite positive results in disease management studies, many post-hospital HF management programs have been discontinued, most often due to financial considerations (13).
Readmission of patients who were recently discharged after hospitalization with HF represents an important, expensive, and often preventable adverse outcome. The risk of readmission can certainly be modified by the quality and type of care provided to these patients. Reducing readmission rates is the joint responsibility of hospitals and clinicians. Measuring readmission will create incentives to invest in interventions that improve hospital care, better assess patients’ readiness for discharge, and facilitate transitions to outpatient status. This measure is also responsive to the recent call by the Medicare Payment Advisory Commission to develop readmission measures, with HF as a priority condition.
Unplanned hospital readmission has emerged as a major CMS focus of quality improvement and payment reform. Coupled with national initiatives, Adventist Health West chose unplanned readmissions following an index hospitalization for HF as a major system-wide initiative for CY 2013.
Heart Failure Readmission Models
In a systematic review, Kansagara and colleagues identified 26 unique models for predicting unplanned readmission (14). Fourteen of these models were based on retrospective administrative data. Most included variables for medical comorbidity and use of prior medical services, but a few considered mental health, functional status, and social determinants. Multicenter US studies generally had poor discriminative ability (C-statistic range: 0.55-0.65). From this set of 14 models, we chose the four models with the highest C-statistics and derived a fifth, to be evaluated using data extracted from the Adventist Health administrative database.
CMS Readmission Model
The CMS developed hospital-specific, risk-standardized, 30-day all-cause readmission rates for Medicare fee-for-service (FFS) patients discharged from the hospital with a principal diagnosis of HF. To account for the clustering of observations within hospitals and differences in number of admissions across hospitals, they used hierarchical regression to estimate risk-adjusted rates. This model uses administrative claims data from each index HF hospitalization, and from inpatient and outpatient Medicare claims from the 12 months prior to the hospitalization.
The CMS readmission model was intended to estimate hospital-specific readmission and mortality rates, using administrative claims data to profile hospital performance among Medicare patients admitted with heart failure. Although the CMS heart failure model is not designed to predict 30-day outcomes in real time for actual bedside application, this risk model may be used to predict a future outcome. Therefore, we considered the CMS model to be a benchmark against which to judge the performance of competing models outlined in the literature and subsequently derived from Adventist Health System administrative datasets. Our data strategy mirrored the CMS data strategy, which relies on administrative data collected in the year prior to, and including, the index admission up to the day of discharge.
CMS used a hierarchical logistic regression model to calculate hospital risk-standardized 30-day all-cause readmission rates for patients hospitalized with heart failure. This model was derived using administrative claims data and included 37 variables (Table 1). These variables were constructed using Condition Categories (CCs) from CMS’s Hierarchical Condition Category (HCC) methodology. The area under the ROC curve was 0.601; a model with age and gender alone had an ROC of 0.516, and a model with all candidate variables had an ROC of 0.604 (15).
The discrimination and the explained variation of the CMS model at the patient level are consistent with the few published models of readmission after HF that report predictive ability (16-17). The CMS research group excluded covariates such as potential complications, certain patient demographics (e.g., race, socioeconomic status), and patients’ admission path and discharge disposition (e.g., admission from, or discharge to, a skilled nursing facility). These characteristics may be associated with readmission and thus could improve the model’s ability to predict patient readmissions. However, these variables may be related to quality or supply factors that should not be included in an adjustment that seeks to control for patient clinical characteristics. For example, if hospitals with a higher share of a particular ethnic group have higher readmission rates, then including ethnic group in the model will attenuate this difference and obscure differences that are important to identify. With regard to non-clinical variables, hospitals are expected to do well with the patients they have. Thus, the CMS choice was to focus on adjustment for clinical differences in the populations among hospitals.
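To make this modeling approach concrete, the sketch below shows one way a hierarchical (random-intercept) logistic model of the general kind CMS describes could be fit in Python with statsmodels. It is a minimal illustration under stated assumptions: the file name, covariates and column names are hypothetical placeholders, not the actual 37 CMS condition-category variables.

# Minimal sketch of a hierarchical (random-intercept) logistic model:
# patients nested within hospitals, covariates drawn from administrative
# claims. All file, column and covariate names are illustrative only.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# One row per index HF admission, with a binary 30-day readmission outcome
# (readmit_30d) and a hospital identifier (hospital_id).
df = pd.read_csv("hf_index_admissions.csv")   # hypothetical extract

model = BinomialBayesMixedGLM.from_formula(
    "readmit_30d ~ age + diabetes_cc + renal_failure_cc",   # illustrative covariates
    vc_formulas={"hospital": "0 + C(hospital_id)"},          # hospital-level random intercept
    data=df,
)
result = model.fit_vb()    # variational Bayes fit
print(result.summary())

In a model of this type, the hospital-level intercepts are what allow risk-standardized readmission rates to be compared across hospitals while adjusting for patient mix.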
The LACE Index
The LACE index (Table 2) uses four factors to determine the risk of death or unplanned readmission within 30 days after hospital discharge for all hospitalizations: length of stay in days for the index hospitalization; acuity of illness at the time of the index admission; Charlson comorbidity score; and the number of emergency department visits in the 6 months before the index hospitalization. The index was derived using clinical data collected on hospital inpatients and validated using both a split-sample method and administrative hospital records in Ontario, Canada (18). The original intent of the LACE was to identify patients who might benefit from additional post-discharge care. Although the LACE index was developed to predict the composite of death or unplanned hospital readmission, the initial study team uncoupled the composite endpoint and reported unplanned hospital readmission within 30 days after hospital discharge separately. The LACE index was moderately discriminative for 30-day unplanned readmission (C-statistic 0.679, 95% CI 0.650-0.708). The LACE has several strengths to support its use: (1) its simplicity; (2) each component of the index is readily determined; and (3) its discrimination is better than that of the widely used CMS score.
Table 2
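Because each LACE component is readily available in administrative data, the index can be computed with a few lines of code. The sketch below is a minimal illustration, assuming the point values commonly reported for the original index; the weights should be confirmed against Table 2 before any local use.

# Minimal sketch of the LACE index. Point values are those commonly
# reported for the original index and should be confirmed against Table 2.
def lace_score(length_of_stay_days, emergent_admission, charlson_index, ed_visits_6mo):
    # L: length of stay of the index hospitalization (capped at 7 points)
    if length_of_stay_days < 1:
        l_pts = 0
    elif length_of_stay_days <= 3:
        l_pts = int(length_of_stay_days)      # 1, 2 or 3 points
    elif length_of_stay_days <= 6:
        l_pts = 4
    elif length_of_stay_days <= 13:
        l_pts = 5
    else:
        l_pts = 7

    # A: acuity of the index admission (emergent admission scores 3 points)
    a_pts = 3 if emergent_admission else 0

    # C: Charlson comorbidity index (capped at 5 points)
    c_pts = charlson_index if charlson_index <= 3 else 5

    # E: emergency department visits in the prior 6 months (capped at 4 points)
    e_pts = min(ed_visits_6mo, 4)

    return l_pts + a_pts + c_pts + e_pts

# Example: a 5-day emergent stay, Charlson index 2, one prior ED visit
print(lace_score(5, True, 2, 1))   # 4 + 3 + 2 + 1 = 10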
The Hasan Score
A modified Hasan score (19) was developed at the Division of General Internal Medicine at the Brigham and Women’s Hospital to identify early unplanned hospital readmissions in a “diverse patient population and derive and validate a simple model for identifying patients at high readmission risk”. Patient data were collected from general medicine services at six academic medical centers.
Readmissions were identified from administrative data and 30-day post-discharge follow-up telephone calls. Patient-level factors were grouped into four categories: socio-demographic factors, social support, health condition, and healthcare utilization. The Hasan model’s discrimination was fair, with a C-statistic of 0.65. The modified Hasan score variables and weights are shown in Table 3.
Table 3
The PARR Score
Billings and colleagues (20) developed a predictive model using a limited set of variables generated from the hospital episode statistics of the National Health Service (NHS) in England. This model estimates the risk of readmission to an NHS hospital within 30 days of discharge. The variables selected were “readily available from patients’ notes or from the hospital patient administration system” (administrative data). Using their model, the authors suggest that institutions could build simple software tools to calculate readmission risk scores. The model achieved a respectable ROC curve C-statistic of 0.70. We modified the PARR scoring system to include only those variables that could be extracted from our administrative database (Table 4).
Table 4
Given the latter three models, our strategy became one of combining the best of the three – LACE, Hasan and PARR – to find a better predictive model that uses administrative data. Predictive models were derived using the basic building blocks of the three candidate models. After performing a relatively exhaustive sensitivity analysis (i.e., testing alternative variable weighting schemes, as sketched below), our best predictive model is given in Table 5.
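The weighting-scheme sensitivity analysis can be sketched as a simple grid search: candidate point weights are assigned to the building-block variables, each additive scheme is scored on the derivation dataset, and the scheme with the highest C-statistic is retained. The variables, candidate weights and file name below are hypothetical placeholders, not the actual AH model reported in Table 5; the derivation sample is the 60% partition described under Statistical methods.

# Hypothetical sketch of the weighting-scheme sensitivity analysis.
# Variable names and candidate weights are placeholders, not the AH model.
from itertools import product
import pandas as pd
from sklearn.metrics import roc_auc_score

derivation = pd.read_csv("derivation_60pct.csv")   # hypothetical 60% derivation sample

candidate_weights = {
    "prior_admissions_12mo": [1, 2, 3],
    "charlson_index":        [1, 2],
    "ed_visits_6mo":         [1, 2],
}

def additive_score(df, weights):
    # Weighted sum of the building-block variables for each patient.
    return sum(df[var] * w for var, w in weights.items())

best_auc, best_scheme = 0.0, None
for values in product(*candidate_weights.values()):
    scheme = dict(zip(candidate_weights.keys(), values))
    auc = roc_auc_score(derivation["readmit_30d"], additive_score(derivation, scheme))
    if auc > best_auc:
        best_auc, best_scheme = auc, scheme

print(best_scheme, round(best_auc, 3))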
30-Day Timeframe
The outcome evaluated is HF 30-day all-cause readmission, as measured from the date of discharge of the index HF admission. CMS chose 30 days because it is a “clinically meaningful timeframe for hospitals, in collaboration with their medical communities, to take actions to reduce readmissions, such as: ensure patients are clinically ready at discharge; reduce risk of infection; reconcile medications; improve communication among providers in transitions of care; encourage strategies that promote disease management principles and educate patients on what symptoms to monitor, who to contact with questions and where and when to seek follow-up care.” CMS uses all-cause readmission for several reasons. First, from the patient perspective, readmission for any cause is a key concern. Second, limiting the measure to HF readmissions may make it susceptible to gaming. Likewise, it is often hard to exclude quality issues and accountability based on the documented cause of readmission. For example, a patient with heart failure who develops a hospital-acquired infection may ultimately be readmitted for sepsis. CMS considers it inappropriate to treat this readmission as unrelated to the care the patient received for HF. Another patient might have a complication leading to renal failure, resulting in readmission for renal failure, and yet quality of care during the HF admission could have reduced the risk of the complication. Finally, while the measure does not presume that each readmission is preventable, there are interventions that have been shown to reduce non-HF as well as HF readmissions.
The CMS, LACE, Hasan, PARR and AH models were constructed using data extracted from an administrative dataset of patients discharged from Adventist Health System hospitals with a discharge diagnosis of heart failure, as indicated by International Classification of Diseases, Ninth Revision – Clinical Modification (ICD-9-CM) codes. This set of ICD-9-CM codes is identical to the CMS-defined universe of heart failure codes (Table 6).
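A minimal sketch of this cohort extraction step is shown below; only a few illustrative ICD-9-CM codes are listed, the full CMS-defined set being given in Table 6, and the file and column names are hypothetical.

# Sketch of selecting index HF admissions from an administrative extract
# by principal-diagnosis ICD-9-CM code. The code list is an illustrative
# subset only; see Table 6 for the full CMS-defined set.
import pandas as pd

HF_ICD9_CODES = {"428.0", "428.1", "428.9"}   # illustrative subset

admissions = pd.read_csv("discharges.csv", dtype={"principal_dx": str})   # hypothetical extract
hf_index = admissions[admissions["principal_dx"].isin(HF_ICD9_CODES)].copy()
print(len(hf_index), "index HF admissions identified")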
Statistical methods
Discrimination – the ability to differentiate patients who would be readmitted from those who would not – was determined by creating receiver operating characteristic (ROC) curves and calculating the area under the curve (AUC). C-statistics and accompanying 95% confidence intervals are standard metrics used to describe model discrimination. The C-statistic, which is equivalent to the area under the ROC curve, is defined as the proportion of times the model correctly discriminates between a pair of high- and low-risk individuals. A C-statistic of 0.50 indicates that the model performs no better than chance; a C-statistic of 0.70 to 0.80 indicates modest or acceptable discriminative ability; and a C-statistic greater than 0.80 indicates good discriminative ability.
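For reference, the C-statistic and an accompanying 95% confidence interval can be obtained as in the sketch below, which uses a simple patient-level bootstrap; scikit-learn and NumPy are assumed, and the array names are illustrative.

# Sketch of the C-statistic (area under the ROC curve) with a bootstrap
# 95% confidence interval. y_true holds observed 30-day readmission
# indicators and y_score the predicted risks (both NumPy arrays).
import numpy as np
from sklearn.metrics import roc_auc_score

def c_statistic_ci(y_true, y_score, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    point = roc_auc_score(y_true, y_score)
    boot = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)              # resample patients with replacement
        if y_true[idx].min() == y_true[idx].max():
            continue                             # skip resamples with only one outcome
        boot.append(roc_auc_score(y_true[idx], y_score[idx]))
    lower, upper = np.percentile(boot, [2.5, 97.5])
    return point, (lower, upper)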
The HF patient datasets from each hospital were combined into one system dataset and randomly partitioned into a derivation dataset (60%) and a validation dataset (40%) for the development of a system-wide unplanned readmission model. The model was then validated using the 40% validation sample, and the resulting prediction algorithm was applied to each of the individual hospitals.
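The derivation/validation workflow can be sketched as follows; a plain logistic regression from scikit-learn stands in for the actual model-building steps, and the file, predictor and outcome column names are illustrative.

# Sketch of the 60% derivation / 40% validation workflow. A plain logistic
# regression is used as a stand-in for the actual model; names are illustrative.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

hf_index = pd.read_csv("hf_index_admissions.csv")   # hypothetical combined system dataset

predictors = ["age", "charlson_index", "ed_visits_6mo", "prior_admissions_12mo"]
X, y = hf_index[predictors], hf_index["readmit_30d"]

X_der, X_val, y_der, y_val = train_test_split(
    X, y, test_size=0.40, random_state=42, stratify=y   # random 60/40 partition
)

model = LogisticRegression(max_iter=1000).fit(X_der, y_der)
val_auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"Validation C-statistic: {val_auc:.3f}")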
Results
ROC curves and C-statistics for the five models, computed using the combined data from the Adventist Hospital system, are given in Figure 1. A summary of the derivation and validation AUCs is given in Table 7. Overall, the Hasan, PARR, and AH models (C-statistics of 0.802, 0.821 and 0.846, respectively) were superior to both the CMS and LACE prediction models (C-statistics of 0.749 and 0.547, respectively).
The AUC for each of the prediction models at the individual Adventist Hospitals is summarized in Table 8. The Hasan, PARR, and AH models again perform better than either the CMS or LACE model.
Conclusion
We investigated five models for predicting 30-day readmission after HF hospitalization, all built from administrative datasets that are readily available in hospitals. Three of these models – the Hasan, PARR and AH models – emerged as the strongest predictors of subsequent unplanned 30-day readmission for HF. None of these models adjusts for variables that may represent complications rather than comorbidities.
The patient-level discrimination and the explained variation of the latter three models are consistent with the published models of readmission after HF that report predictive ability (16-17). These models perform as expected given that the risk of readmission is likely much more dependent on the quality of care and system characteristics than on patient severity and comorbidity characteristics. The unmeasured readiness for discharge, the proper medications, and the proper transition to the outpatient setting may be even more important for readmission. In addition, research suggests that some HF admissions may be discretionary, with higher rates in geographic areas with a greater supply of hospital beds than areas with fewer beds (8).
The Hasan, PARR and AH models are simple risk scoring systems that are convenient and inexpensive, as all three are based on data that are readily available and not dependent upon chart review. More precise mathematical prediction of readmission risk that reduces the unexplained residual variation would require additional data input from two broad domains: 1) patient- and disease-specific characteristics that are identified during the index hospitalization; and 2) factors related to the patient’s HF and other medical conditions and their management assessed after discharge.
This study differed from others by using “Off the Shelf” unplanned HF readmission models to derive a readmission model that uses only Adventist Health administrative databases. This approach is perhaps contrary to a purely statistical approach. This study also differs from others by including a much larger number of patients from 16 hospitals, with perhaps more diverse demographic characteristics. Our approach to deriving a better unplanned HF readmission model uses consensus patient comorbidities as a major determinant of readmission risk. In a sensitivity analysis, we found that admission source, age, gender and discharge destination were not predictors of readmission. Unfortunately, administrative data lack some of the richness of clinical information, such as patient social support, health status, and compliance. Our simple risk scoring system is convenient and inexpensive, as it relies on data that are readily available and not dependent on patient chart review.
The ability to identify those patients at high risk of readmission represents the first step in any strategy to improve care and services for HF patients. The real goal is to combine the risk assessment process with interventions that lessen the risk of readmission. Only a limited amount is known about what interventions work and for whom. A broad range of interventions have been employed including pre-discharge interventions (e.g. improved discharge planning, patient education, medication reconciliation) and post-discharge interventions (e.g. patient hotlines, home visits).
Implications
From a clinical viewpoint, quantification of readmission risk at the time of hospital discharge is valuable because it provides the opportunity to enroll high-risk patients into proactive care management programs. Such programs have been demonstrated to be effective in reducing costs from hospitalization for HF while improving quality of care and patient functional status (2; West et al., 1997; Fonarow et al., 1997; Kornowski et al., 1995). The AH scoring system developed here improves the prediction of readmission risk by differentiating patients at very low risk from those at medium and high risk, rather than assuming that all previously hospitalized patients are at equally high risk.