Applicability and relevance of models that predict short term outcome after intracerebral haemorrhage
M J Ariesen,1 A Algra,1,2 H B van der Worp,2 G J E Rinkel2

1 Julius Centre for Health Sciences and Primary Care, University Medical Centre Utrecht, the Netherlands
2 Department of Neurology, Rudolf Magnus Institute of Neuroscience, University Medical Centre Utrecht, the Netherlands

Correspondence to: Dr A Algra, Department of Neurology and Julius Centre for Health Sciences and Primary Care, University Medical Centre Utrecht, Str. 06.131, PO Box 85500, 3508 GA Utrecht, the Netherlands; A.Algra@umcutrecht.nl; http://www.juliuscenter.nl

Abstract

Objectives: Several models for the prediction of short term outcome after intracerebral haemorrhage (ICH) have been published; however, they are rarely used in clinical practice for treatment decisions. This study was conducted to identify current models for the prediction of short term outcome after ICH and to evaluate their clinical applicability and relevance in treatment decisions.

Methods: MEDLINE was searched from 1966 to June 2003 and studies were included if they met predefined criteria. Regression coefficients of multivariate models were extracted. Two neurologists independently evaluated the models for applicability in clinical practice. To assess the clinical relevance and accuracy of each model, the proportion of patients with a ⩾95% probability of death or poor outcome and the actual 30 day case fatality among these patients were calculated in a validation series of 122 patients. Receiver operating characteristic (ROC) curves were computed for assessment of discriminatory power.

Results: A total of 18 prognostic models were identified, of which 14 appeared easy to apply. In the validation series, the proportion of patients with a ⩾95% probability of death or poor outcome ranged from 0% to 43% (median 23%). The 30 day case fatality in these patients ranged from 75% to 100% (median 93%). The area under the ROC curves ranged from 0.81 to 0.90.

Conclusions: Most models are easy to apply and can generate a high probability of death or poor outcome. However, only a small proportion of patients have such a high probability, and 30 day case fatality is not always correctly predicted. Therefore, current models have limited relevance in triage, but can be used to estimate the chances of survival of individual patients.

Abbreviations: GCS, Glasgow Coma Scale; ICH, intracerebral haemorrhage

Keywords: intracerebral haemorrhage; prognosis


Intracerebral haemorrhage (ICH) represents about 12% of all strokes.1 The short term prognosis of patients with spontaneous ICH is poor: about 50% of patients who experience an ICH die within 30 days.2–4 In general, early survival of patients with ICH is known to depend strongly on the Glasgow Coma Scale (GCS) score on admission. Other factors known to predict outcome after ICH are the size of the haemorrhage and the presence of intraventricular haemorrhage.5

Prediction of outcome in patients with ICH can be used for two main purposes:

  1. in an emergency department, to differentiate between patients who might still benefit from intensive care and those whose prognosis is so poor that they will no longer benefit from it. In this sense, outcome prediction can be used to decide whether or not to start intensive treatment.

  2. to inform patients and relatives about the chances of recovery.

Several models have been developed for prediction of short term outcome after ICH, but, to our knowledge, these are rarely used for triage in clinical practice. Additionally, previous authors have noted that no grading scale for ICH is consistently used for triage and acute intervention in either clinical care or clinical research.6

The aims of the present study were:

  • to identify existing models for the prediction of short term outcome after primary ICH

  • to evaluate whether these models can be rapidly and easily applied at the time of presentation

  • to evaluate whether the models' predictions are valid and accurate enough to base major treatment decisions upon, and whether this holds for a relevant proportion of all admitted patients

  • to assess the discriminatory power of the models in the estimation of a patient’s prognosis.

This allowed us to evaluate the correctness of the estimate of a patient’s prognosis over the whole range of outcome probabilities.

METHODS

Literature search

We searched MEDLINE from 1966 to June 2003 for studies in which a prognostic model was described for short term outcome after ICH. The following search strategy was used: “Cerebral haemorrhage” [Medical Subject Headings (MeSH)] AND “Prognosis” [MeSH] NOT (“Animals” [MeSH] OR “Animal” [MeSH] OR “Models, Animal” [MeSH] OR “Infant” [MeSH] OR “Infant, Newborn” [MeSH] OR “Craniocerebral Trauma” [MeSH] OR “Injuries” [Subheading] OR “Cerebrovascular Trauma” [MeSH] OR “Head Injuries, Closed” [MeSH] OR “Brain Injuries” [MeSH]). Bibliographies of retrieved articles were examined for further relevant publications. This method of cross-checking was continued until no further publications were found.

Inclusion criteria

Studies were included if:

  • the described multivariate model was developed in patients with primary ICH

  • short term poor outcome was defined as death or dependence measured within six months after ICH. Dependence was defined as a score of 3–5 on the modified Rankin scale7,8 or a score of 2 or 3 on the Glasgow Outcome Scale9

  • studies presented a logistic regression model with corresponding intercept and regression coefficients. If the intercept was not reported, the regression coefficients, the probability of outcome, and data on predictors had to be reported to allow calculation of the intercept

  • the publications were in English, French, German, or Spanish.

We focused on predictive models that were applied soon after the first clinical and radiological assessment. Therefore, those studies in which surgery or change of a predictor over time was included in the model were excluded. Furthermore, we excluded studies of patients with ICH as a result of thrombolysis, trauma, or operation.

Data extraction

The following information was extracted from each study: number of patients, definitions of predictors, definition of outcome, time of outcome assessment, regression coefficients of the prognostic model, and the intercept. If the intercept was not reported, it was calculated as follows: Intercept = −LN ((1/p)−1) − (β1*predictor1+… βn*predictorn), where p is the overall proportion of outcome in the study population and predictor1 to predictorn are the means of these predictors. This formula is derived from the logistic regression equation.
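To illustrate this back calculation, the sketch below (in Python) implements the formula directly. The outcome proportion, predictor names, coefficients, and means in the example are purely hypothetical and are not taken from any of the included studies.

```python
import math

def back_calculate_intercept(p, coefficients, predictor_means):
    """Recover the intercept of a logistic regression model from the overall
    outcome proportion p, the reported regression coefficients, and the mean
    values of the predictors, using the formula given above:
    intercept = -ln((1/p) - 1) - sum(beta_i * mean_of_predictor_i)."""
    linear_part = sum(b * m for b, m in zip(coefficients, predictor_means))
    return -math.log((1.0 / p) - 1.0) - linear_part

# Hypothetical example: 40% of patients had the outcome, and the model has
# two predictors (say, GCS score and haematoma volume in ml) with made-up
# coefficients and made-up mean values.
intercept = back_calculate_intercept(
    p=0.40,
    coefficients=[-0.35, 0.04],
    predictor_means=[11.2, 32.0],
)
print(round(intercept, 3))  # prints 2.235
```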

Validation series

We applied the prognostic models to data of 122 patients aged 18 years and older who had been admitted to our hospital with an ICH between January 1988 and December 1997. Patients were included if:

  • they had been admitted with a primary ICH (supratentorial only) within 72 hours after the onset of symptoms

  • the score on the GCS on admission could be retrieved from the records

  • a computed tomography (CT) scan of the brain had been performed in our hospital immediately after admission.

During the study period, 306 patients were admitted to our hospital with an ICH. A total of 184 patients were excluded for the following reasons: insufficient CT scan data (n = 173) (CT scan performed in another hospital (n = 122), performed outside time window (n = 30), incomplete scans (n = 16), or missing CT scan (n = 5)) and GCS not retrievable (n = 11).

For each patient, we retrieved the following data from the medical records: sex, age, systolic and diastolic blood pressures, and the GCS score at admission. From the CT scan at admission we calculated the ICH volume and assessed the presence and extent of intraventricular haemorrhage and the septum pellucidum shift. Hydrocephalus was considered present when the bicaudate index exceeded the upper limit of normal per decile of age.10

To evaluate the discriminatory performance of the models in the estimation of a patient’s prognosis, we computed a receiver operating characteristic (ROC) curve for each model and assessed its area under the curve (AUC) (SPSS for Windows, Standard version released 15 November 2001).11 In an ROC curve the true positive proportion (sensitivity) is plotted against the false positive proportion (1 – specificity). With ROC curves the prediction of outcome over the whole range of outcome probabilities (from 0% to 100%) can be evaluated. The area represents the probability that a randomly chosen diseased subject is correctly rated or ranked with greater suspicion than a randomly chosen non-diseased subject.12 An AUC of 1 corresponds with a perfect prediction and an AUC of 0.5 with no discriminatory power at all.
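As an illustration of how such an AUC can be obtained from predicted probabilities and observed outcomes, the sketch below uses the ranking interpretation described above. The original analyses were performed in SPSS; all numbers in this example are purely hypothetical.

```python
from itertools import product

def auc_by_ranking(predicted, died):
    """AUC computed from its probabilistic interpretation: the chance that a
    randomly chosen patient who died is assigned a higher predicted
    probability than a randomly chosen patient who survived (ties count
    as half)."""
    pos = [p for p, d in zip(predicted, died) if d]      # patients with the outcome
    neg = [p for p, d in zip(predicted, died) if not d]  # patients without the outcome
    wins = sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a, b in product(pos, neg))
    return wins / (len(pos) * len(neg))

# Hypothetical predicted probabilities and 30 day outcomes (1 = died).
probabilities = [0.97, 0.85, 0.60, 0.40, 0.15, 0.08]
died = [1, 1, 0, 1, 0, 0]
print(round(auc_by_ranking(probabilities, died), 2))  # prints 0.89
```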

Data analysis

Two neurologists independently evaluated the ease of application of each prognostic model in clinical practice in terms of the time needed to calculate an outcome probability. The assessment was based on the time needed to collect and review all data required for the model and on the availability of these data on the ward on the day of admission. A prognostic model was classified as “easy to score” if the calculation would take no more than 10 minutes on the ward. In case of disagreement, consensus was reached in a meeting chaired by an epidemiologist.

To evaluate whether major treatment decisions could be based on the predictions of a model, we first calculated the highest predicted probability that each model could generate, that is, the probability for the combination of predictors yielding the highest probability of death or poor outcome. We considered a probability of death or poor outcome of 95% or more high enough to serve as a basis for major treatment decisions. In addition, we used a value of 90% to assess the influence of the cut-off. If a model could not generate predicted probabilities of 90% or higher, we did not consider its predictions suitable for major treatment or care decisions. For the remaining prediction models, we calculated the predicted probability for each patient in the validation series. Then, for each model, the proportion of patients with a 95% or higher probability of death or poor outcome was calculated, and likewise for the 90% cut-off. If data on a categorical predictor were not available, we used its maximum value to limit the number of missing observations, and thereby estimated the maximum proportion of patients with a probability of 90% or 95% or higher; we also repeated this with the minimum value to assess the influence of this assumption. Finally, the 30 day case fatality (the proportion of patients who died within 30 days) among the patients from our series with a probability of 90% or 95% or higher was calculated to assess the accuracy of the prediction.
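The sketch below illustrates these calculations for a single model and cut-off: it applies a logistic prediction model to a small patient series and computes the proportion of patients at or above the cut-off, together with the 30 day case fatality within that subset. The model coefficients, predictor names, and patient data are purely hypothetical and do not correspond to any of the evaluated models or to the validation series.

```python
import math

def predicted_probability(intercept, coefficients, values):
    """Probability of death or poor outcome from a logistic model:
    p = 1 / (1 + exp(-(intercept + sum(beta_i * x_i))))."""
    z = intercept + sum(b * x for b, x in zip(coefficients, values))
    return 1.0 / (1.0 + math.exp(-z))

def relevance_summary(patients, intercept, coefficients, cutoff=0.95):
    """Return the proportion of patients at or above the cut-off probability
    and the 30 day case fatality within that subset (None if empty)."""
    high_risk = [pt for pt in patients
                 if predicted_probability(intercept, coefficients,
                                          pt["predictors"]) >= cutoff]
    proportion = len(high_risk) / len(patients)
    case_fatality = (sum(pt["died_30d"] for pt in high_risk) / len(high_risk)
                     if high_risk else None)
    return proportion, case_fatality

# Entirely hypothetical model (predictors: GCS score, ICH volume in ml) and
# three hypothetical patients; none of these numbers come from the paper.
patients = [
    {"predictors": [4, 90], "died_30d": 1},
    {"predictors": [14, 10], "died_30d": 0},
    {"predictors": [7, 55], "died_30d": 1},
]
print(relevance_summary(patients, intercept=-2.0,
                        coefficients=[-0.5, 0.08], cutoff=0.95))
# prints (0.3333333333333333, 1.0)
```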

RESULTS

We included 18 prognostic models in our analysis (fig 1).6,13–29 Frequently reported predictors were haematoma size, the presence of intraventricular extension, and a poor clinical condition on admission (table 1).

Table 1

 Study characteristics of the included studies on predictors of death or poor outcome after intracerebral haemorrhage

Figure 1

 Method of selection of studies.

Fourteen prognostic models were classified as easy to score (table 2). The highest predicted probability of death or poor outcome that each model could generate, that is, for the combination of predictors yielding the highest probability, ranged from 80% to 100% (median 99%) (see table 2).

Table 2

 Assessment of the clinical applicability, relevance, and discriminatory power of the models

The characteristics of the patients included in the validation series are shown in table 3. Thirty day case fatality was 40%. Patients who died within 30 days were slightly older, had a higher blood pressure, a larger ICH volume, and a lower score on the GCS on admission, and more often had ventricular extension of the haemorrhage (table 3). These data are consistent with the predictors identified in the studies shown in table 1. We therefore assumed that our series is a good representation of patients admitted to a hospital with an ICH.

Table 3

 Patient characteristics of the validation series of patients with intracerebral haemorrhage

In the validation series, the proportion of patients with a ⩾95% probability of death or poor outcome according to the eight models that could generate such a probability ranged from 0% to 43% (median 23%). Seven of these models identified a subset of patients with a ⩾95% probability of death or poor outcome; in these subsets, the 30 day case fatality ranged from 75% to 100% (median 93%; see table 2). Ten prognostic models identified a subset of patients with a ⩾90% probability of death or poor outcome in the validation series. The proportion of patients with such a probability ranged from 5% to 48% (median 30%). In these subsets, the 30 day case fatality ranged from 67% to 100% (median 89%; data not shown).

We computed an ROC curve for each model to evaluate the prediction of outcome over the whole range of outcome probabilities relevant to inform patients or their relatives about their prognosis. The AUC ranged from 0.81 to 0.90 (see table 2).

DISCUSSION

In the present study, most prediction models for outcome after ICH appeared easy to apply, and most could generate a high probability of death or poor outcome for patients with the most unfavourable combination of predictors. However, only a small proportion of patients admitted to a hospital with ICH have such a high probability of death or poor outcome. In addition, the 30 day case fatality was not always correctly predicted in the validation series. Because these models leave too much uncertainty for most patients and sometimes give an incorrect prediction, their relevance in deciding whether or not to start intensive treatment is limited. We estimated the discriminatory power of all the models with ROC curves. Every model had a reasonable AUC (0.81 to 0.90), well above the value of 0.5 that corresponds to no discriminatory power. Therefore, the estimates of prognosis obtained from these models can be used to give patients and relatives an indication of the patient's chance of survival.

We focused our evaluation on the use of prognostic models for application soon after the first clinical and radiological assessment because at that stage prediction of outcome makes an important contribution to major treatment decisions. If either surgery or change in a predictor over time is included as a predictor of outcome, a prediction rule cannot be used for such early decision making. We had to limit our evaluation to studies in which the intercept was reported or from which we could calculate the intercept. Because of this criterion three studies were excluded—none of these reported a model with predictors that were strongly associated with death or poor outcome.

We used a series of 122 patients to assess the clinical relevance of the models. Our series was approximately the same size as the series in which the prediction models had been developed. It had two limitations: first, the data were collected retrospectively; second, of the more than 300 patients admitted to our hospital during the study period, only 122 were eligible for this validation series. Nevertheless, our validation series had approximately the same patient characteristics, and the factors associated with death within 30 days were similar to the factors identified in the studies in which the models were developed. Therefore, our validation series seems to be a representative sample of patients admitted to a hospital with ICH.

For the analyses of clinical relevance we assumed that a probability of poor outcome of 95% or higher was high enough to base treatment decisions upon. This cut-off value was based on the assumption that intensive treatment might not be started in some patients with less than a 5% chance of survival. We used a value of 90% to assess the influence of this cut-off, and the results did not change materially. It is known that prediction rules predict less well at their extremes; however, for treatment decisions pertaining to life and death one has to be certain about the predicted outcome. One could also argue for a higher cut-off value, but in that case even fewer patients would have had such a high predicted probability and the clinical relevance would be even more limited.

Several comments can be made on the methodological quality of studies on clinical prediction rules.33,34 The first pertains to the outcome measures. In most studies the only outcome measure was mortality6,14,15,17–21,23–26,28,29; only four studies included poor outcome based on the Glasgow Outcome Scale or the modified Rankin scale.13,16,22,27 From the patients' perspective, poor outcome may be a better outcome measure than death alone. The second point concerns the predictors. The definitions of the predictors were not clearly described in a number of studies; thus, other investigators cannot apply these rules. One study did not define how to classify the level of consciousness.19 For use of a prediction rule in clinical practice, clear predictor definitions are essential. In 11 studies, the data were obtained retrospectively, with the inherent risk of missing or biased data on predictors.6,13,16–20,22,24,25,27 Lastly, only two models were externally validated26,35: one used patients from the same stroke registry14 and the other was validated by a research group different from the group that developed the model.6

In the development of a prediction model, about ten outcome events are needed for the inclusion of one predictor.36 Because approximately half the patients admitted to a hospital with ICH die, it should be possible to develop a valid model. However, for a model to be used in treatment decisions, it has to predict outcome extremely well to avoid errors with serious consequences. Key factors that should be included are the clinical condition on admission and the extent of the haemorrhage. Since these factors do not predict well enough for the majority of patients, other factors should be added. Factors that seem worth studying are the course of the clinical condition within the first hours after admission and the size of the haematoma on a repeated scan a few hours after admission (because haematomas may enlarge within the first hours after the haemorrhage).

We conclude that the current prediction rules for identifying patients with a high probability of poor outcome after ICH are either not sufficiently accurate or apply to only a small proportion of patients, and therefore cannot serve as a basis for decisions about whether to commence intensive treatment. A model useful in clinical practice should give a very precise prediction for a large group of patients. However, the current models do have reasonable discriminatory power and can therefore be used to estimate the chances of survival of individual patients.

Acknowledgments

We thank Dr S P Claus for permission to use the data he collected on 122 patients who were admitted with an ICH to the University Medical Centre Utrecht. We thank Prof Dr D E Grobbee for constructive comments on an earlier version of this manuscript.


Footnotes

  • This research was supported by the Health Research and Development Council of the Netherlands (ZonMw, project number 904–61–190). This study was in part supported by an established clinical investigator grant from the Netherlands Heart Foundation to Prof Dr G J E Rinkel (grant D98.014).

  • Competing interests: none declared
