The Comprehensive Severity Index (CSI) was developed by Susan D. Horn, Ph.D., senior scientist at International Severity Information Systems, Inc. (ISIS), beginning in 1982 with the help of more than 150 physicians at The Johns Hopkins University Medical School. The CSI software system1 is available for use and continues to be expanded and kept current with medical advances by Dr. Horn and her colleagues at ISIS, in collaboration with medical specialists from across the United States.

CSI classifies patients according to their severity of illness, defined as the extent and interactions of a patient’s diseases as presented to medical personnel. The severity of illness scores are based on the degree of abnormality of individual signs and symptoms of a patient’s disease or diseases. The more abnormal the signs and symptoms, the higher the score, with Level 4 signs and symptoms being catastrophic, life-threatening, or likely to result in organ failure. Thus, the CSI scores are based on physiologic measures of a patient’s condition, not just on diagnostic information (ICD-9-CM coding).

The CSI Software System Is Comprehensive

CSI is comprehensive: it includes age- and disease-specific criteria and weighting systems to measure severity for patients in acute care, rehabilitation, outpatient, hospice, and long-term care settings. It includes severity criteria for all ICD-9-CM diagnoses. CSI can be used to predict any severity-dependent outcome, including mortality, morbidity (change in severity from admission to later in the stay), complications (infections, post-op stroke, etc.), cost, length of stay, and admission to a critical care area. It was not modeled to predict only one specified outcome.

All severity criteria (risk factors) measured by CSI are disease-specific and depend on the ICD-9-CM codes assigned to the patient, the patient’s age, and the care setting. The over 2,100 CSI severity criteria are based on objective clinical findings, i.e., the physiological signs and symptoms of a disease, such as temperature, blood pressure, laboratory values, radiology findings, lung assessments, heart sounds, etc. Published examples of CSI disease criteria include sets for adult patients with pneumonia, acute myocardial infarction, malignant neoplasm of the large intestine, and injury of lower extremities in the acute care setting.1 The signs and symptoms for adult patients with a diagnosis of pneumonia, for example, involve the cardiovascular system, temperature, lab results, x-ray findings, and respiratory findings. These signs and symptoms are presented in four columns according to the extent of abnormality of each. Normal to mild symptoms are in the Level 1 column, moderate symptoms are in Level 2, severe symptoms are in Level 3, and catastrophic or life-threatening symptoms with likelihood of organ failure are in Level 4. Thus, a patient’s signs and symptoms for a given disease are represented as a list of scores (1, 2, 3, or 4) depending on the degree of severity of each sign or symptom for that particular patient and his/her age, during a specified time interval and in a specified setting: acute care, rehabilitation, long-term care, etc.
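The leveling of an individual sign or symptom can be sketched in code. The thresholds below are invented for illustration only; actual CSI criteria are disease-, age-, and setting-specific and are maintained by clinical experts.

```python
# Illustrative sketch: map one vital sign to a CSI-style severity level.
# The temperature thresholds below are hypothetical, NOT actual CSI criteria.

def temperature_level(temp_f: float) -> int:
    """Return a severity level 1-4 for an adult oral temperature (deg F)."""
    if temp_f < 100.0:
        return 1   # normal to mild
    elif temp_f < 102.0:
        return 2   # moderate
    elif temp_f < 104.0:
        return 3   # severe
    else:
        return 4   # catastrophic / life-threatening

temperature_level(98.6)    # level 1 under these invented thresholds
temperature_level(103.0)   # level 3
```

In the real system, a patient's record for one disease is simply the collection of such per-criterion levels over a specified time interval.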

CSI stratifies patients into homogeneous severity of illness groups. There are more than 23,500 separate CSI criteria sets (similar to those referenced above) corresponding to diseases (aggregations of similar ICD-9-CM codes), age groups, and care settings. Note that each of the more than 19,000 ICD-9-CM codes indicates the existence of a disease; it does not indicate the extent (severity) of the disease. The choice of disease-specific, setting-specific, and age-specific severity-leveled criteria to be included in CSI is based on the clinical judgment of expert physicians, the literature, and medical textbooks, not on statistical regression analysis. Thus, the calculation of severity is not influenced by historical data that incorporates possibly inefficient medical practices of the past. CSI criteria are independent of treatments and are not tied to any particular outcome, e.g., mortality. Published work by several researchers2,3,4,5 has shown CSI severity scores to be among the best predictors of outcomes such as cost, LOS, mortality, resource utilization, and complications.

CSI Data Requirements

There is no fixed list of required data elements for CSI data collection. Because the system is disease-specific, age-specific, and location-specific, the criteria requested for each patient depend on the ICD-9-CM codes assigned to the patient’s medical record, the age of the patient, and the care setting. The severity criteria used are based on objective clinical findings and not treatment characteristics, so performance of treatments does not affect the severity level of the patient.

Why Does CSI Not Include Treatments?

There are at least two reasons not to include treatment variables in the definition of severity of illness. First, the use of specified treatments may differ because of physician preference or the availability of specific treatments within a facility or setting of care. If two patients have similar objective signs, symptoms, and clinical findings for a disease but one receives more treatments, such as greater blood product use or longer intubation time, a treatment-dependent severity system might report that the patient who received more extensive treatment was “sicker”. This may not be true; it may reflect physician tendencies to use specific treatments and/or the availability of such treatments, not true differences in how “sick” each patient is.

Second, if a severity system incorporates treatments into its paradigm for defining how sick a patient is, the system cannot be used to determine best treatments (treatments cannot serve as both patient criteria and outcome criteria).
For both these reasons, the CSI software system uses only objective clinical findings, and not treatments, as criteria to define patient severity of illness.

Location of CSI Data Elements

Data are abstracted from clearly specified locations in the patient medical chart. For example, vital sign information must come from the vital sign flow sheets and lab values from laboratory report sheets. Physiologic signs and symptoms may come from physician, nursing, and ancillary service (respiratory or physical therapy, dietary, etc.) narrative notes. Persons who collect CSI data undergo an extensive training process to ensure accurate understanding of the system and its use of patient criteria to determine severity of illness scores.

Timing of CSI Reviews

Severity measurements can be performed to determine a patient’s severity at any point during the care process, e.g., at multiple times during an inpatient acute care or rehabilitation admission or long-term care stay, or during sequential visits to ambulatory providers. CSI coding may be performed either retrospectively or concurrently. Choice of timing of CSI assessments depends upon how the severity data are to be used. Examples of review time periods are:

  • The admission CSI review includes all data from the first 24 hours of an acute-care hospital stay. It assesses how sick the patient is on admission to the hospital. For other care settings, this time period is determined for each application. In rehabilitation, for example, the admission time period typically is 72 hours because the initial assessment typically occurs over a period of several days.
  • The discharge CSI review assesses the extent to which abnormalities have been resolved, and reflects information from before discharge.  This time period typically has been 24 hours for acute hospitalizations and 72 hours for inpatient rehabilitation.
  • The full stay CSI review (sometimes called ‘maximum’) uses data from the entire hospital stay, including the admission and discharge review periods. It measures the most-aberrant findings, regardless of when they occur. For example, a patient may peak on temperature one day, blood pressure another day, white blood cell count yet another day, etc. The full-stay CSI review measures the most aberrant clinical findings, because when multiple problems occur, bringing the patient back to baseline health takes longer and is more difficult.
  • Signs and symptoms recorded during each outpatient visit are used to assess ambulatory severity.
  • Signs and symptoms during a specified time window, such as one month, are used to assess severity for long-term care patients.
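The full-stay logic described above, taking each criterion at its most aberrant level regardless of when it occurred, can be sketched as follows. The data shapes and criterion names are hypothetical.

```python
# Sketch of a full-stay (maximum) review: for each criterion, keep the most
# aberrant (highest) severity level observed on any day of the stay.

def full_stay_levels(daily_levels: list) -> dict:
    """daily_levels: one dict per review period mapping criterion -> level 1-4."""
    full_stay = {}
    for day in daily_levels:
        for criterion, level in day.items():
            full_stay[criterion] = max(full_stay.get(criterion, 1), level)
    return full_stay

stay = [
    {"temperature": 3, "systolic_bp": 1, "wbc": 1},   # day 1: fever peaks
    {"temperature": 2, "systolic_bp": 4, "wbc": 1},   # day 2: blood pressure peaks
    {"temperature": 1, "systolic_bp": 2, "wbc": 3},   # day 3: WBC count peaks
]
full_stay_levels(stay)   # each criterion at its worst observed level
```

This mirrors the example in the text: temperature, blood pressure, and white blood cell count may each peak on different days, and the full-stay review captures all three peaks.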

ISIS’s philosophy for CSI data collection is to integrate data from existing systems whenever possible. If no electronic data transfer is possible, all the CSI data must be abstracted from the medical record. Severity scores are calculated automatically by CSI on every data collection computer. Further statistical analysis of severity scores and severity indicators and statistical analysis of non-severity data can be performed by licensees on exported CSI data using statistical analysis software, such as SAS or SPSS.

Internal Edit Checking

CSI contains a variety of automatic data-collection edit checks and safeguards. For example:

  • For the three standard CSI inpatient reviews (admission, discharge, and full stay), the system does not allow the data collector to enter data in the admission review or the discharge review that are more severe than the corresponding data entered in the full-stay review because, by definition, the full-stay review covers the entire hospital stay, including the admission and discharge time frames.
  • If a length of stay entered is longer than a pre-determined number of days, a message appears on the screen that alerts the data collector and asks him/her to double-check the admission and discharge dates.
  • CSI checks input data for acceptable ranges for severity criteria questions that require numeric input, such as vital sign information or laboratory values. For example, the acceptable range for the severity question of highest blood pressure is 1–400. A keystroke error, such as entering 1200 instead of 120, would be rejected by the software and flagged immediately to the user.
  • Data entry fields are color-coded. Required fields (those used in most analyses that involve severity data) are colored red or blue depending on whether they are table-driven or direct-entry fields and cannot be skipped by the data collector. The system does not allow the data collector to move to the next data entry field before completing a required field. Optional fields are colored black or green.
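A range edit check of the kind described can be sketched as below. The 1–400 range for highest blood pressure comes from the text; the field names and the second range are made up for illustration.

```python
# Sketch of a numeric range edit check. A False result corresponds to the
# software rejecting the entry and flagging it to the data collector.

RANGES = {
    "highest_bp": (1, 400),        # range stated in the text
    "temperature_f": (80.0, 110.0) # hypothetical range for illustration
}

def check_range(field: str, value: float) -> bool:
    """Return True if the value falls in the acceptable range for the field."""
    lo, hi = RANGES[field]
    return lo <= value <= hi

check_range("highest_bp", 120)    # True
check_range("highest_bp", 1200)   # False: the keystroke error 120 -> 1200
```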

ISIS employs the following data quality control measures to assist with data cleaning/editing. These quality checks are performed at each reliability test (when the first ten medical records have been completed by the data collector, after one month of data collection, and every six months for the remainder of data collection). Identified problems are communicated to data collectors during the reliability sessions.

  • Data completeness reports show completion statistics for each entry field (including severity criteria). If important entry fields are reported to be frequently empty, the ISIS reliability trainer works with the data collector to promote complete collection of these data.
  • Missing reviews reports identify patient admission records that do not contain an admission CSI review, a discharge CSI review, a maximum CSI review, and/or any user defined review.

  • Date screening reports identify any date entered in a given database that is outside established parameters for that database. For example, in a study of 1997 discharges, any date before 12/1/96 or after 12/31/97 could be flagged for further investigation.
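The date screening report reduces to a simple window check, sketched below; the 1997-discharge window is the example given in the text.

```python
# Sketch of a date screening report: flag any date outside the study window.
from datetime import date

WINDOW = (date(1996, 12, 1), date(1997, 12, 31))  # window from the text's example

def out_of_window(d: date) -> bool:
    """Return True if the date falls outside the established parameters."""
    lo, hi = WINDOW
    return d < lo or d > hi

records = [date(1997, 3, 14), date(1995, 3, 14), date(1998, 1, 2)]
flagged = [d for d in records if out_of_window(d)]  # the last two records
```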

Computation of CSI Severity Scores

The inputs to the CSI software system’s methodology for risk adjustment are the disease-specific, age-specific, and setting-specific severity criteria at specified levels of abnormality. The initial output is a list of severity ratings, one integer between zero and four for each of the patient’s diagnoses. To form its final output, CSI combines the severity ratings for the separate diagnoses to obtain an overall patient severity level, presented both on a categorical scale of 1 (low severity) to 4 (high severity) and on a continuous scale of non-negative integers with no preset maximum. Higher numbers mean that patients are more severely ill.
To determine the severity of a specific disease in a specified setting and age group, CSI uses the two most severe criteria (for examples of criteria sets, see reference 4, pp. 116-131). To compute the overall severity score for a patient, CSI determines the severity of each disease using the disease-specific severity factors. It then combines the severity scores for all diagnoses using disease-specific system rules that reflect the interaction of the diagnoses. Thus, to produce the overall severity score, the system’s logic takes into account the severity level of each disease and the interaction (clinical relationships) of the diseases, along with the patient’s age and setting of care.
Missing data make no contribution to CSI severity scoring. ‘Points’ are not subtracted because specific data elements are missing. All CSI severity scoring is based on data elements that are considered out of the normal range for each specific diagnosis listed for the patient. The ISIS training team works with the data collectors to promote accurate and efficient data collection.
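The scoring flow above can be sketched roughly in code. The two stand-in rules below (taking the lower of the two most severe criterion levels as the disease level, and summing disease levels for the continuous overall score) are illustrative inventions only; the actual CSI rules are disease-specific and reflect clinical interactions among diagnoses.

```python
# Rough sketch of CSI-style scoring. Both combination rules here are
# hypothetical stand-ins, NOT the actual CSI logic.

def disease_level(criterion_levels: list) -> int:
    """Severity of one disease, driven by its two most severe criteria."""
    if not criterion_levels:
        return 0          # missing data contribute nothing to the score
    top_two = sorted(criterion_levels, reverse=True)[:2]
    return min(top_two)   # stand-in rule: level supported by two criteria

def overall_continuous(disease_levels: list) -> int:
    """Continuous overall score: non-negative, no preset maximum (stand-in: sum)."""
    return sum(disease_levels)

patient = {"pneumonia": [4, 3, 2, 1], "CHF": [2, 2, 1]}   # hypothetical levels
levels = {dx: disease_level(c) for dx, c in patient.items()}   # pneumonia: 3, CHF: 2
overall_continuous(list(levels.values()))                      # 5
```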

Validity of CSI

CSI methodology is validated repeatedly as it continues to be used. Predictive validity is tested by how well CSI severity scores predict various outcomes. Although CSI was not designed to explain any particular outcome, e.g., cost, length of stay, or mortality, its scores have been shown to correlate with such severity-dependent outcomes as well as or better than scores from systems based on explicit models for these outcomes.2,3,4,5

All CSI criteria sets have been reviewed by panels of medical specialists (physicians, nurses, and other caregivers, e.g., dietitians, respiratory therapists, etc.) for content validity. These experts determine if the criteria are complete and are leveled appropriately to reflect disease-specific severity differences by age and care setting. Criteria sets are re-reviewed periodically to ensure the criteria are current and relevant to changing medical practices. Prior to using the CSI system for severity stratification in health care research or in Practice-Based Evidence (PBE) studies, providers are encouraged to review the CSI criteria sets and recommend refinements to them.

Construct validity is much harder to measure. Constructs represent efforts to measure relatively abstract variables, which is the case with severity. Constructs vary widely in the extent to which the domain of related observable variables is (a) large or small and (b) tightly or loosely defined. For severity measurement, the domain of variables is vast. It is difficult to define which variables do or do not belong in the domain and the boundaries of the domain or related observables are not clear. There are three major aspects in the process of validating a construct: (1) specify the domain of observables, (2) determine to what extent all or some of those observables correlate with each other or are affected alike by treatments, and (3) determine whether some, or all, measures of such variables act as though they measure the construct.

In developing CSI, the domain of observables was specified as follows: first, the design team decided that all of the patient’s diseases and their respective severity levels had to be taken into account. The design team developed a mechanism to account for levels of each criterion specific to the corresponding diseases (ICD-9-CM codes), a patient’s age, and care setting, and a mechanism to account for the interactions of all of a patient’s diseases, so that an overall severity score could be created. After much empirical research, the team decided that the two most severe criteria would determine a disease-specific severity level. Second, unlike many other systems, CSI was not developed to predict any single outcome based on a specified derivation set. Some severity systems have used physiologic criteria (e.g., PRISM, APACHE, MediQual), some have used diagnoses (e.g., DRGs), and some have tried to combine information from all diagnoses present (e.g., APR-DRGs). Only the CSI system has put all the observables together in one system. CSI has clearly specified the domain of observables.

Now consider the second aspect of construct validity. As patients get sicker, more of their body systems show signs of failure. CSI picks up those multiple signs and symptoms for each disease, and, thus, correlates with the greater difficulty of bringing a sicker patient back to baseline health. However, until there is a well-specified domain for measuring severity of illness, there is no way to know exactly which studies should be done to test the adequacy with which a construct is measured. Strictly speaking, scientists can never be sure that a construct has been measured or that a theory regarding that construct has been tested, even though it may be useful to speak as though such were the case.6

Reliability of CSI

Reliability testing monitors the accuracy of data collection. The standard method employed by ISIS to ensure inter-rater reliability is as follows: the ISIS training team selects several of the first 10 charts abstracted for CSI data by each data collector and abstracts each of these charts independently. This is done by granting specified training team members access to electronic records or by review of paper charts from which all patient identifiers have been removed or blinded. Results from the ISIS abstraction are compared with those from the data collector’s abstraction. A high agreement rate (at least 95%) at the criteria level between each data collector and the ISIS trainer is expected. If a data collector fails to achieve at least this rate, the ISIS training team works with the individual to improve abstraction methods and accuracy. One or more subsequent reliability tests are conducted for each data collector until an agreement rate of at least 95% is achieved.

Reliability testing is conducted again after one month of data collection has been completed. The training team randomly selects three patient records from data submitted by each data collector. The same high agreement rates are expected. Reliability testing is then conducted every six months throughout the data collection period. Reliability testing at the beginning of data collection is intended to ensure that data collectors have learned correct abstraction methods and understand the definition of all variables. Continuing reliability testing ensures that data collection continues to be accurate and assists with increasing efficiency.
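The criterion-level agreement rate is a simple proportion, sketched below. The field names are hypothetical; the 95% threshold is the target stated above.

```python
# Sketch of an inter-rater agreement rate at the criterion level:
# the fraction of criteria on which two independent abstractions agree.

def agreement_rate(rater_a: dict, rater_b: dict) -> float:
    """Compare two abstractions of the same chart, criterion by criterion."""
    keys = rater_a.keys() | rater_b.keys()
    agree = sum(1 for k in keys if rater_a.get(k) == rater_b.get(k))
    return agree / len(keys)

collector = {"temperature": 3, "systolic_bp": 2, "wbc": 1, "x_ray": 2}
trainer   = {"temperature": 3, "systolic_bp": 2, "wbc": 1, "x_ray": 3}
rate = agreement_rate(collector, trainer)   # 0.75, below the 95% target
```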

Other Approaches to Measure Severity

Many severity systems (but not the CSI system) are based on statistical regression models constructed to explain the variation in one particular outcome variable, such as in-hospital mortality or cost. Once the coefficients of the model are set, the model is then used to calculate an expected value of the particular outcome variable for any new patient or set of patients. These models are often used to compare the performance of hospitals. Perhaps the best-known example is the Medicare mortality reports that were developed by the US Health Care Financing Administration. Many local community hospital rating projects have also used this methodology to derive “quality” scores for “report cards” on their providers.
A disadvantage of using a severity model derived from statistical regression is that the coefficients reflect the medical practices of the providers included in the modeled data set during a particular period of observation. Yet, who is to say that these providers were exhibiting best practices during that time? Given the enormous variations in practices that have been observed in so many areas of medicine, older data sets are likely to reflect historical inefficiencies and perhaps suboptimal practices.

CSI Improves ICD-9-CM Coding

Although CSI scoring starts with ICD-9-CM codes, the system is not strictly dependent on the coding of the original medical record abstract. This is because the CSI system uses automatic error-checking routines to spot coding errors of two kinds: those in which a diagnosis was assigned in the absence of sufficient evidence in the chart to document that it actually existed or was correct, and those in which a diagnosis was present, but was not assigned. For example, if insufficient data are found in a chart to substantiate a particular diagnosis, that diagnosis receives a severity rating of 0. It is then natural to ask if the diagnosis is correct or if the medical record is fully documented. Similarly, if abnormal signs and symptoms are recorded in the chart but are not prompted by the severity criteria associated with any of the assigned diagnoses, it is natural to ask whether an uncoded condition may be present that should be added to the diagnosis list. For example, if fever, high white blood cell count, and infiltrates in the lungs are recorded in the chart but are not asked for by the software during CSI scoring, perhaps there is a missing diagnosis corresponding to a disease process such as a lung infection or pneumonia. Thus, by matching signs and symptoms to diseases, the CSI software system helps validate and improve the ICD-9-CM coding process.
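The two coding checks described above can be sketched as follows. The criteria sets here are tiny hypothetical stand-ins for the real CSI criteria sets.

```python
# Sketch of the two automatic coding checks: (1) a coded diagnosis with no
# supporting abnormal findings, and (2) abnormal findings not prompted by any
# coded diagnosis (a possible missing diagnosis). Criteria are invented.

CRITERIA = {
    "pneumonia": {"fever", "high_wbc", "lung_infiltrates"},
    "hypertension": {"high_bp"},
}

def coding_flags(diagnoses: list, abnormal_findings: set) -> dict:
    # Check 1: diagnoses with no abnormal findings in their criteria set
    # (these would receive a severity rating of 0).
    unsupported = [dx for dx in diagnoses
                   if not (CRITERIA[dx] & abnormal_findings)]
    # Check 2: abnormal findings not prompted by any assigned diagnosis.
    prompted = set().union(*(CRITERIA[dx] for dx in diagnoses)) if diagnoses else set()
    unexplained = abnormal_findings - prompted
    return {"unsupported_dx": unsupported, "unexplained_findings": unexplained}

coding_flags(["hypertension"], {"fever", "high_wbc", "lung_infiltrates"})
# hypertension is unsupported; the three findings suggest a missing diagnosis
# such as pneumonia, as in the example above
```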
CSI also designates whether a diagnosis is present on admission to the facility; hence, one can identify complications, such as infections, that occur after admission and can determine how severe they are by their disease-specific severity score. ISIS’s research has found that patients who are sicker on admission are more likely to get infections and other complications during the hospital stay, and also are more likely to have their comorbidities worsen after admission.

CSI Predicts Outcomes

Thomas and Ashcraft2 used the R2 statistic to measure how well various severity measures explained variations in estimated costs among patients hospitalized for each of eleven conditions. Six severity methods were evaluated: AIM (Acuity Index Method), APACHE II, the CSI admission and maximum scores, Disease Staging, the MedisGroups admission and morbidity ratings, and PMC (Patient Management Categories). Analyses were performed twice, once on all cases and once with outliers eliminated. Two forms of models were considered: first, assuming simple linear relationships between severity and costs (therefore entering a single, continuous variable into the model); and second, allowing for nonlinear relationships (modeled with categorical risk variables). Modeling was performed separately within each of eleven conditions.

Table 1 shows the R2 values from the nonlinear models used to predict cost, averaged across the eleven conditions from the Thomas and Ashcraft study. The performance of the severity measures varied depending on whether outliers were included in the analysis and whether R2 was calculated from the model fit to the development data set, or whether a validation R2 was estimated. The CSI software system earned the highest R2 when all cases were analyzed as well as when outliers were eliminated. Thus, CSI was the best predictor of cost in this study.
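For reference, an R2 of the kind reported in Table 1 measures the share of outcome (here, cost) variance explained by a model's predictions; a development R2 is computed on the data used to fit the model, while a validation R2 is computed on held-out cases. A minimal sketch with made-up numbers:

```python
# Sketch of the R^2 (coefficient of determination) computation.
# The cost figures below are invented for illustration.

def r_squared(actual: list, predicted: list) -> float:
    """1 - (residual sum of squares / total sum of squares)."""
    mean = sum(actual) / len(actual)
    ss_tot = sum((y - mean) ** 2 for y in actual)
    ss_res = sum((y - p) ** 2 for y, p in zip(actual, predicted))
    return 1 - ss_res / ss_tot

costs     = [4_000.0, 8_000.0, 12_000.0, 16_000.0]   # hypothetical patient costs
predicted = [5_000.0, 7_000.0, 11_000.0, 17_000.0]   # hypothetical model output
r_squared(costs, predicted)   # 0.95
```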

Using a sample of approximately 76,000 cases abstracted from 25 hospitals in New Jersey, Averill and colleagues3 examined the relationship between CSI severity scores and hospital costs. Results were reported for 76 DRGs, selected based on preliminary analyses that identified DRGs in which severity affected patient costs. Only inlier patients (i.e., those without extreme outcome values) were analyzed. For 34 DRGs, the R2 for the CSI severity scores within the DRGs was between 0.10 and 0.19; for 25 DRGs, it was between 0.20 and 0.29; and for 17 DRGs, R2 was greater than or equal to 0.30. Thus, CSI explained much of the variation within DRGs.

In summary, the CSI system behaves as expected: sicker patients have longer lengths of stay, are more costly, have higher death rates, have more complications, and have more visits. CSI takes into consideration all relevant patient signs and symptoms for every disease.

Table 1: R2 for Predicting Cost by Risk-Adjustment System

[The numeric entries of this table were not recoverable from the source. The original reported development R2 (all cases, N = 1,714) and validation R2 (N = 1,379) for each severity system, including Disease Staging, the MedisGroups admission and morbidity scores, and the CSI admission and full-stay (maximum) scores.]

Source: Thomas and Ashcraft2

Alemi and colleagues5 found CSI severity scores to be the statistically significant best predictor of mortality among patients with acute myocardial infarction when compared to six other risk adjustment systems including APACHE, MedisGroups, and Patient Management Categories.

Hence, the CSI software system has had extensive validation by independent researchers, who have published their conclusions in the professional literature. In all the comparative studies reported in the literature, the CSI severity scores predicted cost, LOS, and mortality better than the other measures examined.2,3,5

ISIS’s studies also have found CSI severity scores to be statistically significant predictors of a variety of outcomes for many patient populations. Regression models show that including CSI severity scores typically increases the explanatory power of ICD-9-CM and/or DRG codes alone by 30% to 70%. Examples for several outcomes follow:

Mortality: CSI severity scores have been compared to APACHE III in a small separate study of 14 ICU patients who died. CSI severity scores described which patients would die better than the APACHE III ICU probability-of-death score.
In a study of 32 pediatric case-mix groups, the ability of CSI severity scores to predict mortality was compared to PRISM III in the 9 groups with more than 10 deaths.7,8 Sample sizes ranged from 140 to 646, with most case-mix groups larger than 250. Data were collected for each system during the first 24 hours of the hospital stay. Both the CSI score and the PRISM score achieved statistical significance in all groups; however, the c statistic was higher for CSI than for PRISM in all groups. A summary of the ability of CSI severity scores and PRISM scores to predict death is presented in Table 2; the first c statistic reported is for PRISM, the second for CSI.

Table 2: CSI vs. PRISM to Predict Death (c statistic: PRISM → CSI)

  • Circulatory Disorders: c = .77 → .93
  • Other Cardiac Procedures: c = .70 → .83
  • (condition name not recoverable): c = .87 → .96
  • Other Injury: c = .92 → .99
  • (condition name not recoverable): c = .98 → .99
  • Respiratory Patients on Ventilators: c = .77 → .80
  • Major Cardiac Procedures: c = .68 → .90
  • Trauma – OR: c = .93 → .97
  • Neurologic Infection including Meningitis: c = .87 → .89
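The c statistic reported in Table 2 can be read as the probability that a randomly chosen patient who died received a higher severity score than a randomly chosen survivor, with ties counted as one half. A minimal sketch with invented scores:

```python
# Sketch of the c statistic (concordance index) for a binary outcome.
# The severity scores below are invented for illustration.

def c_statistic(scores_died: list, scores_survived: list) -> float:
    """Fraction of (died, survived) pairs ranked correctly; ties count 0.5."""
    pairs = 0
    concordant = 0.0
    for d in scores_died:
        for s in scores_survived:
            pairs += 1
            if d > s:
                concordant += 1.0
            elif d == s:
                concordant += 0.5
    return concordant / pairs

died     = [9, 7, 8]        # hypothetical severity scores of patients who died
survived = [3, 5, 7, 2]     # hypothetical scores of survivors
c_statistic(died, survived) # 11.5 of 12 pairs concordant
```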

Cost: The full stay (maximum) CSI severity score received the highest R2 when compared to seven other risk-adjustment systems to predict cost in a vendor-independent study of over 1,300 patients by Thomas and Ashcraft.2 See Table 1.
In a study of 32 case-mix pediatric groups, the ability of CSI severity scores to predict cost was compared to PRISM III.7,8,9  CSI severity scores were better predictors of cost in 31 of the 32 case-mix groups. A summary of the ability of CSI severity scores compared to that of PRISM to predict cost is presented in Table 3. Deaths, transfers, and extreme outliers have been removed. The first R2 value reported is for PRISM, the second for the CSI software system.

Table 3: CSI vs. PRISM to Predict Cost (R2: PRISM → CSI)

  • Allergy and Poisoning: R2 = .05 → .25
  • Metabolic Disorders: R2 = .21 → .27
  • Appendectomy, Hernia, Stoma: R2 = .003 → .25
  • Neurologic infections, meningitis: R2 = .46 → .52
  • (condition name not recoverable): R2 = .27 → .30
  • Neck and Back Procedures: R2 = .08 → .21
  • (condition name not recoverable): R2 = .30 → .39
  • Neonates – Medical: R2 = .23 → .41
  • (condition name not recoverable): R2 = .30 → .73
  • Other Cardiac Procedures: R2 = .41 → .48
  • Circulatory Disorders: R2 = .39 → .53
  • Other Injury: R2 = .19 → .42
  • (condition name not recoverable): R2 = .23 → .43
  • Other Musculoskeletal Procedures: R2 = .10 → .11
  • Connective Tissue and Back: R2 = .07 → .08
  • Otitis / Upper respiratory infection: R2 = .02 → .15
  • (condition name not recoverable): R2 = .33 → .52
  • (condition name not recoverable): R2 = .10 → .26
  • (condition name not recoverable): R2 = .15 → .34
  • Red Blood Cell Disease: R2 = .31 → .51
  • (condition name not recoverable): R2 = .12 → .18
  • Respiratory Patients on Ventilator: R2 = .15 → .23
  • Kidney Diseases: R2 = .46 → .52
  • Seizures, Concussion: R2 = .16 → .27
  • (condition name not recoverable): R2 = .24 → .36
  • (condition name not recoverable): R2 = .26 → .54
  • Major Cardiac Procedures: R2 = .26 → .31
  • (condition name not recoverable): R2 = .12 → .39
  • Major GI Procedures: R2 = .31 → .44
  • Trauma – OR: R2 = .34 → .32
  • Medical GI Diseases: R2 = .20 → .33
  • Viral Illness and Pyrexia: R2 = .14 → .25

Length of Stay: In a study of 32 case-mix pediatric groups, the ability of CSI severity scores to predict length of stay was compared to that of PRISM III.7,8,9  CSI severity scores were better predictors of length of stay in all of the 32 case-mix groups. A summary of the ability of CSI severity scores compared to that of PRISM to predict length of stay is presented in Table 4. Deaths, transfers, and extreme outliers have been removed. The first R2 value reported is for PRISM, the second for the CSI software system.

Table 4: CSI vs. PRISM to Predict LOS (R2: PRISM → CSI)

  • Allergy and Poisoning: R2 = .03 → .15
  • Metabolic Disorders: R2 = .21 → .30
  • (condition name not recoverable): R2 = .03 → .28
  • NS Infections, Meningitis: R2 = .30 → .55
  • (condition name not recoverable): R2 = .16 → .30
  • Neck and Back Procedures: R2 = .04 → .36
  • (condition name not recoverable): R2 = .25 → .43
  • Neonates – Medical: R2 = .15 → .39
  • (condition name not recoverable): R2 = .32 → .67
  • Other Cardiac Procedures: R2 = .33 → .51
  • Circulatory Disorders: R2 = .34 → .45
  • Other Injury: R2 = .19 → .46
  • (condition name not recoverable): R2 = .19 → .41
  • Other Musculoskeletal Procedures: R2 = .04 → .18
  • Connective Tissue: R2 = .06 → .22
  • Otitis / URI: R2 = .03 → .13
  • (condition name not recoverable): R2 = .23 → .57
  • (condition name not recoverable): R2 = .08 → .27
  • (condition name not recoverable): R2 = .13 → .23
  • Red Blood Cell Disease: R2 = .23 → .46
  • (condition name not recoverable): R2 = .12 → .24
  • Respiratory Patients on Ventilators: R2 = .14 → .27
  • Kidney Diseases: R2 = .38 → .48
  • (condition name not recoverable): R2 = .28 → .46
  • (condition name not recoverable): R2 = .29 → .45
  • (condition name not recoverable): R2 = .32 → .35
  • Major Cardiac Procedures: R2 = .24 → .40
  • Seizures, Concussion: R2 = .13 → .27
  • Major GI Procedures: R2 = .15 → .43
  • (condition name not recoverable): R2 = .19 → .48
  • Medical GI Diseases: R2 = .17 → .43
  • Viral Illness and Pyrexia: R2 = .10 → .26

Customization of CSI

CSI software provides extensive built-in options for end-user customization of the database and interface:

  • Auxiliary Data Modules (ADMs): A CSI dataset can be extended through the creation of one or more ADMs. ADMs define additional, user-specified non-severity data sets. For PBE studies, clinical teams define data elements thought to be important to the study of the identified topic.  These elements are programmed into one or more ADMs, which are attached to each patient record. ADMs support user-defined data validation, such as table lookup, range validation, character format, etc. Look-up tables are pre-defined by the study team but users are able to add new allowable values. All data collected using ADMs are accessible for reporting and analysis.
  • User-Defined Study Groups: CSI normally collects only the clinical severity indicators specific to a patient’s diagnoses. User-defined study groups can be employed to force CSI to collect a user-specified set of clinical severity indicators for all patients in a study group.

CSI’s ability to produce information is limited only by the amount of data collected. All data collected can contribute to analyses with external statistical analysis programs.

CSI Updates

CSI is updated periodically following severity criteria reviews by clinical experts that lead to recommendations for refinements in severity measurement, based on current clinical practice. For example, prior to ISIS undertaking a 13-hospital study on coronary artery bypass graft (CABG) surgery, a team of cardiovascular surgeons, cardiologists, and cardiac nurses reviewed the CSI system criteria for coronary atherosclerosis, hypertension, congestive heart failure, and other common heart conditions and recommended refinements to the system that included adding questions for percent ejection fraction, percent stenosis of arterial vessels, and lab values of blood urea nitrogen and creatinine, as well as improving the way CSI accounts for EKG abnormalities. All edits to CSI are tested at ISIS following established software testing regimens. The CSI system is then distributed to licensees.
ISIS has incorporated into the CSI software system the unique presentation of physiologic signs and symptoms in children. This extension was tested on almost 18,000 patients for 18 months in ten pediatric institutions across the United States, with funding from the Agency for Health Care Policy and Research. ISIS received significant input from clinical representatives (pediatric intensivists, pediatricians, and pediatric nurses) from these ten institutions as it developed and tested the pediatric severity criteria. This meticulous process was modeled on our previous development of the adult versions of CSI. Hence, CSI has had and will continue to have extensive external validation.

Implementation of CSI

Data Abstraction for CSI. CSI requires data abstraction and data entry. The minimum number of data elements required for the system to function is approximately 10. These elements include demographic identifiers such as medical record number, account number, date of birth, gender, etc. When disease-specific criteria are added, a typical number of data elements might be 50-60 per record. There is no set maximum number of data elements collected by CSI. A large Practice-Based Evidence (PBE) study could include 200 or more data elements.
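The minimum record described above (roughly 10 demographic and identifying elements) can be pictured as a simple structure. The field names below are illustrative assumptions drawn from the examples in the text, not CSI's actual schema.

```python
# Hypothetical sketch of the ~10 minimum data elements the text describes.
# Field names are illustrative, not CSI's actual data dictionary.
from dataclasses import dataclass
from datetime import date

@dataclass
class MinimalPatientRecord:
    medical_record_number: str
    account_number: str
    date_of_birth: date
    gender: str
    admission_date: date
    discharge_date: date
    principal_diagnosis: str   # ICD-9-CM code; selects severity criteria
    care_setting: str          # e.g. "acute", "rehabilitation", "long-term"
    age_at_admission: int = 0

    def __post_init__(self):
        # Age and diagnosis together determine which age- and
        # disease-specific criteria set applies.
        self.age_at_admission = (
            self.admission_date.year - self.date_of_birth.year
        )

record = MinimalPatientRecord(
    medical_record_number="MRN-001",
    account_number="ACCT-42",
    date_of_birth=date(1950, 3, 14),
    gender="F",
    admission_date=date(2002, 6, 1),
    discharge_date=date(2002, 6, 9),
    principal_diagnosis="486",   # pneumonia, organism unspecified
    care_setting="acute",
)
assert record.age_at_admission == 52
```

Disease-specific severity criteria and any ADM elements would then extend this core record toward the 50-60 elements typical of an inpatient abstraction, or 200+ for a large PBE study.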

A minimal patient medical record (short length of stay, no or minimal additional data) may take only 5-10 minutes to abstract for three CSI inpatient reviews: admission, full stay (maximum), and discharge. A typical inpatient chart with some non-severity data in an Auxiliary Data Module (ADM) may take about 30 minutes to abstract for the three CSI reviews. The time required to abstract a large chart depends on the number of non-severity data elements programmed into an ADM. A typical ambulatory visit can be coded using the ambulatory component of CSI in about five minutes or less.

Qualifications of CSI Data Collectors. A wide variety of individuals have successfully collected CSI data. Certainly, familiarity with medical terminology and knowledge of the layout of an institution’s medical records are important. Medical records technicians and medical secretaries have collected CSI data successfully for simple projects. For complex projects (for example, the PBE projects) that involve detailed process steps such as ventilator management, intensive-care unit information, detailed drug data, etc., research nurses or others familiar with treatment terminology and processes may be more appropriate.

It is beneficial to have computer technical support available from personnel at the data collection site to assist data collectors with technical issues such as installing the CSI software system on computer systems, downloading available electronic data into the CSI, and merging CSI data with cost or other outcome data.

Standard Reports

  • Individual patient clinical summary (depicts all signs and symptoms recorded for each CSI review timeframe)
  • Auxiliary data module (ADM) summary

Standard reports can be produced by the data collector or CSI system coordinator at any time following the input of CSI data. All reports can be produced at the data collection computer. Telephone support is available to licensees who need assistance in

1. Horn SD, editor. Clinical Practice Improvement Methodology: Implementation and Evaluation. Faulkner and Gray; 1997.
2. Thomas TJ, Ashcraft MLF. Measuring severity of illness: comparison of severity and severity systems in terms of ability to explain variation in costs. Inquiry 1991;28:39-55.
3. Averill R, McGuire T, Manning B, et al. A study of the relationship between severity of illness and hospital cost in New Jersey hospitals. Health Services Research 27(5):587-606.
4. Iezzoni L, editor. Risk Adjustment for Measuring Healthcare Outcomes, 2nd ed. Health Administration Press; 1997.
5. Alemi F, Rice J, Hankins R. Predicting in-hospital survival of myocardial infarction: a comprehensive study of various severity measures. Medical Care 1990;28:762-74.
6. Nunnally J. Psychometric Theory. New York: McGraw-Hill; 1967.
7. Horn SD, Torres A, Willson D, Dean JM, Gassaway J, Smout R. Development of a pediatric age- and disease-specific severity measure. J Pediatrics 2002;141(4):496-503.
8. Torres A, Horn SD, Willson D, Dean JM, Gassaway J, Smout R. A comparison of the Pediatric Comprehensive Severity Index with the Pediatric Risk of Mortality III score to predict mortality, length of stay, and cost. (submitted for publication)
9. Horn S, et al. Interim Report, dated July 30, 1998. Pediatric Severity of Illness Project, United States Department of Health and Human Services, Public Health Service, Agency for Health Care Policy and Research, Contract No. 290-95-0042, dated October 4, 1995, with International Severity Information Systems, Inc.


