Volume 7, No. 2 • Summer 1992

Risk Analysis Model Targets Anesthesia Incidents

M. Elisabeth Paté-Cornell, Ph.D.

Study of Accidents Identifies Critical Factors, Should Improve Safety

In a study currently under way at Stanford University, records and analyses of past anesthesia critical incidents (and the opinions of experts) are used to identify and assess the root causes of anesthesia problems. Most of these problems (but not all of them) come from human errors, some of which can be directly linked to management factors. The objective of this work is to identify risk reduction measures and to set priorities by assessing the benefits of each. To do this, the study started with the development and extension of a probabilistic risk analysis (PRA) model for anesthesia patients.

Risk analysis models are based on the identification of accident sequences and on the computation of their probabilities based on the probabilities of the basic events, including both component failures and operator errors. The PRA techniques were developed mostly for engineering systems (originally, nuclear power plants). The objectives of such analyses are to identify and correct the 'weak spots' and to allocate scarce resources for maximal risk reduction. Because they were developed by engineers, these studies tend to focus on the reliability of the hardware and to point primarily to technical measures of risk management; for example, adding redundancy behind a critical component or reinforcing a key element (structural or procedural). Yet, the probabilities of failure of the components are often influenced by human decisions and actions that take place during the different phases of the system's life (e.g., design, construction, or operation). Furthermore, these decisions and actions are often grounded, in turn, in the organization's structure and culture, which determine: (1) the information flow, (2) the rewards and incentives, and (3) the resource constraints under which the 'actors' must operate. These organizational factors can therefore become the root causes of potential accidents. As part of a risk management strategy, it is sometimes more effective to modify these organizational factors than to modify the hardware.
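To make the mechanics concrete, the following minimal sketch (in Python) shows the kind of computation a PRA performs: each accident sequence's probability is the product of the probabilities of its basic events, and the sequence probabilities are then aggregated into an overall failure probability. All event names and numbers here are hypothetical illustrations, not values from any actual analysis, and independence of events and sequences is assumed for simplicity.

```python
# Minimal PRA sketch: accident sequences built from basic events.
# All events and probabilities are hypothetical illustrations.
from math import prod

# Assumed marginal probabilities of basic events (component failures
# or operator errors).
basic_event_prob = {
    "valve_fails": 1e-3,
    "alarm_missed": 5e-2,
    "backup_unavailable": 1e-2,
    "operator_wrong_action": 2e-2,
}

# Each accident sequence occurs when all of its basic events occur.
accident_sequences = [
    ["valve_fails", "alarm_missed"],
    ["valve_fails", "backup_unavailable", "operator_wrong_action"],
]

# P(sequence) = product of basic-event probabilities (independence assumed).
seq_probs = [prod(basic_event_prob[e] for e in seq) for seq in accident_sequences]

# P(accident) = 1 - product of (1 - P(sequence)), treating sequences as
# independent; for rare events this is close to the simple sum.
p_accident = 1.0 - prod(1.0 - p for p in seq_probs)
print(f"P(accident) ~ {p_accident:.2e}")
```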

‘Management’ Factors Into System Failures

Management factors are only implicitly included in classical risk analysis models. A few years ago, Stanford researchers began working toward an extension of PRA models to include management issues. The approach is to start with an analysis of the physical system because, in the final analysis, for an accident to occur, one of the various failure modes must occur. The first step thus is a probabilistic risk analysis of how likely such failures might be. Then, for each of the basic events of each accident sequence (i.e., failures of components or operator errors), the human decisions and actions which influence its probability are identified. Finally, researchers look for the organizational roots of each decision and action: what features of the 'culture' of the situation might have influenced each event.

In earlier research, this approach was applied to the case of offshore oil drilling platforms. That study found that about 95% of the failure probability is associated with some form of reduction of the system's function by some type of error (or combination of errors) in design, construction, and/or operation. Only 5% of the risk of failure of these platforms corresponds to the classical case of huge waves damaging a well-designed, well-constructed, and well-operated platform. The study also found that an external design review by a certified verifying authority can reduce the probability of platform failure by 20% at a hundredth of the cost of built-in technical measures bringing the same benefit.

This method was also applied to the thermal protection (heat shield) system of the space shuttle. First, the potential contribution of each black tile in each location on the orbiter's surface to the overall probability of an accident was computed. The inputs of this model were linked to the maintenance operations for the tiles and to the management factors that affect the quality of the tile work. This study found that 85% of the risk to the shuttle was attributable to only 15% of the tiles. Researchers then showed that selective inspection to find the poorly bonded tiles in these most critical zones could considerably reduce the overall costs while preserving safety. They also showed that careful inspection of the insulation of the external tank and the solid rocket boosters (that could fall off and hit the tiles in the most critical zones) could also reduce considerably the overall risk of losing a shuttle due to a hole in the orbiter's skin. Key organizational factors identified in this study include the fragmentation of NASA and some procedures of personnel management that determine the quality of the maintenance work.

In these previous studies, several general observations were made. Many organizations seem to have difficulties in the management of resource constraints (e.g., time or budget pressures). Under such pressures, people tend to cut corners, which may weaken the system in a way that may or may not be understood at the time. They also have difficulties managing the tradeoff between productivity and safety, often claiming that safety comes first but rewarding only productivity.

Productivity Vs. Safety

Organizations, and particularly successful ones, also have difficulties observing signals of deterioration, recognizing that something is going wrong, communicating the bad news and following up to correct problems. They have particular difficulties in communicating uncertainties, showing a tendency to optimism and distorting the message. Under these conditions, learning under uncertainty (particularly from ‘near-misses’) becomes difficult. Finally, organizations do not seem to perceive “structural” weaknesses that may cause organizational as well as technical failures: for example, dangerous couplings among components (common causes of failure), and lack of redundancy in the organization. Anesthesia delivery systems and anesthesia practice share some of these problems.

This work on management and system reliability and its application to the safety of offshore oil drilling platforms was published in Science in 1990 and attracted the attention of David Gaba, M.D. (Stanford Medical School). It appeared from conversations at that time that some of the problems which had been identified and analyzed for engineering systems existed in the field of anesthesia. A research proposal was submitted to the Anesthesia Patient Safety Foundation to do two things: (1) construct and quantify a probabilistic risk analysis model for the risk of patient death or brain damage associated with anesthesia practice, and (2) identify organizational factors that influence the input into this risk analysis model. Whenever possible, the link between these factors and the corresponding variables will be quantified. The research started January 1, 1992. After four months, the model has been formulated, some coarse estimates of the contribution of each type of accident to the overall risk have been obtained, and some of the organizational factors that may affect the probability of each accident sequence have been identified.

This work has been based so far on: 1.) a review of the literature, 2.) a set of interviews with anesthesiologists, surgeons, operating room nurses, and lawyers, and 3.) the data gathered by Dr. William Runciman (Adelaide, Australia) on about 2,000 cases of incidents in anesthesia. The structure of this model was based in large part on conversations with Drs. David Gaba and Steven Howard of the Stanford Medical School. The goal of this study is to improve the management of anesthesia delivery in order to decrease the probability of accidents leading to death or brain damage. This research focuses on healthy patients in large urban western hospitals. The background risk is already low, less than one accident in 10,000 operations, and this research effort is looking for ways to make it still lower. Focusing on the anesthetist, the study's objective is to set priorities among specific safety measures based on how much each could decrease the corresponding risk factors. The final analysis can thus be represented as an influence diagram showing the effects of management on the accident rate in anesthesia (Figure 1).

This risk analysis model is a causal, 'forward-looking' analysis of the sequences of events that may lead to an accident. The key elements of this model are:

1. The initiating events (e.g., a tubing disconnect at the endotracheal tube connector).

2. The signals that appear following an initiator, either from monitoring devices or from the patient.

3. The observation of signals by the anesthetist.

4. The correct diagnosis of the problem.

5. The corrective actions that are taken at this stage.

6. The effectiveness of corrective actions.

Each of these steps involves a delay which is treated as a random variable. The sum of these random variables is the time elapsed between the problem’s occurrence and possible corrective action. This total time and the appropriateness (effectiveness) of the corrective actions determine the patient’s state at the end of the episode. Figure 2 is a flow diagram showing the sequence of events considered in this analysis.
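As an illustration of this timing model, the sketch below draws each per-step delay from a distribution, sums them, and estimates by Monte Carlo simulation the probability that the total response time exceeds the time the patient can tolerate. The exponential distributions, their means, and the tolerance threshold are hypothetical placeholders, not figures from the study.

```python
# Monte Carlo sketch of the response-time model: total delay is the
# sum of per-step random delays. All distributions are hypothetical.
import random

def total_response_time():
    """Sum of per-step delays (in minutes), each a random variable."""
    signal_onset = random.expovariate(1 / 0.5)  # signal appears
    observation = random.expovariate(1 / 1.0)   # anesthetist notices it
    diagnosis = random.expovariate(1 / 2.0)     # problem is identified
    correction = random.expovariate(1 / 1.0)    # corrective action done
    return signal_onset + observation + diagnosis + correction

TOLERANCE_MIN = 6.0  # hypothetical time before severe harm
N = 100_000
late = sum(total_response_time() > TOLERANCE_MIN for _ in range(N))
print(f"P(correction comes too late) ~ {late / N:.3f}")
```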

The first task is to divide the initiating events into a set of exhaustive, mutually exclusive classes of scenarios. This classification of accident initiators must be specific enough to make the analysis meaningful, yet simple and inclusive enough to make it manageable. We identified five broad categories of accident initiators:

1. Failure of equipment (mechanical, electrical, chemical, etc.), mostly involving the delivery of oxygen to the patient. This class includes major events, such as a massive mechanical failure, and minor ones, such as a twisted tube that impedes the gas flow. It also includes delivery of a hypoxic gas mixture.

2. Failure of the breathing system, including the following:

* Esophageal intubation

* Breathing tube disconnect, often due to moving the patient

* Overpressurization of the breathing system

* Nonventilation

* Rebreathing of carbon dioxide

The boundaries between these first two categories (failure of equipment and failure of the breathing system) may become fuzzy. The first one involves clearly technical failures; the second includes mostly human errors on the scene.

Figure 1: An influence diagram for the analysis of the effects of management factors on the risk of accident in a specified system.

3. Drug errors

a. Incorrect dose (mostly overdose)

* Inhaled anesthetic (OD)

* Vasodilator (OD)

* Vasopressor (OD)

The subjectivity of the concept of overdose is understood and the opinions of experts are being used to identify situations that qualify as accident initiators.

b. Syringe or ampule swap

* Muscle relaxant reversal (instead of muscle relaxant)

* Muscle relaxant instead of reversal.

This type of error can be caused by an outright failure to read the label or a confusion between two labels that look alike, and such errors can be rooted in the stress and rush inherent to emergencies.

c. Allergic reaction

This one may or may not be predictable given the patient’s history and the possibility of properly gathering that information when it is available.

4. De novo events

Events that occur by chance in the operating room include the following:

* Certain cardiac arrests

* Malignant hyperthermia

* Asthma attack

* Pneumothorax.

They are rare and rather specific, present clear signals, and, in general, anesthetists have been warned about them and trained to treat them even though they seldom occur (e.g., malignant hyperthermia).

5. Hemorrhages

This is a borderline case as far as anesthesia is concerned because it is generally caused by the surgeon. Yet, it is the anesthesiologist’s responsibility to keep the patient alive and to provide the appropriate blood supply while the problem is being corrected. This implies, among other things, proper typing and crossing of the blood supply.

Based on the opinions of the involved experts, a coarse assessment of the base rates of each of these types of initiators given that an incident occurs is:

1. Failure of the equipment: ≈ 0%

2. Failure of the breathing system: ≈ 40%

3. Drug errors: ≈ 45%

4. De novo events: ≈ 5%

5. Hemorrhages: ≈ 10%

Clearly, these base rates depend on the type of surgery (for example, E.N.T. surgery is more likely to cause a hemorrhage than a kneecap operation); but as a first cut, they indicate that the dominant initiators are drug errors and failure of the breathing system.
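As a sketch of how such base rates feed the risk computation, the fragment below combines them with purely hypothetical conditional probabilities of a severe outcome for each class, to rank the classes by their contribution to the overall risk. The severity figures are invented placeholders, not findings of the study.

```python
# Combine base rates P(class | incident) with hypothetical severity
# P(severe outcome | class) to rank initiator classes by risk share.
base_rate = {  # from the coarse assessment above
    "equipment failure": 0.00,
    "breathing system": 0.40,
    "drug error": 0.45,
    "de novo event": 0.05,
    "hemorrhage": 0.10,
}
p_severe_given_class = {  # invented placeholders, not study results
    "equipment failure": 0.02,
    "breathing system": 0.05,
    "drug error": 0.03,
    "de novo event": 0.10,
    "hemorrhage": 0.08,
}

contrib = {c: base_rate[c] * p_severe_given_class[c] for c in base_rate}
total = sum(contrib.values())
for c, v in sorted(contrib.items(), key=lambda kv: -kv[1]):
    print(f"{c:18s} share of severe-outcome risk: {v / total:.0%}")
```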

Following the occurrence of an initiating event, the patient reacts to the incident and the anesthetist generally observes the signals and corrects the problem. Two systems thus evolve during the actual incident in parallel (and in conjunction):

1. The anesthesia system (anesthetist, helpers, and equipment).

2. The patient.

These two parallel evolutions are modeled using a dynamic Markov model characterized by the probability of state transitions per time unit. For example, in the case of a tube disconnect incident, the patient's states are defined as follows:

* Healthy

* Hypoxemic

* Arrhythmia (or cardiac arrest)

* Brain damage or death.

The probability of transition from one patient state to another (i.e., the speed of deterioration) depends on the phase in which the anesthesia system is operating. For example, for a tube disconnect initiator:

* Phase 1, before the disconnection: no problem.

* Phase 2, between the disconnection and the problem correction: deterioration due to lack of oxygen.

* Phase 3, after correction (if it is done in time): recovery.

Figure 2: Structure of the mathematical model for the analysis of anesthesia accidents: general representation of an accident sequence.

 

Figure 3 shows the parallel evolution of the patient’s state and the anesthesia system.
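A minimal simulation sketch of this phase-dependent Markov model follows: the patient moves among the four states listed above, under a 'deterioration' transition matrix while the disconnect is uncorrected (Phase 2) and a 'recovery' matrix after correction (Phase 3). All transition probabilities, the time step, and the correction time are hypothetical placeholders, not estimates from the study.

```python
# Phase-dependent Markov sketch for a tube-disconnect episode.
# All transition probabilities are hypothetical placeholders.
import random

DETERIORATE = {  # Phase 2: per-minute transitions while uncorrected
    "healthy": {"healthy": 0.60, "hypoxemic": 0.40},
    "hypoxemic": {"hypoxemic": 0.70, "arrhythmia": 0.30},
    "arrhythmia": {"arrhythmia": 0.80, "damage_or_death": 0.20},
    "damage_or_death": {"damage_or_death": 1.0},  # absorbing state
}
RECOVER = {  # Phase 3: per-minute transitions after correction
    "healthy": {"healthy": 1.0},
    "hypoxemic": {"healthy": 0.50, "hypoxemic": 0.50},
    "arrhythmia": {"hypoxemic": 0.40, "arrhythmia": 0.60},
    "damage_or_death": {"damage_or_death": 1.0},
}

def step(state, matrix):
    """Draw the next patient state from the current phase's matrix."""
    nxt, probs = zip(*matrix[state].items())
    return random.choices(nxt, weights=probs)[0]

def episode(correction_minute=4, horizon=15):
    """Simulate one episode; return the patient's final state."""
    state = "healthy"
    for minute in range(horizon):
        matrix = DETERIORATE if minute < correction_minute else RECOVER
        state = step(state, matrix)
    return state

runs = [episode() for _ in range(50_000)]
p_bad = runs.count("damage_or_death") / len(runs)
print(f"P(brain damage or death | disconnect) ~ {p_bad:.3f}")
```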

The data being used to quantify these probabilities, rates, etc. come basically from two sources: statistics and expert opinions. The main statistical source is the database gathered by Dr. Runciman. For about 2,000 incidents, he and his colleagues attempted to systematically record the causes and the outcomes of the problems. This data set is biased because it relies on voluntary reports; in particular, hospitals were given the option not to report if they feared legal problems. Other statistics can be used to correct the information of this database.

In addition to statistics, experts' opinions were gathered from a variety of sources including anesthetists, surgeons, operating room nurses, and lawyers. In the future, researchers will also talk to insurers and to patients or consumer groups. A data sheet has been developed that represents, for each initiating event, the dynamics of the incident's evolution (see Figure 4). In addition, the experts helped develop tables of the mean times elapsed in each interval between relevant events of accident sequences.

Calibrated Estimates of Risk

As in other studies of the same kind, statistical information and experts' opinions are used in parallel. Expert opinions and the analytical model allow researchers to compute the probability of each accident or incident scenario. These scenarios can then be grouped, and coarse estimates of the probabilities of classes of scenarios can be computed. The statistics then provide a benchmark against which the validity of these estimates can be checked. If there is a significant discrepancy, researchers can then go back to the experts and re-calibrate some of the estimates. At this time, researchers are in the process of developing this quantitative analysis.
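The sketch below illustrates this calibration loop under invented numbers: class probabilities aggregated from expert-quantified scenarios are compared with observed frequencies from the incident statistics, and any class whose discrepancy exceeds a chosen tolerance is flagged for re-elicitation. None of the figures are from the actual study.

```python
# Calibration sketch: compare model-derived class probabilities with
# observed incident frequencies. All numbers are invented.
model_estimate = {  # aggregated from expert-quantified scenarios
    "breathing system": 0.35, "drug error": 0.50,
    "de novo event": 0.04, "hemorrhage": 0.11,
}
observed_freq = {  # from incident-report statistics (hypothetical)
    "breathing system": 0.40, "drug error": 0.45,
    "de novo event": 0.05, "hemorrhage": 0.10,
}

TOLERANCE = 0.04  # arbitrary discrepancy threshold for re-elicitation
for cls, est in model_estimate.items():
    gap = est - observed_freq[cls]
    flag = "  <- re-calibrate with experts" if abs(gap) > TOLERANCE else ""
    print(f"{cls:16s} model {est:.2f} vs. data {observed_freq[cls]:.2f} "
          f"(gap {gap:+.2f}){flag}")
```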

At the same time, they have gathered from experts a sense of the organizational factors that may affect some of the probabilities of this model. It is understood that these factors may or may not be correctable in the near future, for reasons that range from the practical to the economic (for both the patient and the medical profession). The benefits of addressing these issues, however, may be substantial.

1. The work schedule can be a source of problems. The stretch of time during which surgeons, anesthetists and residents are on call or on duty can be potentially very long. In some states, residents can be on their feet for 24 hours in a row (other states have set stricter time limits). Although, in general, there are times to rest during that period, it is possible that a resident will be called to an emergency after 22 hours of work. Fatigue, sleep deprivation, and stress at that stage cannot improve the ability of the anesthetist to observe signals of problems, find their causes, and decide on appropriate corrective measures. The problem is compounded by the non-specificity of many of these signals, and also by the fact that an emergency at the end of a long work period often adds stress to the background fatigue.

Unclear Decisions

2. The decision to operate, and whether or not it is wise to proceed as scheduled, is not always clear-cut. Information errors may occur, and the surgeon or the anesthetist may not have critical data at the time of the decision. Incentives and production pressures may also tip the scale in borderline cases. If and when the surgeon and the anesthetist have different opinions, there are not always clear mechanisms to resolve the conflict, and the decision may depend mostly on personalities and power. The surgeon in this circumstance may prevail.

3. The payment and compensation structure may compound this problem because it shapes the basic incentive system. In situations where compensation is based on the number and the type of cases, production pressures may be more severe than in cases where personnel are paid flat rates per unit time.

Who Is “Captain”?

4. Problems of conflict of authority may also exist in the operating room, where a clear hierarchy may no longer exist. The surgeon was once the 'captain of the ship.' For legal reasons, he or she can no longer be made responsible for the anesthesiologist's decisions. (Furthermore, surgeons are not necessarily trained and competent in anesthesiology.) From this duality of authority, conflicts have surfaced which, in at least one case, led to the death of a patient while both sides were arguing about priorities. The smooth resolution of problems thus rests on the compatibility of personalities among the people involved. The culture of the operating room, however, has sometimes promoted the emergence of authoritarian and arrogant characters with ineffective interpersonal skills.

5. The screening and selection of anesthesiologists according to personalities, skills, reasoning capabilities and ability to handle crisis situations is thus critical. The problem is that there is no formal screening beyond the entrance to medical school. Informally, the ‘probably incompetent’ can be gently steered toward other fields of specialization. But that, too, is difficult since competence and understanding of the job may take a while to reveal themselves.

Figure 4: Structure of the data gathering process used in this study. (After Gaba and Howard)

6. The training of the anesthetists, for example in the use of a simulator, may be critical to ensure that they acquire 1.) the competence and skills in their domain and 2.) the ability to react fast under pressure in a crisis mode.

7. The presence of a backup for the anesthetist, who in a crisis situation can provide both assistance and a fresh look at an urgent diagnosis, may therefore also be critical. Although this backup is generally available in large hospitals, anesthesiologists and, in particular, residents may be reluctant to call for help for two reasons: overconfidence and fear of being judged incompetent. In other words, it may be those who need help the most who are least likely to ask for it. In freestanding surgery centers, this backup help may simply not be available.

8. Detection of performance decline at mid-career and at the end of a doctor's career is an extremely difficult issue. There is no formal mechanism (in the absence of gross incompetence) to force someone to retire when it is clear that he or she can no longer perform safely. The system seems to handle these cases informally by assigning easier cases to such people and providing informal assistance.

Provider Impairment

Still more complicated is the issue of mid-career performance decline due to drug abuse or alcoholism. Formal mechanisms involving confrontation of the individual and forced treatment are rarely used 1.) because they are painful for everyone and 2.) because they expose the confronters to legal actions. Informal mechanisms are not very effective because detecting and identifying a problem of drug or alcohol abuse is extremely difficult short of random testing. Yet, the presence of drugs as part of the working environment makes it tempting for some to use them for recreational purposes. Careful accounting by the nurses of the drugs delivered, and attention paid to the overprescription of a particular drug, provide some clues but no guarantee that such a problem will be detected and addressed.

Recertification

9. Mid-career reevaluation or recertification has been proposed (and resisted by a number of physicians). Techniques evolve, skills may decline, and some anesthetists may no longer be up to date and able to provide patients with the benefits of recent developments. Regular recertification, similar to procedures in the airline industry, would permit verification that physicians are able to perform their functions and could give anesthetists an incentive to upgrade their technical skills.

10. Operating room procedures and environment clearly affect the safety of surgery. There is a constant tradeoff, as in many fields, between the sophistication of electronic equipment and the skills needed to use it properly. In particular, there is a tradeoff between the number of monitors and the frequency of false alarms that may distract the physician and/or cause him or her to turn off the system, with potentially disastrous effects. A more subtle problem is that physicians trained exclusively in the use of such complex equipment may lose the basic skills and fundamental understanding that one needs to operate without them.

Yet, some procedures and equipment may be helpful in crisis situations. First and simplest, crisis management protocols can help; second (and directly linked), a computerized decision support system that prevents oversights and guides the diagnosis can be useful.

Sub-Specialization

11. Options have been suggested which are desirable for some people but rejected by others as too cumbersome or expensive. First, the option to specialize in particular types of operations (e.g., cardiac or pediatric surgery) has been proposed and is already in place in some organizations; alternatively, the option to simply stay away from a particular type of case about which one feels nervous or uncomfortable may be provided. The downside includes the multiplication of specialties, with resulting cost increases and the loss of skills which may be needed in emergencies.

The second option is the possibility of forming surgeon-anesthetist teams. In a profession where egos can be large and delicate, the benefits are clear in cases of open personality conflicts that can pose a hazard to the patients. On the other hand, this proposal would create additional scheduling problems and an increase in cost.

Several management measures can therefore be envisioned to address these problems including: improvement of the mechanism of decision to operate, shorter working schedules (e.g., 12 or 16 hr. shifts), procedures for selection and screening, systematic simulator training, recertification, backup for the anesthetist, stringent monitoring of alcohol and drugs by random checks, and options of specialization and formation of surgeon-anesthetist teams. Each of these measures involves costs but may also provide substantial benefits in the form of increased patient safety.

These researchers are now in the process of linking this risk analysis model to organizational factors, by gathering the following information:

1. Which variables are affected by each factor (initiating events and probabilities of detection and correction within given time frames)?

2. By how much can one expect these parameters to vary if one modifies the management factors, and with what effect on the overall risk?

The risk analysis model thus provides a calibration tool by yielding the prior contribution of each parameter to the risk.
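A toy sensitivity sketch of this idea: given a simple baseline risk model, one can ask how much the overall risk moves when a management measure shifts one of its parameters. The model form, baseline values, and effect sizes below are all hypothetical placeholders, intended only to show the shape of the calculation.

```python
# Toy sensitivity analysis: effect of management measures on risk.
# The model and all numbers are hypothetical placeholders.

def p_severe(p_initiator, p_missed_signal, p_wrong_action):
    """Severe outcome requires an initiating event followed by either a
    missed signal or an ineffective response (independence assumed)."""
    p_no_rescue = p_missed_signal + (1 - p_missed_signal) * p_wrong_action
    return p_initiator * p_no_rescue

baseline = dict(p_initiator=1e-3, p_missed_signal=0.10, p_wrong_action=0.05)

measures = {  # hypothetical parameter shifts per management measure
    "shorter shifts": dict(baseline, p_missed_signal=0.06),
    "simulator training": dict(baseline, p_wrong_action=0.02),
    "equipment checks": dict(baseline, p_initiator=7e-4),
}

r0 = p_severe(**baseline)
for name, params in measures.items():
    delta = r0 - p_severe(**params)
    print(f"{name:18s} risk reduction ~ {delta:.2e} per operation")
```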

The problem of safety in anesthesia has often been compared to that of airline safety. Indeed, the two critical phases of takeoff and landing, with long, uneventful periods in between during which crises may nonetheless occur, make the two situations similar. The management problems may be similar as well, and one may learn from the airline industry about the benefits of screening, recertification, and monitoring for substance abuse.

A key issue is whether the medical profession is willing to police itself, or if economic and political pressures will force the responsibility on the managing institutions.

Dr. Elisabeth Paté-Cornell is Professor of Industrial Engineering and Engineering Management at Stanford University. Her specialty is risk analysis, and her recent research has focused on the linkage between organizational factors and the risk of accidents in complex technical systems.