Circulation 80,350 • Volume 21, No. 3 • Fall 2006

Challenges Ahead in Technology Training: A Report on the Training Initiative of the Committee on Technology

Michael A. Olympio, MD; Bonnie Reinke; Abe Abramovich

Introduction

The most effective method of introducing new anesthesia equipment into the operating room has not been thoroughly investigated, despite recent and dramatic increases in the complexity of these new machines. New machines have unique and subtle variations in breathing circuit design, automated checkout, volatile agent delivery, hidden piston ventilators, fresh gas delivery, and ventilation modes.1 Despite conventional pre-use instruction, with or without simulation, Dalley and colleagues recently concluded that anesthesiologists could not reliably assess their own ability to use the equipment safely in clinical practice. However, those clinicians who received additional simulation training were more likely to apply the machine’s features correctly during a simulated anesthesia emergency.2 Furthermore, Dalley learned that new designs meant to enhance patient safety can have unintended and detrimental consequences, especially when latent errors surface during abnormal (and particularly stressful) clinical situations.

Although the incidence of equipment-related critical events is relatively low, the morbidity associated with such events can be quite high. Human error is the leading contributor to equipment-related problems and is typically orders of magnitude more common than pure equipment failure, which itself is rare.3,4 The implication, of course, is that we need greater training and facility with our equipment, as recognized by leading authorities.3,5,6

The APSF Committee on Technology was challenged to consider the adequacy of pre-use instruction when Dr. Michael Cox asked the Dear SIRS column to “suggest how I might propose to our group changing our approach” to a more organized and formal instructional program.7 Cox described his perception of inadequate training and inability to troubleshoot newly installed anesthesia machines, even after an 18-month period. Although APSF representatives argued that 1) a local champion for training is essential to overall satisfaction, 2) institutional support for training must be provided, 3) manufacturers should develop educational programs that far exceed current standards, and 4) manufacturers must play a crucial role with continuing after-the-sale support, the APSF neglected to demonstrate how those elements could actually succeed, and probably overlooked the key element: participation of the clinical staff.

Concurrently, the Corporate Advisory Council of the APSF discussed and expressed industry’s frustration with its inability to enforce the training of anesthesia clinicians, and of physicians in particular.8 One member of the Council expressed his concerns as follows:

“Further on the topic of education, training, and safety, I would like to share with you the frequent reality of clinicians refusing in-service on new anesthesia equipment because they are ‘too busy’ or, ‘can figure things out for themselves,’ or, if they show up at all, they stick around for a few cursory minutes before they run out to do something else. We find ourselves chasing clinicians so we can fulfill our commitment to ensure that our equipment is operated safely. My colleague put it this way, ‘No one shows, no one listens, and very few care.’ These are usually the guys who scream the loudest if something goes wrong, or they misuse, or misunderstand some facet of the equipment’s operation. This probably sounds harsh, but unfortunately it is true—an aspect of the business which all manufacturers face, and one not usually discussed. This situation appears to be more a function of how a specific institution ‘runs its business.’ The local ‘champion’ concept, which we discussed in our other emails, is an essential part of the solution to this problem, but I think that this may be another opportunity for the APSF to make a difference by making this issue more visible and recommending a means of ensuring that users of anesthesia equipment are adequately trained on their equipment.”

With these and other imperatives to take action, the Committee on Technology of the APSF sought examples of mandated technology training programs and developed a pilot program for implementation, analysis, and presentation to the anesthesia community. That program is described below.

The Problem with Current Practice

Conventional “in-service” programs are often recognized as superficial and inadequate because they do not require advance preparation, are not mandated, do not allow individual practice, do not test for learning or application skills, and are frequently abandoned for lack of time (as the morning break or refreshments run out). They typically occur only once, when new equipment is installed; they do not account for personnel who are away from work, nor do they accommodate new hires or new classes of trainees. Many clinicians lack the interest to learn, verbalize a great reluctance to accept change, or find it very difficult to learn complex new technologies. Not all institutions designate or recognize an equipment advocate-enthusiast, and only a handful have sent a clinician to the factory for additional training. Some anesthesia departments have on-site biomedical technicians, but they may not have obtained the specialized training prior to installation, or may not have time for teaching. Finally, simple instructional aids may not exist.

Getting the clinicians and educators together seems to be the problem. Should the “carrot” or the “stick” be applied to mandate training, which promises to close the gap in knowledge and application skills and thereby improve patient safety?

Experience With Mandated Technology Training Initiatives

Clinical and corporate members of the Committee on Technology of the APSF were asked to provide detailed examples of organized, comprehensive, and mandatory technology training programs within their clinical base. Only 3 were provided. One involved a 6-hour training session for 50 CRNAs/student nurse anesthetists (SRNAs) and all MD attendings in a Michigan academic department9 prior to installation of 25 new machines. The time and manpower expense for nursing was borne by the hospital, while the manufacturer provided a line-item training expense on the customer’s invoice. Components of the training included manuals, manufacturing experts, train-the-trainer sessions, hands-on workshops, 6 weeks of on-site company representatives, and super-user training, but no competence testing. No information was provided on training for the anesthesiologists at that same institution, nor on the effects of the training on subsequent use of the equipment.

Another program was described by a major manufacturer’s director of clinical education in the United Kingdom.10 This program utilized a new guided workbook with a directed, hands-on learning and response format, led either by the company representative for a period of 3 hours or by a train-the-trainer who had received a full day of training. The cost of the training program was line-itemized on the customer’s invoice. The program was not mandated, and the spokesman commented that cooperation with training was most difficult to obtain from physicians and consultants because of time restrictions, culture, and the venue for learning.

In a third description of training, at a private hospital in Michigan,11 the Anesthesiology Department Chair mandated completion of a formal training program for 45 anesthesiologists and 20 CRNAs, who could not use the new machine clinically until they had completed a 30-minute training session with a company representative during a 3-week period. Those who did not initially participate were continuously assigned to “less desirable” off-site locations that did not have the new machine, or were left unassigned, with negative financial consequences. Such individuals quickly sought training once the negative consequences were realized. The Chair further described a highly receptive CRNA staff, as well as cases of “pure arrogance” and antipathy among several anesthesiologists, some of whom initially refused to be trained.

Design of a Mandated APSF Pilot Program

Developing consensus. The authors of the pilot training program (MAO, BR, AA) from the APSF agreed that initial consensus would have to be achieved among the key advocates for change. Wake Forest University Health Sciences was chosen as the test site for the program. The authors met with the Wake Forest University Health Sciences Vice President for Operations, the Director for Surgical Services, the Director of the Surgical Services Academy, the Chief CRNAs, the Chair of the Nurse Anesthesia Training Program, the Chief of Surgery, Risk Management officers, the Residency Training Program Director, and the Chairman of the Department of Anesthesiology. All were briefed on the background and the intended scope of developing a model, mandated training program for the anticipated introduction of new and advanced anesthesia machines. Universally, these institutional leaders felt that additional machine training would be valuable. In trying to develop consensus, however, the issues that proved difficult to resolve included:

1. demands for proof that training was necessary
2. establishment of baseline practices
3. convincing the community that this special program was valuable
4. convincing, specifically, staff anesthesiologists that the program was necessary
5. concurrent development of refresher courses
6. ability to accomplish training of 195 individuals prior to machine use
7. supportive statements from regulatory bodies (JCAHO or the ASA)
8. ability to simplify training (e.g., online programs?)
9. determining the consequences of a refusal or failure to participate
10. provision of appropriate amounts of time and resources
11. measuring the outcome and value of the training process
12. approaching the mandate through positive or negative reinforcement
13. training significant numbers of random new hires.

Mandating the training. The most difficult obstacle was the method for mandating the program across so many different categories of clinicians. The student nurse anesthetists were easily directed into the machine training modules, as they would be for other subject modules, and these were organized into their classroom schedule. The residents were informed by their Program Director that the training was a mandatory part of their curriculum but, given their less structured environment, were expected to attend some of the modules independently. The Chief CRNAs instructed their employees to attend the sessions; the employees received continuing education credits and were typically relieved from clinical duty by additional CRNA clinical coverage. The faculty were initially informed of the planned training program, with an expectation from the training program director (MAO) that they would participate.

In a series of memos leading up to the training, the 4 groups were informed by the training program director that the program would be mandatory, but the final detailed memo simply described the program and stated that the machines would not be installed until all clinicians had completed training. This hesitation stemmed from discussions between the training program director and the Department Chair, and from the Chair’s resultant decision to rely on encouragement, advertisement, individual judgment, and certificates of completion to achieve success, particularly with regard to faculty members. The other 3 groups had their own respective leaders who communicated a relative mandate. Thus, the Chair remained silent on the issue and did not communicate with any of the staff on this topic, nor were any punitive consequences for missing the training ever announced to any group.

Program content. The training program was designed to extend over a 2-month period and contained 4 structured components: 1) a 60-minute lecture, repeated twice, with slides available on the Department intranet, 2) a 60-minute hands-on workshop led by the manufacturer’s technician, as waves of clinicians attended 9 machine stations, 3) a pre-programmed, 30-minute clinical simulation application, which included functional troubleshooting, and 4) a formative assessment tool (or “test”) containing 40 questions derived from the manufacturer’s user manual. A fifth, unstructured component was the independent reading of the user’s manual posted on the intranet.

Go live. After the 2 months allowed for participants to complete the 4-stage program, the machines were installed for clinical use. Two of the manufacturer’s clinical applications specialists were on hand for a total of 5 days to assist the clinicians, but they were generally not called upon. Several follow-up announcements encouraged everyone to complete all 4 stages and earn a certificate of completion.

Results of Implementing the Program


Figure 1: Component percentage completion by group. Certificates were awarded only to those completing all 4 components.

Number of participants and completion rates. There were 195 eligible participants expected to take part in the training program: 70 CRNAs, 42 staff anesthesiologists, 45 residents, and 38 student nurse anesthetists (SRNAs). The overall certification rate was only 54%. Completion rates for the lecture, workshop, simulation, and exam, and the certification rate, were readily verified; completion rates for the reading could not be. The percentage completion of each measurable component is reported in Figure 1. The SRNAs achieved the maximum completion rate in every component, with 100% certification. The CRNAs and residents had statistically similar component completion rates, lower than those of the SRNAs except for the workshop component, with a trend toward higher participation in the workshop and simulation. Certification rates for those 2 groups were statistically equivalent at 54% and 51%, respectively. Completion rates among staff anesthesiologists were significantly, and sometimes dramatically, lower, although their workshop completion was high at 90%. The MD certification rate was by far the lowest at 14%.
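The statistical claims above can be checked approximately from the published figures alone. The sketch below is a hypothetical illustration (the authors do not state their method): it reconstructs the certified counts from the reported group sizes and rates and applies a standard two-proportion z-test.

```python
# Hypothetical check of the certification-rate comparisons above.
# NOT the authors' analysis: certified counts are reconstructed from
# the reported group sizes and rates (54% of 70 CRNAs ~ 38,
# 51% of 45 residents ~ 23, 14% of 42 staff MDs ~ 6).
from math import sqrt
from scipy.stats import norm

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))           # z statistic, two-sided p

print(two_proportion_z(38, 70, 23, 45))     # CRNA vs resident: p ~ 0.7
print(two_proportion_z(38, 70, 6, 42))      # CRNA vs staff MD: p << 0.01
```

Under these reconstructed counts, the CRNA-versus-resident difference is nowhere near significant, while the CRNA-versus-MD gap is significant at any conventional threshold, consistent with the comparisons reported above.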


Figure 2: Distribution of initial test scores of 121 individuals on the 40-question exam.


Figure 3: Number of correct answers for each individual question. The 9 lowest-scoring and the 14 highest-scoring questions were analyzed for content (see text).

Results of the examination. Although initial performance on the formative assessment tool was not used as a determinant of certification, the results were analyzed to gauge general understanding of various features of the anesthesia machine. Figure 2 demonstrates a Gaussian distribution of test scores, with an overall mean score of 22 ± 4.9. Scores of the SRNAs (19.5 ± 3.5) were significantly lower than those of the CRNAs (23.5 ± 5.2) and the faculty (25.9 ± 6.3) (p < 0.01, Scheffé post-hoc comparisons), but not than those of the residents (22.3 ± 3.6). Individual questions were then ranked by correct-answer rate across all groups, as shown in Figure 3, and the lowest- and highest-performing questions were analyzed for content.
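The Scheffé post-hoc comparisons can be sketched the same way from the reported means and standard deviations. In the illustration below, the SRNA exam count (38, since all SRNAs certified) and the faculty count (8, from Figure 4) come from the text, while the CRNA and resident counts are assumed so that the 4 groups total the 121 test-takers of Figure 2; this is a demonstration of the criterion, not the authors’ computation.

```python
# Illustrative sketch of the Scheffé post-hoc criterion applied to the
# summary statistics above; not the authors' computation. Group sizes
# for the CRNAs and residents are assumed so that the 4 groups total
# the 121 test-takers of Figure 2.
from itertools import combinations
from scipy.stats import f

groups = {                         # group: (mean, sd, n)
    "SRNA":     (19.5, 3.5, 38),
    "CRNA":     (23.5, 5.2, 45),   # n assumed
    "Resident": (22.3, 3.6, 30),   # n assumed
    "Faculty":  (25.9, 6.3, 8),
}

k = len(groups)
N = sum(n for _, _, n in groups.values())

# Pooled within-group variance (the one-way ANOVA mean square error)
mse = sum((n - 1) * sd ** 2 for _, sd, n in groups.values()) / (N - k)

# Scheffé critical value at alpha = 0.01, matching the reported p < 0.01
crit = (k - 1) * f.ppf(0.99, k - 1, N - k)

for (a, (ma, _, na)), (b, (mb, _, nb)) in combinations(groups.items(), 2):
    F = (ma - mb) ** 2 / (mse * (1 / na + 1 / nb))
    print(f"{a} vs {b}: F = {F:.1f}", "significant" if F > crit else "n.s.")
```

With these assumed group sizes, the criterion reproduces the reported pattern: the SRNA mean differs significantly from the CRNA and faculty means but not from the resident mean.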

The most significant misunderstanding was demonstrated in 3 questions regarding automatic leak testing of the machine. The vast majority of respondents could not correctly identify which component of the machine could be tested for leakage when asked to compare the water trap, piston diaphragm, flow sensor, scavenger canister, vaporizer O-ring, and vaporizer filling port. Other frequently missed questions dealt with facts about the monitoring mode, the effects of weight and age on other parameters, and the oxygen ratio controller. There was a surprisingly significant misunderstanding of why the machine determines its system compliance, a key feature of modern anesthesia machines. Of 119 respondents, 105 rated the exam as moderately difficult to difficult.

Analysis of the highest-performing questions revealed a good understanding of several basic and important functions of the anesthesia machine. For example, participants understood that 1) the workstation could deliver oxygen and manual ventilation without any power, 2) it had 3 apnea-detection strategies, 3) certain ventilation functions were, or were not, associated with the APL valve, 4) certain actions should be taken upon a presumed total failure of the system, and 5) particular modes of ventilation were available. In contrast to the misunderstandings of leak testing noted above, respondents did understand that the APL/man/spont leak test could detect a leak in the breathing hose, and that the automated testing should be performed daily.


Figure 4: Survey and exam completion by group. Total refers to the number of individuals in each group. Note that the survey and exam participation are close in all groups, except that more faculty answered the survey (14) than answered the exam questions (8).


Figure 5: When asked which component they learned the most from, participants overwhelmingly chose the hands-on applications in the workshop and simulation lab.


Figure 6: Participants demonstrated some reluctance when asked if they were ready to use the anesthesia machine, following all 4 components of training.


Figure 7: When asked if such training programs should be mandatory, 97 out of 125 (78%) either agreed or strongly agreed.

Training program survey results. Of 195 subjects, 125 (64%) answered the training program survey, even though only 105 fully completed the training. Unfortunately, the survey was linked to the exam, so the responses reflect only those who sat for the exam (Figure 4). The hands-on sessions in the workshop and simulation were deemed most informative (Figure 5), but despite extensive training, nearly half of the participants remained uncertain when asked about their readiness to use the machine clinically (Figure 6).

Of 125 respondents, 104 said that the program was moderately to extremely well organized, and 116 of 124 felt it was moderately to extremely valuable overall. Somewhat fewer (94/125) felt that patient safety would be improved as a result of the training program, whereas 97/125 (78%) felt that such training should be mandatory (Figure 7). Finally, when asked whether the APSF should convene a consensus conference on mandating similar technology training programs, 88 of 124 respondents (71%) agreed or strongly agreed, while 28 were neutral.

Free entry comments. A total of 129 free-entry written comments were categorized and ranked by general concern; the most frequent categories were as follows:

Frequency   Comment
46          Participants wanted more hands-on experience in the training program.
20          The timing and sequence of the modules needed to be changed.
12          Participants preferred take-home printed user manuals and information as opposed to the on-line reference materials.
11          Participants preferred on-site clinical training with an experienced factory representative during actual clinical care.
11          Respondents advised that the program focus on key points.

An overwhelming majority of the comments indicated that participants wanted more hands-on experience with the machine, consistent with their stated preference for the workshop and simulation learning modules. Similarly, the comments on timing and sequencing indicated that learners wanted a much more concentrated effort at hands-on learning, ideally located in the OR environment for easy access during breaks, or even conducted in the clinical setting. They felt that lectures and exam review should follow the hands-on sessions (and they did, after the survey was completed). Many stated that they could neither understand nor remember the details of the early-phase lecture, having never seen the machine.

Conclusions

Justifying, organizing, and accomplishing a comprehensive, mandatory technology training program prior to the installation of a profoundly modern and unfamiliar anesthesia machine proved to be a real challenge on many different levels.

The biggest failure of intent was NOT mandating the program for all categories of clinicians who would be responsible for using the equipment upon installation. It was the authors’ impression (later confirmed) that the absence of a Chairman’s mandate for staff anesthesiologists was based upon a perceived lack of realistic and enforceable consequences for non-participation. Staff anesthesiologists in academic settings, at least, are notoriously independent-minded, perhaps drawn to the academic environment by the promise of freedom of expression, learning, and specialization. Many were simply not interested in learning the intricacies of the machine and felt confident that they could use it with minimal training. Our institution does not routinely provide primary staff-administered anesthesia, but rather staff-directed care, which could have made such training seem unnecessary. Similarly, the mediocre (but much higher) certification rates among residents and CRNAs may have reflected the lack of enforceable or threatened sanctions against those who did not complete the training.

On the other hand, one would expect academic faculty, at least, to understand and appreciate the significant literature on failures of human training and the high rates of human error in the application of complex machines.4 Furthermore, the obligation of faculty to intervene in situations of sudden machine failure is obvious, but apparently did not enhance the faculty completion rate. The recent national emphasis on patient safety, and on documentation of capability and performance within both the ABA and the ACGME, should be widely known to academic faculty. Perhaps the negative motivational factors listed above need to be overcome by negative consequential mandates from the Chairman, if such training is indeed deemed important for patient safety. At least one private-hospital Chair (described above) succeeded in mandating training by following through on a threat to withhold preferred practice assignments. It seems clear from this pilot program, and from the other evidence above, that the motivational “carrot” of certification was rejected for lack of enforcement, and that a top-down mandate is essential to accomplish such training.

The positive consequential mandates given to the CRNAs and the residents (including a certificate and CE credits) were still inadequate to achieve a high certification rate; no threats or sanctions for failure to certify were made by their supervisory personnel. Regardless, there was heroic effort and enthusiasm, particularly among the CRNAs and their leadership, to secure the participation of all 70 individuals. Furthermore, there were high completion rates of the active-learning modules that adult learners typically find most engaging. The 90% faculty participation in the workshop session suggests a preference for hands-on training, but that preference is not supported by their 38% participation in simulation.

The following commentary and insight from Dr. Raymond Roy pertains to the decision not to mandate faculty participation in this study:
In the project development phase Dr. Olympio and I openly discussed the pros and cons of various roles I could play as department chair. Despite Dr. Olympio’s prediction of poor compliance without strong top-down pressure, I chose to treat this proposal as a routine clinical study, i.e., I endorsed it but did not mandate participation. Ironically, my decision enabled a clear demonstration of what not to do. I am pleased with the study, but disappointed in the outcome.

Is it a faculty member’s fault for not doing what is right and for not being a good role model for residents and CRNAs? Is it the principal investigator’s fault for not “selling the project” well enough? Or is it the chair’s fault for not championing the cause? Human nature being what it is, I consider it primarily a leadership issue. I hope to be presented with a similar proposal in the future. Armed now with real data I would view the project, not as a clinical study, but as a major departmental safety initiative. I would aggressively seek buy-in prospectively from the faculty and shepherd the project to completion. Faculty members who fail to receive their certificates by a certain time would not be assigned clinically until they did so. This penalty would create significant peer pressure. If that was not enough, it would have financial consequences related to a failure to acquire the requisite number of clinical service units. And if that was not enough, it could ultimately affect recredentialing and the employment contract. What private or academic anesthesia group really wants to recruit an anesthesiologist, anesthesia resident, CRNA, SRNA, or AA who refused to participate in a mutually agreed upon safety initiative?

Raymond C. Roy, PhD, MD
Professor and Chair of Anesthesiology
Wake Forest University School of Medicine
[email protected]

It is not surprising, and rather gratifying, that 100% completion was achieved by the SRNAs, as this particular program is highly regarded for its quality, rigor, regimentation, and assertive leadership. Highly motivated students in a classroom-assignment system are unlikely to miss an assignment mandated by their Chair, presumably because of perceived or actual severe negative consequences.

Is it possible, preferable, or incumbent upon other clinicians in anesthesiology to demonstrate similar adherence to recommended training? Perhaps a cultural difference among more independent-minded practitioners, and even residents, needs to be addressed. We do not think a lack of opportunity to attend was the reason; rather, the lack of “assignment structure,” particularly among residents and faculty, may have been the problem.

The next obvious failure of the training program was the low score on the formative assessment tool (which was reviewed and corrected after the initial grading). Despite what was considered a heroic training effort, a high percentage of correct answers was not obtained. Although the test was difficult, detailed, and designed to distinguish variations in knowledge, it was disappointing to see such low scores after so intense an effort.

We also did not measure machine-application capabilities in simulation following the training, primarily because this project was NOT designed as an outcome study, but rather as a trial of our ability to implement wide-scale training of a large group of clinicians and to gather their perspectives on that training. We already believed that simulation training would improve application of the machine, and ultimately patient safety, as it did in the Dalley study.2

What did succeed in this pilot program was a great deal of enthusiasm and broad participation in at least some of the components of training, and we learned that an overwhelming number of clinicians felt the training was valuable and would improve patient safety. They made a number of consistent and constructive suggestions to improve the training by making it more clinically focused and more succinct, with greater time spent on applications training. Roughly half of the entire eligible group (97/195) felt such training should be mandatory, which corresponds to 78% of those who responded to the survey (97/125).

The impediments to mandating this training remain, and overcoming them will probably require a consistent and persistent culture change. We believe that publicity and documented concern over the need for technology training will continue to increase, as evidenced by the background data in this paper and, presumably, by this effort itself. Cynics will need to be reminded of the existing data that justify such training, and additional research will need to provide hard, measurable justification, such as reductions in service calls, reductions in critical machine incidents, or an increased ability to rescue from, or troubleshoot, machine-related problems. The APSF invites your comments and suggestions on the next steps for this initiative.

Dr. Olympio is the Chair, Committee on Technology for the APSF and is Professor of Anesthesiology at Wake Forest University School of Medicine, Winston-Salem, NC.

Ms. Reinke is a member of the Committee on Technology for the APSF and is General Manager of Anesthesia Delivery for GE Healthcare Technologies, Madison, WI.

Mr. Abramovich is a member of the Committee on Technology for the APSF and is a Principal Consultant for ProMed Strategies, LLC, Lawrenceville, NJ.


References

  1. Olympio MA. Modern anesthesia machines: what you should know. American Society of Anesthesiologists Refresher Course Lectures. Park Ridge, IL: American Society of Anesthesiologists, 2005:501.
  2. Dalley P, Robinson B, Weller J, Caldwell C. The use of high-fidelity human patient simulation and the introduction of new anesthesia delivery systems. Anesth Analg 2004;99:1737-41.
  3. Caplan RA, Vistica MF, Posner KL, Cheney FW. Adverse anesthetic outcomes arising from gas delivery equipment: a closed claims analysis. Anesthesiology 1997;87:741-8.
  4. Weinger MB. Anesthesia equipment and human error. J Clin Monit Comput 1999;15:319-23.
  5. Eisenkraft JB. A commentary on anesthesia gas delivery equipment and adverse outcomes. Anesthesiology 1997;87:731-3.
  6. Cooper JB, Newbower RS, Kitz RJ. An analysis of major errors and equipment failures in anesthesia management: considerations for prevention and detection. Anesthesiology 1984;60:34-42.
  7. Cox M, et al. Dear SIRS: Clinician recognizes importance of machine checkout. APSF Newsletter 2004-05;19:50-1.
  8. Corporate Advisory Council member, Anesthesia Patient Safety Foundation. Personal Communication, September 28, 2004.
  9. Dosch M. University of Detroit Mercy. Personal Communication, March 25, 2005.
  10. King C. Dräger Medical, UK. Personal Communication, April 4, 2005.
  11. Dull D. Spectrum Health, Grand Rapids, MI. Personal Communication, December 2, 2005.