Volume 5, No. 2 • Summer 1990

ASA Peer Review Group Publishes Model for Judging Clinical Competence of Anesthesiologists in Fair Manner

William L. Collins, M.D.

To the Editor

The American Society of Anesthesiologists Committee on Peer Review (the Committee) believes that the peer review process and quality assessment and assurance activities are an integral part of any effort to maximize patient safety and minimize patient risk in the medical setting. Patient safety depends upon high-quality care provided by competent practitioners in an environment capable of supporting their efforts.

The Committee currently recommends a model for evaluating the clinical competence of individual providers of anesthesia care, developed by Dr. Terry Vitez and initially employed in several hospitals in Las Vegas, Nevada. This model is the first example we have found that brings a measure of quantification to what has previously been a qualitative judgment. Other successful models are also in use.

There are three basic tenets to the system: competence is a human decision; outcome is the best indicator of competence; and humans are inherently fallible. This loop describes a system that may be suspect because of our human fallibility in making decisions about competence. Yet we believe that a system that combines objective methods for evaluating quality of outcome with quantitation of outcomes from large groups, and then compares individual performance against the group standard, can be an effective way to determine competence.

To summarize the system briefly is perhaps to do it a disservice; however, an attempt will be made. The basis of the methodology is analysis of outcome, as evidenced by a continual collection of data that reflects the results of care provided by individuals and permits comparison of individual outcomes with those of the group. The data collection enables identification of the conditions (e.g., healthy adult herniorrhaphy vs. premature neonate congenital cardiac defect repair) and sets minimal acceptable standards (ASA Class I patients should not suffer organ damage).

Each individual “incident” (questionable outcome) is analyzed to determine whether an “error” involving anesthesia has occurred. The severity of the “negative outcome” is determined and assigned a “negative outcome score” (NOS) from 0 to 10. For no sequelae, NOS = 0; if an escalation of care occurs, NOS = 1-3 (e.g., postanesthesia ventilation because of relaxant overdose); if “reversible organ damage” occurs, NOS = 4-6 (e.g., pulmonary edema from fluid overload, corrected by diuresis); “irreversible organ damage” (e.g., stroke) has NOS = 7-9; and death, NOS = 10.

If an “error” is determined to have happened, its nature is examined. Was it technical (the action taken was not the intended action, e.g., arterial puncture rather than venipuncture); judgmental (the action taken was the intended action, e.g., mask anesthesia for a parturient who had just finished a “Big Mac”); or a lack of vigilance (inaction for lack of attention, e.g., exsanguination from a disconnected arterial line)?
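The two-part classification just described, a 0-10 severity score plus an error taxonomy, can be sketched in code. This is purely illustrative; the names, data structures, and bands below paraphrase the letter's description and are not taken from Dr. Vitez's actual software.

```python
# Illustrative sketch of the NOS severity bands and error taxonomy
# described in the text. All identifiers are hypothetical.
from enum import Enum

class ErrorType(Enum):
    TECHNICAL = "technical"    # action taken was not the intended action
    JUDGMENTAL = "judgmental"  # action taken was the intended action
    VIGILANCE = "vigilance"    # inaction for lack of attention

# Negative outcome score (NOS) bands, inclusive ranges from 0 to 10.
NOS_BANDS = {
    "no sequelae": (0, 0),
    "escalation of care": (1, 3),
    "reversible organ damage": (4, 6),
    "irreversible organ damage": (7, 9),
    "death": (10, 10),
}

def severity_band(nos: int) -> str:
    """Return the severity label whose band contains the given NOS."""
    for label, (lo, hi) in NOS_BANDS.items():
        if lo <= nos <= hi:
            return label
    raise ValueError(f"NOS must be between 0 and 10, got {nos}")
```

A reviewer's verdict for one incident could then be recorded as, say, `(severity_band(5), ErrorType.JUDGMENTAL)`; the point is simply that both dimensions are discrete and can be tabulated across a group.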

The decisions about anesthesia involvement (yes or no) and, if yes, the relative severity of the outcome and the nature of the “error” (if there is indeed one) are human decisions made by a subcommittee of knowledgeable anesthesiologists who are usually part of the group being evaluated. Anonymity is maintained by trained support staff, who extract the pertinent data for presentation to the reviewers.

If an error is determined to have been made, a discussion elucidating its nature and methods for future avoidance is included in the report. These decisions are in turn evaluated by the group as a whole. Each individual’s performance is periodically tabulated and compared to the larger group’s in response to questions about competence or as an aid in the reappointment process. Computer software is available that enables us to accumulate the data and display it as raw data, in relational displays, or graphically. Experience has shown that individuals can be perceived as “bad news” yet be shown by performance analysis to be competent: perhaps not your choice of dinner companion, but competent. Some analyses have revealed general group needs for education, e.g., a need for nerve stimulator monitoring of muscle relaxant usage.
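The tabulation step above, comparing an individual's outcomes to the group standard, might look something like the following. The data, provider identifiers, and comparison metric (mean NOS) are all invented for illustration; the letter does not specify how the software actually aggregates scores.

```python
# Hypothetical sketch of comparing individual outcome scores to the
# group's, per the tabulation step described in the text.
from statistics import mean

# Each record: (provider id, negative outcome score for one incident).
incidents = [
    ("dr_a", 0), ("dr_a", 1), ("dr_a", 0),
    ("dr_b", 3), ("dr_b", 6), ("dr_b", 4),
]

def provider_mean_nos(records, provider):
    """Mean NOS for one provider (0.0 if the provider has no incidents)."""
    scores = [nos for who, nos in records if who == provider]
    return mean(scores) if scores else 0.0

group_mean = mean(nos for _, nos in incidents)

for who in sorted({p for p, _ in incidents}):
    delta = provider_mean_nos(incidents, who) - group_mean
    print(f"{who}: mean NOS {provider_mean_nos(incidents, who):.2f} "
          f"({delta:+.2f} vs. group mean {group_mean:.2f})")
```

In practice a real system would also stratify by patient condition (as the letter notes, a healthy adult herniorrhaphy is not comparable to a neonatal cardiac repair), so raw means like these would only be a starting point.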

Finally, there are those unhappy cases in which a colleague is found to be incompetent, usually not on the basis of a single human error, which could happen to any of us under some circumstances, but rather on a pattern of poor or questionable outcomes that do not compare favorably with those of other similar providers. The methodology now allows us to quantitate the severity of these outcomes and objectively analyze why they happened. When these analyses are performed by one’s colleagues or a comparable group, true peer review has been done.

Is the system perfect? No, not by a long shot. Is it an improvement over what came before? Yes, vastly! Will there be further improvements? Work is in progress, with data being collected and used to upgrade the system constantly. The ASA has available, at a nominal charge, a publication that discusses the issues involved in peer review (see adjacent box) and includes a comprehensive description of Dr. Vitez’s program.

The measure of individual competence is but one, albeit very important, factor in providing safe and risk-free care for our patients. The review of quality of care is a comprehensive process encompassing a variety of issues. It should be, and is, undergoing constant review and revision by many different parties: some of them our fellow anesthesiologists, others with medical expertise, and, unfortunately, some with little concern beyond a simplistic desire to limit the cost of care.

The burden is upon us to continue to demonstrate our commitment to providing high-quality anesthesia care that is safe and as nearly risk-free for our patients as possible, and to show by objective means that we are meeting that commitment.

Dr. Collins, Olympia, WA is Chairman of the ASA Peer Review Committee.