Volume 11, No. 1 • Spring 1996

10-Year Program Review Reveals Variety of Topics, Success of Many Projects

Jeffrey B. Cooper, Ph.D.

Before the APSF was created in 1984, there was no defined source of research funding for topics directed squarely at anesthesia patient safety. Studies of “safety” usually employ “softer” research methods and can be difficult to judge by the criteria used for more traditional scientific research. Recognizing this situation, one of the primary interests of those who founded the APSF was to create seed funding for studies related to patient safety and so establish the idea that safety is an important issue, worthy of targeted support. Thus, the APSF Scientific Evaluation Committee (SEC) was established in 1985 under the leadership of Arthur Keats, M.D., with eight other reviewers (including this reporter). Dr. Keats created a process of review and a standard for excellence modeled on his experience in reviewing grants for the NIH and other agencies (see box below for the current grant application review process). The first three APSF grant awards of $35,000 each were selected in 1986 from among 26 applications. Today, after ten years, 35 grants totaling $1,322,429 have been awarded from 257 total applications. In the most recent year, a record 47 applications were submitted and five were awarded a total of $153,892 (APSF Newsletter, Winter 1996).

What kind of research has been supported by the APSF grant program? To whom have the awards been granted? What has been the impact of this effort? These and other questions are explored here. We must also ask: Are we still headed in the right direction? What kind of research should be supported? Which is best supported elsewhere? Should we take a more active approach to supporting specific targeted research?

To address these issues, as the current Chairman of the SEC, I have classified the topics of applications selected for funding (see the APSF Newsletter, December 1989, for the last such review), reviewed the list of related publications, and solicited comments from APSF-funded investigators. Results of a survey sent to all unsuccessful applicants between 1989 and 1995 are compiled, and some of their comments are reported. (The number of surveys sent was limited to conserve expenses and because it was assumed that most unsuccessful applicants from long ago would not remember the details of the process.)

Summary of Research Supported

The 257 applications were submitted from about 100 institutions; 22 institutions were awarded a grant. Among the 35 awards (Table 1), six investigators were funded for a second application. The topics of approved applications were diverse (Table 2). The categories chosen are arbitrary, but suggest some themes: The assessment of outcome or the assessment of risk appeared most frequently; one or the other was an objective in ten projects. Human factors or performance was the subject of eight grants. Nine grants focused on prevention of a specific injury or study of a specific complication. Simulation or simulators were the focus of six grants. Five projects involved monitoring and another four addressed “alarms” or “artificial intelligence.” Surprisingly, only one application was funded for development of a “device” (development and evaluation of a “critical events prompter”). Four awards addressed problems specific to children, but only one the problems of the geriatric patient.

How did applicants feel about the process?

A 13-question survey was designed to learn whether unsuccessful applicants felt the process was fair, well-managed, and useful and, especially, what impact the program has even on those who do not receive awards. Unfortunately, only 34 of the 112 surveys mailed were returned. Still, the results, summarized in Table 3, are useful for examining these issues.

Although it may be a lot to expect that applicants whose projects were rejected would have a positive feeling about the experience, the majority of respondents felt that the application process was well-managed, e.g., that the guidelines were clear, that applications were reviewed promptly, and that notification of disapproval was received promptly. There were a few notable exceptions, e.g., nine felt that they were not notified promptly enough about rejection of their application. Sixty percent felt that the feedback was useful; perhaps that figure is not higher because the information provided as feedback is limited. The SEC started to provide feedback to applicants in 1989, in the form of a brief summary of the reviewers’ comments, almost always compiled by the SEC chairman. To minimize the burden on the reviewers, who review about 10-20 grants in two months during the summer, we have asked only for a brief rationale for the rating given to each application. That has perhaps not provided enough useful information for more applicants to learn how to improve their applications for re-submission or for submission elsewhere. Then again, it seems to have been useful for many.

An optimist would see a half-full glass in the 13 respondents who conducted at least some of their study even without funding from the APSF. Ten of those had received funding from another source. Thirteen of the 28 who responded to the question felt that the process was useful despite their lack of success in winning an award. Perhaps this suggests that the program has had the effect of stimulating ideas that are explored even if funding is not secured from this source. On the other hand, it is unfortunate that so many did not feel they got something from preparing the application.

There were several comments in addition to answers to the survey questions. No pattern emerged, but many of the suggestions can guide us in improving the program. One, for instance, suggests that we “… consider proposal pre-screening to weed topics that are unworthy.” The guidelines do encourage applicants to call the chairman or a committee member for that kind of assistance, but perhaps this is not clear or not sufficient to help many people. Several prospective applicants do contact the SEC Chairman every year seeking advice, which is provided.

There were several comments on the general subject of what is funded. One respondent felt that there should be “… better objective reviews.” Another felt that the “… criteria for evaluation are not explicit.” It was suggested that we “… expand (the) focus of efforts beyond simulators and equipment,” although the list of funded projects would suggest that this is a problem of perception. Another respondent felt that the “… scope of issues that might be funded (is) not well defined; Define precisely – do not waste people’s time writing grants when guidelines are loosely defined by criteria for funding that are likely well defined.” This was reviewed at the SEC retreat (described below); some modification was made to the criteria, but the reviewers wish to maintain an environment for innovation and do not think the criteria can be defined any more precisely than they are stated in the guidelines for applications. The request that we “publish (a) list of successful projects” has been met in this article. Another respondent reported that, although his or her rejected application was about a controversial topic, the work was subsequently funded by the local department, won a national research award, and will be published. The suggestion was that there be a mechanism to fund some projects that receive some high marks even if their average priority is lower. In fact, this past year, the review mechanism has been altered to one that parallels that used by the NIH (see box). While this still may not overcome the objections stirred by a controversial subject, it does allow for more debate and exchange of views among the reviewers. It is a bit more difficult to satisfy the person who believes that “… the central committee appears to be a non-scientific group of ‘cronies’. Research should advance the field scientifically, not regurgitate for the benefit of one’s own offspring.”

What Has Been the Result of the Effort?

It is difficult to measure the impact of any single award or even that of the entire program. By an academic measure, the program seems to be productive: 66 original papers and 50 abstracts are reported by the PIs as being related to APSF funding (not necessarily solely from APSF funding; see APSF Newsletter, 9(4):49-50, 1994, for discussion of topics and an abbreviated list of publications). Several papers stand out as being influential in changing clinical or educational practice. The development of computer-based and realistic simulators has clearly benefited greatly from APSF funding. Although, during the years of funding these projects, none of the reviewers had any experience or involvement with simulators, the idea seemed to catch their collective imagination. Those projects were not directly aimed at simulator development; rather, they used simulators to measure or teach something that required this unique tool. Still, the effect was to seed several groups working independently on this new venture in its early years and so help the concept grow beyond research applications and into a tool for education. It was only later, after simulator projects had competed with other projects and survived the scrutiny of the quite independent SEC, that the APSF Executive Committee committed targeted funds to these activities.

On an entirely different subject, one APSF project led to a widely quoted paper by Coté et al., which was probably influential in promoting the use of capnography in children. And, in yet another direction, reports of CO poisoning related to carbon dioxide absorbent, some from the work of Dr. Richard Moon, have recently received much attention. Several important papers have been derived from the epidemiological studies of Dr. Mark Warner and colleagues. They have suggested, for instance, that adult patients with clinically-apparent aspiration who do not develop symptoms within two hours are unlikely to have respiratory sequelae. On a broader scale, they have demonstrated the remarkably low rate of mortality and morbidity associated with ambulatory surgery. Another important finding has been that perioperative ulnar neuropathies are associated with factors other than general anesthesia and intraoperative positioning.

There is a large set of papers from several APSF investigators who have examined some aspect of human performance. Funding from the APSF is one of the few sources for this kind of work, but a clear effect on clinical practice has yet to be felt.

Not all of the projects reached successful conclusions. Several did not meet their objectives because of technical difficulties, because an idea did not work, or because a hypothesis proved to be wrong. Such are the vagaries of research. Also, several projects of the past few years are either still in progress or have not yet published their results. So, the impact of the APSF research funding can only be seen from the work of the earlier years.

Several of the Principal Investigators have commented to me on what the APSF funding has meant for them. Let me summarize what a few have said, since I believe their comments indicate the intangible impact that the APSF program has had. Dr. David Gaba described how the APSF funding had a profound influence on redirecting his research career from cardiovascular physiology to patient safety. His interest in human performance demanded a simulated environment. He is “… convinced that no other funding agency would have given any credence to the notion of building a simulator for research on decision-making… (Our) first APSF grant thus allowed us to produce our first generation of anesthesia simulator with which we conducted a set of research studies … My entire laboratory, and indeed my career, was turned … into this avenue and away from other pursuits.” A second grant led to the development of a curriculum for Anesthesia Crisis Resource Management, adapted from the aviation model. “This grant has resulted in one major article, a new textbook on Crisis Resource Management and a host of other studies represented by other articles and abstracts.” The ACRM concept has been adopted by several other centers. The Laboratory for Human Performance in Anesthesiology at the Palo Alto VA Hospital “has been started based on APSF funding… current members and alumni of the lab continue to investigate a wide variety of aspects of human performance… The APSF can clearly take credit for starting two whole new arenas of investigation and development (simulators and crisis resource management training) that might not otherwise have gotten underway. Moreover, the Foundation created the environment in which investigators like me could aim our research and academic careers in the patient safety sphere…”

The work of Dr. Mark Warner seems to have also been influenced by APSF funding. Dr. Warner writes that “… The support provided by the APSF has encouraged (me) to extend outcome-based research efforts at (my) institution. A Perioperative Outcomes Group has been formed.” Dr. Richard Moon notes that “… the APSF funding fills an extremely important niche. It allows investigators to examine issues which have not been traditionally supported by other funding agencies, yet which can have a tremendous impact on patient care. Additionally, the level of funding is certainly above the critical threshold necessary to initiate a research project in an academic setting.”

That the APSF funding has helped stimulate funding from other sources is noted by Dr. Dwayne Westenskow. His grant support for the development of neural networks for alarms “… was followed by two and a half years of support from NIH and four years of contracted research support from Ohmeda.”

Drs. W. Bosseau Murray and Arthur Schneider describe how APSF support can have a strong influence on a local environment. “Even though our participation in the APSF grant has been quite recent, the project can be said to have had a marked impact on the interests and focus of our department: we feel that it was valuable in demonstrating the value of a dedicated education laboratory in a teaching anesthesia department.”

Advice to Applicants

To those who will apply for funding, I urge that you review the list of topics awarded and judge how your proposed work fits whatever pattern you discern. One observation from my reading of the reviewers’ comments over nine years is that novelty and potential broad impact on the field are high priorities. This is not always so, as evidenced by some of the very targeted subjects that have been funded. When in doubt, contact the committee chairman or another reviewer for advice. Although it should go without saying, you certainly should READ THE GUIDELINES carefully. It seems that many applicants simply haven’t read what is expected or haven’t taken the advice to seek the counsel of someone experienced in grant writing. An application from a first-time investigator who has not had assistance in preparing the application is not likely to fare well. When there are 47 applications, there is not much chance for those that are not well written or, especially, those that do not address the objectives and priorities of the Foundation.

Most important remains the quality of the application itself, especially the methods proposed. The quality of proposals has improved since the first years, although the very top group of applications was always quite good. There remains, however, a substantial fraction of applications that suffer from an insufficient review of prior work on the topic, the lack of a sensible, objective methodology, or the absence of a clear description of how the results will be analyzed. One glaring, frequent omission from studies that purport to test a hypothesis is the lack of any analysis, or often any mention, of statistical power. It is important that statistical consultation be sought if the project requires statistical tests beyond the most rudimentary. Even then, the application must address whether the proposed sample size can achieve statistical significance, unless the study is exploratory, hypothesis-generating, or of a sort that is not amenable to statistical analysis.
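As an illustration only, the short Python sketch below shows the kind of sample-size calculation such a power analysis involves; the complication rates, alpha, and power used are hypothetical numbers chosen purely for demonstration, not values drawn from any APSF-funded study.

import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided two-proportion z-test."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = nd.inv_cdf(power)            # quantile corresponding to the desired power
    p_bar = (p1 + p2) / 2                 # pooled proportion under the null hypothesis
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical example: detecting a drop in a complication rate from 4% to 2%
# with alpha = 0.05 and 80% power requires about 1,141 patients per group.
print(n_per_group(0.04, 0.02))

Even a rough calculation of this sort signals to reviewers that the feasibility of the proposed study has been considered.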

It is useful to consider who the reviewers are and what they are looking for in their reviews. The APSF reviewers are a diverse lot. Some are from the hard sciences; some come with strong clinical research experience, while others bring an industry perspective. The range of reviewers’ demands on the rigor of study design or for hypothesis testing is similarly broad. There is a fairly large variance in the scores, although a top group of applications always emerges from the pack. The names of the reviewers and their email addresses are listed in the Guidelines for Applications.

Where Do We Go from Here?

Last October, a group of past and current members of the SEC met to review the history of the grants program. We considered whether the APSF is meeting its objectives of stimulating useful research and promoting the concepts of patient safety. The stated objective and priorities were reconsidered, as was the process for soliciting and reviewing applications. The consensus was that, as reflected in the number and diversity of topics represented by publications, this program has indeed achieved and perhaps exceeded reasonable expectations. We discussed at length how things might have been done differently and what should be changed. Although some have suggested that we target specific areas for funding, it was the view of this group that requests for proposals in specific areas would stifle the innovation that has been sought and found. Thus, unless there is a compelling reason or special interest from the Executive Committee, the general policy of open solicitation will not change. But, given the perceived improvement in patient safety, we recommended, and the APSF Executive Committee approved, that we alter the general charter to state that the intent of projects should be “to benefit healthy patients or be broadly applicable” (italics indicate added words). The limitation to use “existing medical knowledge” was removed. And, rather than be “readily incorporated into medical practice,” it is now sufficient that a project have a defined and direct path to implementation into clinical care. In addition to the illustrative list of topics given in recent guidelines for applications, the topic “approaches to training and education likely to improve performance or generally improve safety” was added. Because the resources of the APSF are so limited, priority will be given to grants that do not have other available sources of funding.

Most unsuccessful applicants appear to appreciate feedback about why their application was not funded. Although all have been informed of how to request such feedback, only about half do contact the committee chairman. Because the work of organizing the comments of reviewers is considerable and the committee has so much to do in reviewing other applications, we will continue the policy of providing this information only when it is requested. Because of the workload on the reviewers, detailed comments will not be written for those applications that are disapproved by at least two reviewers; these will receive a list of reasons for disapproval. All others will receive a more detailed explanation summarized from the reviewers’ comments.

In response to comments in the survey and suggestions of the reviewers for improving applications, some important issues will be highlighted in the guidelines for applications, e.g., how to get more information, topics given priority, and common reasons that applications are not funded.

Some have suggested that the availability of funds be publicized to a wider audience. Although there are already more applications than can be managed easily, a greater diversity of ideas is always of benefit. No overt additional effort at publicity will be made, but information is available on the new APSF World Wide Web home page.

Dr. Cooper, a founding member of the APSF Executive Committee, is from Harvard and the Massachusetts General Hospital, Boston.