Circulation 60,475 • Volume 15, No. 4 • Winter 2000

Intraoperative Physiologic Data Collection

Lori Cross

For over a decade, I’ve been looking for something that is elusive and much more difficult to actually achieve: a new way for us to approach patient safety. Patient safety, if you think about it, has always been approached from the perspective of looking at incidents and closed claims. We saw some of the research which has been done already, and most of that work has taken place after a patient, or a series of patients, has suffered some injury. So what we’re looking for is a fundamentally different way to address patient safety proactively. And so, in that spirit, what I’m presenting today is intraoperative physiological data. But before I do that, I have to tell you that to really achieve that kind of ambitious goal, after 10 years of working in this industry of record keeping, I feel it’s very important to begin by looking at what the end point really is. As you as clinicians, or those in industry, start to work on equipment that will work in this setting, do you know how the data is going to be used? Do you understand the questions that are going to be asked by clinicians and outcomes researchers? If you do know that end, it really helps you design the system from the beginning. So what is the desired outcome? Where is it that we’re trying to get? I think most of us would agree that the reason for putting forth this kind of effort over such a long period of time is to fundamentally improve patient care and safety. In thirty years we’ve improved safety maybe 30-fold, but we still have very fundamental issues to address. So to improve patient care and safety, we’ve got to work on our processes. We’ve got to find new ways, more effective and efficient ways, to deliver clinical care. But whose call is it to say what’s a better or more efficient way to deliver anesthesia care? We can’t do it alone, and we can’t do it through guesswork or gut instinct.
We’re going to have to do it through knowledge. So how do we enhance our knowledge base? Well, that brings me to my topic, which is getting more comprehensive data: very comprehensive, robust, and unassailable data that all clinicians can agree on. It is also important to understand the problems that we think we can address.

What are those breakthrough opportunities for improvement that we’ve all been aware of, and how do we focus our efforts on them? First of all, there is a term that we use called the "information paradox." You’re all familiar with it if you’ve practiced anesthesia. The moment when you most need to record the data, which is so critical, is the same moment when you have the least amount of time: those critical times during induction and emergence, but also during cardiac arrest. This is not the time to be recording data. When you do have time, during maintenance, it’s not exactly the time when you have a lot of unique information to record. So this paradox is something we have to understand in order to design our data collection systems to be as automatic as possible during those very, very busy times. And we have to ensure that the data collection is unbiased. The bias is not intentional, but we as human beings tend to look for the regular or expected result. There are numerous studies in human factors that show we look for the information we expect. We have to collect unbiased data legibly, with access for everyone during the care process. It is also important to make sure that we can aggregate this data in a way that can answer questions, not simply create a large database of thousands and thousands of cases. So, I’m going to talk about four windows of opportunity. These are windows you need to look through as you design your own unique information system or design the equipment that works with it.

First is the element of definition. We’ve got to define very clearly what it is we’re collecting. Second, we’ve got to reduce artifacts at the source: push back, access those signals, and process them as early as possible in the chain of collecting intraoperative data. Third, we have to collect the right amount of data, the amount needed to actually answer the questions in front of us. Then finally, my favorite adage frames the last window: "inspect and protect what you collect." If we’re going to spend this time and effort getting this data into a system that’s really solid, we’ve got to make sure that it’s protected and that we can recall it.

I want to review each of these windows in a little more detail, and hopefully we’ll talk about them during the discussion. The first point is about clear data definitions. We all know what is meant by the term arterial blood pressure. But then we’re faced with a very simple question, for example: does drug XYZ increase blood pressure variability during cardiac surgery? If we had the data, we could answer the question. First, though, we have to define the data that would go into that decision making. The definition needs to clarify how the data was obtained. Points to consider include whether this is a noninvasive or an invasive pressure reading. Then, determine the variances in the way the data is recorded: continuous versus intermittent. From a continuous reading versus one every five minutes, you might draw different conclusions about the variability of blood pressure during cardiac surgery. The location of the measurement site (the neck, the leg, the arm) will impact your reading, so consider where we actually get the arterial pressure. And lastly, deal with multiple sources of blood pressure readings. Do we take one over the other? Is the one that comes from the catheter always superior to the one that comes intermittently from a cuff? There are many, many philosophies, algorithms, and ways to approach multiple-source data. This is a very important part you need to know if you’re designing a system for your institution, so that you understand the trade-offs inherent there. So great data begins with very clear definitions. If you don’t know what you’re recording, then when you aggregate the data it will come back to bite you. We’ve learned this from experience.
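As a concrete illustration of what a clear definition might capture, here is a minimal sketch in Python. The field names and the prefer-the-invasive-source policy are assumptions made for illustration, not any institution's standard; the point is that method, site, and sampling mode travel with every reading.

```python
from dataclasses import dataclass
from enum import Enum

class BPMethod(Enum):
    NONINVASIVE_CUFF = "cuff"
    INVASIVE_ARTERIAL = "arterial_line"

@dataclass(frozen=True)
class BloodPressureReading:
    systolic_mmhg: float
    diastolic_mmhg: float
    method: BPMethod
    site: str          # e.g. "radial", "femoral" -- the measurement site matters
    continuous: bool   # continuous waveform vs. intermittent cuff cycle
    timestamp_s: float

def preferred(readings):
    """One simple (hypothetical) multiple-source policy: prefer the
    invasive arterial-line reading, falling back to the cuff."""
    invasive = [r for r in readings if r.method is BPMethod.INVASIVE_ARTERIAL]
    return invasive[0] if invasive else readings[0]
```

Because the definition is explicit, a later analysis of "blood pressure variability" can filter on `method` and `continuous` instead of guessing how each number was obtained.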

The second window of opportunity is about "garbage in, garbage out." A good old saying, and very, very true in anesthesia record keeping. There’s our bulldozer of reducing artifacts at the source. When I started with information systems and monitoring systems, we understood that alarms were a problem, and we put a lot of effort and time into making the alarms less of a nuisance and less of a hassle. But through the efforts of people like Dr. David Edsall, sitting up here, and others, we learned that the data output streams were also prone to artifacts which needed to be eliminated at the source. Don’t just change the alarm scheme so that you don’t see the alarm; really eliminate the artifact from the signal. A lot of the state-of-the-art systems have now invested many, many years in improving the artifact rejection of physiological data that may be corrupted by various factors. Some systems even use multiple sources: if you’re looking at heart rate, for example, and you’re measuring the heart rate from the ECG, and you’re getting an electrocautery artifact, it will automatically switch over to the pulse oximeter. But then if the pulse oximeter starts showing motion artifacts, it goes back to the ECG. This kind of source switching is very nice if that’s what you want, but you need to know that the switch has occurred when analyzing the data, or make a choice not to have it happen. As for modification of recorded vital signs: if they go in clean, you don’t need to modify them, though you may still have some artifact. The current thinking is that you don’t want to be able to change the vital sign data, because then it calls your whole record into question about whether that is, or is not, what really happened during the surgery.
Of course, that brings up the other issue: your artifact annotation has to be very quick and very simple, so that the clinician can do it without any additional effort. That has taken some additional work, and some improvement is still needed there.
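The ECG-to-pulse-oximeter fallback described above can be sketched as a simple selection function that also records which source was used. The function name and artifact flags are hypothetical; real monitors embed this logic in firmware, but the key design point, that the switch itself is part of the record, is the same.

```python
def heart_rate_with_provenance(ecg_hr, spo2_hr, ecg_artifact, spo2_artifact):
    """Pick a heart-rate source and record which one supplied the value.

    Illustrative sketch of the multi-source switching idea: prefer the
    ECG, fall back to the pulse oximeter during electrocautery artifact,
    and report 'none' rather than fabricate a number if both are bad.
    Returns (heart_rate, source_label).
    """
    if not ecg_artifact:
        return ecg_hr, "ECG"
    if not spo2_artifact:
        return spo2_hr, "SpO2"
    return None, "none"  # both sources corrupted; annotate, don't invent
```

Storing the `source_label` alongside the value is what lets a later analyst know that a switch occurred, rather than silently mixing two different measurements into one trend.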

The third window of opportunity is about how much data to collect. When we talk to people, there are all different kinds of input on this particular topic. Most anesthesia records are kept at five-minute intervals, right? It’s a very common frequency, but once you go to electronic recording, you can do anything you want. It’s not the technology which limits the amount of data or the granularity of the data you record; it’s really your own use of that data after the fact. Some people think less is more: let’s just get right to the point, average the signals, give me a five-minute average and that’s what I’m going to use for my information. That might be fine if your endpoint is just to create a record. Sometimes more is better. If you’re looking at a cardiac arrest, or as Matt mentioned, you sometimes want to be able to recreate the actual QRS waveforms as they occurred. You need to know your endpoint in order to design your system to collect the right granularity of data. Some locations use 15- to 20-second sampling; they fill up large data files. Some use one minute. A few I know of are still using five-minute intervals, basically for the purpose of record keeping. You just need to be aware that if you have a high frequency of sampling, you’re going to have to look at the post-processing to make sure that you know how the information is averaged and what you really are getting at the end point. Watch out also for sampling bias. This is one of the things we’re trying to protect against, right? The human bias is to see only the normal or expected result, so we’re going to have a computer record the real number. Well, computers have bias, too. I won’t get into the Nyquist theorem and other elements of signal processing, but it’s important to understand the physiological biases which occur. You know the PA pressure waveform is affected by respiratory rate, correct?
If that’s at the same frequency as 60 Hz or as your sampling interval, then you may get bias introduced into the actual number recorded for the PA pressure. As long as you are aware, there shouldn’t be any problems, but you need to know how your equipment is averaging and recording the intraoperative data. So if you begin with that end in mind, then the anesthesia record and the use of that record, or the database, can be very clear. It’s important to note that the anesthesia record, the part which has certain legal and confidentiality requirements, may not be the same as the database. If you record things in a database at a higher frequency but have a signed record with five-minute intervals, it may be permissible to separate them. Bottom line: how much data do you need, and what are you going to do with that data? Now, there are a lot of systems out there, and some of them integrate monitors, machines, pumps, and record keepers all in one. These kinds of systems are usually consistent in how they sample data and can build on multiple-source data in a way that’s logical. Other systems have that within the anesthesia machine and monitor, but then use a separate record keeper, so all of this artifact rejection and data processing occurs before the data moves into the record keeper. But in the real world, we all have different equipment out there. We’ve got a lot of stand-alone devices (pumps, machines, monitors) coming from multiple manufacturers, and the way they come into the record keeper can be unique: through a multiparameter monitor, through a machine, or directly through a record keeper. My bottom line here is you need to know how they link together. How does the data get averaged as it moves from the point of signal detection to the point of database recording? If you have knowledge of that, then you can use the information intelligently.
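To make the sampling-bias point concrete, here is a toy simulation with purely illustrative numbers: a PA pressure whose mean is modulated by a 4-second respiratory cycle, sampled at exactly that period. Every sample then lands on the same phase of the respiratory swing, so the computed "average" is shifted away from the true mean.

```python
import math

def pa_pressure(t, mean=25.0, resp_amp=5.0, resp_period=4.0):
    """Toy PA pressure (mmHg): a constant mean modulated by respiration."""
    return mean + resp_amp * math.sin(2 * math.pi * t / resp_period)

def sampled_mean(sample_period, offset=0.0, duration=60.0):
    """Average of samples taken every sample_period seconds."""
    n = int(duration / sample_period)
    samples = [pa_pressure(offset + i * sample_period) for i in range(n)]
    return sum(samples) / len(samples)

# Sampling locked to the respiratory period hits the same phase every
# time, so the average absorbs the full +5 mmHg swing.
biased = sampled_mean(4.0, offset=1.0)  # -> 30.0, not the true mean of 25
# Dense sampling averages the respiratory swing out.
fair = sampled_mean(0.1)                # -> ~25.0
```

This is the aliasing effect the Nyquist theorem describes: the equipment's sampling and averaging scheme, not the patient, produced the 5 mmHg offset.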

The last window of opportunity, "inspect and protect what you collect," is a very important concept. Many state-of-the-art systems now incorporate what we would call validation protocols, which allow you to pull out the out-of-range errors, things that just don’t fit the bill. Outliers can be identified and eliminated, and that can be very useful. But you also need to think about data backup and the storage of the data. How often are you going to do it, and at what granularity? You might record at 15-second intervals as mentioned, but then as you go out in time, after 24 hours you might reduce the amount of backup and storage you keep. Security audit trails, and certainly the way that we retrieve the data, should shape how we populate the database. If you don’t know how the data will be retrieved, it’s going to be difficult to populate the database correctly.
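A validation pass of the kind described can start as nothing more than a table of plausibility limits. The ranges and field names below are illustrative assumptions, not clinical standards, and flagged values are marked for review rather than deleted, so the underlying record is never altered.

```python
# Hypothetical plausibility ranges for a validation pass; real limits
# would come from the institution's agreed data definitions.
PLAUSIBLE = {
    "heart_rate_bpm": (20, 250),
    "map_mmhg": (20, 200),
    "spo2_pct": (40, 100),
}

def flag_outliers(record):
    """Return the names of fields whose values fall outside the
    plausible ranges. Flagging for review, instead of silently
    deleting, keeps the stored record intact and auditable."""
    flagged = []
    for field, (lo, hi) in PLAUSIBLE.items():
        value = record.get(field)
        if value is not None and not (lo <= value <= hi):
            flagged.append(field)
    return flagged
```

For example, `flag_outliers({"heart_rate_bpm": 300, "map_mmhg": 80})` would flag only the heart rate, leaving the decision about that value to a human reviewer.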

So, four key points. Clear data definitions: a shared definition across all of the clinicians is an ideal way to start, because then you don’t get into questions after the fact. Reducing artifacts at the source: cleaning the data up before it gets in is the most efficient way to do it, and a lot of people have spent time on that. Getting the right amount of data: how much, for how long? And lastly, inspecting and protecting the data that we record. If we do those things, I think I can stop my decade-long search and get to an end that’s really worth beginning. My vision of that end is something very dramatic and profound for all of you, but it requires that we have standard anesthesia data dictionaries across institutions, so we can all define what we mean by arterial blood pressure, heart rate, pulse rate. What do these terms mean, and are we using them the same way when we aggregate our data? A voluntary anesthesia database, hopefully a global one. That would be wonderful. It would facilitate multi-center research studies of all this data that we’re collecting, as well as help us conduct outcomes research, locally or globally. And then lastly, a point that people don’t bring up often: legal protection for those who actually contribute to the database. Begin with that end in mind, and I think that we’ll have data systems that serve all of our needs.
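What one entry in such a shared data dictionary might look like can be sketched as a plain mapping. Every field name here is invented for illustration and does not reflect any published anesthesia standard; the point is that the dictionary, not each institution separately, answers the definitional questions raised earlier.

```python
# A sketch of one shared data-dictionary entry; all field names are
# illustrative assumptions, not a published standard.
ARTERIAL_BP_SYSTOLIC = {
    "term": "arterial_blood_pressure_systolic",
    "unit": "mmHg",
    "methods": ["noninvasive_cuff", "invasive_arterial_line"],
    "sampling": "intermittent_or_continuous",
    "site_required": True,  # the measurement site must be recorded
    "notes": "When cuff and arterial-line values overlap, record both "
             "with their source rather than merging them.",
}
```

If every contributing institution populated its database against entries like this, aggregated multi-center queries could trust that "systolic arterial pressure" means the same thing everywhere.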