Reports from the Top End – The TriClinicians Cup

Recently the Aeromedical Society of Australia had their annual conference up in Darwin. This is the first of a few posts from people who got there – Dr Sam Bendall with a report from sim land.

I love the way simulation brings people together. All aeromedical services use simulation as a training tool and that familiarity allows fun to be had and challenges to be set in the form of the 3rd annual ASA & FNA Simulation Cup.

The Getting Ready

It’s quite tricky putting together scenarios that will work for different team configurations and will be fun but enough of a challenge. We also had to be super careful to keep everything under wraps to keep it fair, so only 4 of us from NSW knew anything about it. I have enormous respect for my predecessor in organising the Simulation Cup – Ben Meadley from Ambulance Victoria. He was unfortunately unable to make it this year but he was always on hand to provide guidance and advice in putting together this challenge. Thanks a million Ben! Kate Smith, the conference organiser, and her team were incredibly flexible and happy to work around the simulation craziness…. you want to what…. where ….. and with THAT??? allllrrriiight 🙂

Our logistics team was once again truly awesome – they are completely unflappable. Despite doing 3 training events in 2 weeks in different states, they not only transported several tonnes of gear to and from Darwin, but also helped in the scenarios and sorted out transporting it all back! Legends (and a double 🙂 for that).

We do a great deal of simulation at CareFlight and we are lucky enough to have some pretty cool toys, dreamed up and provided by our amazing Logistic and Events team. The newest addition to the stable is the NT crash car simulator. We have had version 1 and 2 in NSW for some time, but this one is new to the NT team so it had to have an outing.

It’s cool and all but the engine could use some work.

The Teams

Three teams competed in the heats – Cheah and his team from Malaysia (who did their scenario in English AND their native tongue), MedSTAR and CareFlight NT. The CareFlight NT and MedSTAR teams went through to the finals that were held at the end of the conference on Friday. Spectators grabbed a cold beverage and most of the delegates came down to support the two teams – a fantastic audience turnout.

Four members of the Northern Territory Emergency Service came to help out and add fidelity to the final scenarios. Gary and his team were happy to help out and be the rescue service in both scenarios. Many of the NTES folks have done the CareFlight Trauma Care Workshops so it was great to have another opportunity to train together.

Game Time

The first scenario saw the CareFlight NT team managing two patients in a Motor Vehicle crash. Their CRM was awesome and they were so calm it was amazing. They even found Chelsea, the puppy!

The second scenario saw the MedSTAR team managing a patient who was impaled on a construction site and bleeding heavily, and another injured construction worker. They too had great communication skills and did a good job of managing their patients. At the end of the day, the scores were close but the MedSTAR team was the winner on the day – NICE WORK TEAM 🙂

It takes an enormous amount of courage to get up in front of your peers and compete in a simulation challenge. It tests your CRM skills, your ability to function under pressure and your ability to treat patients as a team. Thank you to all three teams who stepped up to the plate. You are all incredible and it was a privilege to see you all in action.

Till next time …… bring on Queenstown (which btw, is my FAVOURITE place on earth!)…


It Takes a Team

This entry could not pass without a big thanks to the following people who helped us out enormously with the Simulation Cup:

  • Kate Smith and her team – for everything!
  • Ben Meadley – for all his support and advice
  • Melinda Riall from Limbs and Things – for providing the Suture Tutor prize for the Simulation Cup Final
  • Anthony Lewis – for providing an iSimulate unit for use in the scenarios
  • Stacey Williams from Zoll – for providing a defibrillator for use in the scenarios
  • NTES – Mark Fishlock, Gary and their team
  • The judges (some of whom were co-opted at very short notice – thank you!):
    • Mary Morgan – Hunter & New England Retrieval Service
    • Anthony De Wit – Ambulance Victoria
    • Paul Gallagher – NET
    • Andrew Pearce – MedSTAR
    • Emmeline Finn – CareFlight QLD
    • Andrew Duma – RFDS Sydney
    • Lachlan Beattie – NSW Ambulance
    • Lindsay Court – NSW Ambulance
  • Martin Dal Santo – CareFlight Logistics Team – he made EVERYTHING work!
  • Don Kemble – Manager Facilities and Logistics CareFlight – Enormously helpful with planning and logistics.
  • Ken Harrison – outstanding confederate performances – thank you
  • Richard Potts – AV guru from Kate’s team
  • Kellie Robertson, Danny Hickey and the AV team from the Darwin Convention Centre
  • Sarah and Ursula – fabulous coordinators from the DCC
  • Justin Treble, Elwyn Poynter and the rest of our fabulous education team – for all your help at the last minute making technology work and packing up!

Does video make for little airway stars?

Most of us are always on the lookout for new techniques to make difficult cases easier. Videolaryngoscopy is one area of great change over the last decade. Here Andrew Weatherall looks at videolaryngoscopy as it relates to looking after the little kidlet airway.

Seeing is believing. It can happen in a moment in sport. It’s the whole basis of magicians plying their trade.  Even people seeing mysterious circles appearing in crops want to believe.

Perhaps that impulse is why everyone wants to believe in videolaryngoscopy. And it makes sense. It’s persuasive. The view is better than your eyes alone. It must be better.

And yet … the evidence doesn’t help us back up our gut reaction. So the debate starts. It’s a pretty big debate too. Too big for here.

So let’s just talk about one bit. Let’s see where videolaryngoscopy fits in with kids.

Open Bias

I should declare an interest here. I like videolaryngoscopy. I work in operating theatres where it’s freely available. In our prehospital operation we use it as routine. This is not to say I don’t dig direct laryngoscopy. I just really like an intubating experience that’s a little more IMAX. That isn’t even because I’m particularly a gear junkie. I’m only interested in tech if it helps me do a better job looking after patients.

So what’s so great about videolaryngoscopy? It’s not the view that it gives. It’s the team that it gives. My subjective experience is that when taking on a  slightly challenging airway the greatest benefit of using videolaryngoscopy is that all members of the team managing the airway can appreciate what is going on.

Sharing the same vision is the quickest way to get everybody operating on the same page. It’s particularly beneficial in getting any airway assistant providing external laryngeal manipulation to line up the view in the best possible way.

These observations are the same ones that colleagues who are fans of videolaryngoscopy seem to make. They note some drawbacks too (blood in the airway being the obvious one). More and more though, videolaryngoscopy is perceived as the go-to option for the extra few % that makes intubation a sure thing.

So does the evidence match that perception? And if not, why not?

What’s the Problem?

Perhaps it’s worth remembering that difficult intubation in kids isn’t that common. Some of the morphological changes that might be associated with difficult intubation are relatively common on their own. Restrictions to neck extension, a small mouth and jaw, a big tongue and dysmorphic appearance may be associated with difficult intubation. Of course most kids with these features still have a straightforward intubation.

A team from Erlangen published a retrospective review not that long ago looking at this issue. Looking back over a period of 5 years (while excluding records that were incomplete or where intubation wasn’t relevant) they ended up looking at 8,434 patients who had a total of 11,219 procedures. 152 (1.35%) of direct laryngoscopies were classified as difficult laryngoscopies (grade III or IV views).

1.35% isn’t much. Note also that they are talking about laryngoscopy, not actual intubation or airway management. Certain surgery groups had a relatively higher rate (oromaxillofacial and cardiac surgery patients) as did kids under the age of 1. The wash-up is that if we were to choose videolaryngoscopy to help with difficult laryngoscopy, we’re choosing that for < 2% of the population. This choice is fine but we at least need to understand the size of the problem we’re trying to address.

The 2% is something like the size of one of the eggs vs that ginormous bug.

The Numbers For VL

Well they’re in and they’re not particularly supportive of the idea that videolaryngoscopy in kids is vastly better. Here’s one study where Truview PCD and Glidescope didn’t help with the view and slowed things down. Here’s another small series where the Glidescope doesn’t necessarily help with the view.

Of course rather than keep picking out individual studies, we could try to take on board the evidence from a meta-analysis. Sun et al have done the hard work, looking at fourteen studies which had a randomised component.

Their findings? Videolaryngoscopy generally improved the view of the airway in kids with normal airways or potentially difficult airways. However the time to intubation was longer in pretty much all groups. Interestingly, the rate of failure was much higher with videolaryngoscopy (there was lots of heterogeneity in the included studies so that particular finding probably needs more than a few grinds of the giant salt mill).

Cochrane has a review specifically in neonates which is useful … to demonstrate that there’s not enough useful evidence.

What Don’t the Studies Say?

Well it already looks like the answer is “much”. Perhaps this is what I take away from them.

1. The evidence doesn’t justify a move away from direct laryngoscopy

I think videolaryngoscopy is still best considered as a technique to use as an adjunct, building off really good direct laryngoscopy technique. If the spiel is that VL “improves your view by one Cormack and Lehane grade” then implicit in that is the assumption that your view was already optimised.

For the vast majority of patients who have a grade I/POGO 100 laryngoscopy, videolaryngoscopy can’t improve your view (obviously). However you may reach the same view with slightly more ease. This applies particularly to videolaryngoscopy options that build off a standard laryngoscope design (rather than the Glidescope for example which has its own special learning curve).

Doesn’t logic suggest that if you need to work less to achieve grade I, II or even III views, your technique runs the risk of becoming reliant on the extra % that videolaryngoscopy gives you? For video laryngoscopes that operate pretty much like standard laryngoscopes with a little bit extra, you need your technique with direct laryngoscopy to get you most of the way there. The “video” bit is for the last few percent.

So good training in direct laryngoscopy techniques remains vital.  Practitioners will still need to understand the difference in technique required for different laryngoscopes and what the implications are for patient positioning to optimise success rates.

2. More nuance in the research would be helpful

Meta-analysis relies on the contributing papers. There’s presently a bit of heterogeneity there, including in the level of experience in those using the devices. Follow-up studies (or just fresh studies) when people have become highly used to videolaryngoscopy would be an interesting addition to the literature – how long does proficiency take to develop?

What about managing the unanticipated difficult airway case? That seems to be a whole area that isn’t well addressed by the current literature. Or measurement of decision-making and overall management of the airway when videolaryngoscopy is available?

There’s also a tendency to focus on clumps of trees rather than the whole forest. This is pretty common to airway papers. Often the focus seems to be on ‘time to tracheal intubation’ (which isn’t the worst surrogate to choose) or, less productively, on the view of the glottis or first pass success. This touches on the same territory discussed by Alan Garner here on measuring surrogates rather than clinically meaningful parameters.

Seeing the glottis more doesn’t equate to the airway being managed.  First pass success isn’t the most vital of measures. Time to tracheal intubation from laryngoscope in hand might be a little more helpful, but is it more useful than time from induction to airway secure in the patient with a difficult airway? Should we be reporting on desaturation rates with one technique over another given that the aim of airway management isn’t just the bit of plastic?

3. Measuring teams

The other thing the literature doesn’t address is that subjective sense of utilising the team better in difficult airway management. It would be really interesting to see some research that examined the impact of videolaryngoscopy on the ways teams worked together or communicated in the management of the airway. Or what about performance of teams managing the airway in out of theatre locations? As things stand, the thing I subjectively feel is the best feature of videolaryngoscopy doesn’t seem to have been evaluated.

 

So where does that leave me? Not really anywhere different. Probably where it leaves me is in need of checking my own position on the seeing vs believing spectrum.

In the absence of evidence from other people I should probably rigorously examine my personal practice. Practice the use of different techniques until I feel proficient. Then measure my actual performance and see what my own benchmark performances are. Perhaps really rigorous personal auditing (not the Scientology version) is the next step in understanding how VL should fit into my practice and how it measures up to DL.

It’s only after that that I’ll really know if I’m seeing what I think I’m seeing.

 

The References:

Heinrich S, Birkholz T, Ihmsen H, Irouschek A, Ackermann A, Schmidt J. Incidence and predictors of difficult laryngoscopy in 11,219 pediatric anesthesia procedures. Pediatr Anesth. 2012;22:729-36.

Riveros R, Sung W, Sessler DI, Sanchez IP,  Mendoza ML, Mascha EJ, Niezgoda J. Comparison of the Truview PCD and the GlideScope video laryngoscopes with direct laryngoscopy in pediatric patients: a randomised trial. Can J Anesth 2013;60:450-7.

Lee JH, Park YH, Byon HJ, Han WK, Kim HS, Kim CS, Kim JT. A Comparative Trial of the GlideScope Video Laryngoscope to Direct Laryngoscope in Children with Difficult Direct Laryngoscopy and an Evaluation of the Effect of Blade Size. Anesth Analg 2013;117:176-81.

Sun Y, Lu Y, Huang Y, Jiang H. Pediatric video laryngoscope versus direct laryngoscope: a meta-analysis of randomized controlled trials. Pediatr Anesth. 2014;24:1056-65.

Lingappan K, Arnold JL, Shaw TL, Fernandes CJ, Pammi M. Videolaryngoscopy versus direct laryngoscopy for tracheal intubation in neonates (Review). Cochrane Database of Systematic Reviews 2015. doi: 10.1002/14651858.

Over on Minh Le Cong’s site, he’s also previously shared something a little more positive on videolaryngoscopy.

The image here came from Flickr Creative Commons and is unaltered. It was posted by Alibi 0591.

Chat about Chests – On Holes and Whether Plastic is Fantastic

Dr Andrew Weatherall with an introduction to a new type of thing (well, for this site anyway). 

*Ahem* [clears throat].

Well, we finally thought we should try chatting. After much delay we finally sat down and tried recording a chat with a microphone. And then after a much longer delay I have finally spent some time learning what to do with all that noise. All that slightly-too-quick-talking noise.

This effort features me chatting with Dr Alan Garner about those times you need to decompress the pleural space. It seems to be an area where a lot of people have passionate ideas about how and when to intervene. This makes it ideal for a chat, although maybe harder to be definitive about what to do. Alan makes the argument that many of the disadvantages of tube thoracostomy that were first solved by the open technique have other solutions in modern practice. However, all the options have their own advantages, disadvantages, benefits and complications. That’s part of why it’s such an interesting topic.

This would be the point for a tenuous link to the concept of lying back and enjoying the talk like this otter. It’s just an excuse to share the otter.

So here is our first podcast for download (or here’s the permalink or the whole player thing). We sort of hope it will lead to plenty of associated convivial coffee-based chats.

I do need to share some extra bits of information, because it turns out 30 minutes of chatting still leaves some things unsaid:

  • This is very much a learning thing at this end. So if there are a few rough bits in the audio/recording and the like, feel free to send some constructive feedback. Promise to get better at it.
  • This chat actually happened way back in December (!!!) so apologies for taking this long to get it together. What that does mean is there’s a couple of bits that need an update – most particularly that the good Dr Garner has moved on from the Medical Director position at CareFlight. The excellent Dr Toby Fogg does that now (while Alan is still working pretty much as hard as ever, just not everywhere all at once).
  • At the end of the podcast, we have a chat about the need for research. Well I don’t know if that got him moving but Alan is now putting together a retrospective study involving lots of centres and services across Sydney. Hopefully this will provide some more evidence to add to the mix and inform how to do future research better.

Now, some papers are mentioned by Dr Garner as he goes along.

As a bonus, here’s a reference for one looking at tube thoracostomy placement (as in whether it ends up in the right place, which was the case for 78%) which sort of highlights the importance of choosing the right bit of kit and being trained well:

Oh, and as a tracheal tube is sometimes suggested as an alternative to an intercostal catheter, it’s worth looking up this recent letter to the editor from Emergency Medicine Australasia, where a patient was unstable during transport with a tracheal tube in place to maintain the thoracostomy and subsequent investigation in hospital showed it had migrated. Yep, all techniques have their problems.

Addition:

Minh Le Cong reminded me that the draft NICE guidelines relating to trauma are up for people to comment on and obviously mention chest injury amongst many other things. Well worth a look (possibly via the excellent summary by Natalie May at St Emlyn’s).

Hope you enjoy it.

Wait, there’s some more acknowledgements:

A big thanks to Dr Minh Le Cong for the encouragement and advice. 

We tried out two bits of music for this podcast and they were sourced from the lovely Podington Bear at the Free Music Archive. The first is ‘Mute Groove’ off the ‘Equatorial’ album. The end track is ‘Dole it Out’ from the album ‘Grit’. 

Along the way I also picked up many useful tips from Joel Werner and Samuel Webster (disclosure: the good Mr Webster is my brother-in-law but is quite a good artist and everything person and I suspect I would have come across his work anyway). 

The image here was from the flickr Creative Commons area and shared by Peter Trimming. It isn’t altered. 

Fidelity – can you have too much of a good thing?

Finally Dr Sam Bendall returns with another post on things educational. This time around it’s about how to focus on fidelity. You can read Sam’s earlier post right about here

The human mind is a complex machine. I am constantly amazed at its ability to “fill in the gaps” or create a reality. Like …. I was SURE I saw my keys on the bench this morning.

This is not a post about drug-altered states. (By Rob Gonsalves.)

Fortunately for those of us who love simulation as a teaching tool, this amazing ability can be exploited to create realism in our scenarios.

So this raises the question: if the most powerful simulator in the world is on top of your neck, capable of filling in many environmental deficits, how much external fidelity do we really need? I love Dr Cliff Reid’s line: “Run resuscitation scenarios in the highest fidelity simulator in the known universe … your human brain.” (you can check out the related talk here). So how do you get other people’s brains working for you in your simulation?

Searching High and Low

In doing a little research for this post, I was curious to see what others felt constituted high fidelity vs low fidelity simulation. In many sources it was simply to do with how technologically fabulous the manikin was. No mention of recreating key environmental stimuli. No mention of inserting the human factors elements that play out repeatedly in any microcosm. No mention of recreating other sensory or physical cues that affect the way we behave in any given situation and affect our decision making.

The über end of the spectrum is virtual reality – full recreation of all the visual stimuli you would ever encounter in any situation, sometimes involving goggles. Maybe something like this virtual reality “cave” simulator.

Now some folks may think that is amazing, and in my humble opinion the graphics are amazing. But how often do you treat patients with goggles on and by waving a wand thing at a wall? If you do…. well there is olanzapine for that. Last time I looked we also don’t work in a three-sided 3m x 3m box.

Actually these are just this guy’s sunglasses. [via wired.com]
The Experiences Where You Gain Experience

So let’s take a step back. Think about your most memorable experiences – positive or negative. What are the details of those experiences that caused them to be so strongly imprinted in your mind? Was it the smell? The fact that you were freezing cold? Was it to do with touch? Chances are, it was not just the view in front of you.

Now think back on the medical cases you remember. What is now stuck in your mind about them? Was it the sound of the pulse oximeter descending into the basement where hypoxia hides? Was it the conflict going on in the resus bay? Was it the difficulty you had getting a piece of equipment to work?

I put it to you that THIS is the stuff we remember. If we are using simulation as a teaching tool, we want our participants to remember what they learnt so that they can apply it when it counts. So we have to make it memorable. Perhaps we need to rethink exactly what fidelity means in simulation…

I am fortunate to work with someone I consider to be a master of simulation, Dr Ken Harrison. By making the smallest tweaks, he can add a whole new aspect to the scenario and increase the fidelity for the participants that little bit more. Usually the cost involved in making the scenarios highly memorable is about $0.

I did his scenarios many years ago as a participant in the CareFlight Pre-Hospital Trauma Course. The first of those courses ran as a trial in 2001 (not with me attending), after years before that of employing simulation in education.

I can still remember being cold. I can still remember making a cluster of our environment. I have never forgotten the lessons I learned from those as the necessary fidelity was there, even though the manikin was a Resusci Annie simulator, the monitor was a billion year old defibrillator and the Thomas packs we were using were generic. No lights, no camera, no creepy goggles. Just the cold of the ground reminding me to wear warm stuff on jobs, the difficulty in getting unfamiliar equipment to work (know your equipment) and the difficulty in getting to the head of the patient because of the tree we had centred quite nicely in our workspace.

These are lessons I have not forgotten and things I will not repeat. All this by simply setting up a scenario on the side of a moderate embankment that our minds turned into  a 100 ft cliff, on a chilly July day. Job done I reckon!

The Bits You Need to Stick

So in considering where to invest your money, time and energy in creating fidelity in your simulation ask yourself this:

What is it about this scenario that I want my trainees to remember vividly in six months time when they will really need it?

For example I want my trainees first and foremost to stay safe on the job. There are a variety of hazards in the pre-hospital environment, some of which will kill you. Like this one.

This is not a recommended way to remember where your car is. [via Springfield New Sun]
Do I need to connect the car simulator to a 12V battery to teach them to look out for power lines? No. I can bring that same learning point out with a much more subtle long fat piece of electrical wire across the simulation field (car/ building site etc.).

This means if they notice it – great! The didactic part around scene safety worked. If they didn’t, one of our confederates will draw attention to it and ask for it to be isolated. The realisation that they have all potentially been electrocuted because they didn’t look is pretty powerful. Fidelity for $9 from Bunnings. Awesome!

Similarly if they are working outside in the elements, train outside. There is no point doing a scenario in an air-conditioned classroom if you work in an aircraft that is usually around 40 degrees Celsius. Once you get used to working with sweat dripping in your eyes and keeping track of your own, your patient’s and your teammates’ temperature, you are able to concentrate on the task at hand.

Alarms are another easy one. We are so accustomed to hearing that pulse oximeter beep. Most critical care practitioners have an operant response when that tone starts to decrease or the rate goes up. It makes us look around. It can also be really distracting if the volume is turned up too high and the general anxiety level goes up. Easy way to create a bit of stress in the environment.

Then of course there’s broken things. Not everything goes well on every retrieval job. Equipment malfunctions, patients crash, the aircraft becomes unserviceable. We need to train our training audience to think laterally and deal with these problems quickly when they come up.

Most retrieval equipment sets have redundancy. Bringing this in is a different example of  fidelity. Give them a scenario and make some key equipment stop working or not work at all and watch their response. If they have a methodical approach to using the “other” equipment then they are more mission ready.

Weapon of Choice

So in essence, choose your weapons wisely. I LOVE cool toys more than most. Give me gadgets any day. BUT if you want me to remember what you taught me 6 or 12 months later or even 7 years later in the aforementioned example, make it real. Make me own it, smell it, feel it, touch it, troubleshoot it, be anxious in it, be hot/cold in it and THAT I will remember. And building that type of fidelity into your simulation usually takes neurons but not too many dollars.

 

 

Should we stop looking at first look intubation rates?

A brief note: I get to do the editing duty this week (Dr Andrew Weatherall that is) and I could not let it pass without a word of tribute to Dr John Hinds. I had only had the chance to learn from the good Dr Hinds via his online presence. It was a big presence. 

As one who did not know him personally, I can only reflect that he demonstrated many of the best qualities of a passionate doctor and that his passing, far too soon, has revealed many of the best qualities of his colleagues. 

Just in case you needed another reminder, you could watch him in action here, or read good words by @Eleytherius here, or sign a really worthwhile petition to deliver a vision for a better prehospital service for patients in NI here. 

As to this week’s post, Dr Alan Garner has a post on looking for the right outcomes so we’re doing the right thing for our patients. 

Can’t see the wood for the damn trees

As part of their intubation quality program many services now report their first look intubation rate. We have been doing so for a couple of years now. This looks like a really good thing to do. We know that more than one attempt at intubation is associated with greater incidence of serious adverse events in critically ill patients, and the more attempts the more likely those adverse events become (reference 1).

Therefore a strategy of aiming for first look success is probably a good idea, a strategy that my own service employs. So this should be a good thing to report as a quality measure too. Indeed why would you not? After all, the more attempts, the worse things get right?

Well wait a minute …

First let’s have a think about why we would report it. Is it telling us something that actually matters?

The outcomes that really matter are did they die or end up with hypoxic brain injury. The process issues that really matter are did they get hypoxic or have a cardiac arrest during the intubation process. There are other hard complications/process issues you can measure too like aspiration with unnecessary additional ventilator days, or even did you break their teeth.

First look intubation tells us none of these things. It does not tell us if the patient became hypoxic, aspirated or even arrested. Yes it is associated with lower incidence of these complications but it does not tell you if the complication actually occurred.

And what if emphasising first look intubation rate as a quality measure shifts the focus in the wrong direction? Could you actually be making the risk of hypoxia higher?

Am I losing the plot here? Let’s go back to first principles.

The outcomes that really matter are death and hypoxic injury. I don’t think anyone is going to argue these should be avoided. Fortunately the incidence of these is pretty low so we tend to use surrogates for these things instead, things like the incidence of hypoxia or hypotension/bradycardia during intubation. These are pretty direct measures reflecting outcomes that matter.

First look intubation isn’t an outcome. It’s not even a surrogate for an outcome – it’s a surrogate for a surrogate of an outcome. My concern is that surrogates for an outcome, rather than the actual outcome, can lead you way up the garden path. The MAST suit again comes to mind. The patient’s BP went up so it had to be a good thing surely. Of course when someone finally did a decent study on the outcome that really mattered, mortality, it was trending to worse not better.

Although there are no randomised controlled trials showing hypoxia to be bad for you, the circumstantial evidence is pretty overwhelming so I agree this is not quite like the MAST suit situation. However in using first look intubation as a quality measure we are now reporting a surrogate for a surrogate of the outcome that actually matters. That is, we are reporting first look success because it is associated with lower rates of hypoxia, and lower rates of hypoxia are associated with lower rates of death and brain injury.

This is a risky game and recent audits of my own service show why. For the past year we have had a monitor that records the vital signs every 10 seconds and we download the data at mission end and attach it to the record. I have been going through these records to see what our rates of peri-intubation hypoxia actually are.

First thing I need to say is that our first look intubation rate so far this year is 100%. However we did have a couple of episodes of significant hypoxia.

My concern is that by reporting the first look rate, we draw attention to it and we send the message to our teams that this is the thing that we think matters. So better to press on a little bit longer even though the sats are falling to make sure I nail that tube first time!

What was the big picture again? [via Jarod Carruthers on flickr under CC 2.0 and unaltered]
Why are we reporting a surrogate for a surrogate? I have really accurate data from the monitor on the peri-intubation hypoxia rate, hypotension, bradycardia and arrest. Why report a surrogate for these things that might actually encourage our staff to focus on the surrogate and cause an episode of hypoxia, bradycardia, hypotension etc.?

It remains important to emphasise optimising conditions for the first intubation attempt as that appears to have lower complication rates. However it is a means to an end. We should emphasise the outcomes (or at least the surrogates with only one degree of separation from that outcome) that matter. Why report a surrogate for a variable when you have the data to report the actual variable?

Some services like our own are now reporting 100% first look intubation rates, but no one is yet reporting 0% peri-intubation hypoxia rates. Aim for first look intubation as that appears to be a smart strategy, but tell your people it is the hypoxia that matters by making that the centre of attention in your reporting.

What do we mean by hypoxic?

Another thing I have been forced to look at is the definition of peri-intubation hypoxia. I had intended to use the definition of hypoxia used in many of the studies on this subject:

“Desaturation was defined as either a decrease in SpO2 to below 90% during the procedure or within the first 3 minutes after the procedure, or as a decrease of more than 10% if the original SpO2 was less than 90%.” (reference 2, see also 3-5)

I excitedly opened the data file of our first patient that we had intubated when we got our shiny new monitor a year ago to see what had happened. It was easy to identify the timing of intubation from the capnography data as we routinely pre-oxygenate our patients with a BVM device with the capnography attached. The sats pre-induction were a steady 90%; for 2 readings (20 seconds) they were 89%, and then they climbed to 98% when ventilation was commenced. So according to this definition we had a desaturation!

I don’t think anyone would claim a fall in SpO2 of 1% is clinically significant. It is also less than the error of the measurement quoted by the manufacturer of the oximetry system. This set of circumstances is not going to occur that often but it does not make sense to classify this case as a desaturation. We have therefore modified our definition to:

“Desaturation is defined as either a decrease in SpO2 to below 90% (minimum change at least 3%) during the procedure or within the first 3 minutes after the procedure, or as a decrease of more than 10% from the pre-intubation baseline if the original SpO2 was less than 90%.”
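Purely to show how that modified definition behaves in practice, here is a minimal sketch (the function name and 10-second sampling are illustrative only – this isn’t our actual audit code), applied to the index case above:

```python
def is_desaturation(pre_induction_spo2, spo2_readings):
    """Classify a peri-intubation desaturation using the modified definition above.

    pre_induction_spo2: baseline SpO2 (%) immediately before induction.
    spo2_readings: SpO2 values (%) recorded during the procedure and for the
    first 3 minutes afterwards (e.g. one reading every 10 seconds).
    Note: the ">10%" fall is treated here as percentage points, which is an assumption.
    """
    lowest = min(spo2_readings)
    drop = pre_induction_spo2 - lowest

    if pre_induction_spo2 >= 90:
        # Must fall below 90% AND the change must be at least 3%
        # (i.e. bigger than the quoted error of the oximeter).
        return lowest < 90 and drop >= 3
    # Started below 90%: needs a fall of more than 10% from the pre-intubation baseline.
    return drop > 10


# The case above: baseline 90%, two readings of 89%, then recovery to 98%.
print(is_desaturation(90, [90, 89, 89, 98]))  # False - no longer counted as a desaturation
```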

So what should we be reporting?

Thomas reported that each subsequent attempt at intubation was associated with an increased risk of hypoxia, aspiration, bradycardia, cardiac arrest etc. If we have the data on these variables then why not report them directly instead of reporting the surrogate for them? For hypoxia I would suggest our slightly modified definition above.

As for other variables why not use the definitions from Thomas’ paper?

  • Bradycardia – HR <40 if >20% decrease from baseline
  • Tachycardia – HR >100 if >20% increase from baseline
  • Hypotension – SBP <90 mm Hg (MAP <60 mm Hg) if >20% decrease from baseline
  • Hypertension – SBP >160 mm Hg if >20% increase from baseline
  • Regurgitation – gastric contents requiring suction removal during laryngoscopy in a previously clear airway
  • Aspiration – visualisation of newly regurgitated gastric contents below the glottis or suction removal of contents via the ETT
  • Cardiac arrest – asystole, bradycardia or dysrhythmia with non-measurable MAP and CPR during or within 5 minutes of intubation

 

For the physiological definitions Thomas includes percentage change from baseline like we do with the hypoxia definition. This acknowledges that these are critically ill patients and often have deranged physiology before we start. These definitions can therefore be used in the real world in which we operate. If we all adopted these definitions we could meaningfully compare ourselves with Thomas’ original paper and with each other.
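To make that threshold-plus-percentage-change logic concrete, here is a small illustrative sketch (not our reporting tool, and the variable names are made up) for the heart rate and blood pressure definitions:

```python
def flag_physiological_events(baseline, during):
    """Apply the definitions above: each event needs both the absolute threshold
    and a >20% change from the pre-intubation baseline.

    baseline and during are dicts with 'hr' (beats/min) and 'sbp' (mm Hg).
    """
    hr_change = (during["hr"] - baseline["hr"]) / baseline["hr"] * 100
    sbp_change = (during["sbp"] - baseline["sbp"]) / baseline["sbp"] * 100

    return {
        "bradycardia": during["hr"] < 40 and hr_change < -20,
        "tachycardia": during["hr"] > 100 and hr_change > 20,
        "hypotension": during["sbp"] < 90 and sbp_change < -20,
        "hypertension": during["sbp"] > 160 and sbp_change > 20,
    }


# A hypothetical patient who is already tachycardic and borderline hypotensive at baseline:
print(flag_physiological_events(baseline={"hr": 115, "sbp": 100},
                                during={"hr": 122, "sbp": 85}))
# Nothing is flagged, because none of the changes exceed 20% of baseline.
```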

And as for us…

We are seriously thinking about ditching the reporting of first look intubation rate. It is not telling us what really matters – and we can’t get better than our current 100% rate anyway. Despite this we are having occasional episodes of hypoxia and other complications, and it is possible that the rate of these complications is being exacerbated by emphasising first look.

We are therefore looking at moving to the much more comprehensive set of indicators used by Thomas (along with our modified hypoxia definition). This will demonstrate to our team members the factors that we think really matter, because we measure them and report them externally.

You could argue that the only way to achieve 0% hypoxia is to accept that we are not going to have a 100% first look intubation rate. I for one would gladly give up our 100% first look rate if in doing so we achieved 0% hypoxia. I don’t yet know if this is achievable but I have some ideas. Those who walk the quality & patient safety road with me know that we might never arrive, but that should not deter us from the journey.

Anyone coming?

 

References:

1. Thomas CM. Emergency tracheal intubation: complications associated with repeated laryngoscopic attempts. Anesth Analg 2004;99:607-13. [Full text.]

2. Nakstad AR, Heimdal HJ, Strand T, Sandberg M. Incidence of desaturation during prehospital rapid sequence intubation in a physician-based helicopter emergency service. Am J Emerg Med 2011;29:639-44.

3. Reid C, Chan L, Tweeddale M. The who, where, and what of rapid sequence intubation: prospective observational study of emergency RSI outside the operating theatre. Emerg Med J 2004;21:296-301.

4. Omert L, Yeaney W, Mizikowski S, et al. Role of the emergency medicine physician in airway management of the trauma patient. J Trauma 2001;51:1065-8.

5. Dunford JV, Davis DP, Ochs M, et al. Incidence of transient hypoxia and pulse rate reactivity during paramedic rapid sequence intubation. Ann Emerg Med 2003;42:721-8.

Sandpits, Better Eyes and New Monitors – Can NIRS work for prehospital medicine?

This is part 2 of a series (part 1 is here) on trying to study near-infrared spectroscopy in the prehospital setting by Dr Andrew Weatherall (@AndyDW_). Can NIRS work? No one can be sure but here’s one approach to getting some data we can actually use. 

A while back I did a post where I pointed out that when you get sold technology, there’s a whole history behind the machine that goes beep, which means it’s probably not quite what you’re told. And the example I used was near-infrared spectroscopy tissue oximetry.

That was partly because I’m involved in research on NIRS monitoring and I’ve spent a lot of time looking at it.  Like every time I look carefully in the mirror, there’s a lot of blemishes that I miss on a casual glance. I also don’t mind pointing out those blemishes.

So that post was about all the things that could get in the way – light bouncing about like a pinball, humans being distressingly uncatlike, comparing monitors that might be apples and aardvarks rather than apples and apples, and basing your whole methodology on assumptions about tissue blood compartments. Oh, and maybe you can’t get sunlight near your red light.

Sheesh.

The thing is, I really want to answer that original question – “How’s the brain?”

So enough of the problems, can we find some solutions?

Actually I’m not certain. But I can say what we came up with. It’s a plan that involves sandpits, hiding numbers and finding better eyes. Oh, and changing the design of monitors forever and ever.

 

Playing in Sandpits

Our first step was to try and figure out if NIRS technology could even work in the places it wasn’t designed for. Not near the cosy bleeping of an operating theatre monitor where the roughest conditions might be inflicted by a rogue playlist.

We figured that the first issues might be all the practical things that stop monitors working so effectively. And we already knew that in the operating suite you often needed to provide shielding from external light to allow reliable measurements.

So we asked for volunteers, stuck sensors to their heads and took them driving in an ambulance or hopped on the helicopter to do some loops near Parramatta. It gave us lots of chances to figure out the practicalities of using an extra monitor too.

And we learnt a bit. That we could do it with some layers of shielding between the sensors and the outside world. That the device we tested, though comfortable next to an intensive care bed, was a bit unwieldy at 6 kg and 30 cm long to be carried to the roadside. Most importantly, that it was worth pushing on, rather than flattening everything in the sandpit and starting again.

Early engineering advice included “just put a tinfoil hat on everyone to shield the sensors”. I just … I … can’t … [via eclipse_etc at Flickr ‘The Commons’]

Hiding Numbers and Getting Out of the Way

The next thing that was pretty obvious was that we couldn’t set out to figure out what NIRS monitoring values were significant and at the same time deliver treatments on the basis of those numbers. We needed to prospectively look at the data from the monitor and see what associations were evident and establish which bits in the monitoring actually mattered for patients and clinicians.

Of course paramedics and doctors tend to like to fix stuff. Give them a “regional saturation” number that looks a little like a mixed venous oxygen saturation, with the manufacturer (usually) putting a little line on the screen as the “good-bad” cutoff, and you have a pretty good way to see that fixing reflex kick in. So to make sure it really is a prospective observational study and we’re observing what happens in actual patients receiving their usual treatment we ended up with a monitor with none of the numbers displayed. Better not to be tempted.

It was also obvious that we couldn’t ask the treating team to look after the NIRS monitor, because they’d immediately stop doing the same care they always do and occasionally (or always) they’d be distracted by the patient from being as obsessive about the NIRS monitor as we need for research.

So recruiting needs a separate person just to manage the monitor. On the plus side this also means we can mark the electronic record accurately when treatments like anaesthesia, intubation and ventilation or transfusion happen (or indeed when the patient’s condition obviously changes). It’s all more data that might be useful.

Getting Better Eyes

One of the big problems with NIRS tissue oximetry so far seems to be that the “absolute oximetry” isn’t that absolute. When you see something claiming a specific number is the cutoff where things are all good or bad, you can throw a bucket of salt on that, not just a pinch.

 

Maybe all of this salt. [via user pee vee at Flickr’s ‘The Commons’]
The other thing is that picking up evolving changes in a dynamic clinical environment is difficult. What if it isn’t just the change in oximetry number, but the rate of change that matters? What if it’s the rate of change in that number vs the change over the same time in total haemoglobin measurements, or the balance between cerebral monitoring and peripheral monitoring at the same time? How does a clinician keep track of that?

What we might need is a way of analysing the information that looks for patterns in the biological signals or can look at trends. The good news is there are people who can do that, as it’s actually a pretty common thing for clever biomedical engineers to consider. So there are some clever biomedical engineers who will be part of looking at the data we obtain. When they have spare time from building a bionic eye.

My bet is that if NIRS monitoring is ever to show real benefits to patients it won’t be only by looking at regional saturation (though we’ll try that too). It will be the way we look at the data that matters. Examining rapidly changing trends across different values might just be the key.
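Just to make that idea concrete, even something as simple as a rolling rate of change per channel says more than a single number on a screen. This is a toy sketch only (made-up values, and nothing like the actual signal analysis our engineering colleagues will do):

```python
def rate_of_change_per_minute(samples, window=6, interval_s=10):
    """Estimate the rate of change (percentage points per minute) of one NIRS
    channel over a rolling window of recent samples.

    samples: regional saturation values, one every interval_s seconds.
    window: number of samples in the window (6 x 10 s = the last minute here).
    """
    if len(samples) < window:
        return None
    recent = samples[-window:]
    minutes = (window - 1) * interval_s / 60
    return (recent[-1] - recent[0]) / minutes


# A cerebral channel drifting down while a peripheral channel holds steady:
cerebral = [68, 67, 66, 64, 63, 61]
peripheral = [75, 75, 74, 75, 75, 74]
print(rate_of_change_per_minute(cerebral))    # about -8.4 per minute
print(rate_of_change_per_minute(peripheral))  # about -1.2 per minute
```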

 

Thinking About the Monitors We Need

Let’s imagine it all works. Let’s assume that even with all those hurdles the analysis reveals ways to pretty reliably pick up that a haematoma is developing, or that the brain is not receiving enough blood flow, or that oedema is developing (and there are other settings where these things have been shown). There’s still a big problem. How do you make that information useful to a clinician who has a significant cognitive load while looking after a patient?

For each NIRS sensor that is on (3 in this study) we’ll be generating 4 measurements with trendlines. The patient is likely to have pulse oximetry, ECG, blood pressure and often end-tidal capnography too. Putting together multiple bits of information is an underappreciated skill that highly trained clinicians make a part of every day. But it adds a lot of work. How would you go with 12 more monitoring values on the screen?

Yes Sclater’s lemur, that’s 16 monitoring values to keep track of. [via user Tambako the Jaguar at flickr]
So before we can take any useful stuff the analysis reveals and free clinicians to use the information, we need to figure out how to present it in a way that lets them glance at the monitor and understand it.

How should we do that? Well it’s a bit hard to know until we know what we need to display. My current guess is that it will involve getting clever graphics people to come up with a way to display the aggregated information through shapes and colours rather than our more familiar waveforms (and that’s not an entirely novel idea, other people have been on this for a bit).

So then we’d need to test the most effective way to show it before finally trying interventional studies.

This could take a bit.

And that is a story about the many, many steps for just one group trying to figure out if a particular monitor might work in the real world of prehospital medicine. There are others taking steps on their own path with similar goals and I’m sure they’ll all do it slightly differently.

I hope we end up bumping into each other somewhere along the road.

 

Notes and References:

Here’s the link to our first volunteer study (unfortunately Acta Anaesthesiologica Scandinavica has a paywall):

Weatherall A, Skowno J, Lansdown A, Lupton T, Garner A. Feasibility of cerebral near-infrared spectroscopy monitoring in the pre-hospital environment. Acta Anaesthesiol Scand 2012;56:172-7.

If you didn’t look on the way past, you should really check the video of Prof. Nigel Lovell introducing their version of a bionic eye. It’s pretty astonishing and I can’t quite believe I get to learn things from him.

It’s the very clever Dr Paul Middleton who was first author on a review of noninvasive monitoring in the emergency department that is well worth a read (alas, another paywall):

Middleton PM, Davies SR. Noninvasive hemodynamic monitoring in the emergency department. Curr Opin Crit Care. 2011;17:342-50.

Here’s the PubMed link from a team taking a preliminary look at tissue oxygen monitoring after out-of-hospital cardiac arrest:

Frisch A, Suffoletto BP, Frank R, Martin-Gill C, Menegazzi JJ. Potential utility of near-infrared spectroscopy in out-of-hospital cardiac arrest: an illustrative case series. Prehosp Emerg Care. 2012;16:564-70.

 

All the images here were via flickr’s ‘The Commons’ area and used without any modification under CC 2.0

Studies in Blood from Iran – A Quick Review

We all want to stop bleeding. Here’s a quick review from Dr Alan Garner of a paper coming out of Iran that looks at haemostatic dressings. 

Hatamabadi HR et al. Celox-Coated Gauze for the Treatment of Civilian Penetrating Trauma: A Randomized Clinical Trial. Trauma Monthly. 2014;20:e23862. doi: 10.5812/traumamon.23862

There is not a lot of data on haemostatic dressings in the civilian context and human data from the military context is not randomised for obvious reasons. It is therefore nice to see a RCT on this subject in humans. In the study they compare the time to haemorrhage control and amount of haemorrhage in stab wounds to the limbs between 80 patients treated with Celox gauze versus 80 patients treated with normal gauze.

The study is from an emergency department in Tehran and is pragmatic in design. There are some limitations of the study worth mentioning. It was open label, and the amount of bleeding was measured simply by the number of gauze squares used. Weighing the gauze would have been a more accurate way to estimate ongoing blood loss.

The details of how the gauze was applied aren’t that clear. To be effective the gauze needs to be packed into the wound against the bleeding vessel. Was the Celox used in this way to maximise the chances it would work? I can’t tell from the paper. Oh, and the company provided the product for the trial.

Perhaps the biggest puzzle in the design is that patients with really significant haemorrhage (those requiring transfusion) were excluded from the trial. This is the group where you really want to know if the stuff works. You could theorise that this group of patients may have trauma coagulopathy and the method of action of Celox (being by electrostatic attraction and independent of clotting factors) might be particularly useful, so a bigger difference between groups may have been found. I guess that will have to wait for another day and another trial that someone gets through an ethics committee.

Acknowledging all of this, there was a significant difference in the time taken to achieve haemostasis and in the amount of ongoing bleeding, with the Celox gauze looking superior by both measures.

This suggests that it remains reasonable to use these products as evidence continues to point to efficacy. Of course these agents are not a magic bullet and all the other principles of haemostasis need to be applied as a package, including urgent transport to a surgical facility.

Research That is Positive When It Is Negative

This week’s post is the first in a series touching on some of the challenges when you start researching technology for the prehospital setting (or anywhere really). Dr Andrew Weatherall (@AndyDW_) on why some monitors aren’t the monitors you’re sold.

I am new to the research game. As is often the case, that brings with it plenty of zeal and some very rapid learning. When we first started talking about the project that’s now my PhD, we set out wondering if we could show something that was both a bit new and a positive thing to add to patient clinical care.

It didn’t take long to realise we’d still be doing something worthwhile if the project didn’t work one little bit.

Yep, if this thing doesn’t work, that would still be fine.

 

Simple Questions

I’m going to assume no one knows anything about this project (seems the most realistic possibility). It’s a project about brains and lights and monitors.

It came out of two separate areas of work. One of these was the prehospital bit of my practice. All too often I’d be at an accident scene, with an unconscious patient and irritated by the big fuzzy mess at the middle of the clinical puzzle.

“How’s the brain?”

Not “how are the peripheral readings of saturation and blood pressure against the normative bell curve?” Not “how are the gross clinical neurological signs that will change mostly when things get really grim?”

“How’s the brain?”

At the same time at the kids’ hospital where I do most of my anaesthesia we were introducing near-infrared spectroscopy tissue oximetry to monitor the brain, particularly in cardiac surgery cases.

The story sounded good. A noninvasive monitor, not relying on pulsatile flow, that provides a measure of oxygen levels in the tissue where you place the probe (referred to as regional oxygen saturation, or tissue saturation or some other variant, and turned into a single number on a scale between 0 and 100) and which reacts quickly to any changes. You can test it out by putting a tourniquet on your arm and watching the magic oxygen number dive while you inflate it.

Except of course it’s not really as simple as that. If you ask a rep trying to sell one of these near-infrared spectroscopy (NIRS) devices, they’ll dazzle you with all sorts of things that are a bit true. They’re more accurate now. They use more wavelengths now. Lower numbers in the brain are associated with things on scans.

But it’s still not that simple. Maybe if I expand on why that is, it will be clearer why I say I would be OK with showing it doesn’t work. And along the way, there’s a few things that are pertinent when considering the claims of any new monitoring systems.

 

A Bit About Tech

Back in 1977, a researcher by the name of Franz Jöbsis described a technique where you could shine light through brain tissue, look at the light that made it out the other side and figure out stuff about the levels of oxygen and metabolism happening deep in that brain tissue. This was the start of tissue spectroscopy.

Now, it’s 38 years later and this technology isn’t standard. We’re still trying to figure out what the hell to do with it. That might just be the first clue that it’s a bit complicated.

Of course the marketing will mention it’s taken a while to figure it out. Sometimes they’ll refer to the clinical monitors of the 1990’s and early 2000’s and mention it’s got better just recently. They don’t really give you the full breadth of all the challenges they’ve dealt with along the way. So why not look at just a few?

  1. Humans Aren’t Much Like Cats

Jöbsis originally tested his technique on cats. And while you might find it hard to convince cat lovers, the brain of a cat isn’t that close to a human’s, at least in size. (As an aside, I’m told by clever bionic eye researchers the cat visual cortex actually has lots of similarities with that of humans – not sure that explains why the aquarium is strangely mesmerising though).

He also described it as a technique where you shone the light all the way across the head and picked up the transmitted light on the other side. But even the most absent-minded of us has quite a bit more cortex to get through than our feline friends and you’d never pick up anything trying that in anything but a neonate.

So the solution in humans has been to send out near-infrared light and then detect the amount that returns to a detector at the skin, on the same side of the head as you initially shone those photons.

When you get handed a brochure by a rep for one of these devices, they’ll show a magical beam of light heading out into the tissues and tracing a graceful arc through the tissues and returning to be picked up. You are given to believe it’s an orderly process, and that every bit of lost light intensity has been absorbed by helpful chromophores. In that case that would be oxy- and deoxyhaemoglobin, cytochromes in the cell and pesky old melanin if you get too much hair in the way.

See? Here’s the pretty version that comes with the monitor we’re using in the study? [It’s the Nonin EQUANOX and we bought it outright.]
Except that’s the version of the picture where they’ve put Vaseline on the lens. Each one of those photons bounces erratically all over the place. It’s more like a small flying insect with the bug equivalent of ADHD bouncing around the room and eventually finding its way back to the window it flew in.

So when you try to perform the underlying calculations for what that reduction in detected light intensity means, you need a very particular way of allowing for all that extra distance the photons travel. Then you need to average over the different paths of all those photons, not just the one photon. Then you need to allow for all the scattering, which means some of the light will never come back your way.
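If you want a rough feel for the bookkeeping involved, here’s a toy sketch of the “modified Beer-Lambert” approach that sits underneath most of these devices: measure the change in light attenuation at a couple of wavelengths, scale the probe spacing by a pathlength “fudge factor” to account for all that bouncing, and solve for the haemoglobin changes. Every number in it (extinction coefficients, probe separation, pathlength factor) is a made-up illustrative value, not what any particular manufacturer actually uses – that’s exactly the bit they keep to themselves.

```python
import numpy as np

# Toy "modified Beer-Lambert" bookkeeping: given the change in light
# attenuation at two wavelengths, recover the change in oxy- and
# deoxy-haemoglobin concentration. All numbers are illustrative only --
# real devices use tabulated extinction coefficients and their own
# (unpublished) pathlength corrections.

extinction = np.array([
    # eps_HbO2, eps_HHb  (per cm per mM, made-up values)
    [1.0, 3.0],   # ~760 nm: deoxy-Hb absorbs more strongly here
    [2.5, 1.5],   # ~850 nm: oxy-Hb absorbs more strongly here
])

probe_separation_cm = 4.0   # source-detector spacing on the skin
pathlength_factor = 6.0     # "fudge factor" for the extra, scattered travel

def hb_changes(delta_attenuation):
    """Solve for [delta_HbO2, delta_HHb] given attenuation changes
    (optical density units) at the two wavelengths above."""
    effective_path = probe_separation_cm * pathlength_factor
    return np.linalg.solve(extinction * effective_path, delta_attenuation)

# The tourniquet test: attenuation rises at 760 nm, falls slightly at 850 nm,
# so oxy-Hb falls and deoxy-Hb rises.
print(hb_changes(np.array([0.30, -0.05])))
```

Note that this style of calculation only gives you changes from wherever you started, which is part of why turning it into an “absolute” number is harder than the brochure suggests.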

That’s some of those decades of development chewed up right there.

  2. Everyone Looks the Same But They Are Different

So that explains the delay then. Well, there’s another thing that might make it hard to apply the technology in the prehospital environment. Every machine is different. Yep. If you go between systems, it might just be that you’re not comparing apples with apples.

That particular challenge of calculating the distance the light travels? Pretty much every manufacturer has a different method for doing it. And they won’t tell you how they do it (with the notable exception of the team that makes the NIRO device, who have made their algorithms open access – and whose device weighs 6 kg and is as elegant to carry as a grumpy baby walrus).

So when you read a paper describing the findings with any one device, you can’t be 100% sure it will match another device. This is some of the reason that each company calls their version of the magic oxygen number something slightly different from their competitor (regional saturation, tissue oxygenation index, absolute tissue oxygen saturation just to name a few). It might be similar, but it’s hard to be sure.

Maybe that’s harsh. Could a walrus be anything but elegant? [via Allan Hopkins on flickr, unmodified, under CC 2.0]
  3. When “Absolute” Absolutely Isn’t Absolute

You get your magic number (I’m going to keep calling it regional saturation for simplicity) and it’s somewhere between 60 and 75% in the normal person. The thing is, it hasn’t been directly validated against a gold standard real-world measurement from the same bit of tissue the probe is sampling.

The NIRS oximeter makes assumptions about the proportions of arterial, venous and capillary blood in the tissue under the probe. The regional saturations are validated against an approximation via other measures, like jugular venous saturation or direct tissue oximetry.
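A quick bit of toy arithmetic shows why the number lands where it does. The devices are usually described as assuming a mostly venous vascular bed – figures around 70–75% venous are commonly quoted – but the exact split is a manufacturer assumption rather than something measured in your patient, so treat everything below as illustrative only.

```python
# Why "regional saturation" sits between arterial and venous values:
# NIRS samples a mixed vascular bed, and the displayed number behaves
# roughly like a weighted average. The venous fraction here is an assumed
# value for illustration, not any device's actual algorithm.

def estimated_rso2(sao2, svo2, venous_fraction=0.70):
    """Weighted mix of arterial and venous saturation (all in %)."""
    return venous_fraction * svo2 + (1 - venous_fraction) * sao2

# A 'textbook' patient: SaO2 98%, jugular venous saturation 65%
print(estimated_rso2(98, 65))        # ~74.9 -> inside the quoted 60-75% normal range
print(estimated_rso2(98, 65, 0.75))  # ~73.3 -> different assumed split, different number
```

Which is part of why jugular venous saturation is only ever an approximate comparator, and why two devices with different built-in assumptions can disagree on the same head.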

On top of that, all those “absolute NIRS monitors” that give you a definite number that means something? No. “Absolute” is not a thing.

It’s true the monitors have got much better at responding to changes quickly. And they’ve added more wavelengths and are based on more testing, so they are more accurate than monitors from decades past. But they can still have significant variation in their readings (anywhere up to 10% is described).

And they spit out a number, regional saturation, that is actually an attempt to boil lots of parameters down into a single figure a clinician can use. How many parameters? Check the photo.

This is from an excellent review by Elwell and Cooper. [Elwell CE, Cooper CE. Making light work: illuminating the future of biomedical optics. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2011;369:4358-4379.]
  4. The Practical Bits

And after all that, we reach the practical issues. Will sunlight contaminate the sample? Can it cope with movement? Do you need a baseline measurement first? Does it matter that we can only really sample the forehead in our setting?

All the joy of uncertainty before you even start the research.

 

So why bother?

Well, the quick answer is that it might be better for patients if clinicians actually know what is happening to the tissue in the brain. And acknowledging the challenges doesn’t mean it isn’t worth seeing whether the technique is still useful despite the compromises you have to make to take basic spectroscopy to the clinical environment.

But even if we find it just doesn’t tell us useful stuff, we could at least provide some real world information to counter the glossy advertising brochure.

There are already people saying things like,

“You can pick up haematomas.” (In a device that just tells you if there’s a difference between the two hemispheres – there’s a little sketch of that idea below.)

“Low regional saturations are associated with worse outcomes.” (But that’s probably been demonstrated more in particular surgical settings and the monitoring hasn’t been shown to improve patient outcomes yet.)

“You can even pick up cytochromes.” (In the research setting, with a specially set-up system that is way more accurate than any clinical device.)

All of those statements are a bit true, but not quite the whole story. The other message I extract from all of this is that the uncertainty in the detail behind the monitor can’t be unique to NIRS oximetry. I have little doubt it’s similar for most of the newer modalities being pushed by companies. Do peripherally derived Hb measurements from your pulse oximeter sound familiar?
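To put that haematoma claim in perspective, the screening-style devices are essentially doing a left-versus-right comparison of light attenuation over matched sites and flagging asymmetry. Here’s a minimal sketch of that idea – the cut-off is an assumed number purely for illustration (each device has its own validated threshold), and a symmetrical bilateral collection produces no asymmetry to flag at all.

```python
# What "picking up a haematoma" means for the screening-style devices:
# compare attenuation (optical density) over matched left/right sites and
# flag an asymmetry. The threshold is an assumed value for this sketch only.

ASYMMETRY_THRESHOLD_OD = 0.2

def flag_possible_haematoma(od_left, od_right, threshold=ASYMMETRY_THRESHOLD_OD):
    """Return True if the left/right attenuation difference exceeds the cut-off."""
    return abs(od_left - od_right) > threshold

print(flag_possible_haematoma(1.45, 1.10))  # True  -- asymmetric, flag it
print(flag_possible_haematoma(1.45, 1.40))  # False -- symmetric, nothing to see
```

It tells you something is different between the two sides, not what, not exactly where, and not whether anything needs doing about it.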

After all this, it’s still true that if we can study NIRS oximetry in the environment that matters to us, we might get an exciting new answer. Or we might not. And sometimes,

“Yeah … nah.”

is still an answer that’s pretty useful.

 

 

This is the first in a series. The next time around, I’ll chat about the things we’re trying in the design of the study to overcome some of these challenges.

If you made it this far and want to read a bit more about the NIRS project, you can check out the blog I set up ages ago that’s more related to that (though it frequently diverts to other stuff). It’s here

 

Working with Standards that are Forgetful – Australian NSQHS Standards and Retrieval Medicine

In times where external standards are increasingly applied to health services, where does retrieval medicine fit in? Dr Alan Garner shares his insights after wrestling with the Australian National Safety and Quality Health Service Standards process. 

In Australia, national reform processes for health services began in the years following the 2007 election. Many of the proposed funding reforms did not survive negotiation with the States/Territories but other aspects went on to become part of the Health landscape in Australia.

Components which made it through were things like a national registration framework for health professionals. Although the intent of this was to stop dodgy practitioners moving between jurisdictions, the result for an organisation like CareFlight was that we did not have to organise registration for our doctors and nurses in 2, 3 or even more jurisdictions as they moved across bases all over the country. Another component that made it through was the national 4-hour emergency department target, although I think the jury is still out on whether this was a good thing or not.


Other Survivors

Another major component to survive was the National Safety and Quality Health Service Standards. The idea is that all public and private hospitals, day surgical centres and even some dental practices must gain accreditation with these new standards by 2016. The standards cover 10 areas:

  • Governance for Safety and Quality in Health Service Organisations
  • Partnering with Consumers
  • Preventing and Controlling Healthcare Associated Infections
  • Medication Safety
  • Patient Identification and Procedure Matching
  • Clinical Handover
  • Blood and Blood Products
  • Preventing and Managing Pressure Injuries
  • Recognising and Responding to Clinical Deterioration in Acute Health Care
  • Preventing Falls and Harm from Falls

Are these the right areas? Many of the themes were chosen because there is evidence that harm is widespread and interventions can make a real difference. A good example is hand washing. Lots of data says this is done badly and lots of data says that doing it badly results in real patient harm. This is a major theme of Standard 3: preventing and controlling healthcare associated infections.

Here is a visual metaphor for the next segue [via http://www.worldette.com]

What about those of us who bridge all sorts of health services?

So what about retrieval? We are often operating as the link between very different areas of the health system. And we pride ourselves on measuring up to the highest level of care within that broader system. So do these apply to us? Did they even think about all the places in between?

Well, whether these Standards will indeed be applied to retrieval and transport services remains unclear, as retrieval services are not mentioned in any of the documentation. CareFlight took the proactive stance of gaining accreditation anyway, so that we are participating in the same process and held to the same standards as the rest of the health system.

So when we approached the accrediting agency, this is what they said: “Well, I guess the closest set of standards is the day surgical centre standards.” We took it as a starting point.

Applying Other Standards More Sensibly

This resulted in 264 individual items with which we had to comply across the ten Standards. And we had to comply with all standards to gain accreditation – it is all or nothing. However as we worked through the standards with the accrediting body it became clear that some items were just not going to apply in the retrieval context.

A good example is the process for recognising deteriorating patients and escalating care that is contained in Standard 9. There are obvious difficulties for a retrieval organisation with this item, as the reason we have been called is that someone has already recognised the patient is in the wrong place for the care they need. The retrieval itself is part of the process of escalating care. It would be like trying to apply this item to a hospital MET team – it doesn’t really make sense.

With some discussion we were able to gain exemptions from 40 items, but that still left us with 224 with which to comply. Fortunately our quality manager is an absolute machine or I don’t think we would have made it through the process. That’s takeaway message number one: find an obsessive-compulsive quality manager.

It took months of work leading up to our inspection in December 2014 and granting of our accreditation in early 2015. Indeed I am pleased to say that we received a couple of “met with merits” in the governance section for our work developing a system of Carebundles derived from best available evidence for a number of diagnosis groups (and yes I’ve flagged a completely different post).

So yes or no?

Was the process worth it? I think independent verification is always worthwhile. As a non-government organisation I think that we have to be better than government-provided services just to be perceived as equivalent. This is not particularly rational but nevertheless true. NGOs are sometimes assumed to be less rigorous, but there are plenty of stories of issues with quality of care (and associated cover-ups) within government services to say those groups shouldn’t be assumed to be better (think Staffordshire NHS Trust in the UK or Bundaberg closer to home).

As an NGO, however, we don’t even have a profit motive to displace patient care as our primary focus. The problem for NGOs tends rather to be trying to do too much with too little because we are so focused on service delivery. External verification is a good reality check to ensure we are not spreading our resources too thinly and that the quality of the services we provide is high. The NSQHS Standards allow us to do this in a general sense, but they are not retrieval-specific.

Is there another option for retrieval services?

Are there any external agencies specifically accrediting retrieval organisations in Australia? The Aeromedical Society of Australasia is currently developing standards but they are not yet complete.

Internationally there are two main players: the Commission for Accreditation of Medical Transport Systems (CAMTS) from North America and the European Aeromedical Institute (EURAMI). Late last year we were also re-accredited against the EURAMI standards, which are now up to version 4 and can be found here. We chose to go with the European organisation as we do a lot of work for European-based assistance companies in this part of the world and EURAMI is an external standard that they recognise. For our recent accreditation EURAMI sent out an Emergency Physician, originally from Germany, with more than 20 years of retrieval experience. He spent a couple of days going through our systems and documentation, with the result that we were re-accredited for adult and paediatric critical care transport for another three years. We remain the only organisation in Australasia to have either CAMTS or EURAMI accreditation.

For me personally this is some comfort that I am not deluding myself. Groupthink is a well-documented phenomenon. Groups operating without external oversight can develop some bizarre practices over time. They talk up evidence that supports their point of view even if it is flimsy and low level (confirmation bias) whilst discounting anything that would disprove their pet theories. External accreditation at least compares us against a set of measures on which there is a consensus that the measures matter.

What would be particularly encouraging is if national accreditation bodies didn’t need reminding that retrieval services are already providing a crucial link in high quality care within the health system. There are good organisations all over the place delivering first rate care.

Maybe that’s the problem. Retrieval across Australia, including all those remote spots, is done really well. Maybe the NSQHS needed more smoke to alert them.

For that reason alone, it was worth reminding them we’re here.

 

 

Risky Business – Weighing Things Up

The excellent Dr Paul Bailey returns to provide more practical insights from the bit of his work that involves coordination of international medical retrieval. This is the second in (we hope) a recurring series which started here

Greetings everyone, it’s a pleasure to be back for the long awaited second edition of this humble blog. Looking back at my first foray into this unfamiliar world I’m pretty happy with how it reads and I think that it worked out well. If any of you have questions, I’m happy to participate in a bit of to and fro in the comments section.

Where to from here? I thought we might talk about risk. It’s hard to know exactly where to start, but it is fair to say that there are clinical risks, aviation risks, environmental and political risks – and there are probably more but I can’t think of them right now.

Aviation risks are the domain of our pilot colleagues and it’s extremely fair to say that they do a great job. One of the reasons that flying is so safe overall is that pilots specifically (and the aviation industry more generally) take risk very seriously. This might well have something to do with the personal consequences to the pilots of getting it wrong, I’m not sure.

When was the last time, for instance, that the nurses or doctors amongst you had to consider your fatigue score whilst working for a big hospital? What is the mechanism by which you might stop work when you consider yourself impaired or too tired to work any longer? Random drug testing at work anyone? If you’re a doctor or nurse, not likely, unless you are also working in aviation. See what I mean?

Whilst on a job the clinical team are considered part of the crew, and whilst it is certainly within our job description to point out anything to the pilots that looks odd, it is up to the pilots to get us there and back safely. One of the Gods of CareFlight said to me once that it was his considered opinion, having been in the game a while, that if the pilots don’t want to go somewhere – for whatever reason – then neither does he. I reckon that is a pretty good rule of thumb.

What about medical risk?

Preparing for an international retrieval, the risk assessment starts straight away. From the Medical Director’s chair, we attempt to have a clinical discussion with SOMEONE close to the patient, usually a doctor or nurse in the originating hospital. This can be difficult – sometimes there are language issues; sometimes standards of care might be different to what we are used to; sometimes it’s just the time of day. How many people would be able to give a comprehensive medical handover at short notice in your hospital at 02:30?

We can also discuss the case with a nurse or doctor from the assistance company as an alternative. Sometimes it is even possible to talk to the patient or their relatives and in fact this is often the best source of up to date information.

It’s a pretty long hallway you’re looking down to assess the patient.

In a similar way, patients’ clinical condition can change in the substantial lead time between the activation of a job, your arrival at the bedside and the eventual handover of the patient to the next clinical team.

In the world of international medical retrieval, if the patient is still alive by the time you get there, it is likely that they are in a “survivors” cohort already and will very likely make it to the destination hospital intact. If death was considered imminent, it is unlikely the assistance company would go to the lengths of setting up an international medical retrieval. Sepsis is probably the grand exception to this rule – patients who are septic have progressive illnesses that are not improved by being shaken up in the back of an aircraft.

The summary is that sometimes the information is incomplete, may in fact be wrong in spite of the best efforts of the Medical Director, or may well have been correct at the time but things have moved on. It’s best to keep an open mind about what you are going to.

Ways to Ruin a Dinner Party – Bring Politics and the Environment

Easy to understand in some ways, and hard to define on paper are the environmental and political risks associated with international medical retrieval.

Some locations are potentially dangerous on a 24/7 basis and it can be a matter of choosing the “least bad” time of day – e.g. daylight hours – for you to be on the ground, and making that period of time as short as possible – e.g. by arranging for the patient to meet you at the airport. Sometimes the situation will require the assistance of a security provider. Port Moresby would be an example of a location where any or all of the above statements are true.

Different standards apply in some locations and it can, for instance, be necessary for all fees and charges associated with a patient’s hospitalisation to be paid prior to their departure. Retrieval team as bill settlement agency. Indeed, sometimes these fees can be very complex and quite difficult to understand. The hospital administrators may not be sympathetic to your timeline with regards to pilot duty hours and a strong wish to depart.

Some countries in our region have relatively new or potentially unstable political situations and this might come into play from time to time. East Timor is a perfect example. It is also possible to find yourself in the thick of a country’s political situation in the event that a government official or politician becomes unwell and requires evacuation to a location with a higher standard of medical care.

Just one example – expect the unexpected.

 

So the risk is there, what do you do?

In the end, it is not possible to control for everything that could go wrong on a retrieval. The essentials are to be well trained, have the right equipment with you (it’s not much use back at the base), work with good people all of whom are doing their jobs properly and keep an open mind about both the clinical and logistical situation as the case progresses.

So here are some principles we try to follow from the coordinator’s end:

  • We will not send you to an uncontrolled situation.
  • We will endeavour to have you flying in daylight hours wherever possible.
  • We will do our best to give you a comprehensive medical handover prior to departure and discuss things that might go wrong.
  • The pilots undertake to get you and the patient there and back safely.

And my suggestions for those on the crew?

  • It is vital to maintain situational awareness and to understand that the world of international medical retrieval is fluid and things change – you don’t have to like it but you do need to respond.
  • Good communication is essential – within the clinical team, between the clinical team and the pilots and between those on the mission and the coordinator (not to mention the local organisers). Good communication is your best friend and keeps you, your team and the patient safe.

Until next time …