Tag Archives: governance

PHARM quality – how do you know when you’re doing it well?

This post from Dr Alan Garner tackles a core problem for all practitioners who give a damn – how do you know you’re doing it well? A chat worth having and Alan has a pretty good summary of the Carebundle approach. 

How do we measure quality in prehospital and retrieval medicine?  Speed?  Number of procedures performed?  Number of twitter followers?

Seriously though, this is a question that vexed me for many years as a service director; finding metrics that measured things that mattered seemed an elusive task.  The major part of the problem stemmed from the heterogeneity of the patient population that we treat.  Even simple (but easily measured and therefore attractive to bean counters) things like timeliness are not straightforward.  Not because they are hard to measure but because sometimes time matters and other times it very clearly does not.  Indeed, emphasising it as a measure could lead to perverse outcomes for some patients.

Let me give you a couple of examples to illustrate the problem:

Case 1.  Central abdominal stab wound with hypotension.

There is almost no prehospital intervention that matters in this patient except gasoline and perhaps tranexamic acid.  I don’t think anyone would dispute that time is a reasonable quality measure in this patient.

Case 2.  COPD patient in a small hospital an hour flying time from the nearest intensive care unit.

Patient is eventually stabilised on non-invasive ventilation after three hours of effort by the transport team at the referring site. They are then safely transported.  Clearly for this patient time does not matter at all.  Reporting turnaround time at the referring site in this patient may place subtle pressure on the team to intubate the patient early and depart – a move that is very clearly not in the patient’s best interests and would have placed the patient at significantly increased risk of unnecessary morbidity and mortality.

This got me thinking that our measures of quality had to be disease process specific or we were never going to move forward.  Speaking with Erwin Stolpe was the turning point in my thinking.

You Should Really Try to Know Erwin

Many of you will not have heard of Erwin.  Sometimes when I talk to people or read things on social media I get the impression that physician staffed HEMS started in about 2005.  The reality of course is quite different.  Erwin is a trauma surgeon from Munich who began flying as a resident on the Christoph 1 service out of that city in 1968 (yes, not a typo – 1968).

Erwin Stolpe
Here he is, at AirMed 2014 in Rome.

These days he no longer flies but is chair of the ADAC medical committee.  For those unfamiliar with ADAC, they run about 35 physician staffed HEMS bases in Germany and also operate several jets for longer range transports.  Their HEMS services alone conduct about 50,000 prehospital cases annually.  The breadth and depth of experience of this organisation is extraordinary and Erwin has been there from the beginning.  You would think there might be a few pearls of wisdom there and you would be right.

The Key Cases

Erwin described to me the “tracer diagnosis” process they use to track the quality of the care that they provide.  Analysis of their prehospital caseload indicated that four diagnoses made up 75% of the cases they attended.  For these four diagnoses they defined the treatments that they expected the teams to achieve (see pages 52 onwards of this presentation by Erwin for more detail).  They used national and international consensus guidelines as a base.  They then began reporting against those criteria and they have also started to publish that performance.

What Erwin was calling “tracer diagnoses” is probably better known to us in the English-speaking world as a “Carebundle”.  Lots of people will be familiar with the ventilator Carebundle for intubated patients in the intensive care unit.   Adherence to the items in the bundle is associated with lower rates of ventilator associated pneumonia.  In NSW and Queensland, Health Departments have introduced bundles for central line insertion in order to tackle the rates of central line associated bacteraemia.  In this case the bundle applies to a procedure or process rather than a diagnosis.  Is there a place for this kind of methodology in the prehospital and retrieval world to improve quality too?

What are we talking about when it comes to PHARM?

Let’s start by looking at what a Carebundle is.

“A bundle is a structured way of improving the processes of care and patient outcomes: a small, straightforward set of evidence-based practices — generally three to five — that, when performed collectively and reliably, have been proven to improve patient outcomes.”

This definition is taken straight from the Institute for Healthcare Improvement (IHI) website.  There is a bit of controversy regarding whether the items in a Carebundle really need to all be completed for the bundle to be effective in some sort of synergistic way, or whether they are in fact just a checklist of items that have been shown to be effective and you get as many done as you can.  I am not aware of any evidence for the synergistic effect multiplier that is implied on the IHI website.  I think it is unarguable however that you should try to get as many of the things that are proven to make a difference to that condition completed as possible.  That is certainly the approach that we have taken.

Another quote from the IHI website describes for me what we are trying to achieve by using bundles:

“The power of a bundle comes from the body of science behind it and the method of execution: with complete consistency. It’s not that the changes in a bundle are new; they’re well established best practices, but they’re often not performed uniformly, making treatment unreliable, at times idiosyncratic. A bundle ties the changes together into a package of interventions that people know must be followed for every patient, every single time.”

Using Carebundles in hospitals is clearly not new.  Even in EMS it has been previously described for benchmarking purposes.  The attraction of the methodology for me was that we would know if our care for patients with severe head injury for example was following the best available evidence and we would know what proportion of our patients were receiving that care.  I did not want just some of our patients to get that care, I wanted all of them to get every item of care that we could identify matters for that disease process all of the time.

Making it Match What We Do

For our rapid response service in Sydney we then determined from our medical database the diagnoses that cover 75% of our caseload as ADAC had done.  For us this resulted in the following list:

  • Multiple blunt trauma
  • Isolated severe head injury (GCS<9)
  • Burns (>15% BSA)
  • Penetrating trauma
  • Immersion/drowning
  • Seizures (to which we were often being dispatched as they were mistaken for head injury or had caused a minor traumatic event)
  • ROSC post primary cardiac arrest (similar to seizures – trivial traumatic injury and patient in VF)
  • Traumatic cardiac arrest (for us this is the HOTTT Drill which I have described in a previous post, well podcast but which also includes the HOTTT Drill package to go with it).
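The ADAC-style caseload analysis behind this list is simple to reproduce: rank the diagnoses by frequency and take the smallest set that covers 75% of cases. A minimal sketch in Python (the diagnosis counts below are invented for illustration, not our actual figures):

```python
def top_diagnoses(case_counts, coverage=0.75):
    """Return the smallest set of diagnoses (taken in descending
    order of frequency) covering at least `coverage` of caseload."""
    total = sum(case_counts.values())
    selected, covered = [], 0
    for dx, n in sorted(case_counts.items(), key=lambda kv: -kv[1]):
        selected.append(dx)
        covered += n
        if covered / total >= coverage:
            break
    return selected

# Invented example counts, for illustration only
counts = {"Multiple blunt trauma": 420, "Isolated severe head injury": 200,
          "Penetrating trauma": 150, "Burns": 90, "Seizures": 80,
          "Drowning": 60}
bundle_targets = top_diagnoses(counts)
```

Run against a real medical database the same cumulative-frequency cut identifies the handful of diagnoses worth building bundles for.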

We then turned to the evidence based consensus guidelines, Cochrane reviews and good quality RCTs to define the Carebundle items.  This is a sobering process as you realise just how few interventions there are that have good evidence to back them up.  This is particularly true for prehospital care where we are often operating in an evidence free zone.  In many cases we had no choice but to go with the consensus (or best guess as I like to call it).  We decided that we would include intubation for unconscious trauma patients for example despite the evidence not being all that strong and in many cases contradictory.

When we had defined the items for the specific diagnosis we printed them up on cards that team members carry in their pocket.  These serve as a checklist which teams use on site or in transit just to be sure that they have covered all the items.  Below is our isolated severe head injury card – the item I constantly forget is the blood glucose level (BSL).  Highly embarrassing if this is low when you arrive at the trauma centre!  I for one am glad to have the prompt.

[Image: isolated severe head injury Carebundle card]

Some of these items are extrapolated from in-hospital care.  For example having the external auditory meatus (EAM) above the JVP makes sense in terms of managing raised ICP but there is no direct prehospital evidence that shows this changes outcome.  We have also set relatively conservative targets for things like oximetry and blood pressure.  Most of the evidence suggests SpO2 >90% is enough but we felt that desaturation happens very rapidly from this point so we would rather aim a little higher.

Aspirations and Signals

Some of the items we knew from the outset that we would never achieve in all cases.  Scene time of <25mins is the obvious example.  When a patient is trapped this is outside of our control.  We know however that one in five patients with a severe head injury will have a drainable haematoma that is time critical.  We therefore included this item in order to signal to the team that we expect them to treat severe head injury as a time critical disease in the prehospital phase.

Some of the bundles have conditional items as well.  For head injury this is the hypertonic saline which we only expect to be given if there are lateralising signs or neurological deterioration.

When the team returns to base they complete an audit form indicating if the bundle items were achieved and if not, the reason for the variance.  This both reinforces for our personnel the contents of the bundles and also allows us to report on compliance.  Below is an example of our report for severe head injuries showing the reasons for variance in the comments section.
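The audit forms lend themselves to straightforward compliance reporting: for each bundle item, the proportion of cases in which it was achieved, with the variance reasons collected alongside. A sketch of the idea (the field names here are illustrative, not our actual audit schema):

```python
from collections import defaultdict

def bundle_compliance(audits):
    """audits: one dict per item per case, e.g.
    {"item": "BSL checked", "achieved": True, "reason": None}.
    Returns per-item compliance rate plus variance reasons."""
    stats = defaultdict(lambda: {"n": 0, "achieved": 0, "reasons": []})
    for a in audits:
        s = stats[a["item"]]
        s["n"] += 1
        if a["achieved"]:
            s["achieved"] += 1
        elif a.get("reason"):
            s["reasons"].append(a["reason"])
    return {item: {"rate": s["achieved"] / s["n"], "reasons": s["reasons"]}
            for item, s in stats.items()}
```

The output maps directly onto the kind of report shown below: a rate per item, with the comments column populated from the recorded reasons.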

[Image: severe head injury Carebundle compliance report]

You can see that we don’t meet all the targets all the time, and there is usually a good reason when we don’t.  However the Carebundles allow us to be transparent about what we think good care is, and also about how successful we are in achieving it.  We include Carebundle compliance (along with a lot of other stuff) in our external reporting in NSW to the Ministry of Health, NSW Ambulance, the NSW Institute of Trauma and Injury Management and all the trauma centres to which we transport patients.  Transparency is a key component of good governance and this process helps us to achieve that.

Those People Were Here First

The concept is not new.  I merely walk behind the giants of the industry and follow their lead in this.  It is also worth noting that Russell MacDonald from Ornge in Ontario is leading a similar project with an initial group of 10 “tracer diagnoses” amongst a small international collaboration of critical care transport providers.  It will be interesting to see how closely their bundle items accord with our own.  Aligning our bundle items would allow us to benchmark ourselves against similar organisations in other parts of the world and create opportunities for us to learn from organisations who manage specific conditions better than we do.  In the end this is about maximising the outcomes for our patients and I will gladly accept any help I can get in achieving that.


Here’s the stuff referred to along the way, because the originals remain a vital part of looking at the issue.

J. B. Myers, C. M. Slovis, M. Eckstein et al., “Evidence-based performance measures for emergency medical services systems: a model for expanded EMS benchmarking,” Prehospital Emergency Care, vol. 12, no. 2, pp. 141–151, 2008.

Here’s a link to the English version of the “tracer diagnosis” abstract.

Helm M et al.  [Extended medical quality management exemplified by the tracer diagnosis multiple trauma. Pilot study in the air rescue service] Anaesthesist 2012;61(2):106-115.

(Well, not all of us are clever enough to know German.)

Here’s the direct link to the IHI page.

The image of Erwin Stolpe comes from the Intercongress flickr account and is unaltered under the CC 2.0 licence.


Should we stop looking at first look intubation rates?

A brief note: I get to do the editing duty this week (Dr Andrew Weatherall that is) and I could not let it pass without a word of tribute to Dr John Hinds. I had only had the chance to learn from the good Dr Hinds via his online presence. It was a big presence. 

As one who did not know him personally, I can only reflect that he demonstrated many of the best qualities of a passionate doctor and that his passing, far too soon, has revealed many of the best qualities of his colleagues. 

Just in case you needed another reminder, you could watch him in action here, or read good words by @Eleytherius here, or sign a really worthwhile petition to deliver a vision for a better prehospital service for patients in NI here. 

As to this week’s post, Dr Alan Garner has a post on looking for the right outcomes so we’re doing the right thing for our patients. 

Can’t see the wood for the damn trees

As part of their intubation quality program many services now report their first look intubation rate. We have been doing so for a couple of years now. This looks like a really good thing to do. We know that more than one attempt at intubation is associated with greater incidence of serious adverse events in critically ill patients, and the more attempts the more likely those adverse events become (reference 1).

Therefore a strategy of aiming for first look success is probably a good idea, and one my own service employs. So this should be a good thing to report as a quality measure too. Indeed, why would you not? After all, the more attempts, the worse things get, right?

Well wait a minute …

First let’s have a think about why we would report it. Is it telling us something that actually matters?

The outcomes that really matter are whether the patient died or ended up with hypoxic brain injury. The process issues that really matter are whether they became hypoxic or had a cardiac arrest during the intubation process. There are other hard complications/process issues you can measure too, like aspiration with unnecessary additional ventilator days, or even whether you broke their teeth.

First look intubation tells us none of these things. It does not tell us if the patient became hypoxic, aspirated or even arrested. Yes it is associated with lower incidence of these complications but it does not tell you if the complication actually occurred.

And what if emphasising first look intubation rate as a quality measure shifts the focus in the wrong direction? Could you actually make the risk of hypoxia higher?

Am I losing the plot here? Let’s go back to first principles.

The outcomes that really matter are death and hypoxic injury. I don’t think anyone is going to argue these should be avoided. Fortunately the incidence of these is pretty low so we tend to use surrogates for these things instead, things like the incidence of hypoxia or hypotension/bradycardia during intubation. These are pretty direct measures reflecting outcomes that matter.

First look intubation isn’t an outcome. It’s not even a surrogate for an outcome – it’s a surrogate for a surrogate of an outcome. My concern is that surrogates for an outcome, rather than the actual outcome, can lead you way up the garden path. The MAST suit again comes to mind. The patient’s BP went up so it had to be a good thing, surely. Of course when someone finally did a decent study on the outcome that really mattered, mortality, it was trending worse not better.

Although there are no randomised controlled trials showing hypoxia to be bad for you, the circumstantial evidence is pretty overwhelming, so I agree this is not quite like the MAST suit situation. However in using first look intubation as a quality measure we are now reporting a surrogate for a surrogate of the outcome that actually matters. That is, we report first look because it is associated with lower rates of hypoxia, and lower rates of hypoxia are associated with lower rates of death and brain injury.

This is a risky game and recent audits of my own service show why. For the past year we have had a monitor that records the vital signs every 10 seconds and we download the data at mission end and attach it to the record. I have been going through these records to see what our rates of peri-intubation hypoxia actually are.

First thing I need to say is that our first look intubation rate so far this year is 100%. However we did have a couple of episodes of significant hypoxia.

My concern is that by reporting the first look rate, we draw attention to it and we send the message to our teams that this is the thing that we think matters. So better to press on a little bit longer even though the sats are falling to make sure I nail that tube first time!

What was the big picture again? [via Jarod Carruthers on flickr under CC 2.0 and unaltered]
Why are we reporting a surrogate for a surrogate? I have really accurate data from the monitor on the peri-intubation hypoxia rate, hypotension, bradycardia and arrest. Why report a surrogate for these things that might actually encourage our staff to focus on the surrogate and cause an episode of hypoxia, bradycardia or hypotension?

It remains important to emphasise optimising conditions for the first intubation attempt as that appears to have lower complication rates. However it is a means to an end. We should emphasise the outcomes (or at least the surrogates with only one degree of separation from that outcome) that matter. Why report a surrogate for a variable when you have the data to report the actual variable?

Some services like our own are now reporting 100% first look intubation rates, but no one is yet reporting 0% peri-intubation hypoxia rates. Aim for first look intubation as that appears to be a smart strategy, but tell your people it is the hypoxia that matters by making that the centre of attention in your reporting.

What do we mean by hypoxic?

Another thing I have been forced to look at is the definition of peri-intubation hypoxia. I had intended to use the definition of hypoxia used in many of the studies on this subject:

“Desaturation was defined as either a decrease in SpO2 to below 90% during the procedure or within the first 3 minutes after the procedure, or as a decrease of more than 10% if the original SpO2 was less than 90%.” (reference 2, see also 3-5)

I excitedly opened the data file of our first patient that we had intubated when we got our shiny new monitor a year ago to see what had happened. It was easy to identify the timing of intubation from the capnography data as we routinely pre-oxygenate our patients with a BVM device with the capnography attached. The sats pre-induction were a steady 90%, for 2 readings they were 89% (20 seconds) and then climbed to 98% when ventilation was commenced. So according to this definition we had a desaturation!

I don’t think anyone would claim a fall in SpO2 of 1% is clinically significant. It is also less than the error of the measurement quoted by the manufacturer of the oximetry system. This set of circumstances is not going to occur that often but it does not make sense to classify this case as a desaturation. We have therefore modified our definition to:

“Desaturation is defined as either a decrease in SpO2 to below 90% (minimum change at least 3%) during the procedure or within the first 3 minutes after the procedure, or as a decrease of more than 10% from the pre-intubation baseline if the original SpO2 was less than 90%.”
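Expressed in code, our modified definition looks something like this. A sketch only: it assumes the SpO2 samples covering the procedure and the following three minutes have already been extracted from the monitor download, and it interprets the “more than 10%” clause as a relative fall, which the wording leaves ambiguous.

```python
def is_desaturation(baseline_spo2, readings):
    """Apply the modified desaturation definition.

    baseline_spo2: pre-intubation SpO2 (%).
    readings: SpO2 samples (%) spanning the procedure and the
    3 minutes after it.
    """
    lowest = min(readings)
    if baseline_spo2 >= 90:
        # Must fall below 90%, and by at least 3 points, so that
        # oximeter measurement error alone cannot trigger the flag
        return lowest < 90 and (baseline_spo2 - lowest) >= 3
    # Baseline already below 90%: a fall of more than 10% counts.
    # (Interpreted here as a relative fall; reading it as 10
    # percentage points would also be defensible.)
    return lowest < baseline_spo2 * 0.90
```

On the case described above (steady 90% pre-induction, two readings of 89%), this returns False: the 1% dip no longer counts as a desaturation.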

So what should we be reporting?

Thomas reported that each subsequent attempt at intubation was associated with an increased risk of hypoxia, aspiration, bradycardia, cardiac arrest etc. If we have the data on these variables then why not report them directly instead of reporting the surrogate for them? For hypoxia I would suggest our slightly modified definition above.

As for other variables why not use the definitions from Thomas’ paper?

  • Bradycardia: HR <40 if >20% decrease from baseline
  • Tachycardia: HR >100 if >20% increase from baseline
  • Hypotension: SBP <90 mm Hg (MAP <60 mm Hg) if >20% decrease from baseline
  • Hypertension: SBP >160 mm Hg if >20% increase from baseline
  • Regurgitation: gastric contents requiring suction removal during laryngoscopy in a previously clear airway
  • Aspiration: visualisation of newly regurgitated gastric contents below the glottis, or suction removal of contents via the ETT
  • Cardiac arrest: asystole, bradycardia or dysrhythmia with non-measurable MAP and CPR during or within 5 minutes of intubation


For the physiological definitions Thomas includes percentage change from baseline like we do with the hypoxia definition. This acknowledges that these are critically ill patients and often have deranged physiology before we start. These definitions can therefore be used in the real world in which we operate. If we all adopted these definitions we could meaningfully compare ourselves with Thomas’ original paper and with each other.
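With baseline and peri-intubation observations in hand, the physiological items translate directly into code. A sketch of the haemodynamic classifiers only (regurgitation, aspiration and arrest need clinical observation rather than monitor data); the dictionary keys are illustrative, not a real schema:

```python
def haemodynamic_events(baseline, observed):
    """Classify peri-intubation haemodynamic events against
    Thomas-style thresholds, each gated on a >20% change from
    the pre-intubation baseline.

    baseline: {"hr": ..., "sbp": ...} pre-intubation values.
    observed: {"hr_min", "hr_max", "sbp_min", "sbp_max"} worst
    values seen during the peri-intubation window.
    """
    def pct(base, now):
        return (now - base) / base * 100

    events = []
    if observed["hr_min"] < 40 and pct(baseline["hr"], observed["hr_min"]) < -20:
        events.append("bradycardia")
    if observed["hr_max"] > 100 and pct(baseline["hr"], observed["hr_max"]) > 20:
        events.append("tachycardia")
    if observed["sbp_min"] < 90 and pct(baseline["sbp"], observed["sbp_min"]) < -20:
        events.append("hypotension")
    if observed["sbp_max"] > 160 and pct(baseline["sbp"], observed["sbp_max"]) > 20:
        events.append("hypertension")
    return events
```

Because every threshold is paired with the percent-change gate, a patient who arrives already bradycardic or hypotensive is not automatically flagged – which is exactly the point of referencing baseline in deranged physiology.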

And as for us…

We are seriously thinking about ditching the reporting of first look intubation rate. It is not telling us what really matters – and we can’t get better than our current 100% rate anyway. Despite this we are having occasional episodes of hypoxia and other complications, and it is possible that the rate of these complications is being exacerbated by emphasising first look.

We are therefore looking at moving to the much more comprehensive set of indicators used by Thomas (along with our modified hypoxia definition). This will demonstrate to our team members the factors that we think really matter, because we measure them and report them externally.

You could argue that the only way to achieve 0% hypoxia is to accept that we are not going to have a 100% first look intubation rate. I for one would gladly give up our 100% first look rate if in doing so we achieved 0% hypoxia. I don’t yet know if this is achievable but I have some ideas. Those who walk the quality & patient safety road with me know that we might never arrive, but that should not deter us from the journey.

Anyone coming?



1. Thomas CM. Emergency tracheal intubation: complications associated with repeated laryngoscopic attempts. Anesth Analg 2004;99:607–13. [Full text.]

2. Nakstad AR, Heimdal HJ, Strand T, Sandberg M. Incidence of desaturation during prehospital rapid sequence intubation in a physician-based helicopter emergency service. Am J Emerg Med 2011;29:639–44.

3. Reid C, Chan L, Tweeddale M. The who, where, and what of rapid sequence intubation: prospective observational study of emergency RSI outside the operating theatre. Emerg Med J 2004;21:296–301.

4. Omert L, Yeaney W, Mizikowski S, et al. Role of the emergency medicine physician in airway management of the trauma patient. J Trauma 2001;51:1065–8.

5. Dunford JV, Davis DP, Ochs M, et al. Incidence of transient hypoxia and pulse rate reactivity during paramedic rapid sequence intubation. Ann Emerg Med 2003;42:721–8.

Working with Standards that are Forgetful – Australian NSQHS Standards and Retrieval Medicine

In times where external standards are increasingly applied to health services, where does retrieval medicine fit in? Dr Alan Garner shares his insights after wrestling with the Australian National Safety and Quality Health Service Standards process. 

In Australia, national reform processes for health services began in the years following the 2007 election. Many of the proposed funding reforms did not survive negotiation with the States/Territories but other aspects went on to become part of the Health landscape in Australia.

Components which made it through were things like a national registration framework for health professionals. Although the intent of this was to stop dodgy practitioners moving between jurisdictions, the result for an organisation like CareFlight was that we did not have to organise registration for our doctors and nurses in 2, 3 or even more jurisdictions as they moved across bases all over the country. Other components that made it through were the national 4 hour emergency department targets although I think the jury is still out on whether this was a good thing or not.

[Image: National Safety and Quality Health Service Standards]

Other Survivors

Another major component to survive was the National Safety and Quality Health Service Standards. The idea is that all public and private hospitals, day surgical centres and even some dental practices must gain accreditation with these new standards by 2016. The standards cover 10 areas:

  • Governance for Safety and Quality in Health Service Organisations
  • Partnering with Consumers
  • Preventing and Controlling Healthcare Associated Infections
  • Medication Safety
  • Patient Identification and Procedure Matching
  • Clinical Handover
  • Blood and Blood Products
  • Preventing and Managing Pressure Injuries
  • Recognising and Responding to Clinical Deterioration in Acute Health Care
  • Preventing Falls and Harm from Falls

Are these the right areas? Many of the themes were chosen because there is evidence that harm is widespread and interventions can make a real difference. A good example is hand washing. Lots of data says this is done badly and lots of data says that doing it badly results in real patient harm. This is a major theme of Standard 3: preventing and controlling healthcare associated infections.

Here is a visual metaphor for the next segue [via http://www.worldette.com]

What about those of us who bridge all sorts of health services?

So what about retrieval? We are often operating as the link between very different areas of the health system. And we pride ourselves on measuring up to the highest level of care within that broader system. So do these apply to us? Did they even think about all the places in between?

Well, whether these Standards will indeed be applied to retrieval and transport services remains unclear as retrieval services are not mentioned in any of the documentation. CareFlight took the proactive stance of gaining accreditation anyway so that we are participating in the same process and held to the same standards as the rest of the health system.

So when we approached the accrediting agency, this is what they said: “Well, I guess the closest set of standards is the day surgical centre standards.” We took it as a starting point.

Applying Other Standards More Sensibly

This resulted in 264 individual items with which we had to comply across the ten Standards. And we had to comply with all standards to gain accreditation – it is all or nothing. However as we worked through the standards with the accrediting body it became clear that some items were just not going to apply in the retrieval context.

A good example is the process for recognising deteriorating patients and escalating care that is contained in Standard 9. There are obvious difficulties for a retrieval organisation with this item, because the reason we have been called is that someone has already recognised the patient is in the wrong place for the care they need. We are part of the process of escalating care. It would be like trying to apply this item to a hospital MET team – it doesn’t really make sense.

With some discussion we were able to gain exemptions from 40 items but that still left us with 224 with which to comply. Fortunately our quality manager is an absolute machine or I don’t think we would have made it through the process. There’s take away message number one: find an obsessive-compulsive quality manager.

It took months of work leading up to our inspection in December 2014 and granting of our accreditation in early 2015. Indeed I am pleased to say that we received a couple of “met with merits” in the governance section for our work developing a system of Carebundles derived from best available evidence for a number of diagnosis groups (and yes I’ve flagged a completely different post).

So yes or no?

Was the process worth it? I think independent verification is always worthwhile. As a non-government organisation I think that we have to be better than government provided services just to be perceived as equivalent. This is not particularly rational but nevertheless true. NGOs are sometimes assumed to be less rigorous, but there are plenty of stories of issues with quality of care (and associated cover-ups) within government services to say those groups shouldn’t be assumed to be better (think Staffordshire NHS Trust in the UK or Bundaberg closer to home).

As an NGO however we don’t even have a profit motive to usurp patient care as our primary focus. The problem with NGOs tends rather to be trying to do too much with too little because we are so focused on service delivery. External verification is a good reality check for us to ensure we are not spreading our resources too thinly, and that the quality of the services we provide is high. The NSQHS Standards allow us to do this in a general sense but they are not retrieval specific.

Is there another option for retrieval services?

Are there any external agencies specifically accrediting retrieval organisations in Australia? The Aeromedical Society of Australasia is currently developing standards but they are not yet complete.

Internationally there are two main players: The Commission for Accreditation of Medical Transport Systems (CAMTS) from North America and the European Aeromedical Institute (EURAMI). Late last year we were also re-accredited against the EURAMI standards. They are now up to version 4 which can be found here. We chose to go with the European organisation as we do a lot of work for European based assistance companies in this part of the world and EURAMI is an external standard that they recognised. For our recent accreditation EURAMI sent out an Emergency Physician who is originally from Germany and who has more than 20 years retrieval experience. He spent a couple of days going through our systems and documentation with the result that we were re-accredited for adult and paediatric critical care transport for another three years. We remain the only organisation in Australasia to have either CAMTS or EURAMI accreditation.

For me personally this is some comfort that I am not deluding myself. Group think is a well-documented phenomenon. Groups operating without external oversight can develop some bizarre practices over time. They talk up evidence that supports their point of view even if it is flimsy and low level (confirmation bias) whilst discounting anything that would disprove their pet theories. External accreditation at least compares us against a set of measures on which there is consensus of opinion that the measure matters.

What would be particularly encouraging is if national accreditation bodies didn’t need reminding that retrieval services are already providing a crucial link in high quality care within the health system. There are good organisations all over the place delivering first rate care.

Maybe that’s the problem. Retrieval across Australia, including all those remote spots, is done really well. Maybe the NSQHS needed more smoke to alert them.

For that reason alone, it was worth reminding them we’re here.