Tag Archives: research

Getting to the Start Line

We can debate the value of this advanced team model vs that advanced team model. We can debate videolaryngoscopy vs direct laryngoscopy for days. People do. It's all chump change compared to the real challenge: getting that team where they need to be. Dr Alan Garner and Dr Andrew Weatherall have a post reviewing a paper they've just had published that tries to add to this discussion. 

You may just have noticed that there are things happening in Brazil. They're called the Olympics and they're a curious mix of inspiring feats of athleticism and cynical marketing exercise inflicted upon cities that can probably barely afford them and which will be scarred for a generation afterwards. I'd hashtag that but it turns out the IOC will take you on if you mess with their precious sponsor money.

Now, you might think the obvious segue from a mention of the Olympics at the start there would be to mention drugs. The sort of drugs that enhance performance. It’s just that this feels too obvious. We’d rather make a very tangential link to kids. In particular, let’s talk about kids who are very, very injured.

 

The Teams

One of the bits of the Olympics that is a bit fascinating is the logistics of getting highly specialised teams into the right place at the right time in the sorts of cities that don’t usually get anything to the right place at the right time.

Maybe this is unfair but I don’t immediately think “super efficient transport infrastructure” when I think of Rio de Janeiro. And when I’m on a commute in the early hours of a Sydney workday, the fact that anyone was able to get a rowing team out of the stacking rack and to a patch of water in the hillock-shaded nirvana of Penrith during our local Olympics is astounding.

That’s kind of central to the whole circus though. Everyone is getting their right team to the right start line at the right time. It would probably be more entertaining if you dropped the table tennis team at the volleyball court but that’s not how it works when you’re trying to get the best of the best doing what they are built for.

Which is the cue to make this lumbering patchwork monster lurch back to the segue.

 

Right Place, Right Time

Advanced EMS needs to achieve the same goals of right place and right time. (Never said it would be a pretty link, but there it is.) Whatever your model of staff might be for delivering advanced prehospital care (paramedic/physician, paramedics across the board, St Bernard with an alcohol supply) there would be no one who doubts that the key to the whole thing is to get them to the right jobs at a time when those advanced skills have a role in making a difference.

You might be able to put one of those snorkels in the airway hanging upside down while drilling an intraosseous with particularly agile toes but if you’re back at base that’s not going to help the patient out there who is injured.

For a while now we’ve been really exercised by that problem. How do we make the tasking process better? Because tasking is not about the team at base. It’s not about which location the vehicle comes from. Tasking is always about the patient waiting for the care they need. They’re just wishing you’d been waiting there already, not still somewhere else.

The latest in a suite of papers which are ultimately about this question has gone online pretty recently. With the catchy title of "Physician staffed helicopter emergency medical service case identification – a before and after study in children" it builds a little from an earlier paper in which two parallel tasking systems for sending advanced EMS (in this case physician staffed HEMS) to injured kids were compared.

That paper suggested that when you had a team actually delivering HEMS involved in identifying and tasking of cases, they were far more likely to identify cases where their skills might help (meaning they were more likely to identify cases of severely injured kids from the initial emergency call information in the system) than a single non-HEMS tasker working away in the office.

The involvement of the HEMS team later got removed though, so it seemed timely to revisit this area by comparing the period before the changes, when the two systems worked together, with the subsequent period when it was just left to that one paramedic in the office.

Kids and the NSW System

It is going to help you to know a bit of background here. For a while now in New South Wales, there has been a stated goal in the trauma system to get kids straight to a paediatric trauma centre (PTC). Interest in this first came about because of overseas evidence suggesting it was the best option for kids. Local work followed, establishing that kids who went to other centres before the PTC tended to wait a long time in the first place they went to. Like 5 hours in that initial hospital before there was any movement.

Another study also suggested that kids who went to an adult trauma centre first had 3 to 6 times the risk of a bad outcome. And by bad outcome I mean a dying sort of outcome. Now, there are issues with being too firm on those numbers, particularly as not many kids die from traumatic injuries over any measured time period in our system so one or two kids surviving in the adult centre would make a big difference to those stats. But these were the sort of figures that made people keen to get kids straight to the specialist kids centres.
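The post's point about small numbers is worth making concrete. Here's a quick sketch with entirely hypothetical counts (they are not the study's data) showing how much one or two survivors can move a risk ratio when deaths are rare:

```python
# Hypothetical counts only, to show how fragile a risk ratio is when
# paediatric trauma deaths are rare. These are NOT the study's numbers.

def risk_ratio(deaths_a, total_a, deaths_b, total_b):
    """Risk of death in group A relative to group B."""
    return (deaths_a / total_a) / (deaths_b / total_b)

# Say 6 of 50 kids die after going via an adult centre first,
# versus 2 of 50 who went straight to the PTC:
print(risk_ratio(6, 50, 2, 50))   # 3.0

# Just two extra survivors in the adult-centre group pulls it down:
print(risk_ratio(4, 50, 2, 50))   # 2.0
```

Two different outcomes, and the headline multiplier shifts from 3 to 2. That's why the firmness of those "3 to 6 times" figures deserves caution.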

So the system is supposed to be designed to get kids to the kids’ hospitals as a priority. Do not pass go, do pass the adult centre.

Around the same time as that was becoming a talking point, the Head Injury Retrieval Trial was getting moving. As part of that trial, there was an agreed setup for the HEMS crew (including the aviators) to have access to the emergency call info on the ambulance computer screens on about a 90 second delay from when it hit the ambulance system.

For the trial (only adults), you'd look at the highest urgency trauma cases and look for specific trigger mechanisms which would lead to a protocolised response – either an immediate decision to randomise or a callback and interrogation step.

For kids, a different request was made. The request was just to respond to severely injured kids (where it seemed like the severity matched the initial call info or the mechanism was a super bad one; something like “kid vs train” for example). No randomisation as they were not in the trial; we just went.

So the crew screened for paediatric cases too, as requested. And went to paediatric cases. There was some real learning in that too, as the HEMS crews started making it to a much higher proportion of severe paeds trauma (and drowning) than had historically been the case. This was partly due to the higher rate of recognition of cases, and partly due to the fact that the HEMS team was really fast, getting to the patient before the road paramedics had moved them on. You can read more about the kind of time intervals the HEMS team achieved here. As far as we are aware from the published literature the whole end-to-end process was the fastest ever reported for a physician staffed HEMS system, while still offering the full range of interventions when indicated.

Mirror, Mirror

A third of the way through the HIRT thing happening, the ambulance service introduced a role within ambulance which hadn’t been there before. The Rapid Launch Trauma Coordinator. Their role? To look at the screens as jobs came in and try to identify cases where advanced EMS might help.

As it turned out they elected to include the trial area as well as other areas of the state in the roving brief for this paramedic sitting in the control centre. While that was an issue for the trial, for kids it was just a bonus, right? Another set of eyes trying to find kids who might need help sounded perfect.

The bonus in kids was that there was no need to try and have the person doing the RLTC work blinded to whether the case had been randomised or not, so if the HIRT crew in their screening saw a case with a kid, they’d call quickly and see if the RLTC knew of a reason they shouldn’t go. It was a nice collegial cross-check.

This also ensured that only one advanced team went unless they thought there were multiple casualties (in the trial double tasking was common due to the blinding of the RLTC to the randomisation allocation). So the cross-check avoided double-ups and maximised use of resources too.

Well how close to being the same are they then?

It was in this context of the systems for screening cases operating alongside each other that the first bit of research was done [2]. Over a two year period cases with severely injured kids occurring while the HEMS was available were reviewed to see if either screening process picked them up.

There were 44 kids fitting that bill (again, the numbers are low in the Sydney metro area). 21 weren’t picked up by anyone. 20 were picked up by that HIRT crew and 3 were picked up by that person working on their lonesome in central control.

When you looked more broadly at times the HIRT system wasn’t available compared to those it was, the proportion of patients directly transferred to the PTC was much lower. This fits with other stuff showing that advanced EMS teams tend to be more comfortable bypassing other sites to make it to a PTC, while also performing more interventions.

Another thing this research threw up was to do with time of a different kind: when HIRT was available the median time to reach the PTC was 92 minutes, compared to 296 (nearly 5 hours again) when they weren’t available.

So on that first round of research the message seemed to be that there was something about that case screening process that picked the severely injured kids more often. Maybe it was the extra eyes and regular rotation. Maybe it was better familiarity with the nature of the operational work for advanced EMS on the ground. Either way that screening process seemed to support the goals of the trauma system pretty well.

Things You Take Away

Come March 2011, the screens were taken away from the HIRT set-up as the trial wrapped up. No more screening by the actual HEMS crew. Back to centralised screening in the office.

As the HIRT screening process seemed to have such a dramatic effect on the trauma system in Sydney we wanted to keep it going, as did the trauma people at the Children's Hospital at Westmead. They had particularly noted the change as, by virtue of geography, they are the closest kids' centre to most of the Sydney basin. The increase in kids arriving straight to the ED even led them to revise their internal trauma systems. But away the screens went.

So the question for this subsequent bit of research was really pretty simple: did we lose anything going back to the centralised process alone? More crucially, do the patients lose anything?

Comparisons

This time the comparison wasn’t the two screening processes working alongside each other. It was before and after. What didn’t change was the sort of paeds patients being looked for. It was any kid with severe trauma. This might include head injury, trunk trauma, limb injuries, penetrating injuries, near drownings, burns and multi-casualty incidents with kids involved.

So in the ‘before’ epoch there were 71 cases of severely injured kids (covering 34 months) that fitted the bill. For the ‘after’ epoch there were 126 cases (over 54 months).

In the ‘before’ epoch with the systems working alongside each other, 62% of severely injured kids were picked up and had an advanced EMS team sent.

In the ‘after’ phase? It fell. To 31%.

And while the identification rate halved, it also took kids longer to reach the PTC, going from 69 to 97 minutes. 28 minutes might seem small but then most of us have probably seen how much can change in a severely injured patient in less time than an episode of Playschool runs for.
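For anyone wanting to sanity-check the headline numbers, here's a quick back-of-envelope sketch. The counts are back-calculated from the percentages quoted in this post, so they're approximations rather than figures lifted from the paper's tables:

```python
# Quick sanity check of the before/after identification numbers.
# Counts are back-calculated from the quoted percentages in this post,
# so they are approximate, not the paper's exact tables.

before_total, before_rate = 71, 0.62    # 'before' epoch, 34 months
after_total, after_rate = 126, 0.31     # 'after' epoch, 54 months

before_identified = round(before_total * before_rate)
after_identified = round(after_total * after_rate)

print(before_identified)                   # roughly 44 kids picked up
print(after_identified)                    # roughly 39 kids picked up
print(round(before_rate / after_rate, 1))  # the rate roughly halved: 2.0
```

A larger pool of severely injured kids in the longer 'after' epoch, yet about the same absolute number identified. Hence the halved rate.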

Things that didn’t change? Well the overtriage rate for the CareFlight crew was pretty much the same. And whether advanced EMS teams or paramedic only teams reached the kids, their respective rates of transfer direct to PTC were pretty much the same as in the ‘before’ time. It seems that once crews get tasked they treat the patients much the same as their training sets them up to do.

It certainly seems that the right team in our system is a physician/paramedic crew (in NSW the doctor/paramedic mix is the advanced EMS set-up used across the board) as the kids get much more intensively treated at the scene and then get transported directly to a kids centre.  In other words faster access to advanced interventions and much faster access to the specialist kids trauma people.  Right team to the right patient at the right time.

 

The Washup

So we’re left with a few things to consider. There is an acceptance locally that severely injured kids are more likely to get time critical interventions if an advanced EMS team is sent (and advanced EMS teams could come from different backgrounds in different places, it just happens to be physician/paramedic here). There is a belief that those who’ve had that extra training and exposure will feel more comfortable with kids, who can be challenging.

The system has set a goal of getting those advanced teams to severely injured patients, and in this case we’re talking about kids. These two papers suggest that a model where those who are directly involved in advanced EMS are part of the screening process will identify more severely injured kids and get more of them straight to the PTC and definitive care.

Should this be a surprise? As the paper mentions, this isn't the only example where having clinicians who do advanced EMS work as part of the screening process has been a success above and beyond those who specialise in screening all calls. It may be that knowing the lay of the land when it comes to service capability counts for a whole lot. There is also work suggesting that telephone interrogation of the emergency caller by a flight paramedic is accurate when compared to assessment by on-ground ambulance crews when trying to figure out whether advanced care might help.

This was the experience with the HIRT screening process too, where structured callback was part of the game. The HIRT system also had some unique features.  It is the only one we have heard of where the crew sitting next to the helicopter identified the cases they responded to.  This seemed to create added benefits in shortening the time to getting airborne because parallel activities come to the fore (see the paper for more).  A very consistent six minutes from the beginning of the triple zero call (emergency call from the public) to airborne is pretty quick.

Does this have any implications for adults too? Back in 2007 when the RLTC was introduced, the local ambulance administration made the decision that sending advanced EMS teams to severely injured patients was the standard in Sydney, and the RLTC's job was to make that happen. From the time the RLTC started till the screens were removed in March 2011, the HIRT system identified 499 severely injured adults. The RLTC also spotted 82 of these, or 16%. So the HIRT spotting system appears to be even more effective in adults than in kids.

Right now there are a bunch of different advanced EMS teams in Sydney, all wanting to get to that right patient and offer top notch care. Those patients would be very happy to have teams with the full range of skills coming. And all those teams have the skills to run the sort of protocolised screening that operated during HIRT. Instead they're sitting waiting for someone else to look their way.

So let’s work it through again.

Let’s say you were trying to meet that thorny challenge of right team, right place, right time. Let’s say you had ended up trying out a screening system similar to some others around the world but with some tweaks that made it even better, particularly for local conditions.

Let’s say that system hugely improved the way that severely injured kids were cared for.  Let’s say that system was also even better at spotting severely injured adults too.  Let’s say that system was part of the fastest end-to-end physician HEMS system yet described in the world literature.

Let’s say when you moved away from that screening system you didn’t pick up as many of the severely injured kids as you wanted to so they missed out on early advanced care, the kids didn’t get to your preferred destination first up as often and they took longer to get there.

You might ask why such a hugely effective system was discontinued in the first place.

You might ask why it has not been reinstated given the subsequent evidence.

And they would be very good questions.

 

Notes:

The image of Charlie in his guises was on the Creative Commons area of flickr and posted by Kevin O’Mara. It’s unchanged here.

The papers mentioned again are:

Garner AA, Lee A, Weatherall A, Langcake M, Balogh ZJ. Physician staffed helicopter emergency medical service case identification – a before and after study in children. Scand J Trauma Resusc Emerg Med. 2016;24:92.

Garner AA, Lee A, Weatherall A. Physician staffed helicopter emergency medical service dispatch via centralised control or directly by crew – case identification rates and effect on the Sydney paediatric trauma system. Scand J Trauma Resusc Emerg Med. 2012;20:82. 

Soundappan SVS, Holland AJA, Fahy F, et al. Transfer of Pediatric Trauma Patients to a Tertiary Pediatric Trauma Centre: Appropriateness and Timeliness. J Trauma. 2007;62:1229-33.

Mitchell RJ, Curtis K, Chong S, et al. Comparative analysis of trends in paediatric trauma outcomes in New South Wales, Australia. Injury. 2013;44:97-103.

Garner AA, Mann KP, Fearnside M, et al. The Head Injury Retrieval Trial (HIRT): a single-centre randomised controlled trial of physician prehospital management of severe brain injury compared with management by paramedics only. Emerg Med J. 2015;32:869-75.

Garner AA, Mann KP, Poynter E, et al. Prehospital response model and time to CT scan in blunt trauma patients; an exploratory analysis of data from the head injury retrieval trial. Scand J Trauma Resusc Emerg Med. 2015;23:28. 

Garner AA, Fearnside M, Gebski V. The study protocol for the Head Injury Retrieval Trial (HIRT): a single centre randomised controlled trial of physician prehospital management of severe blunt head injury compared with management by paramedics. Scand J Trauma Resusc Emerg Med. 2013;21:69. 

Wilmer I, Chalk G, Davies GE, et al. Air ambulance tasking: mechanism of injury, telephone interrogation or ambulance crew assessment? Emerg Med J. 2015;32:813-6. 

Did you check all of those out? Why not take a break from all of that and watch these French kids rock a club track?

 

 

Does the Thing in the Box Do What it Says?

Sometimes really simple questions don’t get asked. Here’s a joint post from Alan Garner and Andrew Weatherall on places you end up when you ask simple questions about ways of warming blood. 

Carriage of packed red blood cells (PRBC) by HEMS crews has become increasingly common in the last several years in both Europe and North America.  CareFlight was an early adopter in this regard and has been carrying PRBCs to prehospital incident scenes since the 1980s.  We reported a case of a massive prehospital transfusion in the 1990s (worth a read to see how much Haemaccel was given before we arrived on the scene and how much things have changed in fluid management).  In that case we tried to give plasma and platelets as well but the logistics were very difficult.  This remains the case in Australia with plasma and platelets still not viable in a preparation that is practical for prehospital use.

Returning to the PRBCs however, the issue of warming them was something that always vexed us. We experimented with chemical heat packs in the late 1990s and early 2000s but could not find a method that we felt was reliable enough. We also looked at the Thermal Angel device from the US when it appeared on the market nearly 15 years ago, but as the battery weighed the best part of 3 kg we decided that the technology had not yet reached a point where it was viable for us to be carrying on our backs (battery technology has moved on a long way in the last 10 years and Thermal Angel now have a battery weighing 550 g).

Fast Forward

Hence we were pretty excited when we found there was a new device available in the Australian market, the Belmont Buddy Lite, where the whole set-up to warm blood or fluid weighs less than a kilogram. We have been using the device for 3 years now, and our clinical impression was somewhere between impressed and "finally".

Still, one of our docs, James Milligan, thought it worth validating this new technology. Part of that was about checking that the machine does what it says on the box. Is it just marketing or is it really that good?

The other thing we wanted to assess was how a commercial device compared to all those old techniques we were once stuck with. Traditional methods used by EMS in our part of the world include:

  • Stuffing the unit under your armpit inside your jacket for as long as possible prior to transfusion.
  • Putting it on a warm surface (black spine board in the sun or bonnet of a vehicle). Yep, baking.
  • That chemical heat pack method we had tried 10 years ago.

 

Some things aren’t a prehospital option. Well this isn’t anywhere maybe.

The Nuts and Bolts

Now, how would you go about testing this? The first thought bubble included a pump set, a theatres wash bowl and a standard old temperature probe that you might use at operation. Oh, and some blood. Like most bubbles that don’t involve property, it didn’t last long.

So we were left with a question: how do you try and set things up to test a system for the real world so it is actually like you’d use it in that real world, while still allowing measurements with a bit of rigour? How consistent are you when you deploy a blood-giving pump set?

Enter Martin Gill, perfusionist extraordinaire from The Children's Hospital at Westmead. Because when we thought "how do we test prehospital blood warmers" obviously we thought about heart surgery in newborns. We turned to Martin with the following brief:

  • We want to test prehospital blood warming options.
  • We want to measure temperature really well.
  • We’re keen on being pretty rigorous about as many things as we can actually. Can we guarantee flow rate reliably?
  • We figure we could use units of blood about to be discarded and we want to be able to do the most with what we’ve got. So we want to be able to use a unit for a bunch of testing runs.

And Martin delivered. He designed a circuit (check the diagram) that would guarantee flow, measure in 3 spots, cool the blood once it had run through, and run it all through again. There are some things you could never come up with yourself. That’s just one.

[Circuit diagram]
It looks a little different in three dimensions but you get the idea.

 

You might wonder how hard is it to get blood? Well actually it was pretty easy (thank you Sydney Children’s Hospital Network Human Research Ethics Committee and Haematology at The Children’s Hospital at Westmead).

The results have just been published online in Injury.  So this humble little idea has led us some places and told us some things. What were those things then?

  1. As you will note, the commercial warmer was the only method that reliably warmed the blood to something like a physiological level.
  2. The change in temperature as the products pass through the line itself was more than we’d expected. Even the measurement of temperature just a little bit distal to the bag of blood showed a sharp step up in temperature (the mean was 9.4°C).
  3. Any of the options that weren’t the commercially available device pretty much guaranteed very cold blood reaching the end of the line. After all, 18°C is the temperature we aim for when setting up deep hypothermic circulatory arrest in the operating suite. It is very cold. Should you even consider packed red blood cells if you aren’t going to warm them effectively?

In some ways, these aren’t super surprising items but small things like this can still be valuable. This was a humble little bench study of a simple question. Still, finding out that a device does what it says on the box by direct observation is reassuring. But …

 

We Have Questions

Research is very often an iterative process. Ask a question, provide answers to one small element of the initial puzzle, find another puzzle along the way and define a new question to explore. Each new question contributes more to the picture. On top of that, finding our way to the lab set-up and squeezing in the measurements around other work has taken a bit of time and things have moved along. This itself suggests new questions to ask.

Will everyone’s questions be the same? Well here are ours, so you tell us.

  1. Now that we’ve come up with a lab set-up to test the manufacturer’s recommended use, what about testing a situation that more closely matches how the warming device is used at the roadside? As noted in the discussion, we don’t use machines pumping blood at a steady rate of 50 mL/min. How will a warmer perform at the much higher flow rates we demand in prehospital use? Will it still be a warmer or more of a tepid infusion system?
  2. Are all devices the same? We didn’t choose the Buddy Lite because we were after a sweet, sweet money deal. It was the only prehospital fluid warmer with Therapeutic Goods Administration registration in Australia. There are now at least 2 other devices weighing less than 1 kg on the international market. They also advertise an ability to work at higher flow rates of up to 200 mL/min.
  3. Are there other potential problems when you warm the blood with these low dead space solutions? Let’s just imagine for a second you’re a red blood cell rushing through a warmer. In a pretty small area you’ll be put through a temperature change of over 20°C within a system aiming to maximise that heat transfer in a very small bit of space. That implies the pressure change across the warming device could be pretty sizeable. When you get to the end of that little warming chamber, having effectively passed through a very high pressure furnace, is there a chance you might feel like you’re going to disintegrate at the end of it all? What we’re alluding to is maybe, just maybe, does making red blood cells change temperature quickly while rushing through the system at up to 200 mL/min leave those red cells happy or is haemolysis a risk? If it was a risk, would the patient benefit from receiving smashed up bits of red cell?
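A little back-of-envelope physics helps show why those higher flow rates are a real ask of a sub-kilogram device. This sketch uses textbook approximations for the density and specific heat of blood products (not figures from the paper), so treat the outputs as rough orders of magnitude:

```python
# Back-of-envelope physics for why high flow rates are hard on warmers.
# Density and specific heat are textbook approximations, not figures
# from the paper.
DENSITY_G_PER_ML = 1.06   # approx. density of packed red cells (g/mL)
SPECIFIC_HEAT = 3.6       # approx. specific heat of blood, J/(g.K)

def warming_power_watts(flow_ml_per_min, temp_in_c, temp_out_c):
    """Continuous power needed to lift blood from temp_in to temp_out."""
    mass_flow_g_per_s = flow_ml_per_min * DENSITY_G_PER_ML / 60.0
    return mass_flow_g_per_s * SPECIFIC_HEAT * (temp_out_c - temp_in_c)

# Bench-test flow of 50 mL/min, fridge-cold 4°C up to 37°C:
print(round(warming_power_watts(50, 4, 37)))    # ~105 W
# The 200 mL/min quoted for newer devices needs about four times that:
print(round(warming_power_watts(200, 4, 37)))   # ~420 W
```

Several hundred watts of continuous heat transfer through a tiny warming chamber, from a battery you carry on your back, is exactly why the "tepid infusion system" question is worth asking.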

 

Now that we’ve established a good model that will let us do rigorous testing, we can ask those new questions. Without the simpler first question, we wouldn’t be so ready to get going. Those new questions would seem to be: how do modern devices perform at flow rates useful for the clinician rather than the marketing pamphlet? And what happens to the red cells in the process?

That’s the space to watch. Because that’s where we’re going next.

 

 

Notes and References:

Here’s the link to the prehospital massive transfusion case report mentioned near the start.

Garner AA, Bartolacci RA Massive prehospital transfusion in multiple blunt trauma. Med J Aust. 1999;170:23-5.

And here’s the link to the early online version of the blood warmer paper:

Milligan J, Lee A, Gill M, Weatherall A, Tetlow C, Garner AA. Performance comparison of improved prehospital blood warming techniques and a commercial blood warmer. Injury. [in press]

That image of the fire is from flickr’s Creative Commons area and is unaltered from the post via the account “Thomas’s Pics”.

And did you get this far? Good for you. Much respect to all those who read to the end of a thing. For this you get a reminder that you can follow along by signing up to receive updates when we post.

You also get the word of the week: colophon [kol-uh-fon], which is a publisher’s or printer’s distinctive emblem used as an identifying device on books or other works. Alternatively it can be the inscription at the end of a book or manuscript.

 

 

Summers Past – A Look Back at Drowning Cases

A quick post on a recent paper from one of the authors, Andrew Weatherall. You can get the full text over here and it might be worth having a look at the quick review of a study from the Netherlands that Alan Garner did previously. 

Every summer, for too many summers, prehospital teams at CareFlight go to drownings. Too many drownings. This isn’t to say it’s only summer, but that is definitely when most of the work happens. Sometimes they’re clustered in a way that makes you think there’s some malevolent purpose to it, some malign manipulation of chance striking at families.

And also at our teams, particularly the paramedics, backing up day after day.

So drowning is something we want to understand better. What are we offering? What are our longer term outcomes?

And surprisingly given that drowning has been a long-term feature of preventable tragedy, particularly in Australia, there’s not really that much research out there. In fact it was only in 2002 that clever people at the World Congress on Drowning sat down and agreed on definitions for what was really drowning.

So we set about trying to add at least a little bit to the discussion.

Looking Back

Retrospective research has a bunch of issues. It has a place though when you’re trying to understand your current practice and what you’re actually seeing, rather than just what you think you’re seeing.

We went back and looked at a 5 year period between April 2007 and 2012 (and full credit to co-author Claire Barker who did the majority of that grunt work). For most of this time the tasking system included the HEMS crew observing the computer assisted dispatch system screens. For some of the time there was also a central control person doing this while from March 2011 on there was only the central control person. The aim of the game was to pick up cases where there was an immersion mechanism and either reduced conscious state or CPR and get a team with advanced medical skills moving.

Key points of interest were whether the cases were picked up, what interventions the HEMS team undertook and, where possible, what the outcomes were for those patients. In particular, was it possible to glean their longer term outcomes?

Things We Found

Up until the move to solely central tasking, all of those at the severe end of the scale (ISS > 15, meaning they had an altered level of consciousness or documented cardiac arrest) were identified for a HEMS team response. Once it went to central control alone, 3 of the relevant 7 were not identified (obviously not super big numbers).

Of the 42 patients transported, 29 of them could fairly be said to have had an ISS > 15 and you can see the interventions in the prehospital setting here:

[Table 2: prehospital interventions]

So what were our other findings?

Those who present with GCS 3

This group did not do well. Of the 14 in this group, 10 died within 2 weeks. Of the other 4, one died at 17 months, having had significant neurological impairment after their drowning.

But there was one patient with GCS 3 and a first reported rhythm of asystole that was rated as having normal neurological development on follow-up by the hospital system.

What was different about this kid? How do we make that outlier fit right in the middle?

That’s a nagging question from this study.

9 and above, 8 and below

In our patients, if you had an initial GCS over 8 there was no evidence of new neurological deficit. All the patients with GCS 8 or below were intubated and ventilated by the teams. Every patient with a GCS over 3 when the team arrived survived. All of the survivors (with any initial GCS) had return of spontaneous circulation by the time they were in ED.

Another feature was the neurological outcome for those with an initial GCS between 4 and 7 – 7 of the 8 kids in this group had a good neurological outcome. (One patient had pre-existing neurological impairment but returned to baseline.)
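For anyone wanting the proportions at a glance, here’s a quick sanity check using only the counts quoted above:

```python
# A quick sanity check on the proportions quoted above. The counts come
# straight from the series described in this post.
gcs3_total, gcs3_early_deaths = 14, 10   # initial GCS 3 group
gcs47_total, gcs47_good_outcome = 8, 7   # initial GCS 4-7 group

gcs3_mortality = gcs3_early_deaths / gcs3_total
gcs47_good = gcs47_good_outcome / gcs47_total

print(f"GCS 3 early mortality: {gcs3_mortality:.1%}")          # 71.4%
print(f"GCS 4-7 good neurological outcome: {gcs47_good:.1%}")  # 87.5%
```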

[Figure]

The Bystanders

An observation along the way that is a real highlight. All but one child with a GCS less than 8 on arrival of the HEMS team had received bystander CPR (and that included all of those who were asystolic). Here’s hoping that marks good community knowledge of what has to be done.

The Stuff We Just Can’t Say

There are the usual issues with retrospective studies here. Some patients may have moved out of area and not had subsequent follow-up. There are also those three cases of severe drowning not picked up after the change in tasking options after March 2011. As those patients weren’t managed by the reporting HEMS team, they don’t make up part of the 42 in this data set. As mentioned in the results, two of those cases went to adult trauma centres first, then were transferred more than 4 hours after the incident to a paeds specialist centre, where they unfortunately passed away. The other case did get a HEMS response by another organisation, but we didn’t have detailed access to the treatments undertaken.

Another really important point about the follow-up. The follow-up here is short term, as that’s what was available from the hospitals. It may well be that more subtle neurological impairment only becomes evident as kids get older, particularly when they hit school age. I happen to know that The Children’s Hospital at Westmead has done some work on longer term neurological outcomes which should hopefully hit the public airwaves soon. Intriguingly they’ve looked at kids thought to be entirely neurologically normal and followed them in detail over the longer term. No spoilers in this post but I’ll definitely follow up when it breaks.

It’s also the case that when you’re looking at those who get a HEMS response, you’re not catching the denominator of all drownings. This study doesn’t help us understand what proportion of total drownings end up at this more severe end of the spectrum.

With those provisos, retrospective research still has a key role. It’s still a brick of knowledge. A small brick maybe, but a brick. It’s certainly a better brick to build with than you end up with if you don’t look.

What do we need?

More. Always more data. There is a bit here that suggests things similar to other series, some of which are a good deal bigger. Outcomes after arrest from drowning are better than is generally the case for cardiac arrest. Our impression is that, as suggested by that Netherlands paper, time does matter and that once you get beyond 30 minutes of resuscitation, further efforts are unlikely to help.

The other factor that would appear to make sense (but clearly needs lots of robust research) is that earlier delivery of interventions that should make a difference to outcomes would be good. That surely starts with a big focus on bystander CPR. But that should also be backed up by accurate, early triage of teams with the skills to extend that care. The right teams in the right place at the right time remains the challenge.

 

Notes:

There’s a comprehensive bit on the tasking stuff in this earlier paper relating to tasking and paeds trauma. It should clarify the different systems used, which can be a bit hard to get your head around.

After the paper came out, @LifeguardsWB asked a pretty reasonable question “What were the average response and transport times?”

So as described in the subsequent tweet, the median times were:

  • Start of emergency call to team on scene = 17 minutes.
  • Time on the scene = 17 minutes.
  • Total prehospital time = 54 minutes.
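Those medians imply roughly 20 minutes of transport time, though medians of separate phases aren’t strictly additive, so treat this as back-of-envelope arithmetic only:

```python
# Back-of-envelope arithmetic on the quoted medians. Medians of separate
# phases aren't strictly additive, so this is a rough figure only.
call_to_scene = 17      # minutes, start of emergency call to team on scene
time_on_scene = 17      # minutes
total_prehospital = 54  # minutes

implied_transport = total_prehospital - call_to_scene - time_on_scene
print(implied_transport)  # roughly 20 minutes to hospital
```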

 

 

 

Sandpits, Better Eyes and New Monitors – Can NIRS work for prehospital medicine?

This is part 2 of a series (part 1 is here) on trying to study near-infrared spectroscopy in the prehospital setting by Dr Andrew Weatherall (@AndyDW_). Can NIRS work? No one can be sure but here’s one approach to getting some data we can actually use. 

A while back I did a post where I pointed out that when you get sold technology, there’s a whole history behind the machine that goes beep, which means it’s probably not quite what you’re told. And the example I used was near-infrared spectroscopy tissue oximetry.

That was partly because I’m involved in research on NIRS monitoring and I’ve spent a lot of time looking at it. Like looking carefully in the mirror, there are a lot of blemishes I’d miss on a casual glance. I also don’t mind pointing out those blemishes.

So that post was about all the things that could get in the way – light bouncing about like a pinball, humans being distressingly uncatlike, comparing monitors that might be apples and aardvarks rather than apples and apples, and basing your whole methodology on assumptions about tissue blood compartments. Oh, and maybe you can’t get sunlight near your red light.

Sheesh.

The thing is, I really want to answer that original question – “How’s the brain?”

So enough of the problems, can we find some solutions?

Actually I’m not certain. But I can say what we came up with. It’s a plan that involves sandpits, hiding numbers and finding better eyes. Oh, and changing the design of monitors forever and ever.

 

Playing in Sandpits

Our first step was to try and figure out if NIRS technology could even work in the places it wasn’t designed for. Not near the cosy bleeping of an operating theatre monitor where the roughest conditions might be inflicted by a rogue playlist.

We figured that the first issues might be all the practical things that stop monitors working so effectively. And we already knew that in the operating suite you often needed to provide shielding from external light to allow reliable measurements.

So we asked for volunteers, stuck sensors to their heads and took them driving in an ambulance or hopped on the helicopter to do some loops near Parramatta. It gave us lots of chances to figure out the practicalities of using an extra monitor too.

And we learnt a bit. That we could do it with some layers of shielding between the sensors and the outside world. That the device we tested, though comfortable next to an intensive care bed, was a bit unwieldy at 6 kg and 30 cm long to be carried to the roadside. Most importantly, that it was worth pushing on rather than flattening everything in the sandpit and starting again.

Early engineering advice included “just put a tinfoil hat on everyone to shield the sensors”. I just … I … can’t … [via eclipse_etc at Flickr ‘The Commons’]

Hiding Numbers and Getting Out of the Way

The next thing that was pretty obvious was that we couldn’t set out to figure out what NIRS monitoring values were significant and at the same time deliver treatments on the basis of those numbers. We needed to prospectively look at the data from the monitor and see what associations were evident and establish which bits in the monitoring actually mattered for patients and clinicians.

Of course paramedics and doctors tend to like to fix stuff. Give them a “regional saturation” number that looks a little like mixed venous oxygen saturation, with the manufacturer (usually) putting a little line on the screen as the “good-bad” cutoff, and you have a pretty good way to see that fixing reflex kick in. So to make sure it really is a prospective observational study, and that we’re observing what happens in actual patients receiving their usual treatment, we ended up with a monitor with none of the numbers displayed. Better not to be tempted.

It was also obvious that we couldn’t ask the treating team to look after the NIRS monitor, because they’d no longer be delivering exactly the same care they always do, and occasionally (or always) they’d be distracted by the patient from being as obsessive about the NIRS monitor as we need for research.

So recruiting needs a separate person just to manage the monitor. On the plus side this also means we can mark the electronic record accurately when treatments like anaesthesia, intubation and ventilation or transfusion happen (or indeed when the patient’s condition obviously changes). It’s all more data that might be useful.

Getting Better Eyes

One of the big problems with NIRS tissue oximetry so far seems to be that the “absolute oximetry” isn’t that absolute. When you see something claiming a specific number is the cutoff where things are all good or bad, you can throw a bucket of salt on that, not just a pinch.

 

Maybe all of this salt. [via user pee vee at Flickr’s ‘The Commons’]
The other thing is that picking up evolving changes in a dynamic clinical environment is difficult. What if it isn’t just the change in oximetry number, but the rate of change that matters? What if it’s the rate of change in that number vs the change over the same time in total haemoglobin measurements, or the balance between cerebral monitoring and peripheral monitoring at the same time? How does a clinician keep track of that?

What we might need is a way of analysing the information that looks for patterns in the biological signals or can look at trends. The good news is there are people who can do that, as it’s actually a pretty common thing for clever biomedical engineers to consider. So there are some clever biomedical engineers who will be part of looking at the data we obtain. When they have spare time from building a bionic eye.

My bet is that if NIRS monitoring is ever to show real benefits to patients it won’t be only by looking at regional saturation (though we’ll try that too). It will be the way we look at the data that matters. Examining rapidly changing trends across different values might just be the key.
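As a toy illustration of that idea (everything here is made up for illustration: the trace values, the sampling and the 5-sample window are all assumptions, not anything from our actual analysis plan):

```python
# A toy illustration of watching rate of change rather than absolute value.
# The readings and the 5-sample window are invented for illustration only.
def rate_of_change(readings, window=5):
    """Simple slope estimate: change per sample over the last `window` samples."""
    if len(readings) < window:
        return 0.0
    recent = readings[-window:]
    return (recent[-1] - recent[0]) / (window - 1)

# Two hypothetical regional saturation traces sampled every few seconds:
stable  = [68, 67, 68, 69, 68, 67, 68]
falling = [68, 66, 63, 61, 58, 55, 52]

print(rate_of_change(stable))   # 0.0 – nothing evolving here
print(rate_of_change(falling))  # -2.75 – a clearly negative trend
```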

 

Thinking About the Monitors We Need

Let’s imagine it all works. Let’s assume that even with all those hurdles the analysis reveals ways to pretty reliably pick up that haematomas are developing, that the brain is not receiving enough blood flow, or that oedema is developing (and there are other settings where these things have been shown). There’s still a big problem: how do you make that information useful to a clinician who has a significant cognitive load while looking after a patient?

For each NIRS sensor that is on (3 in this study) we’ll be generating 4 measurements with trendlines. The patient is likely to have pulse oximetry, ECG, blood pressure and often end-tidal capnography too. Putting together multiple bits of information is an underappreciated skill that highly trained clinicians make a part of every day. But it adds a lot of work. How would you go with 12 more monitoring values on the screen?

Yes Sclater’s lemur, that’s 16 monitoring values to keep track of. [via user Tambako the Jaguar at flickr]
So before we can take any useful stuff the analysis reveals and free clinicians to use the information, we need to figure out how to present it in a way that lets them glance at the monitor and understand it.

How should we do that? Well it’s a bit hard to know until we know what we need to display. My current guess is that it will involve getting clever graphics people to come up with a way to display the aggregated information through shapes and colours rather than our more familiar waveforms (and that’s not an entirely novel idea, other people have been on this for a bit).

So then we’d need to test the most effective way to show it before finally trying interventional studies.

This could take a bit.

And that is a story about the many, many steps for just one group trying to figure out if a particular monitor might work in the real world of prehospital medicine. There are others taking steps on their own path with similar goals and I’m sure they’ll all do it slightly differently.

I hope we end up bumping into each other somewhere along the road.

 

Notes and References:

Here’s the link to our first volunteer study (unfortunately Acta Anaesthesiologica Scandinavica has a paywall):

Weatherall A, Skowno J, Lansdown A, Lupton T, Garner A. Feasibility of cerebral near-infrared spectroscopy monitoring in the pre-hospital environment. Acta Anaesthesiol Scand 2012;56:172-7.

If you didn’t look on the way past, you should really check the video of Prof. Nigel Lovell introducing their version of a bionic eye. It’s pretty astonishing and I can’t quite believe I get to learn things from him.

It’s the very clever Dr Paul Middleton who was first author on a review of noninvasive monitoring in the emergency department that is well worth a read (alas, another paywall):

Middleton PM, Davies SR. Noninvasive hemodynamic monitoring in the emergency department. Curr Opin Crit Care. 2011;17:342-50.

Here’s the PubMed link from a team taking a preliminary look at tissue oxygen monitoring after out-of-hospital cardiac arrest:

Frisch A, Suffoletto BP, Frank R, Martin-Gill C, Menegazzi JJ. Potential utility of near-infrared spectroscopy in out-of-hospital cardiac arrest: an illustrative case series. Prehosp Emerg Care. 2012;16:564-70.

 

All the images here were via flickr’s ‘The Commons’ area and used without any modification under CC 2.0

Research That is Positive When It Is Negative

This week’s post is the first in a series touching on some of the challenges when you start researching technology for the prehospital setting (or anywhere really). Dr Andrew Weatherall (@AndyDW_) on why some monitors aren’t the monitors you’re sold.

I am new to the research game. As is often the case, that brings with it plenty of zeal and some very rapid learning. When we first started talking about the project that’s now my PhD, we set out wondering if we could show something that was both a bit new and a positive thing to add to patient clinical care.

It didn’t take long to realise we’d still be doing something worthwhile if the project didn’t work one little bit.

Yep, if this thing doesn’t work, that would still be fine.

 

Simple Questions

I’m going to assume no one knows anything about this project (seems the most realistic possibility). It’s a project about brains and lights and monitors.

It came out of two separate areas of work. One of these was the prehospital bit of my practice. All too often I’d be at an accident scene, with an unconscious patient and irritated by the big fuzzy mess at the middle of the clinical puzzle.

“How’s the brain?”

Not “how are the peripheral readings of saturation and blood pressure against the normative bell curve?” Not “how are the gross clinical neurological signs that will change mostly when things get really grim?”

“How’s the brain?”

At the same time at the kids’ hospital where I do most of my anaesthesia we were introducing near-infrared spectroscopy tissue oximetry to monitor the brain, particularly in cardiac surgery cases.

The story sounded good. A noninvasive monitor, not relying on pulsatile flow, that provides a measure of oxygen levels in the tissue where you place the probe (referred to as regional oxygen saturation, tissue saturation or some other variant, and turned into an ideal number on a scale between 0 and 100) and which reacts quickly to any changes. You can test it out by putting a tourniquet on your arm and watching the magic oxygen number dive while you inflate it.

Except of course it’s not really as simple as that. If you ask a rep trying to sell one of these near-infrared spectroscopy (NIRS) devices, they’ll dazzle you with all sorts of things that are a bit true. They’re more accurate now. They use more wavelengths now. Lower numbers in the brain are associated with things on scans.

But it’s still not that simple. Maybe if I expand on why that is, it will be clearer why I say I would be OK with showing it doesn’t work. And along the way, there’s a few things that are pertinent when considering the claims of any new monitoring systems.

 

A Bit About Tech

Back in 1977, a researcher by the name of Franz Jöbsis described a technique where you could shine light through brain tissue, look at the light that made it out the other side and figure out stuff about the levels of oxygen and metabolism happening deep in that brain tissue. This was the start of tissue spectroscopy.

Now, it’s 38 years later and this technology isn’t standard. We’re still trying to figure out what the hell to do with it. That might just be the first clue that it’s a bit complicated.

Of course the marketing will mention it’s taken a while to figure it out. Sometimes they’ll refer to the clinical monitors of the 1990s and early 2000s and mention it’s got better just recently. They don’t really give you the full breadth of all the challenges they’ve dealt with along the way. So why not look at just a few?

  1. Humans Aren’t Much Like Cats

Jöbsis originally tested his technique on cats. And while you might find it hard to convince cat lovers, the brain of a cat isn’t that close to a human’s, at least in size. (As an aside, I’m told by clever bionic eye researchers the cat visual cortex actually has lots of similarities with that of humans – not sure that explains why the aquarium is strangely mesmerising though).

He also described it as a technique where you shone the light all the way across the head and picked up the transmitted light on the other side. But even the most absent-minded of us has quite a bit more cortex to get through than our feline friends and you’d never pick up anything trying that in anything but a neonate.

So the solution in humans has been to send out near-infrared light and then detect the amount that returns to a detector at the skin, on the same side of the head as you initially shone those photons.

When you get handed a brochure by a rep for one of these devices, they’ll show a magical beam of light heading out into the tissues, tracing a graceful arc and returning to be picked up. You are given to believe it’s an orderly process, and that every bit of lost light intensity has been absorbed by helpful chromophores. In this case those would be oxy- and deoxyhaemoglobin, cytochromes in the cell, and pesky old melanin if you get too much hair in the way.

See? Here’s the pretty version that comes with the monitor we’re using in the study. [It’s the Nonin EQUANOX and we bought it outright.]
Except that’s the version of the picture where they’ve put Vaseline on the lens. Each one of those photons bounces erratically all over the place. It’s more like a small flying insect with the bug equivalent of ADHD bouncing around the room and eventually finding its way back to the window it flew in.

So when you try to perform the underlying calculations for what that reduction in light intensity you detect means, you need to come up with a very particular means of trying to allow for all that extra distance the photons travel. Then you need to average the different paths of all those photons not just the one photon. Then you need to allow for all the scattering that means some of the light will never come back your way.

That’s some of those decades of development chewed up right there.
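For the curious, the standard workaround is usually described as the modified Beer–Lambert law: scale the geometric source–detector distance by a “differential pathlength factor” and treat the scattering losses as a constant that cancels when you look at changes. A minimal sketch, with purely illustrative numbers (the extinction coefficient and DPF below are ballpark assumptions, not any device’s figures):

```python
# Modified Beer-Lambert law sketch:
#   delta_A = epsilon * delta_c * d * DPF + G
# The geometric source-detector distance d is stretched by a differential
# pathlength factor (DPF) to allow for all that scattering, and the
# unrecoverable loss term G is assumed constant so it drops out when you
# look at *changes* in attenuation. All numbers here are illustrative only.
def concentration_change(delta_attenuation, epsilon, distance_cm, dpf):
    """Invert the modified Beer-Lambert law for a single chromophore."""
    effective_pathlength = distance_cm * dpf  # photons wander far further than d
    return delta_attenuation / (epsilon * effective_pathlength)

# e.g. a 0.01 attenuation change, extinction coefficient 0.3 /(mM.cm),
# 4 cm sensor spacing and a DPF of 6 (an adult-head ballpark figure):
delta_c = concentration_change(0.01, epsilon=0.3, distance_cm=4, dpf=6)
print(f"{delta_c:.4f} mM")  # 0.0014 mM
```

Averaging over photon paths and the light that never returns is where each manufacturer’s secret sauce comes in.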

  2. Everyone Looks the Same But They Are Different

So that explains the delay then. Well, there’s another thing that might make it hard to apply the technology in the prehospital environment. Every machine is different. Yep. If you go between systems, it might just be that you’re not comparing apples with apples.

That particular challenge of calculating the distance the light travels? Every manufacturer pretty much has a different method for doing it. And they won’t tell you how they do it (with the notable exception of the team that makes the NIRO device who have their algorithms as open access – and their device weighs 6 kg and is as elegant to carry as a grumpy baby walrus).

So when you read a paper describing the findings with any one device, you can’t be 100% sure it will match another device. This is some of the reason that each company calls their version of the magic oxygen number something slightly different from their competitor (regional saturation, tissue oxygenation index, absolute tissue oxygen saturation just to name a few). It might be similar, but it’s hard to be sure.

Maybe that’s harsh. Could a walrus be anything but elegant? [via Allan Hopkins on flickr without mods under CC 2.0]
  3. When “Absolute” Absolutely Isn’t Absolute

You get your magic number (I’m going to keep calling it regional saturation for simplicity) and it’s somewhere between 60 and 75% in the normal person. The thing is, it hasn’t been directly correlated with a gold standard real world measurement of the same area sampled.

The NIRS oximeter makes assumptions about the proportions of arterial, venous and capillary blood in the tissue that’s there. The regional saturations are validated against an approximation via other measures, like jugular venous saturation or direct tissue oximetry.
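To make that concrete, here’s a sketch of what the monitor is effectively modelling. The 25:75 arterial:venous split is an assumption for illustration (a commonly cited ballpark), not any particular manufacturer’s algorithm:

```python
# What "regional saturation" effectively models: a fixed-weight blend of
# arterial and venous saturation. The 25:75 split is an assumed, commonly
# cited ballpark, not any manufacturer's published ratio.
def regional_saturation(sao2, svo2, arterial_fraction=0.25):
    return arterial_fraction * sao2 + (1 - arterial_fraction) * svo2

# With arterial sats of 98% and jugular venous sats of 65%:
print(regional_saturation(98, 65))  # 73.25, inside that 60-75 "normal" band
```

Change the assumed fraction, or the patient’s actual arterial:venous mix, and the “absolute” number moves without anything changing in the brain.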

On top of that all those “absolute NIRS monitors” that give you a definite number that means something? No. “Absolute” is not a thing.

It’s true the monitors have got much better in responding to changes quickly. And they’ve added more wavelengths and are based on more testing so they are more accurate than monitors from decades past. But they can still have significant variation in their readings (anywhere up to 10% is described).

And they spit out a number, regional saturation, that is actually an attempt to distil lots of parameters into a single number a clinician can use. How many parameters? Check the photo.

This is from an excellent review by Elwell and Cooper. [Elwell CE, Cooper CE. Making light work: illuminating the future of biomedical optics. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2011;369:4358-4379.]
  4. The Practical Bits

And after all that, we reach the practical issues. Will sunlight contaminate the sample? Can it cope with movement? Do you need a baseline measurement first? Does it matter that we can only really sample the forehead in our setting?

All the joy of uncertainty before you even try to start to research.

 

So why bother?

Well, the quick answer is that it might be better for patients if clinicians actually knew what is happening to the tissue in the brain. And acknowledging challenges doesn’t mean it isn’t worth seeing whether the technique is still useful despite the compromises you have to make to take basic spectroscopy to the clinical environment.

But even if we find it just doesn’t tell us useful stuff, we could at least provide some real world information to counter the glossy advertising brochure.

There are already people saying things like,

“You can pick up haematomas.” (In a device that just tells you if there’s a difference between the two hemispheres.)

“Low regional saturations are associated with worse outcomes.” (But that’s probably been demonstrated more in particular surgical settings and the monitoring hasn’t been shown to improve patient outcomes yet.)

“You can even pick up cytochromes.” (In the research setting, in specially set-up systems that are way more accurate than any clinical device.)

All of those statements are a bit true, but not quite the whole story. The other message I extract from all of this is that all this uncertainty in the detail behind the monitor can’t be unique to NIRS oximetry. I have little doubt it’s similar for most of the newer modalities being pushed by companies. Peripherally derived Hb measurements from your pulse oximeter sound familiar?

After all this it’s still true that if we can study NIRS oximetry in the environment that matters to us we might get an exciting new answer. Or we might not. And sometimes,

“Yeah … nah.”

Is still an answer that’s pretty useful.

 

 

This is the first in a series. The next time around, I’ll chat about the things we’re trying in the design of the study to overcome some of these challenges.

If you made it this far and want to read a bit more about the NIRS project, you can check out the blog I set up ages ago that’s more related to that (though it frequently diverts to other stuff). It’s here

 

Examining the Hairs on the Yak – A Good Chance for More Chat

One of the good things about research that has its own issues is that there is lots of scope to learn from the things about it that are good, as well as those that aren’t so great. The nice thing about ongoing comment is that it gives even more chances to explain why a researcher might make certain choices along the way. Every question in research has more than one way of approaching an answer. Dr Alan Garner returns to provide even more background on this particular study, which has already generated some interesting conversation and a follow-up post.

It’s an excellent thing to be able to keep having discussion around the challenges related to both conducting and interpreting a trial. These things always bring up so many valuable questions, which deserve a response. So this is not going to be quick, but I hope you’ll have a read.

Lots of things changed between the time this trial was designed and now. Standards of care change. Systems, processes and governance models change. Indeed, in this trial standard care changed underneath us. We completed the protocol and gained ethical and scientific committee approval for this study during 2003.

The world was a different place then – at the start of 2003 George W Bush was US President and Saddam Hussein was still running Iraq. There is no keener instrument in medicine than the retrospectoscope, particularly when focused 12 years back. Would I have done things differently if I knew then what I know now? Absolutely. Does the trial have hairs? Looks like a yak to me, and I don’t think we are pretending otherwise.

Asking Questions

Did we ask the right question? The question was pragmatic. Add a doctor, and with them comes a package of critical care interventions that were not routinely otherwise available in our standard EMS system. A number of cohort studies had previously looked at exactly this question and more studies have asked it since. Even papers published this month have examined this question, although the issue often overlaps with HEMS as that is how the doctors are frequently deployed.

I might segue slightly to address dp’s question as well, which overlaps here. Is it the procedures that the team performs or the person performing the procedures that matters? dp suggests that a better study design would be to have them all use the same protocols and then compare doctors with non-doctors. Such a randomised trial has actually been done, although it was a long time ago now – 1987. It is one of the classic Baxt and Moody papers and was published in JAMA.

Patients were randomly assigned to a helicopter staffed by a flight nurse/paramedic or a flight nurse/emergency physician. The flight nurses and emergency physicians could perform the same procedures under the same protocols, including intubation, chest tubes, surgical airways and pericardiocentesis. By TRISS methodology there was lower mortality in the group that included the doctor, and the suggestion was this might be related to how they judged the necessity for intervention, rather than technical skill. This study is well worth a read. They note that the outcome difference might have been removed if the nurse/paramedic team was more highly trained, but where does this end? We then move into the question of how much training is enough training, and this is an area that I think is still in its infancy. Each time you do some research you prompt a whole lot of extra, usually interesting, questions.

All That Methods Stuff

Anyway, back to this paper. All analyses presented in this paper were pre-specified in the statistical analysis plan. Although the protocol paper was not published till 2013, the statistical analysis plan (SAP) was finalised by the NHMRC Clinical Trials Centre in August 2010, more than a year prior to follow up of the last recruited patients. Copies of the SAP were then provided to the trial funders and NSW Ambulance at the time it was finalised in 2010. Along the way we have presented data in other settings, mostly at the request of interested parties (such as the Motor Accidents Authority who specifically requested analyses of road trauma cases) and in retrieval reviews. This is why there has been the opportunity for extra public scrutiny by experts like Belinda Gabbe. And public scrutiny is a good thing.

And Standard Treatments?

I’m very happy to provide some reassurance that this study did not rely on junior doctors being put through EMST/ATLS and then sent out to manage severe prehospital trauma patients. Rather the trial protocol states that treatment was according to ATLS principles. In 2003 there was no other external standard of care that we could cite for trauma patient management that was widely and internationally recognised.

The specialists had of course all completed EMST/ATLS, but they were also all critical care specialists in active practice in major trauma centres in Sydney with ongoing exposure to severe trauma patients. The average prehospital trauma management experience held by this group of doctors at the beginning of the trial was more than 12 years each. They operated to those high treatment standards, with regular reviews of management to make sure this was current best practice over the life of a trial that ended up being longer than we hoped.

Other Dimensions of Time

And time wasn’t a friend. Recruitment was indeed slower than planned. This is a common problem in prospective trials. Our estimates of how long it would take to recruit the required sample size were based on a written survey of the major trauma centres in Sydney in 2003 to determine how many unconscious adult blunt trauma patients they were seeing each year. This was reduced to 60% to reflect the fact the trial would recruit for only 12 hours each day (although during the busiest part of the day) and the time needed to recruit was then estimated at 3 years. We in fact planned for 4 years to allow for the fact that patients usually disappear when you go looking for them prospectively. This of course is exactly what happened but to a greater degree than we planned.

I agree it would have been nice to have the results formally published earlier. We did present some results at the ICEM in Dublin in June 2012. It is interesting to note that Lars Wik spoke immediately before me at this conference, presenting the results of the CIRC trial on the AutoPulse device. This study was finally published online in Resuscitation in March 2014, more than three years from recruitment of their last patient, and that trial did not include a six month neurological assessment as HIRT did. Getting RCTs published takes time. Given we did have to perform six month outcome assessments, I don’t think we were too far out of the ball park.

To keep you going, here’s a quokka who looks like he’d be up for a chat too. [Via Craig Siczak and unchanged under Creative Commons.]

Randomising in Time Critical Systems

Just to be sure that I really have the right end of the stick on the question of excluding patients after randomisation, I ascended the methodology mountain to consult the guru. For those that don’t know Val Gebski, he is Professor and Director, Biostatistics and Research Methodology at the NH&MRC Clinical Trials Centre in Sydney. He was our methodology expert from the beginning of planning for the trial.

When I reached the mountain top I had to leave a voice message but Val did eventually get back to me. He tells me excluding patients post randomisation is completely legit as long as they are not excluded on the basis of either treatment received or their outcome. This is why he put it in the study design.

These are essentially patients you would have excluded prior to randomisation had you been able to assess them properly, and in our study context that was of course not possible. The CIRC study I have already discussed adopted the same approach, excluding patients who did not meet the inclusion criteria after enrolment.

Prehospital studies where you have to allocate patients before you have been able to properly assess them are always going to have these kinds of difficulties. The alternative for a prehospital RCT would be to wait until you know every element of the history that might make you exclude a patient. How many of us have that sort of detail even when we arrive at the hospital?

Extra Details to Help Along the Discussion

The newly met reader might also like to know that the call-off rate was about 45% during the trial, not 75%. This is no different to many European systems. If you don’t have a reasonably high call-off rate then you will arrive late for many severely injured patients.

And of course the HIRT study didn’t involve “self-tasking”. Cases were randomised on a strict set of dispatch guidelines, not on the feelings of the team on the day. This process was followed for nearly 6 years, and there was not a single safety report of even a minor nature during that time. Compliance with the tasking guidelines was audited and found to be very high. Such protocolised tasking isn’t inherently dangerous, and I’m not aware of any evidence suggesting it is.

It’s reassuring to know that other systems do essentially the same thing, though perhaps with different logistics. For example, in London HEMS a member of the clinical crew rotates into the central control room and tasks the helicopter using an agreed set of dispatch criteria. This began in 1990, after the central control room was found to be poor at selecting cases, and it saw the call-off rate fall from 80% to 50%. The tasking is still done by a member of the HEMS team; they just happen to be in the central control room for the day rather than sitting by the helicopter.

A more recent study of the London system, from last year, found that a flight paramedic from the HEMS service interrogating the emergency call was as accurate as a road crew assessing the patient on scene. This mirrors our experience of incorporating callbacks for HIRT.

The great advantage of visualising the ambulance Computer Assisted Dispatch system from the HIRT operations base by weblink was that the duty crew could work in parallel in real time, discussing additional safety checks and advising immediately on potential aviation risks that might be a factor.

To consider it another way: why is the model safe if the flight paramedic is sitting at one location screening the calls, but dangerous if he is sitting at another? What is the real difference between these models, and why is one presumably a safe, mature system and the other inherently dangerous?

More Mirrors

I agree that the introduction of the RLTC to mirror the HIRT approach of monitoring screens and activating advanced care resources (with extension to a broader range of resources) was a good thing for rural NSW. However, the RLTC also activated medical teams to patients in very urban areas of Sydney who were neither a long way from a trauma centre nor reported as trapped. Prior to the RLTC, the Ambulance dispatch policy for medical teams was specifically for circumstances where it would take the patient more than 30 minutes to reach a trauma centre due to geography or entrapment. Crossover cases obviously didn’t explain the whole of our frustrating recruitment experience, but they were one extra hurdle that finally led us to wrap recruitment up.

You can’t bite it all off at once

In a study where you collect lots of data, no journal will let you cram it all into a single paper. So there are definitely more issues to cover from the data we have, including other aspects of patient treatment. I will be working with the other authors to get it out there. It might just require a little more time while we get more pieces ready to contribute to the whole picture.

Of course, if you made it to the end of this post, I’m hoping you might just have the patience for that.

Here’s those reference links again: 

That Swiss paper (best appreciated with a German speaker). 

The Baxt and Moody paper.

CIRC.

The earlier London HEMS tasking paper.

The later London HEMS tasking paper.

Same, same? Actually different

More of the operational data from the Head Injury Retrieval Trial has just been published. By luck more than anything else, this has occurred within 24 hours of the publication of the main trial results, which you can find here.

Some operational data about the systems used in the trial has already been published. A key part of HIRT was a dispatch system where the operational crew could view screens with case information as calls were logged, to spot patients whose injuries might be severe enough to warrant advanced care. They could then use the available information, or call the initiating number for further details. If the available information matched the criteria for consideration of an advanced care team, the randomisation process swung into action. The whole idea was to streamline activation of an advanced care team to severely injured patients.

A study looking at this dispatch system in the context of identifying severely injured children has already been published here. That study compared the trial case identification system with the Rapid Launch Trauma Coordinator (RLTC) system in NSW. When the trial dispatch system was operating, the paediatric trauma system in Sydney performed significantly better than when it was not available. This reflected a combination of the dispatch system and the rapid response capability of the trial HEMS; the speed and accuracy of dispatch was a key component however.

So what’s this new paper about?

In this new paper we had the opportunity to explore the HIRT data set to look at the times it took various team models to treat patients, get them to hospital, and then move through the ED to CT. The data set is, as far as I know, unique: we had the unusual situation of two physician-staffed services operating in parallel, sometimes dispatched to the same patients.

You can find the paper here.

Getting to a CT scanner in a more timely fashion than this was a way of tracking patient progress through their care. [via telegraph.co.uk]
The first comment is that this appears to confirm European data showing that physician teams do not significantly affect prehospital times when compared with paramedics, although the intubation rate is much greater. Papers such as that by Franschman from the Netherlands make interesting comparisons with this paper. The Dutch physician-staffed HEMS system closely mirrors the HIRT rapid response system in time intervals (and many other factors too). The fact that we have such similar results half a world apart suggests some generalisability of the data.

So are there some differences?

This study did show some differences between the physician teams in those time markers through the patient pathway. It’s worth making a couple of comments that might help to interpret that data.

This is not about individual performance but about systems. There were doctors and paramedics who worked across both systems. Their times followed the pattern of the system they were operating in on any given day.

If you look at the study discussion, the two physician HEMS systems are quite different. The Greater Sydney Area (GSA) HEMS forms part of the State ambulance helicopter system. It has to be all things to all people, all the time. It has a wide range of tasks, including interfacility transports, hoisting operations, and ECMO and IABP transfers, and it may be tasked anywhere in NSW and perhaps up to 100 nm off the coast. By necessity the teams are multirole, and they have to be able to respond to any of these mission types when the phone rings, without any notice.

The rapid response HEMS system set up for the trial is not constrained in the same way. It is a specialist service where every mission follows the same basic pattern. This data indicates that it is very, very good at doing one thing. Indeed, as far as I am aware, the scene times for intubated patients are the fastest achieved by a physician-staffed HEMS anywhere in the world, even slightly faster than the published data from the Netherlands. The price of specialisation, however, is that this service cannot perform the range of tasks that the multirole GSA HEMS undertakes.

Put simply, the services are not interchangeable. The data indicates that the specialist rapid response model will arrive at patients before the multirole GSA HEMS model anywhere in the greater Sydney area, except at the extreme edges of its operating range, where rural bases may be faster, or within a couple of kilometres of the GSA HEMS Sydney base.

The differences also extend to scene times: the HIRT rapid response system had scene times half those observed in the GSA HEMS teams, or less, even when confounders such as entrapment and the requirement for intubation were considered. We speculate on some reasons for this, such as the relative team sizes of the two operations. There may well be advantages in highly familiar teams; there is certainly some evidence for this in other areas of medicine.

What do we make of this?

Overall, though, I think specialisation is the key. If we again compare the HIRT rapid response model to the Dutch physician-staffed HEMS system, the similarities are striking. Like the HIRT system, the Dutch only perform prehospital cases, they only operate within a limited radius of their operating base (including urban areas), and they do not have hoists. Like most European HEMS they have small team sizes. And their times are remarkably similar to those achieved by the HIRT HEMS system in our study. It is the way the services are structured and the definition of their roles that makes them good at what they do.

There are clear implications for the task allocation system in Sydney from this data.

The current pattern of tasking appears to allocate physician teams primarily on who is closest. This only makes sense if the two teams are interchangeable in capability, which is very clearly not the case. The two systems are quite different. The relative strengths of each service should be taken into account in the dispatch policy, so that patients get the most rapid and most appropriate response possible given their location and clinical condition.

The patient doesn’t care who started out closer. They want the service they need for their situation. The different strengths of the two services should form a complementary system that ensures the fastest and highest quality care for patients, whether they are on the roadside, already in a smaller hospital, at the base of a cliff or on a ship off the coast.

What about dispatch?

The evidence from this study combined with the previous study on the Sydney paediatric trauma system also indicates that the HIRT case identification system significantly outperformed the RLTC in both speed and accuracy.

The trial case identification system operated for nearly 6 years without a single report of any type of safety incident, even of a minor nature. Once the RLTC came into being in 2007, the RLTC and HIRT systems operated collaboratively to identify severely injured children and ensure a speedy response. When HIRT identified a paediatric case, they checked with the RLTC, which retained tasking control, to ensure there was no additional information or competing task that might affect the dispatch decision. In this way Ambulance retained central control and oversight of the system, and doubling up of tasking to paediatric patients was averted. This would seem to be the ideal arrangement: patients benefit from the increased speed and accuracy of the parallel case identification process when the HIRT and RLTC systems operate together, while Ambulance retains central control so that competing tasks can be balanced. The HIRT dispatch system was, however, discontinued in 2011 when the last patient was recruited into the trial.

The practical difficulties of applying this level of sophistication to resource allocation, given the sheer volume and variety of demands on the centralised despatch system, need to be acknowledged. Nevertheless it might be time for a rethink.

Here’s those references again:

HIRT.

The comparison of dispatch systems in paeds patients.

The times paper.

The Dutch study.


HIRT – Studying a Non-Standard System that Ended up as Standard

There’s always a bit of extra reflection you can’t include in the discussion of a research paper. Dr Alan Garner reflects more on some of the challenges of doing research in prehospital medicine. 

The main results of the Head Injury Retrieval Trial have now been published online in Emergency Medicine Journal. We have paid the open access fees so that the results are freely available to everyone, in the spirit of FOAM. This was an important study, eagerly awaited by many clinicians around the world.

The summary from my point of view as the chief investigator: an enormous opportunity wasted.

It is now nearly ten years since we commenced recruiting for the trial in May 2005. Significant achievements include obtaining funding for a trial that ultimately cost 20 million Australian dollars to run. I am not aware of another prehospital trial that has come anywhere close to this. Hopefully this is a sign that prehospital care is now seen as worthy of the big research bucks.

In the subsequent ten years, world events have helped to drive increasing investment in prehospital trauma research, particularly the conflicts in Iraq and Afghanistan and the perception that there were many preventable deaths. The US government has become a big investor in prehospital research that might lower battlefield mortality. The Brits, on the other hand, typically made some assumptions based on the evidence they had and got on with it. Higher levels of advanced intervention during evacuation, as exemplified by the British MERT system in Afghanistan, seem to be associated with better outcomes, but the evidence is not high quality.

I am the first to acknowledge that randomised trials are inherently difficult when people are shooting at you. Most prehospital care is not quite that stressful but there remain significant barriers to conducting really high quality prehospital research. Taking the evidence you have and getting on with it is a practical approach but it is not a substitute for meticulously designed and executed high quality studies. Such studies often disprove the evidence from lower level studies. We all bemoan the lack of good data in prehospital care and recognise the requirement for better research.

When you’re only left with signals

The Head Injury Retrieval Trial, taken in this context, really is an opportunity wasted. There is a strong signal in the as-treated analysis of unconscious trauma patients of a significant difference in mortality associated with physician prehospital care. The intention-to-treat (ITT) analysis, however, was not significant.

The potential reasons for the lack of difference in the intention-to-treat analysis are best appreciated by looking at the difference in intervention rates in Table 2. Both treatment teams (additional physician or paramedic only) could intubate cold, so we only report the rate of drug assisted intubation. This was by far the most common physician-only intervention, and the one we suspected would make the most difference to head injured patients. The rate receiving this intervention was 10-14% in the paramedic-only group, because the local ambulance service sent its own physician teams to a good percentage of patients, compared with 49-58% in the treatment group. If this really is the intervention that makes the difference, our chances of demonstrating it are not great unless the treatment effect is absolutely massive.
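The dilution effect described above is easy to illustrate. The mortality figures below are hypothetical, chosen only to show the mechanism; the crossover fractions (roughly 12% and 54%) are taken from the intervention rates mentioned above.

```python
# Sketch of how crossover dilutes an intention-to-treat (ITT) comparison.
# The 20% and 30% mortality figures are hypothetical illustrations,
# NOT results from the trial.
def itt_mortality(rate_treated: float, rate_untreated: float,
                  fraction_treated: float) -> float:
    """Observed mortality in an arm where only a fraction of patients
    actually received the intervention."""
    return (fraction_treated * rate_treated
            + (1 - fraction_treated) * rate_untreated)

# Suppose (hypothetically) drug-assisted intubation cuts mortality
# from 30% to 20% in patients who actually receive it.
control = itt_mortality(0.20, 0.30, fraction_treated=0.12)    # ~10-14% crossover
treatment = itt_mortality(0.20, 0.30, fraction_treated=0.54)  # ~49-58% treated

print(f"Control arm: {control:.3f}, treatment arm: {treatment:.3f}")
print(f"Observed ITT difference: {control - treatment:.3f}")
```

Under these toy numbers a true 10 percentage point per-protocol effect shows up as an ITT difference of only about 4 points, which is why a massive treatment effect would be needed for the ITT comparison to reach significance.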

When the system you study changes

The Ambulance Service in NSW decided, two and a half years into the trial, that they considered physician treatment to already have sufficient evidence to make it the standard of care. They partially replicated the trial case identification system to enhance identification of patients they believed would benefit from dispatch of a physician (there’s more detail in the HIRT protocol paper).

This is not the first time such a thing has happened. In the OPALS study of prehospital advanced life support in Canada in 2004, the original study design was a randomised trial (Callaham). It was, however, done as a cohort study owing to the belief of paramedics that it was unethical to withhold ALS despite the absence of proof of its efficacy. We bemoan the lack of evidence, but belief in the efficacy of established models of care makes gathering high quality evidence impossible in many EMS systems. NSW has proved to be no exception.

Sydney remains a good place to do this work of course.

Where are we then?

So where does this leave Sydney? I think a quote from Prof Belinda Gabbe best sums up the situation. Prof Gabbe is a trauma researcher from Monash who has published much on the Victorian trauma system and was brought in as an external expert to review the HIRT outcome data during a recent review of the EMS helicopter system in New South Wales. Her comment was:

“As shown by the HIRT study, physician staffed retrieval teams are now an established component of standard care in the Sydney prehospital system. The opportunity to answer the key hypothesis posed by the study in this setting has therefore been lost and recommendation of another trial is not justified. Future trials of HIRT type schemes will therefore need to focus on other settings such as other Australian jurisdictions, where physician staffed retrieval teams are currently not a component of standard care”.

The only jurisdiction in Australia with enough patients to make such a study viable that does not already use physicians routinely is Victoria. Such a study would be particularly interesting, as the recent randomised trial of paramedic RSI from that state found no difference in mortality, the area where the HIRT trial indicates there may well be a difference. Any potential trial funder would want some certainty that history would not repeat itself in the standard care arm, however.

In NSW, though, the question of whether physician care makes a difference to patient outcome is now moot. It is now the standard of care; HIRT has definitively demonstrated this if nothing else. All we can do now is determine the best way of providing that care. We have more to publish from the data set that provides significant insights into this question, so watch this space.

References:

In case you missed them above:

HIRT

The HIRT Protocol Paper

Callaham M.   Evidence in Support of a Back-to-Basics Approach in Out-of-Hospital Cardiopulmonary Resuscitation vs “Advanced” Treatment. JAMA Intern Med. 2015;175(2):205-206. doi:10.1001/jamainternmed.2014.6590. [that one isn’t open access]