Category Archives: Research

Blood Warmers (Sort Of) In The Wild

This post (after, let’s face it, a massive break) is the written version of a talk by Andrew Weatherall for the Aeromedical Society of Australasia Conference 2019, held in Perth.

This is a thing that’s about blood. And the roadside. And the lab bench. Naturally it starts with a story because clinical research should start with a patient. That’s why we do it. It’s a really interesting topic to talk about because the management of transfusion, and the questions it brings up, might seem simple at first glance. But a transfusion in the prehospital environment is a particular example of people across a whole system working for a patient.

And the questions clinicians ask that start with something small are the sorts of questions that make a difference in simple ways to every single patient.

A Mountain Story

You have to cast yourself back to the late 90s. Back when the millennium bug was a spooky tale and not an impressively well managed systems vulnerability.

On a particular morning back then there was a car accident in the Blue Mountains, just outside of Sydney. It was serious. The driver had died.

These were the times when the call for a team with a few more options waited until the ambulance officers had got to the scene and started working very hard. So around 28 minutes after the accident, the CareFlight crew on that day got a call to attend to try and help a passenger. Around 52 minutes had passed by the time they reached the patient.

In that time a lot of difficult treatment and rescue work had already happened. The patient, a 15 year old, was accessible but still a bit stuck. The paramedics had already started care. In this case that included large bore cannulae and 6 litres of polygeline fluid resuscitation.

I should have mentioned it was 7 degrees Celsius this particular morning.

Over the course of the extrication all the crews on scene combined. The patient was intubated because the combination of their injuries, diminishing conscious level and the respiratory rate of 40/min suggested something was up.

They received the 4 units of red cells that the CareFlight crew was carrying. The heart rate of 140/min and blood pressure of 70 mmHg made it clear bleeding was an issue. It was an issue that needed more red cells. How does that happen on a cold day in the mountains in 1997? The local hospital, nothing like a trauma hospital, sends O negative red cells when it receives a call. The police deliver them.

The patient had a chest drain inserted after their breath sounds changed unilaterally.

But it was clear they needed more fluid resuscitation. And more.

In total, by the time the patient reached their destination hospital a flight later and about 132 minutes after their injury, the combined teams had managed to find a total of 15 units for that patient. Which gave the hospital a chance to get working.

The receiving hospital that day was Nepean hospital, at the foot of the mountains (and these days not a major trauma centre). They received a critically unwell patient and had a long day in the operating theatres. A long day with a patient with a long list of injuries, who received a further 56 units of red cells, 16 units of platelets and 19 units of FFP.

And survived.

I know they survived because they provided consent for the first case report of massive transfusion in the prehospital setting that Rob Bartolacci and Alan Garner wrote up in the MJA in 1999.

Questions

That story could be seen as an extraordinary success. It didn’t stop the crew asking questions though. This was of course before we knew nearly as much about trauma and bleeding and coagulopathy. I mean we knew it was an issue but Brohi et al hadn’t published that excellent work showing a rate of coagulopathy of 24.4% or the huge import of coagulopathy when it came to mortality.

We didn’t have nearly as much understanding of the ills of acidosis, coagulopathy and hypothermia. We didn’t even have as much evidence about how easy it is to cool patients with cooled fluids as we now do.

We knew hypothermia was bad though. So when that patient arrived at hospital with a temperature of 29.5 degrees Celsius and a heart rate of 80/min, the crew started thinking about ways to do it better.

These days of course we prioritise haemorrhage control. We have a different approach to administering massive transfusions. We’d be reaching for tranexamic acid. Along the way though the first question was ‘how do we try and make our fluids warm?’ Red cells come out of the esky at around 4 degrees Celsius. Patients are not designed to meet that temperature halfway.

Lots of things have been tried over the years. In the past we’d relied on the Australian sun. We’d relied on the toasty armpit of an emergency service worker probably wearing a non-breathable fabric blend that isn’t very flattering to the profile. We’d tried gel heat pads that you’d use to ease a muscle ache.

Fast forward a decade and a bit though and we finally had a portable fluid warming device small enough to help us out. We felt pretty good about the Belmont Buddy Lite too. It was a huge step up, we figured, from what we’d been doing.

Clinicians still ask questions though. This time it was another CareFlight specialist, James Milligan, who started asking. It was a pretty simple question really. ‘Maybe we should check how much better it is than all the other options we’ve tried?’

Simple, right?

 

Setting the Right Rules

Now one of the great challenges when you’re trying to do bench testing of a device for prehospital and retrieval medicine is getting the balance right. You want to produce measurements that are rigorous and reliable enough to give you real information. The risk when you do that is that you do things so differently to the environment that counts for us that it no longer represents something that still applies at the roadside.

So the natural choice if you want to bench test a prehospital blood warming device is this …

… a bespoke cardiopulmonary bypass circuit to deliver reliable flows, measure the temperature at multiple sites, measure pressure changes across the circuit, collect the first unit of red cells you administer, spin it down and cool it then recirculate that blood to deliver the equivalent of a second unit.

The plan was to fix the flow rate at 50 mL/min (the suggested rate for the Buddy Lite), randomise the sequence of runs, and repeat each type of run 3 times to generate useful data. We standardised as much as we could, including the spine board we used for the ‘on a warmed spine board in the sun’ group, and had the board heating for a standardised period of time. We used a single armpit to generate body heating. (Yes, I can confirm that sometimes you have to sacrifice a few frozen off armpit hairs for science.) You get those gel pads ready to go. You get support from your blood bank in an entirely different setting.
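If you are wondering what that sort of run plan actually looks like, here is a minimal sketch (in Python, with made-up method labels, not the actual study code) of generating a randomised schedule of three repeats per warming method at the fixed 50 mL/min flow rate.

```python
import random

# Illustrative labels only; the real Phase 1 comparators are described in the paper.
METHODS = [
    "IV line only (no warming)",
    "sun-warmed spine board",
    "armpit",
    "gel heat pad",
    "Buddy Lite",
]
REPEATS = 3            # each method run three times
FLOW_RATE_ML_MIN = 50  # fixed flow rate for every run

def build_run_schedule(seed: int = 1) -> list:
    """Return the bench runs in a randomised (but reproducible) order."""
    runs = [
        {"method": method, "repeat": repeat + 1, "flow_mL_min": FLOW_RATE_ML_MIN}
        for method in METHODS
        for repeat in range(REPEATS)
    ]
    random.Random(seed).shuffle(runs)  # randomise the order of all 15 runs
    return runs

if __name__ == "__main__":
    for number, run in enumerate(build_run_schedule(), start=1):
        print(f"Run {number:02d}: {run['method']} "
              f"(repeat {run['repeat']}) at {run['flow_mL_min']} mL/min")
```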

Then it’s time to test how MacGyver works in real life.

 

Phase 1: A Warmer vs MacGyver.

This study turned out to be one of those ones where the results match pretty much what you thought. You can find the full paper here but the key table to pull out is this one.

Even with this simple study there are a few interesting points to note:

  • There is actually more warming than you might expect when you just run the fluid through an intravenous fluid line.
  • The Australian sun actually did pretty well.
  • My armpit is just an embarrassment. Only for this reason.
  • Gel pads just don’t have the contact time to count.
  • The thing specifically designed for warming turns out to be better at warming.

Phew. Done and dusted.

Except we had more questions.

The thing is as clinicians we knew that delivering at 50 mL/min is probably not what we do when a patient is really critically unwell. There’s every chance it’s quicker. It certainly was in 1997. Maybe even pulsed because someone is squeezing it in.

So to really make the benchtop more like the place we work, we wanted to test different flow rates. We also thought maybe we should double check that putting red cells through a rapid temperature change across a small area of space where at least some pressure changes happen inside the device was still OK for red cells. What if we haemolysed a bunch of those bendy little discs?

Happily just as we got there, more devices hit the market.

 

Phase 2: Warmer vs Warmer

We had some more limits to set though. The devices we’re after for prehospital work need to be light enough to not be too annoying. They have to work without an external power source. Ideally they’d be idiot proof. I mean, folks like me, just a little overloaded at the accident scene, need to use them.

So we looked for devices that could do the job at a total weight under 1 kg and that operated as standalone units. We ended up with 4 to test:

  • The Thermal Angel.
  • The M Warmer.
  • The Hypotherm X LG.
  • The Buddy Lite.

This time with flow rates of 50 mL/min, 100 mL/min, and 200 mL/min (the maximum rating of any device was 150 mL/min but we also knew that with a pump set we can get up as high as those numbers). Still randomised. This time with fresh units of red cells so we could test for haemolysis on the first run through the circuit.

The testing circuit was a little different this time, mostly because it did things better and would reliably deliver the flow rate all the way to 200 mL/min. It was a bit quicker to cool the units too.

The results this time are best shown in a couple of the tables. At 50 mL/min there’s one warmer that’s clearly performing better than the others. The red cells reach the thermistor that reflects delivery to the patient at 36.6°C. The best of the others only gets to 30.5°C.

When you get to higher flow rates the difference is even more marked.

That same device gets the blood to 32.5°C. The others? 23.7°C, 23.5°C and 19.4°C.

That is a heck of a difference.

Along the way we couldn’t pick up any evidence of haemolysis. We count that as the best sort of negative result.

Oh, that warmer was the M warmer. We switched.
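For anyone curious about the shape of that analysis, here is a minimal sketch (in Python, with invented numbers and hypothetical column names rather than the study data) of how repeated runs get summarised into a device-by-flow-rate table of mean delivered temperatures.

```python
import pandas as pd

# Invented run records purely to show the structure; the real measurements
# are in the published paper.
runs = pd.DataFrame([
    {"device": "Warmer A", "flow_mL_min": 50,  "delivered_temp_C": 36.4},
    {"device": "Warmer A", "flow_mL_min": 200, "delivered_temp_C": 32.1},
    {"device": "Warmer B", "flow_mL_min": 50,  "delivered_temp_C": 30.2},
    {"device": "Warmer B", "flow_mL_min": 200, "delivered_temp_C": 23.9},
    # ... one row per run, with repeats at each device/flow-rate combination
])

# Mean delivered temperature for each device at each tested flow rate,
# laid out as a device-by-flow-rate table.
summary = (
    runs.groupby(["device", "flow_mL_min"])["delivered_temp_C"]
        .mean()
        .unstack("flow_mL_min")
        .round(1)
)
print(summary)
```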

The Deeper Bits

A study that started with a very simple question ended up being a pretty fun excursion into the lab. Except for that one thing where the clamp went on the wrong bit of the circuit.

For no reason at all I’m just going to mention that wearing PPE, not your own clothes, is something you should be glad you do at work. It pays off.

The other thing worth noting is that all of these devices go through a process before they become available for sale commercially. That is part of what the TGA does. And they all do kind of what they say on the box. They do warm things up. They are unlikely to cause problems. They’re not the same though. You could even argue that some of them performed so far below the level of the most effective device we tested that they’re only marginally better than our next reference, my armpit.

Everything that goes to market has its own story, just like the little pig who goes to market I guess. The TGA is obviously very rigorous in applying its incredibly voluminous guidelines. And I wouldn’t suggest that the manufacturers or those sponsoring the devices to get into the market haven’t likewise followed every part of that process.

What is less clear to me is how you demonstrate ‘suitable for intended use’ which is one of the essential principles that must be shown to be met. Near as we can tell there wasn’t prior testing of these units that so rigorously reflected how we’re likely to use the units in our actual practice prehospitally.

There is a requirement to show that the design meets appropriate standards and that there is clinical evidence. However you are permitted to show that the testing done is similar enough to the setting where you reckon the device will be used for it to get signed off. On my read of the guidelines you can even show good evidence for the principle of blood warming and that a similar device has been shown to do that well and say ‘our tech specs show we can do the same thing so tick here please’. Extrapolating practice from the hospital or other settings to prehospital and retrieval medicine is a thing we often have to do but that doesn’t mean we should just accept that.

So when you look at a design like, say, the Buddy Lite (a device that has served us really well and that was a huge step up from what we were doing) you can see that it would meet lots of essential principles about not exposing the patient to harm, being safe to handle and operate and warming the infusate. As long as it is delivered as suggested by the manufacturer at 50 mL/min.

And that’s the bit of evaluation that we can’t rely on the TGA to do for us. Is 50 mL/min what we’re after?

We need clinicians asking questions.

Slow Down

So we started with a question. Actually the questions have been happening since at least the 90s.

Simple questions asked by clinicians thinking about the patient in front of them are really useful. They take you in unexpected directions. They lead you to work in teams just like you do at the accident scene. It’s just that this team involves some doctors and a paediatric perfusionist and a haematologist and a lab technician and the blood bank and the Red Cross and a statistician in Hong Kong.

And we’ll keep asking questions. Along the way on this study we figured out that the Hypotherm was just a little challenging to use. In using the M warmer we’ve picked up ways we need to manage the battery recharging.

Clinical teams are vital to doing things better for our patients not just because we actually do the doing, but because the studies that get out there need to be interpreted by people who know the operating space. Or it needs to be clinical teams and patients inspiring studies in the first place.

Someone needs to keep an eye on the pigs when they’re trying to get to market.

 

The References Bit:

OK that’s quite long.

That case report is this one (and obviously the patient has given permission for its use in contexts like this):

Garner AA, Bartolacci RA. Massive prehospital transfusion in multiple blunt trauma. Med J Aust. 1999;170:23-5.

The first of the blood warming papers is this one:

Milligan J, Lee A, Gill M, Weatherall A, Tetlow C, Garner AA. Performance comparison of improved prehospital blood warming techniques and a commercial blood warmer. Injury. 2016;47:1824-7.

The more recent one comparing devices is this one:

Weatherall A, Gill M, Milligan J, Tetlow C, Harris C, Garner A, Lee A. Comparison of portable blood-warming devices under simulated prehospital conditions: a randomised in-vitro blood circuit study. Anaesthesia. 2019;74:1026-32.

If you want to read that Brohi et al paper again it’s here:

Brohi K, Singh J, Heron M, Coats T. Acute traumatic coagulopathy. J Trauma. 2003;54:1127-30.

You might like to reflect on just how quickly you can cool someone down with cold crystalloid:

Kämäräinen A, Virkkunen I, Tenhunen J, Yli-Hankala A, Silfvast T. Induction of therapeutic hypothermia during prehospital CPR using ice-cold intravenous fluid. Resuscitation. 2008;79:205-11.

If you have lots of spare time you might like to read the TGA regulations. (Note they are being updated.) I advise drinking coffee first.

And did you get all this way? Then you definitely need to watch this and relax.

 

Maths and Choppers from Norway to New South Wales

There are a bunch of ways to figure out where to put your resources. Dr Alan Garner found a guy who can crunch the big numbers to look at it a little differently. 

What’s the answer for optimal locations? First ask what is the question.

We have just had a new study published in BMC Emergency Medicine on modelling techniques to determine optimal base locations for helicopter emergency medical services (HEMS).  There is always more to say than can be covered in a publication so I thought I might have a look at some of those issues here.

First up is a big thank you to my co-author Pieter van den Berg from the Rotterdam School of Management in the Netherlands.  Pieter is the real brain behind the study and the mathematician behind the advanced modelling techniques we utilised.  Pieter has looked at HEMS base location optimisation previously in Norway and has done some modelling for Russel McDonald’s service Ornge in Ontario, Canada as well.  Without him the study would not have been possible.

So what did we do and why?

As already noted Pieter had recently done a similar exercise in Norway where the government has a requirement that 90% of the population should be accessible by physician staffed ambulances within 45 minutes.  Pieter and his co-authors were able to demonstrate that the network of 12 HEMS bases easily accomplishes this – indeed it could be done with just four optimally positioned bases.  They also modelled adding and moving bases to determine if the coverage percentage could be optimised with some small adjustments.

As it happens New South Wales (NSW) and Norway have very similar population densities and both are developed, first world jurisdictions.  Hence this previous study seemed a good place to start for a similar exercise in NSW.  Both jurisdictions also have geographical challenges; Norway is long and thin with population concentrated at the southern end whereas NSW has almost all the population of the state along the eastern coastal fringe with high concentration along the Newcastle – Sydney – Wollongong axis.

We were interested in population coverage but we also wanted to look at response times as this also is a key performance indicator for EMS systems.  It is certainly reported as a key indicator by NSW Ambulance.  Response times were not modelled in the Norwegian system so we were interested in seeing how the optimum base locations varied depending on the question that was asked, particularly in a jurisdiction such as NSW where the population is so concentrated to a non-central part of the state.
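To make the difference between those two questions concrete, here is a minimal brute-force sketch (in Python, with entirely invented towns, populations and flying times) of choosing bases to maximise 45-minute population coverage versus minimising the population-weighted average response time. It is an illustration of the two objectives only, not the optimisation Pieter actually ran.

```python
from itertools import combinations

# Toy data: candidate base sites, population centres and flying times (minutes).
# Every number here is invented purely to illustrate the two objectives.
CANDIDATE_BASES = ["A", "B", "C", "D"]
POPULATION = {"Town1": 500_000, "Town2": 120_000, "Town3": 40_000, "Town4": 15_000}
RESPONSE_MIN = {  # minutes from each candidate base to each population centre
    "A": {"Town1": 10, "Town2": 50, "Town3": 70, "Town4": 90},
    "B": {"Town1": 30, "Town2": 20, "Town3": 60, "Town4": 80},
    "C": {"Town1": 55, "Town2": 40, "Town3": 25, "Town4": 50},
    "D": {"Town1": 80, "Town2": 70, "Town3": 40, "Town4": 20},
}
THRESHOLD = 45  # the Norwegian-style coverage standard, in minutes

def coverage(bases):
    """Population reachable within THRESHOLD minutes of the nearest chosen base."""
    return sum(pop for town, pop in POPULATION.items()
               if min(RESPONSE_MIN[b][town] for b in bases) <= THRESHOLD)

def mean_response(bases):
    """Population-weighted average response time from the nearest chosen base."""
    total_pop = sum(POPULATION.values())
    return sum(pop * min(RESPONSE_MIN[b][town] for b in bases)
               for town, pop in POPULATION.items()) / total_pop

def best(n_bases, objective, minimise=False):
    """Brute-force the best set of n_bases for a given objective."""
    return (min if minimise else max)(
        combinations(CANDIDATE_BASES, n_bases), key=objective)

if __name__ == "__main__":
    print("Best 2 bases for 45 min coverage:   ", best(2, coverage))
    print("Best 2 bases for mean response time:", best(2, mean_response, minimise=True))
```

With these toy numbers the two objectives pick different pairs of bases, which is exactly the tension the paper explores for NSW.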

If you look at the study you will note from Figure 1 the existing arrangements in NSW. You’ll be shocked to know these arrangements weren’t planned in advance with the aid of a Dutch maths guru. These things happen organically. Nevertheless it provides a reasonable balance of response times and coverage although the gap on the north coast is immediately evident.

Figure 1

If you start with a clean slate and optimally position bases for either population coverage or average response time, both models place bases to cover that part of the coast (see Figure 2).  Hardly surprising.  When we modelled to optimise the existing base structure by adding or moving one or two bases, the mid north coast was either the first or second location chosen by either model too.

Figure 2

This seems an obvious outcome from even a glance at the population distribution and current coverage in Figure 1.  What is surprising is that the 2012 review of the HEMS system in NSW (not publicly released), which utilised the same census data in demand modelling, did not come to the same conclusion, even though two previous reviews in the 1990s and 2000s had recommended just such a change.  Certainly the Reform plan for helicopter services which was released the following year did not make any changes or additions to base locations, leaving this significant gap still uncovered.

Wagga Wagga was the other location identified for a HEMS base in the 2004 review.  Interestingly it is favoured as the first relocated base when the existing structure is optimised for average response time by moving Canberra to this location.  But a Wagga Wagga base also was not mentioned in the reform plan.

What about the green fields?

When the green field modelling was done it was clear that the current NSW system most closely resembles the model optimised for average response time, rather than coverage.  The Wollongong base really justifies its location on this basis as it contributes to a better overall average response time.  Its population coverage falls entirely within the overlapping circles of the Sydney and Canberra bases so it makes no contribution here, at least if a 45 minute response time is used as the standard.

There was another aspect that interested us compared with Norway.  In Norway all aircraft have the same capability and this is also true for the recently tendered services in NSW.  The unusual feature in NSW though (unique within Australia, although common in Europe in particular) is a dedicated urban prehospital service operating from a base near the demographic centre of the largest population concentration – Sydney.  The performance characteristics of this service have been well described previously (by us, because I’m talking about the CareFlight service which I think does serve a useful function), and when it was operating with its own dispatch system it was, to our knowledge, the fastest service of its kind in the world.

Like the Wollongong service it operates entirely within the population coverage circles of other bases, but it makes an enormous contribution to average response time.  When this rapid response urban service is added to the network of large multirole helicopters in NSW the average response time across the entire state falls by more than 3.5 minutes, because that service is able to access more than 70% of the state population within its catchment zone, and significantly faster than the multirole machines.

This modelling only takes into account the response time benefit of the specialisation afforded by such a service.  We have previously been able to demonstrate that the service is also much faster in almost every other aspect of care, delivering patients to the major trauma services in Sydney only a few minutes slower than the road paramedic system but with much higher rates of intervention, and ultimately getting them through the ED to CT scan faster than either the road paramedic or multirole retrieval systems in NSW.  At least this was the case when it had its own specialised dispatch system, but that is a story we have discussed previously too.

There are recurrent themes here.  The Rapid Response Helicopter service adds significantly to the response capability in NSW whether you model it using advanced mathematical techniques or whether you look at the actual response data compared with the alternative models of care.  Indeed the real data is much stronger than the modelling.  It seems that at least in large population centres in Australia there is a role for European style HEMS in parallel with the more traditional multirole Australian HEMS models that service the great distances of rural and remote Australia.  Different options can work alongside one another to strengthen the whole system and better deliver stuff that is good for patients – timely responses when they really need them. The capability differences however need to be reflected in dispatch systems that maximise the benefits which come with specialisation rather than a one size fits all tasking model that takes no account of those significant differences.

Every version of the numbers I look at tells the same story.

 

Notes and References:

While this post covers a few ways of looking at a tricky sort of problem, there are lots of clever people out there with insights into how these things work. If you have ideas or examples from your own area, drop into the comments and help people learn.

Now, the paper that’s just been published is this one:

Garner AA, van den Berg PL. Locating helicopter emergency medical service bases to optimise population coverage versus average response times. BMC Emerg Med. 2017;17:31. 

The paper on optimal base locations in Norway is this one:

Røislien J, van den Berg PL, Lindner T, et al. Exploring optimal air ambulance base locations in Norway using advanced mathematical modelling. Injury Prevention. 2017;23:10-15.

And if you like any of the posts on here, then maybe share them around. Or sign up for an update when new posts hit with the email sign on thing.

 

The Remote Bad Stuff

Last time Jodie Martin, Flight Nurse extraordinaire, dropped by, she shared one of our most popular posts ever. Jodie returns with a little on the Top End experience of sepsis. 

Time for a look at some remote medicine again.

CareFlight provides the aeromedical service for the top half of the Northern Territory (NT) in Australia.  The area covered by the service is the same size as France but has only 160,000 people.  And fewer vineyards.

As 115,000 of this population are in Darwin, which is serviced by road ambulance services, this leaves CareFlight to provide services to about 45,000 people in very remote and widely scattered centres, most of which are small Indigenous communities.  The catchment area has only two rural hospitals, which are non-referral centres, with care otherwise provided in remote health clinics. Even then not everyone lives close to a rural hospital or remote health clinic. Some rural folk still have to drive several hours or even a few days to reach any level of health care. Access to health care is a real challenge when someone becomes sick.

The Top End of the Northern Territory may be sparsely populated, with 0.2 persons per square km, but it has the highest incidence of sepsis in Australia and rates five times higher than those recorded in the US and Europe 1,2. It has been suggested that one of the reasons for the high incidence of sepsis is related to the higher Indigenous population in the Top End 2. The incidence of sepsis requiring ICU admission in the Top End of the NT for Indigenous people is reported to be 4.7 per 1,000. In the non-Indigenous population there are 1.3 admissions per 1,000 people. When compared to the rest of Australia, the rate of admission to an ICU for sepsis is 0.77 per 1,000 2, with national 28 day mortality rates of 32.4% 1.

The Top End – Not Just Popular with People

Human-invading bacteria and viruses love the warmth and moisture of the tropics. To make things even harder, the Top End has the highest rate in the world of melioidosis, a very nasty infection found in the wet tropics of Australia.  The organism responsible has been classified as a Category B bioterrorism agent by the Centers for Disease Control and Prevention in the US, and melioidosis kills up to 40% of infected patients, often from rapidly fulminant disease.  Most sepsis, however, is of the more common garden variety, but it still causes severe, life threatening illness.

A quick editorial note that we have done another story from the Top End and still it’s not about crocodiles. We apologise but it turns out there are other things up there trying to kill you.

When you add the challenges of distance and retrieval times, meeting targets for sepsis treatment which are time-based would seem an impossible task. Given this, we were keen to review the retrieval of septic shock patients in our service to see what the outcomes are like and whether we could improve the process.  The results have just been published in the Air Medical Journal which you can find here.

The patients were sick.  A third of patients required intubation and 89% required inotropes.  Median mission time however was 6 hours and the longest case took 12 hours.  Given the remoteness and time delays inherent in retrieval over such distances, with a population known to have worse health outcomes, you would expect mortality to be high.  Surprisingly however, the 30 day mortality in this group of 69 patients, who were predominantly Indigenous, was only 13%.  This is lower than previous rates described both for sepsis in Australian Indigenous populations and for patients in Australian and New Zealand intensive care units.

That’s Excellent, But Why?

It is interesting to speculate on the possible reasons for such good outcomes.  Reasons might include:

  • The relatively young age of the patients compared with many series. Perhaps the better physiological reserves of younger patients are still a key factor despite the higher rates of co-morbidities.
  • Early antibiotics – these are almost always given by the end of the referral call. Good clinical coordination has a role to play in this too.
  • Early aggressive fluid resuscitation – the median volume of crystalloid administered was 3L during the retrieval process.
  • Inotropes were administered following fluid resuscitation in the vast majority of patients.
  • Early referral – recognising when a patient is sick. This is something we’d like to gather more data on. We didn’t record how long a patient was in a remote health centre before a referral call was made, but we have a suspicion early referral might have played a part here.

It is also interesting to note the good outcomes that were achieved without invasive monitoring in approximately half the patients retrieved.  Perhaps there are shades of the findings of the ARISE study here where fancy haemodynamic monitoring really did not seem to make much difference either – what matters in the retrieval context is early antibiotics, aggressive fluid resuscitation and early intubation when indicated.

We did not randomise patients to invasive versus non-invasive monitoring and it is possible that the sicker patients and those with longer transport times received the invasive version.  But it is also possible that we get too hung up on this stuff and it is the basics that really matter whether you are in the city or a really remote health clinic.

The Wrap

The Australian Indigenous population have poorer health outcomes than the general community. Outcomes are even worse for those residing in remote areas than those in urban areas. In our small study it is pleasing to see such good outcomes despite remoteness and long retrieval times. Our young patient cohort recovered well considering how sick they were but what would be even better is preventing sepsis in the first instance. The incidence and burden of sepsis in young Indigenous people requires preventative strategies and appropriate and timely health care resources. Improving access to health care, improved housing and decreasing overcrowding, decreasing co-morbidities and decreasing rates of alcohol and tobacco use are hopefully just some of the ways we can decrease the incidence of sepsis and contribute to closing the gap.

Notes:

That croc with almost enough teeth came from flickr’s Creative Commons area and is unchanged from Jurgen Otto’s original post.

Here’s the link to the paper that’s just been published:

Joynes EL, Martin J, Ross M. Management of Septic Shock in the Remote Prehospital Setting. Air Med J. 2016;35:235-8. 

The two references with the actual superscript numbers above are here:

  1. Finfer S, Bellomo R, Lipman J, et al. Adult population incidence of severe sepsis in Australian and New Zealand intensive care units. Intensive Care Med. 2004; 30: 589-596.
  2. Davis J, Cheng A, Humphrey A, Stephens D, Anstey N. Sepsis in the tropical Top End of Australia’s Northern Territory: Disease burden and impact on Indigenous Australians. Med J Aust. 2011; 194: 519-524.

Here’s a bit on melioidosis from the CDC website and here’s a review in the NEJM.

If you want to look more at the government’s Closing the Gap stuff, you could go here.

Getting to the Start Line

We can debate the value of this advanced team model vs that advanced team model. We can debate videolaryngoscopy vs direct laryngoscopy for days. People do. It’s all chump change compared to the real challenge. Getting that team where they need to be. Dr Alan Garner and Dr Andrew Weatherall have a bit reviewing a paper they’ve just had published trying to add to this discussion. 

You may just have noticed that there are things happening in Brazil. They are called Olympics and they are a curious mix of inspiring feats of athleticism and cynical marketing exercise inflicted upon cities that can probably barely afford them and which will be scarred for a generation afterwards. I’d hashtag that but it turns out the IOC will take you on if you mess with their precious sponsor money.

Now, you might think the obvious segue from a mention of the Olympics at the start there would be to mention drugs. The sort of drugs that enhance performance. It’s just that this feels too obvious. We’d rather make a very tangential link to kids. In particular, let’s talk about kids who are very, very injured.

 

The Teams

One of the bits of the Olympics that is a bit fascinating is the logistics of getting highly specialised teams into the right place at the right time in the sorts of cities that don’t usually get anything to the right place at the right time.

Maybe this is unfair but I don’t immediately think “super efficient transport infrastructure” when I think of Rio de Janeiro. And when I’m on a commute in the early hours of a Sydney workday, the fact that anyone was able to get a rowing team out of the stacking rack and to a patch of water in the hillock-shaded nirvana of Penrith during our local Olympics is astounding.

That’s kind of central to the whole circus though. Everyone is getting their right team to the right start line at the right time. It would probably be more entertaining if you dropped the table tennis team at the volleyball court but that’s not how it works when you’re trying to get the best of the best doing what they are built for.

Which is the cue to make this lumbering patchwork monster lurch back to the segue.

 

Right Place, Right Time

Advanced EMS needs to achieve the same goals of right place and right time. (Never said it would be a pretty link, but there it is.) Whatever your model of staff might be for delivering advanced prehospital care (paramedic/physician, paramedics across the board, St Bernard with an alcohol supply) there would be no one who doubts that the key to the whole thing is to get them to the right jobs at a time when those advanced skills have a role in making a difference.

You might be able to put one of those snorkels in the airway hanging upside down while drilling an intraosseous with particularly agile toes but if you’re back at base that’s not going to help the patient out there who is injured.

For a while now we’ve been really exercised by that problem. How do we make the tasking process better? Because tasking is not about the team at base. It’s not about which location the vehicle comes from. Tasking is always about the patient waiting for the care they need. They’re just wishing you’d been waiting there already, not still somewhere else.

The latest in a suite of papers which are ultimately about this question has gone online pretty recently. With the catchy title of “Physician staffed helicopter emergency medical service case identification – a before and after study in children” it builds a little bit from an earlier paper where two parallel tasking systems for sending advanced EMS (in this case physician staffed HEMS) to injured kids were compared.

That paper suggested that when you had a team actually delivering HEMS involved in identifying and tasking of cases, they were far more likely to identify cases where their skills might help (meaning they were more likely to identify cases of severely injured kids from the initial emergency call information in the system) than a single non-HEMS tasker working away in the office.

The involvement of the HEMS team got removed though, so it seemed timely to revisit this area to look at the time before the changes, when the two systems worked together, and the subsequent time period when it was just left to that one paramedic in the office.

Kids and the NSW System

It is going to help you to know a bit of background here. For a while now in New South Wales, there has been a stated goal in the trauma system to get kids straight to a paediatric trauma centre (PTC). Interest in this first came about because of overseas evidence that maybe this was the best option for kids. This was later followed by local work. This established that kids who went to other centres before the PTC tended to wait a long time in the first place they went to. Like 5 hours in that initial hospital before there was any movement.

Another study also suggested that kids who went to an adult trauma centre first had 3 to 6 times the risk of a bad outcome. And by bad outcome I mean a dying sort of outcome. Now, there are issues with being too firm on those numbers, particularly as not many kids die from traumatic injuries over any measured time period in our system so one or two kids surviving in the adult centre would make a big difference to those stats. But these were the sort of figures that made people keen to get kids straight to the specialist kids centres.

So the system is supposed to be designed to get kids to the kids’ hospitals as a priority. Do not pass go, do pass the adult centre.

Around the same time as that was becoming a talking point, the Head Injury Retrieval Trial was getting moving. As part of that trial, there was an agreed setup for the HEMS crew (including the aviators) to have access to the emergency call info on the ambulance computer screens on about a 90 second delay from when it hit the ambulance system.

For the trial (only adults), you’d look at the highest urgency trauma cases and look for specific trigger mechanisms which would lead to a protocolised response – either an immediate decision to randomise or a callback and interrogation step.

For kids, a different request was made. The request was just to respond to severely injured kids (where it seemed like the severity matched the initial call info or the mechanism was a super bad one; something like “kid vs train” for example). No randomisation as they were not in the trial; we just went.

So the crew screened for paediatric cases too, as requested. And went to paediatric cases. There was some real learning in that too, as the HEMS crews started making it to a much higher proportion of severe paeds trauma (and drowning) than had historically been the case.  This was partly due to the higher rate of recognition of cases, and partly due to the fact that the HEMS team was really fast getting to the patient, arriving before the road paramedics had moved on.   You can read more about the kind of time intervals the HEMS team achieved here.  As far as we are aware from the published literature the whole end-to-end process was the fastest ever reported for a physician staffed HEMS system, while still offering the full range of interventions when indicated.

Mirror, Mirror

A third of the way through the HIRT thing happening, the ambulance service introduced a role within ambulance which hadn’t been there before. The Rapid Launch Trauma Coordinator. Their role? To look at the screens as jobs came in and try to identify cases where advanced EMS might help.

As it turned out they elected to include the trial area as well as other areas in the state in the roving brief for this paramedic sitting in at the control centre. While that was an issue for the trial, for kids it was just a bonus, right? Another set of eyes trying to find kids who might need help sounded perfect.

The bonus in kids was that there was no need to try and have the person doing the RLTC work blinded to whether the case had been randomised or not, so if the HIRT crew in their screening saw a case with a kid, they’d call quickly and see if the RLTC knew of a reason they shouldn’t go. It was a nice collegial cross-check.

This also ensured that only one advanced team went unless they thought there were multiple casualties (in the trial double tasking was common due to the blinding of the RLTC to the randomisation allocation).  So the cross-check avoided double ups and maximised use of resources too.

Well how close to being the same are they then?

It was in this context of the systems for screening cases operating alongside each other that the first bit of research was done [2]. Over a two year period cases with severely injured kids occurring while the HEMS was available were reviewed to see if either screening process picked them up.

There were 44 kids fitting that bill (again, the numbers are low in the Sydney metro area). 21 weren’t picked up by anyone. 20 were picked up by that HIRT crew and 3 were picked up by that person working on their lonesome in central control.

When you looked more broadly at times the HIRT system wasn’t available compared to those it was, the proportion of patients directly transferred to the PTC was much lower. This fits with other stuff showing that advanced EMS teams tend to be more comfortable bypassing other sites to make it to a PTC, while also performing more interventions.

Another thing this research threw up was to do with time of a different kind: when HIRT was available the median time to reach the PTC was 92 minutes, compared to 296 (nearly 5 hours again) when they weren’t available.

So on that first round of research the message seemed to be that there was something about that case screening process that picked the severely injured kids more often. Maybe it was the extra eyes and regular rotation. Maybe it was better familiarity with the nature of the operational work for advanced EMS on the ground. Either way that screening process seemed to support the goals of the trauma system pretty well.

Things You Take Away

Come March 2011, the screens were taken away from the HIRT set-up as the trial wrapped up. No more screening by the actual HEMS crew. Back to centralised control screening back in the office.

As the HIRT screening process seemed to have such a dramatic effect on the trauma system in Sydney we wanted to keep it going as did the trauma people in the Children’s Hospital at Westmead.  They had particularly noted the change as by virtue of geography they are the closest kids centre to most of the Sydney basin.  The increase in kids arriving straight to the ED even led them to revise their internal trauma systems. But away the screens went.

So the question for this subsequent bit of research was really pretty simple: did we lose anything going back to the centralised process alone? More crucially, do the patients lose anything?

Comparisons

This time the comparison wasn’t the two screening processes working alongside each other. It was before and after. What didn’t change was the sort of paeds patients being looked for. It was any kid with severe trauma. This might include head injury, trunk trauma, limb injuries, penetrating injuries, near drownings, burns and multi-casualty incidents with kids involved.

So in the ‘before’ epoch there were 71 cases of severely injured kids (covering 34 months) that fitted the bill. For the ‘after’ epoch there were 126 cases (over 54 months).

In the ‘before’ epoch with the systems working alongside each other, 62% of severely injured kids were picked up and had an advanced EMS team sent.

In the ‘after’ phase? It fell. To 31%.

And while the identification rate halved, it also took kids longer to reach the PTC, going from 69 to 97 minutes. 28 minutes might seem small but then most of us have probably seen how much can change in a severely injured patient in less time than an episode of Playschool runs for.
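As a rough illustration of how you might check whether a fall in identification rate like that is beyond chance, here is a minimal sketch using the approximate counts implied by the percentages above (62% of 71 and 31% of 126); the exact figures and the analysis in the paper may differ.

```python
from scipy.stats import chi2_contingency

# Approximate counts reconstructed from the percentages quoted in the post,
# not the exact numbers reported in the paper.
before_identified, before_total = 44, 71
after_identified, after_total = 39, 126

table = [
    [before_identified, before_total - before_identified],
    [after_identified, after_total - after_identified],
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"Before: {before_identified}/{before_total} = {before_identified/before_total:.0%}")
print(f"After:  {after_identified}/{after_total} = {after_identified/after_total:.0%}")
print(f"Chi-squared = {chi2:.2f}, p = {p_value:.4f}")
```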

Things that didn’t change? Well the overtriage rate for the CareFlight crew was pretty much the same. And whether advanced EMS teams or paramedic only teams reached the kids, their respective rates of transfer direct to PTC were pretty much the same as in the ‘before’ time. It seems that once crews get tasked they treat the patients much the same as their training sets them up to do.

It certainly seems that the right team in our system is a physician/paramedic crew (in NSW the doctor/paramedic mix is the advanced EMS set-up used across the board) as the kids get much more intensively treated at the scene and then get transported directly to a kids centre.  In other words faster access to advanced interventions and much faster access to the specialist kids trauma people.  Right team to the right patient at the right time.

 

The Washup

So we’re left with a few things to consider. There is an acceptance locally that severely injured kids are more likely to get time critical interventions if an advanced EMS team is sent (and advanced EMS teams could come from different backgrounds in different places, it just happens to be physician/paramedic here). There is a belief that those who’ve had that extra training and exposure will feel more comfortable with kids, who can be challenging.

The system has set a goal of getting those advanced teams to severely injured patients, and in this case we’re talking about kids. These two papers suggest that a model where those who are directly involved in advanced EMS are part of the screening process will identify more severely injured kids and get more of them straight to the PTC and definitive care.

Should this be a surprise? As the paper mentions, this isn’t the only example where a model in which clinicians who do advanced EMS work are part of the screening process seems to succeed above and beyond those who specialise in screening all calls. It may be that knowing the lay of the land when it comes to service capability counts for a whole lot. There is also work suggesting that telephone interrogation of the emergency caller by a flight paramedic is accurate when compared to assessment by on-ground ambulance crews when trying to figure out whether advanced care might help.

This was the experience with the HIRT screening process too, where structured callback was part of the game. The HIRT system also had some unique features.  It is the only one we have heard of where the crew sitting next to the helicopter identified the cases they responded to.  This seemed to create added benefits in shortening the time to getting airborne because parallel activities come to the fore (see the paper for more).  A very consistent six minutes from the beginning of the triple zero call (emergency call from the public) to airborne is pretty quick.

Does this have any implications for adults too?  Back in 2007, when the RLTC was introduced, the local ambulance admin made the decision that sending advanced EMS teams to severely injured patients was the standard in Sydney, and the RLTC’s job was to make that happen.  From the time the RLTC started till the screens were removed in March 2011 the HIRT system identified 499 severely injured adults.  The RLTC also spotted 82 of these, or 16%.  So the HIRT spotting system appears to be even more effective for adults than for kids.

Right now there are a bunch of different advanced EMS teams in Sydney, all wanting to get to that right patient and offer top notch care. Those patients would be very happy to have teams with the full range of skills coming. And all those teams have the skills to add the sort of protocolised screening that operated during HIRT. They’re sitting waiting for someone else to look their way.

So let’s work it through again.

Let’s say you were trying to meet that thorny challenge of right team, right place, right time. Let’s say you had ended up trying out a screening system similar to some others around the world but with some tweaks that made it even better, particularly for local conditions.

Let’s say that system hugely improved the way that severely injured kids were cared for.  Let’s say that system was also even better at spotting severely injured adults too.  Let’s say that system was part of the fastest end-to-end physician HEMS system yet described in the world literature.

Let’s say when you moved away from that screening system you didn’t pick up as many of the severely injured kids as you wanted to so they missed out on early advanced care, the kids didn’t get to your preferred destination first up as often and they took longer to get there.

You might ask why such a hugely effective system was discontinued in the first place.

You might ask why it has not been reinstated given the subsequent evidence.

And they would be very good questions.

 

Notes:

The image of Charlie in his guises was on the Creative Commons area of flickr and posted by Kevin O’Mara. It’s unchanged here.

The papers mentioned again are:

Garner AA, Lee A, Weatherall A, Langcake M, Balogh ZJ. Physician staffed helicopter emergency medical service case identification – a before and after study in children. Scand J Trauma Resusc Emerg Med. 2016;24:92.

Garner AA, Lee A, Weatherall A. Physician staffed helicopter emergency medical service dispatch via centralised control or directly by crew – case identification rates and effect on the Sydney paediatric trauma system. Scand J Trauma Resusc Emerg Med. 2012;20:82. 

Soundappan SVS, Holland AJA, Fahy F, et al. Transfer of Pediatric Trauma Patients to a Tertiary Pediatric Trauma Centre: Appropriateness and Timeliness. J Trauma. 2007;62:1229-33.

Mitchell RJ, Curtis K, Chong S, et al. Comparative analysis of trends in paediatric trauma outcomes in New South Wales, Australia. Injury. 2013;44:97-103.

Garner AA, Mann KP, Fearnside M, et al. The Head Injury Retrieval Trial (HIRT): a single-centre randomised controlled trial of physician prehospital management of severe brain injury compared with management by paramedics only. Emerg Med J. 2015;32:869-75.

Garner AA, Mann KP, Poynter E, et al. Prehospital response model and time to CT scan in blunt trauma patients; an exploratory analysis of data from the head injury retrieval trial. Scand J Trauma Resusc Emerg Med. 2015;23:28. 

Garner AA, Fearnside M, Gebski V. The study protocol for the Head Injury Retrieval Trial (HIRT): a single centre randomised controlled trial of physician prehospital management of severe blunt head injury compared with management by paramedics. Scand J Trauma Resusc Emerg Med. 2013;21:69. 

Wilmer I, Chalk G, Davies GE, et al. Air ambulance tasking: mechanism of injury, telephone interrogation or ambulance crew assessment? Emerg Med J. 2015;32:813-6. 

Did you check all of those out? Why not take a break from all of that and watch these French kids rock a club track?

 

 

Does the Thing in the Box Do What it Says?

Sometimes really simple questions don’t get asked. Here’s a joint post from Alan Garner and Andrew Weatherall on places you end up when you ask simple questions about ways of warming blood. 

Carriage of packed red blood cells (PRBC) by HEMS crews has become increasingly common in the last several years in both Europe and North America.  CareFlight was an early adopter in this regard and has been carrying PRBCs to prehospital incident scenes since the 1980s.  We reported a case of a massive prehospital transfusion in the 1990s (worth a read to see how much Haemaccel was given before we arrived on the scene and how much things have changed in fluid management).  In that case we tried to give plasma and platelets as well but the logistics were very difficult.  This remains the case in Australia with plasma and platelets still not viable in a preparation that is practical for prehospital use.

Returning to the PRBCs, however, the issue of warming them was something that always vexed us.  We experimented with chemical heat packs in the late 1990s and early 2000s but could not find a method that we felt was reliable enough.  We also looked at the Thermal Angel device from the US when it appeared on the market nearly 15 years ago, but as the battery weighed the best part of 3 kg we decided that the technology had not yet reached a point where it was viable for us to be carrying on our backs (battery technology has moved on a long way in the last 10 years and Thermal Angel now have a battery weighing 550 g).

Fast Forward

Hence we were pretty excited when we found that there was a new device available in the Australian market, the Belmont Buddy Lite, where the whole set up to warm blood or fluid weighs less than a kg.  We have been using the device for 3 years now, and our clinical impression was somewhere between impressed and “finally”.

Still, one of our docs, James Milligan, thought it worth validating this new technology. Part of that was about checking that the machine does what it says on the box. Is it just marketing or is it really that good?

The other thing we wanted to assess was how a commercial device compared to all those old techniques we were once stuck with. Traditional methods used by EMS in our part of the world include:

  • Stuffing the unit under your armpit inside your jacket for as long as possible prior to transfusion.
  • Putting it on a warm surface (black spine board in the sun or bonnet of a vehicle). Yep, baking.
  • That chemical heat pack method we had tried 10 years ago.

 

Fire
Some things aren’t a prehospital option. Well, this one probably isn’t an option anywhere.

The Nuts and Bolts

Now, how would you go about testing this? The first thought bubble included a pump set, a theatre wash bowl and a standard old temperature probe that you might use during an operation. Oh, and some blood. Like most bubbles that don’t involve property, it didn’t last long.

So we were left with a question: how do you try and set things up to test a system for the real world so it is actually like you’d use it in that real world, while still allowing measurements with a bit of rigour? How consistent are you when you deploy a blood-giving pump set?

Enter Martin Gill, perfusionist extraordinaire from The Children’s Hospital at Westmead. Because when we thought “how do we test prehospital blood warmers” obviously we thought about heart surgery in newborns. We turned to Martin with the following brief:

  • We want to test prehospital blood warming options.
  • We want to measure temperature really well.
  • We’re keen on being pretty rigorous about as many things as we can actually. Can we guarantee flow rate reliably?
  • We figure we could use units of blood about to be discarded and we want to be able to do the most with what we’ve got. So we want to be able to use a unit for a bunch of testing runs.

And Martin delivered. He designed a circuit (check the diagram) that would guarantee flow, measure in 3 spots, cool the blood once it had run through, and run it all through again. There are some things you could never come up with yourself. That’s just one.

Circuit diagram
It looks a little different in three dimensions but you get the idea.

 

You might wonder how hard it is to get blood. Well actually it was pretty easy (thank you Sydney Children’s Hospital Network Human Research Ethics Committee and Haematology at The Children’s Hospital at Westmead).

The results have just been published online in Injury.  So this humble little idea has led us some places and told us some things. What were those things then?

  1. As you will note, the commercial warmer was the only method that reliably warmed the blood to something like a physiological level.
  2. The change in temperature as the products pass through the line itself was more than we’d expected. Even the measurement of temperature just a little bit distal to the bag of blood showed a sharp step up in temperature (the mean was 9.4°C).
  3. All of the options that weren’t the commercially available device delivered very cold blood to the end of the line. After all, 18°C is the temperature we aim for when setting up deep hypothermic circulatory arrest in the operating suite. It is very cold. Should you even consider packed red blood cells if you aren’t going to warm them effectively?

In some ways, these aren’t super surprising items but small things like this can still be valuable. This was a humble little bench study of a simple question. Still, finding out that a device does what it says on the box by direct observation is reassuring. But …

 

We Have Questions

Research is very often an iterative process. Ask a question, provide answers to one small element of the initial puzzle, find another puzzle along the way and define a new question to explore. Each new question contributes more to the picture. On top of that, finding our way to the lab set-up and squeezing in the measurements around other work has taken a bit of time and things have moved along. This itself suggests new questions to ask.

Will everyone’s questions be the same? Well here are ours, so you tell us.

  1. Now that we’ve come up with a lab set-up to test the manufacturer’s recommended use, what about testing a situation that more closely matches how the warming device is used at the roadside? As noted in the discussion, we don’t use machines pumping blood at a steady rate of 50 mL/min. How will a warmer perform at the much higher flow rates we demand in prehospital use? Will it still be a warmer or more of a tepid infusion system?
  2. Are all devices the same? We didn’t choose the Buddy Lite because we were after a sweet, sweet money deal. It was the only prehospital fluid warmer with Therapeutic Goods Administration registration in Australia. There are now at least 2 other devices weighing less than 1 kg on the international market. They also advertise an ability to work at higher flow rates of up to 200 mL/min.
  3. Are there other potential problems when you warm the blood with these low dead space solutions? Let's just imagine for a second you're a red blood cell rushing through a warmer. In a pretty small area you'll be put through a temperature change of over 20°C within a system aiming to maximise that heat transfer in a very small bit of space (there's some rough arithmetic on just how much heat that takes below this list). That implies the pressure change across the warming device could be pretty sizeable. When you get to the end of that little warming chamber having effectively passed through a very high pressure furnace, is there a chance you might feel like you're going to disintegrate at the end of it all? What we're alluding to is maybe, just maybe, does making red blood cells change temperature quickly while rushing through the system at up to 200 mL/min leave those red cells happy or is haemolysis a risk? If it were a risk, would the patient benefit from receiving smashed up bits of red cell?
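To give a feel for why flow rate is such a big deal for a warmer, here's some back-of-the-envelope arithmetic. The specific heat and density figures are rough textbook approximations for blood rather than anything measured in our study, and the temperatures are just illustrative.

```python
# Rough estimate of the heating power needed to take refrigerated red cells
# to body temperature at different flow rates. Approximate values only.
SPECIFIC_HEAT = 3600.0  # J/(kg*K), approximate for blood
DENSITY = 1.06          # kg/L, approximate for blood products

def warming_power_watts(flow_ml_per_min, temp_in=4.0, temp_out=37.0):
    mass_flow_kg_per_s = (flow_ml_per_min / 1000.0) * DENSITY / 60.0
    return mass_flow_kg_per_s * SPECIFIC_HEAT * (temp_out - temp_in)

for flow in (50, 100, 200):
    print(f"{flow} mL/min: roughly {warming_power_watts(flow):.0f} W")
# About 100 W at 50 mL/min and over 400 W at 200 mL/min: a fourfold jump in
# heat transfer squeezed into the same small warming chamber.
```

Whether that intense heat exchange (and the pressure gradient that comes with it) leaves the red cells intact is exactly the haemolysis question above, and arithmetic can't answer that; it has to be measured.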

 

Now that we've established a good model that will let us do rigorous testing, we can ask those new questions. Without the simpler first question, we wouldn't be so ready to get going. Those new questions would seem to be: how do modern devices perform at flow rates useful to the clinician rather than the marketing pamphlet? And what happens to the red cells in the process?

That’s the space to watch. Because that’s where we’re going next.

 

 

Notes and References:

Here’s the link to the prehospital massive transfusion case report mentioned near the start.

Garner AA, Bartolacci RA. Massive prehospital transfusion in multiple blunt trauma. Med J Aust. 1999;170:23-5.

And here's the link to the early online version of the blood warmer paper:

Milligan J, Lee A, Gill M, Weatherall A, Tetlow C, Garner AA. Performance comparison of improved prehospital blood warming techniques and a commercial blood warmer. Injury. [in press]

That image of the fire is from flickr’s Creative Commons area and is unaltered from the post via the account “Thomas’s Pics”.

And did you get this far? Good for you. Much respect to all those who read to the end of a thing. For this you get a reminder that you can follow along by signing up to receive updates when we post.

You also get the word of the week: colophon [kol-uh-fon], which is a publisher's or printer's distinctive emblem used as an identifying device on books or other works. Alternatively it can be the inscription at the end of a book or manuscript.

 

 

Summers Past – A Look Back at Drowning Cases

A quick post on a recent paper from one of the authors, Andrew Weatherall. You can get the full text over here, and it might also be worth having a look at the quick review Alan Garner did previously of a study from the Netherlands.

Every summer, for too many summers, prehospital teams at CareFlight go to drownings. Too many drownings. This isn’t to say it’s only summer, but that is definitely when most of the work happens. Sometimes they’re clustered in a way that makes you think there’s some malevolent purpose to it, some malign manipulation of chance striking at families.

And also at our teams, particularly the paramedics, backing up day after day.

So drowning is something we want to understand better. What are we offering? What are our longer term outcomes?

And surprisingly, given that drowning has been a long-term feature of preventable tragedy, particularly in Australia, there's not really that much research out there. In fact it was only in 2002 that clever people at the World Congress on Drowning sat down and agreed on definitions for what really counted as drowning.

So we set about trying to add at least a little bit to the discussion.

Looking Back

Retrospective research has a bunch of issues. It has a place though when you’re trying to understand your current practice and what you’re actually seeing, rather than just what you think you’re seeing.

We went back and looked at a 5 year period between April 2007 and 2012 (and full credit to co-author Claire Barker who did the majority of that grunt work). For most of this time the tasking system included the HEMS crew observing the computer assisted dispatch system screens. For some of the time there was also a central control person doing this, while from March 2011 on only the central control person screened cases. The aim of the game was to pick up cases where there was an immersion mechanism and either reduced conscious state or CPR, and get a team with advanced medical skills moving.

Key points of interest were whether the cases were picked up, what interventions the HEMS team undertook and, if possible, what were the outcomes for those patients? In particular was it possible to glean what their longer term outcomes were?

Things We Found

Up until the move to solely central tasking, all of those at the severe end of the scale (ISS > 15, meaning they had an altered level of consciousness or documented cardiac arrest) were identified for a HEMS team response. Once it went to central control alone, 3 of the relevant 7 were not identified (obviously not super big numbers).

Of the 42 patients transported, 29 could fairly be said to have had an ISS > 15, and you can see the interventions in the prehospital setting here:

[Table 2: prehospital interventions]

So what were our other findings?

Those who present with GCS 3

This group did not do well. Of the 14 in this group, 10 died within 2 weeks. Of the other 4, one died at 17 months, having had significant neurological impairment after their drowning.

But there was one patient with GCS 3 and a first reported rhythm of asystole that was rated as having normal neurological development on follow-up by the hospital system.

What was different about this kid? How do we make that outlier fit right in the middle?

That’s a nagging question from this study.

9 and above, 8 and below

In our patients, if you had an initial GCS over 8 there was no evidence of new neurological deficit. All the patients with GCS 8 or below were intubated and ventilated by the teams. Every patient with a GCS over 3 when the team arrived survived. All of the survivors (with any initial GCS) had return of spontaneous circulation by the time they were in ED.

Another feature was the neurological outcome for those with an initial GCS between 4 and 7 – 7 of the 8 kids in this group had a good neurological outcome. (One patient had pre-existing neurological impairment but returned to baseline.)


The Bystanders

An observation along the way that is a real highlight. All but one child with a GCS less than 8 on arrival of the HEMS team had received bystander CPR (and that included every one of those who were in asystole). Here's hoping that marks good community knowledge of what has to be done.

The Stuff We Just Can’t Say

There are the usual issues with retrospective studies here. Some patients may have moved out of the area and not had subsequent follow-up. There are also those three cases of severe drowning not picked up after the change in tasking options from March 2011. As those patients weren't managed by the reporting HEMS team they don't make up part of the 42 in this data set. As mentioned in the results, two of those cases went to adult trauma centres first, then were transferred more than 4 hours after the incident to a paeds specialist centre, where they unfortunately passed away. The other case did get a HEMS response by another organisation but we didn't have detailed access to the treatments undertaken.

There is another really important point about the follow-up. The follow-up here is short term, as it was what was available from the hospitals. It may well be that more subtle neurological impairment only becomes evident as kids get older, particularly when they hit school age. I happen to know that The Children's Hospital at Westmead has done some work on longer term neurological outcomes which should hopefully hit the public airwaves soon. Intriguingly they've looked at kids thought to be entirely neurologically normal and followed them in detail over the longer term. No spoilers in this post but I'll definitely follow up when it breaks.

It’s also the case that when you’re looking at those who get a HEMS response, you’re not catching the denominator of all drownings. This study doesn’t help us understand what proportion of total drownings end up at this more severe end of the spectrum.

With those provisos, retrospective research still has a key role. It’s still a brick of knowledge. A small brick maybe, but a brick. It’s certainly a better brick to build with than you end up with if you don’t look.

What do we need?

More. Always more data. There is a bit here that suggests things similar to other series, some of which are a good deal bigger. Outcomes after arrest from drowning are better than is generally the case for cardiac arrest from other causes. Our impression is that, as suggested by that Netherlands paper, time does matter and that once you get beyond 30 minutes of resuscitation further efforts are unlikely to help.

The other factor that would appear to make sense (but clearly needs lots of robust research) is that earlier delivery of interventions that should make a difference to outcomes would be good. That surely starts with a big focus on bystander CPR. But that should also be backed up by accurate, early triage of teams with the skills to extend that care. The right teams in the right place at the right time remains the challenge.

 

Notes:

There’s a comprehensive bit on the tasking stuff in this earlier paper relating to tasking and paeds trauma. It should clarify the different systems used, which can be a bit hard to get your head around.

After the paper came out, @LifeguardsWB asked a pretty reasonable question: "What were the average response and transport times?"

So as described in the subsequent tweet, the median times were:

  • Start of emergency call to team on scene = 17 minutes.
  • Time on the scene = 17 minutes.
  • Total prehospital time = 54 minutes.

 

 

 

Sandpits, Better Eyes and New Monitors – Can NIRS work for prehospital medicine?

This is part 2 of a series (part 1 is here) on trying to study near-infrared spectroscopy in the prehospital setting by Dr Andrew Weatherall (@AndyDW_). Can NIRS work? No one can be sure but here’s one approach to getting some data we can actually use. 

A while back I did a post where I pointed out that when you get sold technology, there’s a whole history behind the machine that goes beep that means it’s probably not what you’re told. And the example I used was near-infrared spectroscopy tissue oximetry.

That was partly because I’m involved in research on NIRS monitoring and I’ve spent a lot of time looking at it.  Like every time I look carefully in the mirror, there’s a lot of blemishes that I miss on a casual glance. I also don’t mind pointing out those blemishes.

So that post was about all the things that could get in the way – light bouncing about like a pinball, humans being distressingly uncatlike, comparing monitors that might be apples and aardvarks rather than apples and apples, and basing your whole methodology on assumptions about tissue blood compartments. Oh, and maybe you can't get sunlight near your red light.

Sheesh.

The thing is, I really want to answer that original question – “How’s the brain?”

So enough of the problems, can we find some solutions?

Actually I’m not certain. But I can say what we came up with. It’s a plan that involves sandpits, hiding numbers and finding better eyes. Oh, and changing the design of monitors forever and ever.

 

Playing in Sandpits

Our first step was to try and figure out if NIRS technology could even work in the places it wasn’t designed for. Not near the cosy bleeping of an operating theatre monitor where the roughest conditions might be inflicted by a rogue playlist.

We figured that the first issues might be all the practical things that stop monitors working so effectively. And we already knew that in the operating suite you often needed to provide shielding from external light to allow reliable measurements.

So we asked for volunteers, stuck sensors to their heads and took them driving in an ambulance or hopped on the helicopter to do some loops near Parramatta. It gave us lots of chances to figure out the practicalities of using an extra monitor too.

And we learnt a bit. That we could do it with some layers of shielding between the sensors and the outside world. That the device we tested, though comfortable next to an intensive care bed, was a bit unwieldy at 6 kg and 30 cm long for carrying to the roadside. Most importantly, that it was worth pushing on, rather than flattening everything in the sandpit and starting again.

Early engineering advice included “just put a tinfoil hat on everyone to shield the sensors”. I just … I … can’t … [via eclipse_etc at Flickr ‘The Commons’]

Hiding Numbers and Getting Out of the Way

The next thing that was pretty obvious was that we couldn’t set out to figure out what NIRS monitoring values were significant and at the same time deliver treatments on the basis of those numbers. We needed to prospectively look at the data from the monitor and see what associations were evident and establish which bits in the monitoring actually mattered for patients and clinicians.

Of course paramedics and doctors tend to like to fix stuff. Giving them a "regional saturation" number that looks a little like mixed venous oxygen saturation, while the manufacturer (usually) puts a little line on the screen as the "good-bad" cutoff, is a pretty good way to see that fixing reflex kick in. So to make sure it really is a prospective observational study, and that we're observing what happens in actual patients receiving their usual treatment, we ended up with a monitor with none of the numbers displayed. Better not to be tempted.

It was also obvious that we couldn't ask the treating team to look after the NIRS monitor, because either they'd stop delivering exactly the care they always do, or they'd (quite rightly) be distracted by the patient and not be as obsessive about the NIRS monitor as we need for research.

So recruiting needs a separate person just to manage the monitor. On the plus side this also means we can mark the electronic record accurately when treatments like anaesthesia, intubation and ventilation or transfusion happen (or indeed when the patient’s condition obviously changes). It’s all more data that might be useful.

Getting Better Eyes

One of the big problems with NIRS tissue oximetry so far seems to be that the “absolute oximetry” isn’t that absolute. When you see something claiming a specific number is the cutoff where things are all good or bad, you can throw a bucket of salt on that, not just a pinch.

 

Maybe all of this salt. [via user pee vee at Flickr’s ‘The Commons’]
The other thing is that picking up evolving changes in a dynamic clinical environment is difficult. What if it isn't just the change in the oximetry number, but the rate of change that matters? What if it's the rate of change in that number versus the change over the same time in total haemoglobin measurements, or the balance between cerebral and peripheral monitoring at the same time? How does a clinician keep track of that?

What we might need is a way of analysing the information that looks for patterns in the biological signals or can look at trends. The good news is there are people who can do that, as it's actually a pretty common thing for clever biomedical engineers to consider. So there are some clever biomedical engineers who will be part of looking at the data we obtain. When they have spare time from building a bionic eye.

My bet is that if NIRS monitoring is ever to show real benefits to patients it won’t be only by looking at regional saturation (though we’ll try that too). It will be the way we look at the data that matters. Examining rapidly changing trends across different values might just be the key.
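As a toy illustration of the sort of thing the signal-processing folk might do (this is not our actual analysis pipeline, and the window and threshold are invented for the example), you could track the slope of each signal over a short rolling window and flag when the cerebral trend starts to fall away from the peripheral one:

```python
def rolling_slope(values, window=10):
    """Least-squares slope over the last `window` samples (units per sample)."""
    if len(values) < window:
        return 0.0
    y = values[-window:]
    x = range(window)
    x_mean = sum(x) / window
    y_mean = sum(y) / window
    num = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))
    den = sum((xi - x_mean) ** 2 for xi in x)
    return num / den

def cerebral_diverging(cerebral, peripheral, window=10, threshold=0.5):
    """Flag when cerebral saturation is falling clearly faster than peripheral."""
    return (rolling_slope(cerebral, window)
            - rolling_slope(peripheral, window)) < -threshold

# Made-up numbers: cerebral value drifting down while the peripheral one is flat.
cerebral = [70 - 0.8 * i for i in range(15)]
peripheral = [97.0] * 15
print(cerebral_diverging(cerebral, peripheral))  # True in this toy example
```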

 

Thinking About the Monitors We Need

Let's imagine it all works. Let's assume that, even with all those hurdles, the analysis reveals ways to pretty reliably pick up that a haematoma is developing, or that the brain is not receiving enough blood flow, or that oedema is evolving (and there are other settings where these things have been shown). There's still a big problem. How do you make that information useful to a clinician who has a significant cognitive load while looking after a patient?

For each NIRS sensor that is on (3 in this study) we’ll be generating 4 measurements with trendlines. The patient is likely to have pulse oximetry, ECG, blood pressure and often end-tidal capnography too. Putting together multiple bits of information is an underappreciated skill that highly trained clinicians make a part of every day. But it adds a lot of work. How would you go with 12 more monitoring values on the screen?

Yes Sclater’s lemur, that’s 16 monitoring values to keep track of. [via user Tambako the Jaguar at flickr]
So before we can take any useful stuff the analysis reveals and free clinicians to use the information, we need to figure out how to present it in a way that lets them glance at the monitor and understand it.

How should we do that? Well it’s a bit hard to know until we know what we need to display. My current guess is that it will involve getting clever graphics people to come up with a way to display the aggregated information through shapes and colours rather than our more familiar waveforms (and that’s not an entirely novel idea, other people have been on this for a bit).

So then we’d need to test the most effective way to show it before finally trying interventional studies.

This could take a bit.

And that is a story about the many, many steps for just one group trying to figure out if a particular monitor might work in the real world of prehospital medicine. There are others taking steps on their own path with similar goals and I’m sure they’ll all do it slightly differently.

I hope we end up bumping into each other somewhere along the road.

 

Notes and References:

Here’s the link to our first volunteer study (unfortunately Acta Anaesthesiologica Scandinavica has a paywall):

Weatherall A, Skowno J, Lansdown A, Lupton T, Garner A. Feasibility of cerebral near-infrared spectroscopy monitoring in the pre-hospital environment. Acta Anaesthesiol Scand. 2012;56:172-7.

If you didn’t look on the way past, you should really check the video of Prof. Nigel Lovell introducing their version of a bionic eye. It’s pretty astonishing and I can’t quite believe I get to learn things from him.

It’s the very clever Dr Paul Middleton who was first author on a review of noninvasive monitoring in the emergency department that is well worth a read (alas, another paywall):

Middleton PM, Davies SR. Noninvasive hemodynamic monitoring in the emergency department. Curr Opin Crit Care. 2011;17:342-50.

Here's the PubMed link from a team taking a preliminary look at tissue oxygen monitoring after out-of-hospital cardiac arrest:

Frisch A, Suffoletto BP, Frank R, Martin-Gill C, Menegazzi JJ. Potential utility of near-infrared spectroscopy in out-of-hospital cardiac arrest: an illustrative case series. Prehosp Emerg Care. 2012;16:564-70.

 

All the images here were via flickr’s ‘The Commons’ area and used without any modification under CC 2.0

Studies in Blood from Iran – A Quick Review

We all want to stop bleeding. Here’s a quick review from Dr Alan Garner of a paper coming out of Iran that looks at haemostatic dressings. 

Hatamabadi HR et al. Celox-Coated Gauze for the Treatment of Civilian Penetrating Trauma: A Randomized Clinical Trial. Trauma Monthly. 2014;20:e23862. doi: 10.5812/traumamon.23862

There is not a lot of data on haemostatic dressings in the civilian context and human data from the military context is not randomised for obvious reasons. It is therefore nice to see a RCT on this subject in humans. In the study they compare the time to haemorrhage control and amount of haemorrhage in stab wounds to the limbs between 80 patients treated with Celox gauze versus 80 patients treated with normal gauze.

The study is from an emergency department in Tehran and is pragmatic in design. There are some limitations of the study worth mentioning. It was open label, and the amount of bleeding was measured simply by the number of gauze squares used. Weighing the gauze would have been a more accurate way to estimate ongoing blood loss.

The details of how the gauze was applied aren't that clear. To be effective the gauze needs to be packed into the wound against the bleeding vessel. Was the Celox used in this way to maximise the chances it would work? I can't tell from the paper. Oh, and the company provided the product for the trial.

Perhaps the biggest puzzle in the design is that patients with really significant haemorrhage (those requiring transfusion) were excluded from the trial. This is the group where you really want to know if the stuff works. You could theorise that this group of patients may have trauma coagulopathy and the method of action of Celox (being by electrostatic attraction and independent of clotting factors) might be particularly useful and a bigger difference between groups may have been found. I guess that will have to wait for another day and another trial that someone works through ethics.

Acknowledging all of this, there was a significant difference in the time taken to achieve haemostasis and in the amount of ongoing bleeding; the Celox gauze looked superior on both measures.

This suggests that it remains reasonable to use these products as evidence continues to point to efficacy. Of course these agents are not a magic bullet and all the other principles of haemostasis need to be applied as a package, including urgent transport to a surgical facility.

Research That is Positive When It Is Negative

This week's post is the first in a series touching on some of the challenges when you start researching technology for the prehospital setting (or anywhere really). Dr Andrew Weatherall (@AndyDW_) on why some monitors aren't the monitors you're sold.

I am new to the research game. As is often the case, that brings with it plenty of zeal and some very rapid learning. When we first started talking about the project that’s now my PhD, we set out wondering if we could show something that was both a bit new and a positive thing to add to patient clinical care.

It didn’t take long to realise we’d still be doing something worthwhile if the project didn’t work one little bit.

Yep, if this thing doesn’t work, that would still be fine.

 

Simple Questions

I’m going to assume no one knows anything about this project (seems the most realistic possibility). It’s a project about brains and lights and monitors.

It came out of two separate areas of work. One of these was the prehospital bit of my practice. All too often I’d be at an accident scene, with an unconscious patient and irritated by the big fuzzy mess at the middle of the clinical puzzle.

“How’s the brain?”

Not “how are the peripheral readings of saturation and blood pressure against the normative bell curve?” Not “how are the gross clinical neurological signs that will change mostly when things get really grim?”

“How’s the brain?”

At the same time at the kids’ hospital where I do most of my anaesthesia we were introducing near-infrared spectroscopy tissue oximetry to monitor the brain, particularly in cardiac surgery cases.

The story sounded good. A noninvasive monitor, not relying on pulsatile flow, that provides a measure of oxygen levels in the tissue where you place the probe (referred to as regional oxygen saturation, or tissue saturation or some other variant, and turned into a single number on a scale between 0 and 100) and which reacts quickly to any changes. You can test it out by putting a tourniquet on your arm and watching the magic oxygen number dive while you inflate it.

Except of course it's not really as simple as that. If you ask a rep trying to sell one of these near-infrared spectroscopy (NIRS) devices, they'll dazzle you with all sorts of things that are a bit true. They're more accurate now. They use more wavelengths now. Lower numbers in the brain are associated with things on scans.

But it’s still not that simple. Maybe if I expand on why that is, it will be clearer why I say I would be OK with showing it doesn’t work. And along the way, there’s a few things that are pertinent when considering the claims of any new monitoring systems.

 

A Bit About Tech

Back in 1977, a researcher by the name of Franz Jöbsis described a technique where you could shine light through brain tissue, look at the light that made it out the other side and figure out stuff about the levels of oxygen and metabolism happening deep in that brain tissue. This was the start of tissue spectroscopy.

Now, it’s 38 years later and this technology isn’t standard. We’re still trying to figure out what the hell to do with it. That might just be the first clue that it’s a bit complicated.

Of course the marketing will mention it’s taken a while to figure it out. Sometimes they’ll refer to the clinical monitors of the 1990’s and early 2000’s and mention it’s got better just recently. They don’t really give you the full breadth of all the challenges they’ve dealt with along the way. So why not look at just a few?

  1. Humans Aren’t Much Like Cats

Jöbsis originally tested his technique on cats. And while you might find it hard to convince cat lovers, the brain of a cat isn’t that close to a human’s, at least in size. (As an aside, I’m told by clever bionic eye researchers the cat visual cortex actually has lots of similarities with that of humans – not sure that explains why the aquarium is strangely mesmerising though).

He also described it as a technique where you shone the light all the way across the head and picked up the transmitted light on the other side. But even the most absent-minded of us has quite a bit more cortex to get through than our feline friends and you’d never pick up anything trying that in anything but a neonate.

So the solution in humans has been to send out near-infrared light and then detect the amount that returns to a detector at the skin, on the same side of the head as you initially shone those photons.

When you get handed a brochure by a rep for one of these devices, they'll show a magical beam of light heading out into the tissues, tracing a graceful arc and returning to be picked up. You are given to believe it's an orderly process, and that every bit of lost light intensity has been absorbed by helpful chromophores. In this case those would be oxy- and deoxyhaemoglobin, cytochromes in the cells and pesky old melanin if you get too much hair in the way.

See? Here's the pretty version that comes with the monitor we're using in the study. [It's the Nonin EQUANOX and we bought it outright.]
Except that's the version of the picture where they've put Vaseline on the lens. Each one of those photons bounces erratically all over the place. It's more like a small flying insect with the bug equivalent of ADHD bouncing around the room and eventually finding its way back to the window it flew in.

So when you try to perform the underlying calculations for what that reduction in light intensity you detect means, you need to come up with a very particular means of trying to allow for all that extra distance the photons travel. Then you need to average the different paths of all those photons not just the one photon. Then you need to allow for all the scattering that means some of the light will never come back your way.
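For the mathematically inclined, the usual starting point for those calculations is the modified Beer-Lambert law. This is the textbook form from the general NIRS literature rather than any particular manufacturer's implementation:

```latex
\Delta A(\lambda) \;=\; \Big( \sum_i \varepsilon_i(\lambda)\, \Delta c_i \Big)\, d \cdot \mathrm{DPF}(\lambda) \;+\; \Delta G(\lambda)
```

Here ΔA is the measured change in light attenuation at wavelength λ, the ε and Δc terms are the extinction coefficient and concentration change for each chromophore (mostly oxy- and deoxyhaemoglobin), d is the straight-line distance from source to detector, DPF is the differential pathlength factor that scales d up to account for all that bouncing, and ΔG mops up the light lost to scattering that never returns. The catch is that DPF and G can't be measured directly by a clinical device, which is where each manufacturer's own estimates and assumptions come in.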

That’s some of those decades of development chewed up right there.

  2. Everyone Looks the Same But They Are Different

So that explains the delay then. Well there's another thing that might make it hard to apply the technology in the prehospital environment. Every machine is different. Yep. If you go between systems, it might just be that you're not comparing apples with apples.

That particular challenge of calculating the distance the light travels? Every manufacturer pretty much has a different method for doing it. And they won’t tell you how they do it (with the notable exception of the team that makes the NIRO device who have their algorithms as open access – and their device weighs 6 kg and is as elegant to carry as a grumpy baby walrus).

So when you read a paper describing the findings with any one device, you can’t be 100% sure it will match another device. This is some of the reason that each company calls their version of the magic oxygen number something slightly different from their competitor (regional saturation, tissue oxygenation index, absolute tissue oxygen saturation just to name a few). It might be similar, but it’s hard to be sure.

Maybe that’s harsh. Could a walrus be anything but elegant? [via Allan Hopkins  on flickr without mods under CC 2.0]
  3. When "Absolute" Absolutely Isn't Absolute

You get your magic number (I'm going to keep calling it regional saturation for simplicity) and it's somewhere between 60 and 75% in the normal person. The thing is it hasn't been directly validated against a gold standard real world measurement of the same bit of tissue being sampled.

The NIRS oximeter makes assumptions about the proportions of arterial, venous and capillary blood in the tissue that’s there. The regional saturations are validated against an approximation via other measures, like jugular venous saturation or direct tissue oximetry.
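To make that concrete, here's a small sketch of the kind of fixed arterial-to-venous weighting that tends to be quoted in NIRS validation work. The exact split varies between manufacturers and studies (figures like 25:75 or 30:70 get cited), and none of the numbers below come from our own data:

```python
def reference_rso2(arterial_sat, venous_sat, arterial_fraction=0.25):
    """Weighted arterial/venous mix used as the comparator in many NIRS
    validation studies. arterial_fraction is an assumed constant (commonly
    0.25 to 0.30), not something the monitor actually measures."""
    return arterial_fraction * arterial_sat + (1 - arterial_fraction) * venous_sat

# Made-up example: SaO2 of 98% and jugular venous saturation of 65%.
print(reference_rso2(98, 65))        # 73.25 with a 25:75 split
print(reference_rso2(98, 65, 0.30))  # 74.9 with a 30:70 split
```

If the real arterial-to-venous proportions in the tissue under the sensor differ from that assumed constant (and in a shocked or head-injured patient they might), the "absolute" number drifts accordingly.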

On top of that all those “absolute NIRS monitors” that give you a definite number that means something? No. “Absolute” is not a thing.

It’s true the monitors have got much better in responding to changes quickly. And they’ve added more wavelengths and are based on more testing so they are more accurate than monitors from decades past. But they can still have significant variation in their readings (anywhere up to 10% is described).

And they spit out a number, regional saturation, that is actually an attempt to condense lots of parameters into a single figure a clinician can use. How many parameters? Check the photo.

This is from an excellent review by Elwell and Cooper. [Elwell CE, Cooper CE. Making light work: illuminating the future of biomedical optics. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2011;369:4358-4379.]
  4. The Practical Bits

And after all that, we reach the practical issues. Will sunlight contaminate the sample? Can it cope with movement? Do you need a baseline measurement first? Does it matter that we can only really sample the forehead in our setting?

All the joy of uncertainty before you even try to start to research.

 

So why bother?

Well the quick answer is that it might be better for patients for clinicians to actually know what is happening to the tissue in the brain. And acknowledging challenges doesn’t mean that it isn’t worth seeing if it’s still useful despite the compromises you have to make to take the basic spectroscopy technique to the clinical environment.

But even if we find it just doesn’t tell us useful stuff, we could at least provide some real world information to counter the glossy advertising brochure.

There are already people saying things like,

“You can pick up haematomas.” (In a device that just tells you if there’s a difference between the two hemispheres.)

“Low regional saturations are associated with worse outcomes.” (But that’s probably been demonstrated more in particular surgical settings and the monitoring hasn’t been shown to improve patient outcomes yet.)

“You can even pick up cytochromes.” (In the research setting, in specially set-up systems that are way more accurate than any clinical device.)

All of those statements are a bit true, but not quite the whole story. The other message I extract from all of this is that all this uncertainty in the detail behind the monitor can’t be unique to NIRS oximetry. I have little doubt it’s similar for most of the newer modalities being pushed by companies. Peripherally derived Hb measurements from your pulse oximeter sound familiar?

After all this it’s still true that if we can study NIRS oximetry in the environment that matters to us we might get an exciting new answer. Or we might not. And sometimes,

“Yeah … nah.”

Is still an answer that’s pretty useful.

 

 

This is the first in a series. The next time around, I’ll chat about the things we’re trying in the design of the study to overcome some of these challenges.

If you made it this far and want to read a bit more about the NIRS project, you can check out the blog I set up ages ago that’s more related to that (though it frequently diverts to other stuff). It’s here

 

Examining the Hairs on the Yak – A Good Chance for More Chat

One of the good things about research that has its own issues is that there is lots of scope to learn from the things about it that are good, as well as those that aren't so great. The nice thing about ongoing comment is it gives even more chances to explain why a researcher might make certain choices along the way. Every question in research has more than one way of approaching some answers. Dr Alan Garner returns to provide even more background on this particular study, which has already generated some interesting conversation and a follow-up post.

It’s an excellent thing to be able to keep having discussion around the challenges related to both conducting and interpreting a trial.  These things always bring up so many valuable questions, which deserve a response. So this is not going to be quick, but I hope you’ll have a read.

Lots of things changed between the time this trial was designed and now. Standards of care change. Systems, processes and governance models change. Indeed, in this trial standard care changed underneath us. We completed the protocol and gained ethical and scientific committee approval for this study during 2003.

The world was a different place then – at the start of 2003 George W Bush was US President and Saddam Hussein was still running Iraq. There is no keener instrument in medicine than the retrospectoscope, particularly when focused 12 years back. Would I have done things differently if I knew then what I know now – absolutely. Does the trial have hairs? Looks like a yak to me and I don't think we are pretending otherwise.

Asking Questions

Did we ask the right question? The question was pragmatic. Add a doctor, and with them comes a package of critical care interventions that were not routinely otherwise available in our standard EMS system. A number of cohort studies had previously looked at exactly this question and more studies have asked it since. Even papers published this month have examined this question, although the issue often overlaps with HEMS as that is how the doctors are frequently deployed.

I might segue slightly to address dp's question as well, which overlaps here. Is it the procedures that the team performs or the person performing the procedures that matters? Dp suggests that a better study design would be to have everyone use the same protocols and then compare doctors with non-doctors. Such a randomised trial has actually been done, although it was a long time ago now – 1987. It is one of the classic Baxt and Moody papers and was published in JAMA.

Patients were randomly assigned to a helicopter staffed by a flight nurse/paramedic or a flight nurse/emergency physician. The flight nurses and emergency physicians could perform the same procedures under the same protocols, including intubation, chest tubes, surgical airways and pericardiocentesis. By TRISS methodology there was lower mortality in the group that included the doctor, and the suggestion was this might be related to how they judged the necessity for intervention rather than technical skill. This study is well worth a read. They note that the outcome difference might have been removed if the nurse/paramedic team was more highly trained, but where does this end? We then move into the question of how much training is enough training and this is an area that I think is still in its infancy. Each time you do some research you prompt a whole lot of extra, usually interesting questions.

All That Methods Stuff

Anyway, back to this paper. All analyses presented in this paper were pre-specified in the statistical analysis plan. Although the protocol paper was not published till 2013, the statistical analysis plan (SAP) was finalised by the NHMRC Clinical Trials Centre in August 2010, more than a year prior to follow up of the last recruited patients. Copies of the SAP were then provided to the trial funders and NSW Ambulance at the time it was finalised in 2010. Along the way we have presented data in other settings, mostly at the request of interested parties (such as the Motor Accidents Authority who specifically requested analyses of road trauma cases) and in retrieval reviews. This is why there has been the opportunity for extra public scrutiny by experts like Belinda Gabbe. And public scrutiny is a good thing.

And Standard Treatments?

I’m very happy to provide some reassurance that this study did not rely on junior doctors being put through EMST/ATLS and then sent out to manage severe prehospital trauma patients. Rather the trial protocol states that treatment was according to ATLS principles. In 2003 there was no other external standard of care that we could cite for trauma patient management that was widely and internationally recognised.

The specialists had of course all completed EMST/ATLS but they were also all critical care specialists in active practice in major trauma centres in Sydney, with ongoing exposure to severe trauma patients. The average prehospital trauma management experience held by this group of doctors at the beginning of the trial was more than 12 years each. They operated to that high standard of treatment, with regular reviews of management to make sure it remained current best practice over the life of a trial that ended up being longer than we hoped.

Other Dimensions of Time

And time wasn’t a friend. Recruitment was indeed slower than planned. This is a common problem in prospective trials. Our estimates of how long it would take to recruit the required sample size were based on a written survey of the major trauma centres in Sydney in 2003 to determine how many unconscious adult blunt trauma patients they were seeing each year. This was reduced to 60% to reflect the fact the trial would recruit for only 12 hours each day (although during the busiest part of the day) and the time needed to recruit was then estimated at 3 years. We in fact planned for 4 years to allow for the fact that patients usually disappear when you go looking for them prospectively. This of course is exactly what happened but to a greater degree than we planned.
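As a rough illustration of that sort of planning arithmetic (the numbers here are invented for the example and are not the actual HIRT survey figures or sample size):

```python
# Hypothetical planning numbers, purely to show the shape of the calculation.
eligible_per_year = 300     # eligible patients reported by surveyed centres
daytime_fraction = 0.60     # recruiting 12 hours a day, over the busiest hours
target_sample = 500         # hypothetical total sample size

recruitable_per_year = eligible_per_year * daytime_fraction  # 180 per year
years_needed = target_sample / recruitable_per_year          # about 2.8 years
print(f"Plan on roughly {years_needed:.1f} years, then add a buffer")
# The real-world lesson: eligible patients reliably become scarcer the moment
# a prospective trial starts looking for them.
```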

I agree it would have been nice to have the results formally published earlier. We did present some results at the ICEM in Dublin in June 2012. It is interesting to note that Lars Wik spoke immediately before me at this conference, presenting the results of the CIRC trial on the Autopulse device. That study was finally published online in Resuscitation in March 2014, more than three years from recruitment of their last patient, and it did not include a six month neurological assessment as HIRT did. Getting RCTs published takes time. Given we did have to perform six month outcome assessments I don't think we were too far out of the ball park.

To keep you going, here’s a quokka who looks like he’d be up for a chat too. [Via Craig Siczak and unchanged under Creative Commons.]

Randomising in Time Critical Systems

Just to be sure that I really have the right end of the stick on the question of excluding patients after randomisation, I ascended the methodology mountain to consult the guru. For those that don't know Val Gebski, he is Professor and Director of Biostatistics and Research Methodology at the NHMRC Clinical Trials Centre in Sydney. He was our methodology expert from the beginning of planning for the trial.

When I reached the mountain top I had to leave a voice message but Val did eventually get back to me. He tells me excluding patients post randomisation is completely legit as long as they are not excluded on the basis of either treatment received or their outcome. This is why he put it in the study design.

These are essentially patients that you would have excluded prior to randomisation had you been able to assess them properly and of course in our study context that was not possible. The CIRC study that I have already discussed also adopted this approach and excluded patients that did not meet inclusion criteria after enrolment.

Prehospital studies where you have to allocate patients before you have been able to properly assess them are always going to have these kind of difficulties. The alternative for a prehospital RCT would be to wait until you know every element of history that might make you exclude a patient. How many of us have that sort of detail even when we arrive at the hospital?

Extra Details to Help Along the Discussion

The newly met reader might also like to know that the call off rate was about 45% during the trial, not 75%. This is not different to many European systems. If you don’t have a reasonably high call off rate then you will arrive late for many severely injured patients.

And of course the HIRT study didn’t involve “self-tasking”. The system randomised cases on a strict set of dispatch guidelines, not on the feelings of the team on the day. This process was followed for nearly 6 years. There was not a single safety report of even a minor nature during that time. Compliance with the tasking guidelines was audited and found to be very high. Such protocolised tasking isn’t inherently dangerous and I’m not aware of any evidence suggesting it is.

It's reassuring to know that other systems essentially do the same thing, though perhaps with different logistics. For example in London HEMS a member of the clinical crew rotates into the central control room and tasks the helicopter using an agreed set of dispatch criteria. This started in 1990 after it was found that the central control room was poor at selecting cases, and the change saw the call off rate fall from 80% to 50%. The tasking is still by a member of the HEMS team, they just happen to be in the central control room for the day rather than sitting by the helicopter.

A more recent study from last year of the London system found that a flight paramedic from the HEMS service interrogating the emergency call was as accurate as a road crew assessing the patient on scene. This mirrors our experience of incorporating callbacks for HIRT.

The great advantage of visualising the ambulance Computer Assisted Dispatch system from the HIRT operations base by weblink was that the duty crew could work in parallel in real time, discussing additional safety checks and advising immediately on potential aviation risks that might be a factor.

To consider it another way, why is the model safe if the flight paramedic is sitting at one location screening the calls but dangerous if he is sitting at another? What is the real difference between these models and why is one presumably a safe mature system and the other inherently dangerous?

More Mirrors

I agree that the introduction of the RLTC to mirror the HIRT approach of monitoring screens and activating advanced care resources (with extension to a broader range) was a good thing for rural NSW. However they did activate medical teams to patients in very urban areas of Sydney who were not a long way from a trauma centre, and where there was no suggestion of entrapment. Prior to the RLTC the Ambulance dispatch policy for medical teams was specifically for circumstances where it would take the patient more than 30 mins to reach a trauma centre due to geography or entrapment. Crossover cases obviously didn't explain the whole of our frustrating experience of recruitment, but it was one extra hurdle that finally led us to wrap recruitment up.

You can’t bite it all off at once

In a study where you collect lots of data, there’s no publication that will let you cram it all into a single paper. So there are definitely more issues to cover from the data we have. This includes other aspects of patient treatment. So I will be working with the other authors to get it out there. It might just require a little bit of time while we get more bits ready to contribute to the whole picture.

Of course, if you made it to the end of this post, I’m hoping you might just have the patience for that.

Here are those reference links again:

That Swiss paper (best appreciated with a German speaker). 

The Baxt and Moody paper.

CIRC.

The earlier London HEMS tasking paper.

The later London HEMS tasking paper.