Simulcast Journal Club November 2018


Introduction :  

Simulcast Journal Club is a monthly series that aims to encourage simulation educators to explore and learn from publications on Healthcare Simulation Education.  Inspired by the ALiEM MEdIC Series, each month we publish a case and link a paper with associated questions for discussion.  We moderate and summarise the discussion at the end of the month in PDF and podcast format, including opinions of experts from the field. 

For the journal club to thrive we need your comments!  Some participants report feeling nervous about their initial posts, but we work hard at ensuring this is a safe online space where your thoughts are valued and appreciated.  To ensure this, all posts are reviewed prior to posting.  We look forward to learning from you. 

Journal Club

Title :  “Happy Endings” 

“This is it!” grinned Nitin.  In his hand, he held a copy of ‘Resuscitation’.  “An ecological study, across 26 hospitals.  Improved survival!  Statistical significance!  This is the paper we’ve needed!” 

He grasped Nimali’s hand as they walked towards their office.  Nimali’s phone rang; she glanced down, saw that Catherine was calling her, and quietly declined the call. 

“Nimali, between this data and the AHA Statement there’s no way the hospital can shut us down.” 

Her heart skipped a beat as she considered the implications.  “All this time we’ve been looking at RCTs that never get enough power.  But with this paper we can justify rolling out Brad’s new rapid cycle program!  Funding might get easier…  This is a huge moment!”  She jumped briefly as her phone buzzed again.  She put it on silent. 

Nitin stared into her eyes.  “I told you not to doubt yourself.  Everything we do here, it actually means something.  And now we have proof!  What a way to finish the year.” 

Nitin’s infectious passion dampened Nimali’s inhibitions.  She leaned forward and pecked him on the cheek.  “Everyone else is off campus at mandatory training.  I think it’s time we invested in some Spaced Repetition.” 

“I’d prefer some Rapid Cycle, but as long as there’s some contextual learning,” grinned Nitin. 

For the third time Nimali’s phone buzzed.  She sighed.  “Hold that thought.”  

“Hey, what’s up?” she asked, as Nitin kissed her neck.  “What? Oh God. Catherine, I’ll be right there.” 

She looked up from the phone in shock and pushed Nitin away.   

“The staff meeting, we need to get there… now.” 

Nitin could see the fear in her eyes.  “Nimali, what is it?” 

“Professor Snythe has been murdered.” 

The Article :  

Josey, K., Smith, M., Kayani, A., Young, G., Kasperski, M., Farrer, P., Gerkin, R., Theodorou, A. and Raschke, R. (2018). Hospitals with more-active participation in conducting standardized in-situ mock codes have improved survival after in-hospital cardiopulmonary arrest. Resuscitation, 133, pp. 47-52. 

 

Discussion :  

 

For several years on journal club we’ve mentioned the great white whale of Simulation Research: the ability to correlate our educational efforts with improved patient outcomes.  While some papers have achieved this on some levels, Josey et al.’s paper finds an association between in situ sim programs and decreased patient mortality after in-hospital arrest. 

 

For our journal clubbers this month, does this paper seem as exciting and validating as Nimali and Nitin seem to think?  Or are they seeing what they wish to see from this data? 

 

This is our last journal club for 2018, so jump in now! 

 

References : 

Josey, K., Smith, M., Kayani, A., Young, G., Kasperski, M., Farrer, P., Gerkin, R., Theodorou, A. and Raschke, R. (2018). Hospitals with more-active participation in conducting standardized in-situ mock codes have improved survival after in-hospital cardiopulmonary arrest. Resuscitation, 133, pp. 47-52. https://www.ncbi.nlm.nih.gov/pubmed/30261220


About Ben Symon

Ben is a Paediatric Emergency Physician at The Prince Charles Hospital in Brisbane and a Simulation Educator at Queensland Children's Hospital.

25 thoughts on “Simulcast Journal Club November 2018” 

  • Glenn Posner

    Like Nitin & Nimali, I was overjoyed when this paper was published and it created a Twitter flurry. I sat and read it cover to cover with a fine-tooth comb (which is admittedly rare for me) because this study, if true, might be the evidence I am always searching for to justify our in-situ simulation program. My first question was related to what the definition of “more active” vs “less active” hospitals was, because (prior to reading this paper) I would have considered my hospital one of the “most active” hospitals in Canada with respect to running mock codes. In the study, the hospitals that were more active were running 17 sessions per 100 beds vs. 3 per 100 beds in the less active group. This is disheartening, because my hospital has nearly 1200 beds – I would need to run 200 sessions per year to match the median of the “most active” group. Alas, the largest hospital in the study had 708 beds, and was a “less active” hospital, so I’m not sure my experience is comparable. I would need to run 4 mock codes per week rather than the two per month that are currently happening (and that I was proud of). This is in the context of other in-situ events that aren’t code-blue related. Beyond the coordination and debriefing instructor-hours required for these events, I also have lots of guilt about constantly taxing the code team residents and nurses with more frequent events. To this end, I contacted Dr. Raschke and asked for the median number of sessions at the >200 bed hospitals, and his preliminary answer has been that “any reasonable increase in training is likely to benefit patient outcomes.” He is implying a dose-response relationship between training and outcomes, and I can certainly live with that. This is the kind of paper I am going to share with the CEO of our hospital as evidence for the value of in-situ sim.

    • Ben Symon Post author

      Hi Glenn, Thanks so much for stopping by and sharing some really useful points about practical application of this data.
      I think my own enthusiasm overshadowed any thoughts about how to implement this data, and you make a really important point about how to actually achieve the production line of in situ sims that would be required to replicate this in a large service. My question for you, though: if there’s evidence for this, is it the responsibility of the sim team alone to provide this service?

  • Jennifer Dale-Tam

    Hi Ben & Glenn.

    To answer your question Ben, the sim team needs to be involved, but does not need to have sole responsibility for this. It could be shared between all educational departments of an institution. Organizational and financial support of in-situ programs could be shared across all stakeholders. The in-situ sim needs to be led by a trained simulation instructor. A recent AHA statement released by Adam Cheng et al. (2018), focusing on education strategies and structures around cardiac resuscitation, recommended debriefing focused on tasks and teamwork by trained faculty to improve outcomes. It also spoke to spaced training as improving outcomes, which is in alignment with the dose-response statement of Glenn and Dr. Raschke. Personally, I have seen improvements in tasks and team function around resuscitation in the teams I work with during the in-situ simulations I run and debrief through Glenn’s program, such as time to 1st defibrillation of less than 3 minutes, and breakdown of hierarchies in the inter-professional team, where the nurses were debriefing the physicians on what they needed from them. There was a successful resuscitation in our ambulatory care setting where the front-line nurses attributed part of the success to the regular simulations we have run for them. Though staff were initially resistant to participating, consistent feedback indicates they appreciate the simulations and request more of them. Glenn, comments like this could help alleviate your guilt about running in-situ simulations. The hard evidence from the Josey article, support from the AHA statement, and the soft evidence of feedback from participants and anecdotal support from instructors all provide sound support for in-situ simulation programs and engaging all relevant stakeholders.

    Cheng, A., Nadkarni, V. M., Mancini, M. B., Hunt, E. A., Sinz, E. H., Merchant, R. M., … Bhanji, F. (2018). Resuscitation Education Science: Educational Strategies to Improve Outcomes From Cardiac Arrest: A Scientific Statement From the American Heart Association. Circulation. https://doi.org/10.1161/CIR.0000000000000583

    • Ben Symon Post author

      Hi Jennifer,
      Thanks so much for your considered and thorough response; I really enjoyed your perspective.
      I think this paper, together with your and Glenn’s comments, has made me reflect on our own simulation service, and the limits of what we can do as a crew of 10 people in a large tertiary hospital service.
      I think as a simulation micro-community we have a fair amount of work to do in upskilling other educational staff to feel comfortable running in situ sims in their areas. I also think that this signals an important transfer of ownership of peer feedback conversations. If wards own their own responsibility to practice, ideally I would imagine this would help with morale and professional pride as well as patient outcomes.
      Interesting stuff!
      Ben

      • Jennifer Dale-Tam

        Hi Ben,

        Thanks for the feedback. I’m currently working on a project “upskilling” my educator colleagues in the hope of increasing their confidence in facilitating theatre-based sim, with future sessions on in-situ sim. In-situ sim usually brings an interprofessional component, which can be daunting to new simulation instructors. Sim services are “the guardians of expertise” who need to ensure the safety of the learning environment and the quality of the debriefing are maintained, but with support and mentorship other educators can facilitate in-situ sim, which takes time. Yes, it would help with morale and professional pride – it certainly does mine!

        Jenn

    • Glenn Posner

      Thanks, Jen!

      Indeed, I have started quoting this article at the end of the debriefing of the code team to assuage my guilt.

      Ben, it is not the responsibility of the sim team to provide this service, but it is our responsibility to coordinate this service because we are the guardians of the expertise regarding best educational practices (curriculum development, instructional design, objectives, etc…) and best practices regarding debriefing.

  • Janine Kane

    Hi Ben, Whilst I am not involved in any in-situ simulations, I know through many years of SIM training for EN students that, anecdotally, repetition builds skill! Repetition also builds speed and confidence. Any chance we get to run immersive SBE is a bonus for everyone involved. However, we are so busy trying to improve patient outcomes as well as staff confidence, expertise and, let’s face it, resilience, that the task of constantly having to prove SBE/in-situ sim works is exhausting and frustrating! Loved the story – my favourite storybook characters!

    • Ben Symon Post author

      Hi Janine,
      Great to have you back! It’s been so nice to have you coming along so regularly.
      I agree that the work done by Josey et al. will hopefully help us all justify our jobs, and also inform how we can best achieve optimal patient outcomes.
      Ben

  • Susan Eller

    Hello Ben,

    Thank you for updating the Nitin & Nimali story – I had to chuckle at your use of simulation terms in their particular connotation 😉

    As an educator, I agree with the article, Glenn & Jennifer that an increase in training leads to a dose-related improvement in performance. It is a little daunting in a 900- or 1200-bed hospital to achieve the “more active” rate of in situ simulations. As many of you mentioned, there is a need for good debriefing of the training, and that means training the ACLS instructors and code team trainers in good facilitation techniques. The Josey article and the Cheng et al. article provide a rationale to present to the financial authority of your institution.

    As a PhD student, there is still some part of my brain that gets stuck on the fact that correlation ≠ causation. The authors described the work as an ecological study, and again that challenges the rules of hard science. YET – I don’t know that I think “hard science” is necessary. When I read Jennifer’s description of the interprofessional simulations with nurses debriefing physicians, I think that kind of training changes not only performance, but culture. I believe those changes in culture contribute greatly to the improved outcomes.

    • Ben Symon Post author

      Susan, thanks so much for coming along and mentioning some concerns regarding the ‘hard science’ aspect of the paper. I was wondering if you could unpack a little what your concerns are about correlation vs causation?
      P.S. I have no idea what you mean about any connotations in my case study, it’s all very serious and above board I assure you :p

      • Susan

        Hello Ben,

        Hmm, sending the ice cream link, which is how I first understood correlation versus causation 😉 https://www.youtube.com/watch?v=BaETnBzM7yU.

        I think a better example I can give is that carrying a lighter is correlated with an increase in lung cancer. Of course the lighter doesn’t cause cancer. The cigarettes, which correlate highly with both the lighter and the cancer, are the cause. I think that is part of my question with this article: does the actual training cause the differences, or is the culture of the institutions – one that encourages and funds in situ training – what really creates the difference in outcomes?

  • Farrukh

    Great article this month, and what a massive undertaking. I also love the electronic checklist and would love to have something similar. I think it’s great we are beginning to see simulation data with a focus on patient outcomes.

    I was a bit surprised that there was not a significant improvement in CPR quality between the two groups. I would think that with direct feedback through frequent in situ simulations, there would be a difference in the quality of CPR also. I would have liked more in the discussion of why they felt this was not the case. It looks like the debriefs afterwards were focused debriefs. It would have been interesting if we had data on what the focused debriefing topics were (i.e. did they focus more on early defibrillation and give less focused feedback on qCPR?). The teamwork scale also appeared similar, but I did not see any debriefing items that focused on teamwork.

    Also, while outside the focus of the paper, I am curious as to the reasons for the differences between the more active and less active programs. Was it staffing? More administrative support? A more psychologically safe culture? I suppose more fodder for future mixed-methods studies!

    • Ben Symon Post author

      Hey Farrukh! Thanks so much for coming.

      I agree it is quite surprising that there is not a significant improvement in CPR quality or teamwork between the two study groups. If we extend Susan’s concern about correlation vs causation, the investigators observed minimal improvement in simulated resuscitation (except for improved shock times), so how do we then draw the conclusion that the training that DIDN’T make a difference in the simulated environment somehow made a big difference in real life?

      The authors concede the potential existence of an unknown confounder or other intervention that is having the effect. I’m not sure what that could be; I guess, hypothetically, the fact that the staff as a whole are becoming more experienced with codes on the ward may lead to improved transfer times to ICU, access to specialists, times to arrival, etc.?

      Exciting but tricky stuff!

  • Komal Bajaj

    I spent a lovely hour with a cappuccino and this manuscript when it was first published. I’m thrilled that the field of in-situ simulation is maturing such that we are seeing works like this, and I suspect there will be many more to follow shortly (including, hopefully, some from this group!). The “dose-response” relationship makes sense and, as Farrukh alluded to, a deeper dive into the factors that helped promote “activity” would be an interesting read. As we here in NYC are always thinking about how to scale up, I agree with Glenn that this article is potentially useful for stakeholder buy-in, though the actual % attribution to simulation for the improvement seen is a big question that remains to be answered and, in my view, the messaging would need to reflect that in some way.

    Nerdy simulation terminology-based flirtation… Snythe’s murder… this is getting serious.
    Happy New Year, Simulcast!

    • Ben Symon Post author

      Thanks for swinging by, Komal – hope to share a cappuccino with you soon at IMSH :p
      I agree there’s a big question that remains to be answered, but I also really like how the paper doesn’t try to overcompensate for that.

  • Victoria Brazil

    Hey Ben
    Great article.
    Late to the party this month, but have enjoyed the discussion so far.
    Does ISS really help?  Or is it just a dangerous and expensive ego trip for excitable sim educators?
    🙂
    I think this article helps… sort of….

    Most remarkable for me is the detailed and rigorous data collection system they have managed to sustain. As Atul Gawande suggests in his book ‘Better’, great performance starts with ‘counting something’. It tends to improve.
    The paper gives us lots of impressive, objective data.

    It does mean we are counting something which can be counted – quantitative metrics on resuscitation performance. Fortunately they are relevant and clinically meaningful metrics. But it does reinforce ‘traditional’ views of what sim (and ISS specifically) is for… technical, skill-based tasks. That’s not bad; it’s just that many of us are hoping (naively?) for bigger impacts on teams and culture that are harder to measure.

    Culture is relevant here – in the association/causation issue already raised.
    It may just be that great performers realise they need sim training and hence do more of it – the confounder is the attitude and culture of the teams involved.

    But it remains a great paper for the funding submissions…

    Robust, generalisable impact papers on ISS are all welcome, including negative studies. We need to know when we aren’t having an impact too. The question is not ‘does it work?’ but ‘for what, for whom, and under what circumstances?’

    Thanks again Ben

    vb

  • Belinda Lowe

    Thanks for choosing such an interesting paper!

    I had several take-away messages from this paper:
    1) It’s possible to achieve highly detailed and rather impressive data collection and metrics for ISS in a large-scale study
    2) Despite the reported differences in mortality, it was interesting that only time to defibrillation <2 minutes was significant
    3) This study reported no difference in teamwork performance, but this was only measured using a composite of having a defined leader and using closed-loop communication – there are surely other elements of teamwork performance at play here.

    I also couldn’t help but wonder if these two hospital groups are really comparable. The hospitals with more active ISMC had nearly double the number of IHCA events per year. They also had 40% more admissions per year. It is possible that IHCA survival is improved not only because they do more simulations BUT also because their code teams faced this clinical event almost twice as frequently as their less active counterparts. The proportion of patient admissions to ICU was also statistically greater in the less active group. Is this reflecting a sicker cohort of patients admitted to the less sim-active hospitals? The effect of a more active ISS program seemed to be most powerful in the medium and larger size hospitals. It would have been interesting to view a more comprehensive breakdown of the data rather than just the mean statistics.

    Overall an incredibly interesting paper with lots of food for thought but I am left wondering if we seem to be missing a piece of the puzzle.

    • Ben Symon Post author

      Hi Belinda, thanks so much for coming by and sharing your take-homes – they’re great.
      I think the point you raise about the different hospitals potentially being ‘non-comparable’ groups is a fair one too, although without knowing the hospitals well, I have no idea.
      The clear improvement despite relatively similar performance on observation is very intriguing to me…

  • Kamal Cortas

    From the point of view of mainly a “user” of in situ simulation, the hard outcome improvements from this paper are really exciting. I think the connection between team practice and improved team performance is somewhat intuitive, but it’s so encouraging to see a tangible improvement in patient outcome.

    I was impressed by the number of metrics that were captured here, and although it might add to the impression that simulation predominantly improves technical skills, I think the improvement in survival is a result of the improvement in clinical skills PLUS the non-technical skills that are so important to practice through ISS. Separating, or measuring, those differences might be the next frontier!

    I wonder whether, if clinicians/departments knew more about their own resuscitation metrics, this would stimulate a desire to practice through ISS – I know it would for me!

    • Ben Symon Post author

      Hi Kamal,
      Thanks for joining. It’s certainly an impressive body of work and I suspect a labour of love for In Situ Sim too.
      I agree, knowing your own resuscitation metrics might stimulate a desire to practice more in many people, although I suspect there’d be plenty of people who’d rather not know too.

  • Paul

    Hi Team, once again another thought-provoking article. It seems to put many of the advantages that simulation has for education into a nice research article. I thought that the electronic marking sheet was very interesting. A few factors stood out for me. The study looked at in-situ simulation; however, I do not see what advantage this had over centre-based simulation in this case, while there are significant disadvantages such as loss of a clinical space, interruptions, etc. The other was that in the results they noted that the rate of defibrillation was higher with more active ISMC, yet better composite CPR performance or teamwork was not demonstrated. They then go on to discuss that defibrillation performance was also not measured during the activities. It was great to see that they have been able to link the use of ISMC to improved outcomes for patients; however, I would have thought that with the use of ISMC there should also have been improvements in the other links in the ‘chain of survival’. I don’t think this article is as validating as initially thought.

    • Ben Symon Post author

      Hi Paul,
      I’m kind of with you, I’d have to say. When I wrote the case study I was trying to mirror the ecstatic glee of feeling validated in your work after trying to prove it for so long, but after reflecting on the paper more and more, I think it’s great, but not as validating as I’d originally imagined.
      I do, however, think that the authors are not deceptive about this at all; in fact, they state very clearly that this ecological study is best for generating hypotheses rather than confirming them. As such, I think it’s a fantastic paper.

  • Sarah McNamee

    If I’d known I got a month-to-month narrative, I would have come to play here ages ago! Watch out, Dan Brown.

    How cool to have that much feedback on the technical aspects/numbers/timeframes of resuscitation – it’s like our little BLS cart on steroids. It’s a really nice way to quantify what we do in simulation as a technical drill and, let’s be honest, doctors love hard, indisputable numbers.

    When we boil it down I guess that we can derive the following equation:
    Practicing effective resus = faster time to defib = less badness

    I don’t feel like this is anything mind-blowing; however, if I extrapolate from the above equation, we could argue that:
    Simulation = lives saved (yay!)

    So from a funding perspective, being able to quantify exactly how much less badness is a great way to encourage hospitals (particularly smaller hospitals with less IHCA, and indeed those doing less resus sim practice) to back simulation as an integral part of resuscitation training and medical education in general.
