Simulcast Journal Club April 2018 – Unconditional Love


Introduction :  

Simulcast Journal Club is a monthly series that aims to encourage simulation educators to explore and learn from publications on Healthcare Simulation Education.  Inspired by the ALiEM MEdIC Series, each month we publish a case and link a paper with associated questions for discussion.  We moderate and summarise the discussion at the end of the month in pdf and podcast format, including opinions of experts from the field. 

In order for the journal club to thrive we need your comments!  Some participants report feeling nervous about their initial posts, but we work hard at ensuring this is a safe online space where your thoughts are valued and appreciated.  To ensure this, all posts are reviewed prior to posting.  We look forward to learning from you. 


Title :  “Unconditional Love”

Nimali looked wearily at the end-of-financial-year forms that had piled up on her desk.  Trawling through the accounting reports, her stomach sank as she noticed how close to the bottom line their centre was treading.  They were keeping their heads above water, and the CEO of the hospital was a fan of simulation, but still… they were an expensive unit and some of the medical wards had been complaining of cutbacks.

She turned to Nitin who was quietly typing on the other side of the office.  “Do you ever worry all this isn’t worth it?” she asked.

Nitin paused for a moment and smiled.  “I haven’t been here as long as you, of course, but even I can see the difference in culture that your facility has brought to the hospital.  People communicate better.  The departments interact more warmly.  It’s a hospital people can be proud of.  Don’t doubt yourself, Nimali.”

Nimali was touched, but she was rarely one to avoid reflection. 

“But is that us?” she asked.  “Or are we just riding on the coat-tails of other cultural changes?  Have we really made a difference to patient outcomes?  It’s so damn hard to prove with research, this weirdly nebulous stuff!  We argue that simulation changes so many things, but we have so little evidence to prove it.  I don’t know.  Sometimes I worry we’re deluding ourselves.”

Nitin looked at her with a compassionate grin.  “How does one so talented have so much self-doubt?  You make a difference, my friend.  Don’t worry.  One day you’ll prove it.”

His words were genuine, too.  Having learned so much from her on his fellowship, Nitin was convinced Nimali could do anything.  Then again, he thought quietly, he was biased.  The truth was he’d been in love with her from the first time he’d heard her talk about psychological safety. 

The Article (Open Access!) : 

Kumar A, Sturrock S, Wallace EM, et al. Evaluation of learning from Practical Obstetric Multi-Professional Training and its impact on patient outcomes in Australia using Kirkpatrick’s framework: a mixed methods study. BMJ Open 2018;8:e017451. doi:10.1136/bmjopen-2017-017451

Discussion 

Simulation Educators often have a dual burden to both educate and promote their service as a powerful way to improve patient safety, but we have often struggled to prove it works.  Have we in some ways developed an unconditional love for the medium without evidence of actual patient impact?  In this month’s paper, we hope to learn from Kumar et al’s approach, where they used a mixed methods study to assess whether a simulation program made an actual measurable impact on patient outcomes. 

 

References : 

Kumar A, Sturrock S, Wallace EM, et al. Evaluation of learning from Practical Obstetric Multi-Professional Training and its impact on patient outcomes in Australia using Kirkpatrick’s framework: a mixed methods study. BMJ Open 2018;8:e017451. doi:10.1136/bmjopen-2017-017451


About Ben Symon

Ben is a Paediatric Emergency Physician based at The Prince Charles Hospital in Brisbane. In 2014 Ben was the first Simulation Fellow for Children's Health Queensland, and assisted in the statewide rollout of the SToRK Team's RMDDP program. He currently teaches on a variety of simulation-based courses covering paediatric resuscitation, trauma and CRM principles. Ben has a growing interest in encouraging clinical educators to be more familiar with simulation research.



8 thoughts on “Simulcast Journal Club April 2018 – Unconditional Love”

  • Luke Summers

    Hello Everyone,
    I am an Emergency Medicine Trainee based at the new Sunshine Coast University Hospital, currently undertaking a 6-month placement as the Education and Simulation Registrar. Although I have extensive experience as a participant in simulation training over the years, I am completely new to the provision of it.
    My initial superficial reading on the subject (including last month’s PEARLS paper) has undoubtedly been exceptionally useful in helping me get started in planning and debriefing simulation training. However, as alluded to on the podcast, research into simulation education does appear to be a lot woollier than the objective, outcome-based research we are used to in Emergency Medicine. So when I saw that this month’s paper was one utilising objective patient care outcomes, I thought it a great opportunity to join in the conversation. As neither an expert in reviewing academic journals nor in simulation education, I look forward to your responses to my initial impressions.
    My initial optimism did not bear out, though. I thought the use of qualitative feedback questionnaires was appropriate for assessing levels 1 and 2a of the Kirkpatrick scale, and I thought this was done well. However, I thought their use for assessing knowledge (level 2b) was less appropriate. It highlighted that many of the topics (CRM) that we use simulation to train in had been taken on board by the participants, but it did not objectively assess knowledge or ability with these topics. I do appreciate that formal assessment would be much more difficult, time-consuming and expensive than a questionnaire, but it would have allowed a truer understanding of whether level 2b had been achieved. This would potentially then be more readily relatable to any improvements in patient outcomes. If performed ‘on the shop floor’ it may also have allowed for assessment of level 3 (cultural changes).
    This brings us on to the level 4b component: the significant adverse events of obstetric medicine that were being assessed seemed too rare for the study to be suitably powered to comment on any significant improvements. The significant outcomes that were picked up were in management outcomes and not patient outcomes, although it was pleasing to see that all the patient outcomes commented on in the study (one assumes there were no maternal deaths from PPH) did improve following the introduction of PROMPT, if not significantly so.
    I think this highlights the difficulty of performing (and the resultant lack of) objective research in simulation training. We use simulation to train for high-risk, low-frequency events, which by definition would require massive studies to identify any significant objective improvements. I thought this was a valiant attempt to do so, and overall it suggests that our perceived benefits of simulation training are real, but it was thwarted by the rarity of the adverse events we strive to avoid. As pointed out by the authors, the time frames and the nebulous nature of relating simulation training to patient outcomes further hamper the feasibility of undertaking such research.
    As I mentioned above, I am thinly read in this topic and would welcome all feedback. Thanks, Luke

    • Ben Symon (Post author)

      Hi Luke,
      Thank you so much for being our journal club’s first responder and sharing your insightful thoughts about the study.
      I was quite excited to read the abstract for this study, as I thought in many ways this kind of mixed methods study might be able to prove something that can’t be achieved via qualitative or quantitative analysis alone. I think the article’s method approaches the challenge of ‘how to prove sim works’ really well, so it was confronting for me that the quantifiable clinical differences are quite scant.
      As you mentioned in your comments though, this stuff is all quite rare in modern obstetric care, and it reminds me a bit of Paediatric C-Spine studies…. it’s essentially impossible to recruit enough patients with actual pathology to definitively prove anything.
      So after some initial disappointment with the outcomes of this article, I have reframed this as evidence of some performance improvement post sim, without clear evidence of direct patient benefit.

      It’s hard, hey? Was the study just not powered enough to prove we can change outcomes with this stuff? Or am I just being a delusional sim groupie who’s going to reinterpret every paper I don’t agree with???

  • Bec Szabo

    Ben, I promise I’ll comment on the article by Arunaz and colleagues, but seriously, you need a literary agent and an advance on a romance novel about Nitin and Nimali. ‘Unconditional Love’ really could’ve belonged to February, the month of Valentine’s Day!

    Purely commenting on the Nitin and Nimali exchange: maybe a genuine part of the impact of simulation, and of the need for proof and evidence, is the impact on people and culture, and hence on how health professionals and the health system function, rather than only the direct impact on patient outcomes, which is so hard to prove, particularly in high income countries… Maybe, as with Nitin, sometimes it’s just so obvious that you don’t need to bang yourself on the head with a hammer to know it hurts. It’s just challenging to demonstrate ROI and worth in fiscally austere times; we need to provide evidence to our medical colleagues and those providing the money… and to those like Nimali who just cannot see what’s right in front of their faces.

    PS we need a follow up or novel to know what happens to Nitin and Nimali

    • Ben Symon (Post author)

      Thanks for posting Bec! The Nitin/Nimali novel has been coming, just in discrete monthly installments. (if you read the case studies in chronological order, anyway :p)

  • Derek Louey

    Sim education isn’t the only modality that has difficulty proving its validity. The problem is that it is difficult to prove that any educational activity improves hard clinical outcomes, because of the multiple confounders that influence clinical performance. But here is an interesting question: for all the purported advantages of sim, is it superior to other forms of training? I found this article thought-provoking.

    https://www.mja.com.au/journal/2018/208/4/educational-research-current-trends-evidence-base-and-unanswered-questions

    • Ben Symon (Post author)

      Thanks for that perspective Derek. I agree that, a lot of the time, educational activities aren’t mandated to provide evidence that they work. I guess given the relative expense of sim it makes sense, though, particularly since we make some phenomenal claims sometimes. Thanks for the extra reading!

  • Victoria Brazil

    Overall I think this paper just highlights the challenge of demonstrating value from simulation, or indeed any educational intervention.

    The authors have done an enormous amount of work gathering data in a structured way across the levels of Kirkpatrick’s model.
    The patient-level data collection would have been especially exhausting, and I can only imagine the disappointment when they crunched the numbers and found no improvement over historical controls.

    Of course, as sim enthusiasts, it’s easy to find methodological issues that might explain away this ‘poor’ result: historical controls are problematic, baseline outcomes were already good, the time period was not long enough, retrospective data registry, confounders and attribution issues for any educational intervention, etc etc.
    And of course we rejoice in all that good stuff at levels 2b and 4a – participants think they are doing better on a range of things we think are important.

    But as scientists, I guess we also have to accept this is the answer to the question asked, using the methods described.
    Just maybe there is no difference.
    Maybe it’s one of those ?80%? majority of sim activities for which we can’t point to a numeric, tangible return on investment (ROI).

    Does that mean we should stop doing PROMPT?
    Without a true comparison group, I’m not sure we can point to ‘nothing’ being a better alternative.
    And why was this result so different to Tim Draycott’s initial work? (which, as sim enthusiasts, we’ll keep quoting 🙂)

    But for me it invites three points to prompt discussion:

    1. The flaw of choosing ‘format driven’ education on the basis of a binary ‘it works (or not)’ world view. My qualitative research friends would say we need to explore what works, for whom, when, and under what circumstances. Educational interventions are rarely ‘cookie cutter’ and have different impacts in different hands. My thought is that these judgements are what make a great sim educator.

    2. Kirkpatrick’s isn’t the only model. Things like ‘logic models’ and other approaches may be better for complex interventions. I am no expert on this but I know where to look… https://www.ncbi.nlm.nih.gov/pubmed/22515309

    3. This doesn’t mean we can’t make our case for the ‘value’ in what we do… https://onlinelibrary.wiley.com/doi/full/10.1111/medu.13505
    and we should spend time getting better at articulating that.

    Thanks again Ben – an important article to consider, and I look forward to the comments and expert commentary.

    vb

  • ben lawton

    Hi Ben
    Another great choice of article! I think this is pretty clever. With the sim programs that we run, we are usually torn between measuring measurable things and trying to measure meaningful things. Obviously the latter is what matters, but it is very difficult to do, so we end up collecting Kirkpatrick level one stuff and calling that an outcome, which helps in justifying a service but is pretty unsatisfactory overall. I like the way they have attempted to come at this from a few angles. PROMPT is an inherently attractive course to try to measure level 4 outcomes for, as the patient population it trains people to look after is fairly homogeneous (generally healthy women of childbearing age), the vast majority of complications they experience come from a fairly small list, and all have an outcome (a baby) who is assessed with a widely accepted and validated outcome measure (an Apgar score) regardless of whether there is a study going on. From memory, other people have shown improvements in Apgars in big population cohorts after the introduction of PROMPT, which still has the weakness of proving association but not causation, though this has now been demonstrated after the introduction of PROMPT in a few different countries. This might be a bit of a stretch, but I can’t help feeling that taken together these trials behave like a clumsy stepped-wedge study and might be as good evidence as we are going to get for a while that this type of training is effective. Trying to measure level 4 outcomes for the paediatric resus courses that we teach is particularly challenging because of the very small numbers of patients involved and their heterogeneous nature.

    I found the qualitative component of this study interesting, and I wonder what thematic analysis of our feedback forms would look like and whether it would reflect that we are achieving the outcomes we are aiming at. As Vic alludes to above, though, I would have trouble being truly objective about this type of study, given that my job really depends on demonstrating value in what we do; I clearly have a vested interest in showing a measurable outcome. It really would be nice if those outcomes were meaningful, but I guess this paper demonstrates how hard it is to try to measure those meaningful outcomes, even when there are a ton of patient/population factors in your favour.