Simulcast Journal Club April 2018 – Unconditional Love


Introduction :  

Simulcast Journal Club is a monthly series that aims to encourage simulation educators to explore and learn from publications on Healthcare Simulation Education.  Inspired by the ALiEM MEdIC Series, each month we publish a case and link a paper with associated questions for discussion.  We moderate and summarise the discussion at the end of the month in pdf and podcast format, including opinions of experts from the field. 

In order for the journal club to thrive we need your comments!  Some participants report feeling nervous about their initial posts, but we work hard at ensuring this is a safe online space where your thoughts are valued and appreciated.  To ensure this, all posts are reviewed prior to posting.  We look forward to learning from you. 


Title :  “Unconditional Love” 

Nimali looked wearily at the end-of-financial-year forms that had piled up on her desk.  Trawling through the accounting reports, her stomach sank as she noticed how close to the bottom line their centre was treading.  They were keeping their heads above water and the CEO of the hospital was a fan of simulation, but still… they were an expensive unit and some of the medical wards had been complaining of cutbacks. 

She turned to Nitin, who was quietly typing on the other side of the office.  “Do you ever worry all this isn’t worth it?” she asked. 

Nitin paused for a moment and smiled.  “I haven’t been here as long as you, of course, but even I can see the difference in culture that your facility has brought to the hospital.  People communicate better.  The departments interact more warmly.  It’s a hospital people can be proud of.  Don’t doubt yourself, Nimali.” 

Nimali was touched, but she was rarely one to avoid reflection. 

“But is that us?” she asked, “Or are we just riding on the coat-tails of other cultural changes?  Have we really made a difference to patient outcomes?  It’s so damn hard to prove with research, this weirdly nebulous stuff!  We argue that simulation changes so many things, but we have so little evidence to prove it.  I don’t know.  Sometimes I worry we’re deluding ourselves.” 

Nitin looked at her with a compassionate grin.  “How does one so talented have so much self-doubt?  You make a difference, my friend.  Don’t worry.  One day you’ll prove it.” 

His words were genuine, too.  Having learned so much from her on his fellowship, Nitin was convinced Nimali could do anything.  Then again, he thought quietly, he was biased.  The truth was he’d been in love with her from the first time he’d heard her talk about psychological safety. 

The Article (Open Access!) : 

Kumar A, Sturrock S, Wallace EM, et al.  Evaluation of learning from Practical Obstetric Multi-Professional Training and its impact on patient outcomes in Australia using Kirkpatrick’s framework: a mixed methods study.  BMJ Open 2018;8:e017451. doi:10.1136/bmjopen-2017-017451 

Discussion : 

Simulation educators often carry a dual burden: to educate, and to promote their service as a powerful way to improve patient safety.  Yet we have often struggled to prove that it works.  Have we in some ways developed an unconditional love for the medium without evidence of actual patient impact?  In this month’s paper, we hope to learn from Kumar et al’s approach: a mixed methods study assessing whether a simulation program made an actual, measurable impact on patient outcomes. 

 

References : 

Kumar A, Sturrock S, Wallace EM, et al.  Evaluation of learning from Practical Obstetric Multi-Professional Training and its impact on patient outcomes in Australia using Kirkpatrick’s framework: a mixed methods study.  BMJ Open 2018;8:e017451. doi:10.1136/bmjopen-2017-017451 


About Ben Symon

Ben is a Paediatric Emergency Physician at The Prince Charles Hospital in Brisbane and a Simulation Educator at Queensland Children's Hospital.

15 thoughts on “Simulcast Journal Club April 2018 – Unconditional Love”

  • Luke Summers

    Hello Everyone,
    I am an Emergency Medicine trainee based at the new Sunshine Coast University Hospital, currently undertaking a 6-month placement as the Education and Simulation Registrar. Although I have extensive experience as a participant in simulation training over the years, I am completely new to the provision of it.
    My initial superficial reading on the subject (including last month’s PEARLS paper) has undoubtedly been exceptionally useful in helping me get started in planning and debriefing simulation training. However, as alluded to on the podcast, research into simulation education does appear to be a lot woollier than the objective, outcome-based research we are used to in Emergency Medicine. So when I saw that this month’s paper was one utilising objective patient care outcomes, I thought it a great opportunity to join in the conversation. As neither an expert in reviewing academic journals nor in simulation education, I look forward to your responses to my initial impressions.
    My initial optimism did not bear out, though. I thought the use of qualitative feedback questionnaires was appropriate for assessing levels 1 and 2a of the Kirkpatrick scale, and I thought this was done well. However, I thought that their use for assessing knowledge (level 2b) was less appropriate. It highlighted that many of the topics (CRM) that we use simulation to train in had been taken on board by the participants, but it did not objectively assess knowledge of or ability with these topics. I do appreciate that formal assessment would be much more difficult/time consuming/expensive than a questionnaire, but it would have allowed a truer understanding of whether level 2b had been achieved. This would potentially then be more readily relatable to any improvements in patient outcome. If performed ‘on the shop floor’ it may also have allowed for assessment of level 3 (cultural changes).
    This brings us to the level 4b component: the significant adverse events of obstetric medicine being assessed seemed too rare for the study to be suitably powered to comment on any significant improvements. The significant outcomes that were picked up were in management outcomes and not patient outcomes. It was pleasing to see, though, that all patient outcomes commented on in the study (one assumes there were no maternal deaths from PPH) did improve following the introduction of PROMPT, if not significantly so.
    I think this highlights the difficulty of performing objective research in simulation training (and the resultant lack of it). We use this to train for high risk low frequency events which by definition would require massive studies to be able to identify any significant objective improvements. I thought this was a valiant attempt to do so, and overall it suggests that our perceived benefits of simulation training are real, but it was thwarted by the rarity of the adverse events that we strive to avoid. As pointed out by the authors, the time frames and nebulous nature of relating simulation training to patient outcomes further hamper the feasibility of undertaking such research.
    As I mentioned above, I am thinly read in this topic and would welcome all feedback. Thanks, Luke

    • Ben Symon (Post author)

      Hi Luke,
      Thank you so much for being our journal club’s first responder and for sharing your insightful thoughts about the study.
      I was quite excited to read the abstract for this study, as I thought in many ways this kind of mixed methods study might be able to prove something that can’t be achieved via qualitative or quantitative analysis alone. I think the article’s method approaches the challenge of ‘how to prove sim works’ really well, so it was confronting for me that the quantifiable clinical differences were so scant.
      As you mentioned in your comments though, this stuff is all quite rare in modern obstetric care, and it reminds me a bit of Paediatric C-Spine studies…. it’s essentially impossible to recruit enough patients with actual pathology to definitively prove anything.
      So after some initial disappointment with the outcomes of this article, I have reframed this as evidence of some performance improvement post sim, without clear evidence of direct patient benefit.

      It’s hard, hey? Was the study just not powered enough to prove we can change outcomes with this stuff? Or am I just being a delusional sim groupie who’s going to reinterpret every paper I don’t agree with???

  • Bec Szabo

    Ben, I promise I’ll comment on the article by Arunaz and others, but seriously, you need a literary agent and an advance on a romantic novel about Nitin and Nimali. Unconditional love really could’ve belonged to Valentine’s Day month of February!

    Purely commenting on the Nitin – Nimali exchange – maybe a genuine part of the impact of simulation, and of the need for proof / evidence, is the impact on people and culture, and hence on how health professionals and the health system function, rather than only the direct impact on patient outcomes, which is so hard to prove, particularly in high income countries… Maybe, as Nitin says, sometimes it’s just so obvious and you don’t need to bang yourself on the head with a hammer to know it hurts. It’s just challenging to demonstrate ROI and worth in fiscally austere times; we need to provide evidence to our medical colleagues and those providing the money… and to those like Nimali who just cannot see what’s right in front of their faces.

    PS we need a follow up or novel to know what happens to Nitin and Nimali

    • Ben Symon (Post author)

      Thanks for posting Bec! The Nitin/Nimali novel has been coming, just in discrete monthly installments. (if you read the case studies in chronological order, anyway :p)

  • Derek Louey

    Sim education isn’t the only modality that has difficulty proving its validity. The problem is that it is difficult to prove that any educational activity improves hard clinical outcomes, because of the multiple confounders that influence clinical performance. But here is an interesting question: for all the purported advantages of sim, is it superior to other forms of training? I found this article thought-provoking.

    https://www.mja.com.au/journal/2018/208/4/educational-research-current-trends-evidence-base-and-unanswered-questions

    • Ben Symon (Post author)

      Thanks for that perspective Derek. I agree that most educational activities aren’t mandated to provide evidence that they work. Given the relative expense of sim, though, I guess it makes sense that we are asked to – particularly since we make some phenomenal claims sometimes. Thanks for the extra reading!

  • Victoria Brazil

    Overall I think this paper just highlights the challenge of demonstrating value from simulation, or indeed any educational intervention.

    The authors have done an enormous amount of work gathering data in a structured way across the levels of Kirkpatrick’s model.
    The patient-level data collection would have been especially exhausting, and I can only imagine the disappointment when they crunched the numbers and found no improvement over historical controls.

    Of course, as sim enthusiasts, it’s easy to find methodological issues that might explain away this ‘poor’ result – historical controls are problematic, baseline outcomes are already good, the time period was not long enough, retrospective data registry, confounders and attribution issues for any educational intervention, etc etc.
    And of course we rejoice in all the good stuff at levels 2b and 4a – participants think they are doing better on a range of things we think are important.

    But as scientists, I guess we also have to accept this is the answer to the question asked, using the methods described.
    Just maybe there is no difference.
    Maybe it’s one of those (?80%?) majority of sim activities for which we can’t point to a numeric, tangible return on investment (ROI).

    Does that mean we should stop doing PROMPT?
    Without a true comparison group, I’m not sure we can point to ‘nothing’ being a better alternative.
    And why was this result so different to Tim Draycott’s initial work? (which, as sim enthusiasts, we’ll keep quoting 🙂)

    But for me it invites three points to prompt discussion:

    1. The flaw of choosing ‘format driven’ education on the basis of a binary ‘it works (or not)’ world view. My qualitative research friends would say we need to explore what works, for whom, when, and under what circumstances. Educational interventions are rarely ‘cookie cutter’ and have different impacts in different hands. My thought is that these judgements are what make a great sim educator.

    2. Kirkpatrick’s isn’t the only model. Things like ‘logic models’ and other approaches may be better for complex interventions. I am no expert on this but I know where to look… https://www.ncbi.nlm.nih.gov/pubmed/22515309

    3. This doesn’t mean we can’t make our case for the ‘value’ in what we do… https://onlinelibrary.wiley.com/doi/full/10.1111/medu.13505
    and we should spend time getting better at articulating that.

    Thanks again Ben – an important article to consider, and I look forward to the comments and expert commentary.

    vb

  • ben lawton

    Hi Ben
    Another great choice of article! I think this is pretty clever. With the sim programs that we run we are usually torn between measuring measurable things and trying to measure meaningful things. Obviously the latter is what matters but is very difficult to do, so we end up collecting Kirkpatrick level one stuff and calling that an outcome, which helps in justifying a service but is pretty unsatisfactory overall. I like the way they have attempted to come at this from a few angles. PROMPT is an inherently attractive course to try to measure level 4 outcomes for, as the patient population it is training people to look after is fairly homogeneous (generally healthy women of childbearing age), the vast majority of complications experienced come from a fairly small list, and all have an outcome (a baby) who is assessed with a widely accepted and validated outcome measure (an Apgar score) regardless of whether there is a study going on. From memory, other people have shown improvements in Apgars in big population cohorts after the introduction of PROMPT, which still has the weakness of proving association but not causation, though this has been demonstrated after the introduction of PROMPT in a few different countries. This might be a bit of a stretch, but I can’t help feeling that taken together these trials behave like a clumsy stepped-wedge study and might be as good evidence as we are going to get for a while that this type of training is effective. Trying to measure level 4 outcomes for the paediatric resus courses that we teach is particularly challenging because of the very small numbers of patients involved and their heterogeneous nature.

    I found the qualitative component of this study interesting, and wonder what thematic analysis of our feedback forms would look like and whether it would reflect that we are achieving the outcomes we are aiming at. As Vic alludes to above, though, I would have trouble being truly objective about this type of study, given that my job really depends on demonstrating value in what we do, so I clearly have a vested interest in showing a measurable outcome. It really would be nice if those outcomes were meaningful, but I guess this paper demonstrates how hard it is to try to measure those meaningful outcomes even when there are a ton of patient/population factors in your favour.

  • Suneth Jayasekara

    Thanks Ben for the article – and those who have contributed so far – very insightful comments! Thanks Vic for linking in the article about economic evaluation of simulation-based education – it was worth reading again, and it links in nicely with the discussion.

    Here are my thoughts regarding this…

    When looking for improvement in clinical outcomes, I think this study was set up to be a negative study right from the get-go. Monash is in all likelihood a mature obstetric centre with highly trained and experienced obstetricians and midwives. To significantly improve the performance of these practitioners, in the outcomes they looked at, with a half-day course would be very unrealistic. But I still think there is probably real value in having the course at the centre, as there is likely to be an improvement in the clinical performance of the more junior learners in the institution. However, this will not translate to improved clinical outcomes at Monash itself, but rather manifest a few years down the track, when that junior learner will be better equipped to deal with a situation in a smaller regional centre where they may work in the future.

    For example – imagine Jane is an obstetrician with 20 years’ experience at Monash Health. Jane is already pretty good at managing shoulder dystocia and post-partum haemorrhage. Jim and John are two junior obstetric registrars who attend the PROMPT course along with Jane, who provides valuable insights during the course. Over the next year Jim and John get to watch a few real cases expertly managed by Jane (who is only marginally better after attending the PROMPT course – and the patients do the same as they would have if the PROMPT course did not exist) – and Jim and John consolidate what they learnt from the PROMPT course. Two years later Jim and John are working in rural settings, where they are better equipped to deal with obstetric emergencies, and they train local staff as well. Patient outcomes HERE may be much better than they were three years ago, and this is contributed to by the PROMPT course at Monash. But how on earth do you measure this?!

    I think that in order to demonstrate improved clinical outcomes, a similar study would need to be done in a less mature setting – such as a compilation of a number of rural settings, or a large hospital in the developing world, where staff are less well trained prior to the course.

  • Nemat Alsaba

    Thanks Ben
    As always I look forward to reading your monthly entertaining case study and your great choices of articles.

    I really enjoyed reading this paper (well written, easy to read and very relevant to our day-to-day simulation work). It asks an important question (evaluation of a learning program) whose answer everyone is interested in, from different perspectives. From an educator / simulationist point of view, I want to know if I have made a difference to our participants and to patient outcomes. From a learner point of view, I want to know if this program will help me with my clinical and non-technical skills acquisition. From an organizational / hospital point of view, they want to know if the money spent on the program and the staff is worth it, and whether there is a measurable outcome / data to plot. I highly recommend this interesting read that Victoria mentioned in her response: “You can’t put a value on that… Or can you? Economic evaluation in simulation-based medical education” https://onlinelibrary.wiley.com/doi/abs/10.1111/medu.13505
    It is disappointing not to see a significant change in this study, but maybe, as Victoria said, “as a scientist perhaps we have to accept this is the answer to the question asked, using the methods described – there may be no difference!”

    I also enjoyed the insightful responses from my fellow educators on this article.

    “we are usually torn between measuring measurable things and trying to measure meaningful things”

    “sometimes it’s just so obvious and you don’t need to bang yourself on the head with a hammer to know it hurts”

    “We use this to train for high risk low frequency events which by definition would require massive studies to be able to identify any significant objective improvements.”

    I agree with Suneth that maybe another follow-up / extension study done at a different institute (rural, or in other countries) to assess the impact on patient outcomes and participants’ clinical and non-technical skills might show more obvious and palpable results.

  • Komal Bajaj

    Hi everyone!

    I thoroughly enjoyed reading this article when it was first published and now again, immediately prior to typing this response. The authors should be commended on their tremendous efforts to address this burning topic in simulation. I appreciated the clarity and mixed-methods nature of their approach to their data. I look forward to the future analysis they describe, including observation of teamwork.

    In our large health system in New York City, we too have found obstetric simulation programs a ripe target for analysis. Using the Phillips Model of Learning Evaluation as our guide, we took a deep dive into 10 years of births and evaluated clinical outcomes and risk/billing data. It initially felt like squeezing water out of a stone to get data out of our not-so-fancy electronic health record, but thankfully there are some very smart teams in our system who could help us. We found a favorable trend in both clinical outcomes and associated indemnity payouts (~$17 saved for every $1 spent). What was interesting is that the effect seems to diminish within ~18 months after training, in my view highlighting the need for refresher training. I wonder whether, in the experience from Monash, there would be more effect from training measured in a shorter window after training. We’re still poking holes in the analysis and hope to have something meaningful to share with the simulation community of practice in a few months.

    Though these are nascent efforts, we are beginning to look at the ROI of all of our system-wide programs. The Jump Simulation “ROI Game” is a fun and effective way to gain a shared mental model around some of these ideas. The game was developed by several large US-based health systems that are tackling these issues. At our organization, we also established a patient safety/simulation/finance committee to ensure that our goals/language are aligned.

    Look forward to hearing about other efforts to ascertain clinical impact.

    • Victoria Brazil

      Hey, thanks Komal.
      I just read that 17:1 figure on Twitter – I think from that recent symposium at your shop.
      So pleased you and others are doing this extremely important ROI work.

  • Arunaz Kumar

    Dear all,

    Please accept my sincerest gratitude to the simulation podcast team for reviewing this paper. A special thanks to Sarah Janssens for the expert opinion that offered a great summary.

    Although the topic has been closed, I wanted to contribute some insight into the study.
    It was a very difficult study to publish, as we were disappointed at the outset to note that no significant difference was shown in patient outcomes. This was quite a contrast to the original VicPROMPT paper by Shoushtarian et al in 2014.

    The reasons, as outlined by the group, are obvious – a difference in patient outcome data is difficult to demonstrate in a high-resource setting, where standards of patient care were high even prior to the introduction of PROMPT! Besides, outcomes are influenced by a multi-pronged approach which includes education, quality assurance review, adequate provision of services and support, etc.

    Although we were excited to report on Level 4b of Kirkpatrick’s framework, the strength of this paper is in triangulation and using multiple lenses to review a program (as highlighted by Sarah in her review).
    So, what is the take-home message?
    As clinicians, educators and researchers, we focus on patient outcome as it appears to be at the top of the “pyramid”, hence giving this rung of evaluation a superior level in Kirkpatrick’s hierarchy. It may almost appear to override the results demonstrated at the other levels. It is interesting to note in the group discussion how this negative outcome suddenly raises the question of whether the program has any value at all.

    In my humble opinion, these quantitative results can sometimes be distracting… I see the value in staff members reporting “what” they have learnt, and even more valuable are the “how” and “why” questions that provide insight into the “process” rather than focusing on the “outcome”!! The “behaviour”, which is not reported here, is the missing link, the cog in the wheel that is the connection between knowledge/attitudes and outcome! If we have overwhelming evidence to support a program, through participants having a positive reaction and a change in attitude and knowledge, then hopefully a change in behaviour follows… (but we are still endeavouring to demonstrate that in our ongoing evaluation…)
    Lastly, as Victoria has suggested, there are other linear frameworks as well, like CIPP and the logic model… it will be interesting to use another model in the same study…
    My compliments to the group for providing valuable insight into the study… which will be helpful in guiding other similar research…

Comments are closed.