Simulcast Journal Club May 2017 – Hard Outcomes


Simulcast Journal Club is a monthly series heavily inspired by the ALiEM MEdIC Series.  It aims to encourage simulation educators to explore and learn from publications on Healthcare Simulation Education.  Each month we publish a case and link a paper with associated questions for discussion.  We moderate and summarise the discussion at the end of the month, including exploring the opinions of experts from the field.


Title: “Hard Outcomes”

The Case:

When he started in ICU Brad had never intended to have an archnemesis, but Dr Snythe had somehow applied for the role. “I overheard your sim fellow’s debriefing the other day,” said Snythe from across the tea room.  “It’s all very sweet, isn’t it? This debriefing with good judgment thing.  Lots of feelings and concerns and whatnot?  I was quite touched.  It’s like the care bears are interviewing ICU trainees.”

“I’m impressed,” said Brad dryly.  “I thought after two decades of bullying trainees you might not be able to recognise what feelings actually are.”

Snythe’s expression changed rapidly.

“Well, you know what doesn’t impress me, Brad? Your data.  No matter how much advocacy and inquiry you throw at these registrars, six months into your program their BLS metrics remain terrible.  You can uncover frames all you want, but you’re not making a speck of difference when it comes to our Quality & Safety outcomes.”

Brad’s ego was bruised and he came out swinging.

“Why are you even drinking coffee, Snythe? Out of nurses’ tears?”

“Nurses’ tears?” Snythe snarled.  “Funny you should mention.  We lost two nurses to fund your damn program in the first place. And when the director hears your results at the end of the year, I fully intend to get them back. Because believe it or not, Brad, I do care about stuff.  Like actual patient care.  And your program is getting in the way of it.”

Brad’s heart sank as Snythe stormed out.  It hurt so much more because there was some truth in it.  His trainees’ metrics remained terrible.  They loved the program, it was creating genuine cultural change, and he knew they were learning! But skill acquisition just wasn’t improving.

He needed a new strategy, and he needed it fast.  The vultures were circling.

 

The Articles:

Acad Med. 2015;90:00–00. First published online. doi: 10.1097/ACM.0000000000000934

Structuring feedback and debriefing to achieve mastery learning goals

Eppich WJ, Hunt EA, Duval-Arnould JM, Siddall VJ, Cheng A.

 

Resuscitation. 2014 Jul;85(7):945-51. doi: 10.1016/j.resuscitation.2014.02.025. Epub 2014 Mar 4.

Pediatric resident resuscitation skills improve after “Rapid Cycle Deliberate Practice” training

Hunt EA, Duval-Arnould JM, Nelson-McMillan KL, Bradshaw JH, Diener-West M, Perretta JS, Shilkofski NA.

 

Discussion

This month, we are looking at not one, but two articles!  They are in many ways a pigeon pair, and as such it’s valuable to read them together.  They describe a relatively new innovation in debriefing: Rapid Cycle Deliberate Practice (RCDP), in many ways the opposite of traditional debriefing with good judgment.  Eppich et al.’s paper provides an overview of strategies for structuring RCDP, while Hunt et al.’s paper provides hard data on the measurable improvements in paediatric resuscitation achieved by designing and implementing a paediatric RCDP program.

What are your thoughts on these papers and RCDP in general?  Have you successfully implemented it into your simulation utility belt, or is it more a theoretical practice you’ve heard about but not seen much of?

Let us know in the comments below.

 


References:

Acad Med. 2015;90:00–00. First published online. doi: 10.1097/ACM.0000000000000934

Structuring feedback and debriefing to achieve mastery learning goals

Eppich WJ, Hunt EA, Duval-Arnould JM, Siddall VJ, Cheng A.

 

Resuscitation. 2014 Jul;85(7):945-51. doi: 10.1016/j.resuscitation.2014.02.025. Epub 2014 Mar 4.

Pediatric resident resuscitation skills improve after “Rapid Cycle Deliberate Practice” training

Hunt EA, Duval-Arnould JM, Nelson-McMillan KL, Bradshaw JH, Diener-West M, Perretta JS, Shilkofski NA.


About Ben Symon

Ben is a Paediatric Emergency Physician at The Prince Charles Hospital in Brisbane and a Simulation Educator at Queensland Children's Hospital.

9 thoughts on “Simulcast Journal Club May 2017 – Hard Outcomes”

  • Ben Symon

    So to start the conversation for this month, I found these two articles confronting reading.
    Confronting for me because I think I’ve been working in a silo where Advocacy and Inquiry tend to be considered ‘best practice’, and Rapid Cycle Deliberate Practice is a term and concept unfamiliar to most.
    A perpetual theme in our journal club has been the need for simulation research to provide concrete evidence of clinical improvement in either patient safety or patient management. Here we have a beautiful article from Elizabeth ‘Betsy’ Hunt providing just that: hard evidence of improvement in a critical, life-saving skill. It’s then backed up by the Eppich article, which provides more theoretical knowledge and depth surrounding the skills required to facilitate Rapid Cycle.

    And here I am in my little silo, not really doing any of this. Working on perfecting my A&I, cyclical questioning etc., and missing this important new skill. And I have to say that even having read these articles, I really would like to see some role modelling of it as a technique before I try it myself on some unsuspecting residents.

    Traditional debriefing courses that I’ve done or heard about don’t often seem to address Rapid Cycle, yet it seems at least as important a tool as Debriefing with Good Judgment; you might consider it the ‘Yang’ to DGJ’s ‘Yin’. What a great tool to improve critical skills training. I would suspect that, despite my unfamiliarity with it, most people would be reasonably comfortable with the group dynamics of a ‘coach’ or ‘drills’ based session, in Western societies at least, given the ubiquity of coaching in sports culture. My challenge is not acceptance of the findings; it’s learning to do this effectively.

    I would also like to highlight the video hosted by debrief2learn in which Betsy Hunt explains the technique: https://debrief2learn.org/resources/rapid-cycle-deliberate-practice-lecture/, which I think is well worth a look.

  • Victoria Brazil

    Hey Ben
    Thanks for a great pair of articles to prompt thinking.
    I have started a version of this with our final year medical students – inspired by this paper, and also by the great work of Sunga, Cabrera and colleagues describing their “Live Die Repeat” simulation (blog post and link to paper here: http://emblog.mayo.edu/2016/06/11/live-die-repeat-on-movies-deliberate-practice-and-simulation/).

    The key sentence is “prioritises opportunities for learners to try again over a lengthy debriefing”.
    Our students have found that very useful, and we have found it avoids those uncomfortable scenarios where nothing is going right but you have to watch it all the way to the end before discussing :-), and then hardly know where to start.

    That said – like all mastery learning approaches – this really suits those skills and performances where the desired performance can be well described and there is minimal subjectivity in deciding whether learners have achieved it or not. Hence BLS/resuscitation is a perfect subject. Using directive feedback predominantly – as described in Walter’s article – feels easier, as ‘right’ and ‘wrong’ seem clearer and more technically based.

    We’ve adapted it a little for our student scenarios, where we are looking at more ‘grey’ situations of patient deterioration with fewer black-and-white expected actions. That said – we’ve been able to adjust, and it’s really forced some discipline into our scenario writing – “So just what do we regard as expected performance at this stage?”

    Walter’s article also illustrates that this doesn’t mean non-technical skills aren’t covered – as in the example of positive feedback on closed-loop communication.

    The article methods are well described, and illustrate attention to detail in measurement.
    From my reading, however, the comparison groups didn’t quite compare two different interventions.
    The historical control got one education program; the intervention group then got that program PLUS the RCDP.
    Hence the intervention group simply got more training, so it would be surprising if they didn’t do better.
    Outcome measures were good – robustly measured clinician performance in simulated environments – but we’re not sure if that translates to real-world patients. Hard research to do.

    The debriefing paper is a timely adjunct, as I agree the approach needs to be a little different. I like the emphasis on the pre-brief needing to be different – also described in the Live Die Repeat article. The actual words given as illustrative examples were useful for me.

    Thanks again and looking forward to more comments

    vb

    • Ben Symon Post author

      Thanks Vic. While I agree that the intervention group got more training, and that this could explain the improved outcomes, I still found it exciting just to see evidence that an educational intervention led to improved measured outcomes. While it might not prove that RCDP is better than traditional education sessions, the evidence that a combination leads to improved outcomes was still informative for me.

      To me as well, the coaching model provided just makes ‘intrinsic sense’ when we’re talking about skills like these, involving finite, measurable interventions. It’s a slippery slope, I guess, to assume that’s the reason they got better without evidence to back it up, but I look forward to incorporating it into my practice.

  • Ben Lawton

    Both these papers basically had me at hello. I think our current standards of mandatory training involving proving “competency” in BLS and ALS are great examples of what Steven Shorrock would call “ignorance and fantasy”, as they really don’t reflect work as done in the real world. I also worry that people both invest unnecessary emotional energy in worrying about the test and leave our BLS courses with the idea that they have passed or failed, rather than reflecting on which aspects of their performance make a difference to patient outcomes and how they can go about improving those in their everyday work. A mastery learning approach, whereby everybody “passes” and learners just vary in the time they take to get there, seems conceptually much more appropriate in terms of what we offer our learners and their patients.

    Walter’s paper identifies what I think is an important point in making a distinction between simple/complicated things, for which a correct approach can be reinforced, and complex things, which need more nuanced discussion.

    This approach seems to be effectively behaviourism, as we are using external stimuli to reinforce the “right” and “wrong” things to do. I agree with Vic that CPR/BLS is a great place to do this, as there generally are right and wrong answers. As things get more complex, these right and wrong answers become a little more grey and therefore more difficult to usefully reinforce by this sort of method without some sort of contextualising discussion. That said, there are particular aspects of a more complex resuscitation that could be reinforced with this kind of technique: whether communication loops are being closed, rather than specifically what was communicated about; whether appropriate attention was paid to pre-oxygenation/haemodynamic support prior to intubation, rather than specifically which methods were chosen. RCDP also seems like an attractive method for learning the SOPs/protocols which many of us have (or should have) around particular procedures such as intubation/CVL insertion/etc.

    There is a risk that participants learn to respond to a simulation-based trigger that will not necessarily be present in real life, and we need to be really careful to focus on aspects that actually improve survival (e.g. minimising interruptions in CPR) and avoid asserting “correct” behaviour in areas where practice is more controversial, as this may just alienate learners who no longer get the opportunity to discuss the pros and cons.

    As an added benefit, I also wonder whether people would be more inclined to subject themselves to a mastery-learning-type session as a refresher than to sit through the same lecture/skills stations/test that they did 12 months ago, as they are often asked to under our current mandatory training models.

    Walter’s paper points out that, “according to self determination theory promoting feelings of competence improves internal motivation”. I find as an educator that when everyone stuffs up the first scenario, and the last scenario is then done well after a day of emphasising the important bits, it motivates me too. So there may be rewarding aspects for both participants and educators in employing this kind of approach.

  • Suneth Jayasekara

    Thanks Ben for sharing these two great articles this week! Thought-provoking for sure! As noted by Vic and Ben L, it would appear that RCDP lends itself well to learning goals that have well-defined standards, with metrics that are easy to measure – ALS being the prototypical example.

    I had not come across RCDP as a concept before, but interestingly, we rather unintentionally used some of these principles, and combined them with gamification, in our registrar/nurse education simulations this month! It was the first sim education session in our new hospital ED, and we ran a “CPR sim wars” session, where we had 12 teams of 5 or 6 participants (yes – those are the kind of numbers we are dealing with!!) over two weeks. The teams had two rounds to achieve their best score, which was judged on some pre-defined objective criteria, including objective CPR quality as per the Q-CPR manikin. They received their score sheet as feedback between rounds, so that they could focus on areas to improve on. Of course the winning team received a trophy for their efforts! We thought it worked well, with the competition element making teams find creative ways to improve their score. At the end we gave some verbal feedback about the general themes and areas for improvement.

    It would be challenging to use similar principles outside of ALS-type scenarios, though. After reading these articles and the discussion, I think there would be value in allowing participants to repeat the sim after the “traditional” debrief, as a method of deliberate practice. Practically, however, time constraints would be the primary issue in our setting.

    Once again, thanks very much for facilitating this great monthly discussion!

  • Tanya Bohlmann

    Hi everyone.
    I’m new to the fray, currently collaborating with Suneth to run our new Sim Education program. I’m pretty much still a novice, but hopefully what I lack in experience I can make up for with enthusiasm! This forum is such a great way to see what everyone else is doing and thinking in the world of Sim education. These two articles provided great food for thought – thanks for sharing.

    As Suneth already alluded to, our CPR sim wars unintentionally used some of the principles of RCDP. It was intriguing to watch how quickly our trainees improved on their performance after their first-round attempt and after watching other teams perform. Their trend would have been interesting to follow had we time for more than two rounds per team!

    Due to the design of our new registrar education program, we repeat the sim event two weeks in a row. CPR sim wars was our opening event, and we obviously learnt a lot from delivering it the first time around. On the first run-through, we ran out of time to debrief the participants properly. As a result, one of the registrars was (understandably) rather disgruntled by the fact that he did not have the opportunity to get specific feedback about his team’s performance. We had designed such a great event that achieved so much in terms of teamwork, communication, CPR mastery, introduction to our sim lab etc. – but for him, the entire experience was tainted by the fact that we weren’t able to debrief him properly and provide feedback that would allow him to improve his performance in the future. It was a bitter pill to swallow, but an extremely valuable lesson to learn. The following week, we made sure that we had enough time to give feedback to all the participants. This was well received and rounded the event off well, but I’m sure many participants were still eager to know more detail about the elements of their performance that could have been improved on.

    Had I known more about this RCDP concept when I came up with the initial idea, perhaps we could have designed the event differently – providing short, real-time feedback – to improve the participants’ learning experience and mastery. Perhaps the next time we run it, we can try it. Core skills like CPR certainly lend themselves to RCDP, but I can imagine using some of these real-time “micro-debrief” skills embedded within a more complex sim – for example, using facilitators to pose reflective questions during a sim (similar to those described by Wayne et al.: “what are you seeing, what are you thinking right now?”) – almost like having a coach stand behind them…? Definitely lots to think about.

    • Ben Symon Post author

      Thanks for sharing the lessons you learned Tanya! It’s great to have you on board. Interesting to hear about your frustrated learner as well.

  • Walter Eppich

    I am honored that Vic and Ben S.—a fellow pediatric emergency doctor—have chosen our article about feedback and debriefing for this month’s journal club. How gratifying to read all the comments! :-) I fear I won’t be able to address each of them individually. However, taken collectively, these reflections have prompted a number of thoughts on my end.

    First, Dr. Snythe would be my archnemesis, too! I bristle at some of his provocative statements, namely that ‘Debriefing with Good Judgment’ (DGJ) is all about “lots of feelings and concerns and whatnot?” – this misses the point and seems to devalue that debriefing model. Also, “throwing advocacy-inquiry” at learners makes it sound like an indiscriminate one-size-fits-all approach. For me, DGJ represents a DEBRIEFING PHILOSOPHY and advocacy-inquiry a CONVERSATIONAL STRATEGY. Certainly we can all agree on the following: (a) we should give learners the benefit of the doubt that they’re trying their best and want to improve; and (b) we should inquire about learners’ rationale for action (or their frames of mind that drove their behavior). Of course, the latter depends on the performance domain. As we discuss in the paper, there are times when coaching and directive feedback are in order!

    Second, one of Ben Symon’s statements jumped out at me, namely that he found the notions in our paper confronting for him since he thinks he has “been working in a silo where Advocacy and Inquiry tend to be considered ‘best practice’…”. I empathize—I used to think the same! However, when I think of best practices for debriefing, I firmly believe that one size does NOT fit all. This explains why Adam Cheng and I advocate for a BLENDED APPROACH to debriefing in our 2015 ‘PEARLS’ paper. [See also our 2016 ‘More than one way to debrief’ paper with Taylor Sawyer in the lead. Michaela Kolbe, a psychologist and family therapist based in Zurich, Switzerland, also introduced a blended approach in her 2013 TEAMGains paper.] In PEARLS, Adam and I articulate three broad categories of educational strategies that educators can use in the analysis phase of the debriefing: (a) learner self-assessment, (b) focused facilitation (e.g. advocacy-inquiry and others), and (c) providing information in the form of directive feedback and teaching. Readers will recognize the third strategy in the coaching practices outlined in our journal club article. In my view, what our article adds to the literature is the powerful role of within-scenario debriefing—what we refer to as ‘MICRODEBRIEFINGS’ to reflect their brief and focused nature. I cannot emphasize strongly enough that for those who have mastered advocacy-inquiry, TARGETED use of this conversational strategy during microdebriefings can be highly effective and efficient. Furthermore, pairing specific observations with your point of view can also structure effective feedback. Here, your point of view should include WHY you think this is important and how to IMPROVE to achieve performance goals. I think of ‘giving feedback’ as advocacy-inquiry without the inquiry. Add the inquiry when you need to diagnose the learning need by uncovering the frames of mind. Of course, when I learned that Viva Jo Siddall, the educator in the Wayne et al. 2006 ACLS paper, stood behind team leaders and coached them DURING the simulation scenarios, it really rocked my world. I would never have gleaned that from reading the original publication. This practice mirrors what more experienced consultants might do whilst coaching a registrar or fellow through an actual resuscitation event.

    Third, our field desperately needs a more nuanced understanding of how to align intended learning outcomes with simulation/debriefing strategy; in my mind, we are only beginning to understand how to dose each of the educational strategies Adam and I outline in our PEARLS framework. At least for resuscitation education, the practices outlined in ‘Structuring feedback and debriefing to achieve mastery learning goals’ have quite a robust evidence base—as Betsy Hunt and colleagues outline. Further, I would remind readers that outcomes from Diane Wayne’s research program on ACLS demonstrate that deliberate practice and mastery learning lead to admirable retention of skills (Wayne et al. 2006) and clear translation of learning outcomes to clinical environments. [See also Wayne et al. 2008, Simulation-Based Education Improves Quality of Care During Cardiac Arrest Team Responses at an Academic Teaching Hospital.] The strength of these results supports the use of feedback and debriefing practices explored in this month’s journal articles. I must highlight one glaring aspect of Wayne et al. 2006 and Hunt et al. 2014: they included UNIprofessional learner groups, namely resident physicians only.

    Fourth, until recently, there was a clear gap in the literature about the importance of integrating feedback/debriefing practices within an overarching curriculum design. No more: the two papers in this month’s journal club highlight and justify that the traditional approach to simulation, with a 10-15 minute scenario followed by a 20-30 minute debriefing, is no longer defensible for resuscitation education. One could argue the same for other domains for which clear performance guidelines exist. Advocacy-inquiry alone will not promote learning when what learners really need is deliberate practice—actually doing the key skill over and over again until they get it right. Feedback and debriefing should be tailored to the performance domain and should be integrated within a thoughtfully designed curriculum. Mastery learning and rapid cycle deliberate practice (RCDP) each represent an approach to CURRICULUM DESIGN. For more detail about RCDP curriculum design, see Figure 2 in our “Feedback and Debriefing” paper. See also “Developing a simulation-based mastery learning curriculum” by Barsuk et al. 2016. As Vic highlighted, a key aspect of these curricula is an effective PREBRIEF to set the stage for the feedback and debriefing to follow.

    Fifth, I remain a strong advocate of DGJ and advocacy-inquiry in my own debriefing practice – particularly for training interprofessional and interdisciplinary teams dealing with complex and vague situations. Especially for experienced providers, I find it potentially risky and off-putting to coach and provide directive feedback without first understanding where they are coming from and what they were aiming to accomplish. So Ben S, you should keep your hard-earned advocacy-inquiry skills at the ready—you will still need them! And to Dr. Snythe: BUGGER OFF!

    Finally: for those who are interested in the application of these principles in clinical practice, see our paper: Eppich et al. 2016, “Let’s Talk about It: Translating Lessons from Healthcare Simulation to Clinical Event Debriefing and Coaching Conversations”.

    Disclosures: I work in the Department of Medical Education at Northwestern University Feinberg School of Medicine; my colleagues include Bill McGaghie, Diane Wayne, and Jeff Barsuk. Also, I receive salary support to teach on simulation educator courses with the Harvard Center for Medical Simulation.

  • Ian Summers

    I love it when Journal Club teaches me something new and in these articles it certainly has. Thanks to all who commented before me and helped shape my thoughts.

    So hello to everyone, and a special hello to new friend Walter, whom I last saw a few days ago strolling towards the Duomo in Florence, camera in hand. I hope the evening light gave you some good shots…

    and so to his article and what is new, at least for me…

    The concept of coaching within scenarios is dramatically different from the more traditional approach of a brief pause and reset with forward projection of goals and strategies, to be put into place by the team (pause and discuss). I have never tried a rewind as part of this, because I regarded time shifting as adding a layer of mind-bending that would be distracting, and because…well, because I hadn’t thought of it.

    The concept of in-scenario micro-debriefing is even more foreign and bizarre, except that it’s what we actually do when we are supervising senior trainees as team leaders running real, complex cases: not taking over, but gently directing and assisting and then stepping back out of the way. So in many ways this is much more natural than it would seem, and it has the added advantage of building practical supervision skills and building the capacity of trainees to utilise coaching during real cases. I would love to hear the thoughts of trainees as to whether it is distracting and adds pressure, or whether it helps. I suspect this depends largely on the skills of the coach.

    Traditionally, I used pause and discuss more with junior than with advanced trainees, partly to reduce the intensity and anxiety levels, but I have found it just as useful for seniors. In many ways it is the complexity of the task and scenario (as the authors discuss), rather than the seniority of the trainees, that dictates which approach to use, and repetitive skills training with senior trainees (e.g. RSI drills or ALS) is where the approach works. It makes sense, but again, this was new for me.

    Interested to hear about the competitive ALS drills after coaching, Suneth. I have done something similar for the last few years for ALS training and accreditation: seniors vs juniors, after both teams get coaching and repetitive drills, ending in a friendly but competitive war, scoring on designated goals (e.g. time to initial defib, CPR pause times) and communication and teamwork goals. Losers buy the winners a beer afterwards – our introduction to sim and registrar bonding at the start of each year. We have only 15 trainees, so we can get accreditation done in teams over 2-3 hours with high levels of buy-in.

    Thanks Ben and team and to the authors. Great articles this month.

    Ian
    @IanMeducator
