Simulcast Journal Club October 2019

Introduction :  

Simulcast Journal Club is a monthly series that aims to encourage simulation educators to explore and learn from publications on Healthcare Simulation Education.  Each month we publish a case and link a paper with associated questions for discussion.  Inspired by the ALiEM MEdIC series, we moderate and summarise the discussion at the end of the month, including exploring the opinions of experts from the field. 

The journal club relies heavily on your participation and comments, and while it can be confronting to post your opinions on an article online, we hope we can generate enough of a sense of “online psychological safety” to empower you to post!  Your thoughts are highly valued and appreciated, however in depth they are and whatever your level of experience.  We look forward to hearing from you. 

Title :  “After Action Review” 

The Case :  

Nimali’s head was reeling. 

“So you’re telling me there’s a sinister organisation working to overthrow democratic governments by infiltrating Simulation centres with sleeper agents who can weaponize healthcare mannequins and activate them for a robotic terrorist attack on hospitals and healthcare systems?” 

Nitin marvelled.  It had taken him the last 15 minutes to explain his mission.  “Your ability to synthesise complex concepts into single sentences is one of the main reasons I fell in love with you.”  He paused for a second.  “That, and your eyes.” There was a chance, after all, that hadn’t sounded as romantic as intended. 

“I’ve been undercover, tracking down sleeper agents for the last few years and retrieving them for deprogramming.  I was considered uniquely suitable for the job.  I was recruited to the secret service after my dual medical and criminal law degree was mentioned at my televised trophy presentation on India’s Next Ninja Warrior.”  He caught an impressed look from Nimali and smiled humbly.  “To be fair, Nimali, the medical degree was mainly just to please my parents.” 

“I recruited Brad early in my tenure here, since he had a police background, and it became clear Snythe was the sleeper agent : he’d developed a sudden interest in Sim, he had a pre-existing toxic personality that would be susceptible to recruitment, and he was making swift moves to become director of education in order to control the sim lab.  We knew we had proof when Brad found mannequin hacking software on Snythe’s office laptop.” 

Nitin sighed and wrung his hands anxiously. 

“We were supposed to remove him for extraction, replace his body with a lifecast and make it look like an elaborate hoax/disappearance, but instead Snythe was somehow killed for real and I don’t know why.” 

Nimali held his hand.  “We debriefed the murder,” she said, “but we were so focused on keeping everyone calm that we haven’t analysed the data effectively.” 

Nitin nodded.  “It’s been a busy few hours, but you’re right; it’s one thing to have an event debrief, but we need to act on the information.” 

Nimali paused.  “So let’s look at the facts : Snythe was murdered with a trocar.  Only you and Brad knew about the Lifecast mannequin.  Brad found the data on Snythe’s laptop without you, and you both planned the extraction together.  Brad was applying for the same job as Snythe, and you were with me when the murder happened.  So that only really leaves one suspect : Brad.” 

Nitin frowned.  “But that makes no sense.  Brad wouldn’t kill Snythe over a job rivalry?  He knew how important it was for us to extract information, and the flashdrive he found checked out as legit.” 

Nimali thought for a minute, then clicked her fingers. 

“Nitin,” she whispered.  “Every sleeper agent needs a handler.  And we’ve left Brad alone with our friends for the last 20 minutes.” 

The Articles : 

Brindle, M., Henrich, N., Foster, A., Marks, S., Rose, M., Welsh, R. and Berry, W. (2018). Implementation of surgical debriefing programs in large health systems: an exploratory qualitative analysis. BMC Health Services Research, 18(1). 

Rose, M. and Rose, K. (2018). Use of a Surgical Debriefing Checklist to Achieve Higher Value Health Care. American Journal of Medical Quality, 33(5), pp.514-522. 

Discussion :  

Clinical Event Debriefings are becoming increasingly prominent in healthcare systems, and simulationist experts are increasingly driving (or being driven) towards providing guidance within this space.  Often pitched as a combination of psychological first aid and after action review, it’s a rapidly evolving space with multiple controversies and barriers to effective implementation. 

This month we explore a qualitative and a quantitative paper examining the outcomes and challenges involved in large scale clinical debriefing programs.  The results seem impressive, so it will be interesting to hear your thoughts on the papers and on clinical event debriefing programs in general. 

References : 

Brindle, M., Henrich, N., Foster, A., Marks, S., Rose, M., Welsh, R. and Berry, W. (2018). Implementation of surgical debriefing programs in large health systems: an exploratory qualitative analysis. BMC Health Services Research, 18(1). 

Rose, M. and Rose, K. (2018). Use of a Surgical Debriefing Checklist to Achieve Higher Value Health Care. American Journal of Medical Quality, 33(5), pp.514-522. 

About Ben Symon

Ben is a Paediatric Emergency Physician at The Prince Charles Hospital in Brisbane and a Simulation Educator at Lady Cilento Children's Hospital. He currently teaches on a variety of simulation based courses covering paediatric resuscitation, trauma and CRM principles. Ben has a growing interest in encouraging clinical educators to be more familiar with simulation research.

17 thoughts on “Simulcast Journal Club October 2019”

  • Sarah Janssens

    Sleeper agents???? Wow I’m starting to lose track of all the twists and turns! Today the Mater crew got together over coffee/lunch (no curry) to chat about the papers. Most had read only the Brindle et al paper, and a few also the Rose and Rose paper. We had a wide ranging discussion about the Brindle paper which is kind of hard to summarise succinctly, but I’ll try to describe it here, although the order in which I present our points is not how they arose during discussion!
    The concept that “debriefing” surgical cases could be an EMR/paper based activity was quite a novel concept for many of us who tend to have a very simulation based view of debriefing. Lots of “how did they do it” and “what kind of training did they do” questions were asked, mostly left unanswered. However, it was appreciated that whatever was “done” was monumental, with the Rose and Rose paper describing >59000 debriefs!!! I get a headache just thinking about this!!!
    There was general appreciation of the time, money and effort invested in these programs, and the Rose and Rose paper’s title was cheekily labelled by me as “misleading” as it was felt that so much more than “use of a checklist” occurred. And I guess that’s the whole point of these papers right? – they clearly articulate that it isn’t just using the checklist that enables improvement. As so well described in the Brindle paper, it’s the asking for feedback, enabling team member voice, investing in people and time to collect data, monitoring systems level data to make improvements and enabling change, while always feeding back to the teams to sustain engagement. Figure 1 could almost work as a checklist for anyone wanting to implement such a program, and we pondered other programs involving big change that we have implemented, how many of those elements we had touched on, and which ones we may have missed.
    Planned debriefs such as these seem to be an organisation’s way of inviting feedback. Other feedback mechanisms could equally be established by running in situ simulations with time for a debrief afterwards. In this vein it was noted by one of our team members that when Mater first introduced the pop up sim program, staff on the floors “brought issues” into the debriefs knowing this was an avenue where someone would listen!
    It was agreed that one of the strengths of the system was debriefing the “routine”, which of course is error laden, but normalised in the staff’s minds. These papers highlight that identifying and correcting these numerous “bugs” in the “everyday” can lead to huge dividends, more so than perhaps focusing only on critical or simulated events.
    So lessons for our team? Inviting feedback (in whatever form), identifying performance gaps and acting on them is a recipe for success – in simulation and clinical environment. But be warned – sustained energy, leadership and money will be required to ensure success, and the Rose and Rose paper provides an argument for all healthcare systems to invest likewise.

    • Ben Symon

      Thank you so much Sarah for assembling the thoughts of the Mater curry club! I agree that these papers present a really clear emphasis on the importance of infrastructure and staff resources to action, analyse and communicate clinical debriefs. I particularly like that the Rose squared paper acknowledged that quality and safety discussions that lead to zero practice change are actually a demoralising and draining intervention with a potential negative impact on morale. It’s certainly made me look at my own clinical debriefing protocol with a humbled reflection on formalising the accountability more clearly!

  • Shannon McNamara

    These papers were great! I joined my colleagues at NYSIM in New York City to review them and learned a lot.

    1) We were struck by the lack of specifics on *how* the debriefing was done and also by how *so many* debriefings were done! It’s impressive to see these programs’ broad reach. How much does it matter how you debrief? Perhaps it matters more that we debrief and do something with the results.

    2) These papers seem more like Debriefing + Ops QI work to me and less about the debriefing itself. It’s amazing how the data from the debriefs was communicated to operations teams, how systems issues were identified, and how those systems issues were fixed without focusing on blaming individuals. In my own experiences, this type of collaborative engagement is difficult to achieve.

    3) Figure 3 in the Rose paper is an astounding example of managing efficiency / thoroughness trade offs. The labor hours per case went DOWN while the case count went UP. Mortality went DOWN. Overall – efficiency went up without sacrificing thoroughness. This shows the power of addressing underlying systems challenges that drag down workflows.

    My takeaways: clinical debriefing (whether from real clinical work or in situ simulation) can be a powerful surveillance tool to pick up systems issues from the realm of work as done, as long as there is another feedback loop from institutional leadership to fix the systems issues that are discovered. As Ben put it – “formalizing the accountability.”

    I wholeheartedly agree with the Mater Team’s takeaways: “Inviting feedback (in whatever form), identifying performance gaps and acting on them is a recipe for success – in simulation and clinical environment. But be warned – sustained energy, leadership and money will be required to ensure success, and the Rose and Rose paper provides an argument for all healthcare systems to invest likewise.”

    • Ben Symon Post author

      Hi Shannon it’s so good to hear from you and NYSIM! Thanks for taking the time to come by. I really appreciated your comment ‘How much does it matter how you debrief? Perhaps it matters more that we debrief and do something with the results.’, as I think that gets to the heart of things in regard to debriefing clinical events. Conversely I wonder if some of the principles would translate back to regular educational sim debriefings as well? Maybe we can get better at integrating iterative improvements in the sim sphere too!

  • Sonia Twigg

    I really did not expect this twist in the plot! And I concur with the peanut gallery about the effectiveness demonstrated in these papers of inviting feedback and then making sure it’s followed up – and appreciating that might be more difficult than it sounds.

    Re the Brindle paper, I did not realise that the WHO surgical safety checklist had three parts. I had participated in the timeout section while working as an anaesthetic registrar but didn’t realise there was a sign in & sign out/debrief section. I enjoyed the insight into the leaders’ perspective – the keys to success included strong executive leadership involvement as well as making the debrief part of a whole system culture of safety by empowering staff at all levels of the hierarchy and providing feedback – i.e. showing that issues raised in the debrief led to change. Interestingly, the main reason for failure seemed to be the loss of this feedback.

    And wow – the Rose paper demonstrated the power of simple but consistent learning conversations with the whole team after every clinical event and then following up on the issues raised/ solutions suggested! I looked at the scripted debriefing checklist in the supplement – it’s a pretty simple plus/delta conversation (with the negatives listed for patient harm, waste of resources, unnecessary cost, negative patient experience or negative teamwork). They underline that it took time to see improvement – but 4 years does not seem that long. I wondered if there was something special about their team that made this possible? Or is it just a matter of diligently using the process and ensuring accountable follow up?

    I am still a bit confused about exactly what we mean by the word debrief – is it simply any conversation we have with the team about an event after that event? There were some elements of a debrief that did not seem to be present in these papers. I think of a debrief similar to the way it is defined by the Healthcare Simulation Dictionary – as a formalised process, led by a facilitator, acknowledging or exploring emotions, and involving reflection and feedback with the intention to learn and improve.

    • Ben Symon Post author

      Hi Sonia! Thanks so much for coming along again, I know you’re busy at the mo. Thanks for digging through the supplements and finding some more detail on the debriefing structure, although I still wish this were more clearly described within the papers themselves.
      I’m a little mystified at the lack of buzz about this paper, given the level of morbidity and mortality improvement plus output and efficiency changes it claims. This appears to be a truly remarkable achievement that many of us would envy. Even looking at the journal club this month, comments have been limited. To me, if a drug was invented that had the same changes in mortality in a wide range of patients, the Nobel prize would be knocking, but when it turns out patient mortality is improved through long, consistent reflection and strong executive support for change integration … it’s not sexy?

  • Laura Rock

    Ben, thank you so much for promoting discussion on these papers. As you know I invited Michael Rose to deliver GR at BIDMC on his work, and he did a wonderful podcast hosted by Jenny Rudolph (see link below). As I was creating my own non-surgical routine debriefing program, I have taken advantage of his kindness and generosity of time with phone calls and discussions around his visit. Here are some take-aways based on the points you have raised here.

    Hi Sarah!!! You’re right that “checklist” is misleading, for two major reasons. One you identified: it’s about so much more than the checklist. It’s about changing the culture to a “what went well and what could we improve next time” culture of transparency and growth. The other major reason is that they actually didn’t even use the checklist: they found it too scripted, and their debriefs ended up being a quick check-in (around 2-3 minutes), with people having a chance to say out loud or write down on a note what was on their mind about the case as the case was ending. Then they had a level of follow-up on all these findings that the rest of us can only dream of. Now they have held over 140,000 debriefs, more than 97% of all of their surgical cases since 2010. The concept of “checklist” was what prompted the program, though, since it grew out of the Gawande-led safe surgery program.

    Shannon, you are totally right that it was more about QI and less about the debriefing itself. The Rose program has such remarkable follow-through, not just of what people said or wrote in debriefings, but in noticing patterns (such as unexpected admission to ICU post-op) and in detailed, exhaustive follow-up of even one-time events (like a patient fall from the failure of a positioning device). So the “debrief” was one small part of a huge culture shift. A huge part of it was the transparency of the findings: from the very beginning they posted the findings of every debrief publicly, including the names of everyone mentioned.

    I think the biggest message is that debriefs alone might make people feel a bit more connected and a bit more whole, and create some sense of community. But people will quickly become disengaged and even cynical if their observations and concerns don’t get any traction.

    The other big message is that people need to work in a culture where they can be honest, where their observations matter, where leadership actually listens, and where they really see an effort to act on what they say. It’s along the lines of thinking that a good “preview” of a topic, along with curiosity and kindness, can accomplish 99% of what a fancy, skilled debrief-trained facilitator can. In their program there was no skilled facilitator and no formal approach. But there was transparency, a culture of curiosity and improvement, and actual follow-through.

    Listen to 100% Adoption. 35% Mortality Reduction. $7,000,000 / Year. | A Talk with Michael Rose & Kate Hilton from The Center for Medical Simulation Presents: DJ Simulationistas… ‘Sup? in Podcasts.

    • Ben Symon Post author

      Hi Laura, thanks so much for your original suggestion of these papers and the granularity and detail you’ve added to the discussion.
      Thanks for linking the podcast, I really enjoyed listening to it. Interestingly, even on a meta level, Michael didn’t hugely emphasise the debrief model on the podcast; my perception was of a conversation heavily focused on ‘descending into the particular’ and ensuring the systems were in place to filter, consolidate and distribute that knowledge and implement solutions.

      I’m now thinking myself about what I will and won’t be able to implement in our own clinical debriefings in a lower acuity, non surgical, Paediatric setting. Our mortality rate might be one patient every 3 or 4 years, so our focus will need to be elsewhere.

  • Victoria Brazil

    Hey thanks everyone for a very interesting discussion.
    I agree – this extent of patient outcome improvement in the Rose paper is astounding.
    The challenge is to understand why that happened, and to what extent it can be replicated (in process and outcome). As mentioned in the papers, the surgical safety checklist also suffers from variable results in different contexts (we presume because of different application in practice).
    I’m with Shannon – the hard work here was done after the debriefs – the rigorous analysis and follow through speaks to a motivated and resourced team. This approach to QI is quite different to the broad, generic data surveillance that seems more characteristic in many institutions. Dare I say it – it is a lot more incremental, “1%’, marginal gains approach.
    That said – the awareness that the information from debriefs actually went somewhere would have been a great encouragement for clinicians to engage with the debriefs. Many of us may have a sense of learned hopelessness about suggesting improvements due to a perception that nothing changes as a result. The Brindle paper captures a lot of these principles of success/failure.
    As I think about this in my emergency medicine context I am stuck with an issue that Sonia raises – what’s a ‘debrief’? Some staff feel like they want more ‘psychological first aid’ after big cases, and in some situations this probably needs to be a ‘critical incident debrief’. Sometimes we are drawing on our simulation debriefing frameworks to help close performance gaps – individual and team. Finally the ‘QI type’ debrief (the focus of these papers) is similar to the INFO and DISCERN models.
    All of these are potentially competing for the 5-6 minutes we might try and prise out after a case in the ED. Ideally we might have highly skilled and available facilitators who would carefully decide which of these conversations was most appropriate for the circumstance. But even in a single institution that lack of standardisation/consistency is going to cause consternation.
    Maybe controversially, I’m not sure simulation debriefing (even in situ) offers us that many answers for these questions.
    Looking forward to our podcast wrap on this Ben

    • Ben Symon Post author

      Hi Vic! I’m so glad you’ve raised these points, but I’ve been particularly mulling over you and Sonia discussing the term debrief and its lack of specificity.
      It made me think of a great paper I read once :
      On translational sim and the concept of ‘not where, but why’?

      I think this principle works for clinical debriefing too, but instead of asking “where are we doing it” we’re asking “what do we call this, and how do we delineate?”.

      As you know, I think clinical debriefs contain 3 primary elements : Deactivation of staff from a state of higher to lower activation, Deconstruction of the case with facilitated reflection and a potential QI lens, and Dissemination of information (either from the team to the rest of the organisation, or from the patient’s outcomes back to the team).

      To me when I think of clinical event debriefing, they will always contain those elements but in very different proportions, and I think the core challenge of clinical debriefing is to have the adaptability to prioritise each domain depending on participant need.

      If it’s a stressful resus with an unexpected outcome, we should focus on deactivation/defusing. If it’s a standard case with opportunities for marginal gains, we should focus on deconstruction. If it’s a case with clear actionable change identified or systems issues found, we will need to focus on clarification of the message and effective dissemination to the right people.

      To me the term debrief is probably the best fit still, if only because every other term I’ve tried instead sounds clunky and lame and people go back to using debrief. But instead I think we need to subdivide with a focus on the core goal of our conversation.


  • Komal Bajaj

    I’m bummed to be so late to the party this month, especially as this is one of my favorite topics! I am very surprised that this paper didn’t get more buzz when it was published initially, as it provides tremendous fuel for this powerful tool. Kudos to Rose et al. for this remarkable work. I too would like to know more details about the implementation, as well as follow-up analysis as to why this intervention may be associated with the powerful findings described. It’s liberating to think of debriefs as bite-size morsels that may not necessarily always adhere to the strict structure we are typically used to following after simulation events. The image of a crepe cake always comes to mind when thinking about post-clinical event debriefs – one super thin layer building on another (maybe I’m just hungry). I’m particularly interested in how to collect all these crowd sourced gems, and I have a few lessons learned from local experience: less is more (i.e. better to have folks reliably complete a three item report than ask for 10 pieces of information that are barely ever completed), and clarity from the onset of the program about which hospital-wide committees these gems are reported to (with responsibility to act upon them) is crucial. I look forward to what we learn about this quality/safety vehicle in coming years.

    • Ben Symon Post author

      Hi Komal! So nice to hear from you every month, and yes I can imagine these papers are right up your #borntobuildagency alley!

      I love your crepe cake metaphor, and I agree that seeing real outcomes come from very brief quality improvement conversations can be liberating. I also think that as many of us transition from an educational debriefing model to a clinical debriefing focus, we are at risk of simply transplanting an educational format (which is often long, focused on frames, and with less focus on technical performance) into a clinical setting, where (from this paper) there should be a stronger focus on time efficiency, marginal gains and actionable outcomes.

  • Jenny Rudolph

    Hi Ben and colleagues,
    I thought I would throw in some details about how the work was actually accomplished from what I learned from Michael Rose when I interviewed him. (I am recalling this from memory and rough notes so it may not be perfectly correct.)
    The information from the three minute perioperative debriefs was captured by the circulating nurse/scout nurse in notes, but people were also able to simply write down ideas on a scrap of paper.
    These were given to an assistant working with Michael Rose, who spent approximately 90 minutes a day collating and organizing the findings from all the ORs into a spreadsheet. These were sorted into different categories and assigned to different people to deal with. If memory serves, there was a designated cadre of people who were assigned to make sure things were addressed. There was some time frame in which they needed to be addressed; I’m not sure I remember that accurately, but I recall several days, probably not longer than a week. As Laura Rock noted, the findings were posted daily, initially on post it notes and then in other formats.

    As others have noted in the dialogue above, getting serious traction on small reported issues probably created a positive feedback loop, or a virtuous reinforcing cycle, such that people noticed that their ideas got implemented, and that encouraged more reflection and more ideas.

    One other thing I might note from the interview was how to kick something like this off. I found it very poignant and generous that Michael Rose stated of himself that he was “highly cynical”, even to the point of “hardened skepticism”, about the value of anything like a checklist or debriefing.

    But since he was formally assigned to get “compliance” with the checklist across his health system’s hospitals, he tried to find a way to do it. It was notable to me that the process of personalizing, in small collectives (2 to 4 people), the personal stories of why colleagues went into healthcare and why they care about quality seemed to be a spark that pierced or defrosted many people’s skepticism. Once that engagement was triggered, it seems, the positive reward of seeing your ideas implemented kept the process going.

    I’d also like to recognize Laura Rock for persistently pursuing Michael Rose and helping bring his work to the Simulation Community’s attention. And thanks Ben for giving this some more space and attention here. I think the McLeod health work is transformational.

    • Ben Symon Post author

      Hi Jenny! It was such a lovely gift to receive your comments a few minutes after listening to your podcast with Michael and Kate on this issue : my own little personalised spaced repetition!

      I’m wondering if I can borrow your behavioural change brain for a moment with regards to one question I have been mulling over. The Rose papers highlight institutional responsiveness/receptivity to feedback as critical to generating the psychological safety and motivational engagement for staff to contribute to QI. But on an individual basis, I’ve found myself trapped between wanting to encourage every brainstorm suggestion, while also knowing that not every suggestion is a great one, and that responding to one person’s systems issue breaks another person’s functional model. I guess I’m wondering how we facilitate these conversations with some acknowledgement of Safety-II principles.

      Any thoughts from yourself or anyone else?

  • Stuart Rose

    Hi Ben

    Also late to the party. Thanks for highlighting more papers about debriefing in the workplace. Great discussion so far. Like yourself and Komal, I am surprised that the process described in the Brindle and Rose and Rose papers has not resulted in more coverage, as it seems so positive. Given that I work in the ED and we have a debriefing program running, there are a few things that stand out for me in these papers, as well as prompting me to ask some questions of our current process.

    1. What does the word “debriefing” mean to people? There are many definitions of debriefing in the literature. However, as Sonia and Vic mention, the process described in these papers is quite different to what we may traditionally consider a debriefing in simulation. I wonder if we are creating a barrier to post event discussion in the workplace if we call our conversation “debriefing”? Once we say debriefing, do staff think of a trained facilitator taking 45 minutes to explore the emotional, teamwork and process issues identified after the case? Have staff had a bad experience being “debriefed”? Debriefing clinical events is not the same as debriefing simulated events. We don’t have the time or the trained facilitators available that we have when we do a simulated case. INFO is a scripted process, so in some ways close to the checklist type style of feedback, but we still call it debriefing. Despite the structure, INFO debriefings often become emotional. Would it be better to call the post event discussion a “checklist” as described in these papers? Perhaps that implies that the discussion can be covered in several minutes but still promotes the team getting together, sharing experiences and identifying areas to improve. Would “checklist” discussions still allow participants to explore emotional aspects of the event if they feel safe to do so?

    I also completely missed this article when I did a search for clinical debriefing papers because of the terminology used to describe their process. It makes me realize that I need to really broaden out my search for evidence. I wonder how many other great papers I am missing?

    2. An integrated and supported system is important for a successful implementation. This is listed as one of the key factors for the success of the Rose process, and the withdrawal of administrative support contributed to the failure of the Beaumont hospital debriefing program mentioned in the Brindle paper. Promotion of the OR checklist had already created awareness of the benefit of inter-professional communication, and OR management was actively promoting structured conversations in the workplace. Jenny recollects that Michael Rose stated that he had been formally assigned to improve compliance with the checklist. In the ED, there is no precedent for regularly making time for team conversations, and debriefing has not traditionally been viewed as a regular part of patient care. Creating awareness of the need and then promoting debriefing within our busy EDs remains a big part of the INFO process. Without ED management support, the INFO process would not be sustainable, and regular attendance of ED admin meetings has been important to establish and maintain debriefing in the ED. We are still not debriefing all the cases that meet criteria, so we constantly need to continue working to promote debriefing. Trying activities like a “clinical debriefing month” is part of a strategy to raise awareness. I have been asked to show return on investment for the INFO process, and thus far it has been hard to show definite numbers. The Rose paper is an example of a debriefing program providing value in a clinical environment, albeit not in the ED.

    3. The debriefing process should be adapted to each specific environment. The OR debriefings described in these papers were mandated debriefings which took place after each case, with feedback given to staff each morning before they started in the OR. Each day staff reviewed trends revealed by debriefings as well as specific issues identified from recent debriefings. The OR environment allowed for this type of feedback strategy, but in the fluctuating and unscheduled ED, this type of debriefing program doesn’t work. INFO debriefings are voluntary, and while our goal is to debrief all appropriate cases, just doing one a shift would be considered a success. Providing staff with regular feedback on suggestions made during INFO debriefings remains a challenge. Changes have been made on a site-by-site basis (INFO is running in 5 EDs in Calgary) and do provide some evidence that suggestions have been addressed, but getting personal feedback to the staff is difficult. The site ED Nurse Educator gives staff feedback on suggestions, but as we do not keep a list of debriefing participant names, we may not be able to reliably relay the changes to all staff involved. One of our current plans to promote feedback to staff is a six-monthly infographic with a summary of suggestions from debriefings and the actions taken as a result of those suggestions.

    4. What is the value added for participants? The articles describe the keys to success and the challenges in the OR mainly in relation to performance metrics and quality improvement. Opportunities for quality and systems improvements do arise from INFO debriefings, but the shared human experience and support related to debriefing seems to be an important motivation for staff to debrief. Personally, debriefing has provided me with the opportunity to share my thoughts and reflections about what occurred with the team. This has allowed me to reset and move onto the next case with more available cognitive bandwidth. Anecdotally, I have heard similar accounts from other team members. Debriefing also creates a shared experience that can be referenced days or weeks later without preamble. As soon as a team member says to me, “Remember Mr. Smith, the patient we resuscitated in Bay 1 last week?”, we are both immediately on common ground and able to talk about the case. When this happens, or when I reach out to someone else to discuss a case, it helps me to feel that I am still part of that team. That support makes me want to debrief all my high stakes cases.

    Our debriefing process in Calgary has been running for three and a half years and while debriefings are nowhere near the numbers described in the Rose and Brindle papers, it is a start. The more evidence we can gather that shows value and return on investment, the easier it will be to convince management and staff that this is a valuable use of our time and should be standard of care in the ED.

    Thanks for highlighting this important topic, Ben. I look forward to hearing more.

  • Sarah Janssens

    Wow – what a conversation – good things come to those who wait Ben! I can’t help myself and need to dive back in. Thanks to Jenny and Laura for enlightening us on more of the “what” and “how” – I’m looking forward to listening to the podcast.

    In re-entering the fray I pondered Rose Figure 3 the most. Seeing the caseload go up and the nursing hours go down, I wondered if there was a tipping point, where staff “got better at getting better”? I wondered how much of this was related to systems improvements and how much was simply due to increases in surgical volume – that is, the more we do the better we get. Not only are high volume surgeons more efficient, but they have better surgical outcomes – could this be one of the factors related to decreased mortality? Apologies for giving the surgeons all the glory – the same may be true for “high volume teams”! Looking at Figure 3, I also asked – can the improvement be sustained, or is there a plateau that could be maintained with fewer resources ongoing?

    As I read through others’ reflections on “what is a debrief?” I began to think more about my workplace – in this case about our own systems. There are so many ways we invite feedback in my clinical (O&G) patch: clinician-led clinical audits, end of rotation surveys and even verbal invitations after morning handover. This conversation has made me realise these are all forms of “debrief” that would benefit from the same systematic follow up and resources (anyone got a cloning machine?) as the Rose paper described. While we make all efforts to act on feedback and circle back to those who provided it, there is no unified system for collating all of these informal feedback sources. Is the answer to formalise? Should every bit of feedback (clinical debriefing/sim debriefing/checklist data/audits) be entered into a centralised and monitored program, if one had the resources? Or would that just create a giant pool of data so difficult to interpret that it lost its meaning? Please help – my brain is hurting – Komal – any answers from your patch?

    • Ben Symon Post author

      Haha yes I’m currently tickled pink after worrying this month was turning out to be a dud! Thanks for jumping back in Sarah! Komal, the ball’s in your court :p
