Journal Club August 2020 – Evidence Based In Situ Sim Curriculum


Introduction : 

Simulcast Journal Club is a monthly series that aims to encourage simulation educators to explore and learn from publications on Healthcare Simulation Education.  Each month we publish a case and link a paper with associated questions for discussion.  Inspired by the ALiEM MEdIC series, we moderate and summarise the discussion at the end of the month, including exploring the opinions of experts from the field.

The journal club relies heavily on your participation and comments, and while it can be confronting to post your opinions on an article online, we hope we can generate enough of a sense of “online psychological safety” to empower you to post!  Your thoughts are highly valued and appreciated, however in-depth they are and whatever your level of experience.  We look forward to hearing from you.

The Article :

Leung, J.S., Brar, M., Eltorki, M. et al. Development of an in situ simulation-based continuing professional development curriculum in pediatric emergency medicine. Adv Simul 5, 12 (2020). https://doi.org/10.1186/s41077-020-00129-x

The Case Study : 

Alejandra frowned in the consultant meeting as they yet again debated the pros and cons of simulations during consultant education.  The discussion was a mix of strong resistance, excuses and intermittent spurts of enthusiasm from a few acolytes.  She could tell from the way things were going that the proposal would fizzle out again, as it had in the years before that.

Despite her frustration, Alejandra knew this was partly her own doing.  In a burst of sim enthusiasm earlier in her consultant career, she had created what she believed to be a fantastic sim designed to challenge even the brightest of her colleagues.  Her simulation, the “Resuscitative hysterotomy of a mother pregnant with a 23/40 premature infant with harlequin syndrome and an abusive same-sex partner during a swine flu outbreak”, had seemed the perfect physical, conceptual and social challenge for a group of 12 consultants and 3 nurses, but the fallout during the debrief had taught her a lot about identity threat at the top of the food chain.

She had realised with time that she needed a different approach to psych safety for this group, but she’d also realised that having a robust program didn’t mean simulating the ultimate list of fascinomas.  She needed a curriculum they actually valued.  But she didn’t know where to begin.

Discussion : 

This month we look at curriculum development, with a very in-depth process from Leung et al in Advances in Simulation describing their approach to building a robust curricular framework for a simulation program for senior medical officers in paediatric emergencies.

As you read the article, we hope it provides the opportunity to reflect on how you approach your own local curriculum development, and whether there are lessons to be learned from the paper at hand.

Our expert this month will be the primary author himself, James Leung.  We look forward to your comments!


About Ben Symon

Ben is a Paediatric Emergency Physician at The Prince Charles Hospital in Brisbane and a Simulation Educator at Queensland Children's Hospital.

26 thoughts on “Journal Club August 2020 – Evidence Based In Situ Sim Curriculum”

  • James Leung

    Hi everyone! Thank you Simulcast for featuring our article! I am a long time follower, and have so much respect for your work! I would love to learn what people think about the curriculum: questions about the process, potential applications in your field of practice, potential flaws and drawbacks, and of course, how we can advance and improve the process in future iterations!

  • Sarah Janssens

    A short post to start:

    I love this article and it appeals immensely to my desire to have everything appear in a tabulated format! The methodical approach to curriculum planning was inspiring; however, it made me wonder how different the final curriculum list would have been had the educators just gone with their gut instinct?

    I’m definitely sharing with our educator group and hope to re-invigorate Mater Curry Club and make another post later!

    • Benjamin Symon

      G’day Sarah and James, it’s lovely to hear from both of you and James thanks for being our expert / author this month!

      I love that first devil’s advocate question from Sarah. I’m wondering, James, what your thoughts are about the true value of this process that you found in your development and use of the program?

      • James Leung

        That is a great point Sarah!

        To answer your question truthfully, I don’t know how different the curriculum would have been if I had designed it based on gut instinct.
        The reason for this is, quite simply, that I was overwhelmed with the sheer number of available topics to cover and unable to develop a curriculum on gut alone. In my own frame as a PEM physician we had over 94 clinical presentations/procedures “to cover”, and I remember sitting down in my office, asking myself, “How do I choose?! How do I determine what is more important for myself, compared to my peers with more/less experience than me? How do I know what I pick is representative of what the department as a whole needs? How do I know that I can exclude an obscure, but potentially important, topic?”

        In our case specifically, our systematic prioritization approach resulted in:
        – Covering topics that I thought didn’t need to be covered because they were “cookie-cutter” (i.e. meningitis + encephalitis, severe asthma, sepsis) for PEM physicians
        – Removing topics that I thought were important but ultimately a bit obscure (i.e. thyroid disorders or complications of teen pregnancy, congestive heart failure, biological/chemical/radiation exposure)
        – Highlighting the obscure topics that people thought were helpful (i.e. DIC, adrenal disorders)
        – Highlighting and reaffirming core topics that I felt and knew would be important to include (i.e. toxicology, cardiac arrest, respiratory failure, congenital heart disease)

        To me, addressing this sense of being overwhelmed is itself a core value of our systematic prioritization approach, as it takes much of the onus off the curriculum planner and distributes that responsibility amongst the learners themselves, the system they operate in, and to some degree the patients affected by our learning. Distributing planning amongst our learner group also brings the additional benefit of increased participant buy-in. Providing a voice to the learners on the topics to cover, while respecting system and organization needs, empowers staff to participate and learn, which is especially important since most CME activities are voluntary (rather than a requisite activity in our training).

        Ultimately, as I reflect on your points, I suspect there would be a number of common elements between gut instinct and our systematic prioritization approach, depending on the person designing the curriculum. However, I believe our systematic approach is much less vulnerable to personal/group bias, more replicable (gut instinct may change over time, and is heavily shaped by recent case exposure as interpreted through the lens of a few individuals), and more useful in determining which “grey zone” cases to cover.

  • Sarah Janssens

    Thanks James – this is so interesting. Ninety-four clinical presentations would be overwhelming. Obstetrics is so different – the number of “emergencies” we have can be counted on your fingers (lucky for us, as we’re quite simple people, us O&Gs!). I’m inspired to try a similar approach to yours as we revise our curriculum – from what you say, I am likely to be a little surprised and will learn something new!

    • James Leung

      Thank you for looking into the approach!

      I think each speciality and field has their own unique challenges! For sure your O&G work is critical, and I imagine the occurrences of your emergencies are much more frequent than in PEM!

      I would love to find ways to incorporate patient needs and feedback more into our CME activities in our future curricula!

  • Victoria Brazil

    Hey thanks Ben, James and team.
    A massive amount of very detailed work – lots of interesting aspects to discuss!
    For me – the strength of the simulation design process described in the paper was actually having a prioritisation process, when there is usually none. Often simulation program design (including some of my own) is dictated by ‘what scenarios do we have?’
    Of course the ‘gut instinct’ of experienced clinician simulationists is not a bad guide – as Sarah points out – the outcomes of the rigorous process described in the paper would be familiar to many.
    But to extend the conversation, and in view of this being a completely novel process – it’s worth considering the strengths and weaknesses of the prioritisation. The structure by ‘clinical presentation’ and ‘critical procedures’ appears at odds with the more comprehensive view of curricular outcomes such as that presented (and discussed earlier in the article) in the CanMEDS roles? I guess it may be implicit, but I don’t see scenarios designed specifically for ‘challenging communication with patients’ or ‘supervising residents’ or ‘ethical decision-making’ or even the more familiar teamwork behaviour topics. This leads into a more philosophical question about the expected outcomes of continuing professional development programs. I just did a podcast with Louise Allen, a PhD candidate from Monash, who has taken a deep dive into this issue in a scoping review, and whose subsequent empirical work illustrates social learning theory as a key contributor.
    I really like the idea of a prioritising matrix, and I am going to think more about how to use such a strategy. Agree it moderates the ‘what learners like’ drivers of educational activity.
    Again – to provoke discussion – I’m not entirely sure about some of the elements in this matrix. For example, I’m not sure how ‘benefits to patients if practiced’ was established? It would appear from the paper that there was no healthcare consumer engagement in this process, or any advice from those who might be more expert in calculating return on investment (ROI)? I know we all think surgical airways are critical for patient outcomes, but I suspect they don’t match up to handwashing practice as an intervention for patient morbidity and mortality….??
    I’m also not so sure about the “educational value of simulating the case for learning (balanced with frequency of encountering the case in real clinical practice, in which case there would be less educational need)”. This appears at odds with Safety II approaches and ‘learning from success’, where reflection on the ‘mundane’ gives the greatest insights into better, safer care delivery?

    But perhaps most fundamentally – this paper highlights the uneasy relationship between medical educational curricula and the ‘QI lens’ – improving the quality of patient care. Perhaps more tangibly – I would feel uneasy if the content of my interprofessional simulation programme was driven so overtly by the curricular needs of the doctors? Obviously that’s not the intent and probably not even the practice in the institution described. But I would worry this would send a powerful, albeit unintended message about what the simulation program is for???
    Push back on any or all of that please !!

    vb

    • Ben Symon

      G’day Vic, thanks so much for posting such a considered critique. I’m mindful a bit that the energy this month has been influenced by James’ kind attendance and availability for the discussion, and I’m a bit worried we’re heading into an ‘Ask James a question about his paper and he has to answer it’ type pattern, so I’m wondering if I could prompt other readers to comment on these thoughts so far.

      I notice in particular that Vic, you mention the tension between a QI-focused simulation program and a Senior Medical Officer curriculum. We’ve found this in our own program, with in situ sims happening during registrar education. There was tension between taking a QI approach (incorporating full existing teams in a range of both exciting and non-exciting cases to identify opportunities for improvement) and frustration from some Registrars who felt their opportunity for self-improvement and a focused, educational curriculum had been lessened by the QI focus. Similarly, when we ran a febrile neutropenia simulation that focused more on nursing-specific skill sets in the emergency setting, there was some medical dissatisfaction once the focus shifted to other disciplines, to the point where it made me realise just how much privilege we had as medical staff over a supposedly multidisciplinary discussion.

      To me, this still feels like a really worthy endeavour at mapping out a pretty robust curriculum for senior medical officers while obtaining a social contract at the same time: if we design this together, maybe you’ll be more likely to come, participate and learn.

      I think the curriculum as it currently stands must have been a huge undertaking to get to this point, and I applaud the team for their efforts. But I would love to see the curricular needs of other healthcare professionals similarly mapped to inform a larger emergency simulation program (as opposed to a curriculum for one subset).

      So I might put this to the group then : how have you seen other healthcare professionals approach their simulation curricula? And how have you integrated those curricular needs into an actual program?

      • James Leung

        Hi Victoria!
        Great to hear from you!
        I have some thoughts about your very awesome, respectful and valid critique. In the interest of Ben’s point, maybe I’ll wait a few days for others to weigh in first =)

        • Benjamin Symon

          Hey James!
          I hadn’t meant to stifle you if you had thoughts, and it sounds like similar themes are coming in from Susan as well. Would love to hear your thoughts?
          Ben

          • James Leung

            Haha of course – I totally understand Ben!

            Victoria, I see your point, and agree that the curriculum is biased more towards the traditional “resuscitation-focused” simulation. There are definitely other non-resuscitation-focused clinical presentations of equal importance that have been omitted. To be honest, we focused more on medical resuscitations for this curriculum, which may be why the non-resuscitation topics did not show up. My concern is that if non-resuscitation presentations (i.e. critical communication moments) were put on a list next to the resuscitation presentations, the latter, more adrenaline-focused presentations would be systematically favoured. Of course, we don’t know because we didn’t try. We elected instead to focus on resuscitations because, specifically for PEM physicians, this was a big CPD desire in our group. In the interval period, I’m hoping to review with our QI lead some of the key learning points from our QI/PS rounds – if there are repeated issues (i.e. communication), my hope is that we can author cases that address these issues as learning objectives (rather than design around the clinical presentations currently set in our curriculum).

            In regards to the matrix composition itself, I definitely think it is an iterative process, and there is room to improve the inputs. I really like the idea of engaging patients (or in our case parents) directly for their input. We actually thought of doing the same with our parent council committee, but due to logistical issues we weren’t able to get hold of the committee very easily and had to abandon plans to do so. My hope is that in future versions we can engage this critical group. It would be fascinating to see how these outcomes change our prioritization in the future.

            Ben, your point about tension between QI and learning is something we’ve experienced as a group too. For us, we’ve focused on translational outcomes of simulation. To that end, we’ve conceptually placed learner and team education on a spectrum of QI, in the sense that better education leads to better patient outcomes (T1-T2 outcomes). We devote part of our debrief to QI processes, and are transparent with the teams that our sim committee functions to communicate latent safety threats and QI/policy opportunities to leadership (it is convenient for us that our RN education and QI leads are the same person, and this RN is fully committed to our simulations and regularly attends). All that being said, I do find that buy-in is better when we focus more on participant learning, rather than simply QI processes, and would love to hear others’ thoughts on how to balance these processes, especially in an in situ setting.

            McGaghie WC, Draycott TJ, Dunn WF, Lopez CM, Stefanidis D. Evaluating the impact of simulation on translational patient outcomes. Simulation in Healthcare. 2011;6(7 SUPPL.):1–3.

  • Susan Eller

    Hello everyone.

    I appreciate that there has been some discussion on this thread regarding engaging interprofessional participants, and as a nurse I feel compelled to comment further. I think the authors did a thorough job describing their use of Kern’s model, and the steps they took to develop CPD for physicians, and it sounds like a great exercise. What it does not sound like is true interprofessional education.

    The phrase in the abstract was concerning to me: “We hypothesize that we could generate an interprofessional CPD simulation curriculum for practicing pediatric emergency medicine (PEM) physicians …”. My initial reaction was discordance, as it sounded like the authors were developing an IPE scenario FOR the physicians – and not the other groups. This was reinforced by the robust description of the needs assessment, and how they surveyed the physician group. The description of soliciting feedback from hospital leadership also seemed targeted to physician needs and potential risk mitigation. There seemed to be a very deliberate focus on clinical presentations and critical procedures. Although communication and teamwork were mentioned as a benefit of simulation, that part was not a focus in the way I would expect for interprofessional scenarios. There was no description of how or if they solicited non-physician learners’ needs or perceptions. They mentioned that these were interprofessional simulations, but did not explore or expand on any outcomes beyond the physicians.

    Later in the abstract the authors wrote “The primary objective of this paper is to describe our methods for the development of a SBME curriculum for CPD of practicing PEM physicians.” This statement does seem accurate, and perhaps if they had not written the interprofessional parts, I would have better appreciated the rigor of their instructional design.

  • Susan Eller

    I pushed send precipitously, as my comment was pretty clear in the advocacy of my position, but did not invite any conversation or response. I am sorry about that. While I was challenged by some of the wording choices in the article, I am also curious as to why those particular choices were made and what other considerations existed for the non-physician participants.

    • James Leung

      No worries Susan!
      I love your comments and perspectives. I honestly feel that there is room to improve the interprofessional aspect of our curriculum, and apologize if the wording was a bit off-putting for you.

      I would say that although our curriculum is centered around PEM physician objectives (and labelled as such), it is still ultimately based on clinical presentations. To me, clinical presentations (i.e. shortness of breath, chest pain, altered level of consciousness) are blind to professions, and we all work together to manage the patients/clients who present with these chief concerns. Similarly for resuscitations, we all work together and play important roles to ensure the shared clinical procedure is successful. For example, with chest tube insertion, there is the MD role of inserting the chest tube, the RN role of setting up the chest seal/suction device, and the RT role of appropriate ventilation support and airway management. We used objectives from PEM physicians as our anchor because, in our current healthcare system set-up, the care that can be provided is still largely established by physician scope of practice (i.e. what can the PEM doc do before we call the surgeon for a blunt abdo trauma). On a pragmatic level, there were not as many objectives/training available for PED RNs and PED RTs for us to anchor our prioritization process on. The ones we had (ENPC) overlapped with what was in the PEM objectives.

      In terms of polling leadership, our intent was actually not risk mitigation from a legal standpoint. Rather, the intent was to take a healthcare system perspective. In our Canadian healthcare system, the PED fills a unique role as both an ED with primary presentations and a referral centre for patients seen in other general EDs who require transfer to a paediatric care centre for ongoing care. In this sense, the PED has to be capable of managing both major functions within this system. By including leadership opinions (our leadership group is very close to the PED group and well respected), we hoped that they could offer clarity on what we as a PED really needed to know to function in the broader healthcare system (i.e. to prevent us from going down the rabbit hole of managing really obscure, rare metabolic conditions, when really, what the healthcare system needs is a PED that can manage sepsis really, really well).

      Ultimately, I still believe our curriculum process incorporates interprofessionalism in its derivation (i.e. groups of professions and patients have input in determining topics). It ultimately determines the simulation topics, but not the simulation itself. To make it truly interprofessional, where we can “learn from, about and with one another”, I believe the challenge still falls to the simulation committee to design simulations with objectives that meet this standard. In our program, we have addressed this by having a multi-d panel of experts review each authored case. There are also cases primarily authored by RNs to have more of an RN focus (i.e. chest tube set-up/troubleshooting), and likewise with pharmacists (i.e. medications for hyperkalemia). We also debrief as a team with MD and RN education leads present. If there are profession-specific concerns, we often find ourselves breaking out into smaller groups after the debrief is wrapped up. Finally, our SimBITS newsletter is created as a joint communication from all groups, and includes MD, RN and pharmacy technical/policy/procedural data so we can all learn together.

  • ben lawton

    I think it is telling that the authors’ lit review found no examples of anyone doing this before, so we really need a place to start the conversation. A lot of the conversations I have about what should go in curricula are, shall we say, not very scientific.

    This is a really nice way to start a curriculum development journey. The fact that James has used such a clear structure to develop his team’s curriculum makes it much easier to frame the conversation regarding the improvements that have been brought up here – most obviously the interprofessional aspect. Is that a flaw with the process, or can it be addressed by repeating the same methods but changing the stakeholder groups consulted?

    I wondered if the authors had considered using a validated tool for the learning needs analysis (e.g. Hennessy-Hicks – https://www.who.int/workforcealliance/knowledge/toolkit/19/en/ ). I also wondered about the choice of prioritization matrix. The paper states “we adapted an available prioritization matrix utilized by members in other local education projects”, which makes it sound like a pragmatic decision; I couldn’t tell if others had been considered in detail. Presumably the design of this prioritization matrix could heavily influence the weighting given to different stakeholders’ input? I must admit I got a bit lost on the maths in table 1 and couldn’t figure out how scores of 5, 5, 5, 5 and 4.333 gave a total item score of 2708. Did I miss that bit?
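
    (One possible reading – and this is purely my own guess rather than anything stated in the paper – is that the total item score is simply the product of the per-criterion scores, since 5 × 5 × 5 × 5 × 4.333 ≈ 2708. A minimal sketch of that kind of multiplicative scoring, using made-up criterion names rather than the authors’ actual matrix columns:)

        # Illustrative only: a multiplicative prioritization score.
        # Criterion names below are hypothetical, not the paper's actual columns.
        from statistics import mean

        def item_score(ratings_by_criterion):
            """Multiply the mean rating for each criterion into a single item score."""
            total = 1.0
            for ratings in ratings_by_criterion.values():
                total *= mean(ratings)
            return total

        example = {
            "frequency_of_presentation": [5, 5, 5],
            "risk_if_unprepared": [5, 5, 5],
            "benefit_to_patients": [5, 5, 5],
            "educational_value": [5, 5, 5],
            "perceived_learning_need": [5, 4, 4],  # mean = 4.333
        }
        print(round(item_score(example)))  # -> 2708, the sort of figure in question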

    I often wonder if there is a structural problem with starting learning needs analyses with sim as the pre-defined modality for intervention – I suspect we are guilty of this in our service. Much as I would like to think sim is the answer to everything, it is not always the most cost-effective way of doing things, and if we truly want to know what the learning needs of a group are, is it reasonable to limit that to learning needs which are addressable by sim? I wonder if the process could have been broader, with the same method producing a list of learning objectives of which those amenable to sim were a subset, leaving a wish list for other departmental educational activities to address.

    I really like the concept of separating the procedures from the clinical presentations. Knowledge and skills probably decay at different rates, so regular practice of recurrent skills is valuable, though it gets boring if you do the same drill all the time. Placing that drill in the context of a broader curriculum of clinical presentations is, I think, a clever way of exploring more non-recurrent skills while regularly practicing the high-yield recurrent ones.

    I also think there is something illustrative about the quality of this discussion. The outcome in the paper is that about 18 people from one professional silo liked the intervention – which is pretty weak. The fact that the authors have used such a clear chain of logic to explain how they got to where they did allows really valuable qualitative insights such as those above, which to me have much greater value than the official outcome measure.

    Basically, I think this is an excellent paper that has taken a gap in published knowledge and filled it with something robust and clear enough to provide a solid foundation for ongoing development in this space. It is certainly a paper I will come back to when designing curricula in the future.

    • James Leung

      Hey Ben!

      Thanks for your comments! It really means a lot that we can contribute to the education art and science in our tiny way.

      It was shocking that there was no other literature that we could identify on the topic. I think it speaks to the challenge and the novel application of CPD education in general!

      I hadn’t heard of the Hennessy-Hicks tool, but looking at it now, it looks fantastic! I wonder if elements of this process could be merged with our prioritization approach. It’s fascinating, because they also use a matrix of their own, a modified Eisenhower priority matrix, in their section 6 (analysis). To your point about the matrix we used, you are totally correct that it was a pragmatic decision. The choice was quite contentious: which matrix to use, the math behind it, and the paywalls we had to get through to access some of them. It was fascinating, because I’m not certain that the business world (the origin of many of these matrices) is as open, rigorous or transparent about their derivation as the scientific world. That being said, honestly, I think there is room to consider other matrix compositions, and potentially compare the curricula they generate.

      I also totally echo your concerns that, as simulationists, we bias ourselves towards the method before the objective. While Kern’s method allows this to happen and lets us start anywhere on the cycle, I think there are definitely topics that lend themselves to other methods with equal or similar educational and translational effectiveness! The other way we thought of doing the process was to simply generate a general CPD curriculum using the same matrix process and then, as an expert group of simulationists, look at which of the “top” topics could make effective simulation exercises. I do wonder if there is an effect on the responses if people know we are going to teach/review a topic via simulation, or “in general”.

  • Jane Cichero

    After a year of saying to myself I should find time to participate in this Journal Club I have finally taken the plunge. I am green to this and have no skills in critical analysis so this is now my CPD & I will attempt to offer some of my thoughts as a novice.

    Congratulations to the authors on this work and the drive to develop a more rigorous approach to curriculum development for CPD for PEMs. As a nurse educator, developing curriculum has always been a given, but as I have moved into the world of simulation over the last 15-20 years, rigorous curriculum development for our simulation programs has evolved.

    In our nurse education world in ED, much of our curriculum content is driven by what we as “experts” see (gut instinct, as Sarah suggests) as the learning needs for our nurses to progress professionally, e.g. from beginning ED RN to resus, triage & Clinical Nurse Unit Manager – the skills and knowledge needed to get to this point along our nursing professional pathway. As such, we have designed well-structured postgraduate CPD curricula for some time.

    Targeted education for targeted learner groups is essential, and I applaud the methods used in this study to achieve this for PEMs. The question I have, though, is: when we care for children as a team, doesn’t the learning need change?

    I suggest this is where Sim Zones need to be utilised to support curriculum development and, as Ben Lawton mentioned, to determine whether in situ team simulation is the most effective/cost-effective way to achieve the goal. Perhaps Sim Zone 2 would be a better starting point for curricula to meet some of the identified needs of the PEMs, and then we could move to Zone 3 curricula to meet the learning needs of the interprofessional team – a different learning needs survey.

    I agree with advance notification of the scenario for participants, to encourage learning prior to the simulation and hopefully reduce performance anxiety. I question the usefulness of only performing the scenario once with one team, when in reality it is rare that you will manage the same scenario with the same team more than once in paediatrics. A single exposure to rarely practised skills or presentations will not be retained, and we could argue will not improve patient care that much. This is however a great starting point and offers scope for how we could repeat high-yield scenarios over & over again.

    I love the idea of SimBITS – sharing the learnings – something we also do.

    I wonder if bringing it back to skills versus non-technical skills when surveying the identified learning needs may help build an even more substantial curriculum, whereby different Sim Zone strategies are used to meet not only the PEM learning needs but also other team members’ learning needs – thereby making it truly interprofessional.

    • James Leung

      Hi Jane!

      I was in the same boat until this journal club lol!

      As an MD, I have always been envious of the amazing CPD process of RNs! I think your concept of zones is definitely intriguing, considering the “bigger picture of learning”. Our RN educator has been doing follow-up sessions based on learning points highlighted in the simulation (i.e. in our recent sepsis case, the topic of “epi-spritizers” came up, and a lot of one-on-one follow-up RN education has been done), but the idea of a more formalized process sounds great.

      I agree that doing a rare case once with a single team is questionable in its educational effectiveness. My hope is to study this a bit more, but ideally, regular review will help. In addition, as Victoria and others have mentioned, I hope (on a deeper meta level) that establishing this simulation curriculum that we share, own and regularly reference will help establish our social community of practice, where we all know and help each other learn and keep our knowledge up on a regular basis.

      The concept of zones could also map out the interprofessional CPD process in a different light!

    • Benjamin Symon

      Hi Jane!
      I just wanted to thank you for coming along after we met such a long time ago. I love your comments and have little to add to James’ response, but I really appreciate you joining us and hope we see you again!
      Ben

  • Lon Setnik, MD FACEP

    Ben, I’m grateful for the choice of this article, and excited by the discussion. I have a few questions for James about his choices and the bigger picture.

    My first question hinges on the decision to inform the staff of the general topic to be covered. I’m curious if this was something the PEM physicians asked for, possibly as grounds for participation and engagement, or whether this was a decision the PEMSOC came to? I can see both sides of this decision: it can certainly reduce anxiety, and simulation can then be used as a flipped classroom to solidify learning by the PEM. However, I’m concerned that an alternate view could be that the PEMs received special information so they could appear prepared. In the in situ SBLE, where the team is so interdependent, others were not similarly advantaged, if I read the article correctly. Did this create any hurt feelings in the department among other team members? What were the benefits or downsides of this decision?

    I’m also curious about your ranking system. I view the 29 topics and 29 procedures as non-linear in their stress and anxiety levels, yet the scale applied to them was linear. For example, as an Emergency Physician, facing a pediatric airway CICO situation would be exponentially worse than a pediatric pneumothorax, although I might rank the CICO as 5 and the chest tube as 4 on the 1-5 scale. Had you considered or tested other ranking systems? Would this have caused your survey matrix results to be different?
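
    (To make that concrete – and this is just a toy illustration of my own, not anything from the paper – an exponential re-mapping of the same 1-5 ratings would widen the gap between those two items considerably:)

        # Toy illustration: linear 1-5 ratings versus a hypothetical exponential re-mapping.
        # Topic names and the 2**rating transform are my own, not from the paper.
        ratings = {"pediatric CICO airway": 5, "pediatric pneumothorax": 4}

        linear = dict(ratings)                                 # CICO weighs 1.25x the chest tube
        exponential = {t: 2 ** r for t, r in ratings.items()}  # CICO weighs 2x the chest tube (32 vs 16)

        print(linear)
        print(exponential)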

    Lastly, I think this is a question for Ben and anyone else on the thread: you demonstrate the enormous commitment an organization must make to SBME if it is going to be an educational modality used to advance groups of practicing clinicians. This is, in my understanding, rarely done. If I read the article correctly, an incredible amount of time was dedicated to this effort over only 2 years and essentially one beta cycle, focused on a single group of 20 physicians. I suspect the intervention had many measurable and non-measurable patient safety benefits for the patients in your department. What would it take to apply this type of training, with Kern cycles of curriculum development, to people and teams with measured outcomes of any type across a whole hospital, health system, or country? It would take an army of clinician educators, data analysts, sim ops, etc. I am a single physician simulationist for a whole health system: one 200-bed hospital that is a level 2 trauma center, with 20+ affiliated outpatient practices. I have 3.5 incredible nurses and one sim ops person for all the work we do – how would you think about applying this type of thinking to my scenario? And none of the community hospitals around us have any simulationists helping their organizations learn. As I did my own form of needs assessment, without your rigor, I got stuck on how my small team could possibly answer the request (along with ATLS, ACLS, PALS, procedural sedation, and all the organization’s “required” trainings). How do you think simulation can realistically scale to be the high-value modality our practicing post-graduate colleagues, teams, and their patients deserve?

    Thanks for all your energy and dedication to advancing our field.

    Lon

    Lon Setnik, MD FACEP
    Medical Director, Forrest D. McKerley Simulation and Education Center
    Immediate Past-President, Concord Hospital Medical Staff
    Concord Hospital Emergency Medical Associates
    Concord, NH, USA
    Adjunct Faculty, Center for Medical Simulation, Boston, MA

    • James Leung

      Hey Lon!

      Thanks for the thoughtful comment!

      Our decision to let the staff know the general topic to be covered was one that we came to after +++ discussion in our group. The context for this decision stemmed, as you allude to, from the anxiety in the PEM physician group. Before we started sharing the topic, the MDs in our group would not sleep the night before their simulation, because they had to anticipate any topic from all of PEM (which is massively broad) and perform as a “leader”, with much workplace rapport to lose. It was definitely not psychologically safe, and it got to a point where buy-in from the MD group was so terrible that they dreaded simulation and did not want to participate. Ultimately, since we switched to sharing the topic in advance, it has definitely reversed our situation 180 degrees. We have realized that, at the end of the day, we are aiming for translational and educational outcomes, and if the MD knows about the topic in advance, it can actually augment these outcomes. We don’t tell them everything that will happen in the scenario (so there are some curveballs), but feedback has been much more positive in our group with this approach, with little downside. In regards to other professions (RN), we currently tell them the topic in advance if they are anxious and prefer to read ahead, but the majority seem not to want to know. I wonder if there is something different in the culture and the role (not being the somewhat isolated team leader who is on a pedestal to perform well).

      In regards to the matrix and the inputs (columns) we used in this version of the curriculum, we did not include stress as a consideration. It’s an interesting idea, but I wonder, since stress is such a personally defined thing, whether it would create much of an effect on priority. I can see some people who are stressed about a certain clinical presentation (i.e. CICO peds) embracing the stress and ranking the priority higher, but conversely other people being avoidant and not ranking the clinical presentation highly. I would say it is a valuable input, and a consideration for psychological safety, if buy-in from the faculty group in general depends on stress.

      Finally, I would say that in some ways I’m hoping our simulation curriculum decreases the work to be done. There certainly is some investment of resources up front, but preplanning a curriculum and knowing the topics in advance makes it easier, for me at least, to do the work (i.e. I can hole up and write 3-4 simulation cases in a row, and divide out the cases amongst the simulation team to author and review en bloc). It also makes scheduling much easier – especially for us EM physicians who are on a schedule.

      In terms of person-power, it sounds as if we are in similar situations. If it helps, I am the main simulationist for our division in our tertiary care PED (~55,000-65,000 patients per year), and am working full time clinically. I have the support of one simulation operations specialist who also services the rest of the children’s hospital (General Pediatrics, PICU), a team of simulation-trained RNs and MDs (3x), and an administrative assistant. We also do outreach simulation activities with 7-10 hospitals in our catchment area (albeit these hospitals do not follow our curriculum…but they do use the same cases if they deem them appropriate for their learning needs). The data collection process was not super difficult because it was primarily by email (and therefore asynchronous), and with a few reminder emails took about 1 month to complete. Our simulation group had a few longer meetings, but mainly met for 1-2 hours every month over the 3-4 months it took to plan the curriculum. Our simulation curriculum mapping process was also done over a group meeting with a few refinements. With the size of our team (4-5 members), each member is assigned 2-3 cases to author primarily, and the cases are edited in groups. Our admin assistant collects the standardized feedback forms and we review the curriculum and simulations in a monthly meeting. The challenge is facilitating the simulations, and regularly finding someone who has adequate debriefing training to manage the simulation, which we often find ourselves scheduling on spec. We are fortunate that our simulation operations specialist (an RT) is also an expert debriefer! Definitely happy to chat if you have more questions.

    • Ben Symon

      G’day Lon,
      Thank you so much for joining us and contributing such a thoughtful response with questions for James and the rest of us!
      I would have to say that I’m a big fan of informing the team ahead of time of the condition we are simulating (unless diagnostic dilemma is the thing we want to simulate). I think, particularly for in situ or translational simulation, it allows us to focus the team on rehearsing getting better at treating a specific condition, and I don’t think simulation (particularly mannequin-based simulation) is actually that great at emulating the granular detail needed to effectively work through complex differential diagnoses in a lot of situations.

      While I agree with all of the benefits you listed for pre-notification of simulation topics, I was interested in your question about whether SMOs receive the diagnosis as a specific group. To me this highlights that sometimes there is a higher level of avoidance from senior medical officers towards participating in simulation, and that in deference to this, or to mitigate their perceived heightened risk [of social risk-taking in pursuit of learning], we sometimes make compromises from our usual standards in exchange for participation.

      Am I the only one who’s seen this?

      As far as your last question goes, regarding what we can deliver in a service with a seemingly unquenchable need, I guess I would argue that it is contributions such as James’ and those of other scholars doing this work that enable us to critique, add to and adapt what they share, allowing educators at the coal face to keep delivering the best education/simulation they can with the backing of literature and quality thought. Certainly our sim service has begun to focus more on the creation of content that others can deliver, and online resources that are easier to share.

      Thank you so much for your thoughts, and I wonder if you had your own answers to those questions?

      • Lon Setnik, MD FACEP

        My physician colleagues had asked to know the topics ahead of time, and I pushed back against it. My thinking was that we don’t normally know the case situation when patients present to the Emergency Department. You likely have changed my mind. I can understand the tradeoffs involved better after reading this discussion. I am also coming to appreciate the idea that the in situ can be used to solidify knowledge increased through studying. I can even picture coordinating the cases with the nurse education curriculum which is rotated on a monthly basis. I am recognizing how this can be used to promote learning and solidify behaviors. Reducing anxiety and cognitive load would be helpful for those eventual outcomes that we all strive for.

        I am also thinking that the debrief time for in situ is so limited, there is a different, more limited set of skills required to go from needs assessment to case to debrief. I am going to attempt to use my time partially as a mentor, coach, or facilitator for others trying to do this work, as opposed to needing to do it all myself, to better meet the needs of the communities I serve. I will look forward to further conversations.

        Thank you for the mind changing, that’s the power of peer conversations.

        Lon

  • Lauri Romanzi

    “Resuscitative hysterotomy of a mother pregnant with a 23/40 premature infant with harlequin syndrome and an abusive same-sex partner during a swine flu outbreak” – hysterically funny – humor in SIM! Great stuff. So glad that I found this journal.
