Introduction:
Simulcast Journal Club is a monthly series that aims to encourage simulation educators to explore and learn from publications on Healthcare Simulation Education. Each month we publish a case and link a paper with associated questions for discussion. Inspired by the ALiEM MEdIC series, we moderate and summarise the discussion at the end of the month, including exploring the opinions of experts from the field.
The journal club relies heavily on your participation and comments, and while it can be confronting to post your opinions on an article online, we hope we can generate enough of a sense of “online psychological safety” to empower you to post! Your thoughts are highly valued and appreciated, however in-depth they are and whatever your level of experience. We look forward to hearing from you.
The Article:
The Case Study:
As Amir watched the playback video of his scenario, he became quietly frustrated.
The course orientation had stated there’d be no tricks, that this was a safe space for learning, and that he was expected to push himself to the edge of his comfort zone in pursuit of learning. As an avid lifelong learner, this was something he’d been happy to embrace. Performing well in front of his junior staff was important to him, but so was being open to feedback and becoming a better physician.
But if this was such a ‘safe space’, why had the patient’s notes containing critical background information been placed in an obscure part of the room, under a coffee cup and some patient handouts? Critical minutes had been wasted pursuing an incorrect treatment pathway for the patient’s real condition, and while the resultant debrief had generated some great discussion about shared mental models and situational awareness, he couldn’t shake the feeling that he’d somehow been set up as the patsy in the greater pursuit of transformative teamwork conversations.
This particular safe container, he felt, left a somewhat bitter aftertaste.
Discussion:
This month we get to revisit a topic we explored at the very start of simulation journal club: deception in simulation. As we chat this month, we ask our journal clubbers: how does this article change or inform your practice? What has been your experience with deception in simulation? When have you seen it used effectively, and what have you learned to avoid?
I’ve realised deception is more difficult to define than I initially thought. In some ways everything in sim is a small form of deception – but I think a lot of us recognise when there is something in there that feels like it was designed to ‘trick us’, was not part of the agreed-upon rules, and makes us feel unsettled.
Given the potential harm that can be done when using deception, I appreciate the authors giving guidance on when and how it can be done safely.
As implied by the tips provided with the ‘3 P’s’ in this article, the pre-brief and debrief will be critical! This means they should be led by someone with skill and experience. Leaving a simulation feeling it was unfair, that you were ‘tricked’, or that you have failed in front of your peers is a horrible feeling that can impact your work and your future interactions with those involved. I get the sense that it feels okay so long as it is realistic, beneficial, and fits within the ground rules established during the pre-brief.
I don’t think it compromises the simulation to alert the participants to the presence of expected difficulties: i.e. the following simulation will involve a challenging patient interaction, or equipment needing troubleshooting, or something that goes wrong. It allows the learners to feel that it was fair. It shouldn’t be a case of “will they fall for the trick”.
Alongside the ‘3 D’s’ in this article about making good decisions around using deception, I also think we should ask whether simulation is actually an appropriate medium to address the proposed theme/issue. For example, a simulation where a senior purposely gives a wrong direction to test whether the juniors ‘speak up for safety’. It puts all the pressure on the junior, has the ability to cause an identity threat, may interfere with future interactions between the confederate involved and the learner, and does not address the underlying cultural issues that result in people not speaking up (or listening down and around).
Hey Charlotte
Thanks so much for kicking off the discussion, and for drawing together so many important principles.
You referred to ‘unfair’ and ‘fairness’ a couple of times, as well as talking about ‘failing in front of your peers’.
These reminded me of Loo et al.’s paper on Rapport Management in Facilitator-Guided Simulation Debriefing Approaches … she talks about ‘face sensitivities’ and ‘sociality rights’ as critical in maintaining rapport between those involved in learning conversations around simulation.
The latter in particular focuses on the idea of what’s ‘fair’.
As you point out – some deception can be made predictable or transparent enough to be perceived as fair.
A next question – how do we judge what’s ‘fair’? How should we anticipate whether our idea of fair is the same as that of our learners/participants?
I enjoyed this article as it provides guidance for the use of deception in simulation. I love the practical tip that deception should be used sparingly and, when it is necessary, should be based on the learning objectives. Additionally, I wholeheartedly agree that it should be used with as much transparency as possible through the pre-briefing and debriefing of the learners. Finally, I fully agree with the author’s point that novice simulation facilitators should not use deception, as it may create an emotionally charged situation they may not be prepared to appropriately address.
I have to say we don’t routinely run any intentionally deceptive scenarios, so I’m wondering if others could share examples of deceptive scenarios they run that result in key learning points that could not otherwise have been achieved?
Hi Lisa
Thanks so much for dropping into the discussion.
I think your point is a good one – if we embark on using deception, we’d better be prepared to justify the ‘why’ and to manage group and individual dynamics in preparation for, or subsequent to, the sim.
Hopefully others may have examples too – but I liked many of the ones in the paper that illustrated the spectrum of ‘deception’. Arguably, any time we restrict access to information that participants might use in diagnosis or problem solving, we are being just a little deceptive? e.g. ‘ECG machine not available’ in a chest pain scenario, or ‘the consultant is caught in traffic’ as the trainees have to manage a simulated major trauma on their own.
Personally I think there are good reasons where ‘deception’ in the broad sense might be justified – but I wouldn’t deceive anyone about whether we were planning to deceive them ………
(at this point Simulcast JC fans are asking when Ben is coming back ….:-))
I, unfortunately, have not been able to access the article (feeling a little deceived by my organisation’s “comprehensive” Ovid access!!), so apologies if I am repeating content from within the article.
From the excellent comments, I would agree with Charlotte that ‘deception’ is a little hard to define, and that we are all implicitly agreeing to a certain level of deception by being participants in sim (we don’t tell participants exactly what will happen in the sim, but we (hopefully!) have an idea about the scenario direction and intended learning outcomes). I think that ‘safe container’ is broken if you reach a threshold of deception where the participant feels that ‘gotcha’ moment (which may be different for different participants?). I think this is dangerous and will negatively impact the learner for a long time. Certainly, I’ve had participants come to sim having had that ‘gotcha’ feeling in sims previously, and there is an extra effort required to make them feel safe and able to be open to learning. Is it useful to consider and openly address people’s prior experiences of simulation deception in the pre-brief?
I can share one example that highlights some deception and where it has potentially ended up with some participants having that ‘butt-of-the-joke’ feeling. We run a communication-focused course for our Foundation Year 2 doctors (FY2s). The focus of one of the scenarios is ‘graduated assertiveness’ and safely/supportively challenging a colleague who isn’t following infection control measures. The confederate shows escalating levels of disregard for infection control until challenged by the FY2. The rest of the group, watching the simulation via camera, always spot this (highlighting their lower cognitive load and bird’s-eye view), but the participant wouldn’t always spot it and would come out of the scenario thinking “not sure what that was about” until the video was reviewed or their colleagues watching pointed out the behaviour of the confederate. This was always a tricky debrief, and sometimes I felt I would get it wrong despite best efforts to normalise/share fallibility. The participant, you could sense, had that ‘got at’ feeling.
We made a change to the scenario so that if the FY2 hadn’t recognised/dealt with the inappropriate behaviour, the simulated patient would be prompted to mention to the doctor their concerns about the confederate not wearing gloves/not washing hands etc. – this would ‘force’ the FY2 to recognise the issue and demonstrate the communication skills about graduated assertiveness that we want to discuss in the debrief. This made for a much better debrief and allowed a much better conversation about compassionately challenging a colleague.
I’m reminded of Ben Symon’s recent talk about ‘simulation self-sabotage’. I think it’s worth thinking about the ‘hidden curriculum’ of how you hide things from/deceive participants: the stereotyped specialty that doesn’t give a complete handover, the IVDU simulated patient who is lying about what they ingested, or our scenario above where the confederate isn’t following the protocol. All these scenarios might have useful learning outcomes, but what is the possible hidden message we are sending with the deception used?
I think for me using some deception is OK and, to a small degree, implicit in the sim and the safe space/trust that is established. I would also echo the importance of WHY you are adding that bit of deception – is it just ‘noise’ (“let’s make it tricky for this person”) that adds cognitive load without benefit? Or a key learning outcome you want to pull out about situational awareness? All balanced against the ‘gotcha’ risk and hidden curriculum.
I have now been able to read the article (thanks VB). Having done this discussion backwards – reading the paper after and then reflecting on my above comments – is an interesting exercise! Reflecting on the example I shared, I think the changes to the scenario really minimised the ‘deception’ used, and the participants had a better experience.
On the first P in Table 3 – Place – I would echo the importance of this. I wonder if there are situations where being more explicit than the authors suggest (i.e. not just “using generic language”) might be helpful to build that essential trust/safe container. I can think of one scenario I have run previously where, if I feel the participants are fairly nervous, I am very explicit that the patient will deteriorate. I suppose this does have risks in escalating anxiety, but I hope that I explain that the scenario is about looking at how we respond, reassess, and adapt to changes in the clinical picture – I think this has worked for me. I wonder if anyone else has experience of being even more explicit (to minimise deception)?
As I’m writing this comment I’m wondering whether I do agree with my previous comment that ‘some’ deception is OK? A slightly random analogy that comes to mind is that perhaps deception should be treated like a medication with a very small therapeutic window and toxic side effects…. the smallest dose possible to achieve effect, for the least amount of time, with risk/benefit properly considered and the participants consenting to taking it?
Hi Dan
Thanks so much for your thoughts (both pre- and post-reading the paper!)
I agree .. if it feels like a ‘gotcha’ moment – it’s probably beyond that ‘toxic therapeutic window’ – and I love the analogy 🙂
You also make an insightful point about the unintended consequences of deception in terms of hidden curriculum/feeding into stereotyping and biases…. Even asking our colleagues to play a ‘bad nurse’ or ‘unhelpful anaesthetist’ risks consequences for our sim team dynamics, and risks caricatures emerging in the scenario, with unhelpful references to tea rooms and golf courses and share trading on laptops ….
So I wonder why it’s so common? I’ve heard people wanting to be ‘lighthearted’ and ‘fun’, or wanting to make it ‘challenging enough’
So how do we find a sweet spot – hopefully these guidelines help?
But interested in more examples as to how to apply in practice..
Thanks again Dan
Hi Vic,
Thanks for letting me know that this conversation is taking place! I am thrilled to see that our paper is generating this level of discussion. The first “P” – in terms of generic language – has an interesting origin. Basically, we were trying to balance the potential need for deceptive events in order to authentically recreate key patient-care situations with direct safety implications, while still finding a way to be sure that the deception was seen as “within” the overall learner expectations for what can legitimately occur within the simulation. Our solution was to make it a global practice to inform learners that deception may occur if absolutely needed, but that it would always be in service to the greater learning objectives. That way, a deception can still be unexpected in the local sense (i.e. the prebrief did not lead the learners to expect one that day – the consequence of a too-explicit opening statement) while still being located within the safety container.
Dan, I like your comparison of deception as a medication with a narrow therapeutic window. That encapsulates my view exactly. It may be needed to achieve a certain clinical/educational goal, but levels must be carefully titrated and it should not be used lightly.
Finally, to bring up a point we do not address at length in the paper, but seems very germane, I think that the nature of the relationship between facilitators and learners outside the immediate session/case is a critical thread. Most of my learners are ICU physicians and nurses that I work with frequently, and so I have the opportunity to make sure our relationship is solid, cordial, and positive before the simulation even begins. Because of that, even when I have used more overt deception, they have believed me when I disclosed the reasons behind it and have not seemed to feel singled out. By keeping its use infrequent, I also have the capacity to continually build that trust so that that needed resilience is there when needed. This, however, is not possible for larger sim centers where it is hard to really get to know individual learners. The layers of context playing into this are many and varied.
Hey Aaron
Always a big thrill when an article’s author drops in !
Great to get a little more background on some of the thoughts.
I’m glad you’ve delved further into the nuances of psychological safety …… this is a good concept around which to focus our ideas of ‘risk’ and ‘risk mitigation’ of deception.
We talk about ‘creating and maintaining’ psychological safety, but as you identify here – groups can ‘bring’ psychological safety …. or not … and this needs to be part of our judgment on the balance of risks.
It also suggests our groups might ‘take’ psych safety (or its lack) out of the sim and that deception will have impacts in the real clinical environments.
Heady stuff…..
I think the analogy of deception in Sim with that of infidelity within a relationship helps to frame some of the issues.
It is understandable that for sim providers there is a temptation to justify the large commitment of time, resources and personnel with a crescendo moment that catches all off guard. Flirting with the rare and unexpected certainly has its appeal but is it worth the longer-term collateral damage?
The alternative – a stable relationship that is predictable, reliable and based on mutual respect – may seem a little tame in comparison, but it does offer a foundation with many hidden benefits.
Another parallel can be drawn between the use of deception in Sim to probe individual responses and the use of M+M meetings to target individuals and error. If the M+M process just focuses on individual mistakes – and the solution is increased individual adherence to personal vigilance – it misses all the team and systems elements that contributed to the outcome. Similarly, the use of deception to highlight a specific objective can rob the sim of all the other team and systems elements that, when done well, create resilience to errors occurring in the first place.
Personally, I’m fortunate to work in a program that runs weekly sims with my work peers to form an ongoing longitudinal relationship. By establishing a fair playing field devoid of deliberate trickery, it allows regular top ups to the “safe container” that spill into other areas.
In a recent resus the team leading doctor was prompted by one of the nursing staff to prioritise medication orders which were listed on a whiteboard as an additional visual cue to the team. This example of interpersonal risk taking and initiative was fostered in the simulation environment by exploring a relatively simple presentation.
A stable Sim relationship may not be sexy – but when compared to a relationship tarred by infidelity and eroded trust – it is one that is more likely to meet the end goal of looking after our patients better.
I’m enjoying the various analogies in these reflections!
Thanks Warwick.
I like the idea of the fair playing field adding ‘top ups’ to the psychological safety.
Well planned deception may even dip into that pool – but it’s about having enough of that trust to ‘spend’ a little
At this point I need to hasten to add .. we are still far from perfect. One participant in our sim yesterday explained a team performance gap as being due to her ‘still learning the simulation game’… which gave me in equal measure a flash of anger and a profound disappointment
It’s constant work trying to overcome a perception of ‘performing’ rather than ‘practising’
Agree with the ‘top up’ idea – and the comments of the author about the relationship being ‘solid’ and built over time. It reminds me of the Stephen Covey book, The 7 Habits of Highly Effective People. In the book he talks about an ’emotional bank account’. Simulation psychological safety could be a bit like this?
You need to deposit (build trust) more than you withdraw (take risks) but with that ‘equity’ built you could occasionally ‘borrow’ in order to achieve more?? Maybe just another cheesy analogy!!
This topic really got my interest going. As a long time lurker on Simulcast I recalled a podcast about deception with Dan Raemer and a very short search through your archives reveals you are well ahead of the game with your October 2016 journal club! I highly recommend the summary and Dan’s entertaining podcast describing the events of his childhood summer camp deception!
As I pondered Warwick Isaacson’s infidelity analogy, I considered the range of different emotions we can have when we are deceived (not specific to infidelity!) and how they sometimes evolve over time. Being deceived can sometimes be a laugh (Ha ha, got me!), sometimes a rush of admiration (Wow! That was such a great set-up), or our reactions can lie at the opposite end of the spectrum with anger (those bastards!) or shame (I can’t believe I fell for that!). What explains such different reactions? How do we (usually) manage to make those we tease/prank feel like we’re laughing “with them”, not “at them”? This may help us understand how we might deceive (if necessary) for learning without people feeling betrayed?
Firstly Nick Argall’s comments from 2016 rang clearly here: “the damage of the betrayal does not come from lying, it comes from lying when your social contract does not explicitly permit you to lie. The first step towards psychological safety for the would-be deceiver is, therefore, to be absolutely explicit about the intention to deceive.”
Secondly, relationships (as pointed out by Aaron Calhoun) really matter! When our friends prank us, we are in on the joke together; if it’s a stranger pulling the prank, they are just being a jerk. I’m certain many of your journal club contributors will be able to explain the underlying psychological phenomenon at play here!
Overall I love this article and these discussions, as it challenges us all to think deeply and deliberately about deception. As my colleague Leah McIntosh commented, so many learners participate through the lens of waiting for the “gotcha” despite transparent prebriefing to the contrary. The scars of previous deceptions may be long lasting! This paper will be making it into my “must reads” for simulation training.
Thanks Sarah – I love how eloquently you’ve expressed those ideas, and recognised the breadth of the emotional response spectrum.
It’s like teasing … so much fun for everyone in the right context, but it can easily miss the mark if not on solid ground with an existing relationship.
That Raemer expert commentary was fantastic, and it’s wonderful to go back and read Ben’s lovely summary of the case and discussion. Check it out here for those who haven’t read it: https://simulationpodcast.com/wp-content/uploads/2016/10/SIM-Journal-Club-October-2016-Summary.pdf
Ok so here was my statement back in 2016 from Ben’s summary: “Don’t do it. Low gain, high risk and reasonable alternatives exist. Destroy trust and we lose the long game.” And yet I look at the Table 1 contents (especially the high-risk events and goals) and realise that I am still using various aspects of what the authors feel is deception, with (hopefully!) high-level mitigation and transparent justification. My learner groups in these scenarios have high levels of comfort in uncertainty, the ability to cope with shifting information and, as others have noted, the ability to form longitudinal trust. It’s a timely reminder to be cautious with that trust. The areas where I still see a role for “deception” are in professionalism, cognitive bias, error and subsequent team behaviours. Where information is hidden and revealed in stages, this is realistic in an Emergency Medicine environment and wouldn’t be seen as deception, just part of life. Like all things high risk, it all needs some pretty good mitigation, justification, understanding and consent to the rules of engagement. Or alternatively: don’t do it!
A useful article and opinion and thanks for posting and commenting.
Warning: content here might be deliberately provocative
Some additional thoughts from our group at Monash Simulation, who are now running our journal club in line with Ben and the Simulcast team – inspired by the Mater Curry Club, but the Melbourne lockdown version.
In this guideline, where is the ethical balance between the potential advantages of using deception in sim and the potential harms of allowing healthcare workers to face, for the first time in the real clinical environment, the surprise and cognitive load of discovery, uncertainty, bias, errors or unexpected team behaviours? To what extent are those ethical considerations balanced by the potential low-level harms of well-mitigated deception?
To what extent is the word deception here used (with its negative connotations) as an overreach to describe events and uncertainties that are pretty normal in the simulation world and in the parallel world of clinical reality?
What are the potential cognitive distortions introduced by a non-specific early warning of impending “deception” and a safe word/phrase? What can I trust? Who can I trust? Is there such a thing as deceptionitis?
In many ways we at Monash Sim are spoiled: the teams we learnt from have used many of the goals and high-risk events described in Table 1 (assessing a team’s ability to address incorrect orders, unexpected developments in the environment, medical errors, unexpected equipment failure) with such skill, mitigation and purity of intent that we don’t even like the idea that they are considered deception. We would love a new word/phrase not so loaded with negativity. Guidelines for the Responsible Use of Unexpected Events in Simulation?
Hey thanks Ian
Excellent thoughts that illustrate the grey between unfair trickery and the necessary fiction contract.
I liken it to the completely non-academic framework of ‘little white lies’. We can convince ourselves it’s ‘prosocial’ – the deception is for others’ benefit.
And apparently lying is kind of hard-wired! https://greatergood.berkeley.edu/article/item/whats_good_about_lying
I’d also like to turn our attention to the method and writing of this article. It’s a position/opinion piece that draws on theoretical underpinnings and delivers some tables and the catchy ‘3 P’s’ and ‘3 D’s’. And yet I’m not sure how we embed these questions or thresholds in our design and delivery process for simulation. How do we make the practices actionable in our ‘workflow’? Or do we need to? Maybe it’s just general ethical principles to ‘keep in mind’?