Introduction:
Simulcast Journal Club is a monthly series heavily inspired by the ALiEM MEdIC Series. It aims to encourage simulation educators to explore and learn from publications on Healthcare Simulation Education. Each month we will publish a case and link a paper with associated questions for discussion. We will moderate and summarise the discussion at the end of the month, including exploring the opinions of experts from the field.
“An Act of Betrayal”
The Case:
“Oh crap,” gulped Danielle as she watched Dave rush out of the sim lab and well past the debriefing room doorway. It didn’t take an anthropologist to recognise that he was on the verge of tears.
It hadn’t even seemed like a particularly difficult scenario, but it had clearly affected Dave more than she’d expected. His management of the PEA had been going fine, but he’d been thrown off balance when the confederate consultant arrived and told him the patient needed defibrillation. Usually the learners were clued in enough to speak up and stop the shock going ahead (which was kind of the whole point of the scenario; half the bloody pre-reading was on speaking up for safety), but Dave had been swept up in proceedings and 600 joules later had finally realised what was going on.
“Get started on the debrief,” Danielle told the rest of the team as she pulled off her confederate wig and hurriedly followed Dave through the doorway. “I’ll deal with this.”
So much for her safe container.
The Article:
Deception and Simulation Education: Issues, Concepts, and Commentary
Calhoun, Aaron W.; Pian-Smith, May C. M.; Truog, Robert D.; et al.
Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare. 10(3):163-169, June 2015.
Discussion:
As simulation educators, we often consider psychological safety to be a core component of our teaching paradigms. Yet at the same time, in the interests of both realism and specific learning objectives, we often incorporate an element of deception into our scenarios.
This month’s article was originally stimulated by a debate around the use of deception at the International Meeting for Simulation in Healthcare in 2014, and set off a series of responses in Simulation in Healthcare regarding the pros and cons of deception.
Please enjoy the original article, and let us know your thoughts in the comments below.
In particular:
- What’s your position on the use of deception in simulated teaching?
- Has reading this article changed your approach to scenario design? If so, how?
- How do you maintain psychological safety while incorporating deception?
References:
Deception and Simulation Education: Issues, Concepts, and Commentary
Calhoun, Aaron W.; Pian-Smith, May C. M.; Truog, Robert D.; et al.
Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare. 10(3):163-169, June 2015.
The Importance of Deception in Simulation: A Response
Calhoun, Aaron W.; Pian-Smith, May C. M.; Truog, Robert D.; et al.
Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare. 10(6):387-390, December 2015.
I think it’s important to note that there are two issues raised in the journal article – both are represented in the summary above, but one of them is missing from the questions. The questions assume that it is impossible to deceive without breaching trust; this is not the case.
The real-world challenges involved in practicing healthcare routinely involve misinformation, misdirection and miscommunication. If simulation is going to help people practice healthcare, then it must expose them to misinformation, misdirection and miscommunication. To intentionally misinform, misdirect or miscommunicate is to deceive. Therefore, deception is essential to a complete and effective simulation practice.
At the same time, psychological safety is critical, and breaches of trust (as described in the paper) are extremely difficult to forgive. If trust is broken between a participant and an instructor, this becomes a serious barrier to learning.
In other words, I agree with everything that the paper has to say, and I think it’s an excellent way of formalizing the question and the issues. At the time of writing, I have an answer that I can demonstrate, and a draft paper that starts to explain how I demonstrate it.
I put it to you that you have been deceived without feeling betrayed, on many occasions. Whenever you see a magic act, the magician lies to you about what will happen, they fabricate evidence to support the lie, and then we applaud: this is the social contract that governs a magician’s performance. (My favorite magicians are Penn and Teller, and this video is one of my favorite magic performances ever: https://www.youtube.com/watch?v=WvFJpmq-mFk )
The damage of the betrayal does not come from lying, it comes from lying when your social contract does not explicitly permit you to lie. The first step towards psychological safety for the would-be deceiver is, therefore, to be absolutely explicit about the intention to deceive. It is, of course, much more difficult to lie successfully when people are expecting you to lie. This is why we applaud magicians when they fool us, and why we are disappointed when we catch them in the act. (If you’ve watched the video, you’ll be aware of my deception when I recommended it. And yet, I doubt that you’d feel betrayed if you watched it to the end.)
The trick, therefore, is to lie in a highly effective way, in a way that reassures the person on the receiving end that you have their best interests at heart.
There are a number of factors that confound a public conversation on this topic. One is that a paper about ‘how to lie’ is about as useful as a book on magic – there’s a lot that comes down to practice and skill; the art does not yield easily to exposition; once exposed, it may lose its effectiveness.
I can think of one example that I can share without compromising my ethical standards or my interest in effective deception. It starts with a briefing. For many good reasons, this briefing note would be repeated at the beginning of every session, and it goes like this:
“During this exercise, there may be a confederate in your team, who has been requested to act in ways that are not ideal for the situation. If there is a confederate, then they will have instructions from me regarding the behaviour that I’m asking for. We will also have discussed plausible motives for that behaviour, and ways that the behaviour might be analyzed in the debriefing. They have strict instructions not to break character during or after the debriefing, and I will not confirm or deny the presence of a confederate in any particular simulation. In debriefing, please focus on the behaviours that you see, and how they might be improved. Given the possibility that you’re speaking to my confederate, please avoid drawing assumptions about someone’s character based on what you see in this session.”
Having given that warning, then (like any self-respecting magician), I would not use a confederate. If caught, I’d ask people if they changed their behaviour during debriefing, and whether those changes were desirable.
I’m really not happy with the tone of my comment above. I think that sometimes, when you try a bit too hard to impress people, it’s possible to come across in a really negative way, and I think I did that. Sorry.
I really care about this stuff, and I’m struggling to find ways of expressing it that make sense. I hope that what I wrote above makes sense. I hope also to work out how to say it without sounding like a jerk.
I recognize that this conversation is moving down the path through ‘a bit odd’ to ‘really quite weird’, but I feel that there are some serious inadequacies in what I said earlier, and I want to address them.
On the one hand, my opening comment makes a valid point, which can be summarized as “There are ways of deceiving people that are not offensive.” I’ll also acknowledge something that made it difficult for me to reply in a more constructive way: the last time I said “There are ways of progressing well beyond what most people consider safe”, some people who trusted me got hurt. So I’m a bit shy about saying “This is how you do the dangerous stuff,” because there are important parts of the message that need to be delivered in order to avoid doing harm, and I don’t feel confident about getting those messages across.
However, there has to be a middle way. The simulation business can’t just pause for 3 years while I work out how to explain myself. By attempting to dominate the conversation with something witty, I’ve failed to show respect to people who have been working hard for a really long time.
With that out of the way, how can things proceed?
Better methods for deception are part of the answer, but another important thing is to drive a conversation about where the limits are when it comes to psychological safety, and to set firm but negotiable boundaries.
Assuming that people have to choose (which they usually do), then I think the fundamental question is whether you should value the lesson, or whether you should value the relationship. I think that highly intelligent people who want others to succeed tend to place a really high value on the lesson; and I think that you tend to get much better learning in the medium and long term if you value the relationship.
As you can tell from this odd little exchange between me, myself and I, I make that observation from the point of view of experience, not from moral authority. Please don’t risk your relationships with your colleagues and your students in order to prove a point, no matter how important the point seems to be. They’re more likely to remember the betrayal than the point you were trying to make, and they won’t be listening the next time you have something you want to teach them. And when you push the boundaries, do it with people who can withstand imperfection and who consent to being part of your experiment. Informed consent (particularly consent that can be freely withdrawn) is a really useful safety mechanism.
Thanks for your thoughts as always, Nick.
There were two things that really struck me about your comments. The first is that I think your multiple replies confirm that this is a really ethically complex issue, and that negotiating a safe container for learning while also incorporating “the real-world challenges involved in practicing healthcare” involving “misinformation, misdirection and miscommunication” is indeed an art form.
Secondly, I was delighted to hear your magician example, as the exact same principle came up on a recent debriefing course I attended as a learner, where one of the participants was coached to become upset during the debrief. We had been warned beforehand that one of our participants would become upset, and that it was an important learning objective to cover. Eye contact was made directly with every candidate to ensure they were aware of ‘the plant’ in the room.
Surprisingly, when it happened, after a particularly intense oncology debrief, I had completely forgotten that someone would deliberately get upset, and took their tears as genuine. When I was informed that this was a deception, I was not initially angry, but instead quite delighted at being tricked, given that I had clearly been warned, much as with the ‘magician’s trick’ described above. Another candidate discussed ‘informing of the ruse’ as a principle used by con artists, i.e. in order for people not to feel cheated, you must inform them there is going to be a ruse.
Despite this, however, it was important to note that I did still feel a little embarrassed about being tricked, particularly as I had been warned. The sim was intense (I had been negotiating with a mother to concede to her dying daughter’s wishes to refuse transfusion), and the emotions that I had experienced brought back multiple episodes of mild grief from somewhat related real life experiences.
I did not feel cheated, but my emotional resilience was lower than it normally would be after absorbing all of that mother’s grief, and my mature defence mechanisms were not as strong as they usually would be. As such, I voiced my frustration at the debriefers for not intervening in my colleague’s grief earlier, despite now knowing that it was fake; on reflection, their interactions were entirely appropriate.
In summary:
– I agree deception is necessary for certain learning objectives
– I agree it can damage learners’ sense of psychological safety (from first-hand experience)
– I agree pre-warning is appropriate, but feel it does not negate risk
– I think deception must be used carefully, in cases where it meets the learning objectives, and that it requires appropriately sensitive pre-briefing, debriefing and, in some cases, follow-up to ensure your candidate is able to tolerate those challenges.
Ben,
Thanks. Your very personal experience is interesting; I have had similar experiences as a participant in debriefing courses. An interesting thought for research: how many sim educators have been “burnt” as participants in a sim, and what do they do differently as a result?
How many had an intense emotional response to a sim and had a powerful learning experience because of it?
I would have to vote in both boxes.
Is this what inspired us?
That’s a fascinating article Ben and team, thanks for choosing it. I was faced with a scenario written almost exactly like this one last week. I would have loved to have had this article at hand.
My thoughts regarding deception:
Short version: Don’t do it. Low gain, high risk and reasonable alternatives exist. Destroy trust and we lose the long game.
About the article:
There is so much to agree with. I was uncertain about the comparison with Milgram, in that we don’t use deception (+/- mitigation) for our own learning about simulation, but for the learning of the participant. How much of the negative reaction to Milgram was provoked by “I was used and felt all this… so you can learn about psychology”?
Hopefully our construct makes clear to the participant the clinically focussed outcome we are seeking, and therefore our motives. If we are constructing it for the purpose of simulation research, I think the learner has the right to feel used. This isn’t suggested in the article; rather, they make use of existing “deception-friendly” centres.
Therefore the longer version:
Why: The context in which I have seen and used it is either graded assertiveness on clinical matters (e.g. the consultant wants to give a dangerous drug) or professionalism/behaviour issues. Both of these I have experienced many times as a sim participant or faculty (one course x 15). Over the years we have learnt to mitigate. If you are considering it – red flags everywhere.
How:
Pre-brief (as discussed in article Table 3). “Be prepared that members of your team may not act in ways that you agree with or that you might expect from what you know about them. We have been talking about (…insert challenging behaviour), so don’t be too surprised if you deal with some in this next scenario. Is that OK by you?”
Context
Discuss professionalism issues or challenge techniques in the same session
Debrief
De-role immediately, explicitly.
“We have been talking in the session about professionalism (conflict resolution/advocacy/graded assertiveness etc.). You may have noticed that David was not the usual consultant you know and respect. He was playing a role. David, can you explain to the group the role I asked you to play…
Does this fit with people’s idea of the consultant in that scenario? If there is anyone who still holds a desire to throw the phone at him (etc.), can I ask you to let it go. He doesn’t actually use fentanyl on the job (maybe get a laugh, diffuse the tension). He’s back as the capable consultant you know.”
The impact on identity the article brought up is interesting. Of the mistakes I have seen or made regarding sim, asking someone to play a clinical role other than themselves has been a significant factor. Interestingly, some of these have been scenario writing and debriefing courses where course participants are asked to be sim participants at a lower level than themselves to suit the pre-written scenario, e.g. consultants or experienced nurses playing nursing/medical students, not as confederates but as de-skilled sim participants.
We also take a risk when our sim faculty confederate nurses play the role of “competent but not particularly assertive” nurses. Apologies for raising this side issue – I don’t see it as particularly helpful to dumb down team roles. I would be interested if anyone else has seen/felt this as a problem.
And finally…
One scenario I have seen (as a participant and run in a mitigated form for ED consultants), uses degrees of deception, unprofessional conduct and large amounts of noise to provoke genuine anger, and to look at the effect of this on cognitive bandwidth and performance in a high end but still vulnerable group. Is it fair? It’s certainly high risk. Is it good simulation? Not sure.
Everything we do relies on trust. When we challenge this we need to be very, very careful.
Like all things, the construction of the safety net is as important as the trapeze skills we are trying to learn.
As usual, I look forward to the thoughts of others.
Ian S
Thanks for your thoughts, Ian. I agree that the article and personal experience have led to much reflection on the use of deception. I agree this deception stuff usually comes up in CRM/graded assertiveness/conflict management training. I wonder how often it is necessary.
Creating a safe environment in simulation requires actions from the faculty before, during and after the simulation scenario. How the faculty use the pre-brief to set expectations and the rules of the game is one of the most important steps in simulation – and often one which both the faculty and participants neglect in the process of setting up for a session. The “rules” of the simulation need to exist for both the participants and the faculty; it is the faculty who need to strive to have these followed during the session.
The pre-brief may be immediately before the simulation, at the start of the day, at the start of a course, as a pre-course briefing, or part of the ongoing culture of a department where simulation is a regular part of practice – with regular refreshers of the rules. It is essential that the faculty consider all the risks of a simulation (and prepare for the unknown risks) in the lead-up to the session; we should consider emphasising the source of these during the pre-brief – sometimes subtly and other times overtly – depending on the perceived level of risk. Prior to SMACCForce last year we had many discussions about the risks of the tactical response simulation: we ensured a pre-brief was sent out, held a further briefing in the morning, and then ensured there was availability of faculty to provide support should a reaction occur.
In the case given above, I was concerned that the consultant went to 600J of defibrillation during the simulation before the participant “clocked” what was going on – the point of the scenario being purely to stop a shock given for PEA. The learning needs seem to have been met with the first shock, and potentially the subsequent shocks in the lead-up to 600J. The emotional response is likely to be embarrassment, and in this case it seems that the deception has been taken to an extreme. Our confederates can be trained to provide an appropriately scaled challenge and should be vigilant in gauging the emotions and responses of the participants to their behaviours. We should write our scenarios to have a realistic, non-stereotypical confederate with motivations for their behaviour and knowledge base. I don’t believe that having participants seek or notice the “simulation twist” is helpful for learning; certainly I have watched participants change their behaviours because they thought one was coming (the “when are they going to crash” anticipation), which reduces the fidelity of the sim experience.
Confederates who are perceived by participants to be playing themselves but being difficult, incompetent or otherwise have potential real-world complications – this can impair relationships and put patients at risk as the faculty and participants encounter each other for years to come. My experience has been that creating alternate personas with different names, with archetypal behaviour, has met the learning needs for communication and conflict resolution without breaching trust or putting real-world relationships at risk. Likewise, having participants lose trust in each other, if they are used in the deception, creates both short-term issues for the educational experience and problems for ongoing clinical relationships. I would prefer to use known, introduced faculty in any acting role and allow our participants to experience simulation as themselves and rehearse their own behaviours. In the context of a course it’s important to consider who is playing which confederate and when during the course of the day, especially if that same faculty member is also to debrief during the course.
In short, I don’t think deception is useful, but there are versions of this style which can be used to provide learning. Pre-briefing, being vigilant to participant responses and not pushing things too far, as well as effective debriefing, are essential to maintaining an environment in which participants can learn.
Thanks for your comments Clare!
I had written the case with the idea (in my head) that “600J” was essentially shorthand for three shocks having been delivered at 200J, but I appreciate that is not how it reads. Your comments highlighted an interesting argument to me about ‘extreme deception’ leading to higher levels of embarrassment, whereas sometimes I wonder if making it extreme at least gives a clear signal to the sim participants that there is an expectation of intervention. As a participant in an adult emergency scenario once, it was much easier for me to intervene with the meddlesome confederate ICU consultant once he asked for resonium to be placed down the ETT than it was when he was arguing the finer points of dialysis with me (an area well, well, well outside my area of expertise). In some ways that cartoon level of medical error allowed me to interpret the signals that the scenario facilitators were trying to flag.
I agree that ‘alternate personas’ and ‘character archetypes’ help delineate the perilous line between ‘deliberately deceitful confederate’ and ‘trusted co-worker’, and while I’d always thought of the use of wigs and costumes as ‘a bit of fun’ while running scenarios, on reflection it does give a clear visual signal that ‘this person is playing a role, and is not their normal self’. Probably not revolutionary for most people, but I hadn’t thought about it in that way before.
Sometimes, on the other hand, I worry that the character archetypes we use in our ‘speaking up for safety’ scenarios reinforce the idea that people who make medical errors are also assholes. Sometimes this is true, and the unconsciously incompetent medical senior is certainly a force to be wary of, but more often than not the people advocating for incorrect treatment are good people making an incorrect assumption, being fooled by an underlying bias, or missing some of the information. I will have to reflect more on the type of character to use when discussing medical errors etc.
Thanks again!
Ben
Thanks Ben.
I think it’s very important to try to avoid caricature-type confederates in simulations – archetypes being more authentic and realistic than the stereotypical “cartoon-like” characters that I often see portrayed in scenarios. But by ensuring our confederates are not themselves – using different names, and yes, often the fun costume – we help keep the differentiation between the role and the real person, for the reasons I’ve described above.
I also worry about the stereotypical characters portrayed by many; in addition, I think they maintain the silos between our professional groups – the arrogant surgeon springs to mind, but in reality most of my surgical colleagues are lovely on most days. As educators we should avoid perpetuating the urban myths and unprofessional behaviours that can occur between speciality groups. My acting friends have suggested that it’s always important to create a character that is authentic – often giving the actor a reason why they are acting that way, to help them behave authentically. The advantage of that is that real-world solutions attempted by the team can be responded to appropriately by the confederate, especially if they are well briefed on the learning outcomes and how the scenario has been written to achieve them.
I find your comments about the extreme, cartoon-like medical errors being easier to intervene on, and giving a clearer signal, interesting. I wonder about the concept of grading the errors in the scenarios, and whether the participants always have to “click” during the simulation, or whether we can use the debrief to bring out the unrecognised error threat and allow the reflection as the learning point. In reality many of the errors are more subtle, and the consequences are the result of missed opportunities to correct the alignment of care. Both subtle and sledgehammer approaches to error in simulation allow the participants to use many different skills which are required to reduce error and improve communication and ultimately patient care. I think the same can be said for the use of some of the more deceptive techniques in simulation – if used, they must be used with caution, and the team should always aim to create a safe space but prepare for the occasional misfiring.
Clare
To start with a summary – I’m with Ian, and his “short version” sums it up nicely, I think.
Interesting discussion, and one that I had not thought about to a significant degree beforehand. I get the global sense that the arguments for deception centre around theoretical reasons why it’s OK and how it can be done safely, and this is then juxtaposed with the real (negative) participant experiences described in this paper, many of which could easily have flown under the radar of a study designed to measure those negative effects (and the authors provide some reasons why people might not admit to being as upset about being deceived as they actually were).
As a participant, when I have felt I have been set up, I tend to leave annoyed rather than reflecting on the learning objectives, and can only assume participants in my scenarios feel the same.
I wonder if this is a simulation education version of the hypotonic maintenance fluid problem in paediatrics – theoretically, hypotonic maintenance fluid was physiologically appropriate, and for most kids we got away with it, but every now and then one would become hyponatraemic to the point of seizures or death, and it took some big and carefully done studies to notice this really important downside.
With deception in sim ed you can provide all the theoretical arguments you want but there is a risk of doing enormous damage to the educator/participant relationship which, even if that is the exception rather than the rule, may well not be worth the trade off and the chances of missing that signal in a study designed to assess it are not insignificant.
I would suggest that the danger is even greater when performed in-situ as disruption of trust amongst real clinical teams is even more damaging than that amongst ad-hoc teams formed on an external/sim centre based course.
The point about deception that occurs within or outside the fiction contract is, I think, valid. The circumstance in which I could conceive of using deception is on a course specifically described as being about communication, and explicitly pre-briefed as involving confederates whose role is to enable specific communication challenges, i.e. the deception would be occurring within the fiction contract. Nick’s magic example is a good one and I think he makes this point well.
The idea of using real clinicians in real environments acting in a way that does not reflect their own competence sounds really dangerous, and I can’t imagine learning objectives that would be worth that risk, at least not where I work. The point about the danger of reinforcing stereotypes is, I think, also an important one, as confederates can easily become an unwitting vehicle for the “hidden curriculum”.
While this is an interesting academic exercise, I would not be signing up to take part in any further studies (such as that suggested by the author) involving the use of deception, as I think the risk to our sim program and its participants would just not be worth it.