103 Simulation or Simulacrum: Can Simulation Be Too Real?


Can simulation be ‘too real’? Can this ‘hyper-reality’ actually have negative impacts on learning? 

Vic takes a deep breath and dives into theory with the team from Queen's University Belfast, including Jenny Johnston, Helen Reid and Gerry Gormley. We discuss their recent Medical Education article – Into the uncanny valley: simulation versus simulacrum?

Eve Purdy joined the chat to offer her unique anthropological perspectives. 

If you’ve ever felt slightly uncomfortable with OSCEs or breaking bad news to a plastic mannequin, this might help you understand why…



One thought on “103 Simulation or Simulacrum: Can Simulation Be Too Real?”

  • Derek Louey

    Hi Ben,

    Thank you for an interesting and philosophical podcast discussing Baudrillard’s theory of ‘simulacra’, where the simulated and clinical worlds become indistinguishable. I feel this discussion somewhat relates to your other podcast on Functional Task Alignment and the relevance of simulation fidelity; I will return to this later. I would summarise the pedagogical challenge as whether learning or assessment in a constrained or parallel context provides useful transfer into the complex world of clinical medicine.

    Firstly, I find the term simulation ‘participant’ potentially problematic. It transforms the intent of the educational intervention from one of learning to one of ‘performance’. OSCEs were cited as another example of the validity (or the unintended consequences) of this transferability, where learners began to treat OSCE performance as having equal importance to ward or clinic performance. The performance that is truly important is the one occurring at the patient’s side. Baudrillard highlights the possibility that participants or assessors may begin confusing performance between the simulated and clinical worlds.

    My personal experience confirms both the beneficial and detrimental effects of this. On the beneficial side, the staged environment of simulation, where ‘thinking aloud’ is encouraged, can produce clinicians who have learnt to share their thoughts and intentions more effectively with their clinical team (though this does not guarantee that the clinical decisions themselves were ideal). Simulation also affords assessors better insight into learners’ needs through a dedicated period of debriefing and reflection that could not always be obtained on a busy clinical floor.

    However, there are also risks. Even with high levels of fidelity, the typical prompts or triggers-to-action that exist in the ‘real’ world may not be possible in simulation. These are often substituted with observations or suggestions provided by an off-stage ‘voice of god’, through the actions of helpful confederates, or by dramatically altering parameters on a monitor. The pedagogical assumption is that, had those authentic triggers actually occurred, the participant would transfer these reactions outside the simulation room and perform accordingly and in a timely manner, without the need for not-so-subtle prompting. The converse sometimes occurs: we also risk training participants to over-react or to respond prematurely in a scenario. This effect is seen when an escalating, time-compressed series of prompts is presented in a simulation to provoke an expected reaction and fulfil the learning objective. One negative impact I have seen is junior clinicians quickly reaching for magnesium sulphate to treat moderately severe asthma before the salbutamol has even had a chance to take effect; this may be a learnt reaction from simulation. The lesson to draw as custodians of this modality is that we need to always triangulate our contrivances and observations in simulation and see how well they correlate with actual behaviours in the clinical environment.

    I now turn to your other article that you reviewed on Functional Task Alignment.

    http://simulationpodcast.com/journal-club-february-2021-_-functional-task-alignment/

    This article examines the relationship between simulation and reality from a slightly different perspective, specifically exploring the nature of fidelity. Having concluded that even physical fidelity is an elusive concept, it proposes that an educational intervention, through its components, context, and features, should functionally ‘align’ with the learning outcome. Biggs has previously described this synergy as ‘constructive alignment’.1

    Based on the outcomes-based discourse of education, the focus for any learning or assessment modality shifts toward the actual outcomes being achieved (positive or negative), and away from faithful adherence to a specific technique or approach. The risk for any education specialist is taking a siloed approach to teaching and assessment. By isolating these learning moments or data points from each other, we risk losing an important understanding of our learners. Regardless of whether you are an expert in the construction of MCQs, SAQs, OSCEs or simulations, they are all partial representations of this abstract ‘truth’ called competence. Even WBAs are still constrained moments that provide only a snapshot of a learner’s performance. Competence exists in a complex, messy world which we evaluate using a series of simplified measures, each of which has its own strengths and weaknesses.

    I would argue that the most useful contribution of simulation education comes not from simply doing more of it, finding more applications for it, or refining its delivery (useful as this may be), but from auditing it as part of a co-ordinated, multi-pronged system of programmatic assessment that incorporates several sources of verification and feedback to assist each learner’s progress towards actual competence.2

    Yours,

    Derek

    1. Biggs J. Teaching for Quality Learning at University. 4th ed. Maidenhead: McGraw-Hill Education; 2011.
    2. Schuwirth L, Van der Vleuten C. Programmatic assessment: From assessment of learning to assessment for learning. Medical Teacher. 2011;33:478-485.