Simulcast Journal Club September 2016 – The Theory of Evolution


Introduction : 

Simulcast Journal Club is a monthly series heavily inspired by the ALiEM MEdIC Series.  It aims to encourage simulation educators to explore and learn from publications on Healthcare Simulation Education.  Each month we will publish a case and link a paper with associated questions for discussion.  We will moderate and summarise the discussion at the end of the month, including exploring the opinions of experts from the field.

Journal Club

Title : The Theory of Evolution

The Case : 

Brad was furious.

That meeting should have been a triumph.  All year he had worked hard at establishing a simulation program for the Intensive Care Unit.   He had trained simulation faculty, found sponsorship to fund the purchase of two mannequins, converted the spare bed space to a sim lab and developed a curriculum for the junior doctors rotating through ICU.  And today’s presentation to the other consultants was supposed to be a celebration of how far the program had come.

But yet again, his snarky colleagues couldn’t resist the temptation to cut him down.

“What’s the evidence any of this works?” Dr Snythe had snidely asked.  “As far as I can see, the educational research that’s out there is a bunch of opinion pieces and inconsistently reported RCTs.  With the money you got for those damn mannequins, we could have hired 2 more nurses to provide actual patient care!”

“What’s the evidence you do anything besides drink coffee on your non-clinical shifts?” Brad had wanted to retort, but fortunately his frontal lobe had engaged at that point.

Well he’d show them.

Brad had been collecting data for 12 months prior to rolling out the sim program.  He was armed with a swathe of Likert scales, survey responses and results from the junior doctors’ Advanced Life Support assessments.  With the help of a keen epidemiology student at the local university, Brad was going to reassess the junior doctors this year in the same format.  While they continued to attend their regular education program, Brad was certain that his survey outcomes would be better this year, and he had the staff numbers to hit a pretty decent p value.

Given his experience in clinical RCTs, Brad was confident his reporting standards would be exemplary, and publication would be likely.

“Can’t wait to see Snythe’s face when I finally prove this,” thought Brad.

Sometimes, success was the best revenge.


The Article :

“Reporting Guidelines for Health Care Simulation Research: Extensions to the CONSORT and STROBE Statements” – Cheng A, Kessler D, Mackinnon R, Chang TP, Nadkarni VM, Hunt EA, Duval-Arnould J, Lin Y, Cook DA, Pusic M, Hui J, Moher D, Egger M, Auerbach M; International Network for Simulation-based Pediatric Innovation, Research, and Education (INSPIRE) Reporting Guidelines Investigators.


Discussion

As simulation educators who see the fruits of our educational labours on a frequent basis, it can seem a foregone conclusion that simulation is an incredibly valuable teaching tool.   But creating a strong evidence base to prove that has been a bigger challenge.  For clinicians used to evaluating pharmaceutical RCTs, contributing to and evaluating simulation research requires a surprisingly different skillset.

In August this year, Cheng et al. published a set of reporting guidelines for healthcare simulation research.  The publication has been hailed as “A Joint Leap into a Future of High-Quality Simulation Research” and as such is an important read for simulation educators, both for those planning to contribute to simulation research and for those wishing to critique and assess the published literature.

This is a somewhat ‘drier’ article than others we’ve run on journal club, but it’s a critically important piece to be informed about, and we look forward to your thoughts and reflections on it.

Please comment below. Here are some questions to get you started:

  • What advice would you have for Brad about engaging in his first foray into simulation research?  What do you wish you knew before you started on a similar pathway?
  • What issues have you had interpreting simulation literature?
  • Do you have a structured approach to educational literature?  How has this paper affected your approach to critical analysis of simulation literature?

References : 

  1. Cheng A, Kessler D, Mackinnon R, Chang TP, Nadkarni VM, Hunt EA, Duval-Arnould J, Lin Y, Cook DA, Pusic M, Hui J, Moher D, Egger M, Auerbach M; International Network for Simulation-based Pediatric Innovation, Research, and Education (INSPIRE) Reporting Guidelines Investigators.  Reporting Guidelines for Health Care Simulation Research: Extensions to the CONSORT and STROBE Statements.  Simul Healthc. 2016 Aug;11(4):238-48. doi: 10.1097/SIH.0000000000000150.
  2. Sevdalis N, Nestel D, Kardong-Edgren S, Gaba DM.  A Joint Leap into a Future of High-Quality Simulation Research—Standardizing the Reporting of Simulation Science.  Simul Healthc. 2016 Aug;11(4):236-37. doi: 10.1097/SIH.0000000000000179.

About Ben Symon

Ben is a Paediatric Emergency Physician at The Prince Charles Hospital in Brisbane and a Simulation Educator at Queensland Children's Hospital.

25 thoughts on “Simulcast Journal Club September 2016 – The Theory of Evolution”

  • Nick Argall

    Firstly, thank you for publicizing these standards; it is critically important to be aware of them. (If having difficulty with the link to the Sevdalis paper, this one works for me: https://advancesinsimulation.biomedcentral.com/articles/10.1186/s41077-016-0026-x )

    The first thing that I’d say to Brad is that he should reconsider the way that he thinks about his colleagues. Dr Snythe’s concerns are perfectly legitimate, and engaging with him on the basis of “I’ll show you” is a great way to entrench an adversarial relationship. Given that Dr Snythe is outspoken, it’ll be much better to recruit him as an ally than to refute him. Given his clear concerns about research quality, his advice on that topic should be actively sought and incorporated into the final paper. When I was starting to work in technology (I have a history in the medical industry, but not in medicine), I wish I’d understood more about persuasion, instead of focusing on proving people wrong.

    When I read reports on the efficacy of simulation as an educational tool (which is my interest, as an educator), I tend to find it frustrating. Extremely limited and precise studies tend to produce credible results, but are extremely limited. There’s also a lot of research that draws conclusions that don’t seem to derive rigorously from the results. I’ve stood in a room where a researcher presented two papers in quick succession – in the first paper, he showed that Method A was more stressful for participants than Method B, and was therefore superior. In the second paper, he showed that Method C was less stressful for participants than Method D, and was therefore superior. No information about why ‘more stressful’ was good in the first comparison and ‘less stressful’ was good in the second comparison was supplied.

    I know that my own work has suffered from a lack of clarity regarding goals – we figured we were onto a good thing, and set about sharing it with people. When asked “What were the learning objectives?” I would turn to the room and say “What did you learn?” It was only when the answers proved to be consistent that we were in a position to say ‘These are our goals’.

    Perhaps that’s another piece of advice that I should give to Brad: If you’re trying something that really is new, have an open mind about what outcomes it will produce. It may completely fail to deliver the expected benefit. It may be spectacularly successful at doing something unintended but desirable. It may also have undesirable side-effects. Cast the widest net you can, until you think you know what the data is telling you. Once you know what the topic really is (as opposed to what the topic was supposed to be), you’re in a position to drill into the details.

    I have to confess that I don’t have a structured approach to the literature – I’m not sure that this is a bad thing. I focus on establishing relationships with people working in the field, letting them know what I’m working on, and reading the things they tell me to read. If I find myself repeatedly going to a particular journal, I’ll subscribe. I don’t think that the Reporting Guidelines will change the way that I read, but they will most certainly inform the way that I write.

    • Ben Symon

      Thanks for your thoughts Nick!
      So am I right in saying that your approach to investigating innovative educational techniques is to collect data without necessarily having a defined clinical question every time, and then see what treasure comes up?
      It’s an interesting technique, but a hard one for a junior researcher, given that the inherent focus (I think) comes from years of reading drug-related RCTs: i.e. gold standard = double-blinded RCT of one drug vs another, and who does better for this disease. I think as well, with the eternal hope of getting something published, just casting the net and seeing what you find would take a lot of confidence and experience in what you’re doing.

      • Nick Argall

        Thanks Ben!

        You make some excellent points, and I see how my response might have made for a confusing read, because I switched rapidly between “Advice that I would give” and “My experience” – I would not necessarily advise people to replicate my experience! When it comes to overcoming the challenges faced by a junior researcher, I have to be extremely careful what I say – I’ve never been a junior researcher, and don’t have a sufficient awareness of the pitfalls.

        I have experience in engineering (98% IT systems engineering), in corporate sales, management, as an inventor, as a mental health volunteer, and I provided care in student clinics before dropping out. My limited experience with academia shows that it is a very specific world (much like a courtroom) with very specific rituals. I’m increasingly aware of the cultural divide between engineers and scientists, and I recognize that I’m on the ‘engineering’ side of that divide.

        Medical research is an interesting cultural challenge, in that the work doctors do is engineering (diagnosing and repairing faults in an extremely complex but well-established system, where the original designer is frustratingly unavailable for comment) but medical research seems to be an academic pursuit. And although providing medical care is maintenance engineering, there are a lot of engineering techniques (like ‘build a prototype’ and ‘test to destruction’) that are either impossible or unethical when working on human beings or animals.

        ‘Casting the net’ takes a lot of confidence, derived either from the experience required to navigate the pitfalls, or from ignorance of the pitfalls. It’s much more difficult in the early to mid portions of a career. (Ignorance comes naturally at the beginning, and confidence hopefully builds in mid-to-late career.)

        When it comes to the simulation work that I’m currently doing, I was convinced that my outsider’s perspective would be useful, but I had no idea why. I was fortunate enough to meet Steve Costa, who has been extremely patient with me as I throw ideas at him, and who acts as a preliminary filter between my ideas and his students. He’s a very inventive fellow himself, with the experience needed to correct problems before they turn harmful, and he has not forgotten his painful experiences as a student.

        For an even more challenging ‘outside perspective’ I’d invite you to consider Ted Kaptchuk’s work examining the processes of clinical research. Professor Kaptchuk wrote the first English-language textbook on Chinese medical theory, and now leads the “Harvard-wide Program in Placebo Studies”: http://tedkaptchuk.com/

        • Ben Symon

          Thanks Nick, yeah there’s this really weird crossover point in transitioning from clinical work to academia in medicine. For some people I’m sure it comes naturally, or they do a PhD or whatever, but I think there’s an important subgroup of clinicians (such as myself) who just kind of ‘stumble through it’ with (hopefully) the aid of some mentors. For clinical research we have at least been trained through med school and regular journal club discussions etc. on how to assess the quality of a paper, so we have the basic tools to start toddling through producing clinical research; but educational research is a completely different and somewhat bewildering field.

          I think these guidelines for me, particularly the summary tables, are really a rock to cling to in the storm of educational research vagueness. For someone just starting out, they provide a comprehensive, thorough structure on a gold standard of important information to include. Like a lot of profundities, much of it seems obvious once it’s written down and formalised, i.e. “Of course it makes sense to include participants’ previous experience with simulation”, but I am embarrassed to admit that as a new researcher that never would have occurred to me.

          Which is why I think it’s such an important paper; I love reading a paper that suddenly reveals my own blind spots – you get such a feeling of personal growth through reading it. And my impression, from the fact that editorials are using such phrases as “evolution” and “leap forward”, is that I’m not the only one who’s having some blind spots exposed.

          • Nick Argall

            Thank YOU Ben 🙂

            I agree completely that as a writer, the papers turn a horrible mess of “What on EARTH do I say?” into concrete guidance. I’m increasingly and uncomfortably aware of how much I don’t like academia, and how important it will be for me to participate in that world.

            I’m really grateful that you posted this. See you in Melbourne later this month?

  • Jessica Stokes-Parish

    Hi Nick, Hi Ben,

    A scenario that I can identify with and fantastic articles to review. Firstly, on the paper – it is incredibly timely and will prove useful as the theoretical frameworks and recommendations for practice in simulation progress. A personal pet peeve within the simulation space is when research is limited to ‘happy sheets’ (a.k.a. a Likert scale about how much the participant enjoyed the session) with no further exploration. I do a dance when I see that an individual has described the orientation process used, previous exposure to simulation and the detailed elements used to create the simulation. So this is a fantastic guide for anyone wishing to embark on the journey of simulation research.

    Secondly, let’s get to Brad. Ah Brad. I can assure you I have experienced your frustration many a time. The first example that comes immediately to mind is when an undergraduate inter-professional obstetric simulation was removed from the curriculum without consultation. This simulation was of a high standard and the research was showing that its influence went far beyond ‘happy sheets’. We were furious. I digress. However, my suggestions do link to this – think beyond the happy sheet!
    My suggestions:
    1. Ask yourself some questions.
    What is it you would like to know? What would you like to prove to your colleague? Are your motives sound? What are his motives?
    In reality, simulation IS expensive. Simulation IS time consuming. Most involved in simulation are simulation evangelists – to those not quite convinced, this can be infuriating. There is no point Bible-bashing (for want of a better metaphor). Keep in mind that measuring the gold standard outcome (improved patient care, fewer adverse events) is complex and often not possible (at this point) – how do you measure near misses?! And remember, “people are not against you; they are for themselves.”
    2. Check the resources/research that is already out there.
    Surprisingly, sometimes someone has already done the hard work for you. Get an understanding of the key theories, thoughts and literature out there – this will give you the ability to understand the purpose and results of your research in a much deeper way.
    3. Get a mentor
    The world of research and academia can be overwhelming. As clinicians we are often task-focussed and struggle to think the same way those immersed in research do. This can be a bone of contention between the two spheres – the clinician using simulation doesn’t want to know the minute detail about the research, and the researcher gets frustrated when a clinician does the research without understanding the foundational aspects of simulation. Find someone who can bridge this gap for you. In my Simulation Centre-based role (I know, I can hear you groan!!), I was a free resource for the clinicians wanting to implement or research simulation – we want to help! And you can teach us to remain grounded and deliver practical (not theoretical) solutions from our research/experience.
    4. No data is no data!
    Collect something! If, for example, you want to explore how interprofessional relationships improved as a result of simulation, go find a tool already developed that you can use (e.g. RIPLS, T-TAQ, ICCAS). This will give you something beyond a ‘happy sheet’ in a relatively easy way.

    • Ben Symon Post author

      Jessica, thanks for your comments. I was wondering if you could elaborate about ‘The resources/research that is already out there’. Where would you point a new researcher to? Are there pivotal papers or books you have found most valuable?

      • Margaret Bearman

        Hi, Margaret Bearman here, coming from a slightly different angle.

        It is an important and useful paper. And the advice I’d give to Brad is about how to make the best of what he’s done now, as it IS important work for educators to evaluate what we do, and to try and do so in a scholarly manner (whether your colleagues are snarky or supportive). The year’s practical paper may transform into something else – particularly if Brad follows Jessica’s EXCELLENT advice and listens to Nick’s sharp observations about the balance between ‘credibility’ and ‘narrowness’.

        I believe we (particularly clinicians) expect too much from ourselves as novice educational researchers. Educational research is an expert practice. I’ve been doing it for a while now (OK over a decade) and I still feel like there are so many mountains to climb!

        I think the guidelines are very valuable and the simulation extensions really help guide practice. All credit to the simulation community for working together on this one. Table 3 is terrific and I think will make a huge difference.

        Having said all this, I have a slightly ambivalent response to standards. This is not really to do with the standards themselves, but with standards in use.

        Quality of the literature and reporting standards are often conflated. That is to say, just because something is reported does not mean it is of quality – it simply allows us to make judgements about quality through transparency of reportage.

        This is based on my view of the use of standards in systematic literature reviews (shameless self-promotion – see recent commentary on quality in literature reviews in Medical Education).

        Great journal club topic and thanks to Ben for a great case to work with.

        Margaret

        • Ben Symon Post author

          Thanks for your thoughts Margaret, I enjoyed your point about guidelines adherence not necessarily equalling brilliant research. I look forward to reading your commentary!

        • Nick Argall

          Hi Margaret,

          In an effort to work out what your email address is, I’ve created a ResearchGate account (which is probably worth doing anyway) and pressed the ‘request full-text’ button. Not sure how that works, but I guess I’ll learn.

          My email address is nick.argall@hybridsims.com – website is in desperate need of a rebuild, and will be relaunched shortly.

          Nick

  • Ian Summers

    Dear Brad,

    I fear that the evidence you are collecting will not be enough to convince the simulation cynics, funders and managers allocating clinical support time in your hospital that improved performance on simulator-based ALS assessments justifies the funds required to invest in simulation, so I wouldn’t suggest you use this as your primary motivation.

    I would look for other triggers, motivations and strategies to try and convert your ICU colleagues, and especially your director/CEO/board etc. As Nick has said, you are far better off trying to “recruit them as allies”. Often they will be convinced not by you, but by the nurse they admire who comes back from a MET call and says “That ran much better after the MET sim session on Thursday; we finally had ICU and anaesthetics and the med reg talking to each other and making a plan. You should try and go to the next session”.

    Invite your exec team to attend a sim session as participants at some stage. Unrepresentative sample, I know, but they gain from learning BLS and seeing their shiny sim centre/team at work. And they might see it as a team-building exercise for themselves. Nothing like the CEO problem-solving the AED with the HR manager. They also instinctively understand CRM – their life is spent solving one acute crisis while anticipating and planning to avoid the next. It actually makes for a fascinating CRM session.

    With regard to the article – I was looking at the site of the meeting to see if there was a selection bias towards those able to fund and attend a session in New Orleans, expecting to see medical > nursing/academics and North America > rest of the world. And yes, there is a predominance of North American contributors (with a few prominent overseas names), but I don’t see the content produced as being anything other than helpful and welcome. So thanks to the group behind this article.

    As usual I have learnt from the comments of Nick, Jessica and Ben, and it’s great to see/hear Margaret on board.

    Ian S

    • Ben Symon Post author

      Thanks for your thoughts as always Ian. I agree about a North American bias (from a conference-presence point of view), but I think unless we can fund an international sim space station there will always be local bias in guidelines developed at a conference. It would be interesting to hear if any of our Scandinavian friends, for example, have noticed any significant differences in approach. (I’m looking at you, Sandra Viggers :p)

  • Chris Cropsey

    Hello everyone,

    I’ve been journal club “lurking” for the past couple months but wanted to throw in a few comments this month.

    I really appreciate what you said, Ben, about the challenges of being a junior researcher. I can totally identify! I’ve never had formal training in carrying out research, and now that I’m in the middle of a 2-year simulation-based research project I’m realizing just how much I have to learn. I’m totally in that sort of “stumble through it” group – even with solid mentors around, it still feels like there are pitfalls and drop-offs around every corner. Thanks for giving voice to sentiments that I feel frequently, and sometimes feel like no one else has dealt with (although I’m sure they have, they just don’t talk about it).

    I think coming from that background, this paper is a huge help for me in planning research. In many ways, using these guidelines as a blueprint of sorts would probably result in far fewer study design flaws. One of my recurring nightmares, having already collected about half of my data, is that I’m going to get to the end of the project and realize I didn’t adequately control for some aspect of the simulation that I never really considered at the outset. The authors of this paper are fairly detailed in their recommendations and I definitely plan on using these to guide my planning in the future.

    One other thing I really appreciated about this paper was the fact that it clearly identified two different roles for simulation in research – simulation as an intervention itself, and simulation as a technique for studying other interventions. I think we often lump all simulation into one big pile, but there are actually some unique aspects to both of these purposes, and the authors do a good job of addressing that.

    I am curious to see how this all plays out in the actual practice of publication. With so much additional detail being reported about the debriefing, simulation scenario, setup, etc., I am guessing there will be a need to publish a lot of this as supplementary material rather than in the main body of the paper. I do find that a bit of a shame, in that it can sometimes be difficult to track down. However, to be fair, if the paper is really interesting enough to merit our time as readers, then it is likely worth the effort to find any supplementary material online or elsewhere.

    I love hearing the perspectives of everyone on here – I want to say thanks to Margaret for reminding me that it’s ok not to be perfect as a “novice educational researcher”. Not to say we shouldn’t aim high, but that it takes time to achieve expertise.

    • Ben Symon Post author

      So nice to hear from you Chris! I hope the research project is going well, did these guidelines make you want to change anything specifically in your project?

  • Adam Cheng

    Hello Simulation Colleagues,

    Ben – nice little vignette you’ve shared here which has stimulated some really great discussion! I love the frank, honest and practical comments that many have shared. Some have mentioned that they plan on using the reporting guidelines to help with designing and implementing their studies ahead of time. While the guidelines weren’t necessarily designed for this use, I think many will find value in using the checklists in this manner (particularly Table 3).

    Margaret’s comment re: quality is bang on! Just because something is reported doesn’t mean it is high quality …. the reporting guidelines should not be used to assess the quality of a study … but rather to gauge completeness of reporting.

    Thanks for getting the discussion going on this paper – happy to field questions as they arise. Hope to see many of you at SimHealth in a few weeks.

    Warmest regards
    Adam

    • Ben Symon Post author

      Adam, thanks so much for stopping by; it’s a really exciting opportunity for us to have some conversations with one of the authors of this paper. Having now published the extensions to great fanfare (Table 3 in particular keeps getting highlighted as a very valuable tool for sim researchers and clinicians alike), have you received any feedback or had any reflections on anything that might need further iteration in the future?

      • Adam Cheng

        Good question Ben. Some have commented that there isn’t enough detail in the extensions related to educational/instructional design. As you mentioned in your prior comment, we decided to develop a completely separate table that lists the desired elements to report for sim-based educational interventions …. while it is not in checklist form, we certainly encourage investigators to use Table 3 as a checklist when preparing manuscripts (or studies!).

        Hopefully we’ll see more people sending us feedback and ideas at http://inspiresim.com/reportingnowavail/
        The comments collected will help to inform the next iteration of the guidelines … probably in 5-7 years.

  • Gabriel Reedy

    Sorry for the delay in responding on this… just getting back into the swing of the new academic year here in the northern part of the world! A few thoughts from my perspective.

    I’m pleased that we’ve got these standards as a guide and a touchstone for simulation-based research. It helps to move us forward together as a field.

    I do tend to agree with Margaret about the danger and potential downsides of standards, though. In particular, there’s a worry that they can drive research into very specific and, occasionally, limited directions. I’m reminded of some experiences I’ve had over the last few years at international simulation conferences. The first was a couple of years ago, when it was patiently explained to me in a public forum that qualitative work requires a hypothesis just like quantitative work (and that if you’re not testing a hypothesis, you’re not doing research). Obviously this betrays a misunderstanding not just of qualitative research but also of the breadth of quantitative research methods. I was further told that educational interventions were just like drug trials or trials of other therapeutic interventions in medicine: the only question we should be exploring was effectiveness, and therefore the only legitimate approach to doing so was randomised controlled trials.

    I’ve had similar experiences during professor rounds at various international conferences where a postdoctoral fellow and a PhD student were presenting posters on their respective work in progress. They were each told variations on the theme of “that doesn’t look to me like research” because the work was empirical but collected qualitative data. It’s disappointing and disheartening for them; but more importantly, it’s dangerous for the field as a whole to slide into a very narrow view of what counts as research. At another conference, a professor round that I attended had a simulation expert announce to one of the presenters that “your poster has numbers data, and is therefore actual research. Well done.”

    I don’t share these anecdotes to invoke pity at all. Rather, I point them out to highlight the ways in which our conceptions of simulation education, and of what counts as worthwhile knowledge about the field, are limited by ourselves within the field. This happens even among colleagues who generally acknowledge the potential benefits of rigorous qualitative research.

    So the trap that Brad is falling into with his colleague Dr Snythe is that, by striving to “show him” how good his program is, he is perpetuating only one particular approach to understanding what happens in simulation-based education. In doing so, all of us – no matter how well-intended – help perpetuate the generation of particular kinds of knowledge and continue to limit the kinds of questions we can even ask. And that, I worry, limits us as a field.

    • Margaret Bearman

      Fabulous contribution Gabe. So well said. I agree whole-heartedly.

      Following on, I also think it’s important that – as simulation researchers move from novice to expert – they come to grips with learning and other theories, which underpin both simulation-based education and research. This assists with thinking beyond the status quo. So if I can give one more piece of advice to Brad, it is to read broadly and read beyond your comfort zone.

      • Victoria Brazil

        Thanks Margaret and Gabe
        Really appreciate the thoughtful take on this.

        Gabe, I agree, and it’s a broader tension when educational scholars (or clinicians trying to become that) enter the healthcare field (especially medicine), where that ‘hypothesis/RCT’ paradigm is so strongly reinforced as ‘real research’.
        We had a similar issue with our trainee research requirement. The College lead actually said “we really need to see numbers for it to be of adequate quality”.
        In the end they decided not to allow qualitative research on the basis that they didn’t have Fellows to adjudicate the submissions. Actually I thought that was fair enough, but it means that EM trainees are just not exposed to other methods (many of which are very suitable for answering some key questions in EM practice).
        I’m glad folks like you and Margaret are around to help us navigate these dilemmas 🙂

        vb

        • Gabriel Reedy

          I think that’s right – a tension is exactly what it is! I’m reminded of the old adage that “when you have a hammer, every problem looks like a nail.” It’s not surprising that when the field is steeped in the idea of RCTs as the gold standard of empirical evidence, every research question would be approached with that particular tool. Why would you not want gold standard evidence?

          I should note that the reporting guidelines absolutely do not fall into that trap, and do quite clearly provide for different quantitative research designs. This, I believe, points to the robust and thoughtful nature of the guidelines.

          Margaret I think you’re absolutely spot-on regarding the importance of reading widely as a consumer of research. I really believe that helps people broaden their conception of research and their appreciation of the breadth of kinds of knowledge that can be generated with different approaches.

  • Ben Symon

    Thanks for your comments everyone, I’m going to summarise the discussion at this point.
    Please feel free to discuss the article further, however further comments won’t be mentioned in our .pdf summary.
