Elaboration and Feedback for Clinical Reasoning Training

Sponsor
University Medical Center Goettingen (Other)
Overall Status
Completed
CT.gov ID
NCT05585892
Collaborator
(none)

Study Details

Study Description

Brief Summary

Clinical reasoning abilities can be enhanced by repeated formative testing with key feature questions. An analysis of wrong answers to key feature questions facilitates the identification of common misconceptions. This prospective, randomised, cross-over study assessed whether an elaboration task and individualised mailed feedback further improve students' clinical reasoning performance.

Intervention/Treatment:
  • Other: Elaboration and individual mailed feedback
Phase:
N/A

Detailed Description

Repeated formative (i.e., non-graded) testing enhances student learning outcomes with regard to clinical reasoning skills. At University Medical Centre Göttingen (UMG), a number of trials investigating the so-called testing effect have been conducted in the past. Among other findings, they showed that working on videotaped clinical cases, compared with written cases, improves short-term outcomes but not long-term retention. More recently, one study addressed the question of whether clinical reasoning skills can be fostered by an elaboration of incorrect answers. Results of a previous trial had suggested that a considerable number of students were not sufficiently motivated to provide thorough answers to elaboration questions. This impression remained even after introducing financial incentives for students, although a small but significant effect of the intervention was noted (percent score in the exit exam: 65.7 +/- 19.6% vs. 62.3 +/- 22.9%; p = 0.022). Yet, student performance remained moderate at best. Thus, the intervention will now be extended by adding automated feedback provided by email. All students participating in an electronic case-based seminar (e-seminar) will receive an individual email after the event, displaying their raw point score as well as their written answers to elaboration questions and expert comments reflecting current medical knowledge.

This trial addresses the following research question:

What is the effect of elaboration and consecutive automated and individual feedback following e-seminars on medical students' clinical reasoning skills?

  1. Background and previous work

According to recent findings, retrieval of knowledge is not a passive process. Instead, long-term retention is facilitated by the act of retrieval itself ('retrieval hypothesis'). This effect, which has also been called the 'direct testing effect', could potentially be due to additional exposure to the content during an assessment. However, complex studies in which exposure was experimentally controlled did not lend support to this 'total time hypothesis'. The effectiveness of examinations as memory boosters in medical education has been shown in a number of studies. However, many of these used short follow-up periods (e.g., 7 days) or implemented reproduction tests at a low taxonomic level. Yet, these studies suggest that formative examinations may promote learning processes. According to a review of the topic, such exams should contain production tests and be repeated with appropriate spacing. In addition, students should receive feedback shortly after the exam.

Given these recommendations, longitudinal key feature examinations were implemented in three consecutive teaching modules at our institution in 2013. These case-based examinations lend themselves to fostering complex cognitive skills. A key feature is defined as a critical step in solving a clinical problem. Accordingly, a key feature case consists of a case vignette and approximately five consecutive questions relating to the diagnostic and therapeutic approach. In contrast to single-best-answer multiple choice questions, students cannot choose from a list of five answer options but must produce a written answer. Thus, rather than recognizing the correct answer, the aim of a key feature examination is to actively produce a correct answer. To prevent students from making follow-on mistakes, they are informed of the correct answers to preceding questions before attempting the next question. At this point, students also receive static feedback on their previous answer.
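Purely as an illustration of the key feature format described above (and not part of the registered protocol), such a case might be represented in an electronic exam system roughly as follows; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class KeyFeatureQuestion:
    # One critical step ('key feature') in solving the clinical problem.
    # Answers are produced as free text rather than picked from answer options.
    prompt: str
    correct_answer: str   # revealed before the next question to avoid follow-on mistakes
    expert_comment: str   # static feedback shown after the answer has been submitted


@dataclass
class KeyFeatureCase:
    # A case vignette followed by approximately five consecutive questions
    # on the diagnostic and therapeutic approach.
    vignette: str
    questions: List[KeyFeatureQuestion] = field(default_factory=list)
```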

Recently, the results of a randomized cross-over trial comparing active retrieval using key feature questions with repeated study of the same material were published. The data showed that working on key feature cases with static feedback elicited a larger medium-term learning outcome than passive restudying of the same content. The specific role of feedback in this process, however, remained unclear.

Current findings from educational psychology research suggest that diagnostic errors made in a protected learning environment can serve as starting points for further elaboration, which may eventually lead to a reduction in diagnostic errors in clinical practice. This trial aims to implement and evaluate this concept. To this end, existing data obtained in previous trials at UMG were analysed with regard to common clinical reasoning errors (CCRE). On this basis, e-seminars running in parallel to curricular teaching in the three aforementioned modules were modified in that, upon answering specific questions, students were prompted to comment on frequent CCREs ('elaboration'). The analyses of student entries revealed that, although all the content had been covered in preceding teaching sessions, a considerable proportion of entries were slack answers (e.g., 'don't know' or 'no idea'), suggesting that students might not have taken the exercise seriously enough. In fact, this notion was corroborated by student comments during focus group discussions following the main study. As a consequence, the study was repeated in the following year, and this time complete answers to elaboration questions were incentivised with book vouchers. In this setting, a significant effect of the intervention was noted, but student performance was still moderate at best. Given the importance of feedback for learning processes elicited by formative examinations, this aspect will be strengthened in the trial described here. Students can already open a text box containing static feedback after each question, but so far they have not received personal feedback after each exam. In winter term 2018/19, all students participating in the trial will receive individual emails containing (a) the raw point score achieved in each e-seminar, (b) static expert feedback on elaboration questions, and (c) their own entries to these elaboration questions. Thus, students will be able to compare their own answers with the instructor feedback.
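Purely as an illustration of the mailed feedback described above, a minimal sketch of how such an individual email body might be assembled is shown below; the function name, argument names and data layout are hypothetical and not taken from the actual e-seminar platform.

```python
def build_feedback_email(student_name, raw_score, max_score, elaboration_entries):
    """Assemble the body of an individual feedback email (hypothetical layout).

    elaboration_entries: list of dicts with keys 'question', 'student_answer'
    and 'expert_comment' -- an assumed structure, not the study's actual data model.
    """
    lines = [
        f"Dear {student_name},",
        "",
        # (a) raw point score achieved in the e-seminar
        f"Your raw point score in today's e-seminar: {raw_score}/{max_score}.",
        "",
    ]
    for entry in elaboration_entries:
        # (b) static expert feedback and (c) the student's own entry, shown together
        lines.append(f"Elaboration question: {entry['question']}")
        lines.append(f"Your answer: {entry['student_answer']}")
        lines.append(f"Expert comment: {entry['expert_comment']}")
        lines.append("")
    return "\n".join(lines)
```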

  2. Design and Conduct of the Study

This is a randomised controlled cross-over educational trial. Participating students will be stratified according to sex and summative exam scores in the previous term. Subsequently, they will be randomised to one of two study groups in a 1:1 fashion. During weekly e-seminars, they will work on clinical cases addressing the diagnostic and therapeutic strategies needed to manage patients with prevalent symptoms of general medical disorders. Cases will be presented as key feature cases with five questions per case. For some of these questions, elaboration questions will be written. These will focus on common misconceptions and clinical reasoning errors. When used as 'intervention items', elaboration questions will be shown after the original key feature question, and students will be prompted to enter a free-text answer. Upon completing both the original item and the elaboration question, they will be able to access static feedback ('expert comment'). This feedback will be included in an email sent to all students on the day after the e-seminar, which will also contain individual performance data as well as the student's free-text answer to the elaboration question. When used as 'control items', the same key feature questions will be displayed, and students will be able to access the expert comment directly after answering the question. Information on control items will not be included in the mailed feedback.

Every student will be exposed to 15 intervention items and 15 control items, and each of these will be shown twice over the course of 10 weeks. Items shown as intervention items in one randomised group will be shown as control items in the other group and vice versa, thus making each student their own control. At the end of the study, individual 'intervention item' and 'control item' scores will be computed for each student, and these two scores will be compared using a paired t-test. This primary analysis will be done to test the following hypothesis:

"Long-term retention will be better for content that has been repeatedly tested with additional elaboration questions and subsequent mailed individual feedback than for content that has been repeatedly tested alone."

Long-term retention will be assessed in a formative electronic key feature assessment in summer term 2019, which will be identical to the entry and exit exams held in winter term 2018/19.
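As a minimal sketch of the primary analysis described above (a within-subject comparison of percent scores on intervention versus control items using a paired t-test), the following Python snippet illustrates the idea; the file name, column names and data layout are assumptions rather than the study's actual analysis code.

```python
import pandas as pd
from scipy import stats

# Hypothetical layout: one row per student with percent scores achieved on the
# 15 intervention items and the 15 control items in the retention test.
scores = pd.read_csv("retention_test_scores.csv")  # assumed columns: student_id, intervention_pct, control_pct

# Paired t-test: each student serves as their own control in the cross-over design.
t_stat, p_value = stats.ttest_rel(scores["intervention_pct"], scores["control_pct"])

mean_diff = (scores["intervention_pct"] - scores["control_pct"]).mean()
print(f"Mean within-subject difference (intervention - control): {mean_diff:.1f} percentage points")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```

Under this hypothetical layout, a positive mean difference would be in line with the hypothesis that content tested with additional elaboration and mailed feedback is retained better.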

Secondary analyses will include unadjusted and adjusted linear regressions with percent scores in the exit exam and retention test as dependent variables and student characteristics as well as their engagement with key feature questions as independent variables.
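Along the same lines, a hedged sketch of the secondary analyses (unadjusted and adjusted linear regressions of exit-exam and retention-test percent scores on student characteristics and engagement with key feature questions) could look as follows; all variable and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per student with exam percent scores and predictors.
df = pd.read_csv("student_data.csv")  # assumed columns: exit_exam_pct, engagement_score, sex, prior_exam_pct

# Unadjusted model: engagement with key feature questions as the only predictor.
unadjusted = smf.ols("exit_exam_pct ~ engagement_score", data=df).fit()

# Adjusted model: adding student characteristics such as sex and prior exam performance.
# The same models would be fitted with the retention-test score as the dependent variable.
adjusted = smf.ols(
    "exit_exam_pct ~ engagement_score + C(sex) + prior_exam_pct", data=df
).fit()

print(unadjusted.summary())
print(adjusted.summary())
```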

Study Design

Study Type:
Interventional
Actual Enrollment:
143 participants
Allocation:
Randomized
Intervention Model:
Crossover Assignment
Intervention Model Description:
This was an educational study with a cross-over design in which every undergraduate medical student served as their own control, as both groups were exposed to control and intervention items in non-graded key feature examinations.
Masking:
Double (Investigator, Outcomes Assessor)
Masking Description:
Participants could not be masked to the intervention as it was clear to them whether they had received an email (for intervention items) or not (for control items). Investigator and Outcomes Assessor knew that each participant was exposed to both item types but they were masked to the group a student was assigned to.
Primary Purpose:
Other
Official Title:
Randomised Cross-over Study on the Effectiveness of Elaboration and Proactive Feedback as Part of Formative Key Feature Examinations in Undergraduate Medical Education
Actual Study Start Date:
Oct 1, 2018
Actual Primary Completion Date:
Jun 14, 2019
Actual Study Completion Date:
Jun 14, 2019

Arms and Interventions

Arm: Other: Intervention items 1-15 (group A)

Students in this arm were exposed to intervention and control items in the 10 weekly e-seminars. Assignment of item types was the exact opposite of that in group B.

Intervention/Treatment: Other: Elaboration and individual mailed feedback (see above)

Arm: Other: Intervention items 16-30 (group B)

Students in this arm were exposed to intervention and control items in the 10 weekly e-seminars. Assignment of item types was the exact opposite of that in group A.

Intervention/Treatment: Other: Elaboration and individual mailed feedback (see above)

Outcome Measures

Primary Outcome Measures

  1. Clinical reasoning performance [Nine months after the start of the study]

    Within-subject difference in percent scores in intervention versus control items in the retention test six months after the last e-seminar

Secondary Outcome Measures

  1. Predictors of exam performance [Three (exit exam) and nine (retention test) months after the start of the study]

    Unadjusted and adjusted linear regressions with percent scores in the exit exam and retention test as dependent variables and student characteristics as well as their engagement with key feature questions as independent variables

Eligibility Criteria

Criteria

Ages Eligible for Study:
18 Years and Older
Sexes Eligible for Study:
All
Accepts Healthy Volunteers:
Yes
Inclusion Criteria:
  • Enrolment in three consecutive undergraduate teaching modules in Year 4 at Goettingen Medical School in winter term 2018/19
Exclusion Criteria:
  • no informed consent

Contacts and Locations

Locations

Site: University Medical Centre Göttingen
City: Göttingen
Country: Germany
Postal Code: D-37075

Sponsors and Collaborators

  • University Medical Center Goettingen

Investigators

  • Principal Investigator: Tobias Raupach, MD, University Medical Centre Göttingen

Study Documents (Full-Text)

More Information

Publications

Responsible Party:
Tobias Raupach, Professor, University Medical Center Goettingen
ClinicalTrials.gov Identifier:
NCT05585892
Other Study ID Numbers:
  • 17/9/18
First Posted:
Oct 19, 2022
Last Update Posted:
Oct 19, 2022
Last Verified:
Oct 1, 2022
Individual Participant Data (IPD) Sharing Statement:
No
Plan to Share IPD:
No
Studies a U.S. FDA-regulated Drug Product:
No
Studies a U.S. FDA-regulated Device Product:
No
Keywords provided by Tobias Raupach, Professor, University Medical Center Goettingen

Study Results

No Results Posted as of Oct 19, 2022