Interactive whiteboard use in clinical reasoning sessions to teach diagnostic test ordering and interpretation to undergraduate medical students
Over-testing of patients is a significant problem in clinical medicine that can be tackled through education. Clinical reasoning learning (CRL) is a potentially relevant method for teaching test ordering and interpretation. Its feasibility might be improved by using an interactive whiteboard (IWB) during the CRL sessions to enhance student perceptions and behaviours around diagnostic tests. Overall, IWB/CRL could improve students' skills.
Third-year undergraduate medical students enrolled in a vertically integrated curriculum were randomized into two groups before clinical placement in either a respiratory disease or respiratory physiology unit: IWB-based CRL plus clinical mentoring (IWB/CRL + CM: n = 40) or clinical mentoring only (CM-only: n = 40). Feasibility and learning outcomes were assessed. In addition, questionnaire feedback from the IWB students was compared with that of their classmates (n = 233).
Analyses of the IWB/CRL sessions (n = 40, 27 paperboards) revealed that they met validated learning objectives. Students perceived the IWB as useful and easy to use. After the IWB/CRL + CM sessions, students mentioned more hypothesis-based indications in a test ordering file (p < 0.001) and looked for more nonclinical signs directly on raw data tests (p < 0.01) compared with students in the CM-only group. Last, among students who attended pre- and post-assessments (n = 23), the number of diagnostic tests ordered did not change in the IWB/CRL + CM group (+7%; p = NS), whereas it increased among CM-only students (+30%; p < 0.001). Test interpretability increased significantly in the IWB/CRL + CM group (from 4.7 to 37.2%; p < 0.01) but not in the CM-only group (from 2.4 to 9.8%; p = 0.36).
Integrating IWB into CRL sessions is feasible to teach test ordering and interpretation to undergraduate students. Moreover, student feedback and prospective assessment suggested a positive impact of IWB/CRL sessions on students’ learning.
Keywords: Laboratory radiology test, Clinical reasoning, Technology, Cognitive load
Laboratory, radiology, functional and nuclear medicine tests are affordable tools in modern medicine. However, concern about the overuse of these techniques has been growing [2, 3, 4]. For instance, the contribution of laboratory investigations to final diagnosis remains lower than that of medical history and clinical examination [5]. In addition, recent retrospective reports have shown the high prevalence of inappropriate and avoidable tests [3, 6, 7, 8, 9]. Over-ordering increases patient discomfort and harm related to iatrogenesis [7, 9], may result in false-positive results, and wastes healthcare resources.
Inappropriate test ordering can be improved by education and training in test requests, interpretation and use [3, 4], as the inappropriateness is often due to the physician's uncertainty about the test indications, performance, feasibility, contraindications and risks, as well as a lack of knowledge about better alternatives. In addition, education in clinical assessment (pre-analytical) and in the interpretation of first-line diagnostic tests may reduce the need for more invasive and expensive tools.
Studies have shown that long-term education programs can improve future clinical practice. Thus, early instruction in test ordering and interpretation for undergraduate medical students in embedded courses has emerged as a relevant educational strategy. Harendza et al. showed that students in a vertically integrated curriculum (having learned to identify the clinical question, the technical and diagnostic performance of tests, and how test results impact decisions) ordered fewer diagnostic tests than those in a traditional curriculum. In line with this strategy, clinical reasoning learning (CRL) might help build skills in test ordering and interpretation. CRL is an extension of the problem-based learning approach for medical and clinical problems [14, 15, 16]. It encourages students to mobilize and reorganize their knowledge and provides remediation for students with clinical reasoning difficulties. With this method, clinical assessment and test ordering and interpretation are embedded in small-group training. In the most prevalent CRL approach (serial-cue), one student simulates a previously examined patient and progressively reveals the clinical signs or diagnostic tests to the other students. In a recent study, CRL improved student perceptions of their ability to request relevant and hypothesis-based tests. However, training in test interpretation has never been evaluated in CRL sessions.
To assess the feasibility of IWB/CRL sessions to teach test ordering and interpretation to third-year undergraduate medical students.
To compare the feedback from these medical students with the feedback from third- to sixth-year medical students who followed traditional courses on test ordering and interpretation.
To compare improvements in the appropriateness of test ordering and interpretation in third-year undergraduate medical students after 2 months of a vertically integrated module with and without IWB/CRL sessions.
Participants and educational context
Clinically assess the patient.
Identify the question/the indication.
Suggest one or more hypotheses.
Assess potential contraindications, feasibility, requirements and risks of the diagnostic test.
Discuss potential alternatives to the diagnostic test.
Specify the conditions for carrying out the test.
Inform the patient.
Verify the interpretability of the results.
Compare the results with the reference values, previous personal values, and post-challenge values.
Interpret the results: use the positive and negative results/paraclinical signs to answer the question − affirm or eliminate the hypothesis.
Study 1 (part 1): feasibility
Third-year undergraduate medical students at the Montpellier-Nîmes School of Medicine enrolled in the 2-month module on cardiovascular and respiratory medicine were randomly assigned to a respiratory disease or a respiratory physiology unit. Randomization was performed in blocks of 20 students. A computer-generated list of random numbers was used to assign the students to the study groups. Each student assigned to the respiratory physiology unit received four IWB/CRL sessions (90 min each) and clinical mentoring (IWB/CRL + CM group), while students assigned to the respiratory disease unit followed the traditional curriculum with clinical mentoring only (CM-only group). The feasibility of the IWB/CRL sessions was assessed by analysing the paperboards from the sessions.
Study 2: student feedback
Through their university email addresses, all medical students at the Montpellier-Nîmes School of Medicine were invited to answer an online questionnaire (Additional file 1. Questionnaire). The questionnaire responses were compared with those of the third-year medical students after their last session of IWB/CRL.
Study 1 (part 2): learning outcome assessment
During placements in the respiratory disease and respiratory physiology units, students learned about the diagnostic tests used in the field of respiratory medicine (spirometry, exercise testing, sleep tests, imaging, endoscopy). Students in both groups were asked to clinically examine their patients and report the findings individually. The respiratory disease and respiratory physiology units shared the same patients. Learning was assessed by prospectively comparing the appropriateness of test ordering and interpretation in both student groups with a 1-h test at the beginning and end of their 2-month clinical placement.
The CRL sessions were run in small groups of six to eight students, with a physician or resident as the facilitator. A real clinical encounter was simulated using the serial-cue method. While the students “played the doctor” and actively gathered the information necessary for diagnosis, one student “played the patient” and progressively revealed clinical signs. The students recorded the information on the IWB. The facilitator regularly asked them why they had requested certain clinical information or ordered a test, prompting them to express/write their clinical questions and hypotheses on the IWB. The facilitator was also able to gain access to their reasoning and correct errors by asking how the clinical or other information would help to affirm/reject a diagnostic hypothesis. The diagnostic tests that were ordered were written on the IWB and discussed. Then, the raw test data (previously digitized as JPEG files) were displayed onscreen, and any signs underlying the clinical manifestations were annotated for further interpretation. Last, the facilitator asked the students to return to the simulated patient’s initial complaint and the clinical questions/hypotheses so that they could prepare a summary and conclude the IWB/CRL session. The final paperboard of the session was then provided to the students as a .pdf file.
Analysis of the IWB/CRL sessions (study 1; part 1)
A review team of three teachers (FG + CH + MH) predefined items that matched the validated course objectives for diagnostic test ordering and interpretation as set by the French College of University Teachers in Health. The .pdf files of the paperboards from the IWB/CRL sessions were then blindly reviewed by two teachers (FG + CH), and all items were assessed. Last, the agreement of the two teachers’ assessments was rated for each item. Depending on the item, inter-teacher agreement was fairly good to excellent, with Lin concordance coefficients from 0.86 [0.68–1.04] to 0.97 [0.91–1.01].
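Inter-rater agreement was quantified here with the Lin concordance coefficient, which can be computed directly from the two teachers' paired ratings. The sketch below (in Python rather than the study's R, with made-up scores; the study's actual ratings and confidence intervals are not reproduced) shows the underlying formula:

```python
def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two paired raters."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((xi - mx) ** 2 for xi in x) / n
    vy = sum((yi - my) ** 2 for yi in y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    # Unlike Pearson's r, the CCC also penalizes systematic shifts in
    # location (mean difference) and scale (variance difference).
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical per-paperboard item scores from two reviewers
teacher_a = [3, 5, 2, 4, 4, 1]
teacher_b = [3, 4, 2, 4, 5, 1]
agreement = lin_ccc(teacher_a, teacher_b)
```

A coefficient of 1 indicates perfect agreement; values near 0.9 and above, as reported in this study, are conventionally read as excellent concordance.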
Student feedback on test ordering and interpretation courses (study 2)
At the end of the clinical placement, students who participated in the IWB/CRL sessions were asked to respond to an online questionnaire (10 min; Additional file 1. Questionnaire). The questions were designed to determine whether the students had indeed learned test ordering and interpretation as set out in the course objectives of the French College of University Teachers in Health. In addition, four questions about perceptions of the IWB were added, in accordance with the technology acceptance model . Perceived usefulness (utility) and perceived ease of use (usability) were assessed with Likert scales for each dimension . Questions also addressed student perceptions of their curriculum, practice and self-confidence in diagnostic test ordering and interpretation. In parallel, this online questionnaire was administered to all third- through sixth-year students following the vertically integrated curriculum.
Examination and assessment of the learning outcomes (study 1; part 2)
The number of ordered tests that could affirm or eliminate a hypothesis;
The ratio between ordered tests that could affirm or eliminate a hypothesis and all tests ordered. The correspondence between tests and indications was defined as:
The number of ordered tests that were validated for an indication, as defined in the French reference document on respiratory diseases for undergraduate students (http://cep.splf.fr/enseignement-du-deuxieme-cycle-dcem/referentiel-national-de-pneumologie/);
The ratio between tests validated for an indication and all tests ordered.
Test identification and interpretability were recorded. Last, the number of signs underlying the clinical manifestations and the appropriateness of the hypotheses were also recorded. The answers were blindly reviewed by the two teachers. Depending on the item, the inter-teacher agreement was fairly good to excellent, with Lin concordance coefficients from 0.88 [0.82–0.90] to 0.98 [0.97–0.99].
The anonymized data were statistically analysed. Quantitative data were presented as means ± SD or medians [IQR 25–75] depending on the results of the Kolmogorov-Smirnov test of normality. For the feasibility and feedback studies, quantitative data were analysed using one-way ANOVA and an independent t-test. Qualitative data were analysed with the Kruskal-Wallis test, a two-proportion Z test and Fisher’s exact test. For multiple comparisons, a Bonferroni correction was performed. In the learning outcome study, two groups (IWB/CRL + CM and CM-only) were assessed twice (before and after 7 weeks of clinical placement), with four clinical cases. Thus, data were analysed with a multilevel linear mixed-effect model with two nested levels of random effects, student identity (Level 1) and clinical case (Level 2), to take into account the dependency of the data. In this model, we used the Time (T) and Group (G) effects, as well as their interaction (G × T), as fixed effects. The analyses were completed with Fisher’s LSD post-hoc test when the G × T interaction term was significant. The normal distribution of the residuals was verified with a Q-Q plot for each model. Data were analysed with R 3.5.0 software (www.r-project.org). A p-value < 0.05 was considered significant.
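The Bonferroni step mentioned above is simple to state: with m comparisons in a family, each raw p-value is multiplied by m (capped at 1) before being compared with the 0.05 threshold. A minimal sketch (in Python rather than the study's R; the p-values are illustrative, not the study's):

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction for m comparisons: adjusted p = min(1, p * m)."""
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    # A comparison stays significant only if its adjusted p-value
    # remains below the family-wise alpha.
    significant = [p_adj < alpha for p_adj in adjusted]
    return adjusted, significant

# Three hypothetical pairwise comparisons
adjusted, significant = bonferroni([0.004, 0.020, 0.300])
```

Note that the correction is conservative: a raw p-value of 0.020 survives an uncorrected 0.05 threshold but not a three-comparison Bonferroni-adjusted one.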
Feasibility of the IWB/CRL sessions
Analysis of the IWB/CRL sessions
Objectives of learning test requesting, interpretation and use for medical students (N = 22)
1. Clinically assess the patient
2. Identify the question/the indication
• No. of questions/indications: 2.48 ± 1.34
• Type % (diagnosis/aetiology/prognosis/evolution/complication/treatment)
Suggestion of a diagnostic test
• No. of diagnostic tests suggested: 8.05 ± 3.21
• Mention of an indication/test:
  • Test a hypothesis
  • Conform to recommendations
  • “Systematic approach”
  • Assess the time course
  • Adapt the treatment
• Appropriateness of the test regarding the diagnostic hypothesis
• Mention of the looked-for nonclinical signs/test
3. Suggest one or more hypotheses
• No. of hypotheses: 7.23 ± 2.27
4. Assess the potential contraindications, feasibility, requirements and risks of the diagnostic test
• Risks and adverse events discussed
• Feasibility and limits discussed
6. Specify the conditions for conducting the test
• No. of diagnostic tests interpreted: 4.36 ± 1.59
• 0.90 ± 0.89
• 2.09 ± 1.27
• 1.09 ± 0.68
• Nuclear medicine: 0.05 ± 0.21
• Cytology – Anatomic pathology: 0.05 ± 0.21
• 0.05 ± 0.21
8. Verify the interpretability of the results
9. Compare the results with reference values, previous personal values, post-challenge values
• No. of nonclinical signs identified: 8.57 ± 5.27
  • No. of positive signs identified: 5.48 ± 3.14
  • No. of negative signs identified: 3.10 ± 2.93
10. Interpret the results: use signs to answer the question – affirm or eliminate the hypothesis
IWB usefulness and ease of use (N = 27)
1. Was the IWB easy to use? 4.37 ± 0.56
2. Was the IWB useful to learn diagnostic test ordering? 3.89 ± 0.89
3. Was the IWB useful to learn diagnostic test interpretation? 4.11 ± 0.80
4. Was the IWB a useful tool to learn how to use the diagnostic test in clinical situations? 4.11 ± 0.64
Comparison of student feedback: IWB/CRL sessions vs. traditional courses with CM-only
Student feedback about test ordering and interpretation: IWB/CRL sessions (n = 27) vs. traditional learning sessions in the vertically integrated curriculum (n = 40, n = 56, n = 78 and n = 32)
1. Now when I complete a test ordering file, I understand the reason/indication for the test: 3.37 ± 1.01 / 3.95 ± 0.83 / 3.68 ± 0.79 / 4.01 ± 0.81 / 4.50 ± 0.62*
2. Now when I complete a test ordering file, the most frequent reason/indication that I specify is (%):
• I never specify a reason or indication/I specify the resident/senior’s request
• To test (affirm or eliminate) a hypothesis
• To conform to recommendations
• As “a systematic approach”
• To assess the time course
• To adapt the treatment
3. Now when I complete a test ordering file, I specify one or more nonclinical signs to be looked for: 2.81 ± 1.17 / 2.51 ± 1.10 / 2.63 ± 1.10 / 3.14 ± 1.18 / 3.61 ± 1.12*
4. Now when I complete a test ordering file, I integrate the risks and limitations into the decision: 3.46 ± 1.07 / 3.04 ± 0.85 / 3.50 ± 0.83# / 3.72 ± 0.77#
5. Now I look for positive and negative nonclinical signs directly on the raw data and not on the report: 3.54 ± 1.03** / 2.77 ± 0.87 / 2.48 ± 0.85 / 2.70 ± 0.92 / 3.47 ± 0.88**
Learning outcomes in the IWB/CRL sessions vs. traditional sessions with CM-only
Change in pre- and post-test in the IWB/CRL + CM group (N = 12) and the CM-only group (traditional learning sessions; N = 11). Values are pre / post for the IWB/CRL + CM group vs. pre / post for the CM-only group.
Hypothesis proposed (n): 2.33 ± 1.26 / 2.89 ± 1.36 vs. 2.53 ± 1.32 / 2.98 ± 1.42 (T: p < 0.001)
Diagnostic test ordered (n): 4.25 ± 1.67 / 4.55 ± 1.73 vs. 4.42 ± 1.67 / 5.76 ± 2.01 (G × T: p < 0.01)
Clear indication specified (n): 1.81 ± 1.81 / 2.66 ± 2.04 vs. 1.58 ± 1.61 / 2.56 ± 1.91 (T: p < 0.001)
Risk and limits mentioned (n): 0.35 ± 0.70 / 0.38 ± 0.97 vs. 0.71 ± 0.99 / 0.95 ± 1.43
Test requirements mentioned (n): 0.17 ± 0.52 / 0.23 ± 0.70 vs. 0.37 ± 0.76 / 0.68 ± 1.29
Correspondence between test and hypothesis
• Number of appropriate tests (n): 2.77 ± 1.80 / 3.47 ± 1.79 vs. 2.72 ± 1.55 / 3.83 ± 2.01 (T: p < 0.001)
• Ratio of appropriate tests: 1.32 ± 0.92 / 1.29 ± 0.64 vs. 1.21 ± 0.88 / 1.33 ± 0.64
Correspondence between indication and hypothesis
• Number of appropriate tests (n): 1.15 ± 1.35 / 1.98 ± 1.79 vs. 0.98 ± 1.12 / 1.72 ± 1.43 (T: p < 0.001)
• Ratio of appropriate tests: 0.53 ± 0.61 / 0.70 ± 0.71 vs. 0.45 ± 0.56 / 0.59 ± 0.44
Correspondence between test and indication
• Number of appropriate tests (n): 1.63 ± 1.65 / 2.51 ± 1.90 vs. 1.33 ± 1.39 / 2.21 ± 1.72 (T: p < 0.001)
• Ratio of appropriate tests: 0.39 ± 0.38 / 0.52 ± 0.36 vs. 0.32 ± 0.31 / 0.40 ± 0.29 (T: p < 0.001)
Extra-clinical signs found
• Number of true extra-clinical signs (n): 1.63 ± 1.14 / 2.57 ± 1.72 vs. 1.29 ± 1.11 / 2.05 ± 1.66 (T: p < 0.001)
• Number of extra-clinical signs consistent with the proposed hypothesis (n): 0.60 ± 0.76 / 1.23 ± 1.07 vs. 0.52 ± 0.74 / 1.07 ± 1.33 (T: p < 0.001)
Change in pre- and post-test in the IWB/CRL group (N = 12) and the CM-only group (traditional learning sessions; N = 11)
Identification of the diagnostic test (1/0)
This study shows the feasibility of using an IWB during CRL sessions to teach diagnostic test ordering and interpretation. The sessions were feasible for undergraduate medical students, who met the validated learning objectives. To our knowledge, this is the first study in medical education showing a change in students’ perceived practices and skills in test ordering and interpretation.
IWB for CRL sessions: potential and feasibility for achieving learning objectives
In previous systematic reviews outside the medical field, adding the IWB to problem-based learning methods was shown to have potentially positive effects on learning [22, 26]. Doing so appears to improve data sharing and the observation of nonclinical signs [20, 21], and it may also scaffold reasoning, enquiry learning and hypothesis generation. Our study confirmed this potential in medical education. Although we did not compare learning in the IWB/CRL sessions versus CRL sessions alone, 8.57 ± 5.27 nonclinical signs were observed across 4.36 ± 1.59 tests in the IWB/CRL sessions. In addition, hypotheses/tests were systematically suggested, and the observed nonclinical signs were used to affirm or eliminate a hypothesis in more than 50% of the sessions. Thus, the IWB/CRL sessions seemed to enhance student participation, as indicated by the detail recorded on the IWBs, in contrast to the student feedback on the traditional medical curriculum regardless of teaching modality, as previously shown [28, 29].
Our interest in assessing the feasibility of using an educational technology (IWB) during CRL sessions was based on ergonomic principles in the educational sciences [25, 30]. The IWB appeared easy to use and useful for third-year undergraduate medical students. Likert-scale grading was in line with the study of Jain et al. with an older device [21]. Given the methodological limitations of open and questionnaire-based assessments, the usefulness of the IWB was also evaluated by assessing the correspondence between the learning outcomes and the validated learning objectives. Because we analysed the paperboards and not the sessions directly, some of the learning outcomes may have been underestimated, and thus the correspondence with the learning objectives might have been even better. Last, as noted above, we did not compare the usefulness of the IWB during CRL sessions with that of traditional CRL sessions. Yet, it seems evident that some of the objectives (verify the interpretability, identify nonclinical signs, etc.) could not have been addressed during our CRL sessions without the IWB. Although other technologies (e.g. multimedia projectors, tablets, smartphones, laptops) might also have augmented clinical reasoning skills or learning during CRL, our study provides quantified feasibility data about IWB use during CRL sessions that should encourage the behavioural intention of learners and teachers.
Students’ self-reported practices for diagnostic test ordering and interpretation
After the IWB/CRL sessions, the students’ questionnaire responses about test requests and interpretation revealed attitudes and beliefs that differed from those of their classmates. The differences in mentioning the indication and the nonclinical signs to look for agreed with the paperboard observations. Interestingly, the self-perceived understanding of the reason/indication for tests was lower in the third-year students with IWB/CRL sessions than in students following the traditional curriculum. Yet, this agreed with the moderate or poor correlation between self-perceived competence and objective assessment in undergraduate medical students [32, 33, 34]. Strikingly, after the IWB/CRL sessions, these third-year students no longer gave unspecified indications. For the CM-only students in the traditional courses, unspecified indications (“I never specify the indication” or “I specify the resident/senior physician’s request” and the “systematic approach”) were quite prevalent and depended on the mentor and the educational context. Detsky et al. pointed out the role of the trainee’s identification with his/her mentor’s practice and of an educational system that rewards exhaustivity in medical over-testing. Therefore, our educational approach showed a relevant educational effect.
However, despite the feasibility and the positive student feedback, learning objectives may not have been fully achieved. Indeed, cognitivists have shown that educational technologies can place an extraneous cognitive load on working memory that hampers learning [36, 37, 38]. This is a major issue for the IWB, as evidence of the impact on learning is lacking in health sciences education . Therefore, using the Study 1 design, we paired the self-reports with a prospective randomized controlled study.
Objective improvements in learning outcomes during diagnostic test education
In the prospective randomized controlled study, the students overall showed progress in aligning tests with hypotheses (reasoning) and indications (knowledge). While CM-only students increased the number of diagnostic tests ordered, the students who attended the IWB/CRL sessions did not, despite a similar improvement in test appropriateness, indicating that these students were more thoughtful in their test ordering.
The learning effect might have been directly associated with the use of the IWB during the CRL sessions. Indeed, the paperboard analysis revealed the hypotheses that needed to be affirmed/eliminated and the nonclinical signs that came to the students’ attention. The paperboards were consistent with the student feedback: the students specified more accurate indications and nonclinical signs. Moreover, this greater specificity was in line with studies showing that diagnostic uncertainty [6, 39, 40] and irrational ordering (i.e. not hypothesis-based) are associated with the overuse of diagnostic tests [41, 42]. Conversely, including probabilistic reasoning in education reduces test ordering. A reduction in diagnostic test ordering has been shown in students following a vertically integrated curriculum, and our results with third-year students following this type of curriculum demonstrated that it is possible to further improve the accuracy of their test ordering. Nonetheless, it remains to be demonstrated whether this result can be translated into their future medical practice, as with other effective educational strategies [44, 45].
Perspective: determinants of improvement during IWB-based CRL sessions
The educational background of the medical students was an issue for those teaching diagnostic test ordering/interpretation from the third to the sixth year, as it is for education researchers who study clinical (and nonclinical) reasoning learning [17, 18]. In our study, the students’ experience (age or years of training) did not impact the results of the prospective study. The feedback study did, however, reveal a clear shift in perceptions of test ordering and interpretation skills during undergraduate medical study. This raises questions about the timing of our IWB/CRL sessions on diagnostic tests. Because of low class attendance, third-year medical students in France often lack a solid grounding in physiology and semiology, which might limit the benefits of these sessions for these young medical students. In addition, between the first year and the second and third years, there is a shift in the learning paradigm from knowledge-building to the development of reasoning skills. Therefore, students starting the third year may have memorized considerable medical knowledge but not yet acquired the conceptual understanding that would have enabled them to reason about diagnostic testing. Yet, Allen et al. showed that first-year medical students in the North American system were already able to display clinical reasoning during physical examinations, and our IWB/CRL sessions on diagnostic tests were probably offered at a time when these students were starting to acquire clinical reasoning skills. In addition, the student expertise level has been shown to impact the cognitive load during learning tasks. Indeed, using single/multiple, redundant/complementary, transitory/fixed, unimodal/multimedia, and interactive/isolated materials in specified/unspecified ways for solving complex/simple problems all require different perceptual and cognitive resources in working memory. Strikingly, these effects are reversed in expert learners.
Assessment of the cognitive load through questionnaires or physiological methods would help to better understand these phenomena and adapt IWB/CRL session content to the student level.
This study had several limitations that should be noted. First, the sample size was small, with 40 students in the IWB/CRL + CM group. Nevertheless, this sample comprised almost half (44.5%) of the eligible students in the module on cardiovascular and respiratory disease. In addition, all students must follow this module in the third undergraduate year of medical study. Thus, to some extent, the study sample represented the entire population of third-year undergraduate medical students, as suggested by the non-significant difference in gender (an indicator of randomization) between all third-year students and those included in Studies 1 and 2. Second, the questionnaire response rate was 67% for the third-year students attending the IWB/CRL sessions but only 22% for the other students from the third to sixth year, and this differential response rate may have affected the validity of our findings. While the average response rate in academic studies is 55.6 ± 19.7% [48], response bias remains an issue in questionnaire-based research. Third, the attrition rate was high in the randomized study, with only 29% of the students attending the post-test assessment. However, the students who dropped out did not differ from completers in gender or prior knowledge of respiratory basics (physiology, anatomy, histology, semiology, etc.). More importantly, the attrition rate was not differential, which suggests that the dropouts were likely not linked to the intervention but rather to the lack of reminders sent to the students by the faculty. Nonetheless, this high attrition rate, although not differential, may have altered the study’s external validity. Indeed, those who completed the study may have represented a subgroup of motivated students. This limitation could have been overcome if the faculty had tied the final validation of the clinical placements to participation in these tests.
Last, teacher- and unit-dependency (students in the IWB/CRL + CM group assigned to the respiratory physiology unit may have been more aware of diagnostic test issues) might also have had an impact. Altogether, the learning effect observed in our pilot study must be confirmed in a larger-scale study involving more teachers in various hospital units.
Our study demonstrated the feasibility of integrating the IWB into CRL sessions to teach test ordering and interpretation to undergraduate students. The students were able to focus on the learning objectives via the appropriate use of an educational technology and a validated methodology. Moreover, the feedback from these students revealed different medical attitudes and beliefs regarding test ordering and interpretation, indicating that the teaching “messages” had been heeded. Last, although the additional value of the IWB in the CRL sessions was not tested versus CRL alone, these IWB/CRL sessions influenced the students’ test ordering and interpretation behaviour, which may indicate a positive effect of this combined strategy on learning.
We thank Nancy Carvajal for administrative support during the study.
FG conceived the study, wrote the research protocol and the questionnaires, recruited the students, analysed the data and wrote the manuscript. CH and EP analysed the data, reviewed the paperboards and assisted in writing the manuscript. LB analysed the data, performed the statistical analyses and assisted in writing the manuscript. NK and TP trained the investigators in the use of the interactive whiteboard and educational technologies. JM and MH ensured the feasibility of conducting the study in the respiratory physiology department, provided the educational device and organized the teaching sessions. They also assisted in writing the manuscript. All authors read and approved the final manuscript. Guarantor: FG.
This study was not funded.
Ethics approval and consent to participate
The IRB committee of the Montpellier University Hospital approved the study (2019_IRB-MTP_05–05). An information letter was sent to the participants, and in the absence of specific opposition, consent was considered obtained.
Competing interests
The authors declare that they have no competing interests.
- 4. Morgan S, Coleman J. We live in testing times - teaching rational test ordering in general practice. Aust Fam Physician. 2014;43(5):273–6.
- 5. Peterson MC, Holbrook JH, Von Hales D, Smith NL, Staker LV. Contributions of the history, physical examination, and laboratory investigation in making medical diagnoses. West J Med. 1992;156(2):163–5.
- 8. Miglioretti DL, Smith-Bindman R. Overuse of computed tomography and associated risks. Am Fam Physician. 2011;83(11):1252–4.
- 9. Dale JC, Ruby SG. Specimen collection volumes for laboratory tests. Arch Pathol Lab Med. 2003;127(2):162–8.
- 12. Price CP. Evidence-based laboratory medicine: supporting decision-making. Clin Chem. 2000;46(8 Pt 1):1041–50.
- 19. Harendza S, Krenz I, Klinge A, Wendt U, Janneck M. Implementation of a Clinical Reasoning Course in the Internal Medicine trimester of the final year of undergraduate medical training and its effect on students’ case presentation and differential diagnostic skills. GMS J Med Educ. 2017;34(5):Doc66.
- 21. Jain NL, Murphy JF, Hassan SW, Cunnius EL, Metcalfe ES, Schnase JL, Schoening PA, Spooner SA, Frisse ME. Interactive electronic whiteboards in the medical classroom. Proc Annu Symp Comput Appl Med Care. 1994:54–8.
- 25. Tricot A, Plégat-Soutjis F, Camps J, Amiel A, Lutz G, Morcillo A. Utility, usability, acceptability: interpreting the links between three dimensions of the evaluation of the computerized environments for human training (CEHT). In: Desmoulins C, Marquet P, Bouhineau D, editors. Environnements Informatiques pour l’Apprentissage Humain 2003; Strasbourg, France. ATIEF; INRP; 2003.
- 30. Bétrancourt M. L’ergonomie des TICE : quelles recherches pour quels usages sur le terrain ? Bruxelles: De Boeck; 2007.
- 42. Morgan S, van Driel M, Coleman J, Magin P. Rational test ordering in family medicine. Can Fam Physician. 2015;61(6):535–7.
- 45. Wertheim BM, Aguirre AJ, Bhattacharyya RP, Chorba J, Jadhav AP, Kerry VB, Macklin EA, Motyckova G, Raju S, Lewandrowski K, et al. An educational and administrative intervention to promote rational laboratory test ordering on an academic general medicine service. Am J Med. 2017;130(1):47–53.
- 48. Baruch Y. Response rate in academic studies - a comparative analysis. Hum Relat. 1999;52(4):421–38.
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.