Abstract
Objective structured clinical examination (OSCE) is the main assessment tool for the end-of-semester (EOS) summative continuous assessment during clinical training (Phase 2) at Taylor's Clinical School (TCS). The OSCE is a clinical competency assessment: a candidate who passes the OSCE is deemed clinically competent. The traditional practice of fixing the pass score at 50 % in any examination is arbitrary; in an OSCE, a pass score of 50 % may not represent the actual competency required. In recent years, many medical schools have applied standard setting methods to the OSCE so that a defensible, fair and absolute pass score is determined. The aim of this study is to describe the OSCE cut-off score derived by the borderline regression method (BRM) at TCS compared with the conventional arbitrary pass mark of 50 %. The study addressed two research questions: (1) What is the difference in the OSCE cut-off (pass mark) when BRM standard setting is applied? (2) What is the difference in the OSCE pass rate when BRM is applied? The results of the EOS 5 and EOS 7 OSCEs were tabulated, and BRM standard setting was applied to both. The mean score of both EOS OSCEs was significantly lower after BRM standard setting (P = 0.001). BRM standard setting identified more poor performers who would have passed if the conventional arbitrary pass mark of 50 % had been applied. We concluded that BRM standard setting is feasible and is a reasonable as well as defensible method of standard setting for the OSCE. We recommend BRM for all OSCEs at Taylor's Clinical School.
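For readers unfamiliar with the mechanics of BRM, the following is a minimal illustrative sketch, not the study's own analysis and not its data. It assumes the usual BRM setup: at each station the examiner records both a checklist score and a global rating on an ordinal scale (here hypothetically 1 = clear fail, 2 = borderline, 3 = clear pass, 4 = good, 5 = excellent); the checklist scores are regressed on the global ratings, and the station's cut score is the predicted checklist score at the borderline rating.

```python
# Illustrative sketch of the borderline regression method (BRM) for a
# single OSCE station. All data here are hypothetical examples, not
# results from the study.

def brm_cut_score(ratings, scores, borderline=2.0):
    """Fit an ordinary least-squares line of checklist scores on global
    ratings and return the predicted score at the borderline rating."""
    n = len(ratings)
    mean_x = sum(ratings) / n
    mean_y = sum(scores) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(ratings, scores))
    sxx = sum((x - mean_x) ** 2 for x in ratings)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    # Cut score = regression line evaluated at the "borderline" grade.
    return intercept + slope * borderline

# Hypothetical station data: examiner global ratings (1-5 scale) and
# the corresponding checklist scores (%).
ratings = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5]
scores = [35, 48, 52, 60, 58, 65, 70, 74, 82, 88]

cut = brm_cut_score(ratings, scores)
print(round(cut, 1))  # station pass mark implied by the regression
```

In a full OSCE, a cut score would typically be computed per station and the exam-level pass mark obtained by summing or averaging across stations; the sketch above shows only the per-station regression step.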
© 2016 Springer Science+Business Media Singapore
Cite this paper
Loh, KY., Nurjahan, M.I., Ong, K.K., Roland, G.S., Noor, A.R. (2016). OSCE Standard Setting by Borderline Regression Method in Taylor’s Clinical School. In: Tang, S., Logonnathan, L. (eds) Assessment for Learning Within and Beyond the Classroom. Springer, Singapore. https://doi.org/10.1007/978-981-10-0908-2_5
Print ISBN: 978-981-10-0906-8
Online ISBN: 978-981-10-0908-2