Trust as a Scaffold for Competency-Based Medical Education

  • Eric Young
  • D. Michael Elnicki

We are in a dynamic time in medical education as competency-based medical education (CBME) moves from theory to implementation. With change comes self-reflection, reexamination, and healthy debate.1 Much of the debate around CBME stems from the lens through which it is viewed. When competencies and entrustable professional activities (EPAs) are envisioned as checklists for completion, they appear reductionist and far too simple to capture the complex fabric of knowledge, skills, and attitudes we call the practice of medicine.2 However, when we conceptualize CBME through the lens of trusting relationships, greater layers of nuance and value begin to emerge.

At their most fundamental level, competencies, milestones, and EPAs create a shared mental model for the desired outcomes of our educational endeavors. This shared framework is essential for the relationships that underpin our education system as a whole—relationships between learners and direct supervisors, between trainees and educational training programs, and between those training programs and society. As the phase of defining core competencies comes to a close, we are entering a period of inquiry focused on those relationships that support CBME. From this inquiry, we are beginning to understand the paramount importance of trust.3

Trust is a unifying principle at all levels within a CBME system. At the level of the learner and the direct supervisor, bi-directional trust fosters self-reflection and honest feedback essential for professional development. Between learners and the training programs that assess them, trust is imperative for creating a just system that promotes success for all trainees. Finally, society places trust in our training programs to produce physicians who can provide thoughtful, compassionate care to our loved ones. In this medical education theme issue of JGIM, five articles offer us further insight into the role trust plays within these diverse relationships that support CBME.

In a qualitative study of clerkship students, Karp and colleagues explored students’ perceptions of trust in a clinical learning environment.4 Through thematic analysis of semi-structured interviews with clerkship students, three key themes emerged. First, appropriate levels of trust form a scaffold for learning through deliberate guidance. Second, trust dramatically impacts learning climate: when metered appropriately, learning accelerates; when metered inappropriately, learning suffers. Lastly, inappropriate levels of trust may have negative consequences for patients, as students do not feel empowered to decline assigned tasks they are not qualified to perform. This study offers new insight into the vital role of trust within the dyad of student and clinical supervisor. It leads to the natural question: how do we structure these relationships to foster trust and facilitate students’ rapid progression towards competency?

Within a CBME framework, feedback is essential for learners to gauge their progression and identify areas for growth. In a perspective piece, Ramani et al. explore the interpersonal and environmental factors that allow for productive feedback.5 They argue that the process by which feedback is given may be less important than the dynamics of the relationship between the learner, the educator, and the environment. As they explain, “relationships, not recipes, are more likely to promote feedback that has an impact on learner performance and ultimately patient care.” Not surprisingly, this dynamic hinges on trust. Learners must inherently trust their supervisors and the quality of their observations to internalize feedback and change behavior.

Outside of traditional clinical relationships, coaching has gathered attention as an approach to facilitate healthy professional development. In this issue, Carney et al. describe the development and psychometric evaluation of two instruments for assessing coaching relationships.6 Combining a literature review with an existing instrument, they developed a 31-item tool that was administered to students and faculty members involved in a longitudinal coaching program. Through exploratory factor analysis, they produced instruments assessing students’ and coaches’ perceptions of coaching. The domains reflecting students’ impressions of coaching included promoting self-monitoring, relationship building, promoting reflective behavior, and establishing foundational ground rules. In contrast, coaches defined coaching by the practice of coaching and relationship formation only. Notably, the two instruments shared a common domain of relationship building as a vital thread in the coaching relationship. The authors also note that coaching will only increase in importance as more schools transition from time-based to competency-based assessments.

Two additional pieces in this issue explore the relationship between trainees and the programs that supervise them. In a qualitative study using semi-structured interviews, Frank and colleagues explore how one school’s grading committees use assessment data to guide grading decisions.7 Analysis of interviews from committee members revealed several key themes. First, committee members believed that grading committees create more accountability, standardization, and transparency in the grading process. However, challenges faced by committees included dealing with varying competencies and inconsistent or cryptic data. This was a particular problem if members did not personally know a student. The potential for passivity and groupthink was compounded by these issues. Site directors familiar with students, preceptors, and care environments played a pivotal role in grade determinations. This led the authors to worry that confirmation bias might hinder committees from considering grade proposals that differed from those put forth by site directors. It is imperative that students trust the systems established to assess their progression to competency. This important work by Frank et al. raises many questions, but it demonstrates the commitment to thoughtful self-reflection and continuous improvement that is essential for maintaining trust between learners and those entrusted with their evaluation.

Finally, Hatala et al. offer a perspective piece that dissects key challenges associated with the application of EPAs within internal medicine (IM) residency training.8 The authors argue that the core activities of internal medicine are difficult to parse into discrete tasks because they often unfold over an extended period of time and are influenced by complex interprofessional inputs. The authors reflect on a common concern regarding EPAs, which is that they may be overly reductionist. Similar concerns arose with earlier assessment techniques, such as standardized patient examinations. Furthermore, cognitive tasks may be more difficult to observe and to evaluate than technical and procedural ones. We can anticipate friction as competency-based entrustment begins to replace traditional, time-based seniority. The authors note that as we gather more information from early adopters of entrustability assessments, it is imperative that we pause to reflect on the strengths and limitations being uncovered. This commitment to iterative improvement is essential in maintaining trust among our learners and the broader community for whom they will be providing care.

Taken together, these pieces offer new insights and raise important questions about the role of trust within CBME systems. They invite us to consider how we can structure educational systems that foster trusting relationships wherein the full value of CBME can be realized. This process starts at the level of the learner and direct supervisor who know that trust takes time to develop. Models of clinical supervision must reflect this reality. This concept is one of many reasons some have argued that continuity should be the guiding principle within educational reform.9 Models such as longitudinal integrated curricula (LICs) and clinical coaching programs are rapidly expanding throughout the country. By intentionally creating longitudinal relationships, LICs and longitudinal coaching programs offer fertile soil in which trust can grow and support the progressive achievement of competence.

Simultaneously, we must ensure that our learners can place trust in the systems of assessment guiding their progress. We earn this trust by committing to healthy, transparent self-reflection on our current assessment practices. By focusing a lens of inquiry on traditional means of assessment, we promote iterative progression towards ever-more robust methodologies. This progression may not always be linear; indeed, some novel approaches to CBME may prove to be missteps when rigorously scrutinized. However, we must trust that an arduous and critically reflective evolution in CBME will better serve our learners than maintaining an imperfect status quo.


Compliance with Ethical Standards

Conflict of Interest

The authors have no conflicts of interest to declare.


  1. Holmboe ES, Sherbino J, Englander R, Snell L, Frank JR; ICBME Collaborators. A call to action: The controversy of and rationale for competency-based medical education. Med Teach. 2017;39:574–581.
  2. Huddle TS, Heudebert GR. Taking apart the art: the risk of anatomizing clinical competence. Acad Med. 2007;82:536–541.
  3. Hauer KE, ten Cate O, Boscardin C, Irby DM, Iobst W, O’Sullivan PS. Understanding trust as an essential element of trainee supervision and learning in the workplace. Adv Health Sci Educ Theory Pract. 2014;19:435–456.
  4. Karp NC, Hauer KE, Sheu L. Trusted to learn: A qualitative study of students’ perspectives on trust in the clinical learning environment. J Gen Intern Med. (SPI 4883)
  5. Ramani S, Konings KD, Ginsburg S. Feedback redefined: Principles and practice. J Gen Intern Med. (SPI 4874)
  6. Carney PA, Bonura EM, Kraakevik MD, Juve AM, Kahl LE, Deiorio NM. Measuring coaching in undergraduate medical education: The development and psychometric validation of new instruments. J Gen Intern Med. (SPI 4888)
  7. Frank AK, O’Sullivan P, Mills LM, Muller-Juge V, Hauer KE. Clerkship grading committees: The impact of group decision-making for clerkship grading. J Gen Intern Med. (SPI 4879)
  8. Hatala R, Ginsburg S, Hauer KE. Entrustment ratings in internal medicine: Capturing meaningful supervision decisions or just another rating? J Gen Intern Med. (SPI 4878)
  9. Hirsh DA, Ogur B, Thibault GE, Cox M. “Continuity” as an organizing principle for clinical education reform. N Engl J Med. 2007;356(8):858–66.

Copyright information

© Society of General Internal Medicine 2019

Authors and Affiliations

  1. Denver VA Medical Center, University of Colorado School of Medicine, Denver, USA
  2. Office of Medical Education, University of Pittsburgh, Pittsburgh, USA
