By: Fremen Chihchen Chou (@FremenChou)
Introduction
By distilling the Core Components Framework for evaluating the implementation of competency-based education (CBE) in healthcare professional education (HPE),1 we can identify three primary dimensions of enacting CBE:
- First, the establishment of competency frameworks and operationalized descriptions for sequenced progression, tailored to societal needs and professional characteristics, which act as blueprints for professional development.
- Second, these frameworks of staged progression are translated into bespoke educational designs and curricula, alongside clinically oriented learning experiences crafted to support the acquisition and mastery of competencies.
- Third, within the continuum of teaching and learning, comprehensive collection and aggregation of data points occurs, enabling the assessment of learner progression through collective-intelligence methods such as clinical competency committees. These evaluations guide personalized clinical supervision, informed entrustment decisions, and, where necessary, tailored learning plans or remediation strategies.
This integrative approach characterizes the essence of programmatic assessment.
Reality vs. Intention: Stakes of Assessments
According to the principles of programmatic assessment, an assessment program ideally subjects each outcome competency domain to multiple and varied data-collection efforts, while each data source can in turn provide insights across several competency domains. The collection of assessment data and the evaluation of staged progress are interconnected yet conducted distinctly. Each individual assessment is designed to be low-stakes, serving primarily as assessment for learning rather than solely as assessment of learning.2 In this model, each integrated data source transcends the traditional binary classification of assessments as either formative or summative; instead, it supports summative decision-making through a comprehensive approach involving multiple stages, diverse methods, and repeated measures.
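To make this many-to-many data model concrete, here is a minimal sketch in Python. It is purely illustrative: the names (DataPoint, aggregate_by_domain) and the example sources are invented for this post, not part of any real assessment platform. It shows many low-stakes observations, each informing several competency domains, being pooled per domain so that a body such as a clinical competency committee can weigh the whole evidence trail:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class DataPoint:
    """One low-stakes assessment observation (e.g., a single workplace-based assessment)."""
    source: str              # e.g., "mini-CEX", "multisource feedback" (illustrative)
    domains: dict[str, str]  # competency domain -> narrative or rating

def aggregate_by_domain(points: list[DataPoint]) -> dict[str, list[tuple[str, str]]]:
    """Pool every observation per competency domain; no single point is decisive."""
    pooled: dict[str, list[tuple[str, str]]] = defaultdict(list)
    for p in points:
        for domain, note in p.domains.items():
            pooled[domain].append((p.source, note))
    return pooled

# Each data source informs several domains at once.
points = [
    DataPoint("mini-CEX", {"Patient Care": "focused history", "Communication": "clear explanations"}),
    DataPoint("multisource feedback", {"Communication": "rated highly by team", "Professionalism": "reliable"}),
    DataPoint("chart review", {"Patient Care": "appropriate orders", "Practice-Based Learning": "reads up on cases"}),
]

for domain, evidence in aggregate_by_domain(points).items():
    # A committee would weigh the aggregated trail, not any one entry.
    print(domain, "->", evidence)
```

The design point is that no single entry carries summative weight; the summative judgment rests on the aggregated, repeated measures across methods and stages.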
However, research indicates a discrepancy between the stakes of assessments as perceived by learners and the stakes intended by the design of the assessment program. A critical aspect of understanding how learners actually perceive assessment stakes is recognizing their sense of control over the assessment process and its outcomes or impacts.3 When learners perceive a lack of transparency in how assessment data are handled and used, or when they cannot participate with a degree of autonomy, they may feel insecure about elements beyond their control, causing each point of data collection to be perceived as high stakes. This perception can lead to learners’ reluctance or passive avoidance of participation, ultimately hindering the ideal realization of assessment for learning.
The Role of Documentation in IT-Assisted Assessments
The implementation of programmatic assessment in healthcare professional education (HPE) through competency-based education (CBE) relies significantly on information technology. Intriguingly, the control that learners have over the data stored within these systems, as well as the mechanisms by which training programs manage and use assessment data, remains underexplored. Poor data management can inadvertently amplify the unintended impacts of assessment data, altering its originally intended stakes, undermining learner trust, and leading to disengagement and passive coping behaviors. Several points merit further discussion:
- Learner Handover: Ideally, each stage of education, particularly traditional clinical rotations, would benefit from proper educational handovers about learners. Such handovers could enhance personalized learning guidance. However, in the absence of appropriate management mechanisms, presentation methods, and consideration of learner autonomy within CBE’s assessment information systems, the transfer of such data, especially the inappropriate presentation of a single workplace-based assessment or of integrated post-rotation opinions, can inadvertently reinforce stereotypes rather than positively support learner growth.
- Advocating for the Right to Be Forgotten: Growth is a dynamic process, and each data point has its temporal relevance. As learners progress, control over data from earlier stages becomes crucial. Learners should hold a ‘right to be forgotten’ over their assessment data, ensuring that documentation of early educational challenges or errors, however necessary at the time, does not permanently impact a learner’s career.
The management of data within assessment systems and the degree of learner autonomy and control over this data necessitate further consideration and deliberate design. Such measures would ensure that assessment systems not only support but also respect the evolving nature of learners’ professional development.
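As a thought experiment, one possible shape for such a mechanism is sketched below in Python. Everything here is hypothetical (AssessmentRecord, visible_on_handover, and the learner_released flag are invented names, not features of any existing system): the idea is simply that data from completed earlier stages travels with the learner into a handover only when the learner has explicitly released it, operationalizing a modest ‘right to be forgotten’:

```python
from dataclasses import dataclass

@dataclass
class AssessmentRecord:
    """A stored assessment data point, tagged with the training stage it came from."""
    stage: int                     # training stage when recorded (e.g., PGY year)
    content: str
    learner_released: bool = False # learner consents to carry it forward

def visible_on_handover(records: list[AssessmentRecord], current_stage: int) -> list[AssessmentRecord]:
    """A 'right to be forgotten' filter: data from completed earlier stages
    is shown at handover only if the learner has released it."""
    return [
        r for r in records
        if r.stage == current_stage or r.learner_released
    ]

records = [
    AssessmentRecord(stage=1, content="early struggle with triage", learner_released=False),
    AssessmentRecord(stage=1, content="strong growth in communication", learner_released=True),
    AssessmentRecord(stage=2, content="entrusted for indirect supervision"),
]

# A new supervisor at stage 2 sees current data plus only what the learner released.
for r in visible_on_handover(records, current_stage=2):
    print(r.stage, r.content)
```

Any real mechanism would, of course, need to balance this learner control against documentation that programs legitimately must retain.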
About the author: Fremen Chihchen Chou, MD, PhD, is an Emergency Physician, Assistant Professor in the School of Medicine at China Medical University in Taiwan, and Director of the Center for Faculty Development at its affiliated hospital. Dr. Chou has led the competency-based training reform of the Taiwan Society of Emergency Medicine since 2011. He actively engages with the international CBME community, serving as the International Regional Hub Leader for the Faculty Development Assessment Program in collaboration with the ACGME, a faculty member of the international course Ins and Outs of Entrustable Professional Activities, and an inaugural member of the executive committee of the ICBME Collaborators.
References
1. Van Melle E, Frank JR, Holmboe ES, et al. A core components framework for evaluating implementation of competency-based medical education programs. Acad Med 2019; 94: 1002–1009.
2. Schuwirth LWT, Van der Vleuten CPM. Programmatic assessment: from assessment of learning to assessment for learning. Med Teach 2011; 33: 478–485.
3. Schut S, Driessen E, van Tartwijk J, et al. Stakes in the eye of the beholder: an international study of learners’ perceptions within programmatic assessment. Med Educ 2018; 52: 654–663.
The views and opinions expressed in this post are those of the author(s) and do not necessarily reflect the official policy or position of The University of Ottawa. For more details on our site disclaimers, please see our ‘About’ page.