Competency-based education expresses competency as a progression through milestones: a criterion-referenced, performance-based trajectory from novice to expert for any number of behaviors. The framework most commonly used for expressing milestones in clinical contexts is Miller’s Pyramid of Clinical Skills, Competency, and Performance: Know, Know How, Show How, Do.
As discussed in my last blog post, competency-based education prioritizes the objectives and outcomes (and outcome-based objectives) aspects of instructional design. Technically, it does not dictate specific curricula or approaches to instruction. However, the very nature of outcome-based objectives implies a best fit with certain learning and assessment philosophies, approaches, and activities.
As Vanderbilt University School of Medicine discovered (Miller et al., 2010), if a medical school wants to anchor its curriculum to the ACGME core competencies, it needs to provide students with opportunities to practice and demonstrate behaviors that align with those competencies. While input activities such as listening (e.g. lectures), reading, or watching are appropriate for the introduction of new information, social learning settings for case- or problem-based learning are best suited for factual integration and application.
While medical knowledge can be assessed through a series of frequent low-stakes assessments and less frequent high-stakes exams (which mimic standardized national licensing exams), multi-faceted qualitative rubric assessments work best for assessing other competencies. Vanderbilt has published at length on these (Pettepher et al., 2016) and on how those data points contribute to a developmental portfolio in which students and their coaches can follow the student’s progress from novice to expert (Lomis et al., 2017).
There is more to the story – much more.
This blog post will focus on the assessment approaches and tools that facilitate competency-based education at undergraduate, graduate, and postgraduate levels. Not much time will be spent on knowledge assessments (e.g. quizzes and tests, because they are familiar) or qualitative rubric assessments (because I focused on them in my last post). Instead, I will focus on how we assess competence in the workplace – at the “show how” and “do” levels of Miller’s pyramid.
“Show How” marks the transition from classroom to workplace-based learning. In the Flexner years, undergraduate medical students spent the first two years in the classroom (“preclinical years”) before engaging in hospital- and outpatient-based clinical clerkships for their third and fourth years (“clinical years”) of schooling.
In the Beyond-Flexner era of medical education, the “preclinical” label is no longer accurate, because students are moving into clinical settings and demonstrating basic clinical skills as early as the first month of medical school. Additionally, current practice conditions require clinicians to learn new procedures and demonstrate their competence through formalized and documented processes. Therefore, the “Show How” levels of clinical competence are no longer the sole purview of graduate medical education. Learners at all levels will encounter “Show How” assessments.
Competencies versus Entrustable Professional Activities
One of the most consistently-voiced critiques of the ACGME core competencies relates to the nature of the competencies themselves. While they are meaningful at the individual skill level, successful clinical practice is more than the sum of its parts; practitioners must be able to integrate the individual skills and exercise context-based judgment on how to use them. Critics argue that “checking off competencies” will never achieve appropriate levels of authentic assessment. Thus, Entrustable Professional Activities (EPAs) were born.
Best source for learning more:
Englander, R. & Carraccio, C. (2014). EPAs, Competencies and Milestones: Putting it all Together. 2014 Fall APPD Meeting. Retrieved from: https://pdfs.semanticscholar.org/presentation/a3fe/6090d11619a87585aa94bba9660ce24ce31f.pdf
As described by ten Cate, Snell, and Carraccio (2010), EPAs are the interplay of core competencies and authentic practice. They are founded in ‘entrustment,’ which Kennedy et al. (2008) define by these criteria:
- Ability or level of knowledge, skills, and/or attitudes
- Discernment (knowing one’s limits)
As Englander & Carraccio (2014) explain, entrustment implies competence through a lens of supervision – a lens which is consistent with the culture of health care and intuitive for its practitioners. The following table, which is a mashup of Englander & Carraccio (2014) and ten Cate and Scheele (2007), differentiates between competencies and EPAs.
| Competency | EPA |
|---|---|
| Describes the ability of the practitioner | Describes the outcome of the work |
| Context-independent | Embedded in a clinical setting and situation |
| Addresses the knowledge/skills/attitudes of a specific task | Integrates multiple tasks required to provide successful patient care |
Example of a Competency: Gathers and synthesizes essential and accurate information to define the clinical problem.
Example of an EPA: Care of a complicated pregnancy.
“Can you be entrusted with care?” translates to “Do you need to be supervised to make sure things are done correctly?” The binary response (yes or no) is particularly useful in assessment.
Best source for learning more:
Ten Cate, O. (2013). Nuts and bolts of entrustable professional activities. Journal of Graduate Medical Education, 5(1), 157-158.
In the ten years that Olle ten Cate has been writing on EPAs, he has developed levels of supervision that allow for a progression from novice to expert. Although they have evolved over time, this list was published in the Journal of Graduate Medical Education as part of a practice bulletin (of sorts) in 2013:
- Level 1. Observation of task with no involvement in execution.
- Level 2. Execution of task with proactive, direct supervision.
- Level 3. Execution of task with supervision easily accessed on request.
- Level 4. Execution of the task with off-site or post-hoc supervision.
- Level 5. Providing supervision for more junior colleagues.
Achievement of an EPA takes place when the learner moves from Level 3 to 4.
Ten Cate (2013) explains that the ultimate aim of EPAs is not independent practice in the literal sense, since healthcare is an interdependent act. [Author’s Note: Writing this has made me realize that I need to edit the Draft Summary Table so that ‘Do’ reads as “Qualified to Teach” rather than “Unsupervised Practice.”]
Assessment Tools for Workplace-Based Assessment
The ability to “Show How” requires assessments that allow for demonstration. Norcini & Burch (2007) outline a handful of approaches to workplace-based assessment.
Best source for learning more:
Norcini, J. & Burch, V. (2007) Workplace-based assessment as an educational tool: AMEE Guide No. 31. Medical Teacher. 29(9-10), 855-871. DOI: 10.1080/01421590701775453
- Mini-CEX (Mini Clinical Examination). The mini-CEX was first described by the American Board of Internal Medicine (Norcini et al., 1995) as a formative assessment process for faculty evaluation of physicians-in-training as they interact with patients. Subsequent research suggests mini-CEX assessments perform comparably to simulation-based assessments but offer higher fidelity and potentially lower cost (Norcini et al., 2003). The process involves the following sequence:
- Direct Observation
- Learner Self-Evaluation
- Structured, faculty-driven written and verbal feedback
- Development of an action plan for improvement
- Follow-up with multiple mini-CEXs through the year [with different faculty assessors].
- Clinical Encounter Cards (CEC). The CEC system was developed at McMaster University (Hatala & Norman, 1999) before its implementation and study at other centers. It is similar to the mini-CEX; faculty assessors use pre-printed notecards to score performance and provide structured feedback. The system has been shown to provide feasible, valid, and reliable measurements.
- Blinded Patient Encounters. First described in undergraduate clinical clerkships in South Africa (Burch, Seggie, & Gary, 2006), trainees are assessed during bedside sessions in which they take a history, perform an exam, and formulate a plan of care without prior access to the medical record or diagnosis. This allows students to develop their clinical diagnostic skills without leaning on the work previously done by more experienced practitioners. The assessment includes direct observation and structured feedback using performance rating scales.
- Direct Observation of Procedural Skills. Developed in the UK, this system involves the direct observation and scoring of performance on a 6-point rating scale. Physicians-in-training are given the list of procedures for which they will need to be assessed in advance. For a resident physician, typical procedures might include endotracheal intubation or arterial blood sampling. For medical students, typical procedures might include venipuncture or urethral catheterization. The sequence is designed to involve an approximately 15-minute direct observation followed by a 5-minute structured feedback session.
- 360-Degree Assessments. These structured questionnaires are filled out by multiple stakeholders surrounding the physician-in-training, including peers, supervisors, and ancillary staff.
Next blog post: Personalized Learning, Competency-Based Assessment and Oregon Health and Science University