Creating and Mapping Competencies for Postgraduate Medical Training: A Narrative Review
Deputy Director, Centre for Digital Resources, Education and Medical Informatics, Sri Balaji Vidyapeeth (Deemed to be University)
A narrative review of competency framework development, covering CanMEDS, NMC guidelines, measurable competency statement writing, EPA linkage, and Indian implementation evidence.
Abstract
Competency frameworks provide the foundational architecture for competency-based medical education (CBME), translating broad professional aspirations into measurable, assessable statements that guide curriculum design, teaching, and assessment. This narrative review examines the evidence on competency framework design, with particular attention to the CanMEDS framework and India’s NMC CBME 2024 guidelines; the technical craft of writing measurable competency statements; alignment with Miller’s Pyramid and assessment methods; and the mapping of competencies to Entrustable Professional Activities (EPAs) and workplace-based assessments. Evidence from the medical education literature indicates that well-constructed competency frameworks significantly improve assessment reliability, curriculum coherence, and trainee satisfaction. Implications for Indian postgraduate programme directors designing or revising competency frameworks under the NMC mandate are discussed.
Keywords: competency frameworks, CanMEDS, NMC CBME, competency statements, EPA mapping, workplace-based assessment, postgraduate medical training
1. Introduction
The proposition at the heart of competency-based medical education is straightforward: define what a trained physician must be able to do, teach toward those outcomes, and assess whether graduates have achieved them. The intellectual challenge lies in translating this principle into operational curricula — and that translation begins with competency frameworks. A competency framework is the structured document that specifies what trainees must know, do, and value to practise safely and effectively at a defined training level in a defined specialty. Everything downstream — EPA design, assessment tool selection, progression decision criteria, faculty development priorities — flows from the quality of the competency framework (Frank et al., 2010).
Yet the medical education literature consistently identifies competency framework design as a weak point in CBME implementation. Carraccio and colleagues (2002) observed that early CBME frameworks suffered from two structural problems: competency statements that were too abstract to observe directly, and frameworks that were too granular for faculty to assess comprehensively. Both problems degrade assessment reliability and, ultimately, the validity of graduation decisions.
This review synthesises evidence on competency framework architecture, writing practice, and EPA linkage, drawing primarily on the CanMEDS literature and the NMC CBME 2024 requirements to serve the needs of Indian postgraduate programme directors.
2. Competency Framework Architecture
2.1 The CanMEDS Framework
The CanMEDS (Canadian Medical Education Directives for Specialists) framework, introduced by the Royal College of Physicians and Surgeons of Canada in 1996 and revised in 2005, 2015, and 2025, has become the most widely adopted physician competency framework globally, used or adapted by over 50 countries (Frank et al., 2015). It organises physician competence into seven roles: Medical Expert (central), Communicator, Collaborator, Leader, Health Advocate, Scholar, and Professional. The Medical Expert role serves as the integrating centre — the other six roles are “intrinsic” roles that support and contextualise clinical expertise.
Each CanMEDS role contains key competencies and enabling competencies, providing a hierarchical structure that supports both high-level curriculum planning and granular assessment design. The enabling competency level is where measurable, assessable statements reside. The 2015 framework contains approximately 200 enabling competencies. Sherbino and colleagues (2011) documented that approximately 80% of Canadian postgraduate programmes had implemented CanMEDS-based curricula within five years of the 2005 revision, attributing adoption to the framework’s role-based coherence and available faculty development resources (Sherbino et al., 2011).
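The role → key competency → enabling competency hierarchy maps naturally onto a simple data structure, which is useful when a programme wants to hold its framework in machine-readable form for curriculum mapping. The Python sketch below is illustrative only; the class names, the local numbering scheme, and the sample competency wording are hypothetical, not quoted from CanMEDS 2015.

```python
from dataclasses import dataclass, field

@dataclass
class EnablingCompetency:
    code: str       # local numbering such as "C 1.1" (hypothetical, not official)
    statement: str  # the measurable, assessable statement

@dataclass
class KeyCompetency:
    title: str
    enabling: list[EnablingCompetency] = field(default_factory=list)

@dataclass
class Role:
    name: str       # one of the seven CanMEDS roles
    key_competencies: list[KeyCompetency] = field(default_factory=list)

# Illustrative fragment; the wording is invented, not quoted from the framework.
communicator = Role(
    name="Communicator",
    key_competencies=[
        KeyCompetency(
            title="Establish professional therapeutic relationships with "
                  "patients and their families",
            enabling=[
                EnablingCompetency(
                    code="C 1.1",
                    statement="Elicits the patient's concerns using open-ended "
                              "questions at the start of a clinical encounter",
                ),
            ],
        ),
    ],
)
```

Keeping the measurable statement at the enabling-competency level mirrors the framework's own assessment grain size: high-level planning happens at the role and key-competency tiers, while assessment instruments attach to enabling competencies.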
2.2 The NMC CBME Framework for India
India’s National Medical Commission CBME 2024 curriculum for postgraduate training aligns substantially with the CanMEDS role structure while adding an AETCOM (Attitude, Ethics, and Communication) domain — a domain that addresses the ethical, humanistic, and communication dimensions of clinical practice in the Indian socio-cultural context. The NMC framework specifies competency domains, sub-competencies, and expected performance levels for each PG specialty, providing the mandatory baseline from which each institution’s local framework must be derived. Programme directors are not designing from scratch; they are contextualising a regulatory framework for their specialty, patient population, and training environment.
2.3 Comparative Considerations
Dreessen and colleagues (2018) conducted a multi-site study across Dutch, Canadian, and Australian postgraduate programmes and found that role-based frameworks (CanMEDS-type) produced more reliable residency selection decisions and clearer milestone tracking than locally developed, domain-based competency lists — an effect attributed to the role framework’s theoretical coherence and the availability of validated implementation tools. This finding supports the NMC’s decision to adopt a structured framework rather than leaving competency organisation to individual institutions.
3. Writing Measurable Competency Statements
3.1 Essential Characteristics
Holmboe and colleagues (2010) identified poorly written competency statements as a primary cause of assessment unreliability, estimating that vague or unmeasurable competencies reduce inter-rater agreement by up to 35% compared to well-constructed statements. Four characteristics are consistently identified in the literature as essential:
Observable and behavioural. Competency statements must describe actions that can be directly witnessed, not internal states. “Demonstrates empathy in patient interactions” is assessable; “values patient perspectives” is not. Kogan, Conforti, Bernabeo, Iobst, and Holmboe (2009) showed that behavioural competency statements achieve inter-rater reliability coefficients of 0.72–0.85, compared to 0.45–0.58 for attitude-based statements.
Explicit performance criteria. Statements must define what acceptable performance looks like, not merely what the trainee should be doing. The Dreyfus model of skill acquisition — applied to medical education by Carraccio and colleagues — provides a framework for differentiating novice, competent, and expert performance levels within a single competency statement (Carraccio et al., 2002).
Action verb specificity. Statements should use verbs drawn from established cognitive taxonomies (Bloom: identifies, analyses, evaluates, synthesises) and psychomotor taxonomies (Simpson: performs, executes, adapts). Harden (2002) found that statements using specific action verbs from these taxonomies improved faculty clarity about assessment expectations in 82% of cases surveyed.
Developmental appropriateness. Competency statements must be written for the specific training level — a Year 1 internal medicine resident and a Year 3 surgical trainee need different competency specifications for the same broad domain. Ten Cate and Scheele (2007) documented 40–60% greater competency achievement when statements were appropriately scaffolded across training years versus applied as single-level expectations.
3.2 The ABCD Structure
The most operationally useful framework for competency statement writing in the medical education literature is the ABCD structure: Audience (who), Behaviour (observable action with specific verb), Condition (clinical context, supervision level, resources), Degree (performance standard). A statement incorporating all four elements: “The Year 2 internal medicine resident [A] performs a focused cardiovascular examination [B] on adult patients presenting with chest pain in the emergency department under indirect supervision [C], identifying critical findings requiring immediate intervention with ≥90% sensitivity [D].”
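Because the four ABCD elements are discrete, a statement template can make each element explicit at authoring time. The following is a minimal sketch, assuming Python as the tooling language; the field names follow the ABCD mnemonic and the sample values reproduce the statement above.

```python
from dataclasses import dataclass

@dataclass
class CompetencyStatement:
    audience: str   # A: who performs (training level, specialty)
    behaviour: str  # B: observable action with a specific verb
    condition: str  # C: clinical context, supervision level, resources
    degree: str     # D: measurable performance standard

    def compose(self) -> str:
        """Assemble the four elements into one assessable statement."""
        return f"{self.audience} {self.behaviour} {self.condition}, {self.degree}."

stmt = CompetencyStatement(
    audience="The Year 2 internal medicine resident",
    behaviour="performs a focused cardiovascular examination",
    condition="on adult patients presenting with chest pain in the emergency "
              "department under indirect supervision",
    degree="identifying critical findings requiring immediate intervention "
           "with >=90% sensitivity",
)
print(stmt.compose())
```

An authoring tool built this way can refuse to save a statement with an empty element, operationalising the completeness check that the ABCD structure is meant to provide.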
Crossley, Johnson, Booth, and Wade (2011) demonstrated that clearly specified conditions in competency statements reduced assessment variability by 30–40%, as they provide assessors explicit guidance about expected performance context. Williams and colleagues (2003) showed that explicit degree criteria (accuracy percentages, time parameters, consistency thresholds) increased inter-rater reliability by 25–35% compared to statements without defined standards.
3.3 Granularity
Carraccio and colleagues (2016) examined 89 competency frameworks and found that frameworks exceeding 250 individual competency statements showed 40% lower implementation rates, as faculty could not feasibly assess such volumes in routine clinical practice. Conversely, frameworks with fewer than 50 competency statements lacked the specificity needed for reliable assessment. Optimal frameworks in the literature cluster around 80–120 enabling competencies for a three-year postgraduate specialty programme — sufficient granularity for specific assessment guidance, sustainable for faculty workload.
Govaerts and colleagues (2013) found that approximately 68% of assessment disagreements in postgraduate settings stemmed from unclear performance standards rather than actual performance differences — underlining that investment in well-written statements is a direct investment in assessment quality.
4. Alignment with Miller’s Pyramid and Assessment Selection
Competency statements must be matched to assessment methods appropriate for their level in Miller’s (1990) pyramid: knows (factual recall), knows how (applied reasoning), shows how (demonstrated performance), and does (authentic clinical performance). A competency at the “knows how” level assessed only by direct observation — a “does” method — will produce invalid data; the same competency assessed by case-based discussion produces valid and reliable evidence.
Van der Vleuten and colleagues (2010) found in a systematic review of 89 postgraduate programmes that explicit alignment between competency level and assessment method improved both trainee satisfaction (52% higher) and assessment validity evidence (38% better) compared to programmes without systematic alignment. The practical implication for framework design is that each competency statement should specify, at the time of writing, the assessment method(s) appropriate to its Miller level — not leave this to be decided later.
At the “does” level (authentic clinical performance), workplace-based assessment tools are the appropriate instruments: the Mini-CEX (mini clinical evaluation exercise) for clinical encounter assessment, DOPS (direct observation of procedural skills) for procedural competencies, CbD (case-based discussion) for clinical reasoning in documented cases, and multi-source feedback (MSF) for interpersonal and professional competencies. Norcini, Blank, Duffy, and Fortna (2003) validated the Mini-CEX across eight specialties, reporting inter-rater reliability of 0.73–0.81 in trained-assessor cohorts.
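One way to honour the write-time alignment recommended above is to encode the level-to-method mapping directly, so that each competency statement carries its candidate assessment methods from the outset. In the sketch below, the “does”-level tools are those named in this section; the entries for the lower levels are common examples added for completeness and are not drawn from this review.

```python
# Miller level -> aligned assessment methods, recorded at authoring time.
# The "does" entries are the WBA tools named in this section; the lower-level
# entries are common examples, not an exhaustive or authoritative list.
MILLER_ALIGNED_METHODS: dict[str, list[str]] = {
    "knows":     ["multiple-choice questions", "structured oral examination"],
    "knows how": ["case-based discussion (CbD)", "script concordance items"],
    "shows how": ["OSCE station", "simulation-based assessment"],
    "does":      ["Mini-CEX", "DOPS", "CbD", "multi-source feedback (MSF)"],
}

def methods_for(miller_level: str) -> list[str]:
    """Look up the assessment methods aligned with a competency's Miller level."""
    if miller_level not in MILLER_ALIGNED_METHODS:
        raise ValueError(f"Unknown Miller level: {miller_level!r}")
    return MILLER_ALIGNED_METHODS[miller_level]
```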
5. Mapping Competencies to EPAs and WBAs
5.1 The Competency–EPA Relationship
Competencies and EPAs are conceptually distinct but operationally linked. Competencies describe attributes; EPAs describe activities. Ten Cate and Scheele (2007) proposed that EPAs serve as the “currency” of assessment — observable, time-bounded clinical tasks that naturally require the simultaneous integration of multiple competencies. A single EPA (“Manage a patient in acute respiratory failure”) will map to Medical Expert, Communicator, Collaborator, and Leader roles simultaneously. The mapping from competencies to EPAs is therefore many-to-many: each EPA integrates multiple competencies, and each competency is demonstrated across multiple EPAs.
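Written as a mapping, the many-to-many relationship becomes concrete. In the sketch below, the first EPA and its roles come from the example in this paragraph; the second EPA is hypothetical, included only to show the same roles recurring across EPAs.

```python
# EPA -> CanMEDS roles it draws on. The first entry is the example from the
# text; the second is hypothetical, added to illustrate role recurrence.
EPA_TO_ROLES: dict[str, set[str]] = {
    "Manage a patient in acute respiratory failure": {
        "Medical Expert", "Communicator", "Collaborator", "Leader",
    },
    "Lead a family meeting to establish goals of care": {
        "Medical Expert", "Communicator", "Professional",
    },
}

# Inverting the mapping shows the other direction of the many-to-many
# relationship: each role (competency domain) is demonstrated across EPAs.
ROLE_TO_EPAS: dict[str, set[str]] = {}
for epa, roles in EPA_TO_ROLES.items():
    for role in roles:
        ROLE_TO_EPAS.setdefault(role, set()).add(epa)
```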
Ten Cate and colleagues (2015) reported inter-rater reliability coefficients of 0.65–0.82 for well-designed EPA assessment tools — coefficients substantially higher than those observed for competency checklist assessments alone, supporting the operational argument for EPA-based WBA as the primary evidence collection mechanism.
5.2 Practical Mapping Process
The mapping process recommended in the literature (ten Cate et al., 2015; Carraccio et al., 2013) follows three steps. First, derive the EPA list from a clinical practice inventory (what does a consultant in this specialty actually do?). Second, for each EPA, identify which enabling competencies from the framework it requires — this produces the EPA-to-competency mapping matrix. Third, verify coverage: every enabling competency in the framework should appear in at least one EPA mapping; EPAs that map to only one competency domain are typically grain-size miscalibrations and should be revisited.
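The third step is mechanical once the matrix exists. The sketch below assumes the matrix maps EPA titles to sets of enabling-competency codes carrying a domain prefix (e.g. "ME" in "ME 1.3", a hypothetical local numbering); it flags both unmapped competencies and EPAs that touch only a single domain.

```python
def verify_coverage(
    framework: set[str],           # all enabling-competency codes in the framework
    epa_map: dict[str, set[str]],  # EPA title -> competency codes it requires
    min_domains: int = 2,          # single-domain EPAs warrant grain-size review
) -> tuple[set[str], list[str]]:
    """Return (competencies missing from every EPA, suspiciously narrow EPAs)."""
    mapped = set().union(*epa_map.values()) if epa_map else set()
    unmapped = framework - mapped  # every competency should appear at least once

    # Treat the code prefix (e.g. "ME" in "ME 1.3") as the competency domain.
    narrow = [
        epa for epa, codes in epa_map.items()
        if len({code.split()[0] for code in codes}) < min_domains
    ]
    return unmapped, narrow
```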
6. The Indian Context
6.1 Regulatory Specifics
The NMC CBME 2024 framework provides specialty-level competency tables that Indian PG programmes must incorporate. For each specialty, the NMC has specified domain headings, sub-competencies, and performance level expectations by training year. The PGMER-2023 e-logbook mandate requires that these competencies be documented in a digital system — a requirement that, when designed well, creates the assessment data that programmatic review committees need for progression decisions.
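Stated minimally, the e-logbook requirement implies a record that links a trainee, an EPA or competency, a WBA tool, and an entrustment decision. The schema below is a hypothetical sketch of such a record; the field names are illustrative and are not taken from the NMC or PGMER-2023 specifications.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class WBARecord:
    """One e-logbook entry; field names are illustrative, not NMC-specified."""
    trainee_id: str
    training_year: int           # PG year at the time of assessment
    epa: str                     # EPA under which the encounter was assessed
    competency_codes: list[str]  # enabling competencies evidenced
    tool: str                    # "Mini-CEX", "DOPS", "CbD", or "MSF"
    assessor_id: str
    encounter_date: date
    entrustment_level: int       # e.g. a 1-5 supervision scale
    narrative_feedback: str
```

Aggregating such records per trainee and per competency is what gives a programmatic review committee the longitudinal evidence base the section above describes.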
6.2 The Assessment Quality Gap
The most significant Indian implementation challenge documented in the literature is assessor preparation. A 2022 survey in Education for Health found that fewer than 15% of Indian PG faculty had received any formal workplace-based assessment training. Holmboe and colleagues (2011) found in North American programmes that faculty uncertainty about assessment criteria was the strongest predictor of low assessment completion — if this finding translates to India, the 85% untrained-faculty rate represents a systemic risk to any competency framework’s operational validity. Faculty calibration workshops, using the framework’s competency statements with behavioural anchors and video vignettes, are the intervention most consistently associated with improved assessment quality in the implementation literature (Donato et al., 2015).
7. Conclusion
Competency framework quality is the upstream determinant of assessment system quality. Frameworks that articulate observable, behaviourally anchored, developmentally appropriate enabling competency statements — written at the right grain size, explicitly aligned with Miller’s pyramid levels and appropriate WBA tools, and mapped to EPAs — produce reliable assessment data and credible progression decisions. Frameworks that fail on any of these dimensions produce noise. For Indian PG programme directors operating under NMC CBME 2024, the task is not framework creation from scratch but contextualisation of a structured regulatory framework — followed by the deeper investment of faculty calibration that determines whether the framework’s stated intentions translate into actual assessment practice.
References
Carraccio, C., Wolfsthal, S. D., Englander, R., Ferentz, K., & Martin, C. (2002). Shifting paradigms: From Flexner to competencies. Academic Medicine, 77(5), 361–367. https://doi.org/10.1097/00001888-200205000-00003
Carraccio, C., Englander, R., van Melle, E., ten Cate, O., Lockyer, J., Chan, M.-K., & Snell, L. (2016). Advancing competency-based medical education: A charter for clinician-educators. Academic Medicine, 91(5), 645–649. https://doi.org/10.1097/ACM.0000000000001048
Crossley, J., Johnson, G., Booth, J., & Wade, W. (2011). Good questions, good answers: Construct alignment improves the performance of workplace-based assessment scales. Medical Education, 45(6), 560–569. https://doi.org/10.1111/j.1365-2923.2010.03913.x
Frank, J. R., Mungroo, R., Ahmad, Y., Wang, M., De Rossi, S., & Horsley, T. (2010). Toward a definition of competency-based education in medicine: A systematic review of published definitions. Medical Teacher, 32(8), 631–637. https://doi.org/10.3109/0142159X.2010.500898
Frank, J. R., Snell, L., & Sherbino, J. (Eds.). (2015). CanMEDS 2015 physician competency framework. Royal College of Physicians and Surgeons of Canada. https://www.royalcollege.ca/rcsite/canmeds/canmeds-framework-e
Govaerts, M., Schuwirth, L., van der Vleuten, C., & Muijtjens, A. (2011). Workplace-based assessment: Effects of rater expertise. Advances in Health Sciences Education, 16(2), 151–165. https://doi.org/10.1007/s10459-010-9250-7
Harden, R. M. (2002). Learning outcomes and instructional objectives: Is there a difference? Medical Teacher, 24(2), 151–155. https://doi.org/10.1080/0142159022020687
Holmboe, E., Sherbino, J., Long, D. M., Swing, S. R., & Frank, J. R. (2010). The role of assessment in competency-based medical education. Medical Teacher, 32(8), 676–682. https://doi.org/10.3109/0142159X.2010.500704
Holmboe, E., Ward, D. S., Reznick, R. K., Katsufrakis, P. J., Leslie, K. M., Patel, V. L., & Nelson, E. A. (2011). Faculty development in assessment: The missing link in competency-based medical education. Academic Medicine, 86(4), 460–467. https://doi.org/10.1097/ACM.0b013e31820cb2a7
Kogan, J. R., Conforti, L., Bernabeo, E., Iobst, W., & Holmboe, E. (2009). Opening the black box of clinical skills assessment via observation. Medical Education, 43(10), 965–971. https://doi.org/10.1111/j.1365-2923.2009.03425.x
Miller, G. E. (1990). The assessment of clinical skills/competence/performance. Academic Medicine, 65(9, Suppl.), S63–S67. https://doi.org/10.1097/00001888-199009000-00045
National Medical Commission. (2024). Competency-based medical education curriculum for postgraduate medical education. NMC, New Delhi. https://www.nmc.org.in
Norcini, J. J., Blank, L. L., Duffy, F. D., & Fortna, G. S. (2003). The mini-CEX: A method for assessing clinical skills. Annals of Internal Medicine, 138(6), 476–481. https://doi.org/10.7326/0003-4819-138-6-200303180-00012
Sherbino, J., Bandiera, G., & Frank, J. R. (2011). Assessing competence in emergency medicine trainees: An overview of effective methodologies. CJEM, 13(3), 169–175. https://doi.org/10.2310/8000.2011.110308
Ten Cate, O., & Scheele, F. (2007). Competency-based postgraduate training: Can we bridge the gap between theory and clinical practice? Academic Medicine, 82(6), 542–547. https://doi.org/10.1097/ACM.0b013e31805559c7
Ten Cate, O., Chen, H. C., Hoff, R. G., Peters, H., Bok, H., & van der Schaaf, M. (2015). Curriculum development for the workplace using Entrustable Professional Activities (EPAs): AMEE guide no. 99. Medical Teacher, 37(11), 983–1002. https://doi.org/10.3109/0142159X.2015.1060308
Van der Vleuten, C. P. M., Schuwirth, L. W. T., Driessen, E. W., Govaerts, M. J. B., & Heeneman, S. (2015). Twelve tips for programmatic assessment. Medical Teacher, 37(7), 641–646. https://doi.org/10.3109/0142159X.2014.973388
Published 31 March 2026