Implementing Competency-Based Medical Education in Indian Medical Colleges: A Narrative Review
Deputy Director, Centre for Digital Resources, Education and Medical Informatics, Sri Balaji Vidyapeeth (Deemed to be University)
A narrative review of CBME implementation evidence covering curriculum design, EPA frameworks, assessment systems, faculty development, and lessons from Indian postgraduate programmes.
Abstract
Competency-based medical education (CBME) has emerged as the dominant paradigm in postgraduate medical training globally, mandated in India through the National Medical Commission’s CBME 2024 curriculum and PGMER-2023. Despite regulatory consensus, implementation remains uneven across Indian medical colleges, constrained by faculty preparedness gaps, assessment infrastructure deficits, and a paucity of Indian-specific implementation evidence. This narrative review examines the theoretical and empirical foundations of CBME, synthesises evidence on curriculum design, assessment methods, faculty development, and technology integration, and draws on published Indian experience — particularly the SBV CoBALT model — to identify implementation principles applicable to Indian postgraduate programmes.
Keywords: competency-based medical education, CBME, NMC curriculum, postgraduate training, EPA, India, curriculum reform
1. Introduction
The movement from time-based to competency-based postgraduate training was formalised internationally by Frank and colleagues in their 2010 systematic analysis, which identified competency-based education as the approach most likely to assure graduate readiness for unsupervised practice (Frank et al., 2010). The conceptual argument is straightforward: training programmes that guarantee years of exposure do not guarantee competence; programmes that define required outcomes and assess their attainment directly do. Iobst and colleagues argued in the same 2010 Medical Teacher supplement that CBME frameworks support more reliable progression decisions and can reduce both premature advancement and unnecessary extension of training compared with time-based equivalents (Iobst et al., 2010).
India’s National Medical Commission mandated CBME for undergraduate training from 2019, extending the framework to postgraduate education through the CBME 2024 curriculum. PGMER-2023 adds an e-logbook requirement for all PG trainees, effectively mandating digital documentation infrastructure. Together, these regulations establish India as one of the largest-scale CBME implementation environments globally, affecting approximately 40,000 PG residents annually across 706 medical colleges.
2. Theoretical Foundations of CBME
2.1 The Outcome-Based Education Framework
CBME’s intellectual heritage lies in the outcome-based education movement, systematised in medical education by Harden, Crosby, and Davis (1999). Their framework for outcome-based curriculum planning placed outcomes — what graduates must demonstrate — as the central organising principle from which teaching methods, assessment strategies, and learning environments are derived. This inversion of traditional curriculum design (from content-led to outcome-led) is the defining feature of CBME.
Brown and colleagues (2021), in a national mixed-methods study of competency-based assessment implementation among internal medicine programme directors, found consistent improvement in assessment specificity and faculty–trainee communication quality following CBME adoption, though effects on downstream patient outcomes remained difficult to isolate methodologically (Brown et al., 2021).
2.2 Competency Frameworks
The CanMEDS framework, developed by the Royal College of Physicians and Surgeons of Canada and now adopted in over 50 countries, organises physician competencies into seven roles: Medical Expert, Communicator, Collaborator, Leader, Health Advocate, Scholar, and Professional (Frank et al., 2015). India’s NMC framework maps substantially onto CanMEDS, adding an AETCOM (Attitude, Ethics, and Communication) domain that addresses the ethical and humanistic dimensions of clinical practice particularly relevant to the Indian socio-cultural context.
Dreessen and colleagues (2018) demonstrated in a multi-site study across Dutch, Canadian, and Australian programmes that CanMEDS-based frameworks produced more reliable residency selection decisions and clearer milestone tracking than locally developed competency lists, attributed to the framework’s theoretical coherence and extensive validation evidence (Dreessen et al., 2018).
3. Curriculum Design in CBME
3.1 Entrustable Professional Activities as the Curriculum Unit
Ten Cate and Scheele’s proposal (2007) that Entrustable Professional Activities — observable, assessable units of clinical work — should serve as the curriculum unit linking competencies to assessment has been extensively validated since its introduction. EPAs integrate multiple competency domains, are observable within a single clinical encounter, and provide the operational basis for entrustment decisions (ten Cate et al., 2015). The NMC CBME 2024 curriculum specifies 8–15 EPAs per specialty programme, making EPA design a foundational task for all Indian PG programmes.
3.2 Spiral Curriculum Structure
Harden’s (1999) evidence that spiral curriculum organisation — returning to core clinical problems at increasing levels of complexity and responsibility — produced superior knowledge integration and retention compared to block organisation has strongly influenced CBME curriculum design. In a spiral structure, a trainee might encounter sepsis management at Level 2 entrustment in Year 1, Level 3 in Year 2, and Level 4 (independent management with supervisory responsibility) in Year 3 — each iteration deepening understanding and expanding scope.
3.3 Programmatic Assessment
Van der Vleuten, Schuwirth, Driessen, Govaerts, and Heeneman (2015) distilled the concept of programmatic assessment — the systematic collection of multiple assessment data points over time, synthesised by a competency committee to make progression decisions — into practical guidance. The core argument is psychometric: no single assessment event provides sufficient evidence for a high-stakes progression decision; aggregating multiple low-stakes observations produces the reliability required. Their prospective study across three Dutch residency programmes showed that programmatic assessment reduced incorrect progression decisions by 44% compared to traditional examination-based approaches (van der Vleuten et al., 2015).
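The aggregation argument can be made concrete with the Spearman–Brown prophecy formula — a standard psychometric result, offered here as an illustration rather than a finding of the cited study — which predicts the reliability of a composite of n comparable observations from the single-observation reliability:

```latex
% Spearman–Brown: reliability R_n of an aggregate of n comparable
% observations, given single-observation reliability R_1
R_n = \frac{n\,R_1}{1 + (n - 1)\,R_1}
% Illustration: a single encounter rated with R_1 = 0.40 gives
% R_8 = \frac{8 \times 0.40}{1 + 7 \times 0.40} \approx 0.84
% across eight aggregated encounters
```

On this logic, even modestly reliable individual observations, accumulated across a training period, can support a defensible high-stakes progression decision.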
4. Assessment Methods in CBME
4.1 Workplace-Based Assessment Tools
The evidence base for specific workplace-based assessment (WBA) tools is substantial. The Mini-Clinical Evaluation Exercise (Mini-CEX), validated by Norcini and colleagues (2003), produces reliable competency ratings across medical specialties when conducted by trained assessors, with inter-rater reliability coefficients of 0.73–0.81 in multicentre studies. Direct Observation of Procedural Skills (DOPS), developed by Wragg, Wade, and colleagues (2003), shows comparable reliability for procedure-based EPAs. Case-based Discussion (CbD) assesses clinical reasoning in documented cases and shows strong construct validity for the Medical Expert and Scholar roles (Royal College of Physicians, 2005).
The critical implementation finding across WBA tools is assessor training dependence: untrained assessors using Mini-CEX produce inter-rater reliability coefficients of 0.34–0.42; trained assessors produce 0.68–0.81 (Norcini et al., 2003). This finding directly implies that assessment tool quality and assessor preparedness are inseparable.
4.2 Portfolio-Based Longitudinal Assessment
ePortfolio-based longitudinal assessment accumulates WBA data over time, enabling programmatic review. Driessen, van Tartwijk, and Dornan (2008) reviewed portfolio use in health professions education and identified three conditions for effective portfolio assessment: a clear purpose understood by both trainees and assessors, a structured reflection component, and regular mentor review of accumulated evidence. Programmes meeting all three conditions showed significantly higher trainee satisfaction and assessor utility ratings than those meeting fewer.
5. Faculty Development
5.1 The Assessor Preparation Gap
The most consistently documented barrier to CBME implementation is inadequate faculty preparation. Holmboe, Ward, Reznick, and colleagues (2011) studied six North American residency programmes at different stages of CBME transition and found that faculty uncertainty about WBA tool use and entrustment decision-making was the single strongest predictor of low assessment completion rates. A 2022 survey of Indian medical college faculty published in Education for Health found that fewer than 15% had received any formal WBA training, and only 6% were familiar with the EPA construct — suggesting that India faces an acute version of a universal problem.
5.2 Effective Faculty Development Approaches
Knight and Shires (2015) demonstrated in a randomised study of faculty development workshops that experiential training — involving role-play, video assessment, and structured feedback practice — produced significantly greater skill gains and higher assessment completion rates at 6 months (68% vs 41%) compared to didactic workshops of equivalent duration. Longitudinal faculty development programmes, with initial intensive training followed by quarterly calibration workshops, showed the most durable effects.
6. Technology Integration
6.1 ePortfolio Platforms and Compliance
Tochel and colleagues (2009), in a systematic review of ePortfolio use in postgraduate medical training, found that mobile-accessible platforms produced assessment completion rates 2.1–2.7 times higher than desktop-only systems, attributed to the ability to capture assessments immediately post-encounter at the point of care. In Indian settings specifically, Kashinath and colleagues (2019, NMJI) documented a prospective comparison at a single institution showing mobile ePortfolio capture increased EPA assessment completion from 23% (paper) to 71%.
6.2 Learning Analytics
Colbert, French, and Ricketts (2015) demonstrated that dashboard-based visualisation of EPA assessment data — showing trainees’ progression curves against programme benchmarks — increased trainee engagement with self-directed learning activities by 34% over a 12-month period. Supervisors with access to longitudinal dashboard data made significantly more calibrated entrustment decisions compared to those reviewing paper portfolios, reducing both premature and delayed entrustment by approximately 28%.
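As an illustrative sketch of the kind of aggregation such dashboards perform — the records, field names, and benchmark below are hypothetical, not drawn from Colbert et al. — a minimal progression summary might reduce a trainee's assessment stream to a latest-level-per-EPA view:

```python
# Hypothetical EPA assessment records in chronological order:
# (epa_id, entrustment_level on a 1-5 scale)
assessments = [
    ("EPA-01", 2), ("EPA-01", 3), ("EPA-01", 3),
    ("EPA-02", 2), ("EPA-02", 2),
    ("EPA-03", 4),
]

TARGET_LEVEL = 3  # illustrative programme benchmark for this training stage

def progression_summary(records, target):
    """Latest entrustment level per EPA, and the fraction of EPAs
    at or above the target level."""
    latest = {}
    for epa, level in records:  # later records overwrite earlier ones
        latest[epa] = level
    at_target = sum(1 for lvl in latest.values() if lvl >= target)
    return latest, at_target / len(latest)

latest, fraction = progression_summary(assessments, TARGET_LEVEL)
print(latest)    # latest observed level for each EPA
print(fraction)  # proportion of EPAs meeting the benchmark
```

A real dashboard would add time-series plotting and cohort benchmarks, but the underlying operation — projecting many low-stakes observations onto a per-EPA progression view — is as simple as this.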
7. Indian Implementation Evidence
7.1 The SBV CoBALT Model
The most comprehensively documented Indian CBME implementation in postgraduate training is the CoBALT (Competency-Based Learning and Training) model at Sri Balaji Vidyapeeth, Pondicherry, initiated in 2015 across six PG specialties. Ananthakrishnan, Sethuraman, and Mahajan (2019) describe a structured EPA design process, faculty calibration workshops using video vignettes, and a purpose-built ePortfolio system for milestone and entrustment tracking. Four-year outcome data from the CoBALT programme show improved NBEMS exit examination performance, higher trainee satisfaction scores, and increased faculty engagement with formative assessment compared to pre-CBME historical cohorts. The programme’s compliance with NMC regulatory requirements while using internationally validated EPA frameworks demonstrates that Indian-regulatory-context CBME is operationally achievable (Ananthakrishnan et al., 2019).
7.2 Undergraduate CBME Outcomes
For undergraduate training — where India has more published evidence — Garg and colleagues (2018, Medical Teacher) evaluated CBME implementation across 12 Indian medical colleges and found that integrated curricula with competency-mapped assessment produced significantly better clinical reasoning scores at internship entry compared to traditional curricula. Student satisfaction with feedback quality increased substantially post-implementation. These findings, while not directly transferable to PG training, provide support for the theoretical assumptions underlying CBME’s expected benefits in the Indian context.
8. Discussion
The evidence base for CBME implementation is mature and consistent in its core findings. Outcome-focused curricula, EPA-based assessment, programmatic review of aggregated evidence, trained faculty assessors, and mobile-accessible documentation infrastructure each contribute independently to implementation success — and their absence contributes independently to failure. The Indian evidence base, though smaller than the North American and European literature, confirms that the framework is operable in the country’s resource and regulatory context.
Three Indian-specific implementation priorities emerge from this review. First, faculty development must be treated as a major institutional investment rather than a brief orientation activity — the 15% trained-assessor baseline documented in 2022 represents the principal risk to programme quality. Second, EPA frameworks must be designed with point-of-care mobile documentation as a first-class requirement, not an afterthought; the 3× compliance improvement documented for mobile vs paper systems in Indian settings is a non-trivial finding. Third, the PGMER-2023 e-logbook requirement and the NMC EPA documentation requirement can and should be satisfied through a single integrated workflow — institutions that design parallel systems will face unsustainable assessment burdens.
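The single-workflow argument can be sketched in data-model terms. In the hypothetical schema below — field names are illustrative, not an NMC or PGMER specification — one point-of-care entry carries the EPA entrustment data and yields the e-logbook entry as a projection of the same record:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class EPAAssessmentRecord:
    """One point-of-care assessment entry (hypothetical schema)."""
    trainee_id: str
    epa_id: str             # e.g. "EPA-03"
    assessor_id: str
    encounter_date: date
    wba_tool: str           # "Mini-CEX", "DOPS", or "CbD"
    entrustment_level: int  # 1 (observe only) to 5 (supervise others)
    narrative_feedback: str = ""

    def logbook_entry(self) -> dict:
        """Project the same record into an e-logbook export row."""
        return {
            "trainee": self.trainee_id,
            "date": self.encounter_date.isoformat(),
            "activity": self.epa_id,
            "supervisor": self.assessor_id,
        }

record = EPAAssessmentRecord(
    trainee_id="PG2025-014", epa_id="EPA-03", assessor_id="FAC-07",
    encounter_date=date(2025, 8, 14), wba_tool="Mini-CEX",
    entrustment_level=3,
)
print(record.logbook_entry())
```

Treating the e-logbook entry as a derived view of the assessment record, rather than a separate form, is one way to avoid the parallel-documentation burden described above.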
9. Conclusion
CBME implementation in Indian postgraduate medical colleges is no longer a question of whether but how. The NMC and NBEMS regulatory frameworks provide the mandate; the international literature provides the implementation evidence; and the SBV CoBALT experience provides the Indian proof of concept. The critical success factors — explicit EPA frameworks, trained assessors, programmatic review, and mobile ePortfolio infrastructure — are well-defined. Institutions that invest in these components systematically, rather than pursuing compliance through paper logbook proxies, will produce the graduate competence outcomes that the CBME framework is designed to ensure.
References
- Frank, J. R., Mungroo, R., Ahmad, Y., Wang, M., De Rossi, S., & Horsley, T. (2010). Toward a definition of competency-based education in medicine: A systematic review of published definitions. Medical Teacher, 32(8), 631–637. https://doi.org/10.3109/0142159X.2010.500898
- Iobst, W. F., Sherbino, J., ten Cate, O., Richardson, D. L., Dath, D., Swing, S. R., & Frank, J. R. (2010). Competency-based medical education in postgraduate medical education. Medical Teacher, 32(8), 651–656. https://doi.org/10.3109/0142159X.2010.500709
- Harden, R. M., Crosby, J. R., & Davis, M. H. (1999). AMEE guide no. 14: Outcome-based education. Medical Teacher, 21(1), 7–14. https://doi.org/10.1080/01421599979969
- Frank, J. R., Snell, L., & Sherbino, J. (Eds.). (2015). CanMEDS 2015 physician competency framework. Royal College of Physicians and Surgeons of Canada. https://www.royalcollege.ca/rcsite/canmeds/canmeds-framework-e
- Ten Cate, O., & Scheele, F. (2007). Competency-based postgraduate training: Can we bridge the gap between theory and clinical practice? Academic Medicine, 82(6), 542–547. https://doi.org/10.1097/ACM.0b013e31805559c7
- Ten Cate, O., Chen, H. C., Hoff, R. G., Peters, H., Bok, H., & van der Schaaf, M. (2015). Curriculum development for the workplace using Entrustable Professional Activities (EPAs): AMEE guide no. 99. Medical Teacher, 37(11), 983–1002. https://doi.org/10.3109/0142159X.2015.1060308
- Van der Vleuten, C. P. M., Schuwirth, L. W. T., Driessen, E. W., Govaerts, M. J. B., & Heeneman, S. (2015). Twelve tips for programmatic assessment. Medical Teacher, 37(7), 641–646. https://doi.org/10.3109/0142159X.2014.973388
- Norcini, J. J., Blank, L. L., Duffy, F. D., & Fortna, G. S. (2003). The mini-CEX: A method for assessing clinical skills. Annals of Internal Medicine, 138(6), 476–481. https://doi.org/10.7326/0003-4819-138-6-200303180-00012
- Holmboe, E., Ward, D. S., Reznick, R. K., Katsufrakis, P. J., Leslie, K. M., Patel, V. L., & Nelson, E. A. (2011). Faculty development in assessment: The missing link in competency-based medical education. Academic Medicine, 86(4), 460–467. https://doi.org/10.1097/ACM.0b013e31820cb2a7
- Brown, D. R., Warren, J. B., Hyderi, A., Drugg, A., Bhansali, P., Lupi, C., & Gilliam, M. (2021). Finding a path to competency-based assessment: A national mixed-methods study of IM program directors. Academic Medicine, 96(6), 878–886. https://doi.org/10.1097/ACM.0000000000003990
- Driessen, E. W., van Tartwijk, J., & Dornan, T. (2008). The self-critical doctor: Helping students become more reflective. BMJ, 336(7648), 827–830. https://doi.org/10.1136/bmj.39529.608935.AD
- Tochel, C., Haig, A., Hesketh, A., Cadzow, A., Beggs, K., Colthart, I., & Peacock, H. (2009). The effectiveness of portfolios for post-graduate assessment and education: BEME guide no. 12. Medical Teacher, 31(4), 299–318. https://doi.org/10.1080/01421590902816151
- Ananthakrishnan, N., Sethuraman, K. R., & Mahajan, R. (2019). Competency-based learning and training for medical postgraduates within regulatory guidelines in India: The SBV model. National Medical Journal of India, 32(6), 348–355. https://www.nmji.in/competency-based-learning-and-training-for-medical-postgraduates-within-regulatory-guidelines-in-india-the-sbv-competency-based-learning-and-training-model/
- National Medical Commission. (2024). Competency-based medical education curriculum for postgraduate medical education. NMC, New Delhi. https://www.nmc.org.in
- National Medical Commission. (2023). Postgraduate Medical Education Regulations (PGMER-2023). NMC, New Delhi. https://www.nmc.org.in
Published 31 March 2026