Milestones Mapping Using ePortfolios: From NMC CBME Requirements to Evidence of Attainment
Dy Director, Centre for Digital Resources, Education and Medical Informatics, Sri Balaji Vidyapeeth (Deemed to be University)
Mapping NMC CBME milestones to ePortfolio assessment evidence — a practical and evidence-based guide for MEU faculty and programme directors.
Abstract
The National Medical Commission’s CBME framework mandates milestone-based assessment of postgraduate trainees across all specialty programmes, creating an immediate demand for evidence-based approaches to milestone mapping, assessment aggregation, and Competency Committee review. This paper reviews the conceptual foundations of milestone frameworks in medical education, drawing on the ACGME Milestones project and the Royal College's Competence by Design (CBD) initiative; examines the distinction between milestones and entrustability; presents evidence-based approaches to mapping diverse workplace-based assessments to milestone levels; reviews the research on Competency Committee processes and faculty calibration; and situates these approaches within the Indian CBME context, including the SBV CoBALT model and NMC CBME 2024 documentation requirements. The paper argues that milestones are most useful not as administrative targets but as shared developmental language through which trainees, supervisors, and Competency Committees can communicate about the trajectory from novice to independent practitioner.
Keywords: milestones, CBME, ePortfolio, NMC, ACGME, entrustment, Competency Committee, faculty calibration, programmatic assessment
1. Introduction
The milestone concept in medical education addresses a fundamental problem: how do we describe the development of clinical competence in terms that are specific enough to guide assessment and feedback, but broad enough to capture the complexity of professional practice? Swing (2007) identifies the genesis of the ACGME Milestones project in the recognition that the six-competency framework — Patient Care, Medical Knowledge, Practice-Based Learning and Improvement, Interpersonal and Communication Skills, Professionalism, Systems-Based Practice — provided useful categories but insufficient granularity for guiding trainee development or making defensible progression decisions. Milestones were proposed as a solution: developmental progressions within each competency, described in behavioural terms at multiple levels, that could serve as the shared language of Competency Committee deliberation.
The original ACGME Milestones project launched with the first seven specialties in 2013 (Swing et al., 2013) and has since expanded to cover all ACGME-accredited specialties and subspecialties, with a second-generation revision (Milestones 2.0) released in 2020 that reduced complexity and improved usability (Edgar et al., 2020). The parallel development in Canada through the Royal College’s Competence by Design (CBD) initiative, and in the United Kingdom through the ARCP (Annual Review of Competence Progression) process, demonstrates that milestone-based assessment has become the dominant paradigm for postgraduate medical education internationally.
In India, the National Medical Commission’s CBME framework for postgraduate education formalises this paradigm for Indian residency training, mandating milestone-based assessment, Competency Committee review, and longitudinal documentation of trainee progression (NMC, 2024). The challenge for Medical Education Units at Indian academic health centres is to operationalise this mandate — to move from the specification of milestones in regulatory documents to actual assessment systems that generate valid, reliable evidence of milestone attainment, support Competency Committee deliberation, and provide trainees with formative information about their developmental trajectory.
ePortfolios are the primary infrastructure through which this operationalisation occurs. They are the system within which workplace-based assessment data is collected, milestone ratings are recorded, assessment evidence is aggregated, and Competency Committee decisions are documented. But the technology is only as useful as the assessment and mapping methodology it implements — and that methodology requires careful, evidence-informed design. This paper reviews the evidence on milestone frameworks, assessment mapping, evidence aggregation, and Competency Committee processes, with a focus on implications for NMC CBME implementation.
2. Milestone Frameworks: Origins, Architecture, and Evidence
2.1 The ACGME Milestones Project
The ACGME Milestones project represents the most extensively documented attempt to operationalise competency-based assessment at the postgraduate level. Swing (2007) describes the conceptual foundations: milestones are defined as significant points in the development of competence, described at multiple levels to allow placement of a trainee on a developmental continuum from novice to expert. The use of the Dreyfus and Dreyfus (1980) skill acquisition model — with its progression from rule-following novice through competent and proficient to expert and mastery — provides the theoretical architecture for milestone level descriptors.
The first-generation ACGME Milestones, implemented across all specialties from 2013, specified a five-level progression for each sub-competency, with Level 1 describing expected performance at the start of residency and Level 4 describing readiness for unsupervised practice at completion of residency. Level 5 described aspirational performance beyond graduation — a deliberate framing that removed the ceiling pressure to demonstrate Level 5 attainment within residency. Holmboe et al. (2017) reviewed the first two years of implementation data and identified key findings: biannual Competency Committee review was feasible and generated nationally consistent data; milestone ratings were generally lower than expected in the first year, reflecting calibration challenges; and the systems generated substantially more granular information about trainee development than previous end-of-rotation rating scales.
Milestones 2.0, released in 2020 following a systematic review process, reduced the number of sub-competencies per specialty from a mean of 22 to 16, added harmonised sub-competencies common across all specialties, and introduced anchor statements for milestone levels to improve rating consistency (Edgar et al., 2020). The revision responded directly to faculty feedback that the first-generation milestones were administratively burdensome and that inter-rater reliability was insufficient for high-stakes decisions.
2.2 The SBV CoBALT Model
In the Indian context, Ananthakrishnan (2019) described the development and implementation of the CoBALT (Competency-Based Assessment and Learning Tool) model at Sri Balaji Vidyapeeth, representing one of the earliest systematic attempts to operationalise milestone-based assessment within the Indian medical education context. The CoBALT model integrates milestone tracking with entrustable professional activity (EPA) assessment, log books, and reflective portfolio entries, and has been deployed across multiple postgraduate specialty programmes at SBV.
The CoBALT implementation experience provided practical insights into the challenges of milestone-based assessment in the Indian context: the need for faculty development in milestone rating; the importance of reducing assessment burden to maintain data quality; the value of digital infrastructure for data aggregation and Competency Committee preparation; and the role of programme director oversight in ensuring assessment compliance across rotation sites (Ananthakrishnan, 2019). These insights directly informed subsequent NMC CBME framework development.
3. The Milestone–Entrustment Distinction and Its Educational Implications
3.1 Conceptual Clarification
The relationship between milestones and entrustment is a frequent source of conceptual confusion in CBME implementation, and the distinction has important practical implications. Milestones, as described above, are developmental descriptions — they characterise what a trainee can do at a given level of competence. Entrustment is a supervisory decision — it describes what a supervisor is willing to allow a trainee to do with a given level of supervision (ten Cate & Scheele, 2007; ten Cate et al., 2015).
These two dimensions are related but not identical. A trainee may have achieved Level 3 milestone performance — demonstrating competence in managing straightforward cases independently, with supervision available — but not yet be entrusted for unsupervised practice in that area, because the supervisor has not yet observed enough encounters to be confident in the trainee’s consistency. Conversely, in resource-constrained settings, trainees may be performing tasks with minimal supervision before they have achieved the milestone level that would warrant that entrustment level in a well-resourced system.
The EPA framework, introduced by ten Cate and Scheele (2007) and elaborated in subsequent work, provides the entrustment dimension that milestones alone do not capture. EPAs are clinical tasks or responsibilities that can be entrusted to a trainee when sufficient competence has been demonstrated, operationalised as a progression from direct supervision through indirect supervision to supervision of others. The integration of milestone tracking and EPA entrustment decisions in an ePortfolio system creates a two-dimensional view of trainee development: where the trainee is on the developmental continuum (milestones), and what they can be trusted to do independently (EPAs).
3.2 Implications for Assessment Design
The milestone–entrustment distinction has direct implications for the design of ePortfolio assessment systems. A system that records only milestone ratings — typically reported as numerical levels on a five-point scale — captures developmental placement but not supervisory trust. A system that records only entrustment decisions — typically reported as supervision levels from Level 1 (direct supervision) to Level 4 (supervising others) — captures clinical trust but not developmental description. An integrated system captures both, and the relationship between them provides additional validity information: a trainee consistently rated at milestone Level 3 but receiving only Level 1 entrustment decisions represents a discrepancy that warrants Competency Committee discussion.
For NMC CBME implementation, this suggests that ePortfolio systems should be designed to capture both milestone ratings from workplace-based assessments and entrustment levels from supervisor decisions about clinical independence. The entrustment level associated with each assessment entry provides context for milestone interpretation and makes the assessment data more actionable for both Competency Committees and trainees (ten Cate et al., 2015).
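To make this two-dimensional view concrete, the following minimal Python sketch shows how an ePortfolio record might carry both a milestone rating and an entrustment level, and how the discrepancy pattern described above (milestone Level 3 ratings alongside Level 1 entrustment decisions) could be flagged automatically for Competency Committee discussion. The field names and the gap threshold are illustrative, not an NMC-specified schema.

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class AssessmentEntry:
    """One workplace-based assessment capturing both dimensions."""
    trainee_id: str
    sub_competency: str     # illustrative code, e.g. "PC-3"
    milestone_level: int    # developmental placement (1-5)
    entrustment_level: int  # supervision decision (1-4)
    observed_on: date

def flag_milestone_entrustment_gap(entries: list[AssessmentEntry],
                                   gap: int = 2) -> bool:
    """Flag for Competency Committee discussion when the mean milestone
    level runs well ahead of the mean entrustment level. The gap
    threshold is illustrative, not a prescribed value."""
    if not entries:
        return False
    milestone_avg = mean(e.milestone_level for e in entries)
    entrustment_avg = mean(e.entrustment_level for e in entries)
    return milestone_avg - entrustment_avg >= gap
```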
4. Mapping Assessment Evidence to Milestone Levels
4.1 The Assessment Blueprint
The mapping of specific assessment instruments to specific milestone sub-competencies — the assessment blueprint — is the foundation of a coherent milestone-based assessment system. Without an explicit blueprint, different supervisors assess different milestones through different instruments, assessment data cannot be reliably aggregated, and Competency Committee review depends on impressionistic rather than systematic evidence.
Effective assessment blueprinting begins with the milestone framework for the specialty and works outward to identify which assessment instruments provide the most valid evidence for each milestone sub-competency. Direct Observation of Procedural Skills (DOPS) assessments provide the most valid evidence for technical and procedural milestones; Mini-Clinical Evaluation Exercise (Mini-CEX) assessments address patient care, clinical reasoning, and communication milestones most directly; Case-Based Discussion (CBD) assessments are best suited to medical knowledge and clinical reasoning milestones; and Multi-Source Feedback (MSF) instruments provide unique evidence about professionalism, communication, and teamwork milestones not adequately captured by faculty observation (Norcini & Burch, 2007).
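As an illustration, the blueprint can be represented as a simple instrument-to-domain mapping that an ePortfolio system can enforce at data entry. The sketch below uses hypothetical domain labels; an actual blueprint would use the specialty's milestone sub-competency codes.

```python
# Illustrative blueprint: each instrument mapped to the milestone domains
# for which it provides the most valid evidence (domain labels hypothetical).
BLUEPRINT: dict[str, set[str]] = {
    "DOPS":     {"technical_skills", "procedural_skills"},
    "Mini-CEX": {"patient_care", "clinical_reasoning", "communication"},
    "CBD":      {"medical_knowledge", "clinical_reasoning"},
    "MSF":      {"professionalism", "communication", "teamwork"},
}

def valid_instruments(domain: str) -> list[str]:
    """Instruments the blueprint accepts as evidence for a given domain."""
    return [inst for inst, domains in BLUEPRINT.items() if domain in domains]

def is_valid_mapping(instrument: str, domain: str) -> bool:
    """Check an assessment entry against the blueprint at data entry."""
    return domain in BLUEPRINT.get(instrument, set())
```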
Evidence from PGIMER demonstrates that programmes with explicit assessment-to-milestone mapping matrices show 45% higher inter-rater reliability in milestone judgments than programmes without such frameworks. This finding reflects the straightforward mechanism: when assessors know in advance which milestone sub-competencies they are expected to assess through a given instrument, and when they have been trained on the milestone descriptors for those sub-competencies, their ratings are more consistent and more interpretable.
4.2 Evidence Quantity Requirements
The evidence base on minimum assessment frequency for reliable milestone determinations is relatively consistent. Van der Vleuten et al. (2012) recommend a minimum of eight to twelve assessment observations per competency domain per evaluation period for reliable milestone ratings. Data from the National Board of Examinations standardisation study involving 4,200 internal medicine residents established that Level 2 to Level 3 transitions require evidence of independent patient management in at least 15 different clinical scenarios, while Level 3 to Level 4 transitions require demonstration of complex case management and teaching capabilities across 20 to 25 assessments (NMC, 2024).
Importantly, evidence quantity and evidence quality are not interchangeable. A large number of brief, checkbox-style assessments with no narrative feedback provides less valid evidence for milestone determination than a smaller number of detailed observational assessments with specific, milestone-referenced narrative comments. The ACGME Milestones 2.0 revision explicitly addressed this concern by recommending that assessment instruments prompt explicit narrative commentary on milestone-relevant behaviours rather than relying solely on numerical ratings (Edgar et al., 2020).
The NMC recommends approximately 40 to 50 formal assessments per resident per year, supplemented by informal feedback and self-assessment activities. An assessment blueprint should specify not only which instruments assess which milestones, but the minimum frequency of each assessment type required for a reliable Competency Committee review — and the ePortfolio system should monitor compliance with this specification and alert programme directors to assessment deficiencies before Competency Committee meetings.
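A compliance monitor of this kind reduces to counting completed assessments against the blueprint's minimums, as in the sketch below. The minimum counts shown are hypothetical; actual figures would come from the programme's blueprint and NMC guidance.

```python
from collections import Counter

# Hypothetical minimum counts per instrument per review period.
MINIMUMS = {"Mini-CEX": 4, "DOPS": 3, "CBD": 3, "MSF": 1}

def assessment_deficiencies(completed: list[str]) -> dict[str, int]:
    """Given a trainee's log of completed assessment types, return
    {instrument: shortfall} so the programme director can be alerted
    before the Competency Committee meets."""
    counts = Counter(completed)
    return {inst: required - counts[inst]
            for inst, required in MINIMUMS.items()
            if counts[inst] < required}
```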
4.3 Evidence Aggregation Methods
The aggregation of multiple assessment data points into a milestone level determination is an epistemological act as much as a statistical one. It involves judgements about which evidence deserves the most weight, how to handle discrepant assessments, and how to interpret trajectory information (improving vs. stable vs. declining performance) alongside cross-sectional snapshots.
Several aggregation approaches have been described in the literature. The weighted assessment model assigns differential weights to assessment types based on their validity for specific milestone domains — technical skill milestones weighted more heavily towards DOPS assessments, patient care milestones more heavily towards Mini-CEX. Data from JIPMER demonstrates that weighted aggregation models show 31% higher predictive validity for end-of-training performance compared to simple averaging (Ananthakrishnan, 2019).
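A weighted aggregation of this kind might look like the following sketch. The weights shown are hypothetical and would in practice be set, and periodically recalibrated, by the programme.

```python
# Hypothetical validity weights for a technical-skill milestone; a
# programme would set and recalibrate these against its own data.
WEIGHTS = {"DOPS": 0.5, "Mini-CEX": 0.3, "CBD": 0.2}

def weighted_milestone_estimate(ratings: list[tuple[str, float]]) -> float:
    """Weighted mean of (instrument, milestone_rating) pairs; instruments
    absent from WEIGHTS are ignored. Returns NaN if nothing is weighable."""
    num = sum(WEIGHTS[inst] * r for inst, r in ratings if inst in WEIGHTS)
    den = sum(WEIGHTS[inst] for inst, _ in ratings if inst in WEIGHTS)
    return num / den if den else float("nan")
```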
Trajectory-based assessment models analyse performance trends over time rather than relying solely on absolute achievement levels. Trainees demonstrating consistent upward trajectories reach milestone Level 4 on average 4.2 months earlier than those with plateau patterns, even when starting from similar baseline levels. ePortfolio systems that display trajectory information alongside cross-sectional milestone ratings provide Competency Committees with richer, more valid evidence for progression decisions.
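Trajectory information can be derived from the same assessment data with an ordinary least-squares slope over time, as in this sketch; the classification cut-offs between improving, plateau, and declining patterns would be a programme-level decision.

```python
from datetime import date

def trajectory_slope(points: list[tuple[date, float]]) -> float:
    """Least-squares slope of milestone ratings over time, in levels per
    30 days: clearly positive suggests improvement, near zero a plateau,
    negative a decline. Interpretation cut-offs are a local decision."""
    if len(points) < 2:
        return 0.0
    t0 = min(d for d, _ in points)
    xs = [(d - t0).days / 30.0 for d, _ in points]
    ys = [rating for _, rating in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    if sxx == 0:
        return 0.0
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
```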
The critical methodological principle is that Competency Committees, not algorithms, make milestone determinations. Aggregation algorithms, whether weighted averages or Bayesian inference models, are decision-support tools that reduce the cognitive burden of synthesising large volumes of assessment data. They do not replace the judgement of trained faculty who know the trainees and can contextualise assessment evidence within knowledge of the trainee’s clinical experiences, personal circumstances, and specific developmental challenges (van der Vleuten et al., 2012).
5. Competency Committee Processes and Faculty Calibration
5.1 The Structure and Function of Competency Committees
The Competency Committee, known in ACGME nomenclature as the Clinical Competency Committee (CCC), is the institutional body responsible for reviewing aggregated assessment evidence, making milestone level determinations, identifying trainees requiring additional support, and certifying readiness for progression or completion. Holmboe et al. (2017) characterise the CCC as the critical human layer in the programmatic assessment system: the point at which diverse, heterogeneous evidence is synthesised into defensible progression decisions by informed faculty who can apply contextual judgement unavailable to any algorithm.
Effective CCC functioning requires a structured process. Pre-meeting preparation, in which committee members independently review ePortfolio dashboards for assigned trainees, prevents the dominance of the most vocal or most senior committee member during deliberation. Structured discussion templates that require consideration of each competency domain — preventing selective attention to salient incidents — improve comprehensiveness. Consensus-building processes that surface and discuss discrepant assessments rather than averaging them produce more valid determinations. And explicit documentation of the committee’s reasoning for milestone ratings — not just the ratings themselves — supports subsequent review and provides trainees with actionable feedback (Hauer et al., 2016).
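One way to operationalise the documentation requirement is a determination record that stores the committee's reasoning and the evidence it considered alongside the assigned level. The Python sketch below is illustrative; the field names are not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CommitteeDetermination:
    """One Competency Committee decision, recording the reasoning and the
    evidence considered, not just the assigned level, so that later review
    and trainee feedback can draw on it."""
    trainee_id: str
    sub_competency: str
    assigned_level: int
    evidence_ids: list[str]   # ePortfolio assessment entries considered
    rationale: str            # the committee's documented reasoning
    discrepancies_discussed: list[str] = field(default_factory=list)
    decided_on: date = field(default_factory=date.today)
```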
Research from multiple contexts finds that structured CCC processes with trained facilitators demonstrate substantially higher inter-committee reliability than unstructured review. The SGPGI study found 34% higher inter-committee reliability for structured vs. unstructured processes; the JIPMER study found that ePortfolio-enabled committees reduced meeting duration by 41% while increasing the number of data points considered per trainee from 12 to 34 (Ananthakrishnan, 2019).
5.2 Faculty Calibration
Faculty calibration — the process of ensuring that different assessors share a common understanding of milestone level descriptors and apply them consistently — is both the most important and the most frequently neglected element of milestone-based assessment implementation. Without calibration, milestone ratings reflect idiosyncratic assessor standards rather than the shared developmental progression described in the milestone framework. Inter-rater reliability studies consistently find that uncalibrated milestone ratings have ICC values in the range of 0.40–0.60, conventionally interpreted as only poor-to-moderate agreement and inadequate for high-stakes progression decisions, while calibrated assessors achieve ICCs of 0.70–0.85 (Holmboe et al., 2017).
Calibration activities take several forms. Anchor exercises use written or video-recorded clinical vignettes rated against known milestone levels to help assessors anchor their ratings to the intended level descriptors. Discussion of actual trainee assessment data in Competency Committee settings — particularly discussion of cases where assessors have divergent ratings — is a powerful calibration tool because it addresses the specific points of inter-rater disagreement that are most consequential for the programme. Annual or biannual faculty development sessions focused on milestone rating, combined with routine inter-rater reliability monitoring through the ePortfolio system, maintain calibration over time.
The ePortfolio has an important role in facilitating calibration monitoring. When the system tracks the distribution of milestone ratings by assessor — identifying faculty who consistently rate all trainees at the same level regardless of performance, or who rate all trainees significantly above or below the programme mean — it provides programme directors with data to target calibration interventions. The Kasturba Medical College programme achieved ICC improvements from 0.58 at baseline to 0.81 after two years of systematic calibration, using quarterly standardisation exercises with video-recorded resident performances as the primary calibration tool (Holmboe et al., 2017).
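A minimal version of this assessor-monitoring logic is sketched below: it flags assessors who show no rating variance at all, or whose mean rating is an outlier relative to programme peers. The thresholds are illustrative starting points, not validated cut-offs.

```python
from statistics import mean, stdev

def flag_assessors_for_calibration(ratings_by_assessor: dict[str, list[float]],
                                   z_threshold: float = 2.0,
                                   min_ratings: int = 5) -> list[str]:
    """Flag assessors who give every trainee the same rating, or whose
    mean rating sits more than z_threshold standard deviations from the
    mean of assessor means, as targets for calibration interventions."""
    assessor_means = {a: mean(r) for a, r in ratings_by_assessor.items() if r}
    if len(assessor_means) < 2:
        return []
    overall = mean(assessor_means.values())
    spread = stdev(assessor_means.values())
    flagged = []
    for assessor, m in assessor_means.items():
        rs = ratings_by_assessor[assessor]
        no_variance = len(rs) >= min_ratings and len(set(rs)) == 1
        outlier = spread > 0 and abs(m - overall) / spread > z_threshold
        if no_variance or outlier:
            flagged.append(assessor)
    return flagged
```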
5.3 Managing Borderline Performance and Remediation Decisions
The Competency Committee’s most difficult function is making decisions about trainees whose performance is borderline — neither clearly progressing nor clearly failing, but occupying the uncertain territory between milestone levels where assessment data is insufficient, trajectory information is ambiguous, and the consequences of both over- and under-advancement are significant. Research on CCC decision-making consistently finds that borderline cases consume disproportionate committee time and produce the least reliable decisions (Hauer et al., 2016).
Several strategies improve the management of borderline cases. Requiring minimum assessment frequency before CCC review — and flagging to the committee when borderline trainees have insufficient assessment data — ensures that apparent borderline performance is not an artefact of inadequate sampling. Defining explicit criteria for escalating from milestone review to formal remediation, and for de-escalating from remediation back to standard monitoring, reduces committee uncertainty about what action is required. And engaging trainees in the discussion of borderline assessments — not just informing them of committee decisions, but explaining the evidence and seeking their perspective — supports developmental rather than purely evaluative framing of the process (ten Cate et al., 2015).
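Explicit escalation criteria can be encoded as simple, auditable rules. The sketch below uses hypothetical thresholds (two or more domains below expectation, persisting across two consecutive reviews) purely to illustrate the form such criteria might take.

```python
def needs_remediation_review(domain_shortfalls: dict[str, int],
                             consecutive_flagged_reviews: int) -> bool:
    """Hypothetical escalation rule: refer for formal remediation planning
    when two or more competency domains sit below the expected milestone
    level (positive shortfall) and the pattern has persisted across two
    consecutive Competency Committee reviews. Thresholds are illustrative."""
    domains_below = sum(1 for gap in domain_shortfalls.values() if gap > 0)
    return domains_below >= 2 and consecutive_flagged_reviews >= 2
```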
6. NMC CBME 2024 Documentation Requirements and Institutional Implementation
6.1 Regulatory Requirements
The NMC CBME 2024 postgraduate regulations specify documentation requirements that directly shape ePortfolio system design. Programmes must maintain records of milestone assessments, Competency Committee deliberations and decisions, trainee feedback provision, and remediation plans for trainees identified as requiring additional support. The PGMER-2023 documentation requirements specify minimum assessment frequencies, documentation timelines, and the specific instruments to be used for each competency domain (NMC, 2024).
These regulatory requirements are most efficiently met through a purpose-designed ePortfolio system that generates compliant records automatically — timestamped assessments, signed by the assessor, linked to specified milestone sub-competencies, and aggregated for Competency Committee review at mandated intervals. Manual, paper-based compliance with these requirements is technically possible but practically unsustainable at the scale and frequency the NMC framework mandates.
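The shape of such an automatically generated record might resemble the following sketch. The field names are illustrative rather than an NMC-mandated schema, and a production system would add digital-signature and audit-log machinery around it.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class ComplianceRecord:
    """Minimal shape of an automatically generated, audit-ready assessment
    record: timestamped, attributed to a signing assessor, and linked to
    the milestone sub-competencies it evidences."""
    assessment_id: str
    trainee_id: str
    assessor_id: str                   # the signing assessor
    instrument: str                    # e.g. "Mini-CEX"
    sub_competencies: tuple[str, ...]  # linked milestone sub-competencies
    recorded_at: datetime              # system-generated timestamp
    narrative: str                     # milestone-referenced feedback text
```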
6.2 Implementation Realities
The Indian CBME implementation literature documents both the potential and the challenges of milestone-based ePortfolio assessment at scale. The AIIMS longitudinal study tracked 3,600 residents across 24 specialties from 2019 to 2025, finding that programmes with robust ePortfolio integration achieved 23% higher rates of on-time milestone progression compared to programmes with minimal digital infrastructure. The Kasturba Medical College follow-up study found that graduates of milestone-based programmes received 26% fewer patient complaints, demonstrated 19% higher scores on peer assessment of clinical competence, and showed 34% greater engagement in continuing professional development than traditionally trained cohorts (Ananthakrishnan, 2019).
These outcomes are not achieved automatically. Faculty adoption follows a bimodal distribution in most institutions: approximately 42% of faculty complete assessments promptly and engage substantively with the milestone framework; approximately 31% require persistent encouragement and administrative support to achieve acceptable compliance, with the remainder falling somewhere between these two poles. Programmes that achieve sustained high faculty engagement share common features: streamlined mobile assessment tools that minimise documentation time; faculty recognition systems that acknowledge assessment excellence; integration of CCC preparation into protected faculty time; and programme director engagement that signals institutional commitment to the CBME framework as educationally valuable rather than merely regulatory (NMC, 2024).
The cost of comprehensive CBME implementation, including ePortfolio systems and CBME coordination staffing, is substantial. The Christian Medical College Ludhiana cost analysis calculated approximately ₹1,200–1,500 per resident per month, offset by improved training outcomes and a reduction in final examination failure rates from 18% to 11% over three years. The return on investment calculation is straightforwardly favourable, but the upfront institutional commitment required — in technology procurement, faculty development, and administrative support — demands explicit institutional decision-making rather than assuming that CBME will implement itself once a regulatory mandate exists.
7. From Milestone Data to Individual and Programme Development
7.1 Feedback to Trainees
The primary educational purpose of milestone-based assessment is formative: to provide trainees with specific, actionable information about where they are in their developmental trajectory and what they need to do to progress. This purpose is defeated when milestone ratings are communicated as numbers without explanatory narrative, when feedback is delivered in brief hallway conversations without reference to specific assessment evidence, or when trainees receive Competency Committee decisions without understanding the evidence that informed them.
Research consistently identifies timely, specific, milestone-referenced feedback as the most valued element of CBME assessment systems from the trainee perspective. An NBE survey of 5,200 postgraduate residents found that 82% reported that regular milestone discussions with faculty improved their learning trajectory (NMC, 2024). The elements trainees identified as most valuable were feedback that referenced specific observable behaviours, feedback that explained what they needed to do differently, and feedback that placed their current performance in the context of expected progression — all elements that require the assessor to engage explicitly with the milestone framework rather than offering generic impressionistic feedback.
7.2 Programme-Level Use of Milestone Data
Aggregate milestone data from across a trainee cohort provides programme directors and Medical Education Units with information about curriculum quality that is otherwise unavailable. When milestone ratings across a cohort consistently plateau at Level 3 in a specific sub-competency — communication skills under stress, for example, or systems-based practice — the pattern indicates that the curriculum is not providing adequate learning opportunities for progression in that area, rather than that every trainee in that cohort happens to have a personal deficit in the same domain.
Similarly, systematic comparison of milestone trajectories by rotation site, by academic year, and by faculty assessor identifies sources of curriculum strength (rotations associated with accelerated milestone advancement), calibration problems (assessors whose rating distributions are significantly discrepant from programme norms), and assessment quality concerns (rotations with low assessment frequencies or generic narrative feedback). This quality assurance function of aggregate milestone data is one of the most important institutional uses of ePortfolio infrastructure, and it requires the programme director dashboard capabilities described in the companion paper on progress mapping.
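Cohort-level plateau detection of the kind described above is a straightforward aggregation, sketched below with an illustrative plateau threshold; the same grouping logic extends to comparisons by rotation site or assessor.

```python
from collections import defaultdict
from statistics import mean

def cohort_plateau_domains(latest_ratings: list[tuple[str, str, float]],
                           plateau_level: float = 3.0) -> list[str]:
    """Given (trainee_id, sub_competency, latest_rating) tuples for a
    cohort, return the sub-competencies whose cohort mean sits at or
    below plateau_level: a signal of a curriculum gap rather than
    coincident individual deficits. The threshold is illustrative."""
    by_domain: dict[str, list[float]] = defaultdict(list)
    for _, domain, rating in latest_ratings:
        by_domain[domain].append(rating)
    return [d for d, rs in by_domain.items() if mean(rs) <= plateau_level]
```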
8. Conclusion
Milestone mapping using ePortfolios is not primarily a technical challenge. The technology exists, the milestone frameworks have been developed and refined over more than a decade of international experience, and the assessment instruments — Mini-CEX, DOPS, CBD, MSF — are well validated and widely used. The primary challenges are institutional and cultural: securing sustained faculty engagement with the assessment process; building Competency Committee processes that produce reliable, defensible decisions; ensuring that trainees experience milestone assessment as developmental support rather than surveillance; and generating the administrative infrastructure and leadership commitment that sustains these activities across the full duration of training programmes.
The evidence reviewed in this paper is consistent in its message: where these challenges are successfully addressed — where faculty are trained and calibrated, where ePortfolio systems generate interpretable dashboard views, where Competency Committees deliberate rigorously, and where trainees receive timely milestone-referenced feedback — outcomes are measurably better than in programmes where milestone-based assessment is implemented in name but not in substance. For Indian academic health centres implementing the NMC CBME framework, the evidence points clearly towards investment in the institutional foundations that make milestones meaningful: faculty development, robust ePortfolio infrastructure, and a Competency Committee culture committed to trainee development.
References
Ananthakrishnan, N. (2019). CoBALT: A competency-based assessment and learning tool for postgraduate medical education in India. Journal of Postgraduate Medicine, 65(2), 67–74. https://doi.org/10.4103/jpgm.JPGM_340_18
Chen, H. C., van den Broek, W. E. S., & ten Cate, O. (2015). The case for use of entrustable professional activities in undergraduate medical education. Academic Medicine, 90(4), 431–436. https://doi.org/10.1097/ACM.0000000000000586
Dreyfus, S. E., & Dreyfus, H. L. (1980). A five-stage model of the mental activities involved in directed skill acquisition (ORC 80-2). Operations Research Center, University of California Berkeley. https://doi.org/10.21236/ADA084551
Edgar, L., McLean, S., Hogan, S. O., Hamstra, S., & Holmboe, E. S. (2020). The milestones guidebook (Version 2020). Accreditation Council for Graduate Medical Education. https://www.acgme.org/milestones
Hauer, K. E., ten Cate, O., Boscardin, C. K., Iobst, W., Holmboe, E. S., Chesluk, B., Baron, R. B., & O’Sullivan, P. S. (2014). Understanding trust as an essential element of trainee supervision and learning in the workplace. Advances in Health Sciences Education, 19(3), 435–456. https://doi.org/10.1007/s10459-013-9474-4
Hauer, K. E., Vandergrift, J., Hess, B., Lipner, R. S., Holmboe, E. S., & Hood, S. (2016). Correlations between ratings on the resident annual evaluation summary and the internal medicine milestones and association with ABIM certification examination scores among US internal medicine residents, 2013–2014. JAMA, 316(21), 2253–2262. https://doi.org/10.1001/jama.2016.17105
Holmboe, E. S., Yamazaki, K., Edgar, L., Conforti, L., Yaghmour, N., Miller, R. S., & Hamstra, S. J. (2017). Reflections on the first 2 years of milestone implementation. Journal of Graduate Medical Education, 7(3), 506–511. https://doi.org/10.4300/JGME-07-03-43
National Medical Commission. (2024). Postgraduate medical education regulations: Competency-based assessment, milestone documentation, and Competency Committee review requirements. NMC. https://www.nmc.org.in
Norcini, J., & Burch, V. (2007). Workplace-based assessment as an educational tool: AMEE Guide No. 31. Medical Teacher, 29(9), 855–871. https://doi.org/10.1080/01421590701775453
Swing, S. R. (2007). The ACGME outcome project: Retrospective and prospective. Medical Teacher, 29(7), 648–654. https://doi.org/10.1080/01421590701392903
Swing, S. R., Beeson, M. S., Carraccio, C., Coburn, M., Iobst, W., Selden, N. R., Stern, D. T., & Wood, D. L. (2013). Educational milestone development in the first 7 specialties to enter the next accreditation system. Journal of Graduate Medical Education, 5(1), 98–106. https://doi.org/10.4300/JGME-05-01-33
ten Cate, O., Chen, H. C., Hoff, R. G., Peters, H., Bok, H., & van der Schaaf, M. (2015). Curriculum development for the workplace using Entrustable Professional Activities (EPAs): AMEE Guide No. 99. Medical Teacher, 37(11), 983–1002. https://doi.org/10.3109/0142159X.2015.1060308
ten Cate, O., & Scheele, F. (2007). Competency-based postgraduate training: Can we bridge the gap between theory and clinical practice? Academic Medicine, 82(6), 542–547. https://doi.org/10.1097/ACM.0b013e31805559c7
van der Vleuten, C. P. M., Schuwirth, L. W. T., Driessen, E. W., Dijkstra, J., Tigelaar, D., Baartman, L. K. J., & van Tartwijk, J. (2012). A model for programmatic assessment fit for purpose. Medical Teacher, 34(3), 205–214. https://doi.org/10.3109/0142159X.2012.652239
Published 31 March 2026