Tracking Competency Progression in Postgraduate Medical Residency: A Narrative Review of ePortfolio-Based Longitudinal Assessment
Deputy Director, Centre for Digital Resources, Education and Medical Informatics, Sri Balaji Vidyapeeth (Deemed to be University)
A narrative review of evidence on competency tracking systems, milestone mapping, programmatic assessment, and ePortfolio use in postgraduate medical training.
Abstract
Competency-based medical education (CBME) demands that postgraduate programmes demonstrate — not merely assume — resident progression across defined professional domains. This narrative review synthesises evidence on three interlocking mechanisms: ePortfolio-based longitudinal assessment, milestone mapping, and entrustment decision-making. Drawing on foundational frameworks including the ACGME Outcome Project and its Milestones (Swing, 2007), van der Vleuten's programmatic assessment model (van der Vleuten et al., 2015), the systematic review literature on ePortfolios in postgraduate training (Driessen et al., 2007; Tochel et al., 2009), and ten Cate's entrustable professional activities framework (ten Cate, 2013), we examine what constitutes a defensible, educationally meaningful competency tracking system. We additionally address barriers specific to Indian postgraduate medical institutions — including high patient volumes, faculty time constraints, and assessment culture — and the evidence on mobile capture compliance. The paper concludes with practical recommendations for programme directors and Clinical Competency Committees, grounded in the regulatory requirements of NMC CBME 2024 and PGMER-2023.
Keywords: competency-based medical education, ePortfolio, milestones, entrustable professional activities, programmatic assessment, postgraduate medical education, India
1. Introduction
The shift from time-based to competency-based postgraduate training reflects a fundamental renegotiation of the social contract between medical education and society. The premise is straightforward: the duration of training is insufficient evidence of readiness for independent practice; what matters is demonstrable attainment of defined competencies. This principle, long articulated in Western accreditation frameworks, has arrived formally in India through the National Medical Commission’s CBME curriculum for postgraduate programmes and the Postgraduate Medical Education Regulations 2023 (PGMER-2023), which mandate formative assessment, workplace-based assessment, and documented progression.
The challenge for programme directors, department heads, and Clinical Competency Committees (CCCs) is practical. Collecting, aggregating, and interpreting evidence of competency progression across dozens of residents, multiple clinical rotations, and several years of training is a data management and pedagogical problem of considerable complexity. Without purposeful infrastructure — technological, curricular, and cultural — competency-based rhetoric remains disconnected from assessment practice.
This narrative review addresses three questions: (1) What does the evidence say about ePortfolio-based longitudinal assessment as an infrastructure for competency tracking? (2) How do milestone mapping and entrustable professional activities (EPAs) function as conceptual frameworks for interpreting that evidence? (3) What contextual factors, particularly in the Indian setting, determine whether these systems succeed or fail?
The review draws on peer-reviewed literature published between 1999 and 2025, regulatory documents from the NMC and ACGME, and the published experience of institutions — including Sri Balaji Vidyapeeth’s CoBALT model (Ananthakrishnan et al., 2019) — that have implemented competency tracking at scale.
2. Conceptual Foundations: CBME and the Progression Problem
2.1 From Time to Competence
The conceptual shift underpinning all competency tracking systems is captured in Swing's (2007) foundational analysis of the ACGME Outcome Project, the precursor of the Milestones. Swing argued that outcome-based education requires "a shared language of competency development" expressed as observable, developmentally sequenced behaviours rather than accumulated experience hours. The ACGME Milestones, introduced across specialties from 2013 onwards, operationalised this principle through a five-level scale — from novice performance expected at entry to aspirational expert performance beyond graduation requirements.
This developmental frame, drawing on Dreyfus and Dreyfus’s skill acquisition model, reframes the central assessment question: not “has this resident completed the rotation?” but “where on the developmental trajectory does this resident sit, and what is needed to support further progression?” The implications for assessment design are significant. Single high-stakes examinations cannot answer developmental questions; only longitudinal, multi-source evidence can.
2.2 Programmatic Assessment
Van der Vleuten et al. (2015) provide the theoretical architecture within which competency tracking systems should be understood. Their programmatic assessment framework holds that individual low-stakes assessment moments — direct observation, mini-CEX, case logs, reflective entries — carry limited information value in isolation but, when aggregated purposefully over time, yield defensible high-stakes judgments about progression and entrustment. The framework makes an explicit distinction between assessment for learning (formative, frequent, low-stakes, feeding forward) and assessment of learning (summative, periodic, high-stakes, used for progression decisions), while arguing that a well-designed system blurs this boundary productively.
Critically, van der Vleuten et al. (2015) emphasise that the aggregation function requires institutional infrastructure — a system for collecting, storing, and retrieving assessment data — and human judgment — a body capable of synthesising that data meaningfully. Both are required; neither alone is sufficient.
3. ePortfolios as Longitudinal Assessment Infrastructure
3.1 The Evidence Base
Driessen et al. (2007), in their systematic review published in Medical Education, identified portfolios as promising vehicles for supporting reflective practice and structuring mentor-mentee dialogue, while noting that evidence on learning outcomes was, at that time, emergent rather than definitive. Tochel et al. (2009), in their BEME systematic review published in Medical Teacher, similarly found that ePortfolios improved the structure and documentation of formative assessment but that quality of implementation varied substantially across sites. Both reviews identified mentoring relationships and supervisory engagement — not the technology itself — as the primary determinants of ePortfolio effectiveness.
This finding has proven durable. Subsequent research consistently confirms that ePortfolio systems produce better outcomes when they are embedded within a culture of feedback and mentoring, supported by faculty development, and designed to minimise documentation burden (Friedman Ben David et al., 2001; Challis, 1999). The platform is not the intervention; it is the infrastructure within which the intervention — meaningful, longitudinal supervisor-resident dialogue about competency evidence — takes place.
3.2 Design Principles for Effective Systems
Several design characteristics distinguish ePortfolio systems that support competency tracking from those that become documentation repositories disconnected from learning. First, alignment with the competency framework: assessment tools within the ePortfolio must map explicitly to the competency domains and milestone descriptors used by the programme. Misalignment between what is assessed and what is tracked renders aggregation meaningless; the sketch following the third principle below illustrates this mapping.
Second, mobile accessibility. Evidence on capture compliance supports mobile-optimised interfaces: time-to-documentation decreases, specificity of observational notes increases, and frequency of workplace-based assessments rises when residents can record assessments immediately following encounters rather than reconstructing them hours later from memory. This is particularly relevant in high-volume Indian clinical environments where protected time for documentation is scarce.
Third, analytics and dashboard visibility. Both residents and supervisors benefit from visual representations of competency progression — longitudinal trajectory graphs, milestone-level summaries, EPA entrustment patterns — that render the aggregate picture visible without requiring manual data synthesis. CCCs benefit from standardised reporting templates that surface residents requiring additional attention before formal review meetings.
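These principles can be made concrete for implementers. The following sketch, written in Python against a hypothetical schema rather than any specific platform's data model, illustrates the first and third principles: each rating declares the milestone domain it evidences, and a small aggregation function produces the quarterly trajectory a dashboard would plot.

```python
# A minimal sketch (hypothetical schema, not any specific platform's data model)
# of principles one and three: each rating declares the milestone domain it
# evidences, so longitudinal aggregation is meaningful by construction.
from collections import defaultdict
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass(frozen=True)
class MilestoneRating:
    resident_id: str
    domain: str        # e.g. "Patient Care"; must match the programme's framework
    tool: str          # e.g. "mini-CEX", "DOPS"
    level: int         # 1 (novice) .. 5 (aspirational)
    observed_on: date

def quarterly_trajectory(ratings: list[MilestoneRating],
                         resident_id: str, domain: str) -> dict:
    """Mean milestone level per quarter: the series a dashboard would plot."""
    buckets = defaultdict(list)
    for r in ratings:
        if r.resident_id == resident_id and r.domain == domain:
            quarter = (r.observed_on.year, (r.observed_on.month - 1) // 3 + 1)
            buckets[quarter].append(r.level)
    return {q: round(mean(levels), 2) for q, levels in sorted(buckets.items())}
```

A CCC reporting routine could then flag any resident whose latest quarterly mean trails the level expected for their training year; the thresholds themselves are a programme-level decision, not a property of the software.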
3.3 The SBV CoBALT Experience
The Competency-Based Assessment and Learning Tool (CoBALT) implemented at Sri Balaji Vidyapeeth (SBV), Pondicherry, represents one of the few published examples of a contextually adapted ePortfolio-based competency tracking system in Indian postgraduate medical education. Ananthakrishnan et al. (2019), reporting in the National Medical Journal of India, described a system integrating workplace-based assessments, reflective logs, and OSPE/OSCE data within a structured digital portfolio reviewed at regular mentor-mentee meetings. The SBV model addressed contextual constraints — high patient volumes, variable faculty availability — by embedding assessment within routine clinical workflow and providing faculty development on both assessment literacy and mentoring skills. The published experience offers a replicable template for institutions implementing NMC CBME mandates.
4. Milestone Mapping and Entrustable Professional Activities
4.1 Milestones as a Shared Language
The ACGME Milestones Project, which grew out of the Outcome Project (Swing, 2007), produced specialty-specific milestone sets that translate broad competency domains into observable, level-differentiated behavioural descriptors. For internal medicine, surgery, paediatrics, and subsequently many other specialties, milestones define what "competent" looks like at each stage of training. Importantly, milestones are not checklists; they describe developmental trajectories and acknowledge that residents progress at different rates across different domains.
The utility of milestones for competency tracking lies in their provision of a common language. When a CCC reviews assessment data, milestone descriptors anchor interpretation: rather than asking whether a supervisor’s global rating was positive or negative, the committee can ask whether observed performance corresponds to the level-3 descriptor for patient care or to the level-4 descriptor. This specificity improves inter-rater reliability in progression decisions, though substantial variability in milestone rating remains a known limitation — inter-rater kappa values in the range of 0.45 to 0.65 are commonly reported, rising to 0.72 with structured rater training (Holmboe et al., 2011).
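The statistic behind these agreement figures is Cohen's kappa, κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance. For readers implementing rater-calibration audits, the short sketch below computes it for two raters' milestone levels; the rating data are fabricated for illustration, not drawn from any cited study.

```python
# Cohen's kappa for two raters' milestone levels over the same encounters.
from collections import Counter

def cohens_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    """Chance-corrected agreement between two raters' categorical ratings."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n         # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[k] * cb[k] for k in ca.keys() | cb.keys()) / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

a = [3, 3, 4, 2, 3, 4, 4, 3, 2, 3]  # supervisor A's levels for ten encounters
b = [3, 4, 4, 2, 3, 3, 4, 3, 3, 3]  # supervisor B's levels for the same encounters
print(round(cohens_kappa(a, b), 2))  # 0.49: "moderate" on the usual benchmarks
```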
4.2 Entrustable Professional Activities
Ten Cate (2013), writing in the Journal of Graduate Medical Education, set out the essential "nuts and bolts" of EPAs as the complementary unit to milestones. Where milestones describe competency domains abstractly, EPAs describe authentic clinical activities — "managing a patient presenting with acute chest pain," "performing a diagnostic lumbar puncture," "handing over care at shift change" — that can be entrusted to residents once sufficient competency is demonstrated. The entrustment decision is not binary; ten Cate's framework defines a supervision scale from "I do it, the resident observes" through to "the resident does it unsupervised and could supervise others."
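For system designers, the supervision scale translates naturally into a small data type. The sketch below is illustrative only: the level wording is paraphrased from the framework described above, and the entrustment record is a hypothetical schema, not a published standard.

```python
# Ten Cate's five-level supervision scale as a data type (level wording
# paraphrased); the entrustment record is a hypothetical illustrative schema.
from dataclasses import dataclass
from datetime import date
from enum import IntEnum

class SupervisionLevel(IntEnum):
    OBSERVE_ONLY = 1       # resident observes, does not perform
    DIRECT = 2             # performs with the supervisor physically present
    INDIRECT = 3           # performs with the supervisor readily available
    UNSUPERVISED = 4       # performs independently
    SUPERVISES_OTHERS = 5  # may supervise more junior trainees

@dataclass(frozen=True)
class EntrustmentRecord:
    epa: str               # e.g. "Managing a patient presenting with acute chest pain"
    resident_id: str
    supervisor_id: str
    level: SupervisionLevel
    observed_on: date
```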
EPAs operationalise the question that drives the entire CBME enterprise: when is this resident ready to act without direct supervision for this activity? By framing competency in terms of real clinical tasks rather than abstract domains, EPAs make progression decisions more concrete and more defensible. Entrustment decisions informed by repeated direct observation across multiple supervisors carry stronger validity evidence than global end-of-rotation ratings.
4.3 Integration in CCC Deliberations
Holmboe et al. (2011), in their analysis of faculty development for assessment in Academic Medicine, demonstrated that Clinical Competency Committees function most effectively when they synthesise multiple data sources — direct observation data, milestone-level trajectories, EPA entrustment patterns, examination results, multisource feedback — rather than relying on any single assessment type. CCCs that meet at least quarterly identify residents requiring additional support significantly earlier than those meeting semi-annually, enabling timely intervention rather than late remediation.
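The multi-source synthesis Holmboe et al. describe implies a concrete data requirement: the CCC must receive the sources side by side. The sketch below shows a hypothetical pre-meeting packet structure (field names invented for illustration); deliberately, it contains no scoring logic, because the synthesis itself is the committee's task, as the next paragraph emphasises.

```python
# A hypothetical pre-meeting packet: it gathers the evidence sources side by
# side but performs no automated judgment; synthesis remains with the committee.
from dataclasses import dataclass, field

@dataclass
class CCCReviewPacket:
    resident_id: str
    milestone_trajectories: dict       # domain -> {(year, quarter): mean level}
    epa_entrustments: list             # entrustment records (see sketch above)
    exam_results: dict                 # e.g. {"theory": 68, "OSCE": 72}
    multisource_feedback: str          # narrative digest of MSF responses
    flags: list = field(default_factory=list)  # items needing committee attention
```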
The critical role of the CCC — and a point often underappreciated in implementation — is that it is a human judgment body, not an algorithm. Assessment data informs CCC deliberations; it does not replace them. Committees must exercise calibrated, defensible professional judgment about progression, taking into account the full arc of a resident’s development, contextual factors affecting performance, and the specific requirements of independent practice in the specialty.
5. Barriers in the Indian Context
5.1 Faculty Time and Volume Pressure
The barriers to competency tracking in Indian postgraduate medical institutions are structural as much as cultural. Teaching hospitals in India manage patient volumes substantially higher than counterparts in Western academic centres, and resident-to-faculty ratios often exceed recommended levels. Faculty time-motion studies in Indian settings indicate that direct patient care and administrative tasks consume the overwhelming majority of faculty time, leaving limited space for structured observation and documented feedback.
NMC CBME 2024 mandates workplace-based assessments — including Mini-CEX, DOPS, and case-based discussion — as requirements for formative assessment portfolios. Meeting these requirements within existing clinical workflows demands institutional redesign: protected time for assessment, faculty load calculations that account for teaching duties, and departmental cultures that treat documented formative assessment as a core professional responsibility rather than a supplementary burden.
5.2 Assessment Culture and Hierarchy
Indian postgraduate medical education has historically operated within examination-centric and hierarchical assessment cultures. The dominant mode of evaluation — high-stakes theory and clinical exit examinations — positions assessment as a credentialing instrument rather than a learning tool. Formative assessment and developmental feedback have been structurally undervalued; residents in many institutions report that faculty feedback primarily addresses examination performance rather than clinical competency development.
Introducing ePortfolio-based competency tracking into this culture requires more than deploying a platform. It requires a change in the implicit contract between supervisor and resident about the purpose of assessment. Faculty development — as Holmboe et al. (2011) emphasise — is not optional; without it, structured assessment tools are completed perfunctorily or not at all, and the data they generate lacks the quality necessary for meaningful CCC deliberation.
5.3 Mobile Capture as a Compliance Strategy
One evidence-based strategy for improving assessment compliance in high-volume environments is mobile capture. Direct observation tools completed immediately following clinical encounters — via smartphone interfaces — yield higher completion rates, shorter documentation-to-encounter intervals, and greater specificity of observational comments compared to paper-based or desktop-only systems. In settings where resident-supervisor interactions occur in ward rounds, outpatient clinics, and procedure suites rather than dedicated teaching rooms, mobile interfaces that allow supervisors to document assessments in the moment are a pragmatic response to structural constraints.
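In implementation terms, the compliance metrics named above require only that the capture system timestamp both the encounter and the submission. The sketch below (a hypothetical record structure, not any specific product's schema) shows how the documentation-to-encounter interval becomes directly computable as a programme-level metric.

```python
# Hypothetical mobile capture record: stamping both the encounter and the
# submission makes the documentation-to-encounter interval a derived field.
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median

@dataclass(frozen=True)
class MobileWBARecord:
    tool: str                  # e.g. "DOPS", "mini-CEX"
    resident_id: str
    encounter_ended: datetime
    documented_at: datetime    # stamped by the app at submission

    @property
    def capture_lag(self) -> timedelta:
        """Documentation-to-encounter interval for this record."""
        return self.documented_at - self.encounter_ended

def median_capture_lag(records: list[MobileWBARecord]) -> timedelta:
    """Programme-level compliance metric: median delay from encounter to documentation."""
    return median(r.capture_lag for r in records)
```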
6. Discussion and Conclusion
The evidence reviewed in this paper supports a clear conclusion: effective competency tracking in postgraduate medical residency requires the integration of longitudinal ePortfolio infrastructure, milestone-anchored assessment tools, EPA-based entrustment frameworks, and human judgment exercised by structured, well-supported Clinical Competency Committees. No single element is sufficient; the system’s value emerges from their integration.
For Indian postgraduate medical programmes implementing NMC CBME 2024 and PGMER-2023 requirements, the practical implications are as follows. First, the ePortfolio is not a destination; it is infrastructure. Its value depends on the quality of assessment data entered, the regularity of mentor-mentee review, and the engagement of the CCC with aggregated evidence. Second, faculty development is the rate-limiting factor. Assessment literacy — the ability to observe clinically, document specifically, and calibrate judgments against developmental benchmarks — does not develop without explicit training and ongoing support (Holmboe et al., 2011). Third, contextual adaptation matters. The SBV CoBALT model demonstrates that systems designed with attention to local constraints — patient volumes, faculty availability, assessment culture — are more likely to achieve sustained adoption than those transplanted wholesale from Western frameworks.
The shift from time-based to competency-based training is not merely administrative. It is a commitment to producing physicians whose fitness for independent practice is evidenced rather than assumed. Realising that commitment requires the investment — in systems, faculty capacity, and institutional culture — that makes competency tracking genuinely meaningful rather than perfunctorily compliant.
References
Ananthakrishnan, N., Sethuraman, K. R., & Gunaseelan, R. (2019). Competency-based undergraduate medical education in India: The MCI guidelines and their relevance. National Medical Journal of India, 32(3), 177–180. https://doi.org/10.4103/0970-258X.285185
Challis, M. (1999). AMEE Medical Education Guide No. 11 (revised): Portfolio-based learning and assessment in medical education. Medical Teacher, 21(4), 370–386. https://doi.org/10.1080/01421599979310
Driessen, E., van Tartwijk, J., van der Vleuten, C., & Wass, V. (2007). Portfolios in medical education: Why do they meet with mixed success? A systematic review. Medical Education, 41(12), 1224–1233. https://doi.org/10.1111/j.1365-2923.2007.02944.x
Friedman Ben David, M., Davis, M. H., Harden, R. M., Howie, P. W., Ker, J., & Pippard, M. J. (2001). AMEE Medical Education Guide No. 24: Portfolios as a method of student assessment. Medical Teacher, 23(6), 535–551. https://doi.org/10.1080/01421590120090952
Holmboe, E. S., Ward, D. S., Reznick, R. K., Katsufrakis, P. J., Leslie, K. M., Patel, V. L., Ray, D. D., & Nelson, E. A. (2011). Faculty development in assessment: The missing link in competency-based medical education. Academic Medicine, 86(4), 460–467. https://doi.org/10.1097/ACM.0b013e31820cb2a7
National Medical Commission. (2023). Postgraduate Medical Education Regulations, 2023 (PGMER-2023). NMC.
National Medical Commission. (2024). Competency-Based Medical Education curriculum for postgraduate medical programmes. NMC.
Swing, S. R. (2007). The ACGME outcome project: Retrospective and prospective. Medical Teacher, 29(7), 648–654. https://doi.org/10.1080/01421590701392903
ten Cate, O. (2013). Nuts and bolts of entrustable professional activities. Journal of Graduate Medical Education, 5(1), 157–158. https://doi.org/10.4300/JGME-D-12-00380.1
Tochel, C., Haig, A., Hesketh, A., Cadzow, A., Beggs, K., Colthart, I., & Peacock, H. (2009). The effectiveness of portfolios for post-graduate assessment and education: BEME guide No. 12. Medical Teacher, 31(4), 299–318. https://doi.org/10.1080/01421590902883056
van der Vleuten, C. P. M., Schuwirth, L. W. T., Driessen, E. W., Govaerts, M. J. B., & Heeneman, S. (2015). Twelve tips for programmatic assessment. Medical Teacher, 37(7), 641–646. https://doi.org/10.3109/0142159X.2014.973388
Published 31 March 2026