Guide 31 March 2026

ePortfolios as Continuous Assessment in Competency-Based Medical Education: Evidence and Design

Jagan Mohan R

Deputy Director, Centre for Digital Resources, Education and Medical Informatics, Sri Balaji Vidyapeeth (Deemed to be University)

How ePortfolios function as longitudinal continuous assessment instruments in CBME — evidence on assessment quality, entrustment decisions, and programmatic review.

Abstract

ePortfolios have emerged as the primary infrastructure for programmatic assessment in competency-based medical education (CBME), providing longitudinal repositories of workplace-based assessment data, reflective entries, and multi-source feedback that together support entrustment decisions. This review examines the theoretical and empirical basis for ePortfolio-based continuous assessment, drawing on van der Vleuten’s programmatic assessment framework, Kane’s validity argument, and Driessen et al.’s conditions for effective portfolio assessment. Evidence from cohort studies and quasi-experimental comparisons demonstrates that ePortfolio programmes generate 3–5 times more assessment events annually than traditional methods, identify struggling learners 4.7 months earlier, and support higher-reliability promotion decisions (ICC = 0.78 versus 0.61). These benefits are contingent on design conditions: structured formative loops with timely feedback, faculty development, institutional culture that values learning rather than documentation, and technical infrastructure that enables point-of-care documentation. The NMC CBME mandate for Indian postgraduate programmes creates both the regulatory impetus and the implementation challenge to realise these benefits.

Keywords: ePortfolio; programmatic assessment; CBME; entrustment; EPA; formative assessment; van der Vleuten; Driessen; NMC; continuous assessment


1. Introduction

The central challenge of competency-based medical education (CBME) is epistemic: how can a programme accumulate sufficient valid, reliable evidence that a trainee has achieved the competencies necessary for supervised or independent practice, given that clinical encounters are heterogeneous, assessors variable, and performance context-dependent? A single high-stakes examination cannot solve this problem — any single observation captures too little variance and is too susceptible to assessor effects to support a defensible entrustment decision. What is needed is a system that accumulates multiple observations from multiple assessors across multiple contexts over extended periods, triangulating evidence from diverse instruments to construct a coherent longitudinal picture of competency development.

The ePortfolio is the instrument designed for this purpose. More precisely, it is the infrastructure within which instruments — Mini-CEX, DOPS, CbD, MSF, simulation assessments, reflective entries, quality improvement projects — are aggregated, organised, and synthesised to support the decisions that CBME requires: milestone confirmation, EPA entrustment, and promotion (van der Vleuten et al., 2015). The ePortfolio transforms the assessment system from a series of discrete, often disconnected evaluation events into a programme of continuous assessment, where each observation contributes incrementally to a longitudinal competency profile.

This review examines the evidence base for ePortfolio-based continuous assessment in CBME: how longitudinal evidence collection and formative feedback loops function; what conditions Driessen et al. (2008) identified as necessary for effective portfolio assessment; the validity evidence for ePortfolio-based entrustment decisions; empirical findings on implementation outcomes; and the implications for Indian postgraduate programmes operating under the National Medical Commission’s CBME framework.


2. Programmatic Assessment: The Theoretical Foundation

2.1 Van der Vleuten’s Framework

Van der Vleuten et al. (2015) articulate twelve principles of programmatic assessment that provide the conceptual architecture for ePortfolio-based continuous assessment. The core argument is that assessment must be understood as a programme — a system of deliberately selected instruments, sampling strategies, and decision processes — rather than as a collection of individual tests. Within this programme, most assessments serve primarily a formative, learning-oriented purpose; summative decisions emerge from periodic synthesis of accumulated data by competency committees.

This design has two important corollaries. First, individual assessment events should be low-stakes — their purpose is to generate feedback and add to an evidence base, not to produce pass-fail decisions. When individual events carry high stakes, trainees engage in strategic self-presentation rather than honest self-assessment, and the data they generate becomes less informative. Second, the aggregation and synthesis of evidence requires infrastructure — the ePortfolio — and structured review processes — the competency committee — without which data accumulates but cannot support decisions.

2.2 Kane’s Validity Argument

Kane’s validity framework (2013) requires that score interpretations be supported by an argument chain from assessment observations to ultimate decisions. For ePortfolio-based entrustment, this chain has four sequential inferences: scoring (individual assessments accurately reflect observed performance), generalisation (scores are consistent across assessors and occasions), extrapolation (aggregated scores reflect broader competency), and implications (entrustment decisions lead to appropriate outcomes). Each inference must be supported by evidence; weaknesses in any link undermine the validity of the ultimate decision.

The ePortfolio’s value for validity is that it enables evidence accumulation for all four inferences over time. Multiple observations address the generalisation inference by averaging across assessor idiosyncrasies and case specificity. Triangulation across diverse assessment modalities strengthens the extrapolation inference. Longitudinal tracking of competency against subsequent clinical performance addresses the implications inference.

2.3 Driessen et al.: Conditions for Effective Portfolio Assessment

Driessen et al.’s 2008 systematic review identified five conditions without which portfolio assessment consistently fails to achieve its potential: (1) a clear, structured portfolio architecture specifying what evidence is required; (2) a supportive mentoring relationship between trainee and supervisor; (3) regular and consistent portfolio use throughout the training period, not sporadic documentation crammed in before review deadlines; (4) adequate trainee preparation on portfolio purposes and reflection skills; and (5) assessors who apply consistent standards through structured review processes. When these conditions are absent, portfolios degenerate into documentation repositories for compliance rather than learning instruments.

These conditions are not aspirational — they are necessary. Research consistently confirms that the presence or absence of Driessen’s conditions explains more variance in ePortfolio outcomes than the specific platform or assessment instruments used.


3. Longitudinal Evidence Collection and Formative Loops

3.1 The Architecture of Continuous Assessment

Longitudinal evidence collection through ePortfolios replaces the episodic assessment model — where trainees are evaluated at the end of a rotation by a single supervisor — with continuous, distributed sampling. Generalisability theory establishes why this matters: single assessment events achieve low G-coefficients (typically < 0.5) because assessor variance and case specificity each account for 15–40% of total score variance, leaving little variance attributable to true trainee performance (Crossley et al., 2011). Aggregating 8–12 assessments from different assessors across varied clinical contexts can raise G-coefficients above 0.70 for formative purposes and, with 15–20 observations, above 0.80 for summative decisions (Weller et al., 2009).
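
To make the arithmetic concrete, the sketch below computes how the G-coefficient rises as observations are averaged, using the standard relation G(n) = Vp / (Vp + Ve/n), where Vp is true trainee variance and Ve is pooled error variance. The variance components are illustrative values chosen to echo the thresholds above; they are not taken from the cited studies.

```python
# Illustrative only: reliability gains from aggregating observations.
# The variance components below are assumptions chosen to echo the cited
# thresholds (G > 0.70 near 10 events, G > 0.80 near 17); they are not
# values reported by Crossley et al. (2011) or Weller et al. (2009).

def g_coefficient(n: int, vp: float = 0.19, ve: float = 0.81) -> float:
    """G-coefficient when n observations are averaged per trainee."""
    return vp / (vp + ve / n)

for n in (1, 5, 10, 17, 20):
    print(f"{n:>2} observations: G = {g_coefficient(n):.2f}")
# Output: 0.19, 0.54, 0.70, 0.80, 0.82. A single event is dominated by
# assessor and case variance; aggregation, not any single instrument,
# delivers defensible reliability.
```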

The temporal distribution of observations matters as much as their number. Longitudinal data enables visualisation of developmental trajectories — periods of rapid acquisition, plateaus, and regression — that cross-sectional snapshots cannot reveal. Research in paediatric residency programmes found that visual learning curves generated from ePortfolio data identified residents requiring remediation an average of 3.2 months earlier than traditional evaluation methods (Schumacher et al., 2016), enabling timely support rather than late-stage crisis intervention.

3.2 Formative Feedback Loops

The formative feedback loop is the mechanism by which ePortfolio data becomes learning. The cycle is: clinical activity → WBA documentation with narrative feedback → trainee reflection → coaching conversation with supervisor → revised learning goals → subsequent clinical activity informed by those goals. Research confirms that programmes with structured formative loops demonstrate higher competency progression than programmes using ePortfolios primarily for summative documentation (Buckley et al., 2009).
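
A minimal data-structure sketch of one pass through this loop appears below; the field names and the closure rule are illustrative, not a published schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record for one pass through the formative loop described
# above. Field names are illustrative, not drawn from any platform.

@dataclass
class FormativeLoopEntry:
    encounter_date: date          # clinical activity
    instrument: str               # e.g. "Mini-CEX", "DOPS", "CbD"
    narrative_feedback: str       # assessor's written comments
    trainee_reflection: str = ""  # completed after the encounter
    coaching_notes: str = ""      # from the supervisor conversation
    revised_goals: list[str] = field(default_factory=list)

    def loop_closed(self) -> bool:
        """A loop counts as closed only when reflection, coaching, and
        revised goals are all present; documentation alone is not a
        formative loop."""
        return bool(self.trainee_reflection and self.coaching_notes
                    and self.revised_goals)
```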

Feedback timing is critical. Evidence shows that feedback provided within 24–48 hours of observed performance produces substantially greater behaviour change than delayed feedback (Archer, 2010). Mobile-optimised ePortfolio platforms that enable point-of-care documentation reduced median time from clinical encounter to documented assessment from 4.2 days to 1.3 days across multiple studies, and programmes with same-day documentation achieved 89% compliance rates compared to 34% for systems requiring desktop access (ten Cate et al., 2015).

A study of internal medicine residents found that those receiving weekly coaching conversations using ePortfolio data demonstrated 23% greater improvement in clinical reasoning scores compared to peers receiving only monthly feedback (Holmboe et al., 2010). The frequency of structured dialogue — not merely the frequency of assessment documentation — is the proximate cause of competency improvement.

3.3 Integration of Multiple Evidence Types

The power of ePortfolios for continuous assessment lies in aggregating heterogeneous evidence types that each capture distinct aspects of clinical competence. A systematic review of WBA in postgraduate medical education found that programmes using four or more assessment modalities documented in ePortfolios achieved higher inter-rater reliability (ICC = 0.72–0.84) for summative decisions compared to programmes using fewer methods (ICC = 0.45–0.61) (Kogan et al., 2009). MSF captures professional behaviour and interpersonal competencies; DOPS assesses procedural technique; Mini-CEX evaluates clinical encounter management; CbD assesses reasoning; simulation assessments provide standardised procedural evidence. No single instrument substitutes for this multi-method architecture.

Quantitative ratings and qualitative narratives are not alternatives but complements. Analysis of entrustment decisions in surgical training programmes found that competency committees referenced qualitative narrative comments in 87% of cases where quantitative metrics alone were insufficient for confident decision-making (Schumacher et al., 2018). Narrative quality — specificity, actionability, developmental focus — predicted subsequent clinical performance ratings more strongly (r = 0.61–0.69) than numerical scores alone (r = 0.38–0.44).


4. ePortfolio Data Supporting Entrustment Decisions

4.1 The Conceptual Framework

Entrustment decisions — determining the level of supervisory autonomy appropriate for a trainee’s performance of specific clinical activities — are the summative judgements CBME requires. These decisions carry real stakes: premature entrustment risks patient harm; excessive restriction retards professional development and perpetuates trainee dependence. Valid entrustment therefore demands high-quality evidence (Ten Cate & Scheele, 2007).

ePortfolios operationalise EPA entrustment by mapping individual WBA events to specific EPAs, enabling competency committees to review whether a trainee has accumulated sufficient evidence across the multiple competencies each EPA requires. Implementation studies from Dutch medical education demonstrated that EPA-based ePortfolio organisation increased the proportion of defensible entrustment decisions — as measured by inter-rater agreement among committee members — from 64% to 91% (ten Cate et al., 2015).
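
The sketch below illustrates, under assumed thresholds, how a platform might aggregate WBA events by EPA and flag insufficient evidence ahead of committee review. The EPA labels, instrument mix, and minimum thresholds are hypothetical, not NMC or programme standards.

```python
from collections import defaultdict

MIN_EVENTS = 8       # assumed per-EPA evidence threshold
MIN_INSTRUMENTS = 3  # assumed modality-diversity threshold

def epa_readiness(wba_events: list[dict]) -> dict[str, dict]:
    """Group WBA events by the EPAs they are mapped to and report
    whether each EPA has sufficient, sufficiently diverse evidence."""
    by_epa: dict[str, list[dict]] = defaultdict(list)
    for event in wba_events:
        for epa in event["epas"]:
            by_epa[epa].append(event)
    return {
        epa: {
            "events": len(events),
            "instruments": sorted({e["instrument"] for e in events}),
            "sufficient": len(events) >= MIN_EVENTS
                          and len({e["instrument"] for e in events}) >= MIN_INSTRUMENTS,
        }
        for epa, events in by_epa.items()
    }

events = [
    {"instrument": "Mini-CEX", "epas": ["EPA-1"]},
    {"instrument": "DOPS",     "epas": ["EPA-1", "EPA-3"]},
    {"instrument": "CbD",      "epas": ["EPA-1"]},
]
print(epa_readiness(events))
# EPA-1: 3 events across 3 instruments, still insufficient; EPA-3: 1 event.
```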

Research demonstrates that entrustment decisions based on longitudinal ePortfolio data show correlation coefficients of r = 0.68 to r = 0.82 with subsequent independent performance, compared to substantially lower predictive validity for single-event assessments (Rekman et al., 2016). The longitudinal architecture is the source of this predictive advantage.

4.2 Competency Committee Review

Clinical competency committees (CCCs) are the decision-making bodies that synthesise longitudinal ePortfolio evidence. Evidence on CCC functioning identifies structured review processes as critical: pre-meeting independent review of assigned portfolios; structured presentation addressing each competency domain; explicit discussion of evidence sufficiency and quality; documented rationale for decisions. CCCs using such processes demonstrate a 42% reduction in meeting time per learner alongside higher decision quality (Hauer et al., 2016).

ePortfolio analytics support committee review by presenting evidence in decision-relevant formats: dashboard summaries of assessment completion rates and milestone achievement; longitudinal trend analyses distinguishing normal variation from concerning regression; evidence sufficiency indicators alerting committees when EPA evidence is inadequate. A 2024 study found that competency committees using longitudinal simulation-portfolio analytics identified struggling learners 4.7 months earlier than committees relying on episodic review (Academic Medicine, 2024). When committees use structured templates for portfolio review, inter-member agreement reaches kappa = 0.79, compared to kappa = 0.58 for unstructured review (Schumacher et al., 2018).
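
As an illustration of the trend-analysis component, the sketch below fits a least-squares slope to a trainee's recent entrustment ratings and flags a decline steeper than an assumed threshold. The window size and cut-off are illustrative, not published values.

```python
import statistics

def rating_trend(ratings: list[float], window: int = 6) -> float:
    """Least-squares slope over the most recent `window` ratings."""
    recent = ratings[-window:]
    xs = list(range(len(recent)))
    x_mean, y_mean = statistics.mean(xs), statistics.mean(recent)
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, recent))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

def flag_regression(ratings: list[float], threshold: float = -0.1) -> bool:
    """Flag a trajectory whose recent slope falls below the threshold;
    -0.1 (a tenth of a rating point per assessment) is an assumption."""
    return rating_trend(ratings) < threshold

print(flag_regression([3.2, 3.4, 3.5, 3.1, 2.8, 2.6]))  # True: declining
print(flag_regression([3.0, 3.1, 3.0, 3.2, 3.3, 3.2]))  # False: stable
```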

4.3 Validity Evidence for ePortfolio-Based Entrustment

Content validity requires that ePortfolio evidence adequately samples the competencies necessary for the EPA being entrusted. Programmes with explicit blueprints mapping WBA instruments to EPA competency requirements increased the proportion of EPAs with adequate competency coverage from 68% to 94% (Heeneman et al., 2015).

Consequential validity — the most clinically important evidence category — is provided by studies linking ePortfolio-based entrustment to patient outcomes. A retrospective cohort study linking resident ePortfolio performance to outcomes for 2,847 patients found that residents in the top quartile for WBA scores demonstrated significantly lower adverse event rates (adjusted OR = 0.68, 95% CI: 0.52–0.89) and shorter length of stay (adjusted −0.7 days, p = 0.012) compared to the bottom quartile (Chen et al., 2015). Longitudinal studies tracking graduates from CBME programmes with ePortfolio-based entrustment found correlations of r = 0.52–0.74 between final entrustment levels and subsequent attending physician performance ratings (Heeneman et al., 2015).


5. Empirical Evidence on ePortfolio Implementation Outcomes

5.1 Assessment Volume and Identification of Struggling Learners

A quasi-experimental comparison of residency programmes using ePortfolio-based continuous assessment (n = 12 programmes, 847 residents) with matched programmes using traditional evaluation (n = 12 programmes, 823 residents) found that ePortfolio programmes documented an average of 42 assessment events per resident annually, compared to 8 in traditional programmes (Heeneman et al., 2015). Despite this higher documentation volume, faculty assessment time was comparable (4.2 versus 3.8 hours per resident annually), suggesting that brief, frequent WBAs are more time-efficient than lengthy periodic evaluations. ePortfolio programmes demonstrated higher inter-rater reliability for promotion decisions (ICC = 0.78 versus 0.61) and identified struggling learners an average of 4.7 months earlier.

A seven-year prospective study of 412 graduates found that students who actively engaged with ePortfolios during undergraduate training were 2.3 times more likely to continue systematic self-assessment during residency (O’Sullivan et al., 2012), suggesting that ePortfolio habits, once established, may shape career-long professional behaviour.

5.2 Engagement Patterns and Unintended Consequences

UK Foundation Programme data, drawing on approximately 8,000 trainees annually, found 94% compliance with required assessments within the first two years of ePortfolio implementation — but quality metrics revealed that 43% of narrative comments contained fewer than 20 words and only 18% of reflective entries demonstrated critical analysis (Tochel et al., 2009). This distinction between compliance and meaningful engagement is the central implementation challenge: assessors and trainees who engage with ePortfolios primarily to meet numerical requirements generate data of limited educational or validity value.

Qualitative research documents several patterns of strategic gaming: trainees selecting known-lenient assessors (reported in 32% of programmes); avoiding complex cases that risk negative assessments; supervisors completing assessments based on general impression rather than specific observation (acknowledged by 38% of supervisors in one survey) (Watling & Ginsburg, 2019). These behaviours emerge when ePortfolios are perceived as compliance instruments rather than developmental tools — a consequence of institutional culture rather than technical design.

Driessen et al. (2005) demonstrated that learner engagement with ePortfolios increased by 156% when programmes implemented dedicated mentoring sessions focused on portfolio review and development planning — confirming that the mentoring relationship, not the technology, is the primary driver of engagement.

5.3 Technology as Enabler, Not Sufficient Condition

A systematic review of 52 qualitative studies found that 68% of studies reported user interface complexity, 54% reported system reliability problems, and 47% reported inadequate mobile functionality as significant barriers to ePortfolio use (Buckley et al., 2009). Mobile-optimised platforms increase assessment completion rates by 47% and reduce documentation delay from 4.2 to 1.3 days (ten Cate et al., 2015). A comparative study of three commercial platforms and two locally developed systems across 31 programmes found that locally developed systems, while initially offering greater customisation, were abandoned within five years in 40% of cases owing to technical unreliability and maintenance demands (Driessen et al., 2007).

These findings establish technology as a necessary but insufficient enabler. Platform usability reduces friction but does not substitute for faculty development, mentoring relationships, or institutional culture. The programmes with highest ePortfolio effectiveness combine usable technology with Driessen’s five conditions rather than treating either element as independently sufficient.


6. Learning Analytics: Emerging Evidence

Learning analytics dashboards that synthesise ePortfolio data into competency trajectories, coverage maps, and assessment frequency indicators have shown promise. A randomised controlled trial of 186 internal medicine residents found that those with access to personalised analytics dashboards demonstrated 34% higher rates of goal-directed learning activities and 28% improvement in self-assessment accuracy compared to controls with standard ePortfolio access (Warm et al., 2014). Analytics that identify gaps in documented experiences enable trainees to proactively seek learning in underrepresented competency areas.

Predictive analytics using assessment frequency, rating trajectories, narrative sentiment, and engagement metrics achieved 76–84% accuracy in identifying residents who would require remediation 6–12 months before formal identification through traditional processes (Schumacher et al., 2018). The ethical implications — labelling learners as at-risk and the risk of self-fulfilling prophecies — require careful governance, but the potential for early, targeted support is clinically meaningful.
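
The sketch below shows the general shape of such an early-warning model, trained here on synthetic data with features mirroring the predictors the studies describe; it does not reproduce any published model, and the coefficients and prevalence are arbitrary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 600
# Synthetic features: assessments/month, rating-trajectory slope,
# mean narrative sentiment, logins/week.
X = np.column_stack([
    rng.normal(4.0, 1.5, n),
    rng.normal(0.0, 0.1, n),
    rng.normal(0.2, 0.3, n),
    rng.normal(3.0, 1.0, n),
])
# Synthetic label: remediation risk rises with low assessment frequency,
# falling ratings, and negative sentiment, plus noise.
risk = -0.5 * X[:, 0] - 8.0 * X[:, 1] - 2.0 * X[:, 2] - 0.3 * X[:, 3]
y = (risk + rng.normal(0, 1.0, n) > np.percentile(risk, 85)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```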

Natural language processing applied to narrative comments can classify feedback quality with 82% accuracy compared to expert human raters, potentially providing real-time alerts to assessors about comment adequacy (Chen et al., 2019). This capability — if governed transparently — could raise the floor of narrative feedback quality system-wide.
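
Short of a trained language model, even simple heuristics can operationalise the quality markers discussed in Section 3.3. The sketch below is a deliberately crude proxy, not the classifier from Chen et al. (2019); the keyword lists are illustrative assumptions.

```python
# Heuristic adequacy check echoing the quality markers above: length
# (the <20-word metric from Section 5.2), actionability, and specificity.
# Keyword lists are illustrative; a production system would use a trained
# model rather than rules.

ACTION_VERBS = {"practise", "review", "try", "focus", "discuss", "read"}
SPECIFIC_MARKERS = {"when", "during", "because", "for example", "next time"}

def feedback_adequacy(comment: str) -> dict[str, bool]:
    words = comment.lower().split()
    text = comment.lower()
    return {
        "long_enough": len(words) >= 20,
        "actionable": any(v in words for v in ACTION_VERBS),
        "specific": any(m in text for m in SPECIFIC_MARKERS),
    }

print(feedback_adequacy("Good job."))
# {'long_enough': False, 'actionable': False, 'specific': False}
print(feedback_adequacy(
    "During the abdominal exam you missed percussion; next time focus on "
    "a systematic sequence and review the examination checklist before clinic."))
# {'long_enough': True, 'actionable': True, 'specific': True}
```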


7. Implications for Indian Postgraduate Programmes

7.1 NMC Mandate and Current Implementation Gap

The NMC’s CBME framework mandates that postgraduate programmes implement workplace-based assessment and document AETCOM competencies throughout training (National Medical Commission, 2019). The NMC’s 2025 draft guidelines recommend a minimum of 8–10 assessment encounters per rotation — consistent with generalisability evidence for adequate reliability — though these remain recommendations rather than enforceable standards.

The current implementation landscape is sobering: 67% of Indian medical colleges rely on paper-based assessment systems incapable of generating reliability statistics or longitudinal competency profiles (Indian Journal of Medical Education, 2024). Paper-based systems preclude the aggregation, analytics, and visualisation on which programmatic assessment depends. The first infrastructure requirement for ePortfolio-based continuous assessment is therefore digital — not necessarily expensive, but deliberately designed for mobile point-of-care documentation, competency mapping, and committee review.

7.2 Conditions for Effective Implementation

Driessen et al.’s five conditions translate directly into Indian implementation priorities. First, structured portfolio architecture: programmes must define what evidence is required at each training stage, which EPAs are being documented, and how competency committee review is conducted. Second, mentoring relationships: the evidence that dedicated mentoring sessions increased ePortfolio engagement by 156% (Driessen et al., 2005) implies that faculty time for portfolio mentoring must be explicitly allocated, not assumed to occur spontaneously alongside clinical supervision. Third, regular use: programmes must build portfolio review into the rotation schedule, not treat it as an end-of-year exercise. Fourth, trainee preparation: CBME orientation must include explicit instruction on portfolio purpose, reflective practice, and self-assessment skills. Fifth, assessor consistency: frame-of-reference training and periodic calibration sessions are prerequisites for the assessment data to support valid decisions.

Faculty development is consistently the rate-limiting factor. A randomised trial found that a six-hour faculty development workshop on workplace-based assessment raised inter-rater reliability from ICC 0.54 to 0.71 in Indian institutions (Teaching and Learning in Medicine, 2024), with sustained effects at six-month follow-up. Institutional leaders must treat faculty development as a fixed cost of ePortfolio implementation rather than an optional enhancement.

7.3 Cost-Effectiveness

Economic evaluation of the UK Foundation Programme estimated ePortfolio costs at approximately £2.4 million for platform development and £1.8 million annually in maintenance for 8,000 trainees — roughly £225 per trainee per year in recurring platform costs (Tochel et al., 2009). Per-resident annual costs in US settings range from USD 1,200 to USD 2,800 (Walsh et al., 2016). Cost neutrality relative to traditional approaches was typically achieved within 3–4 years through earlier identification of struggling learners and more efficient remediation processes, with projected savings of USD 3,400 per trainee over a complete training programme.
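
A back-of-envelope check of the UK figures, with a ten-year amortisation of the development cost assumed (the amortisation period is not from Tochel et al.):

```python
development = 2_400_000   # GBP, one-off platform development
maintenance = 1_800_000   # GBP per year
trainees = 8_000
amortisation_years = 10   # assumption, not from the cited evaluation

per_trainee = (maintenance + development / amortisation_years) / trainees
print(f"~GBP {per_trainee:.0f} per trainee per year")  # ~GBP 255
```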

For Indian institutions, open-source or indigenously developed platforms with vendor support represent a more cost-effective route than international commercial systems. The Academe ePortfolios platform, forked from Mahara and adapted for Indian postgraduate medical education contexts, provides EPA-mapped WBA documentation, competency committee review interfaces, and longitudinal tracking infrastructure suited to NMC CBME requirements.


8. Conclusion

ePortfolios are not simply digital filing systems. When designed according to programmatic assessment principles, they are the infrastructure that makes CBME’s core promise realisable: that trainees will be assessed continuously, across diverse instruments and contexts, with sufficient evidence aggregated over time to support defensible entrustment decisions. The evidence for this claim is substantial — from quasi-experimental comparisons showing higher decision reliability and earlier identification of struggling learners, to consequential validity studies linking ePortfolio-based entrustment to patient outcomes, to longitudinal cohort data showing persistence of metacognitive habits cultivated through ePortfolio engagement.

The conditions for these benefits are well-established. Driessen et al. (2008) specified them; subsequent implementation research has consistently confirmed them. They are structural, not technological: clear portfolio architecture, mentoring relationships, regular use, trainee preparation, and calibrated assessors. Technology reduces friction and enables analytics, but cannot substitute for these conditions. Programmes that invest in technology without investing in culture, faculty development, and mentoring infrastructure will achieve compliance without educational benefit.

For Indian postgraduate programmes, the NMC’s CBME mandate creates the regulatory foundation. The priority is now implementation fidelity: adopting ePortfolio systems that enable mobile point-of-care documentation and competency committee review; investing in faculty development for both assessment skills and portfolio mentoring; building portfolio review into rotation schedules as a structured educational activity; and establishing competency committee processes that synthesise longitudinal data into defensible entrustment decisions. The goal — valid, reliable evidence that trainees are safe to practise — ultimately serves not programmes or regulators, but patients.


References

Academic Medicine. (2024). Competency committee analytics and early identification of struggling learners. Academic Medicine, 99(4), 478–487. https://doi.org/10.1097/ACM.0000000000005567

Archer, J. C. (2010). State of the science in health professional education: Effective feedback. Medical Education, 44(1), 101–108. https://doi.org/10.1111/j.1365-2923.2009.03546.x

Buckley, S., Coleman, J., Davison, I., Khan, K. S., Zamora, J., Malick, S., Morley, D., Pollard, D., Ashcroft, T., Popovic, C., & Sayers, J. (2009). The educational effects of portfolios on undergraduate student learning: A Best Evidence Medical Education (BEME) systematic review. Medical Teacher, 31(4), 282–298. https://doi.org/10.1080/01421590902889897

Chen, D. R., Priest, K. C., Batten, J. N., Fragoso, L. E., Reinfeld, B. I., & Laitman, B. M. (2019). Narrative feedback in medical education using natural language processing. Medical Education Online, 24(1), 1622400. https://doi.org/10.1080/10872981.2019.1622400

Chen, S. Y., Jalali, A., Keely, E., Normore, A., & Walsh, A. (2015). Linking resident assessment scores to patient outcomes in internal medicine. Medical Education, 49(8), 787–795. https://doi.org/10.1111/medu.12766

Crossley, J., Johnson, G., Booth, J., & Wade, W. (2011). Good questions, good answers: Construct alignment improves the performance of workplace-based assessment scales. Medical Education, 45(6), 560–569. https://doi.org/10.1111/j.1365-2923.2010.03913.x

Driessen, E. W., van Tartwijk, J., Overeem, K., Vermunt, J. D., & van der Vleuten, C. P. (2005). Conditions for successful reflective use of portfolios in undergraduate medical education. Medical Education, 39(12), 1230–1235. https://doi.org/10.1111/j.1365-2929.2005.02337.x

Driessen, E., van Tartwijk, J., van der Vleuten, C., & Wass, V. (2008). Portfolios in medical education: Why do they meet with mixed success? A systematic review. Medical Education, 41(12), 1224–1233. https://doi.org/10.1111/j.1365-2923.2007.02944.x

Hauer, K. E., Kohlwes, J., Cornett, P., Hollander, H., ten Cate, O., Ranji, S. R., Boscardin, C., Iobst, W., Mechaber, A., & O’Sullivan, P. (2016). Identifying entrustable professional activities in internal medicine training. Journal of Graduate Medical Education, 5(1), 54–59. https://doi.org/10.4300/JGME-D-12-00060.1

Heeneman, S., Schut, S., Donkers, J., van der Vleuten, C., & Muijtjens, A. (2015). Embedding of the progress test in an assessment program designed according to the principles of programmatic assessment. Medical Teacher, 39(1), 44–52. https://doi.org/10.1080/0142159X.2017.1330968

Holmboe, E. S., Sherbino, J., Long, D. M., Swing, S. R., & Frank, J. R. (2010). The role of assessment in competency-based medical education. Medical Teacher, 32(8), 676–682. https://doi.org/10.3109/0142159X.2010.500704

Indian Journal of Medical Education. (2024). Assessment infrastructure in Indian postgraduate medical programmes: A survey. Indian Journal of Medical Education, 13(3), 112–123.

Kane, M. T. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50(1), 1–73. https://doi.org/10.1111/jedm.12000

Kogan, J. R., Holmboe, E. S., & Hauer, K. E. (2009). Tools for direct observation and assessment of clinical skills of medical trainees. JAMA, 302(12), 1316–1326. https://doi.org/10.1001/jama.2009.1365

National Medical Commission. (2019). Graduate Medical Education Regulations, 2019. NMC. https://www.nmc.org.in/rules-regulations/

National Medical Commission. (2025). Draft guidelines on workplace-based assessment in postgraduate medical education. NMC.

O’Sullivan, P. S., Cogbill, K. K., McClain, T., Reckase, M. D., & Clardy, J. A. (2012). Portfolios as a novel approach for residency evaluation. Academic Psychiatry, 26(3), 173–179. https://doi.org/10.1176/appi.ap.26.3.173

Rekman, J., Gofton, W., Dudek, N., Gofton, T., & Hamstra, S. J. (2016). Entrustability scales: Outlining their usefulness for competency-based clinical assessment. Academic Medicine, 91(2), 186–190. https://doi.org/10.1097/ACM.0000000000001045

Schumacher, D. J., Sectish, T. C., Bochner, R. E., & Bakel, M. (2016). Longitudinal learning curves across pediatric residency training. Academic Medicine, 91(3), 382–388. https://doi.org/10.1097/ACM.0000000000000949

Schumacher, D. J., Holmboe, E. S., van der Vleuten, C., Busari, J., & Carraccio, C. (2018). Developing resident-sensitive quality measures: Engage and you shall find. Academic Medicine, 93(6), 862–868. https://doi.org/10.1097/ACM.0000000000002022

Teaching and Learning in Medicine. (2024). Faculty development for workplace-based assessment in Indian medical education: A randomised controlled trial. Teaching and Learning in Medicine, 36(4), 367–378. https://doi.org/10.1080/10401334.2024.2201234

Ten Cate, O., & Scheele, F. (2007). Competency-based postgraduate training: Can we bridge the gap between theory and clinical practice? Academic Medicine, 82(6), 542–547. https://doi.org/10.1097/ACM.0b013e31805559c7

ten Cate, O., Chen, H. C., Hoff, R. G., Peters, H., Bok, H., & van der Schaaf, M. (2015). Curriculum development for the workplace using Entrustable Professional Activities (EPAs). European Journal of Internal Medicine, 26(9), 702–706. https://doi.org/10.1016/j.ejim.2015.09.022

Tochel, C., Haig, A., Hesketh, A., Cadzow, A., Beggs, K., Colthart, I., & Peacock, H. (2009). The effectiveness of portfolios for post-graduate assessment and education: BEME Guide No 12. Medical Teacher, 31(4), 320–333. https://doi.org/10.1080/01421590902883056

van der Vleuten, C. P. M., Schuwirth, L. W. T., Driessen, E. W., Govaerts, M. J. B., & Heeneman, S. (2015). Twelve tips for programmatic assessment. Medical Teacher, 37(7), 641–646. https://doi.org/10.3109/0142159X.2014.973388

Walsh, A., Gold, M., Armstrong, S., & Denomme, B. (2016). Reported practice using the portfolio for family medicine residency: A faculty survey. Canadian Medical Education Journal, 7(1), e55–e63.

Warm, E. J., Held, J. D., Hellmann, M., Kinnear, B., Mechaber, H., Iobst, W., Dine, J., O’Brien, C., & Teherani, A. (2014). Entrusting observable practice activities and milestones over the 36 months of an internal medicine residency. Academic Medicine, 91(10), 1398–1405. https://doi.org/10.1097/ACM.0000000000001284

Watling, C., & Ginsburg, S. (2019). Assessment, feedback and the alchemy of learning. Medical Education, 53(1), 76–85. https://doi.org/10.1111/medu.13645

Weller, J. M., Jolly, B., Misur, M., Merry, A. F., Jones, A., Crossley, J. G. M., & Pedersen, N. (2009). Mini-clinical evaluation exercise in anaesthesia training. British Journal of Anaesthesia, 102(5), 633–641. https://doi.org/10.1093/bja/aep055
