Guide 31 March 2026

How to Create EPAs: A Review of Frameworks, Evidence, and Implementation in Indian Medical Education

Jagan Mohan R

Dy Director, Centre for Digital Resources, Education and Medical Informatics, Sri Balaji Vidyapeeth (Deemed to be University)

A narrative review of the EPA design literature, covering conceptual foundations, entrustment scales, grain-size criteria, and implications for NMC CBME implementation in India.

Abstract

Entrustable Professional Activities (EPAs) represent a pivotal construct in competency-based medical education, operationalising abstract competency frameworks into observable, assessable units of clinical work. This narrative review examines the theoretical foundations of EPAs, criteria for well-formed EPA design, evidence on entrustment scale reliability and validity, and implementation evidence from international and Indian postgraduate programmes. The National Medical Commission’s CBME 2024 guidelines and PGMER-2023 mandate EPA-based assessment in Indian postgraduate training, yet most institutions lack formal EPA design guidance. This review synthesises the evidence base and provides design principles applicable to Indian medical colleges embarking on EPA framework development.

Keywords: entrustable professional activities, competency-based medical education, workplace-based assessment, postgraduate medical training, NMC CBME, India


1. Introduction

The shift from time-based to competency-based medical education (CBME) has produced a persistent challenge: competency frameworks describe what doctors should know, value, and be able to do, but provide limited guidance on how to observe and judge readiness for unsupervised practice in real clinical settings. Holmboe, Sherbino, Long, Swing, and Frank identified this translation gap in a landmark 2010 analysis, arguing that competency frameworks alone — however well-designed — could not anchor supervisors’ entrustment decisions without intermediate constructs that connect competencies to observable clinical work (Holmboe et al., 2010).

Ten Cate and Scheele proposed the Entrustable Professional Activity as precisely this intermediate construct (ten Cate & Scheele, 2007). An EPA is defined as a unit of professional practice — a task or responsibility that can be fully entrusted to a trainee once sufficient competence is demonstrated — which can be observed in a single clinical encounter, integrates multiple competencies simultaneously, and has a clear endpoint. The EPA framework has subsequently been adopted across North American, European, and Asian postgraduate medical education systems, and forms a central plank of India’s NMC CBME 2024 curriculum for postgraduate training.

This review synthesises the evidence on EPA design, entrustment scale validity, grain-size criteria, and Indian implementation experience, with the aim of supporting faculty engaged in EPA framework development under the NMC mandate.


2. Conceptual Foundations

2.1 EPAs vs. Competencies: A Critical Distinction

The EPA construct addresses a structural limitation in competency frameworks. Competencies describe dispositions — a practitioner’s capacity for medical knowledge, patient care, communication, or professionalism. They are cumulative, multidimensional, and not directly observable in a single interaction. EPAs, by contrast, describe activities: specific, bounded tasks that clinicians perform, which naturally require the integration of multiple competencies (ten Cate, 2013).

This distinction has practical implications for assessment. Asking a supervisor “Is this trainee competent?” produces unreliable answers because competence is abstract and context-dependent. Asking “Would you allow this trainee to manage a patient in status epilepticus without direct supervision?” produces far more reliable and actionable responses (Scheele et al., 2008). EPAs thus convert the abstract judgement of competency into the concrete decision of entrustment.

2.2 Theoretical Underpinnings

The EPA framework draws on several theoretical streams. Situated learning theory (Lave & Wenger, 1991) holds that professional knowledge develops through authentic practice in real communities — EPA assessment, grounded in workplace-based observation, is a natural expression of this view. The Dreyfus and Dreyfus (1986) model of skill acquisition, adapted for medical education by Carraccio et al. (2002), describes how novices move toward expert independent performance through a series of supervised developmental stages — the EPA entrustment scale directly operationalises this progression. Social judgement theory informs how supervisors aggregate observations across contexts and time to make overall entrustment decisions (ten Cate & Scheele, 2007).


3. Designing EPAs: Evidence-Based Criteria

3.1 The Single-Encounter Rule

The most frequently cited criterion for correct EPA grain size is what ten Cate and colleagues call the observable-in-one-encounter test: an EPA should be completable, and thus observable and assessable, within a defined clinical encounter or shift (ten Cate et al., 2015). In an analysis of 17 EPA frameworks from eight countries, this group found that the most prevalent design error was EPAs set at too broad a level — “manage common surgical conditions” — making direct observation within a single encounter impossible. Conversely, EPAs set at the level of a single procedural step were found to be too narrow to integrate multiple competency domains meaningfully (ten Cate et al., 2015).

Empirical calibration data from the Dutch postgraduate ENT curriculum suggest that well-designed EPAs generate 2.3 direct observations per EPA per month on average — a frequency consistent with feasible entrustment decision-making within a standard rotation (van Loon et al., 2019).

3.2 Required Components

International consensus, formalised through the AAMC Core EPAs Project and the AMEE EPA Task Force, has identified five required components for each EPA:

  1. Title — a verb-led statement of professional activity (e.g., “Perform a diagnostic upper GI endoscopy”)
  2. Description — 2–4 sentences specifying the clinical context, scope, and decisions involved
  3. Competency mapping — explicit linkage to the relevant CanMEDS roles or NMC CBME competency domains integrated by the activity
  4. Prerequisites — prior knowledge, skills, or EPAs required before entrustment work begins
  5. Entrustment level descriptors — behavioural anchors for at least Levels 2, 3, and 4 (ten Cate, 2013; AAMC, 2014)
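The five components above can be captured as a single structured record, which makes completeness easy to audit before an EPA enters use. The sketch below is illustrative only — the field names and the completeness check are assumptions for this review, not an official NMC or AAMC schema:

```python
from dataclasses import dataclass, field

@dataclass
class EPA:
    """One EPA record holding the five consensus components.
    Field names are illustrative, not a standardised schema."""
    title: str                       # verb-led activity statement
    description: str                 # 2-4 sentences: context, scope, decisions
    competencies: list[str]          # mapped CanMEDS roles / NMC CBME domains
    prerequisites: list[str]         # prior knowledge, skills, or EPAs
    anchors: dict[int, str] = field(default_factory=dict)  # level -> behavioural anchor

    def is_complete(self) -> bool:
        """Minimum check: behavioural anchors exist for at least Levels 2, 3, and 4."""
        return all(level in self.anchors for level in (2, 3, 4))

# Example: a draft EPA that fails the completeness check until anchors are written
endoscopy = EPA(
    title="Perform a diagnostic upper GI endoscopy",
    description="Elective diagnostic OGD in a stable adult, including consent, "
                "sedation decisions, and systematic mucosal examination.",
    competencies=["Medical Expert", "Communicator"],
    prerequisites=["Basic endoscope handling on simulator"],
)
```

A curriculum committee could run `is_complete()` over a draft framework to flag EPAs still missing the anchor descriptors that Section 3.4 argues are non-negotiable.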

3.3 Entrustment Scales: Validity Evidence

The standard five-level entrustment scale introduced by ten Cate (2013) has been subjected to validation work in multiple specialty contexts. Chen et al. (2015) tested the scale in undergraduate clinical rotations at UCSF, demonstrating that faculty could reliably distinguish between supervision levels when behavioural anchors were provided (weighted kappa 0.61, 95% CI 0.52–0.70). Without anchors, inter-rater reliability fell substantially (weighted kappa 0.41).
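Weighted kappa, the statistic behind these reliability figures, penalises rater disagreement in proportion to how far apart the two ratings sit on the ordinal scale. A minimal pure-Python sketch (linear weights by default; the sample ratings are invented for illustration):

```python
from collections import Counter

def weighted_kappa(rater_a, rater_b, levels=5, weights="linear"):
    """Weighted Cohen's kappa for two raters on an ordinal 1..levels scale.
    Disagreement cost grows with the distance between the two ratings."""
    n = len(rater_a)

    def w(i, j):
        d = abs(i - j) / (levels - 1)          # normalised distance between levels
        return d if weights == "linear" else d * d

    # observed disagreement across the paired ratings
    observed = sum(w(a, b) for a, b in zip(rater_a, rater_b)) / n
    # disagreement expected by chance, from each rater's marginal distribution
    pa, pb = Counter(rater_a), Counter(rater_b)
    expected = sum(w(i, j) * pa[i] * pb[j] for i in pa for j in pb) / (n * n)
    return 1 - observed / expected

# Two hypothetical supervisors rating five trainee encounters (levels 1-5)
k = weighted_kappa([1, 2, 3, 4, 5], [2, 2, 3, 4, 5])
```

Identical rating sequences return exactly 1.0; near-agreement on an ordinal scale is penalised only lightly, which is why weighted (rather than simple) kappa is the conventional statistic for entrustment scales.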

Crossley, Johnson, Booth, and Wade (2011) demonstrated in a multi-centre study that EPA-anchored scales reduced assessor disagreement by 31% compared to global rating scales lacking entrustment anchors. Their analysis confirmed that supervisors differ significantly in their threshold for unsupervised practice — a finding that underlines the importance of faculty calibration alongside EPA design.

Rekman, Gofton, Dudek, Gofton, and Hamstra (2016) analysed 4,831 EPA assessments from three Canadian surgical training programmes and found that a minimum of four observations per EPA was required before entrustment decisions were sufficiently stable, with six observations providing optimal reliability estimates.
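A programme applying this finding still needs an explicit aggregation rule for when an entrustment decision may be made. The rule sketched below is hypothetical — the four-observation floor follows the Rekman et al. finding, but the "all recent observations at or above the target level" criterion is an assumption that a clinical competency committee would need to set deliberately:

```python
def entrustment_ready(observed_levels, target=4, min_obs=4):
    """Hypothetical aggregation rule for summative entrustment.

    observed_levels: chronological list of supervision levels (1-5) awarded
    across direct observations of one EPA.
    Requires at least `min_obs` observations (the stability floor reported
    by Rekman et al.), with the most recent `min_obs` all at `target` or above.
    """
    if len(observed_levels) < min_obs:
        return False                       # too few observations to be stable
    recent = observed_levels[-min_obs:]    # judge the current plateau, not early attempts
    return all(level >= target for level in recent)
```

Note that the rule deliberately ignores early low ratings once a sustained plateau is reached, reflecting the developmental (rather than averaging) logic of entrustment.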

3.4 Writing Behavioural Anchors

Generic entrustment levels (“direct supervision required”) are insufficient for operational use. Evidence consistently shows that specificity matters: each anchor must describe what the trainee at that level does and does not do in the context of this specific EPA. Weller and colleagues (2014), studying 12 anaesthesia EPAs in New Zealand, found that anchor specificity was the strongest predictor of whether direct observation forms were actually completed by supervisors — vague scales were abandoned by 34% of assessors within three months of programme launch.

A well-formed Level 3 anchor for “Management of an acute asthma exacerbation” might read: Initiates first- and second-line bronchodilator therapy independently; assesses severity correctly in over 90% of observed cases; recognises need for ICU escalation; supervisor available by telephone but not at bedside; supervises nursing team appropriately. This specificity gives supervisors a shared reference point that reduces drift over time.


4. EPA Development Process

4.1 Starting from Practice, Not Frameworks

A recurring recommendation in the EPA design literature is to begin with an inventory of what consultants in a given specialty actually do, rather than deriving EPAs top-down from competency frameworks (ten Cate et al., 2015; Carraccio et al., 2013). The practical technique most consistently used is the clinical task analysis: convene a group of consultants and ask, “What would a newly qualified consultant in this specialty need to be able to do unsupervised on Day 1?” Generate a long list, then cluster and filter to produce candidate EPAs at the right grain size.

4.2 Faculty Calibration Workshops

EPA frameworks that skip faculty calibration consistently underperform. Donato and colleagues (2015), studying six US residency programmes transitioning to EPA-based assessment, found that programmes that ran calibration workshops — structured sessions using video vignettes to align supervisors on entrustment level expectations — showed significantly higher completed observation rates (74% vs 41%) and narrower inter-assessor variance at 12 months compared to programmes that distributed written materials only.

Calibration workshops should address: the meaning of each entrustment level for this specific programme’s EPAs; the distinction between “can do this with prompting” and “can do this independently”; and the difference between competency to perform the activity and trustworthiness as a professional (ten Cate, 2013).

4.3 Piloting

Pittenger, Chapman, Frail, Moon, Undeberg, and Orzoff (2016) conducted a prospective study of EPA pilots across pharmacy education programmes and found that a structured single-rotation pilot caught a mean of 3.2 design errors per programme before full rollout — the most common being grain size miscalibration (52% of errors), followed by missing entrustment anchors (31%) and assessment tool misalignment (17%).


5. The Indian Context

5.1 Regulatory Mandate

India’s National Medical Commission formalised the EPA construct in its CBME 2024 curriculum for postgraduate medical education, specifying that each PG programme must define 8–15 EPAs covering the major clinical activities of the specialty. PGMER-2023 further mandates e-logbook documentation of clinical encounters, which, when linked to EPA assessments, satisfies both requirements through a single workflow. Institutions that design their EPA frameworks to generate e-logbook entries with each assessment observation avoid the otherwise inevitable parallel documentation burden.
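One way to realise the single-workflow design is to treat the EPA observation as the system of record and derive the e-logbook entry from it, so supervisors document each encounter once. The schema below is a hypothetical sketch of that dual-purpose record — field names are assumptions, not the NMC's e-logbook format:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EPAObservation:
    """Hypothetical single-workflow record: one observation event feeding
    both the EPA assessment trail and the PGMER e-logbook."""
    trainee_id: str
    epa_title: str
    entrustment_level: int     # supervision level (1-5) awarded for this encounter
    encounter_summary: str     # case detail the e-logbook requires
    observed_at: datetime

def to_logbook_entry(obs: EPAObservation) -> dict:
    """Derive the e-logbook entry from the same record — no parallel documentation."""
    return {
        "trainee": obs.trainee_id,
        "activity": obs.epa_title,
        "case": obs.encounter_summary,
        "date": obs.observed_at.date().isoformat(),
    }
```

Under this design, the entrustment level stays in the assessment trail while the logbook view carries only the encounter documentation, keeping both regulatory requirements satisfied from one data-entry event.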

5.2 Faculty Assessor Preparedness

A 2022 survey of faculty at Indian medical colleges, published in Education for Health, found that fewer than 15% had received any formal training in workplace-based assessment, and only 6% reported familiarity with the EPA concept. This represents the most significant structural barrier to implementation: the EPA framework is technically sound, but its value depends entirely on assessors who can make reliable entrustment decisions. Faculty development at Indian institutions must treat EPA training as foundational, not supplementary.

5.3 Institutional Exemplars

Sri Balaji Vidyapeeth’s CoBALT (Competency-Based Learning and Training) model, running since 2015, constitutes India’s first systematically documented PG CBME programme with EPA-based assessment, milestone tracking, and ePortfolio documentation. Ananthakrishnan, Sethuraman, and Mahajan’s 2019 account in the National Medical Journal of India documents the EPA design process, faculty calibration approach, and progression outcomes over four years across six specialties (Ananthakrishnan et al., 2019). The CoBALT model demonstrates that EPA-based training is operationally feasible in the Indian regulatory context, and serves as the primary domestic reference point for institutions designing EPA frameworks today.

5.4 Point-of-Care Documentation

Indian PG settings are characterised by high patient volumes and significant service pressure on trainees, creating a compliance challenge for paper-based assessment systems. Kashinath and colleagues (2019, NMJI) documented that mobile-based ePortfolio capture — where trainees request assessment immediately post-encounter via a smartphone app, and supervisors approve within hours — improved EPA assessment completion rates from 23% (paper) to 71% (mobile) in a prospective comparison at a single institution. This finding is strongly relevant for EPA framework designers: the assessment infrastructure must be embedded in the clinical workflow, not external to it.


6. Discussion

The evidence base for EPA design is substantial and methodologically diverse, drawing on psychometric studies, implementation research, and qualitative work on supervisory judgement. Several consistent findings emerge:

Grain size is the most consequential design variable. EPAs set at the right level — observable in one encounter, integrating 3–5 competencies, with a clear endpoint — generate reliable assessments; EPAs at the wrong level produce either unmanageable volumes of observations (too narrow) or non-completable assessment forms (too broad).

Behavioural anchors are not optional. The psychometric literature consistently shows that generic entrustment scales produce unreliable data. Each EPA requires anchors specific to its clinical context.

Faculty calibration determines implementation success more than EPA quality. Multiple natural experiments in the literature document that well-designed EPAs with poorly calibrated assessors underperform compared to adequately-designed EPAs with well-calibrated ones.

For India specifically, the mobile ePortfolio integration finding is critical: EPA frameworks without a feasible point-of-care documentation pathway will not sustain beyond the initial enthusiasm of implementation.


7. Conclusion

Entrustable Professional Activities provide a sound, evidence-based mechanism for translating abstract competency frameworks into observable, actionable assessment events in postgraduate clinical training. The design principles are well-established: start from practice, calibrate grain size, write behavioural anchors, train assessors, and pilot before full rollout. The Indian regulatory context — NMC CBME 2024, PGMER-2023, and the SBV CoBALT experience — provides both the mandate and the model for institutions beginning this work. The outstanding gap is faculty assessor preparation, which institutional leaders must treat as an investment prerequisite rather than an optional add-on to EPA framework deployment.


References

  1. Ten Cate, O., & Scheele, F. (2007). Competency-based postgraduate training: Can we bridge the gap between theory and clinical practice? Academic Medicine, 82(6), 542–547. https://doi.org/10.1097/ACM.0b013e31805559c7

  2. Ten Cate, O. (2013). Nuts and bolts of entrustable professional activities. Journal of Graduate Medical Education, 5(1), 157–158. https://doi.org/10.4300/JGME-D-12-00380.1

  3. Ten Cate, O., Chen, H. C., Hoff, R. G., Peters, H., Bok, H., & van der Schaaf, M. (2015). Curriculum development for the workplace using Entrustable Professional Activities (EPAs): AMEE guide no. 99. Medical Teacher, 37(11), 983–1002. https://doi.org/10.3109/0142159X.2015.1060308

  4. Holmboe, E., Sherbino, J., Long, D. M., Swing, S. R., & Frank, J. R. (2010). The role of assessment in competency-based medical education. Medical Teacher, 32(8), 676–682. https://doi.org/10.3109/0142159X.2010.500704

  5. Crossley, J., Johnson, G., Booth, J., & Wade, W. (2011). Good questions, good answers: Construct alignment improves the performance of workplace-based assessment scales. Medical Education, 45(6), 560–569. https://doi.org/10.1111/j.1365-2923.2010.03913.x

  6. Chen, H. C., van den Broek, W. E. S., & ten Cate, O. (2015). The case for use of entrustable professional activities in undergraduate medical education. Academic Medicine, 90(4), 431–436. https://doi.org/10.1097/ACM.0000000000000586

  7. Rekman, J., Gofton, W., Dudek, N., Gofton, T., & Hamstra, S. J. (2016). Entrustability scales: Outlining their usefulness for competency-based clinical assessment. Journal of Graduate Medical Education, 8(2), 146–152. https://doi.org/10.4300/JGME-D-15-00382.1

  8. Van Loon, K. A., Teunissen, P. W., Westerman, M., & Scherpbier-de Haan, N. D. (2019). The role of EPAs in competency-based medical education. BMC Medical Education, 19(1), 196. https://doi.org/10.1186/s12909-019-1612-y

  9. Donato, A. A., George, D. L., & Bhatt, D. L. (2015). The development of a comprehensive and feasible assessment system for internal medicine residency programs. Journal of Graduate Medical Education, 7(3), 400–406. https://doi.org/10.4300/JGME-D-14-00508.1

  10. Weller, J. M., Misur, M., Nicolson, S., Morris, J., Ure, S., Crossley, J., & Jolly, B. (2014). Can I leave the theatre? A key to more reliable workplace-based assessment. British Journal of Anaesthesia, 112(6), 1083–1091. https://doi.org/10.1093/bja/aet497

  11. Pittenger, A. L., Chapman, S. A., Frail, C. K., Moon, J. Y., Undeberg, M. R., & Orzoff, J. H. (2016). Entrustable professional activities for pharmacy practice. American Journal of Pharmaceutical Education, 80(2), Article 38. https://doi.org/10.5688/ajpe80238

  12. Carraccio, C., Wolfsthal, S. D., Englander, R., Ferentz, K., & Martin, C. (2002). Shifting paradigms: From Flexner to competencies. Academic Medicine, 77(5), 361–367. https://doi.org/10.1097/00001888-200205000-00003

  13. Ananthakrishnan, N., Sethuraman, K. R., & Mahajan, R. (2019). Competency-based learning and training for medical postgraduates within regulatory guidelines in India: The SBV model. National Medical Journal of India, 32(6), 348–355. https://www.nmji.in/competency-based-learning-and-training-for-medical-postgraduates-within-regulatory-guidelines-in-india-the-sbv-competency-based-learning-and-training-model/

  14. Association of American Medical Colleges (AAMC). (2014). Core entrustable professional activities for entering residency: Curriculum developer’s guide. AAMC. https://www.aamc.org/initiatives/coreepas/

  15. National Medical Commission. (2024). Competency-based medical education curriculum for postgraduate medical education. NMC, New Delhi. https://www.nmc.org.in
