Guide 31 March 2026

Delivering Effective Feedback to Postgraduate Medical Residents: A Narrative Review of Models, Barriers, and Evidence-Based Practice

Jagan Mohan R

Deputy Director, Centre for Digital Resources, Education and Medical Informatics, Sri Balaji Vidyapeeth (Deemed to be University)

A narrative review of feedback models, delivery methods, and barriers in clinical training — with implications for Indian postgraduate medical programmes.

Abstract

Feedback is among the most powerful influences on learning (Hattie & Timperley, 2007), yet its delivery in postgraduate medical training is frequently inadequate — too vague, too delayed, or too hierarchically constrained to support meaningful development. This narrative review examines the principal structured feedback models deployed in residency education — Pendleton’s rules (Pendleton et al., 1984), the SBI/SBID framework, and the R2C2 model (Sargeant et al., 2015) — alongside the foundational empirical and theoretical work of Ende (1983) and Hattie and Timperley (2007). We address documented barriers to effective feedback in clinical training, with particular attention to the Indian postgraduate medical context: hierarchical culture, time pressure, assessment misalignment, and the gap between feedback intention and resident experience identified by Bing-You and Trowbridge (2009) and Watling and Lingard (2012). We then examine the role of ePortfolio-triggered feedback in creating structured, documented, longitudinal feedback relationships. The paper concludes with practical recommendations grounded in NMC CBME 2024 requirements.

Keywords: feedback, residency, postgraduate medical education, R2C2, Pendleton, formative assessment, India, CBME


1. Introduction

In a landmark synthesis of twelve meta-analyses, Hattie and Timperley (2007) drew on 196 studies and 6,972 effect sizes to conclude that feedback is among the most powerful influences on student achievement — but only when it is specific, goal-referenced, and actionable. The same analysis revealed that feedback can also have negative effects on learning when it is directed at the self rather than the task, or when it threatens the learner’s identity rather than informing their practice.

This duality is nowhere more consequential than in postgraduate medical training, where the quality of feedback residents receive directly affects clinical skill development, professional identity formation, and, ultimately, patient safety. Yet the evidence on feedback in medical education consistently documents a substantial gap between what educators intend to deliver and what residents actually receive and use. Bing-You and Trowbridge (2009), in their commentary in JAMA, described this as a “feedback failure” — pervasive across training programmes, rooted in structural, cultural, and interpersonal barriers that structured models alone cannot resolve.

Understanding why feedback fails — and how structured, evidence-informed approaches can close the gap — is the central concern of this review. The paper proceeds as follows: Section 2 examines foundational definitions and the theoretical basis for feedback in medical education. Section 3 reviews the major structured feedback models, with evidence on their effectiveness. Section 4 addresses barriers to effective feedback delivery, with particular attention to the Indian postgraduate context. Section 5 examines the role of ePortfolio-mediated feedback in creating longitudinal feedback relationships. Section 6 offers evidence-based recommendations and conclusions.


2. Foundational Theory: What Is Feedback and Why Does It Matter?

Ende’s (1983) early articulation of feedback principles in JAMA remains foundational. He defined feedback in the clinical context as “information describing students’ or house officers’ performance in a given activity that is intended to guide their future performance in that same or in a related activity.” This deceptively simple definition contains three critical elements: information, performance, and intention to guide future action. Feedback that lacks any of these elements — that is purely evaluative, that refers to personal traits rather than observed performance, or that is delivered without intention to support development — does not meet the threshold.

Hattie and Timperley (2007) extended this foundation by proposing a four-level model of where feedback operates: at the task level (is this specific task done correctly?), the process level (what strategies and processes support task performance?), the self-regulation level (how is the learner monitoring and directing their own learning?), and the self level (what kind of person am I?). Their analysis showed that task-level and process-level feedback produce the strongest effects on learning; self-regulation feedback supports the development of independent learners; and self-level feedback — praise or criticism directed at personal attributes — frequently reduces performance by activating ego-protective responses rather than learning behaviours.

For clinical supervisors, this framework has direct practical implications. “Good case presentation” is self-level feedback masquerading as task feedback. “Your presentation was clear but lacked a differential diagnosis — what were you considering?” operates at the process level and opens dialogue. “You’ve noticed your own tendency to anchor on the first diagnosis — what strategies might you use to check for that?” operates at the self-regulation level and supports professional independence.


3. Structured Feedback Models: Evidence and Application

3.1 Pendleton’s Rules

Pendleton et al. (1984) introduced what became one of the most widely taught feedback structures in medical education: the learner first identifies what they did well; the observer reinforces these strengths; the learner then identifies areas for improvement; the observer adds their observations. The model was designed to reduce learner defensiveness by beginning with self-assessment and anchoring positive observations before addressing deficits.

The pedagogical logic is sound — placing self-assessment first activates metacognition, and affirming strengths before addressing weaknesses reduces the threat response. In practice, however, Pendleton’s rules have attracted criticism for rigidity: the prescribed sequence can feel artificial, particularly in urgent clinical situations or when significant safety concerns require immediate, direct communication. The model excels in routine formative feedback with junior residents, where building self-assessment capacity and reducing anxiety are the primary goals.

3.2 The SBI/SBID Framework

The Situation-Behaviour-Impact (SBI) framework and its extension, SBID (adding a Development step), emerged from the leadership development literature as a practically efficient, behaviourally grounded feedback structure. The framework directs the feedback-giver to describe the specific situation in which the behaviour was observed, describe the observable behaviour without inference or attribution, explain the impact of that behaviour on patients, team, or outcomes, and (in the SBID extension) collaboratively identify development strategies.

The value of the SBID framework lies in its insistence on behavioural specificity and causal connection. “You were unprofessional” is neither situation-specific nor behaviourally described. “During the ward round this morning (situation), when the patient asked about her prognosis, you redirected the question without addressing it (behaviour), which left her visibly distressed and meant the consultant had to re-engage the conversation (impact)” is observable, specific, and non-attributional. The framework’s compatibility with competency-based milestone descriptors — which are also behaviourally anchored — makes it particularly well-suited to formal assessment contexts and CCC documentation.

3.3 The R2C2 Model

Sargeant et al. (2015) published the R2C2 model — Relationship, Reaction, Content, Coaching — in Academic Medicine as an evidence-informed framework for feedback conversations when multi-source or formal assessment data is involved. The model’s central innovation is its explicit recognition that effective feedback is not information transmission but a relational and emotional process that must be actively managed.

The Relationship phase emphasises establishing psychological safety before introducing assessment data. The Reaction phase acknowledges that residents have emotional responses to feedback — particularly feedback that contradicts self-assessment — and that these reactions must be explored rather than bypassed before substantive discussion can occur. The Content phase involves collaborative interpretation of specific performance data. The Coaching phase shifts the interaction toward goal-setting and action planning, with the supervisor in a facilitative rather than directive role.

R2C2 was developed in response to documented failures of information-transmission models of feedback: residents frequently report receiving feedback that they intellectually understood but did not act upon, because they did not perceive it as credible, did not feel emotionally safe engaging with it, or did not know how to translate general observations into specific behavioural changes. By addressing the relational and emotional preconditions for feedback uptake, R2C2 is better suited to high-stakes or emotionally charged feedback contexts — including multi-source feedback review, mid-point assessments, and conversations with residents in remediation — than models that treat feedback as the communication of information alone.


4. Barriers to Effective Feedback: The Indian Context

4.1 The Feedback Gap

Bing-You and Trowbridge (2009), in their analysis of feedback failure in medical education, identified a persistent gap between faculty intentions and resident experience: faculty believe they are providing useful feedback; residents report receiving insufficient, vague, or unhelpful feedback. Watling and Lingard (2012), in a qualitative study published in Medical Education, examined how feedback culture — the norms, implicit rules, and relational dynamics governing feedback in a clinical environment — shapes whether feedback is sought, received, and acted upon, often more powerfully than the specific model employed. Their central finding was that residents quickly develop sophisticated, often accurate, judgements of when feedback is genuine and when it is ritualistic performance.

These findings have direct implications for Indian postgraduate medical programmes. Deploying a structured model cannot, by itself, create a feedback culture where none exists. The model is a tool; the culture is the condition under which the tool either works or does not.

4.2 Hierarchical Culture and Power Distance

The hierarchical structure of Indian medical education — codified in institutional norms and amplified by the “guru-shishya” relational model — creates power distances that impede bidirectional feedback exchange. Residents who perceive feedback as a unidirectional pronouncement from an authority figure, rather than a collaborative dialogue about professional development, are less likely to engage authentically, disclose uncertainties, or challenge interpretations they believe to be inaccurate. Faculty who equate hierarchy with authority may experience resident engagement as impertinence rather than as evidence of reflective self-assessment.

Surveys and studies of Indian postgraduate residents, including work published in Education for Health, consistently report that a substantial proportion of residents do not feel comfortable disclosing gaps in knowledge to senior faculty or questioning the feedback they receive. This is not a personality trait of Indian residents; it is a rational response to a culture in which such disclosure has historically carried professional risk. Changing this requires explicit institutional commitment, visible senior role-modelling of bidirectional dialogue, and the deliberate construction of psychological safety in feedback interactions.

4.3 Time Pressure and Volume Constraints

Teaching hospitals in India operate under patient-care pressures that compress educational time substantially. Time-motion data from Indian teaching hospitals indicate that faculty spend the overwhelming majority of their working time on direct patient care, leaving inadequate space for structured feedback conversations. Feedback in high-volume environments tends to default to brief, real-time correction — necessary for patient safety but insufficient for professional development — rather than the 15 to 20 minutes that structured models such as R2C2 require.

Addressing this barrier requires systemic intervention: explicit allocation of faculty time for educational activities, departmental norms that protect feedback conversations from clinical interruption, and assessment tool design that allows brief but structured feedback to be captured in real time and supplemented by more extended conversations at scheduled intervals.

4.4 Assessment Misalignment

NMC CBME 2024 mandates formative workplace-based assessment — Mini-CEX, DOPS, case-based discussion — as portfolio requirements for postgraduate residents. The intent is to create documented feedback records that reflect developmental progression across competency domains. The risk is that without genuine faculty engagement with the formative purpose of these tools, they become compliance exercises: forms completed, boxes ticked, signatures obtained, with no feedback conversation of substance occurring.

Achieving the intent of CBME assessment requirements demands faculty development focused on assessment literacy: understanding the difference between formative and summative purposes, ability to observe clinical performance specifically and document it behaviourally, and willingness to engage in authentic developmental conversations rather than globally positive ratings that insulate the assessor from the discomfort of critical feedback.


5. ePortfolio-Triggered Feedback and Longitudinal Relationships

The evidence on ePortfolio effectiveness (Driessen et al., 2007; Tochel et al., 2009) converges on a consistent finding: ePortfolios do not improve feedback quality in isolation. Their educational value emerges from the structured interactions they trigger — scheduled mentor-mentee meetings, portfolio review conversations, CCC deliberations — and the longitudinal feedback relationships they support and document.

The concept of a longitudinal feedback relationship — in which a designated supervisor tracks a resident’s development over time, maintains continuity of feedback, and develops the contextual knowledge necessary to calibrate observations accurately — is central to both van der Vleuten’s programmatic assessment framework (2015) and the practical logic of NMC CBME requirements. ePortfolios support this relationship by creating shared artefacts: the resident’s self-assessments, workplace assessment records, and reflective entries that both parties can review together, building a common understanding of developmental trajectory.

The ePortfolio-mediated feedback conversation has structural advantages over ad hoc clinical feedback. The presence of documented evidence reduces reliance on episodic recollection; the scheduled nature of portfolio review meetings creates protected time; the structured format provides a scaffold for covering multiple competency domains systematically rather than returning habitually to the resident’s most recent prominent performance. For supervisors, documented portfolios also provide the evidence base for CCC discussions — connecting individual feedback conversations to the institutional progression decision process.


6. Discussion and Conclusion

The evidence reviewed supports a model of effective feedback in postgraduate medical training that is specific, behaviourally grounded, developmentally calibrated, relational, and longitudinal. None of the major structured models — Pendleton, SBID, R2C2 — is universally superior; each is better suited to particular contexts. Pendleton’s rules support formative feedback with junior residents; SBID structures behavioural feedback for documented assessment purposes; R2C2 addresses the relational and emotional complexity of high-stakes or conflict-prone feedback conversations.

The more significant determinants of feedback quality, in the evidence reviewed, are cultural and structural. Watling and Lingard’s (2012) finding that feedback culture outweighs feedback technique echoes throughout the literature. Hattie and Timperley’s (2007) analysis shows that where feedback is directed — task, process, self-regulation, or self — matters as much as its content. Bing-You and Trowbridge’s (2009) documentation of the persistent gap between faculty intention and resident experience identifies a systemic failure that requires a systemic response.

For Indian postgraduate medical programmes implementing NMC CBME 2024 and PGMER-2023 requirements, three priorities are indicated. First, faculty development on feedback literacy — not model training alone, but deep engagement with the purpose, ethics, and cultural context of formative assessment — is essential and cannot be delegated to individual interest. Second, structured longitudinal feedback relationships, supported by ePortfolio infrastructure and scheduled review meetings, must be designed into programme architecture rather than left to emerge organically. Third, the hierarchical cultures that inhibit bidirectional feedback exchange must be directly addressed by institutional leadership, with visible commitment to psychological safety and explicit modelling of reflective, dialogic feedback practice by senior faculty.

Effective feedback to residents is not a communication technique. It is a professional and institutional commitment to the proposition that clinical competence is developed, not born, and that development requires honest, specific, timely, and caring information about the gap between current and desired performance.


References

Bing-You, R., & Trowbridge, R. L. (2009). Why medical educators may be failing at feedback. JAMA, 302(12), 1330–1331. https://doi.org/10.1001/jama.2009.1393

Driessen, E., van Tartwijk, J., van der Vleuten, C., & Wass, V. (2007). Portfolios in medical education: Why do they meet with mixed success? A systematic review. Medical Education, 41(12), 1224–1233. https://doi.org/10.1111/j.1365-2923.2007.02944.x

Ende, J. (1983). Feedback in clinical medical education. JAMA, 250(6), 777–781. https://doi.org/10.1001/jama.1983.03340060055026

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487

National Medical Commission. (2024). Competency-based medical education curriculum for postgraduate medical programmes. NMC.

Pendleton, D., Schofield, T., Tate, P., & Havelock, P. (1984). The consultation: An approach to learning and teaching. Oxford University Press.

National Medical Commission. (2023). Postgraduate Medical Education Regulations, 2023 (PGMER-2023). NMC.

Sargeant, J., Lockyer, J., Mann, K., Holmboe, E., Silver, I., Armson, H., Driessen, E., MacLeod, T., Yen, W., Ross, K., & Power, M. (2015). Facilitated reflective performance feedback: Developing an evidence- and theory-based model that builds relationship, explores reactions and content, and coaches for performance change (R2C2). Academic Medicine, 90(12), 1698–1706. https://doi.org/10.1097/ACM.0000000000000809

Tochel, C., Haig, A., Hesketh, A., Cadzow, A., Beggs, K., Colthart, I., & Peacock, H. (2009). The effectiveness of portfolios for post-graduate assessment and education: BEME guide No. 12. Medical Teacher, 31(4), 299–318. https://doi.org/10.1080/01421590902883056

van der Vleuten, C. P. M., Schuwirth, L. W. T., Driessen, E. W., Govaerts, M. J. B., & Heeneman, S. (2015). Twelve tips for programmatic assessment. Medical Teacher, 37(7), 641–646. https://doi.org/10.3109/0142159X.2014.973388

Watling, C., & Lingard, L. (2012). Toward meaningful evaluation of medical trainees: The influence of participants’ perceptions of the process. Medical Education, 46(8), 790–800. https://doi.org/10.1111/j.1365-2923.2012.04269.x

