Guide 31 March 2026

ABC of Feedback in Medical Education: Theory, Practice, and Evidence

Jagan Mohan R

Deputy Director, Centre for Digital Resources, Education and Medical Informatics, Sri Balaji Vidyapeeth (Deemed to be University)

A concise, evidence-based introduction to feedback theory in medical education: formative vs summative, models, receiving feedback, and programmatic approaches.

Abstract

Feedback is simultaneously the most frequently cited educational intervention in medical training and among the most poorly implemented. This paper provides a conceptually grounded introduction to feedback theory in medical education, proceeding from Ramaprasad’s (1983) original definition through to contemporary programmatic approaches. Key topics include the formative-summative distinction (Sadler, 1989), Hattie and Timperley’s (2007) four-level model of feedback direction, the evidence against the “feedback sandwich,” dialogic and co-constructed approaches to feedback, the receiving side of feedback including Telio, Regehr, and Ajjawi’s (2016) credibility framework, the role of self-regulated learning (Zimmerman, 2002), and the place of feedback within competency-based programmatic assessment (van der Vleuten et al., 2015). The paper draws on van de Ridder et al.’s (2008) systematic review of feedback definitions and Bing-You and Trowbridge’s (2009) diagnosis of feedback failure in medicine. The intended audience is clinical educators and postgraduate residents engaging with CBME requirements.

Keywords: feedback, formative assessment, summative assessment, medical education, self-regulated learning, programmatic assessment, feedback credibility, CBME


1. Introduction

The word “feedback” is used freely in medical education — in curriculum documents, supervision policies, accreditation standards, and faculty development materials. Yet when van de Ridder et al. (2008) conducted a systematic review of how the term was defined in the medical education literature, they found no consensus. Definitions ranged from “any information given to a learner about performance” to highly specified constructs requiring specific conditions of delivery, relationship, and intention. The terminological confusion is not merely academic: if educators do not share a common understanding of what feedback is, they cannot reliably deliver it, evaluate it, or improve it.

This paper begins, therefore, with definitions. It then traces the principal theoretical frameworks that explain how feedback works — and why it frequently does not. It examines the formative-summative distinction, the major models of feedback delivery, the neglected receiving side of feedback, and the role of feedback within programmatic assessment systems. Throughout, the emphasis is on evidence: what the research demonstrates, where the gaps lie, and what conclusions are warranted for clinical educators working within competency-based medical education frameworks.


2. Defining Feedback

Ramaprasad (1983), writing in the context of management science rather than education, offered what has become the most cited foundational definition: feedback is “information about the gap between the actual level and the reference level of a system parameter which is used to alter the gap in some way.” Three elements are essential in this definition: (1) a reference level — a standard or desired level of performance — must exist; (2) information about the gap between current and desired performance must be generated; and (3) that information must be used to alter the gap.
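Ramaprasad's definition comes from systems theory, and its three elements can be made concrete as a minimal control loop. The sketch below is illustrative only, not drawn from the paper; all names and values are hypothetical.

```python
# Illustrative sketch of Ramaprasad's (1983) three elements of feedback,
# expressed as a minimal control loop. All names and values are hypothetical.

def feedback_step(actual: float, reference: float, gain: float = 0.5) -> float:
    """One feedback cycle.

    (1) reference  -- the desired level of the system parameter
    (2) gap        -- information about the difference from the reference
    (3) the return -- the gap information is *used* to alter performance
    """
    gap = reference - actual          # element (2): gap information
    return actual + gain * gap        # element (3): information used to act

# If the gap is measured but never used, the loop never closes -- in
# Ramaprasad's terms that is assessment data, not feedback.
level = 2.0
for _ in range(10):
    level = feedback_step(level, reference=8.0)

print(round(level, 3))  # converges toward the reference level of 8.0
```

The point of the sketch is the third element: repeatedly generating gap information without the final line's update would leave `level` unchanged, which is exactly the portfolio-filing scenario described below.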

The third condition is critical and often overlooked. Information about a performance gap that is not used to change behaviour is not, in Ramaprasad’s sense, feedback — it is assessment data. This distinction has direct implications for how feedback is understood in medical education: the act of completing a Mini-CEX form and filing it in a portfolio is not feedback. The educational conversation in which that data informs a resident’s understanding of their performance, motivates reflection, and generates a concrete development plan — that is feedback.

Van de Ridder et al. (2008), synthesising definitions across the medical education literature, concluded that effective feedback requires: specific information about observed performance; comparison to a standard; intent to improve future performance; and sufficient relational context for the information to be received and acted upon. This four-part synthesis integrates Ramaprasad’s structural insight with the relational and motivational conditions that the medical education literature has subsequently elaborated.


3. Formative and Summative Feedback

The distinction between formative and summative assessment — between assessment for learning and assessment of learning — is one of the most important and most misunderstood in educational theory. Sadler (1989), in a foundational paper in Instructional Science, argued that formative assessment is not simply assessment that occurs before a final examination; it is assessment whose central purpose is to close the gap between current and desired performance while learning is still in progress. For this to work, the learner must understand the desired standard, be able to compare their current performance against it, and have access to strategies for closing the gap.

In postgraduate medical training, the formative-summative distinction matters for several reasons. First, the psychological conditions under which feedback can be received differ substantially between formative and summative contexts. In genuinely formative contexts — where the purpose is developmental and the stakes are low — residents are more willing to disclose uncertainties, acknowledge errors, and engage openly with critical information. In summative contexts — where the outcome affects progression, examination results, or formal records — self-protective responses are more readily activated, and residents are more likely to manage impressions than engage authentically with development.

Second, the formative-summative distinction is embedded in NMC CBME requirements for postgraduate training, which mandate workplace-based assessments as documented formative assessment records distinct from the high-stakes examinations that determine progression and certification. Faculty who conflate these purposes — treating formative assessment tools as proxy summative judgments, or using portfolio evidence primarily to identify candidates for failure — undermine the educational value of the formative infrastructure.

Van der Vleuten et al. (2015) propose a resolution to the formative-summative tension through programmatic assessment: a system in which multiple low-stakes, formative assessment moments are designed from the outset to accumulate into evidence that informs high-stakes progression decisions. In this model, individual feedback episodes remain genuinely formative — low stakes, development-focused, psychologically safe — while their aggregation over time provides the evidence base for summative judgments. The key is transparency: residents must understand how individual feedback moments relate to the broader evidence considered by Clinical Competency Committees.
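The aggregation logic of programmatic assessment can be sketched in code. The following is a hypothetical illustration, not an NMC specification or an implementation from van der Vleuten et al.; every class, field, and rating scale here is invented for the example.

```python
# Hypothetical sketch of programmatic aggregation: many low-stakes formative
# data points accumulate into the evidence a competency committee reviews.
# All names, fields, and scales are invented for illustration.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class FormativeRecord:
    tool: str           # e.g. "Mini-CEX", "CbD"
    rating: float       # a low-stakes rating, here on a notional 1-9 scale
    narrative: str      # the feedback conversation, not just the score

@dataclass
class Portfolio:
    records: list = field(default_factory=list)

    def add(self, record: FormativeRecord) -> None:
        # Each episode stays low stakes; no single record decides anything.
        self.records.append(record)

    def evidence_summary(self) -> dict:
        # The aggregate, not any single assessment, informs the high-stakes
        # progression decision made by the committee.
        return {
            "n_observations": len(self.records),
            "mean_rating": round(mean(r.rating for r in self.records), 2),
            "tools_used": sorted({r.tool for r in self.records}),
        }

p = Portfolio()
p.add(FormativeRecord("Mini-CEX", 6.0, "Anchored on presenting symptom; discussed a systematic approach."))
p.add(FormativeRecord("CbD", 7.0, "Systematic differential; good synthesis."))
print(p.evidence_summary())
```

The design choice mirrors the model's transparency requirement: each record carries its narrative, and the summary that reaches a committee is explicitly a function of all records, so residents can see how individual feedback moments relate to the cumulative evidence.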


4. How Feedback Works: Hattie and Timperley’s Model

Hattie and Timperley’s (2007) review of feedback research in the Review of Educational Research remains the most comprehensive synthesis of feedback effects available. Drawing on 12 meta-analyses encompassing 196 studies and 6,972 effect sizes, they concluded that feedback is among the most powerful influences on learning — with an average effect size of 0.79, nearly twice the average effect of all educational interventions — but that this average conceals enormous variability. Some feedback has very large positive effects; some feedback has negative effects on learning.

The variability is explained, in large part, by where feedback is directed. Hattie and Timperley identified four levels:

Task level: Feedback about whether specific work or a specific task has been done correctly. This is the most common form of feedback in clinical training — “that diagnosis is incorrect” or “your knot was not secure.” Task-level feedback is effective for simple, well-defined tasks but does not transfer readily to new contexts.

Process level: Feedback about the processes and strategies the learner is using. “You anchored on the presenting symptom without considering the full picture — what would a systematic approach look like?” This level improves understanding and transfer more effectively than task-level feedback.

Self-regulation level: Feedback that supports the learner’s capacity to monitor, evaluate, and direct their own learning. “You’ve identified the pattern yourself — how might you build a habit of checking for it?” Self-regulation feedback develops the independent learner who continues to improve beyond formal supervision.

Self level: Feedback directed at the learner as a person — praise (“you’re brilliant”), criticism of personal attributes, or comments about character. Hattie and Timperley found that self-level feedback frequently reduces performance by activating ego-protection rather than learning. “You’re an excellent clinician” is less useful than specific task or process-level information; personal criticism provokes defensiveness rather than reflection.

The practical implication for clinical educators is direct: effective feedback is not about saying nice things or avoiding harsh ones. It is about directing attention to the right level — primarily task and process for skill acquisition, self-regulation as the resident develops — and away from the self.


5. The “Feedback Sandwich” and Why It Fails

The “feedback sandwich” — embedding critical observations between positive comments — remains one of the most widely taught feedback techniques in medical education, despite the evidence against it. Studies examining resident experience of sandwich-format feedback consistently find that the critical message is diluted, obscured, or displaced by the surrounding positive comments. When positive feedback is delivered instrumentally — as a frame for criticism rather than as genuine recognition — experienced residents detect the formula and discount the positive observations. The predictability of the sandwich format signals to the resident that critical feedback is coming, potentially activating defensiveness before the critical content arrives.

Bing-You and Trowbridge (2009) cite the sandwich among the ritualistic feedback practices that contribute to feedback failure in medicine — practices that perform the appearance of feedback without delivering its substance. The underlying problem is that the sandwich is a strategy for managing the feedback-giver’s discomfort with delivering critical observations, not a strategy for maximising the resident’s learning. Feedback delivered honestly, specifically, and with genuine developmental intent does not require architectural concealment of its difficult elements.

The alternative to the feedback sandwich is not bluntness. It is specificity, context, and authentic positive observation when positive observation is warranted. Balanced feedback that identifies genuine strengths and genuine areas for development — without artificial sequencing designed to soften the critical content — is both more honest and more educationally effective.


6. Receiving Feedback: The Credibility Problem

Most feedback training focuses on the delivery side. The receiving side — how residents interpret, evaluate, and act on feedback — is equally important and substantially less well addressed in faculty development or resident orientation programmes.

Telio, Regehr, and Ajjawi (2016), in an important paper in Medical Education, introduced the concept of feedback credibility as a central determinant of whether feedback is acted upon. Their analysis showed that residents do not passively receive feedback; they actively evaluate its credibility based on the perceived competence and experience of the feedback-giver, the consistency of feedback with their own self-assessment, the relational context in which feedback is delivered, and the institutional feedback culture in which the interaction occurs. Feedback from a supervisor perceived as clinically expert, delivered in the context of a trusted relationship, consistent with observations from other respected sources, is acted upon. Feedback that fails on these dimensions — however technically well-structured — is dismissed, reattributed, or filed without consequence.

Watling (2014), extending this analysis in Perspectives on Medical Education, argued that feedback culture — the norms, tacit rules, and relational dynamics of a clinical department — determines whether feedback seeking and feedback uptake are rational behaviours for residents. In cultures where feedback is rare, formulaic, or primarily used for evaluation, residents quickly learn that seeking feedback carries risk and that received feedback is unreliable. In cultures where feedback is frequent, bidirectional, and visibly valued by senior clinicians, feedback seeking becomes the rational strategy and feedback uptake becomes the norm.

The implication is that improving feedback requires attention to culture and relationship, not only to technique. Residents’ feedback literacy — their ability to seek, interpret, and act on feedback — is developed within cultures that make those behaviours safe and rational, not within training sessions that teach feedback reception as a skill in isolation from the contexts that shape whether the skill is exercised.


7. Self-Regulated Learning and Feedback

Zimmerman (2002) described self-regulated learning as a cyclical process of goal-setting, strategy selection, self-monitoring, and self-evaluation — a metacognitive loop in which the learner is an active agent in directing their own development. The relationship between feedback and self-regulated learning is reciprocal: good feedback supports the development of self-regulation skills; self-regulated learners are better able to seek, interpret, and use feedback effectively.

For clinical supervisors, this framework points to a long-term educational objective that extends beyond the immediate feedback episode. Feedback that tells a resident what to do next addresses task-level performance. Feedback that engages the resident in analysing their own performance, identifying the processes that led to success or error, and designing strategies for practice and improvement — this feedback builds the self-regulation capacity that enables continued professional development throughout a career of independent practice.

This objective has direct relevance to the competency-based programmatic assessment framework. One of the developmental trajectories along which residents are expected to progress is the capacity for self-directed learning and evidence-based practice improvement. Supervisors who deliver consistently directive feedback — however accurate and specific — without cultivating the resident’s own evaluative and reflective capacities are, paradoxically, working against one of the core competencies they are expected to develop.


8. Feedback in Programmatic Assessment

Van der Vleuten et al. (2015) place feedback at the centre of programmatic assessment not because feedback is technically required by accreditation standards but because it is the mechanism through which the assessment data generated by a programme’s measurement tools actually connects to resident learning and development. In the absence of meaningful feedback conversations, workplace-based assessments become documentation requirements without educational function. The portfolio accumulates records; the resident’s competency may or may not develop.

The programmatic assessment model makes explicit what is implicit in the best clinical training: that individual assessment moments are building blocks in a longitudinal conversation about development. Each Mini-CEX, each case-based discussion, each reflective entry in a portfolio, is a data point that informs that conversation. The conversation — the feedback — is what makes the data educationally meaningful.

Bing-You and Trowbridge (2009) observe that faculty in typical residency programmes believe they are providing substantially more and better feedback than residents report receiving. This perception gap is not primarily a measurement artefact; it reflects a genuine difference between what supervisors count as feedback — any information communicated about performance — and what residents experience as feedback: specific, credible, actionable information delivered in a relational context that makes uptake and action possible. Closing this gap requires systematic attention to the conditions for effective feedback, not simply increased frequency of assessment tool completion.


9. Discussion and Conclusion

The ABC of feedback in medical education is neither simple nor reducible to a mnemonic. It is grounded in a theoretical tradition running from Ramaprasad’s (1983) systems definition through Sadler’s (1989) formative assessment framework, Hattie and Timperley’s (2007) empirical model of feedback direction, and Telio et al.’s (2016) analysis of the credibility conditions that determine whether feedback is used.

Three conclusions emerge from this review with particular force. First, feedback is a relational and cultural phenomenon as much as a technical one. Models and structures support good feedback; they do not generate it in the absence of trust, psychological safety, and institutional commitment to developmental rather than purely evaluative assessment cultures (Watling, 2014). Second, the receiving side of feedback — the resident’s evaluation of feedback credibility, their capacity for self-regulation, their feedback orientation — is as important as delivery and deserves explicit developmental attention in CBME programmes. Third, feedback within programmatic assessment is only educationally meaningful when it is connected to the longitudinal conversation about a resident’s developmental trajectory — when individual assessment moments are explicitly linked to cumulative evidence, reviewed regularly in mentor-mentee meetings, and used by Clinical Competency Committees as the basis for informed, defensible progression decisions.

Feedback, understood this way, is not a technique or a requirement. It is the primary educational mechanism through which postgraduate medical training fulfils its commitment to producing graduates whose competence is evidenced, whose self-regulatory capacities are developed, and whose patients are safer as a consequence.


References

Bing-You, R., & Trowbridge, R. L. (2009). Why medical educators may be failing at feedback. JAMA, 302(12), 1330–1331. https://doi.org/10.1001/jama.2009.1393

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487

Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28(1), 4–13. https://doi.org/10.1002/bs.3830280103

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144. https://doi.org/10.1007/BF00117714

Telio, S., Regehr, G., & Ajjawi, R. (2016). Feedback and the educational alliance: Examining credibility judgements and their consequences. Medical Education, 50(9), 933–942. https://doi.org/10.1111/medu.13063

van de Ridder, J. M. M., Stokking, K. M., McGaghie, W. C., & ten Cate, O. T. J. (2008). What is feedback in clinical education? Medical Education, 42(2), 189–197. https://doi.org/10.1111/j.1365-2923.2007.02973.x

van der Vleuten, C. P. M., Schuwirth, L. W. T., Driessen, E. W., Govaerts, M. J. B., & Heeneman, S. (2015). Twelve tips for programmatic assessment. Medical Teacher, 37(7), 641–646. https://doi.org/10.3109/0142159X.2014.973388

Watling, C. J. (2014). Cognition, culture, and credibility: Deconstructing feedback in medical education. Perspectives on Medical Education, 3(2), 124–128. https://doi.org/10.1007/s40037-014-0102-x

Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory Into Practice, 41(2), 64–70. https://doi.org/10.1207/s15430421tip4102_2

