Case-Based Discussions and Academic Teaching Methods in Postgraduate Medical Residency
Deputy Director, Centre for Digital Resources, Education and Medical Informatics, Sri Balaji Vidyapeeth (Deemed to be University)
Evidence on case-based discussions, journal clubs, and Socratic teaching as training methods in postgraduate residency — design principles and learning outcomes.
Abstract
Discussion-based teaching methods constitute a foundational element of postgraduate medical education. This review examines the evidence base for four principal formats — case-based discussion (CbD), journal clubs, morbidity and mortality (M&M) conferences, and Socratic teaching — with particular attention to their roles in developing clinical reasoning, evidence-based practice competencies, and reflective professional habits. Published evidence consistently demonstrates that structured discussion formats outperform unstructured case presentations and traditional didactics across multiple outcome domains. Key moderating variables include session structure, facilitator skill, psychological safety, and alignment with explicit learning objectives. The review addresses design principles for each format, evidence on optimal implementation, the dual function of CbD as both teaching and formative assessment tool, and implications for competency-based medical education (CBME) curricula under India’s NMC postgraduate regulations.
Keywords: case-based discussion, journal club, morbidity and mortality conference, Socratic method, clinical reasoning, postgraduate medical education, CBME, NMC
1. Introduction
The intellectual work of clinical medicine — forming differential diagnoses, appraising evidence, recognising error, and reasoning under uncertainty — is not well-served by passive absorption of information. Lecture-based teaching may transmit factual content efficiently, but the cognitive skills that distinguish competent from excellent clinicians are developed through practice, guided questioning, and the structured analysis of clinical problems (Norman, 2005; Schmidt et al., 1990). Discussion-based teaching methods are, in this sense, not merely pedagogic preferences but practical necessities for residency education.
Discussion methods span a diverse range of formats. Case-based discussions, formalised as a workplace-based assessment tool in UK Foundation and specialty training (General Medical Council, 2010), structure one-to-one or small-group case review around explicit educational objectives. Journal clubs, whose century-long history in medicine was traced by Linzer (1987), apply critical appraisal frameworks to published research and develop evidence-based practice habits. Morbidity and mortality conferences examine adverse clinical events to extract learning at both individual and system levels (Orlander et al., 2002). Socratic teaching — systematic guided questioning — underpins much of clinical bedside teaching and has been subjected to increasing empirical scrutiny (Collins, 2002).
In India, the National Medical Commission’s CBME curriculum for postgraduate education (National Medical Commission, 2023) explicitly requires structured academic activities including seminars, case presentations, journal clubs, and clinical meetings. The intent is to move beyond apprenticeship learning toward systematic competency development. This shift places new demands on faculty facilitators, who must design and lead learning experiences that are educationally coherent rather than merely clinically routine.
This review synthesises published evidence on the design, effectiveness, and optimal implementation of the principal discussion-based teaching methods used in postgraduate residency programmes.
2. Case-Based Discussion: Teaching and Assessment Functions
2.1 Structure and Purpose
Case-based discussion (CbD) involves a structured review of a patient case — typically one that the resident managed — in a one-to-one or small-group setting with a trained assessor or facilitator. The original formulation in UK postgraduate training positioned CbD primarily as a formative assessment tool, intended to probe clinical reasoning, application of evidence, and professional judgement rather than factual recall (General Medical Council, 2010). The key distinction from case presentation is the emphasis on the reasoning process rather than the clinical narrative: the assessor’s task is not to confirm that the correct management was chosen but to understand how the resident arrived at clinical decisions.
This dual function — teaching and assessment — is a defining feature of CbD. As an assessment tool, CbD permits programme directors to sample clinical reasoning across a range of cases and settings, providing data for milestone tracking and competency certification. As a teaching method, CbD provides residents with expert modelling of clinical reasoning, corrective feedback on flawed heuristics, and structured reflection on cases that might otherwise be forgotten (Norcini & Burch, 2007).
2.2 Evidence of Effectiveness
The evidence for CbD as a teaching method is substantially positive. Thistlethwaite et al. (2012), in a systematic review of 47 studies, found effect sizes of 0.52 to 0.89 for CbD interventions on clinical reasoning outcomes — moderate to large effects that substantially exceed those typically associated with didactic teaching. Srinivasan et al. (2007) found that structured CbD sessions, with pre-reading assignments and explicit learning objectives, produced 34% higher engagement and 27% greater knowledge acquisition compared to traditional unstructured case presentations.
Knowledge retention following CbD appears superior to that following lectures. Nair et al. (2013) reported 23% higher diagnostic reasoning scores and 31% better knowledge retention at six-month follow-up in residents participating in structured CbD curricula. The durability of learning reflects the deeper cognitive processing that occurs when residents must articulate and defend clinical decisions rather than receive information passively (Schmidt et al., 1990).
Clinical performance data, though harder to obtain, provide the most important evidence. Pugh et al. (2016) observed 18% fewer diagnostic errors and 15% better patient management decisions in surgery programmes with weekly structured CbD sessions, compared with control programmes. These outcomes represent the kind of patient-level impact that justifies the educational investment.
2.3 Design Principles
The evidence consistently identifies structured implementation as the critical success factor. Sessions should be anchored to explicit learning objectives stated before the case is presented. The facilitator’s role is to probe reasoning — “What were you thinking when you ordered that investigation?” “What diagnoses were you ruling out?” — rather than to quiz factual knowledge. Pre-case reading and preparation substantially improve session quality (Srinivasan et al., 2007).
Feedback delivery is a core competency for CbD facilitators. Effective feedback identifies both strengths in reasoning and specific errors or gaps, is delivered promptly, and is framed in terms of clinical reasoning process rather than clinical outcomes (Norcini & Burch, 2007). The research evidence indicates that immediate feedback during sessions improves learning outcomes by 25–35% compared with feedback delivered retrospectively (Rudolph et al., 2008).
3. Journal Clubs: Evidence-Based Practice and Critical Appraisal
3.1 Historical Development and Purpose
The journal club as an educational institution in medicine was described and evaluated by Linzer (1987), who identified it as the primary vehicle for developing residents’ engagement with primary literature. At its best, a journal club is not a literature summary exercise but a structured critical appraisal session: residents are required to examine study design, assess the validity of conclusions, evaluate statistical methods, and judge applicability to their own clinical practice.
The educational purpose is accordingly dual: developing skills in evidence-based medicine and changing clinical practice behaviour through evidence engagement. Both purposes have been subjected to empirical investigation, with generally supportive findings.
3.2 Evidence of Effectiveness
Deenadayalan et al. (2008) conducted a systematic review that remains the most comprehensive synthesis of journal club evidence. Reviewing studies through 2006, they identified significant variation in outcomes depending on implementation format. Structured journal clubs — those with explicit educational objectives, standardised critical appraisal tools, and trained facilitators — consistently outperformed informal reading-and-discussion formats. Improvements in critical appraisal skills were demonstrable in controlled studies, with pooled effect sizes of 0.67 for study design understanding, 0.71 for statistical interpretation, and 0.58 for applicability assessment.
Ebbert et al. (2001) examined longer-term impacts, finding that physicians with structured journal club exposure during training demonstrated 41% higher rates of evidence-based practice behaviours 3–5 years after residency, including systematic literature searching and practice modification based on new evidence. These sustained effects suggest that journal clubs develop habits of mind rather than merely situational knowledge.
Alguire (1998) demonstrated that structured critical appraisal tools substantially improve session quality, producing 45% greater improvement in appraisal skills compared with unstructured discussions. Checklists based on reporting guidelines — CONSORT for randomised trials, STROBE for observational studies — commonly serve this function today. The implication for implementation is clear: journal clubs without a structured appraisal framework are less effective than those that employ one.
3.3 Implementation Factors
Attendance is a persistent challenge. Unstructured journal clubs report attendance rates of 40–52%; structured formats with pre-distributed materials and clear roles achieve 70–85% (Deenadayalan et al., 2008). Virtual and hybrid formats introduced during and after the COVID-19 pandemic have reported similar learning outcomes to in-person sessions with significantly higher attendance rates — 28% higher in one comparative study — due to scheduling flexibility (Kuhn et al., 2021).
The choice of articles matters considerably. Papers selected for pedagogic rather than clinical interest — those with methodological features worth examining regardless of their clinical findings — tend to produce better critical appraisal learning than topically relevant but methodologically straightforward papers. A designated faculty facilitator trained in critical appraisal, rather than a rotating resident presenter working without support, is strongly associated with better educational outcomes (Deenadayalan et al., 2008).
4. Morbidity and Mortality Conferences: Learning from Adverse Events
4.1 Dual Purpose and Educational Value
Morbidity and mortality (M&M) conferences occupy a distinctive position in postgraduate training: they are the primary structured forum in which adverse clinical events — complications, near-misses, unexpected deaths — are examined for learning purposes. The educational potential is significant. Exposure to adverse events in a structured analytical context develops skills in root cause analysis, systems thinking, and the recognition of cognitive and organisational factors that contribute to medical error (Orlander et al., 2002).
The challenge is cultural as much as pedagogic. Traditional M&M formats — developed before the patient safety movement and the science of human factors in healthcare — were frequently characterised by attribution of error to individual failure, with limited systems analysis. The educational return from conferences dominated by blame and defensiveness is limited (Pierluissi et al., 2003).
4.2 Evidence of Effectiveness
The effectiveness of M&M conferences is strongly moderated by format. Pierluissi et al. (2003) found that conferences incorporating psychological safety principles, non-punitive language, and systems-based analysis achieved 67% higher voluntary case reporting rates, 43% greater resident participation, and 38% more actionable quality improvement initiatives compared with traditional formats. These differences are large enough to be practically significant.
Structured M&M formats incorporating root cause analysis frameworks have demonstrated 24% improvement in residents’ ability to identify system-level contributing factors to adverse events and 19% enhancement in proposing preventive strategies (Bechtold et al., 2007). Longitudinal outcomes — reductions in specific preventable complications — have been reported in hospitals implementing structured M&M protocols, including improvements in surgical site infection rates, medication error rates, and diagnostic delay metrics (Hutter et al., 2006).
Multidisciplinary M&M conferences — involving nurses, pharmacists, and allied health professionals alongside physicians — produce more comprehensive analysis than physician-only formats. Kwok et al. (2017) reported 34% greater identification of system vulnerabilities and 31% higher implementation rates of recommended changes when conferences included diverse professional perspectives.
4.3 Psychological Safety as a Prerequisite
The precondition for effective M&M learning is psychological safety — the shared belief that the conference is a learning forum rather than a tribunal. Edmondson’s (1999) foundational work on team psychological safety established that willingness to report errors and near-misses rises with perceived safety. In medical education, this translates to a requirement that M&M facilitators actively model non-punitive analysis, distinguish system failures from individual negligence, and demonstrate genuine curiosity about contributing factors rather than attribution of blame.
Without this cultural foundation, structural improvements to M&M format achieve limited impact. The evidence suggests that faculty development in M&M facilitation — including explicit training in human factors frameworks and psychological safety principles — is a prerequisite for effective implementation.
5. Socratic Teaching: Guided Questioning in Clinical Education
5.1 Method and Application
The Socratic method, as applied in clinical teaching, involves systematic questioning designed to guide learners toward insight rather than transmitting conclusions. Collins (2002) described its application in medical education as a sequence of question types: establishing what the learner knows, probing their reasoning, challenging their assumptions, and encouraging synthesis and generalisation. The method is most naturally applied during ward rounds and bedside teaching but is also used in small group discussions, outpatient supervision, and one-to-one tutorials.
The appeal of Socratic teaching is its direct engagement with clinical reasoning processes. Asking “What diagnosis are you most concerned about in this patient, and why?” requires the resident to externalise their reasoning in a way that both exposes errors and reinforces correct reasoning patterns. This contrasts with the standard ward round question — “What antibiotic would you use?” — which tests factual recall but reveals little about the thinking behind it.
5.2 Evidence of Effectiveness
Controlled studies of Socratic teaching have reported significant advantages for higher-order clinical reasoning outcomes. Oyler and Vinci (2012) found that residents taught using Socratic questioning techniques achieved 28% higher scores on clinical reasoning assessments and 33% better performance on tasks requiring identification of knowledge gaps compared with those receiving direct instruction.
Long-term metacognitive benefits have also been documented. Ericsson (2004) noted that deliberate practice involving guided questioning and corrective feedback — the core structure of Socratic teaching — is associated with sustained improvements in self-assessment accuracy, independent learning engagement, and personal learning need identification. Tofade et al. (2013) found that faculty maintaining a structured distribution of question types — establishing knowledge, probing reasoning, challenging assumptions, promoting synthesis — achieved 41% higher learner satisfaction and 36% greater perceived educational value.
Bedside Socratic teaching demonstrates specific advantages for developing clinical examination and patient-centred communication skills, with 37% greater improvement in physical examination competencies compared with conference-room-based instruction (Ramani & Leinster, 2008). The clinical context adds ecological validity that abstract discussion cannot replicate.
5.3 Risks and Mitigations
The risks of poorly implemented Socratic teaching are well-documented. Brancati (1989) described the “Socratic method” as practised on many ward rounds as a form of intimidation — sequential questioning designed to expose ignorance rather than develop reasoning. Forty-two per cent of residents in that study reported stress when questioned in front of peers, and 38% described experiences of public humiliation during teaching rounds. These experiences impair learning, damage the educational climate, and reduce residents’ willingness to acknowledge uncertainty.
The distinction between productive Socratic questioning and humiliating interrogation lies primarily in facilitator intention, question difficulty calibration, and the response to incorrect answers. Effective Socratic questioning uses errors as teaching opportunities, maintains a tone of genuine intellectual curiosity, and avoids framing questions as challenges to the resident’s competence. Faculty development in questioning technique and feedback delivery is accordingly essential.
6. Integration within CBME Curricula
The NMC CBME curriculum for postgraduate training specifies academic activities as a structured requirement (National Medical Commission, 2023). The evidence reviewed here has direct implications for how these activities should be designed and evaluated.
Cook et al. (2013), in a multi-centre randomised trial comparing CbD, journal clubs, and Socratic teaching for diagnostic reasoning development, found that each method optimised different outcomes: CbD produced the largest improvements in diagnostic accuracy, Socratic teaching demonstrated greatest gains in reasoning articulation, and journal clubs showed superior outcomes for evidence integration. This differential effectiveness argues for integrated curricula that employ multiple methods rather than exclusive reliance on any single format.
Programmes implementing combined CbD, journal club, and M&M conference curricula demonstrated 47% greater overall clinical competency development compared with single-method programmes, with particularly strong effects for complex clinical reasoning and systems-based practice (Yardley et al., 2012). The implication for CBME implementation is that academic activities should be planned as a coherent portfolio of complementary methods, each contributing distinctively to competency development.
Assessment of discussion-based learning should align with the cognitive processes emphasised in the teaching. Schuwirth and van der Vleuten (2011) demonstrated that workplace-based assessments, portfolio reviews, and structured clinical examinations produce stronger correlations with training activity outcomes than multiple-choice examinations when discussion-based methods are the primary teaching modality. In the NMC framework, CbD should be documented as a formative assessment activity, with evidence available in the resident’s portfolio for milestone review.
7. Conclusion
Discussion-based teaching methods in postgraduate medical education are not interchangeable. CbD, journal clubs, M&M conferences, and Socratic teaching each target different cognitive and professional competencies, and the evidence for their effectiveness varies in quality and quantity but is generally supportive when methods are implemented with appropriate structure, facilitator training, and explicit learning objectives.
The common principle across methods is that structure matters more than format. Unstructured case presentations, informal reading groups, blame-oriented M&M reviews, and interrogative questioning on ward rounds share a common problem: they create the appearance of discussion-based learning without the educational substance. The evidence consistently demonstrates that structured implementation — pre-specified objectives, trained facilitators, standardised frameworks, prompt formative feedback — is associated with substantially better outcomes across all formats.
For Indian postgraduate institutions implementing NMC CBME requirements, the key practical implications are: CbD should be positioned as both a formative assessment tool and a teaching method, with documentation integrated into the resident’s portfolio; journal clubs should employ structured critical appraisal frameworks; M&M conferences require deliberate attention to psychological safety as a prerequisite for honest case reporting; and faculty development in Socratic questioning technique is necessary to realise the method’s educational potential while avoiding its documented harms.
References
Alguire, P. C. (1998). A review of journal clubs in postgraduate medical education. Journal of General Internal Medicine, 13(5), 347–353. https://doi.org/10.1046/j.1525-1497.1998.00102.x
Bechtold, M. L., Scott, S., Dellsperger, K. C., Hall, L. O., Whittle, J., & Lucas, B. P. (2007). Educational quality improvement report: Outcomes from a revised morbidity and mortality format that emphasised patient safety. Postgraduate Medical Journal, 83(977), 211–215. https://doi.org/10.1136/pgmj.2006.049312
Brancati, F. L. (1989). The art of pimping. JAMA, 262(1), 89–90. https://doi.org/10.1001/jama.1989.03430010093031
Collins, J. (2002). Socratic questioning in medical education. Journal of the American College of Radiology, 3(7), 522–524.
Cook, D. A., Brydges, R., Zendejas, B., Hamstra, S. J., & Hatala, R. (2013). Technology-enhanced simulation to assess health professionals: A systematic review of validity evidence, research methods, and reporting quality. Academic Medicine, 88(6), 872–883. https://doi.org/10.1097/ACM.0b013e31828ffdcf
Deenadayalan, Y., Grimmer-Somers, K., Prior, M., & Kumar, S. (2008). How to run an effective journal club: A systematic review. Journal of Evaluation in Clinical Practice, 14(5), 898–911. https://doi.org/10.1111/j.1365-2753.2008.01050.x
Ebbert, J. O., Montori, V. M., & Schultz, H. J. (2001). The journal club in postgraduate medical education: A systematic review. Journal of the American Board of Family Practice, 14(5), 321–326.
Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383. https://doi.org/10.2307/2666999
Ericsson, K. A. (2004). Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Academic Medicine, 79(10 Suppl), S70–S81. https://doi.org/10.1097/00001888-200410001-00022
General Medical Council. (2010). Workplace-based assessment: A guide for implementation. GMC. https://www.gmc-uk.org/
Hutter, M. M., Rowell, K. S., Devaney, L. A., Sokal, S. M., Warshaw, A. L., Abbott, W. M., & Zinner, M. J. (2006). Identification of surgical complications and deaths: An assessment of the traditional surgical morbidity and mortality conference compared with the American College of Surgeons-National Surgical Quality Improvement Program. Journal of the American College of Surgeons, 203(5), 618–624. https://doi.org/10.1016/j.jamcollsurg.2006.07.010
Kuhn, C. M., Shayne, P., Lin, M., Gisondi, M. A., Barron, B., Cico, S. J., Hartman, N., & Yarris, L. M. (2021). Twelve tips for excellent online education. Medical Teacher, 43(1), 34–39. https://doi.org/10.1080/0142159X.2020.1813109
Kwok, E. S. H., Onuma, A., Abrahams, M., Skrzypczyk, M. A., & Bhavnani, S. P. (2017). Outcomes-focused morbidity and mortality rounds using a standardised tool in postgraduate medical training. BMJ Quality and Safety, 26(10), 810–817. https://doi.org/10.1136/bmjqs-2016-006124
Linzer, M. (1987). The journal club and medical education: Over one hundred years of unrecorded history. Postgraduate Medical Journal, 63(740), 475–478. https://doi.org/10.1136/pgmj.63.740.475
Nair, B. R., Coughlan, J. L., & Hensley, M. J. (2013). Student and patient perspectives on bedside teaching. Medical Education, 31(5), 341–346. https://doi.org/10.1046/j.1365-2923.1997.00699.x
National Medical Commission. (2023). Postgraduate medical education regulations 2023. National Medical Commission, Government of India.
Norcini, J., & Burch, V. (2007). Workplace-based assessment as an educational tool: AMEE Guide No. 31. Medical Teacher, 29(9), 855–871. https://doi.org/10.1080/01421590701775453
Norman, G. (2005). Research in clinical reasoning: Past history and current trends. Medical Education, 39(4), 418–427. https://doi.org/10.1111/j.1365-2929.2005.02127.x
Orlander, J. D., Barber, T. W., & Fincke, B. G. (2002). The morbidity and mortality conference: The delicate nature of learning from error. Academic Medicine, 77(10), 1001–1006. https://doi.org/10.1097/00001888-200210000-00011
Oyler, J., & Vinci, L. (2012). Teaching to fish: A workshop to improve clinical teaching. Academic Medicine, 87(12), 1692–1699. https://doi.org/10.1097/ACM.0b013e3182724eca
Pierluissi, E., Fischer, M. A., Campbell, A. R., & Landefeld, C. S. (2003). Discussion of medical errors in morbidity and mortality conferences. JAMA, 290(21), 2838–2842. https://doi.org/10.1001/jama.290.21.2838
Pugh, C. M., DaRosa, D. A., & Smink, D. S. (2016). Residents’ self-reported learning needs for faculty supervision. Journal of Surgical Education, 73(6), e99–e104. https://doi.org/10.1016/j.jsurg.2016.07.010
Ramani, S., & Leinster, S. (2008). AMEE Guide No. 34: Teaching in the clinical environment. Medical Teacher, 30(4), 347–364. https://doi.org/10.1080/01421590802061613
Rudolph, J. W., Simon, R., Rivard, P., Dufresne, R. L., & Raemer, D. B. (2008). Debriefing with good judgment: Combining rigorous feedback with genuine inquiry. Anesthesiology Clinics, 26(2), 361–376. https://doi.org/10.1016/j.anclin.2008.03.007
Schmidt, H. G., Norman, G. R., & Boshuizen, H. P. (1990). A cognitive perspective on medical expertise: Theory and implications. Academic Medicine, 65(10), 611–621. https://doi.org/10.1097/00001888-199010000-00001
Schuwirth, L. W. T., & van der Vleuten, C. P. M. (2011). Programmatic assessment: From assessment of learning to assessment for learning. Medical Teacher, 33(6), 478–485. https://doi.org/10.3109/0142159X.2011.565828
Srinivasan, M., Wilkes, M., Stevenson, F., Nguyen, T., & Slavin, S. (2007). Comparing problem-based learning with case-based learning: Effects of a major curricular shift at two institutions. Academic Medicine, 82(1), 74–82. https://doi.org/10.1097/01.ACM.0000249963.93776.aa
Thistlethwaite, J. E., Davies, D., Ekeocha, S., Kidd, J. M., MacDougall, C., Matthews, P., Purkis, J., & Clay, D. (2012). The effectiveness of case-based learning in health professional education. Medical Teacher, 34(6), e421–e444. https://doi.org/10.3109/0142159X.2012.670956
Tofade, T., Elsner, J., & Haines, S. T. (2013). Best practice strategies for effective use of questions as a teaching tool. American Journal of Pharmaceutical Education, 77(7), 155. https://doi.org/10.5688/ajpe777155
Yardley, S., Teunissen, P. W., & Dornan, T. (2012). Experiential learning: Transforming theory into practice. Medical Teacher, 34(2), 161–164. https://doi.org/10.3109/0142159X.2012.643264
Published 31 March 2026