Peer-Assisted Feedback (PAF) in Case-Based Learning (CBL) Tutorials in Undergraduate Medical Education
H Wiemer, I Siddiq, D Day, K Blake
Keywords
case-based learning (CBL) tutorials, peer assessment, self-assessment, tutor assessment, tutorial feedback, undergraduate medical education
Citation
H Wiemer, I Siddiq, D Day, K Blake. Peer-Assisted Feedback (PAF) in Case-Based Learning (CBL) Tutorials in Undergraduate Medical Education. The Internet Journal of Medical Education. 2013 Volume 3 Number 1.
Abstract
Objective
The purpose of this study was to investigate the feasibility and acceptability of a novel Peer-Assisted Feedback (PAF) protocol to students and tutors in pre-clerkship Case-Based Learning (CBL) tutorials. PAF involved weekly round-table feedback sessions including self, peer and tutor assessment.
Methods
Our study followed two CBL tutorial groups of first-year medical students and tutors at Dalhousie Medical School. The tutorial groups practiced the standard institutional feedback protocol for four weeks, and implemented PAF during the subsequent four-week intervention phase. Quantitative and qualitative data were gathered from questionnaires distributed before and after the introduction of PAF, from observational field notes of PAF sessions, and from semi-structured debriefing interviews with participating tutors.
Results
Quantitative data showed no improvement in student satisfaction with tutorial feedback after the four-week trial of PAF. Thematic analysis of qualitative data identified five major themes, namely: engaged group feedback, effects on group dynamics, personal barriers, resistance, and misconceptions about feedback. Qualitative analysis also revealed that the informal PAF method had a positive or neutral impact on tutorial group function.
Conclusion
PAF was found feasible at our institution and was generally acceptable to students and tutors, with some limitations, including personal discomfort and resistance to group feedback. The literature suggests that such barriers to feedback may be overcome with greater exposure to the protocol and training. The major advantage of the low-stakes PAF method over other novel tutorial feedback models that include high-stakes peer assessment is that PAF does not negatively affect the small-group learning atmosphere. In our trial, PAF enhanced tutorial group dynamics and facilitated a more comprehensive assessment of students by tutors.
Introduction
Medical students and physicians must be able to assess their own professional performance accurately [1, 2, 3]. The ability to provide and receive formative feedback (constructive praise and criticism) from peers and senior staff is of equal importance [4]. Adequate self-assessment and feedback skills are also indispensable for achieving positive learning outcomes in medical education, as is widely documented in the literature [5, 6, 7, 8, 9, 10]. Yet these skills are often only briefly introduced in undergraduate medical curricula, and practiced even less frequently.
One of the logical places where self-assessment and feedback skills could be taught effectively in medical schools is the small-group setting of a pre-clerkship tutorial conducted within a Problem-Based Learning (PBL) or Case-Based Learning (CBL) approach. These tutorials simulate future clinical teamwork as they facilitate the collective study and discussion of medical cases, hence replicating the very conditions under which assessments by self, peers and senior staff members are of prime practical relevance. Yet most standard tutorial feedback models rely on infrequent, tutor-led assessments alone. Thus, tutorial feedback remains too untimely and unspecific to effect significant change in student behaviour [11]. Peer feedback and self-reflection may not occur at all.
Background
Educators have taken note of this state of affairs and begun to test more comprehensive feedback protocols, mainly in Problem-Based Learning (PBL) tutorial settings. In these trials, written self and/or peer assessments were generally added to the customary tutor assessments. These adjustments of the standard feedback process were informed by various educational objectives, namely to improve student assessment [12, 13, 14], student learning [15], student capacity for self and peer assessment [16], student professionalism [17], tutor assessment by students [18] and overall PBL tutorial group functioning [19]. The results reported for these divergent aims are beyond the scope of this study. Our focus is, instead, on the general experiences reported in the literature with implementing the various elements (self, peer and tutor assessment) of comprehensive tutorial feedback models.
Formative assessment models may be evaluated according to criteria developed by Norcini et al. [20], namely validity or coherence, reproducibility or consistency, equivalence, feasibility, educational effect, catalytic effect, and acceptability, concepts that researchers apply in varying ways. In PBL settings, self-assessment has been found to be reliable but not valid [12], to correlate poorly with tutor ratings [13, 14], and to be uncomfortable for students [13, 21]. Most other researchers report that students regularly under-rate themselves, as most recently summarized by Papinczak et al. [13]. In one Brazilian study [12], however, undergraduate medical students consistently over-rated their own performance. Machado et al. hypothesize that age and cultural differences may account for discrepancies in self-assessment styles between Latin and English-speaking study populations.
Despite the various problems with tutorial self-assessment presented above, its inclusion into existing medical education feedback models is generally endorsed because of its important role in developing self-reflection and self-awareness skills [10, 13]. Moreover, student self-assessment accuracy is thought to be achievable with sustained practice [22], an assumption that is foundational for our own work.
Peer assessment was also found to be reliable but not valid in PBL tutorial settings [12], but it correlated more closely with tutor ratings [13, 14]. One research group observed that peer feedback had a positive impact on the quality of individual tutorial contributions, albeit only for below-average students [15]. Other researchers reported that peer assessment resulted in improved learning across the board [19]. Peer assessment was perceived by students as enhancing their engagement, confidence, motivation, and self-directed and life-long learning skills in tutorial, but its fairness as an assessment tool was not uniformly endorsed [16]. Peer assessment was also found to positively influence such professional behaviours as task performance, aspects of communication, and personal performance [17].
Significant problems with peer assessment in undergraduate medical tutorials have also been noted. In particular, students experienced difficulty in openly assessing their peers, finding it uncomfortable to risk group cohesion with critical remarks [21]. This led to a perception among students that peer feedback lacked relevance [19, 21]. Most importantly, however, distinctly negative effects of peer feedback on the PBL tutorial group atmosphere have been reported [19].
Researchers nevertheless advocate the use of peer assessment to reinforce educational objectives and reduce reliance on formal grading in the tutorial environment [9]. Successful implementation appears to be contingent on a generally trusting and confidential learning environment [4]. The practice of peer assessment in conjunction with self-assessment may also ensure a successful implementation while increasing validity, reliability and positive student involvement in assessments [23]. Finally, trials of more inclusive feedback protocols have so far been based on written (scoring) rather than verbal (non-scoring) self and/or peer assessment, and conducted in PBL rather than CBL settings, which may have been relevant for outcomes [24].
Development of Peer-Assisted Feedback (PAF)
On the basis of these considerations, it was hypothesized that a comprehensive self, peer and tutor assessment model would function differently, and potentially better, in CBL tutorials than in PBL settings, and in a face-to-face rather than written format. Before model development, a pilot study was conducted by a research assistant (IS) to ascertain student attitudes toward regular tutorial feedback at Dalhousie Medical School in Halifax, Nova Scotia, Canada.
Pilot Study
First and second-year medical students were surveyed to explore their experiences with the standard formative assessment protocol in pre-clerkship tutorials. Questionnaires were completed by ten medical students (n = 5 first-year, n = 5 second-year). These students had an average of over 400 hours of tutorial experience each.
The study [25, 26] identified student dissatisfaction with the standard assessment model practiced at Dalhousie University. It also highlighted student concerns with the frequency and quality of tutorial feedback delivery. While variable experiences were noted, most students indicated that formative assessment of students by tutors was infrequent, non-specific and lacked developmental direction. Students furthermore reported minimal opportunities for direct involvement in their own tutorial assessment, and no opportunity for peer assessment. As one first-year medical student stated: “I’m fairly certain I’ve never received feedback from my group members, nor have I given out any, unless in the form of off-hand positive remarks in an informal setting.” Overall, students perceived that timely, concrete and effective developmental feedback or constructive criticism was rare in their tutorials.
Students also reported that there was little consistency among individual tutors’ assessment practices within the standard feedback protocol. Given the undisputed importance of timely and specific formative assessment to the achievement of learning outcomes in medical education [27, 28], we perceived this aspect of current feedback provision at our institution as particularly problematic.
The decision to develop and study a new feedback model was, then, informed by three main factors. First, a comprehensive (self, peer and tutor) feedback protocol had not previously been implemented as a group activity in a face-to-face, verbal format. Second, such a model had not been tested in a Case-Based Learning (CBL) tutorial setting. Third, student dissatisfaction at our institution with standard tutorial feedback practices had identified a need for intervention. Therefore, the principal investigator of this study (KB) developed a novel tutorial feedback protocol called Peer-Assisted Feedback (PAF).
The Model
Peer-Assisted Feedback (PAF) aims at addressing students’ concerns by providing ongoing and actionable formative assessment in the CBL tutorial with regular round-table feedback. PAF differs from many standard assessment methods insofar as it may allow tutors to gather detailed information [29] about students which can inform standard formative and summative tutorial assessments. In the PAF protocol, public (face-to-face) formative self, peer and tutor assessment is conducted weekly in the CBL tutorial group at the end of a tutorial session. In PAF, each student is assessed in turn, with the tutor performing self-assessment and receiving student comments last (Figure 1).
Figure 1
Student A begins by reflecting on his/her tutorial performance over the past week and provides a verbal self-assessment, ideally in a concrete and specific manner. Next, peers are given the opportunity to offer feedback on Student A’s performance. The tutor shares his/her observations and provides developmental feedback to Student A. The PAF process then continues with the next student, until all students in the tutorial group have provided a self-assessment and received peer and tutor feedback. PAF concludes with the assessment of the tutor, consisting of tutor self-assessment followed by the students’ feedback. The tutor may also provide and receive feedback about general group functioning at the end of the PAF session. The tutorial feedback given in PAF is very brief (1.0 to 1.5 minutes per student), and the expected timeframe of the entire session is 10 to 15 minutes total.
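For illustration only, the arithmetic behind this time budget can be sketched as follows. The sketch is not part of the PAF protocol itself, and a group size of eight students is an assumption made for the example rather than a figure reported here.

```python
# Illustrative sketch only, not part of the published PAF protocol: a rough check
# that the stated per-student feedback time is consistent with the 10-15 minute
# session budget. A group size of eight students is an assumption.

def paf_session_minutes(n_students: int, per_min: float, per_max: float) -> tuple[float, float]:
    """Return (lower, upper) bounds on total session length in minutes,
    counting each student plus the tutor as one feedback round."""
    rounds = n_students + 1  # the tutor is assessed last
    return rounds * per_min, rounds * per_max

low, high = paf_session_minutes(n_students=8, per_min=1.0, per_max=1.5)
print(f"Estimated session length: {low:.1f} to {high:.1f} minutes")  # 9.0 to 13.5 minutes
```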
PAF emphasizes effective communication skills. Positive communication strategies include: highlighting student attainment during formative assessment; focusing on specific, achievable tasks to promote behavioural change when giving constructive criticism; encouraging reticent students to contribute more frequently (and loquacious students to spend more time listening); and promoting emotional honesty and discussing personal weaknesses to facilitate group cohesion [29].
Methods
The current study investigated the feasibility and acceptability to students and tutors of the novel Peer-Assisted Feedback (PAF) protocol in pre-clerkship Case-Based Learning (CBL) tutorials at Dalhousie Medical School. Feasibility is defined here, following Norcini & McKinley, as “the degree to which the assessment method selected is affordable and efficient for the testing purpose” [30]. The term acceptability is used to denote “the extent to which stakeholders in the process (e.g., medical students and faculty, practicing physicians, patients) endorse the measure and the associated interpretation of scores” [30].
Setting
The PAF feedback protocol was trialled at Dalhousie Medical School. The Undergraduate Medical Education (UME) program at Dalhousie recently introduced a renewed curriculum which implements self-directed and collaborative learning within a CBL approach.
Participants
Our study followed a cohort of first-year tutorial groups at the Halifax campus of Dalhousie Medical School over the course of one unit of their curriculum (approximately eight weeks) from March to May 2012. Tutors and students were recruited in February 2012. Two tutorial groups of first-year medical students (n = 16) and tutors (n = 2) agreed to participate in this study. There were equal numbers of male and female student participants; both participating tutors were male. Informed consent was obtained from all participants.
Ethical Approval
Ethical approval for this study was obtained from the Dalhousie University Health Sciences Research Ethics Board.
Procedure
Tutor Training
Tutors received a one-hour PAF training session prior to its introduction in the tutorial group setting. Materials distributed to tutors included background information about PAF and formative assessment in general. The PAF process was described and potential problems with its implementation were discussed.
Main PAF Study
During the first half of the unit, tutors and students practiced the standard tutorial feedback format of CBL tutorials at Dalhousie Medical School. This format consists of mid-unit formative and end-unit summative assessments of all students by the tutor. Assessments are then made available online on standardized forms which comment on attendance, information handling and communication skills, professional behaviours and overall tutorial performance. End-unit assessments of the tutor by students are also completed.
During the second half of the unit, the PAF protocol was implemented. In this four-week intervention phase, plenary formative assessment sessions were held weekly, at the end of Wednesday tutorials. Each student first assessed his/her performance (self-assessment), then peers provided feedback (peer assessment) and finally the tutor gave feedback (tutor assessment). Each student was assessed in turn and the tutor was the last group member to be evaluated. Tutors had the option to provide and receive feedback on general group function at the end of each PAF session. The time allotted for the PAF protocol was limited to 10-15 minutes total.
Data Collection
Questionnaires
Questionnaires were distributed to all participants twice: mid-unit before PAF introduction and end-unit after exposure to PAF. The aim of these questionnaires was to determine tutor and student perceptions of tutorial feedback. The questionnaire included open- and closed-ended questions concerning the frequency and delivery of tutorial feedback by tutor, peers and self. One question regarding overall student satisfaction with tutorial feedback was answered using a Likert scale of 1–10 (1 = poor, 2-4 = OK, 5-6 = good, 7-9 = very good, 10 = excellent). Additional comments were invited as free text in the questionnaire. The questionnaire design was validated by consulting a representative group of faculty members and students.
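The mapping from numeric responses to descriptive categories can be sketched as follows. This is a hypothetical illustration with a function name of our own choosing; only the category boundaries are taken from the scale described above.

```python
# Hypothetical sketch of the questionnaire's rating categories; the function name
# is ours, but the boundaries follow the scale described above
# (1 = poor, 2-4 = OK, 5-6 = good, 7-9 = very good, 10 = excellent).

def satisfaction_label(score: int) -> str:
    """Map a 1-10 Likert response to its descriptive category."""
    if score == 1:
        return "poor"
    if 2 <= score <= 4:
        return "OK"
    if 5 <= score <= 6:
        return "good"
    if 7 <= score <= 9:
        return "very good"
    if score == 10:
        return "excellent"
    raise ValueError("score must be an integer from 1 to 10")

print(satisfaction_label(6), satisfaction_label(5))  # both "good": adjacent scores can share a category
```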
PAF Observation
The third PAF session of each tutorial group was observed by the two senior investigators (KB and DD), acting as non-participating observers. These observations were recorded as descriptive field notes.
Tutor Interviews
Tutor perceptions of tutorial feedback were further explored in one-hour semi-structured interviews which were conducted with participating tutors after the conclusion of the PAF pilot (June 2012). Two senior investigators (KB and DD) and a research assistant (HW) acted as interviewers, and recorded the tutors’ responses in strategic and focused notes [31]. All data collected for this phase of the study, including tutor and student questionnaires, PAF observational field notes, and semi-structured tutor debriefing interviews, were transcribed by a research assistant (HW).
Data Analysis
Quantitative Analysis
Likert scale responses from the student questionnaires were analyzed using descriptive statistics (median, mode, range and inter-quartile range), as is appropriate given the ordinal basis of these data. Non-parametric analysis was conducted using the Mann–Whitney U test. Tutors’ Likert scale responses were incomplete and thus could not be analyzed.
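A minimal sketch of this kind of analysis is shown below. The ratings are hypothetical placeholders rather than the study’s actual responses, and the variable names are ours; the sketch assumes numpy and scipy are available.

```python
# Minimal sketch of the analysis described above, using hypothetical placeholder
# ratings (the study's raw responses are not reproduced here). Requires numpy and scipy.

import numpy as np
from statistics import multimode
from scipy.stats import mannwhitneyu

pre_paf = [6, 6, 7, 5, 8, 7, 6, 4, 7, 6, 7, 1, 6, 8, 7]   # hypothetical pre-PAF Likert ratings
post_paf = [3, 4, 5, 3, 6, 5, 3, 4, 7, 3, 5, 6, 3, 4, 7]  # hypothetical post-PAF Likert ratings

# Descriptive statistics appropriate for ordinal data: median, mode, range, IQR.
for label, ratings in (("pre-PAF", pre_paf), ("post-PAF", post_paf)):
    arr = np.array(ratings)
    q1, q3 = np.percentile(arr, [25, 75])
    print(f"{label}: median={np.median(arr)}, mode={multimode(ratings)}, "
          f"range={arr.min()}-{arr.max()}, IQR={q3 - q1}")

# Non-parametric, two-tailed comparison of the two independent samples.
u_stat, p_value = mannwhitneyu(pre_paf, post_paf, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, P = {p_value:.3f}")
```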
Qualitative Analysis
The study did not seek generalizable results. Rather, its purpose was to better understand the phenomenon of Peer-Assisted Feedback (PAF) in the context of other approaches to feedback. Therefore, the qualitative data analysis did not use a priori categories of analysis but rather identified themes arising from the data. The goal of these analyses was to gain an in-depth understanding of student and tutor perceptions of feedback models, situated in the undergraduate medical tutorial itself [32, 33].
Open-ended questionnaire responses, PAF observational field notes, and semi-structured tutor debriefing interview transcripts were qualitatively analyzed as follows. All qualitative data were reviewed independently by all investigators (KB, DD, IS and HW), and then thematically coded in a collective process. Descriptive labels (codes) were applied to passages or other segments from the notes by each researcher. The descriptive labels arising from the data were subsequently examined to determine whether meaningful clusters could be identified, so that these labels could be grouped into overarching themes [34, 35]. While both consensus and distinctiveness were, in principle, of value in the coding process [36], overarching themes were agreed upon by researcher consensus. Credibility and dependability of these analyses were achieved by triangulation of data sources, methods and investigators [34, 37].
Results
Quantitative Results
The overall questionnaire response rate was 100% (n = 18) for both mid-unit and end-unit questionnaires (pre- and post-PAF introduction). The questionnaire was completed and returned by 16 first-year students and two tutors. There were equal numbers of male and female student participants (n = 8 male, n = 8 female); both participating tutors were male (n = 2). For the single Likert-style question included in the questionnaire (“Please rate below your satisfaction with the feedback you have received in your tutorials”), the response rate was 94% (n = 15) for students. Tutors’ responses were not included as they were incomplete.
There was a significant difference in student satisfaction with tutorial feedback before and after the introduction of PAF (Mann–Whitney U = 50.5, n1 = n2 = 15, P < 0.05, two-tailed). As Table 1 shows, this difference is not evident in the median rating pre- and post-PAF, because the numerical values 5 and 6 fall into the same rating category, “good.” The modal rating, however, is notably lower, decreasing from 6 (“good”) to 3 (“OK”).
Table 1
Before the introduction of PAF, 47% of the students rated their satisfaction with the feedback practiced in their tutorial as “very good,” and 40% as “good.” The remaining students found tutorial feedback “OK” (7%) or “poor” (7%). After PAF was introduced, 47% of students rated their satisfaction as “OK” and 40% as “good.” The remainder rated tutorial feedback “very good” (13%). No student rated feedback as “excellent” before or after PAF introduction, and none as “poor” after it was introduced. These results are presented in Figure 2.
Figure 2
Qualitative Results
In the inductive analysis of qualitative data (open-ended questionnaire responses, observational field notes, and tutor debriefing interviews), five themes were identified: (1) Engaged Group Feedback, (2) Effects on Group Dynamics, (3) Personal Barriers (sub-themes: Perceived Lack of Competency, Discomfort), (4) Resistance (sub-themes: Credibility of PAF, Disengagement in PAF, Inauthentic Peer Feedback), and (5) Misconceptions about Feedback.
1. Engaged Group Feedback
Students engaged in concrete and authentic (self, peer and tutor) feedback during Peer-Assisted Feedback (PAF). Student self-assessment was often candid and personal, and was corrected by peers when perceived as unduly modest. For example:
Student: “[I was] prepared, contributed, shared a few jokes … [I] try to do what I can ...”
Peers: “You talked about it from a different perspective … synthesized it better for the rest of us … resolved a misunderstanding and conflict.”
Self-assessments at times led to light-hearted, constructive peer exchanges which contributed to a positive and relaxed group atmosphere. Students also occasionally used the PAF sessions for group assessments such as: “Today we all seemed to skim a bit …” Importantly, students regularly expressed their appreciation for their tutors’ efforts.
2. Effects on Group Dynamics
Tutors and students noted a positive effect of the PAF sessions on group dynamics. Tutors welcomed the insight gained into their students’ emotional experience during the learning process, found that PAF “helped speed up the development of intimacy” and concluded that overall group function was “better, more relaxed.” Students seconded that view. As one student put it: “Talking about yourself and others in front of others directly, in a nice, professional, respectful way eliminates many problems if people do it. As a student, you become less focused on what you know or don’t know, and realize that you are all in the same boat.”
3. Personal Barriers
Perceived Lack of Competency. Comments from tutors such as: “The function of an individual in a group is abstract for scientists [like me] not familiar with education” reflected concerns that tutors lacked the communication skills requisite for guiding round-table feedback sessions. They felt insecure and unprepared for a facilitator role, particularly as: “Tutors are never trained in the touchy-feely stuff,” that is, in interpersonal and emotional matters. Limited institutional tutor training was perceived as a contributing factor to this sense of unpreparedness and insecurity.
Discomfort. Tutors and students described difficulties adjusting to the face-to-face nature of PAF feedback. A minority of students expressed a general preference for private assessment by tutors over public tutorial assessment. One other participant suggested that round-table feedback be enhanced by private feedback. As the PAF model is not intended to replace, but rather to complement, standard mid- and end-unit tutor assessments, this concern rests on a misunderstanding. Many participants felt awkward, nervous and embarrassed during round-table feedback sessions, particularly at the outset of the PAF trial. One tutor disclosed that: “[PAF] is a bit uncomfortable because it is about feelings.” Tutors perceived that students were also not always at ease during feedback sessions. This impression was confirmed by several students with comments such as: “The feedback sessions are often somewhat awkward, but would possibly improve if a group was to do it more often.” Discomfort with the public format of PAF was thus assumed to be temporary and to lessen with practice.
4. Resistance
Credibility of PAF. Tutors expressed skepticism about the scientific credentials of PAF, soliciting evidence for the model with such comments as: “It would be good to have a paper to read about the theory and research [supporting PAF] so people do not think it is fluff.” Tutors may assume that the implicit focus of the PAF protocol on communication skills detracts from its academic merit and aligns it with ‘soft science.’ This perception may extend to other innovative approaches to tutorial feedback in medical education. Furthermore, while tutors acknowledged that: “Becoming a physician is a social function,” they appeared ambivalent about their own role in educating medical students toward that purpose.
Some students indirectly reflected a similarly dismissive attitude. They found PAF too ‘touchy-feely,’ too ‘basic’ for their educational level (medical students at the end of their first year), and too removed from the ‘real’ content of medical education.
Disengagement in PAF. Student involvement in the PAF process was occasionally reduced to ‘going through the motions,’ particularly toward the end of the PAF trial. One tutor supposed that students had become, over time, “perhaps less interested and had nothing new to add,” a notion shared by some students. Students’ lack of motivation to engage in feedback with integrity may also reveal a cynical attitude to self-disclosure and peer commentary in public settings. Witness the formulaic nature of self and peer assessment in the following exchanges:
Student 1: “I did fine, normal.”
Peer: “You did great.”
Student 2: “I came prepared for the discussion.”
Peer: “Good job.”
Student 7: “Everything went well, I contributed. Give it to me straight.”
Peer: “It was good.”
Student 8: “I was prepared.”
Peer: “Good job team.”
This strictly procedural use of the PAF method rendered the feedback process meaningless here.
Inauthentic Peer Feedback. The most serious reservations about PAF were expressed by students in regard to the token nature of some of the peer feedback. Of frequent concern was the lack of ‘constructive criticism’ offered by peers, perceived to be a key factor in shaping the effectiveness of PAF in tutorial. Recurrent comments such as: “People don’t really offer constructive criticism” or “No one gives constructive criticism using PAF” were a leitmotif in the data.
Some students attributed the lack of genuine developmental peer feedback to the verbal, face-to-face format of PAF: “The group verbal feedback makes it so that no one is really up for saying anything that could be taken negatively,” or, in another student’s words: “No one is honest with comments when they are given face to face.” Others observed that there was: “Just a lot of patting each other on the back,” thus attributing the lack of critical peer assessment to a reluctance to denigrate peers or hurt their feelings. These observations also revealed a sense that positive peer feedback during PAF may have been perfunctory and unreliable, merely a form of empty flattery. Finally, it was felt that peer feedback could improve with better guidance: “Students [are] shy to give feedback to each other due to little or no direction.”
5. Misconceptions about Feedback
From the data, a final theme arose concerning the perceived connotations of the terms ‘feedback’ and ‘constructive feedback.’ Tutor participants exclusively used the word ‘feedback’ to denote developmental feedback (constructive criticism), rather than applying it to both positive and negative assessments. Therefore, ‘no feedback’ indicated for them that student performance was satisfactory and did not warrant commentary, as the following comment shows: “I didn’t give students feedback because they were very good.”
Students frequently equated the term ‘constructive feedback’ with ‘constructive criticism.’ This emerged in comments like: “All feedback has been positive, but non-constructive” and “[I received] mostly positive feedback, not really anything constructive.” In this logic, positive assessment is understood as less valuable for the educational process. In part, this may reflect students’ view that the positive feedback received during PAF was insincere (see the section on Inauthentic Peer Feedback above). Student attitudes here may also reveal their genuine desire to receive honest assessment, actionable advice and clear suggestions for improvement; that is, realistic and effective developmental feedback.
Discussion
This study set out to observe Peer-Assisted Feedback (PAF) as practiced in the small-group learning environment of Case-Based Learning (CBL) tutorials in undergraduate medical education. In particular, we investigated whether our novel feedback protocol, which combines self, peer and tutor assessment in a weekly, face-to-face, round-table format, was more feasible and acceptable to students and tutors than the standard tutorial assessment protocol at our institution. The results of our study are considered in the context of the literature documenting previous trials of comprehensive tutorial feedback protocols which, unlike PAF, use written self and peer assessment scores, in addition to tutor assessment, to arrive at students’ formal assessments.
Feasibility
PAF was found to be feasible in Norcini & McKinley’s sense [30]. No additional institutional or participant costs were associated with its pilot implementation, and neither tutors nor students voiced major concerns about the additional tutorial time spent in its practice. As a formative assessment tool, PAF was thus found to be “practical, realistic, and sensible, given the circumstances and context” [20].
Acceptability
In terms of the acceptability [20, 30] of PAF to students and tutors, our results are more variable. Our quantitative results revealed that students’ satisfaction with the tutorial feedback received did not improve after the four-week trial of PAF. Rather, students who had been quite content before the trial began were generally less satisfied after the introduction of the PAF model. Globally, then, PAF was not as immediately acceptable to students as we anticipated for a low-stakes, informal and round-table peer, self and tutor feedback approach implemented in a CBL tutorial setting.
Our qualitative findings are congruent with the literature in some key areas. Tutors found PAF generally acceptable, while requesting additional background materials and process supports. Students, instead, responded to PAF with substantially greater initial reluctance, hesitating to provide authentic feedback or to take an active role in feedback at all. The greatest barrier to an effective PAF process, as perceived by students, was their unwillingness to engage in constructive peer criticism, an issue that is widely commented upon by researchers but not fully explored [12, 13, 16, 19, 23]. We noted in our study that not only educators, but also students themselves, realize that formative feedback is necessary and valuable, yet remains meaningless without critical content or authentic praise. The prevailing reluctance of tutors and students to voice critical commentary in tutorial is thus somewhat paradoxical [38]. As McIlwrick et al. put it, there is an “… apparent disconnect between what should be happening and what is actually happening during feedback conversations …” [39].
The emotional experience of tutors and students with PAF was largely ambivalent. While the increased intimacy, warmth and humour that PAF introduced into the tutorial group atmosphere were welcomed, and the group benefitted from the concomitant effects, a sense of personal awkwardness, embarrassment and apprehension prevailed. Ambivalence was also notable in regard to the perceived value of PAF. Tutors and students were skeptical about its credentials, protested its ‘touchy-feely’ nature, and found it a “waste of time.” However, they concurrently wished to receive the very feedback they were unwilling to offer, hence recognizing the benefits of regular and authentic formative assessment while simultaneously resisting its modes of implementation. We anticipated this resistance to introducing an unfamiliar assessment mode, as research in medical education and higher education at large widely reports similar findings [13, 19, 21, 23, 40].
In one participating tutorial group, the tutor and one student reported that significant interpersonal problems were present prior to trialling PAF. In principle, PAF provides an ideal platform for addressing such group dysfunction. However, neither the tutor nor students were prepared to raise and attempt to resolve the relevant issues during PAF feedback sessions. One student remarked that: “It can be difficult to actually raise issues in the group.” The tutor also perceived that “Students wouldn’t dare go there with PAF” and that he himself also “didn’t have the courage to make stronger statements.” Research suggests that it is indeed highly challenging to address difficult incidents in tutorial groups [41]. In the tutor’s view, the unresolved tensions in his group explained why feedback sessions stayed “safe, superficial and positive.”
Participants in our study assumed that a process of acclimatization would alleviate their initial discomfort with PAF, a view confirmed in the literature [5, 9, 17, 19]. In particular, peer criticism may become less challenging with practice because it will no longer be perceived as an event of extraordinary seriousness [2]. Researchers conclude by the same token that skeptical attitudes toward novel feedback protocols will decrease with familiarity [23]. As Papinczak et al. observe, “The implementation of self-assessment and peer assessment in any setting is likely to lead to initial scepticism and doubt about its validity. However, through repeated exposure to, and practice in peer-assessment, such perceptions should be moderated” [13].
Students and tutors clearly require more in-depth preparation and ongoing support in order to engage constructively in the PAF process. The appropriateness and effectiveness of various preparation tools, including workshops, demonstrations and in-tutorial process supports for PAF, require further investigation. In particular, students must be made aware that peer feedback is a crucial component of teamwork in medical settings and that, as Kitay among others has emphasized: “The negative impact of little or ineffective peer assessment … affects patients, clinicians, and the medical field as a whole” [2].
While many of our qualitative findings (disengagement, discomfort and reluctance to voice constructive criticism) are in keeping with the literature, our results depart from previous research in one major area: the positive, or at least neutral, effect of PAF on the group learning process, which constitutes a new finding. As was noted earlier, the most serious reservation expressed by medical students against other comprehensive tutorial feedback protocols was the detrimental impact on group function and the learning environment as a whole [4, 9, 13, 19, 21]. Unlike PAF, the protocols these students experienced integrated self and peer assessment with tutor assessment for high-stakes, scoring purposes, and used private written forms for assessment. One major study reported a significant decline in group functioning in PBL tutorials when students were asked to rate each other anonymously in writing. Students perceived that such peer rating “promotes judgemental attitudes (and) tensions, and destroys harmonious learning” [19].
Problems of this nature were not reported for the low-stakes assessment approach of PAF, where formal assessment is not the central goal. Instead, students in our PAF trial used peer assessment to praise their peers or bolster their confidence by correcting unduly negative self-assessments, practices of peer assessment that are constructive for the individual student as well as the group [13, 23]. The fact that participating students not only engaged in self and peer assessment but also spontaneously assessed their tutorial group function may indicate improved group cohesion via the PAF process.
The negative effects of high-stakes peer assessment on the small-group learning process are thus precluded in PAF. Our approach therefore represents an effective alternative to other comprehensive feedback protocols documented in the literature. PAF’s weekly public, verbal assessment format, unlike the private, written format of other models, highlights the developmental aspect of our model. PAF thus provides effective formative feedback as outlined by Norcini et al. [20]:
Effective formative assessment is typically low stakes, often informal and opportunistic in nature, and is intended to stimulate learning ... It works best when it (1) is embedded in the instructional process and/or work flow, (2) provides specific and actionable feedback, (3) is ongoing, and (4) is timely.
PAF may indeed provide a platform on which to voice the individual and collective successes and failures occurring in a small-group learning environment. Moreover, the public forum of PAF more closely replicates the team environment that students will encounter in their clinical education and future professional life, whereas written, private forms of self and peer assessment cannot help prepare medical students for this role.
Finally, the round-table format of PAF introduces elements to formative feedback that are absent from other tutorial feedback methods encountered in the literature: weekly tutor self-assessment and student assessments of the tutor. The inclusion of the tutor in the regular formative feedback process promotes a more democratic, egalitarian and team-building mode of functioning in tutorial that de-emphasizes the tutor’s leadership role. All group members, the tutor included, thus receive timely and specific feedback, a fact that underscores that all participants in medical education are lifelong learners.
Limitations
The findings of this study are subject to several limitations, the principal one being the small sample size of the PAF trial. Notwithstanding the attention paid to thematic saturation during qualitative data analysis (reached at approximately the mid-point), additional thematic clusters may therefore have been overlooked. Other relevant limitations include the fact that participants were recruited, and data collected, from only one medical school and only one subset of first-year students and tutors. Participants may, therefore, not have been representative of medical students and tutors in general.
Conclusion
In conclusion, Peer-Assisted Feedback (PAF) is a novel formative assessment protocol which combines weekly self, peer and tutor assessment in the informal, low-stakes environment of a round-table discussion. PAF was found feasible in Case-Based Learning (CBL) tutorial groups at our institution. The PAF protocol was also acceptable to students and tutors with some limitations, including personal discomfort and resistance to group feedback. These barriers to effective PAF practice may decrease with training and familiarity with the process. Above all, PAF may represent an effective alternative to other comprehensive, more high-stakes feedback protocols because it promotes rather than impairs the group cohesion of a small-group learning environment. PAF therefore shows promise as a viable developmental feedback instrument in undergraduate medical education and other areas of higher learning. Further research is needed to confirm these findings at our institution and beyond. Investigations of PAF in the context of student achievement outcomes are also warranted.