P Schick, M Burke
audience response systems, curriculum, interactivity, post-hoc analysis, training
P Schick, M Burke. Post-hoc Analysis of Audience Response-Enabled Conferences on Hematological Subjects. The Internet Journal of Hematology. 2008 Volume 6 Number 2.
We have developed a strategy for the post hoc analysis of questions asked during audience response-enabled conferences that objectively assesses baseline knowledge and comprehension of lectures. Examples of the application of this approach in training residents to manage hem/onc problems are discussed. Data from lectures on hem/onc emergencies and on the interpretation of peripheral blood smears are presented. The demographics of knowledge in an audience consisting of medical students and residents at different levels of training were characterized, and strengths and weaknesses in knowledge were identified. The data indicated that skill in managing hem/onc emergencies increased with the length of training, whereas skill in interpreting peripheral smears did not. Feedback indicated that participants were not intimidated by being tested. It is valuable to monitor knowledge during conferences, since they usually review essential core information. The information derived from monitoring conferences on hem/onc disorders can be used to optimize curricula and training programs.
Audience response systems instantly display graphs of responses to questions during PowerPoint presentations. This technology therefore engages participants and encourages interactivity. Because lecturers can scan the graph of responses, they are aware of audience comprehension and can spend more time on topics that were poorly understood (1-11).
Only a few published studies have objectively evaluated this technology, and the assessment has been limited to feedback (1-6), knowledge retention (7, 8) and program evaluation (11). The potential of in-depth post hoc analysis of audience responses during lectures has not been appreciated or investigated.
This paper describes a strategy that we developed for the post hoc analysis of audience response-enabled lectures to assess the effectiveness of lectures and to determine baseline knowledge. It reports our experience with the post hoc analysis of two lectures on hematological subjects given to a group of residents at different levels of training and medical students. The comparison of these lectures provided a way of testing the sensitivity of the post hoc analysis.
Obtaining information using audience response systems is less formal and less intimidating than standard testing, is easy to implement, and can efficiently establish the demographics of baseline knowledge and identify strengths and weaknesses in knowledge. The application of this information for improving lectures, curricula and training programs is discussed.
a) The study reports our experience using audience response systems in two lectures. The audience response-enabled lectures were given at a medical center, and the audience consisted of medical residents at different levels of training and medical students. Most of the residents and medical students attended both conferences, although there were minor differences in the audience due to clinical rotations and involvement in clinical care. However, the distribution of participants across levels of training was similar in both lectures.
b) One lecture was on hematologic emergencies, which are clinical problems that residents frequently encounter during their training. The other lecture was on the interpretation of peripheral blood smears, a subject taught in medical school; however, blood smears are usually interpreted by clinical pathologists and the hematology laboratory, not by residents during their training program. The comparison of these lectures (one on a subject that has been emphasized during medical residency training and one on a subject that has not) provided a means for determining the sensitivity of the post hoc analysis for assessing the knowledge gained during the training program.
Questions were asked during PowerPoint presentations coupled with an audience response system (Turning Technologies, LLC), and the audience responses to the questions were instantly displayed. The rationale for correct answers was then discussed.
a) Questions were asked in PowerPoint slides prior to the discussion of topics to establish baseline knowledge.
b) Questions were asked after topics had been discussed in the lecture on morphology to test for comprehension of the presentation.
Since all responses were recorded, it was possible to carry out a post-hoc analysis of responses to questions.
a) Strengths and weaknesses in baseline knowledge were quantified.
b) The data were sorted according to level of training to obtain the demographics of baseline knowledge, and of strengths and weaknesses, in medical students and in residents at different levels of training (first-, second- and third-year residents: PGY1, PGY2 and PGY3, respectively).
c) Outstanding and poor grades in each demographic group were determined. The mean grade plus one SD was used to identify outstanding performance; conversely, the mean minus one SD was used to identify poor performance (12).
d) The comprehension of the lecture on morphology was evaluated.
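The cutoff rule in (c) can be sketched as a short script. This is a minimal illustration only, not the authors' actual analysis; the grades below are hypothetical stand-ins for one demographic group.

```python
from statistics import mean, stdev

# Hypothetical grades (%) for one demographic group; not the study's data.
grades = [68, 45, 52, 71, 39, 58, 62, 49]

m, sd = mean(grades), stdev(grades)   # sample mean and standard deviation
outstanding_cutoff = m + sd           # mean + 1 SD marks outstanding performance
poor_cutoff = m - sd                  # mean - 1 SD marks poor performance

outstanding = [g for g in grades if g > outstanding_cutoff]
poor = [g for g in grades if g < poor_cutoff]
print(f"mean = {m:.1f}, SD = {sd:.1f}")
print("outstanding:", outstanding)   # grades above mean + 1 SD
print("poor:", poor)                 # grades below mean - 1 SD
```

Applying the same cutoffs within each group (medical students, PGY1, PGY2, PGY3) yields the counts of outstanding and poor performers reported in Table I.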
Audience feedback about the value of using audience response systems during lectures was obtained by asking Likert-scale questions on slides at the end of the PowerPoint presentations. The questions asked are shown in Table III.
Demographics of baseline knowledge:
Lecture on hematologic emergencies:
A more detailed analysis of this lecture was possible since there were 8 participants in each of 4 demographic groups (medical students, PGY1, PGY2 and PGY3 residents).
Figure 1 demonstrates that there were significant differences in grades between residents at different levels of training and medical students. Grades for medical students, PGY1, PGY2 and PGY3 residents in hematology/oncology emergencies were 39.5%, 48%, 54% and 68%, respectively. All differences in grades except that between PGY1 and PGY2 were significant (one-way ANOVA and Scheffé analysis). It is clear that baseline knowledge was proportionate to the length of training.
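The one-way ANOVA used for this between-group comparison can be illustrated with a self-contained F-statistic computation. The group labels match the study's demographic groups, but the grade lists are hypothetical, chosen only to show the calculation.

```python
from statistics import mean

# Hypothetical grade lists (%) for the four groups; illustrative only.
groups = {
    "MS":   [35, 42, 38, 43],
    "PGY1": [46, 50, 47, 49],
    "PGY2": [52, 55, 53, 56],
    "PGY3": [66, 70, 67, 69],
}

k = len(groups)                                   # number of groups
n_total = sum(len(g) for g in groups.values())    # total participants
grand = mean(x for g in groups.values() for x in g)

# Between-group and within-group sums of squares.
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups.values())
ss_within = sum((x - mean(g)) ** 2 for g in groups.values() for x in g)

# F = mean square between / mean square within.
f_stat = (ss_between / (k - 1)) / (ss_within / (n_total - k))
print(f"F({k - 1}, {n_total - k}) = {f_stat:.2f}")
```

In practice the F statistic would be compared against an F distribution (e.g. with scipy.stats.f_oneway), followed by a Scheffé post hoc test to identify which pairs of groups differ, as the authors did.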
Table I shows the results of the competency assessment derived from the analysis of the data in Figure 1. Outstanding grades were above the mean grade plus one SD, and poor grades were below the mean grade minus one SD. Four of 8 PGY3 residents had outstanding grades and none had poor grades. Very few PGY1 and PGY2 residents had grades in either the outstanding or the poor category. Four of 8 medical students had poor grades and none had outstanding grades.
b) Comparison of the performance of residents and medical students in the lectures on hem/onc emergencies and blood cell morphology: The performance of all residents was compared with that of medical students in the two lectures. The results are shown in Table II.
Identification of strengths and weaknesses in baseline knowledge:
a) Hematologic emergencies: While participants demonstrated strengths in knowledge on most of the questions, weakness was evident in several topics. For example, low grades on 3 of the questions in hematologic emergencies were attributable to lack of experience in managing sickle cell anemia and hemophilia. Low grades on the other 2 questions were surprising, since residents had managed low platelet counts and nutritional anemia during their training program.
b) Blood cell morphology: The analysis of 2 questions revealed that participants did not understand the differential diagnosis of large blood cells and did not know that sickle cells are usually not seen in the peripheral smear in sickle cell trait.
Differences in knowledge between medical students and residents: We also identified 3 questions in the lecture on blood cell morphology and 3 questions in the lecture on hematology/oncology emergencies in which resident and medical student performance differed markedly. For example, in the lecture on hematology/oncology emergencies, medical student grades were considerably lower than resident grades on the indications for transfusion (0% vs. 75%) and on the management of fever in cancer patients with low white blood cell counts (13% vs. 75%). In contrast, medical students were more skillful than residents in interpreting abnormal blood morphology and laboratory results.
Analysis of the comprehension of the lecture on blood cell morphology: There was a significant increase in grades for comprehension (questions asked after the discussion of topics) compared to baseline grades (questions asked prior to the discussion of topics), 58.1% vs. 43.1%, respectively. However, the analysis of responses to individual questions indicated that some topics required additional clarification. When participants were ranked into upper, middle and lower tertiles according to baseline grades, the increase in comprehension of the lecture was most evident in the lower tertile and, to a lesser extent, in the middle tertile, but not in the upper tertile (paired t-test), as shown in Figure 2.
Tertiles were established for grades in baseline knowledge, with the highest grades in the upper tertile and the lowest grades in the lower tertile. The performance of participants in each tertile was traced to their grades in comprehension of the discussions during the lecture. For each tertile, the mean ± SD of grades in baseline knowledge (B) and in comprehension (C) are shown in the figure. Participants whose baseline grades were in the lower and middle tertiles had significantly higher grades in comprehension, while those in the upper tertile did not (paired t-test; n=24).
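The within-tertile comparison in Figure 2 rests on a paired t statistic, which can be computed directly. The (baseline, comprehension) grade pairs below are hypothetical, chosen only to illustrate the calculation for a single tertile.

```python
from statistics import mean, stdev

# Hypothetical (baseline, comprehension) grade pairs (%) for one tertile;
# illustrative only, not the study's data.
pairs = [(30, 55), (35, 52), (28, 48), (40, 60),
         (33, 50), (38, 58), (25, 45), (36, 57)]

# Paired t statistic: t = mean(d) / (SD(d) / sqrt(n)),
# where d = comprehension - baseline for each participant.
diffs = [c - b for b, c in pairs]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / n ** 0.5)
print(f"mean improvement = {mean(diffs):.1f} points, t = {t_stat:.2f} (df = {n - 1})")
```

A full significance test would compare t against a t distribution with n-1 degrees of freedom (e.g. via scipy.stats.ttest_rel); repeating the calculation per tertile reproduces the pattern the authors describe, where the lower and middle tertiles show significant gains and the upper tertile does not.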
Feedback from the Likert questions is shown in Table III. Feedback was exceptionally favorable. Note that the participants’ feedback score on whether “the presentation on peripheral blood smears was appropriate for my level of training” was slightly lower than the other scores.
Our experience with audience response systems has been very favorable and consistent with the experience of other educators. The technology engaged participants. Since responses to questions are instantly displayed, lecturers were aware of misconceptions and could spend more time discussing the rationale for correct answers. Participant feedback was very positive about the educational value of lectures enhanced with audience response systems.
The strategy for post hoc analysis proved to be effective for the following reasons:
The comparison of the data from the two lectures demonstrated the sensitivity of the post hoc analysis of audience response system lectures. The analysis revealed that residents’ ability to manage hem/onc emergencies was significantly greater than that of medical students. In contrast, there was no apparent difference between the skills of residents and medical students in interpreting peripheral blood smears. These data are shown in Table II.
A more detailed analysis of the lecture on hematological emergencies was possible since there were 8 participants in each of the 4 demographic groups (medical students, PGY1, PGY2 and PGY3 residents). The analysis was sufficiently sensitive to detect significant growth in skills in managing hem/onc emergencies proportionate to years of training. Outstanding and poor performance was also identified: 4 of 8 PGY3 residents had outstanding grades and none had poor grades; very few PGY2 and PGY1 residents were in either the outstanding or the poor category; however, 4 of 8 medical students were in the poor category and none had outstanding grades. These data are shown in Figure 1 and Table I and support the contention that residents were exposed to hematological emergencies, but not involved in the interpretation of blood smears, during the 3-year training program.
The discrepancy in the demographics of grades in the 2 lectures was not due to differences in the audience, since essentially the same group of participants attended both lectures. As mentioned above, questions in both lectures were screened for quality and ambiguous questions had been discarded. These results most likely reflect the fact that residents had ongoing experience managing hematologic emergencies but did not routinely interpret peripheral smears; medical residents rely on interpretations of peripheral smears by clinical pathologists.
Blood cell morphology was an extreme example of a subject that currently is not emphasized in resident training. Our study indicates that overall difficulty and the demographics of grades at different levels of training can be easily determined with audience response systems. A clear increase in baseline knowledge with the level of training is a major indication of successful exposure to a subject in an educational or residency program. An absence of improvement in knowledge of a specific subject during the course of training or education is an important criterion for revealing a potential weakness in an educational or training program.
Strengths and weaknesses in knowledge of individual topics were identified. For example, although participants had adequate knowledge of most of the topics that were presented, weaknesses in the management of certain anemias and bleeding disorders were detected. Demographic analysis revealed that residents were more competent than medical students in decisions on transfusion therapy and the management of fever in patients with cancer. This reflected residents’ clinical experience. However, medical students had better grades than residents in interpreting peripheral blood smears and some lab tests.
Information on strengths and weaknesses in knowledge is invaluable for modifying and enriching lectures, curricula and training programs to cover gaps in knowledge. The demographic information is essential when presenting lectures to audiences consisting of students and trainees at different levels of training.
Questions asked after topics have been discussed can assess comprehension of the lecture and the quality of the presentation. In the lecture on blood cell morphology, questions were asked prior to the discussions to establish baseline knowledge and after the topics had been discussed to determine audience comprehension of the lecture. The post-topic grades were markedly higher than the baseline grades. When participants were ranked into tertiles according to baseline grades, the increase in post-discussion grades was most evident in the lower and middle tertiles. Therefore, the lecture was well comprehended and there was a significant increase in post-discussion grades. Post-topic grades not only test comprehension but also provide information on the effectiveness of the lecture.
The study did provide information that will be useful for training residents to manage hem/onc problems. However, the findings from the post hoc evaluations in this study are primarily relevant to our institution, since each medical center has its own strengths and weaknesses.
The study demonstrates the value of in-depth post hoc evaluation of questions asked during lectures coupled with audience response systems as a means of monitoring lectures and training programs. This information is necessary to improve and adapt lectures, curricula, and educational and training programs to the needs of students. This approach is also helpful when planning a lecture for participants at different levels of training.
Ordinarily, most institutions use periodic exams to assess strengths and weaknesses in training programs. Testing for baseline knowledge during lectures coupled with audience response systems is less intimidating and more practical than traditional examinations. Lectures and conferences are integral components of training programs, present state-of-the-art information and review essential core information. Therefore, evaluating baseline knowledge during these conferences provides critical information for optimizing curricula and training programs. Our experience indicates that using audience response systems to establish baseline knowledge and assess lecture quality is far more flexible, less intimidating, easier to implement and more informal than standard testing. The strategy can be applied to conferences across areas of medicine and medical specialties.
Source of grant support:
Sharpe-Strumia Foundation of the Bryn Mawr Hospital, Bryn Mawr, PA