Abstract
Objective. To assess the impact of a debate exercise on self-reported evidence of student learning in literature evaluation, evidence-based decision making, and oral presentation.
Methods. Third-year pharmacy students in a required infectious disease therapeutics course participated in a modified debate exercise that included a reading assignment and readiness assessment tests consistent with team-based learning (TBL) pedagogy. Peer and faculty assessment of student learning was accomplished with a standardized rubric. A pre- and post-debate survey was used to assess self-reported perceptions of abilities to perform skills outlined by the learning objectives.
Results. The average individual readiness assessment score was 93.5% and all teams scored 100% on their team readiness assessments. Overall student performance on the debates was also high, with an average score of 88.2% prior to extra credit points. Of the 95 students, 88 completed both pre- and post-surveys (93% participation rate). All learning objectives were associated with a statistically significant difference between pre- and post-debate surveys, with the majority of students reporting an improvement in self-perceived abilities. Approximately two-thirds of students enjoyed the debate exercise and believed it improved their ability to make and defend clinical decisions.
Conclusion. A debate format adapted to the pedagogy of TBL was well-received by students, documented high achievement in assessment of skills, and improved students’ self-reported perceptions of abilities to evaluate the literature, develop evidence-based clinical decisions, and deliver an effective oral presentation.
INTRODUCTION
Active learning is an essential component of health care education, and team-based learning (TBL) is a pedagogy gaining popularity in pharmacy education because of its emphasis on teamwork, communication skills, and problem-solving within didactic education.1 At California Northstate University College of Pharmacy (CNUCOP), the entire didactic curriculum is delivered in the TBL format, with assigned pre-class readings and readiness assessment tests as part of the educational process. However, even in a TBL curriculum, creating activities that promote higher-order analytical and critical thinking can be a challenge. The Accreditation Council for Pharmacy Education (ACPE) 2016 Standards 3 and 4 refer to skills such as problem solving, collaboration, communication, and professionalism, which are often among the most challenging yet crucial to develop within the pharmacy curriculum.2
Debates are one approach well established in the literature as a means of exposing students to such skills. Debates have been studied in pharmacy education and have consistently been shown to promote communication skills, teamwork, and critical thinking, among other skills.3-7 A conventional debate format was implemented as part of an infectious disease therapeutics course at CNUCOP to enhance the aforementioned skills. However, a consistent theme in class feedback was that students in the audience lacked adequate background knowledge on the controversial debate topics other than the one assigned to them, and were therefore unable to comprehend the arguments of their peers. As the didactic curriculum at CNUCOP is delivered via TBL, a debate format incorporating components of the TBL pedagogy was implemented to address this issue. The debate format used a pre-class reading assignment and individual and team-based readiness assessment tests at the beginning of class to hold students accountable for the pre-class reading. The purpose of incorporating these TBL components was to provide students with adequate background knowledge on all debate topics and thereby encourage student-audience engagement in the debates. To the authors’ knowledge, there have been no previously published reports of a debate format incorporating components of the TBL pedagogy.
The primary objective of this study was to assess the impact of this TBL-adapted debate exercise on self-reported evidence of student learning in literature evaluation, evidence-based decision making, and oral presentation. The secondary objective was to assess general student perceptions of the debate exercise.
METHODS
The debate exercise was conducted during the spring semester of 2015 as a component of a required infectious disease therapeutics course for pharmacy students at CNUCOP during the third year of a four-year professional program. Ninety-five students were enrolled in the course. The nature of the debates provided students with an opportunity for learning at the higher levels of Bloom’s taxonomy of learning, namely analysis, synthesis, and evaluation.8 The learning objectives for the debate exercise were: identify and critically evaluate primary literature including the strengths and weaknesses of each article; use information from the literature to develop a concise, evidence-based argument defending a viewpoint; anticipate opposing arguments and successfully identify limitations in them; effectively answer questions defending your argument and develop challenging questions probing an opposing argument; convince an audience of your viewpoint on a topic with credibility and evidence-based rationale; and deliver a clear, concise, professional, organized, and engaging oral presentation within specified time constraints.
In mapping back to the objectives of the study, the first two learning objectives relate to literature evaluation, the next two to evidence-based decision making, and the last two to oral presentation. The 95 students were divided into 18 teams of five to six students each. At CNUCOP, students are randomly assigned to new teams at the beginning of each semester and remain in the same teams for all required courses in a given semester. The debates were conducted over three class days, each spanning 3 hours, with six teams debating each day. Each team was randomly assigned an opposing team to debate against, resulting in three debate topics per day. Students were only required to attend the class day during which they were debating, and the same three debate topics were used on each class day.
The three debate topics were: cefepime can be used for the treatment of serious infections due to ESBL-producing organisms, probiotics are effective for the prevention of Clostridium difficile-associated diarrhea, and linezolid should be used over vancomycin for the treatment of MRSA pneumonia. Topics were randomly assigned 6 weeks prior to the debates and each team was provided two high-quality primary research articles, one for and one against each topic. Student teams were not informed whom they would be debating against. Two articles were provided because of a concern noted after a pilot of the debate exercise in the previous year. At that time, facilitators noted that opposing teams had often researched different articles and therefore had no common ground to directly critique the literature behind each other’s arguments. Providing the single best articles for and against each topic allowed for both debating teams to have some common ground for their presentation, thereby increasing the quality of the debate arguments. Students were still encouraged and expected to identify other sources independently. The format of the debate presentations is shown in Table 1 and was adapted from previously published examples.3,9,10 As the curriculum is built upon TBL at CNUCOP, some of the fundamental components of TBL were incorporated into the debate exercise. As previously discussed, to better engage the student audience, a pre-class reading assignment covering background information on all three topics was given to enhance students’ understanding of classmates’ debate topics. This reading was only two pages to limit additional work that might distract teams from preparing for their own debate topics. 
Consistent with the pedagogy of TBL, students were held accountable for the reading assignment by the administration of individual and team readiness assessment tests (iRATs and tRATs) followed by discussion of underlying concepts at the beginning of class, prior to the debates.
Format of Debate Presentations
The debate presentations were graded using a rubric that was provided to students in advance. While two student teams debated, the remaining four student teams in the audience were required to participate in grading using the same rubric as faculty. The grading rubric scored presenters on content and delivery and was written to be identical to the learning objectives to document achievement of each objective for assessment and curricular mapping purposes (Appendix 1). Student-rated scores were averaged and comprised 40% of the overall grade, while two faculty members’ scores were averaged to comprise the remaining 60%. After each debate presentation, students in the audience voted for the team they believed made the most convincing argument. If a team garnered at least 60% of the audience vote, it received 5% extra credit toward its overall presentation score. The purpose of this extra credit was to provide additional incentive to produce high-quality arguments and presentations. Information from the debate topics themselves was not assessed in subsequent examinations because the purpose of the debate exercise was not to learn the topics but to provide students the opportunity to refine their skills in literature evaluation, evidence-based decision making, and oral presentation, as noted in the learning objectives. The debates accounted for 10% of a student’s overall course grade (housed in the team component of the TBL course), and intra-team peer evaluations (which included feedback on the debates) were conducted at the midpoint and end of the semester, accounting for 2% of the overall course grade.
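To make the weighting scheme concrete, the composite score described above can be sketched as follows. This is an illustrative calculation only: the score values are fabricated, and the treatment of the 5% extra credit as five percentage points added to the presentation score is an assumption, not taken from the course's actual grade computation.

```python
# Hypothetical sketch of the composite debate grade: peer average weighted
# 40%, faculty average weighted 60%, plus a 5% bonus for teams winning at
# least 60% of the audience vote. All names and values are illustrative.

def debate_score(peer_scores, faculty_scores, won_audience_vote, max_points=55):
    """Combine peer (40%) and faculty (60%) rubric averages into a percent,
    then apply the extra-credit bonus if the team won the audience vote."""
    peer_avg = sum(peer_scores) / len(peer_scores)
    faculty_avg = sum(faculty_scores) / len(faculty_scores)
    combined = 0.4 * peer_avg + 0.6 * faculty_avg
    percent = combined / max_points * 100
    if won_audience_vote:
        percent += 5  # assumed interpretation: +5 percentage points
    return round(percent, 1)

# Made-up rubric scores (out of 55) from four peer teams and two faculty
print(debate_score([52, 51, 53, 52.5], [45, 44.5], won_audience_vote=True))
```

Note how the faculty average dominates the composite, consistent with the 60% weighting described in the text.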
To externally evaluate the debatable nature of the topics, clinical pharmacists’ opinions on each topic were surveyed via the American College of Clinical Pharmacy (ACCP) Infectious Diseases Practice and Research Network (ID-PRN) email listserv. Assessment of student learning was documented via peer and faculty evaluation of debate performance and students’ self-reported perceptions of abilities to perform skills outlined by each of the learning objectives using a pre- and post-survey design. The pre-debate survey was administered at the time of topic assignment, 6 weeks prior to the debates. The post-debate survey was administered in class at the conclusion of each debate day. The pre-survey consisted of two demographic questions (age and gender), six questions phrased directly from each learning objective, and one summarizing question. The same questions were replicated in the post-survey, which also included four additional questions on student perceptions of the debates and one question to capture open-ended feedback. Participation in the pre- and post-surveys was optional, and no incentives were provided for participation. Informed consent was obtained prior to administration of the pre-survey. All survey responses, as well as voting for debate teams, were collected through the TurningPoint electronic audience response system. Each student had a TurningPoint response device tied to his or her unique student identification number, which was used to match pre- and post-survey responses. Anonymous open-ended feedback upon completion of the debates was provided on paper and collected at the end of class by a faculty member not involved in course grading. To eliminate the possibility of survey participation and responses affecting course grades, all survey responses were de-identified and matched by a faculty member not involved in grade assignments before course coordinators were provided the survey data.
The CNUCOP Institutional Review Board approved this study. The expert opinion survey as well as the pre- and post-debate student surveys were analyzed using descriptive statistics. Comparisons between the pre- and post-surveys were analyzed with the paired samples Wilcoxon signed rank test with an a-priori alpha value of 0.05 for statistical significance. All statistical analyses were performed using SPSS statistics version 23 (IBM Corp., Armonk, NY).
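For readers who wish to reproduce this type of paired, non-parametric pre/post comparison without SPSS, a minimal sketch using Python's SciPy library is shown below. The Likert-scale responses are fabricated for illustration and do not reflect the study's data.

```python
# Illustrative paired pre/post comparison with the Wilcoxon signed-rank
# test (scipy's equivalent of the SPSS procedure described in the text).
# The survey responses below are fabricated for demonstration only.
from scipy.stats import wilcoxon

pre  = [3, 2, 3, 4, 2, 3, 3, 2, 3, 3]   # hypothetical 5-point Likert ratings
post = [4, 4, 4, 5, 3, 4, 4, 3, 4, 4]   # same students after the debates

stat, p = wilcoxon(pre, post)  # paired samples, non-parametric
print(f"W = {stat}, p = {p:.4f}")
if p < 0.05:  # a-priori alpha of 0.05, as in the study
    print("Statistically significant change between pre- and post-surveys")
```

Because every fabricated post-debate rating here is at least as high as its pre-debate pair, the test statistic (the smaller sum of signed ranks) is zero and the change is significant at alpha 0.05.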
RESULTS
The debate topic survey of clinical pharmacists’ opinions using the ACCP ID-PRN email listserv garnered 163 respondents. Their average age was 37.1 years, with an average of 8.2 years of experience practicing in or affiliated with the area of infectious disease. The majority of respondents practiced in an inpatient or hospital setting (82.6%). Their level of agreement with each of the debate topic statements is shown in Table 2. Overall, for each debate topic, approximately one-third of respondents either agreed or held neutral opinions on the statement while the other two-thirds disagreed, thus validating the debatable nature of the assigned topics.
Survey of Expert Opinion on Each of the Debate Topic Statements (N=163)
Average student performance on the individual readiness assessment tests was 93.3% (14/15) and all teams scored 100% on their team readiness assessments. The average student score on the exercise as a whole was 88.2% (48.5/55) prior to addition of extra credit points, which were awarded to 7 of the 18 teams. The average score by student/peer grading was 95% (52.3/55), while average scores by the two faculty facilitators were 81.1% (44.6/55) and 81.8% (45/55), respectively. Appendix 1 shows a breakdown of peer and faculty scoring by each component of the grading rubric. Student-rated scores were significantly higher than those of faculty graders. Debaters generally performed well in the delivery portion of the rubric, which mirrored the sixth learning objective, but had more variable scores in the content portion, which tied to the first five learning objectives. In particular, students scored the lowest (4.5/5 by peers and 3/5 by faculty) in anticipating opposing arguments and identifying limitations in them, which represented the third learning objective. The next lowest faculty-rated score pertained to the fourth learning objective, the cross-examination portion of the debates, where the average faculty-rated score was 3.3/5. Scores related to assessment of the other learning objectives were generally higher, with all faculty-rated scores being greater than 3.8/5. All debate teams’ presentations were scored as being professional and completed within the allotted time.
Eighty-eight of 95 students completed both the pre- and post-surveys (93% response rate). Demographics are shown in Table 3. The majority of students (71%) were below the age of 29 years, with 60% of respondents being female. Results from the surveys are shown in Table 4. All questions on the surveys (six phrased identically to the learning objectives and one summary question) showed statistically significant changes between pre- and post-surveys, with the majority of students showing an improvement in self-assessed abilities to perform skills outlined by each learning objective. The post-survey results of students’ perceptions of the debate exercise are shown in Figure 1. Overall, the majority of students enjoyed the debate exercise and believed it improved their ability to make and defend clinical decisions. Although the majority of students did not feel the workload associated with the debates was excessive, 23% believed it was too much work.
Demographics of Students Included in the Analysis of Pre- and Post-Surveys (N=88)
Pre- and Post-Survey of Students’ Self-Assessed Ability to Perform Learning Objectives (N=88)
Post-Survey of Students’ Perceptions of the Debate Exercise.
There was lack of consensus among students as to whether there should be more debates in the pharmacy curriculum. Open-ended feedback was provided by 65 students and anecdotally confirmed the overall positive student experience (Examples: “I really liked the debates and feel more comfortable talking about evidence”; “Good public speaking practice”; “I liked that the debates gave us two sides to a topic, as we’re always told, things aren’t black and white in the real world”). In addition, students found the reading assignment and readiness assessments helped them understand and learn from the presentations of their classmates (Examples: “The readiness assignment was short and helped me follow the debates of my classmates”; “I at first found the reading assignment to be extra work on top of preparing my debate but after class I realized it really helped me stay interested while I was in the audience”; “I’m glad the reading wasn’t too long to distract my debate prep. It made me interested in hearing others’ arguments and learn from them”). Although most students felt motivated by the opportunity for extra credit and were more engaged in the assignment as a result, some students felt the adjudication of extra credit was not fair (Examples: “Great idea with the extra credit. It doesn’t promote hostility or discourage students if they don’t win”; “The extra credit made me want to work harder, even though I didn’t get the points”; “It seemed the team that went last had the advantage of having the last say, this wasn’t fair in terms of extra credit”; “We worked really hard on our debate so I don’t think it was fair that we didn’t get any extra credit”). 
Most students believed the debates helped them learn about the disease states while also improving their confidence and ability in literature evaluation, evidence-based decision making, and oral presentation (Examples: “Although we’ve done journal clubs before, this was cool how we had to look at it from both sides and defend our opinion”; “Great practice for presentations before APPEs”; “I think this will help me on rotations when I need to back-up what I say with evidence”).
DISCUSSION
The debate exercise, incorporating components of a TBL pedagogy, was an effective classroom assignment that demonstrated significant improvements in students’ self-reported perceptions of abilities to perform the skills outlined in the learning objectives. In addition, peer and faculty evaluations of students’ performance documented formal assessment of skills tied to each learning objective of the assignment. Students generally delivered their debate presentations well but struggled with cross-examination and with anticipating opposing arguments and identifying limitations in them. The methodology used in this debate improved upon the previously piloted conventional debate format by incorporating a pre-class reading assignment, readiness assessment tests, and immediate discussion and clarification of background information on all topics prior to starting the debates. Performance on the readiness assessments was high, indicating adequate pre-class preparation, which ensured all students were knowledgeable enough on the material to engage actively in their colleagues’ presentations.
Although iRAT percent scores in the course typically average in the mid- to low-80s, the average iRAT score for the debates was 93.5%. This may be because the reading assignment for the debates was much shorter than usual, intended only to provide basic background knowledge on the three topics while minimizing distraction from students’ preparation of their own debate topics, and this may explain the unusually high iRAT scores on this assignment. In the future, RAT questions may have to be cautiously increased in difficulty. Although open-ended student feedback regarding the TBL components of the exercise (readiness assessment and discussion) was positive, a limitation of this study is that it did not specifically evaluate the impact of adding TBL-based components to the debate format. This evaluation was not possible because formal data collection was not performed for comparison during the prior year’s pilot of the conventional debate format.
To further promote audience engagement in the debates, students were required to participate in grading of debate teams. During the previous year’s pilot of the debates, average student-rated scores were similar to those of faculty, demonstrating reasonable objectivity. However, in this exercise, student-rated scores were significantly higher than those of faculty and were unlikely to have been objective. As a result, the weight of student-rated scores toward the final grade will be reassessed for future debates. Although this may have affected the overall grade assignments and represented a limitation in the design of the debates’ grading, it is unlikely to have affected student learning or survey results. Another weakness of this study was that it did not formally document an improvement in the skills associated with each learning objective. Such documentation would require a baseline assessment of skills pertaining to each learning objective, which would have necessitated an additional course assignment in which such skills were demonstrated. As the course design did not have room or time for such an additional assignment, an assessment of students’ skills at baseline was not performed. However, the study authors documented an objective assessment of students’ skills pertaining to each learning objective, and students generally scored well on the debate exercise. Of note, the purpose of the debates was not to provide knowledge on the debate topics, which could have been assessed by an examination before and after the debates, but rather to promote students’ skills in literature evaluation, evidence-based decision making, and oral presentation. Therefore, an objective measure of skill improvement was not feasible. The topics used for the exercise were validated as being sufficiently debatable, as there was a lack of consensus on them among clinical pharmacists practicing in the field of infectious disease.
Although the debate topics were carefully chosen by course coordinators, this external validation step was important to ensure the topics were fair and balanced, providing opportunity for compelling arguments both for and against each topic. In addition, the lack of consensus among clinicians highlighted the clinical relevance of the debate topics.
Participation rate in the surveys was excellent (>90%), especially considering students were neither required to participate nor provided any incentive for participation. Most survey respondents were between 25 and 29 years of age, likely due to our institution’s requirement for a bachelor’s degree to qualify for admission at the time the surveyed class was admitted. Evaluation of the pre- and post-surveys showed that the majority of students improved in their self-reported perceptions of abilities to perform skills outlined by each of the learning objectives. Consistent with previous studies of debates in health care education, students perceived particularly strong improvements in presentation skills and ability to form evidence-based decisions.4,5,11 Moore and colleagues described a clinical controversy debate format similar to ours while also summarizing the findings of other studies on debates in allied health curricula.11 In their study, students were enrolled in an ambulatory care elective and 13 of 18 students were included. Overall, students either agreed or strongly agreed that the debates improved their ability to make evidence-based decisions, think critically, and deliver presentations. Similarly, McNamara and colleagues conducted a clinical controversy debate as part of a pharmaceutical care laboratory course in which student learning was assessed by means of pre- and post-surveys of student self-assessments. Their study included 140 students and, similar to this study’s findings, students’ self-assessments improved in identifying and analyzing the literature, predicting and defending against an argument, and delivering a presentation. The skill in which they saw the greatest improvement was predicting opposing arguments. In this study, by contrast, the third learning objective (anticipate opposing arguments and identify limitations in them) showed the least improvement in students’ self-reported abilities.
In addition, this correlated with the actual scores of debating teams as both peers and faculty scored debating teams the lowest in this learning objective (Appendix 1). Although students are given numerous opportunities to practice presentation skills during the curriculum, anticipation of opposing arguments is unique to a debate-style format. Students had previously participated in activities related to literature evaluation and presentation, such as journal clubs and Objective Structured Clinical Examinations (OSCEs), but had not experienced formal debates in the curriculum. Therefore, students’ responses likely reflect that this was a relatively new competency they were exposed to.
In this study, the majority of students documented a positive change in self-reported abilities. A small percentage of students rated their abilities lower on the post-survey than on the pre-survey. This is unlikely to represent a true decline in abilities and more likely reflects a phenomenon of survey-based research known as response shift bias,12,13 which is characterized by a shift in the respondents’ baseline level of comparison from the pre- to the post-survey. In terms applicable to this study, upon completing the debate exercise, students may have become more aware of their deficiencies or of aspects of the exercise they struggled with, prompting a shift in their baseline self-rating and leading them to score themselves lower on the post-survey. Thus, in response shift bias, a decline in score may actually represent increased self-awareness of weaknesses rather than a true decline in abilities.14,15 One potential solution to response shift bias is use of a retrospective pre-test and post-test design, also known as a then-post design, in which students are asked to rate both their pre- and post-intervention abilities at the end of the exercise.16,17 This prompts students to directly self-reflect on their change in abilities upon completion of an activity. This methodology was not used here, which may be a limitation of this study. In addition, our survey instrument was not pre-tested or validated prior to use, which is another limitation. However, the occurrence of response shift bias makes it more difficult to detect a positive change with the conventional pre-post survey design used in this study. Despite this potential disadvantage, the results demonstrate that a majority of students perceived improvements in their self-reported abilities.
It would therefore be reasonable to expect an even larger magnitude of improvement in self-reported abilities had a retrospective pre-survey design been used. In addition, some studies have noted limitations with both conventional pre-post and retrospective pre-post designs, showing a lack of consensus on the ideal methodology for pre-post survey research.17,18
Although no incentives to participate were provided, the sample size was large, with 93% of students completing both surveys, making the results representative of the overall student experience. Identical questions were used on the pre- and post-surveys, and these questions were phrased identically to the learning objectives, which were also used in the grading rubric. Consistency in phrasing across these three mediums improves the internal validity of the surveys and assessments while enhancing curricular mapping and documentation of achievement in Standards 3 and 4 of the 2016 ACPE standards.2 Evaluation of debate topics by a survey of expert opinion is a strength that makes this study unique in comparison to previously published literature on debate exercises. In addition, this study is the first to incorporate components of the TBL pedagogy into a debate format to promote audience engagement. Given the success of the debate exercise coupled with positive student perceptions of their experience, the only change course coordinators anticipate for future debate exercises is to reduce the weight of the student-rated score to 25% of the overall grade. The remaining format and design of the debates will likely remain unchanged.
CONCLUSION
The results of this study of a TBL-based debate format document evidence of improvement in students’ self-assessed perceptions of abilities in literature evaluation, evidence-based decision making, and oral presentation. The study also documents assessment of student learning pertaining to these skills. The assignment described provides a template for debates in a TBL curriculum that can be extrapolated to other topics and classroom settings. Future studies on similar topics may consider using both conventional and retrospective pre-surveys to better document the potential occurrence and impact of response shift bias. In addition, future studies could include an assessment of knowledge of the topics before and after the debates as an objective measure of the impact of debates in the classroom setting.
Appendix 1. Grading rubric for debate presentations, and average peer and faculty scores on each component

- Received January 13, 2017.
- Accepted May 5, 2017.
- © 2018 American Association of Colleges of Pharmacy