Abstract
Objective. Within pharmacy experiential education, students typically practice literature evaluation skills through journal clubs. Clinical debates have gained traction as an engaging alternative to journal clubs during advanced pharmacy practice experiences (APPEs). The purpose of this study was to compare clinical knowledge and literature evaluation application between journal clubs and clinical debates during APPEs.
Methods. This mixed-methods prospective study was conducted in fourth-year pharmacy students completing inpatient general medicine APPEs at four institutions. Students participated in a journal club and a clinical debate during their experience and completed a 10-item knowledge assessment after each activity. Differences in journal club and clinical debate assessment scores were analyzed. Following completion of both activities, a perception survey was administered to gauge preferences and opinions. Differences in perception survey scores for journal clubs compared to clinical debates were evaluated quantitatively, and a thematic analysis was completed for qualitative responses.
Results. Fifty students participated in both activities. There was no difference between journal club and clinical debate assessment scores (57.4%±21.0% and 62.9%±20.7%, respectively). Forty students completed the post-perception survey and generally agreed or strongly agreed that both journal clubs and clinical debates improved confidence in literature evaluation and clinical skills. Common themes identified included applicability to pharmacists' roles and the need for clear instructions and examples.
Conclusion. There was no significant difference in student performance on knowledge assessments following journal clubs and clinical debates, and students found both activities beneficial. Clinical debates are a reasonable alternative to journal clubs for improving pharmacy students' knowledge and literature evaluation skills.
INTRODUCTION
Pharmacy curricula are designed to cultivate students' critical thinking and literature evaluation skills over time. While exposure to these concepts and opportunities for application exist in the classroom, experiential education is an ideal setting to practice them in real time, particularly through literature evaluation activities. Traditionally, in-depth literature evaluation is conducted using a single article in a discussion-based journal club format. However, interactive clinical debates are becoming more commonplace in health care education.1,2
Journal clubs have been utilized within health professions education for over 150 years.3 While the increased availability of published peer-reviewed literature has changed access to these resources, the overarching skills employed within journal clubs remain the same: critical literature evaluation and facilitation of dialogue with peers to understand the application to patient care.3 In a meta-analysis conducted in 2020, Ilic and colleagues examined the use of journal clubs to teach literature evaluation skills to health professionals.4 The broad inclusion criteria placed no restriction on journal club delivery style or type of health professions learner. Of the 152 sources identified, five were included for analysis. The authors sought to evaluate the broader teaching of evidence-based medicine, including application; however, the individual components of knowledge and critical appraisal were also evaluated. The control groups in these studies included lectures, email summaries, and self-study. Surprisingly, assessments of improvement in health professionals' knowledge and critical appraisal skills did not find any significant differences between those who participated in journal clubs and those in control groups.4
Although less widely studied, clinical debates have been used in health education and can follow a variety of formats.1,2,5 The Lincoln-Douglas format is the most traditional: both sides present their argument, followed by an opportunity for rebuttal and, ultimately, a summary of each argument. Like journal clubs, clinical debates present an opportunity to critically evaluate literature, practice communication, and collaborate with peers or other health care professionals. However, clinical debates go one step further, requiring students to challenge prior opinions and more comprehensively consider the evidence for both sides of a topic.5 They also encourage respectful dialogue and enable students to refine their persuasion skills. Identifying unique and promising ways to engage students in developing these essential skills is timely: the projected publication volume makes efficient literature evaluation a necessity to stay up to date.6 Likewise, educators note that the latest generation of students may have deficiencies in verbal communication resulting from a dependence on digital communication and devices.7
To date, the benefits of clinical debates have been evaluated primarily in single-center pilot studies using perception surveys without comparator groups and in the didactic setting. Evaluation of their benefits in the experiential setting and head-to-head comparisons with journal clubs are lacking.5 Therefore, the purpose of this study was to compare pharmacy students' clinical knowledge and literature evaluation skills learned from participating in journal clubs vs debates held during advanced pharmacy practice experiences (APPEs). We present findings from a multi-site, multi-year study assessing the use of journal clubs and clinical debates on pharmacy student learning.
METHODS
This was a two-year, prospective study of fourth-year professional pharmacy students (P4s) on an inpatient general medicine APPE at four institutions from May 2018 to April 2020. Institutions were both public and private four-year schools or colleges, and rotations ranged from four to six weeks. Each institution incorporated core courses or experiences pertaining to literature evaluation within either the second or third didactic year. Students participated in both a journal club and a clinical debate during their experience. All experiences involved two to four students. For clinical debates involving two or four students, one or two students represented one side of a controversial topic while the remaining one or two students represented the other side. For clinical debates with three students, each student represented their own side, resulting in three opposing viewpoints. For journal clubs with two or three students, each student worked individually on a different article; for journal clubs with four students, one article was assigned to each pair of students, and students kept the same partner for the clinical debate. For consistency, the journal club was completed during week two of the experience and the clinical debate during week four, regardless of rotation length. Students were provided instructions on day 1 of the APPE (Appendix 1), with more detailed instructions provided to students assigned to the clinical debate groups because, unlike journal club sessions, debates were not standard learning experiences within the didactic curricula at the study institutions. The clinical debate followed the Lincoln-Douglas format.
The activities were completed on one of four topics, chosen for their applicability to a general internal medicine experience. Initially, eight topics were proposed and researched, and the investigators discussed which of these would be best supported by available evidence for multiple viewpoints. Final topics included the use of beta-blockers in patients with heart failure with reduced ejection fraction (HFrEF), the use of direct oral anticoagulants (DOACs) in patients with non-valvular atrial fibrillation (NVAF), venous thromboembolism (VTE) prophylaxis in patients with obesity, and the use of DOACs in patients with heparin-induced thrombocytopenia (HIT). Topics were randomly assigned to students and completed as either a journal club or clinical debate. The topic selected for the journal club was different from that used in the clinical debate in order to limit confounding in the clinical knowledge assessments. All topics had corresponding articles that were predetermined by the investigators and were assigned and provided to students for the journal club portion of the study. Students were also assigned a stance for clinical debates (eg, the use of carvedilol or metoprolol succinate in HFrEF). In instances where three students were on the same APPE, an additional article and stance were provided (eg, use of carvedilol, metoprolol succinate, or bisoprolol in HFrEF).
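To make the crossover topic assignment concrete, the sketch below illustrates drawing two distinct topics from the four study topics so that each student's journal club and clinical debate cover different subject matter. It is illustrative only and is not the authors' actual randomization procedure; the function and variable names are assumptions.

```python
import random

# Illustrative sketch of the crossover topic assignment (not the study's actual
# randomization code): each student or group draws two distinct topics, one for
# the week-2 journal club and a different one for the week-4 clinical debate.
TOPICS = [
    "Beta-blockers in HFrEF",
    "DOACs in non-valvular atrial fibrillation",
    "VTE prophylaxis in obesity",
    "DOACs in heparin-induced thrombocytopenia",
]

def assign_topics(rng: random.Random) -> dict:
    """Randomly pick two different topics, one per activity."""
    journal_club_topic, debate_topic = rng.sample(TOPICS, k=2)
    return {"journal_club": journal_club_topic, "clinical_debate": debate_topic}

if __name__ == "__main__":
    print(assign_topics(random.Random(2018)))
```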
Immediately following each experience, students completed a 10-item knowledge assessment consisting of six items related to clinical knowledge and application of the topic and four items related to literature evaluation and biostatistics, scored on a 10-point scale. Each assessment was designed by one investigator and then peer reviewed by the remaining investigators; assessment questions were also screened by non-investigators for additional review. The activities and assessments were piloted with a cohort of students prior to study initiation to fine-tune the process and assessment questions and to identify potential issues. The assessment questions were identical for each topic at each respective site and consistent throughout the entire study period. While assessment scores did not impact students' APPE evaluations, some investigators chose to incorporate students' overall performance on the activities into their evaluations. After completing the final assessment, an anonymous, online, 39-item survey was distributed to participants via Qualtrics (Qualtrics, Provo, UT) to gather perceptions of both activities. Data collected included baseline demographics, prior degree, self-reported pre-APPE grade point average (GPA), APPE block, number of prior clinical APPEs, level of agreement with perceived skills and abilities gained from each activity (Likert scale), and advantages and disadvantages of each activity (open-ended free-text response).
The primary outcome was the difference in overall assessment scores for journal clubs compared to clinical debates. Secondary outcomes included differences in the clinical knowledge and literature evaluation subscales for journal clubs compared to clinical debates and differences in perceived skills and abilities gained during journal clubs vs clinical debates. Furthermore, predictors of performance on journal clubs vs clinical debates were assessed. Finally, a qualitative analysis was completed to identify themes in students' free-text responses regarding journal clubs vs clinical debates.
Descriptive statistics were used to summarize the overall data, the combined and individual topic assessment scores, and the perception survey responses. Assessment scores were compared between journal club and clinical debate assessments using a Student t test or Mann-Whitney U test, as appropriate. Differences in perception survey scores for journal clubs compared to clinical debates were evaluated using the Mann-Whitney U test. Correlation analysis was completed to identify variables associated with higher assessment scores for both journal clubs and clinical debates. Based on univariate analysis of perception survey variables, those with p<0.2 were included in a linear regression analysis to identify predictors of higher assessment scores. For the qualitative analysis, responses to open-ended questions on the advantages and disadvantages of each activity were independently coded by two investigators (MLH, SAN). The investigators established a final code book through consensus and then reviewed patterns to identify themes for each question. Final themes were determined through consensus, and all investigators reviewed the themes to confirm consistency and agreement.
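As a rough illustration of these quantitative analyses, the following Python sketch applies the named tests to simulated scores. The data, variable names, and the single predictor shown (pre-APPE GPA) are assumptions for demonstration only; the analyses were actually conducted in SPSS, and this sketch does not reproduce the study results.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Simulated data standing in for the study dataset (assumed values).
rng = np.random.default_rng(0)
jc_scores = rng.normal(57.4, 21.0, 50)   # journal club assessment scores (%)
cd_scores = rng.normal(62.9, 20.7, 50)   # clinical debate assessment scores (%)
gpa = rng.normal(3.4, 0.3, 50)           # self-reported pre-APPE GPA (assumed)

# Primary outcome: journal club vs clinical debate assessment scores.
t_stat, t_p = stats.ttest_ind(jc_scores, cd_scores)               # Student t test
u_stat, u_p = stats.mannwhitneyu(jc_scores, cd_scores,
                                 alternative="two-sided")          # nonparametric alternative

# Correlation of one candidate predictor with debate scores (Spearman rho).
rho, rho_p = stats.spearmanr(gpa, cd_scores)

# Variables with p<0.2 on univariate testing would then enter a linear regression.
X = sm.add_constant(gpa)
ols_fit = sm.OLS(cd_scores, X).fit()

print(f"t test p={t_p:.3f}, Mann-Whitney p={u_p:.3f}")
print(f"Spearman rho={rho:.3f} (p={rho_p:.3f}), regression R^2={ols_fit.rsquared:.3f}")
```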
Based on results from the pilot period, a sample size of 46 participants was estimated to provide 80% power to detect a 10% difference in the primary outcome of assessment scores on journal clubs compared to clinical debates. A 10% difference was identified as a meaningful difference in assessment scores as this would theoretically result in a change in letter grade on the assessment. All statistical tests were two-sided with p values <0.05 considered statistically significant. Statistical analyses were conducted using SPSS, version 27.0 (IBM).
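A hedged sketch of this sample size estimate is shown below. The pilot-period variability the authors used is not reported, so the standard deviation here is an assumption (borrowed from the reported assessment-score SDs for illustration), and the output will not exactly reproduce the published figure of 46 participants.

```python
from statsmodels.stats.power import TTestPower

# Illustrative power calculation only; the pilot SD is assumed, not reported.
difference = 10.0    # meaningful difference in assessment score (%)
assumed_sd = 21.0    # assumed SD of score differences (%), for illustration
effect_size = difference / assumed_sd

# Solve for the number of students needed for 80% power, two-sided alpha 0.05.
n = TTestPower().solve_power(effect_size=effect_size,
                             alpha=0.05, power=0.80,
                             alternative="two-sided")
print(f"Estimated sample size: {n:.0f} students")
```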
RESULTS
Fifty students participated in the journal club and clinical debate activities and completed the corresponding assessments. Baseline demographics of participants who responded to the perception survey (n=40) can be found in Table 1. Twenty-seven students (54%) completed their APPE during an early block (blocks 1 to 4), while the remaining 23 students (46%) completed their APPE during block 5 or later (ie, they had completed 20 to 30 weeks or more of APPEs). Students reported an average preparation time of 7.9±5.4 hours for journal clubs and 9.1±6.7 hours for clinical debates (p=.366). There was no significant difference in mean overall assessment scores for journal clubs compared to clinical debates (57.4%±21.0% vs 62.9%±20.7%, respectively; p=.143). Furthermore, there were no significant differences in clinical knowledge question scores (53.7%±26.3% vs 63.3%±24.0%; p=.073) or literature evaluation question scores (63.3%±25.7% vs 61.8%±26.2%; p=.875) for journal clubs compared to clinical debates, respectively. Correlation analysis revealed that female gender was associated with higher journal club assessment scores (rs=0.345, p=.029) and that pre-APPE GPA was associated with higher clinical debate assessment scores (rs=0.329, p=.041). However, when these variables were entered into a regression analysis, no significant findings were noted for journal club (r2=0.187, p=.056) or clinical debate (r2=0.067, p=.112) assessment scores.
Demographics of Pharmacy Students Who Participated in an Evaluation of Journal Club vs Clinical Debate Activities (N=40)
Forty participants responded to the perception survey following the activities, for a response rate of 80%. Results of the perception survey are presented in Table 2. All but one student (97.5%) reported having previously completed a journal club on APPEs, whereas only 15 (37.5%) reported having completed a debate on APPEs. Overall, participants agreed or strongly agreed that journal clubs and clinical debates improved their confidence in their literature evaluation and clinical skills and that each was a beneficial experience. There was no significant difference in the perception survey responses for any question (p>.05 for all comparisons).
Pharmacy Students’ Perceptions Regarding Learning in Journal Club vs Clinical Debate Activitiesa
Responses to open-ended questions about the advantages and disadvantages of journal clubs and clinical debates, and why students thought one was more beneficial than the other, yielded 40 participant responses totaling 7163 words for qualitative analysis. Nine unique codes were identified from the students' responses, resulting in six themes: three overall themes, one additional theme specific to journal clubs, and two additional themes specific to clinical debates. Themes and examples of students' supporting statements can be found in Table 3.
Qualitative Analysis of Students’ Perceptions of Advantages and Disadvantages of Journal Clubs and Clinical Debate Activities
DISCUSSION
This is the first study to assess knowledge application and perceptions of journal clubs vs clinical debates within health professions education. A review of the quantitative assessments and student perception survey responses found no differences in learning between journal clubs and clinical debates, although clinical knowledge assessment scores numerically favored clinical debates. Qualitative data from the student perception survey did identify some differences between these learning experiences. Students perceived the structured framework of journal clubs as limiting engagement and collaboration with the audience. Students' quantitative perception responses regarding time commitment showed no significant difference, but this theme was evident in the qualitative data; the emphasis on time may reflect the novelty of the clinical debate experience, as less than 40% of students reported having completed a similar activity prior to the study. Qualitative data also highlighted that clinical debates enhanced students' understanding of disease state pharmacotherapy through consideration of conflicting perspectives and refined critical skills needed by future pharmacists.
While prevalent in health professions education for years, there are limited studies evaluating the educational impact of journal clubs, with most within medical education and training. One study involving medical residents found no difference in assessment scores for clinical application of biostatistics in those participating in journal clubs compared to controls.8 In another study of medical residents, those completing a journal club demonstrated enhanced clinical application assessment scores and perceptions compared to controls.9 The findings from Linzer and colleagues show the utility and benefit of journal clubs in enhancing knowledge compared to baseline, which the current study did not assess. While knowledge assessments are used in medical education to evaluate student learning from participation in journal clubs, in pharmacy education, most published journal club literature focuses solely on student perceptions of the experience rather than on assessment of learning.10-12
For clinical debates within health professions education, published articles exist in both pharmacy and medical education; however, most are pilot studies using quantitative pre/post student perceptions.5,13 Little published literature evaluates application-focused assessments of these learning activities within experiential education. In the didactic setting, one study found no difference in examination question performance when comparing a debate-based vs lecture-based delivery format.2 However, the examination questions were unique for each topic, without the crossover design employed in the current study. The current study's findings show no difference in assessment scores between clinical debates and journal clubs; however, it is important to recognize the potential utility of each approach within the experiential pharmacy setting. Clinical educators are encouraged to evaluate not only the Lincoln-Douglas debate and journal club formats used in this study (Appendix 1) but also other published formats and implementation strategies to determine which approach best suits their setting.
Based on their intent and use within experiential education, both journal clubs and clinical debates can be beneficial. The current study demonstrated no difference in assessment performance or quantitative student perceptions for these learning activities. However, the qualitative data from the student perception survey identified potential advantages of clinical debates in synthesizing plans from multiple pieces of published literature and in refining communication and critical thinking skills, all of which are vital to students' development as future pharmacists. Although these findings contradict the quantitative survey results, which showed no perceived differences in synthesizing conclusions, communication, or critical thinking skills, the qualitative data highlight the potential utility of clinical debates.
The current study does have limitations. First, while the study included student pharmacists from multiple institutions, the sample size of 50 limits generalizability; however, this met the a priori power calculation to detect a difference if one existed. Second, students' baseline familiarity with these learning experiences could have influenced their responses on the quantitative perception survey. Additionally, exposure to the study topics during or prior to the experience could have affected assessment scores. Similarly, the sequence of debates after journal clubs may have impacted students' performance on the literature evaluation assessment items despite the use of unique clinical topics. Another limitation is that the investigators created the assessments used to evaluate application and knowledge rather than using a published, fully validated assessment instrument; however, the assessments were reviewed by multiple faculty members, including non-investigators, and piloted with student pharmacists prior to use within the study. In addition, there was large variability in the assessment scores, which may have impacted our results. Finally, the current study did not evaluate long-term endpoints for these learning activities. The strengths of this study include the crossover design and the inclusion of students from four schools of pharmacy, both private and public. Additionally, it is the first study within health professions education to use both direct and indirect assessments of student learning.
CONCLUSION
Student pharmacists completing journal clubs and clinical debates during inpatient general medicine APPEs demonstrated no difference in application-based assessments or in perceptions of these learning experiences. Both journal clubs and clinical debates are viable learning activities to refine pharmacy students’ literature evaluation and communication skills.
Appendix 1. Student Instructions for Learning Activities
Journal club
The journal club activity focuses on evaluation of primary literature and clinical application.
One journal article will be assigned to each student, or to each group of two students if there are four APPE students. Additionally, prior to the journal club date, you should read your peers' article(s).
You will be responsible for evaluating your assigned article using your college's journal club format.
Clinical debate
Background
As you have learned during your pharmacy training, there is often not one right answer to some clinical questions or situations. In fact, there are often valid arguments with contradicting conclusions. This activity provides an opportunity to explore a relevant pharmacy topic with competing viewpoints. The goal of the debate is not to prove your colleagues wrong; instead, it is to clearly articulate a rational, evidence-based argument that supports your position. You will be evaluated on your ability to clearly support an argument with evidence or logic, respond to information presented by the opposition, answer questions posed by preceptors, and articulate an appropriate conclusion based on all information presented. Feedback will be provided to the teams from the preceptor(s).
Overview of Activity
You will be assigned to one of two groups (or three groups if three APPE students) representing one side of the debate.
The first week of the rotation, you will be given a clinical scenario for the debate by your preceptor and which side you will defend.
You will be allowed to bring written notes to the debate; however, you should be well prepared and NOT rely on these notes or read directly from them while speaking during the debate.
o Your team is required to bring a handout (hard copy or electronic); the handout must contain the top three points of your argument, with 2-3 bullet points supporting each point with evidence and citations.
o All references utilized in the debate must be listed at the end of the handout and documented using American Medical Association (AMA) citation format.
The debate will proceed as follows (50 minutes total):
o Round 1 – Presentation of opening arguments (up to 5 minutes for each team)
o Preparation of rebuttal (up to 5 minutes total)
o Round 2 – Rebuttal (up to 5 minutes for each group)
o Preparation of concluding arguments (up to 5 minutes total)
o Round 3 – Concluding arguments (up to 5 minutes per team)
o Peer and faculty questions and answers (up to 10 minutes total)
The debate should be a civil and respectful learning environment. Groups will only be allowed to speak during their allotted time, not during their peers’ time.
Tips to be Successful
A debate is a persuasive conversation. Your goal is to persuade the audience to agree with your point of view by supporting it with evidence.
Preparing for the debate:
o Utilize appropriate primary literature and guidelines to support your argument (when available).
▪ Make sure you thoroughly evaluate the literature; it may be helpful to critique the validity or flaws of the opposing side's articles.
▪ Be aware of the flaws in your own literature and be comfortable defending your articles' conclusions.
o When primary literature does not exist, use other relevant information (pathophysiology, pharmacodynamics/pharmacokinetics, phase I and II drug trials, etc.).
o Stick with the facts based on what your literature says (do NOT make things up).
Verbal and nonverbal communication tips:
o Speak clearly and exhibit confidence.
o Present information with good flow and organization.
o Maintain professional composure.
Tips for debate components:
o Opening Arguments
▪ Have a strong, articulate opening argument that grabs the audience’s attention as to why your side of the argument is optimal.
▪ Know what your conclusion is and state this in the opening of the debate (as with a thesis in an essay).
o Rebuttal
▪ In the rebuttal, address the gaps in the other team's arguments (literature quality, patients studied, etc.) and explain how your side of the argument is more ideal.
▪ Preparing for the rebuttal:
While the opposing team is presenting their opening arguments, take notes.
During the time allotted for preparing for the rebuttal, evaluate the opposing team’s argument and literature to support the deficiencies.
o Concluding Arguments
▪ Have a strong, articulate closing statement summarizing your side of the debate.
▪ Clearly state what your conclusion is at the end.
PRACTICE before the debate.
Received January 26, 2021. Accepted June 4, 2021.
© 2022 American Association of Colleges of Pharmacy