Abstract
Objective. To determine the impact of the holistic redesign of top 200 medications learning activities within a Doctor of Pharmacy (PharmD) curriculum by comparing student performance on a comprehensive examination before and after the redesign.
Methods. During a curricular revision at The Ohio State University College of Pharmacy that began with the class of 2020, learning activities involving the top 200 medications were implemented that involved repeated retrieval and mastery concepts, alignment with therapeutic coursework, and autonomous learning regarding the top 200 medications. A high-stakes comprehensive top 200 medications examination was administered to students at the end of their third professional year both before and after implementation of these activities. The difference in the percentage of students who achieved a satisfactory score on the comprehensive examination was compared between cohorts prior to and following the curricular redesign.
Results. The study analyzed results from 134, 130, and 120 students from three PharmD classes (one before and two after the redesign of top 200 medications activities). Following the redesign, a higher percentage of students achieved a satisfactory score of at least 85% on the examination (class of 2020: 116/130, 89.2%; class of 2022: 107/120, 89.2%) compared to before the redesign (class of 2019: 88/134, 65.7%).
Conclusion. The combination of repeated retrieval and mastery, alignment with therapeutic coursework, and development of autonomous learning can significantly increase student knowledge and retention of top 200 medications.
INTRODUCTION
The pharmacist’s role on the health care team is that of medication expert.1,2 Critical to this role is knowledge of the top 200 most prescribed medications. Foundational drug knowledge is also required by the Accreditation Council for Pharmacy Education (ACPE) standards and guidance documents.3,4 Students may perceive drug knowledge as a rote memorization activity and, therefore, may have difficulty retaining or retrieving this content when necessary.5
Many studies have focused on student perception of teaching strategies related to the top 200 medications (referred to as the “top 200” herein); however, not all of these strategies have shown a measurable impact on performance. Stoner and Billings found that a course redesign involving alignment of the top 200 with science coursework along with weekly cumulative multiple-choice assessments improved student satisfaction on course evaluations, but the direct impact on student performance was not measured.6 Similarly, the combination of active learning with quizzing described by O’Brocta and Swigart led to favorable student satisfaction and high grade performance, but student performance was not compared to traditional lectures.7 Sando and Feng reported positive student perceptions and high engagement with the use of spaced online gamification; however, there was no significant increase in examination scores or retention of content.8
Studies that have investigated strategies for improving student performance on the top 200 have found conflicting results. Santee found that using practice examinations produced a trend toward improved performance on top 200 assessments, but this did not reach significance.9 In contrast, a study by Mospan and colleagues found that completion of at least 80% of available practice quizzes was associated with higher grade performance in a top 200 drugs course.10 Two studies by Terenyi and colleagues examined retrieval strategies and their impact on retention of brand and generic names among top 200 drugs. Their results showed that repeated practice was associated with better long-term retention; however, the two studies produced differing results regarding the most effective spacing strategy, indicating a need for further investigation.11,12
Given the importance of top 200 knowledge as a foundation for the pharmacist’s role on the health care team, Doctor of Pharmacy (PharmD) programs should use evidence-based pedagogical techniques in the delivery and assessment of this content. Programs can do this by implementing strategies from the studies cited above and by investigating the effectiveness of novel techniques or new ways to incorporate existing strategies into their curricula. While previous studies have demonstrated the utility of pedagogical principles in teaching the top 200, no study has compared performance data on a comprehensive, mastery-based top 200 examination before and after a multifaceted curricular redesign. This research article will demonstrate that the use of repeated retrieval practice, integration of content across courses (alignment of the top 200 with therapeutics content), and use of autonomous learning led to improved student mastery of top 200 content.
The Ohio State University College of Pharmacy’s PharmD program is a four-year postbaccalaureate program. Top 200 drugs are taught within the Integrated Patient Care Laboratory, a practice laboratory sequence during the first three years of the program. The top 200 medications list used at The Ohio State University College of Pharmacy is adapted from a portion of the Sigler Top 300 Prescription Drug Cards (SFI Medical Publishing, http://siglerdrugcards.com). For the cohorts in this study, the top 200 list comprised the 200 highest-ranking (out of 300) medications.
METHODS
Before the implementation of a revised curriculum in 2016, the top 200 was covered during the second year of pharmacy school (P2) through self-guided study of the medications and four low-stakes quizzes (fill-in-the-blank and multiple-choice), each worth 2%-3% of the final grade each semester. The first quiz covered the first 25 medications, and each additional cumulative quiz covered an additional 25 medications based on rank order.
Beginning with the class of 2020, a new PharmD curriculum at The Ohio State University College of Pharmacy was implemented in which the skills laboratory sequence expanded from two semesters to six semesters. As part of curricular redesign, the top 200 was integrated into all six semesters and expanded using three guiding principles: repeated retrieval practice with mastery, alignment with therapeutics coursework, and development of autonomous learning.
Repeated retrieval practice with mastery was implemented to ensure students had frequent practice recalling top 200 knowledge to build toward mastery-level assessments. To do this, students completed electronic quizzes (mostly multiple-choice questions, with some fill-in-the-blank questions on brand and generic name) during laboratory sessions throughout the first six semesters. Three to five low-stakes quizzes were added each semester, with most of these worth less than 1% of the course grade. Quizzes pulled items from question banks that were cumulative, with new questions added for each quiz. Students were permitted unlimited attempts within the laboratory period and kept their highest score. After each quiz, students were provided an optional practice version to use in preparation for one of a series of end-of-semester examinations. End-of-semester examinations were given during each of the first five semesters (autumn of students’ first year [P1] through autumn of their third year [P3]); these examinations were cumulative, multiple-choice examinations worth 16%-20% of the final course grade. Students were required to achieve a minimum mastery score on each end-of-semester examination or successfully complete a remediation assessment to earn a passing grade in the course. The mastery score threshold increased each semester as summarized in Table 1.
Table 1. Pharmacy Students’ Minimum Mastery Score on the Comprehensive Top 200 Examination Administered Before and After Redesign of Medication Learning Activities in a Doctor of Pharmacy Program
Another key principle of the redesign was alignment with therapeutics coursework. Throughout their second and third years, students take the Integrated Pharmacotherapy course sequence that runs concurrently with the laboratory sequence. Content in the Integrated Pharmacotherapy sequence includes medicinal chemistry, pharmacology, pharmaceutics, and therapeutics, and is organized in modules that correspond with body systems. Each quiz added questions covering medications that corresponded to the recently completed module as outlined in Table 1. As each semester progressed, questions from medications covered in previous semesters remained in the question banks to ensure knowledge retention.
A final guiding principle of the redesign was fostering the development of autonomous learning. Students were not provided a single resource containing all of the information covered in the assessments; instead, they were required to independently consult multiple resources and weigh the relevance of the information they found. The intent was to encourage students to autonomously research a medication and determine the most important information about it rather than memorize an assigned resource. Several activities were introduced at the beginning of P2 to develop autonomous learning. First, students were shown an example process for researching a medication and discussed sources in a lecture. During two subsequent laboratory exercises, students gathered information for four assigned medications and then compared their process and information with a peer. Students then discussed their process and findings with a course instructor or teaching assistant to receive feedback. For the remainder of the laboratory sequence, students were expected to independently determine the most relevant side effects, warnings, and monitoring parameters for all subsequent assessments.
As a final assessment, students in the classes of 2019, 2020, and 2022 took a comprehensive top 200 examination (CT2E) at the end of P3 as part of a program-level assessment. Due to COVID-19, the class of 2021 did not complete the standard CT2E and instead took a remote assessment, which was not proctored and graded only for completion. The CT2E is a 100-question computerized examination that pulls items from banks of over 700 questions. Items in the question banks were developed locally by laboratory faculty. The computerized format and use of question banks on the CT2E is intended to simulate the North American Pharmacist Licensure Examination (NAPLEX). The CT2E is one of several assessments that make up a half-credit-hour program-level assessment course at the end of P3 at The Ohio State University College of Pharmacy. Final grades in this program-level assessment course are based on the number of assessments that each student passes. To pass the CT2E, students must score 90% or greater on at least one attempt. Following each attempt, students see their score but do not have access to examination questions or answers and are not permitted to review resources.
The CT2E format and procedures were identical for the classes of 2019 and 2020: each student was assigned a two-hour examination period to complete an unlimited number of 20-minute attempts using LockDown Browser (Respondus Inc) in a live-proctored classroom. For the class of 2022, one end-of-semester mastery examination in autumn 2020 (prior to the CT2E) was converted to an alternate assessment due to COVID-19, but the CT2E was administered remotely with only minor modifications: virtual proctoring software was used, and students were allowed a maximum of six 25-minute attempts within a three-hour testing period. The expanded testing window with a limited number of attempts was used in case of technical difficulties with remote proctoring. For all cohorts, allowing multiple attempts promoted fairness because each attempt was randomly generated from a large question bank; although individual versions may vary in perceived difficulty, multiple attempts increase the likelihood that each student encounters a representative range of question difficulty overall. Students in the classes of 2019, 2020, and 2022 were all provided a practice version of the comprehensive examination one week prior to the CT2E.
Study Design
The objective of this study was to determine the impact of the holistic redesign of top 200 activities by comparing student performance on the CT2E for the class of 2019 (before the redesign, designated cohort 1 in this study) and the classes of 2020 and 2022 (after the redesign, designated cohort 2 and cohort 3, respectively). All students from each cohort who took the CT2E on the originally scheduled assessment date were included. The class of 2021 was excluded because they did not take the traditional CT2E. This study was determined to be exempt by The Ohio State University Institutional Review Board.
The primary outcome was the difference in the percentage of students who performed satisfactorily on the CT2E (defined as a score of 85% or greater for this study). Although the original passing bar within the course was set at 90%, a threshold of 85% was chosen for study purposes to account for a bonus incentive offered to cohorts 1 and 2. Students who met a predetermined threshold on a separate cumulative program-level knowledge examination were given a 5% bonus on the CT2E, making 85% their effective satisfactory score. As a result, these students stopped making additional attempts after scoring at least 85%.
Secondary endpoints included differences in the mean highest score achieved and the mean number of attempts needed to score 85% or higher among students who achieved a satisfactory score. For these endpoints, additional attempts made after achieving a score of 85% were excluded. Additionally, some students chose to abort attempts early, possibly because they felt they were performing poorly; to account for this, attempts with fewer than 10 items answered were excluded from the secondary analyses. This adjustment affected only one student across the three study cohorts: one student from cohort 1 achieved a satisfactory score after answering fewer than 10 questions on 12 separate attempts, and these 12 partial attempts were excluded when calculating that student's total number of attempts. An additional secondary endpoint was students' perceived value of the redesigned activities as measured on course evaluations.
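As an illustration of these exclusion rules, a minimal R sketch is shown below; it assumes a hypothetical attempts data frame with columns student_id, attempt_no, items_answered, and score, none of which are taken from the study's actual data handling.

```r
# Hypothetical sketch of the secondary-endpoint exclusions (illustrative only).
library(dplyr)

analysis_attempts <- attempts %>%
  filter(items_answered >= 10) %>%      # drop aborted attempts (<10 items answered)
  arrange(student_id, attempt_no) %>%
  group_by(student_id) %>%
  # keep attempts up to and including the first satisfactory (>= 85%) score,
  # excluding any attempts made after a satisfactory score was achieved
  filter(cumsum(lag(score >= 85, default = FALSE)) == 0) %>%
  ungroup()

# Secondary endpoints among students who achieved a satisfactory score
secondary <- analysis_attempts %>%
  group_by(student_id) %>%
  summarise(best_score = max(score),
            attempts_to_satisfactory = n()) %>%
  filter(best_score >= 85)
```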
All data were first analyzed using descriptive statistics. Discrete data were presented as count (No.) and frequency (%), while continuous data were summarized as mean (standard deviation [SD]). Fisher exact tests or χ2 tests were used to analyze the discrete data where appropriate; one-way analysis of variance followed by Tukey honestly significant difference (HSD) multiple comparisons was used to analyze continuous data.
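A minimal sketch of these comparisons in R (the software named below) is given here; the students data frame and its columns cohort, passed, and best_score are illustrative assumptions, not objects from the study.

```r
# Illustrative cohort comparisons (hypothetical column names).
students$cohort <- factor(students$cohort)

pass_table <- table(students$cohort, students$passed)
chisq.test(pass_table)    # chi-square test of satisfactory-score rates across cohorts
fisher.test(pass_table)   # Fisher exact test, used where expected counts are small

best_fit <- aov(best_score ~ cohort, data = students)
summary(best_fit)         # one-way analysis of variance of best scores
TukeyHSD(best_fit)        # Tukey HSD multiple comparisons between cohorts
```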
Potential associations between achieving a satisfactory score on the examination and student-level factors, including cohort, sex, race, number of examination attempts, best score, and Pharmacy College Admission Test (PCAT) composite score, were evaluated using univariate and multivariate logistic regression analyses. All factors with p values <.15 in univariate analyses were included in the multivariate analyses. A backward-elimination (step-back) procedure was applied in the multivariate logistic analyses, and the final multivariate model contained only factors with p values <.05. The final multivariate model was evaluated using goodness-of-fit χ2 tests.
All statistical tests were unpaired, two sided, and the significance level was α=.05. R version 3.5 software (The R Foundation for Statistical Computing) was used in this study.
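The following R sketch illustrates this regression approach under stated assumptions: passed, pcat_composite, and the other variable names are hypothetical, and step() selects by AIC, whereas the study retained only factors with p values <.05 in the final model.

```r
# Illustrative univariate screening (factors with p < .15 were carried forward)
summary(glm(passed ~ pcat_composite, data = students, family = binomial))

# Multivariate model with backward ("step-back") elimination
full_model  <- glm(passed ~ cohort + sex + race + pcat_composite,
                   data = students, family = binomial)
final_model <- step(full_model, direction = "backward")

summary(final_model)
exp(coef(final_model))    # log-odds coefficients expressed as odds ratios
```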
RESULTS
This study included 134, 130, and 120 students in cohort 1, cohort 2, and cohort 3, respectively. Two students in cohort 1 were excluded because they did not take the assessment as scheduled with their cohort. All students in cohorts 2 and 3 were included in the study. There were no significant differences in demographics across cohorts (Table 2).
Table 2. Comparison of Three Cohorts of Students Enrolled in a Doctor of Pharmacy Program Before and After Redesign of the Learning Activities for Top 200 Medications
There were statistically significant differences in the percentage of students who earned a passing score across the three cohorts (overall p value <.001). The pass percentage of cohort 1 was statistically significantly lower than that of cohort 2 (65.7% vs 89.2%, p<.001) and cohort 3 (65.7% vs 89.2%, p<.001), whereas the pass percentages in cohorts 2 and 3 were almost identical (89.2% vs 89.2%, p=.99). Moreover, while cohorts 2 and 3 had comparable mean best scores (mean [SD] of 87.9 [4.0] vs 88.6 [7.1], p=.65), the mean best score of cohort 1 was statistically significantly lower than that of cohort 2 (84.7 [6.7] vs 87.9 [4.0], p<.001) and cohort 3 (84.7 [6.7] vs 88.6 [7.1], p<.001).
Among students who achieved a satisfactory score, a post hoc Tukey HSD analysis showed that cohorts 2 and 3 had a comparable mean number of attempts needed to achieve a satisfactory score (1.6 [1.2] vs 1.3 [0.7], p=.14); however, the mean number of attempts needed in cohort 1 was statistically significantly higher than in cohort 3 (2.0 [1.9] vs 1.3 [0.7], p<.001). Additionally, cohort 1 tended to require a higher mean number of attempts in comparison to cohort 2, although this was not statistically significant (2.0 [1.9] vs 1.6 [1.2], p=.057).
Multivariate logistic regression analyses showed that both cohort and PCAT composite score were associated with the odds of achieving a satisfactory score on the CT2E (Table 3). Each one-point increase in PCAT composite score was associated with a 3% increase in the odds of achieving a satisfactory score (p=.0038); after adjustment for cohort, this increase remained 3% (p=.0023). The odds of achieving a satisfactory score for students in cohort 2 or cohort 3 were about fourfold higher than for students in cohort 1 (odds ratios of 4.33 and 4.30, respectively, both p values <.001); after adjustment for PCAT composite score, students in cohort 2 had more than fivefold higher odds of achieving a satisfactory score than cohort 1 (p<.001), while students in cohort 3 had about 4.5-fold higher odds (p<.001). Although there were more female students in cohorts 2 and 3 than in cohort 1, sex was not associated with the odds of achieving a satisfactory score in either univariate (p=.12) or multivariate (p=.19) analysis. Similarly, race was not statistically associated with the odds of achieving a satisfactory score (all p values >.05).
Table 3. Univariate and Multivariate Logistic Analyses of Pharmacy Students’ Success on the Comprehensive Top 200 Examination
Students also perceived benefit in the redesigned activities. Following the redesign, most respondents from cohort 2 (71/74, 95.9%) either strongly or somewhat agreed that they felt more confident in their top 200 knowledge at the end of the laboratory sequence (58.1% responded strongly agree; 37.8%, somewhat agree). Most students in cohort 3 (76/79, 96.2%) answered the same question similarly (48.1% responded strongly agree; 48.1%, somewhat agree).
DISCUSSION
Previous studies demonstrated that students perceive value in top 200 activities that involve repeated retrieval, alignment with other courses, and autonomous learning; however, comparative data demonstrating a significant increase in performance postintervention are lacking in many studies.6-9 Our results reveal a measurable increase in student knowledge that accompanies positive student perception, further supporting the use of these principles in curricular design.
A strength of our study is the exploration of additional factors that could have contributed to differences in performance between classes. Our analysis demonstrates that differences in sex or race had no measurable impact on CT2E performance, although data on race were unavailable for 28.4% of students in cohort 1. We also analyzed the impact of PCAT composite scores because the PCAT is similar in format to the CT2E (a comprehensive, electronic, multiple-choice examination). Higher performance on the PCAT correlated with a higher likelihood of achieving a satisfactory score on the CT2E; however, there was no meaningful difference in mean PCAT composite score between classes, and the increase in CT2E satisfactory rates between classes remained evident after adjusting for PCAT composite scores.
Studies have suggested that work or experiential education may impact success on a top 200 assessment. Greene and colleagues found that having little or no community pharmacy work experience was correlated with poorer performance on a top 200 medications assessment.13 Mospan and colleagues found that the timing and setting of a community introductory pharmacy practice experience rotation may impact top 200 medications examination performance.10 A possible limitation of our study is that we did not explore differences in pharmacy experience across cohorts. However, unlike previous studies in which performance was assessed during the first year, the CT2E occurred during the third year, at which point all students had completed multiple rotations in a community pharmacy setting, minimizing the impact of differences in practice experience.
This study has other limitations. Although the primary outcome measured the impact of three combined principles used in the redesign, our results do not reveal which principle contributed most or how effectively each principle was implemented. For example, although our exercises were intended to develop autonomous learning through multiple resources, we did not capture data on how students ultimately chose to approach review of each medication or whether they tended to rely on one specific reference rather than consulting multiple sources.

There are also potential confounders. For example, changes in other courses as part of the curricular revision may have contributed to the increase in student knowledge. Outside of the expanded Integrated Patient Care Laboratory sequence, the two most relevant changes in the revised curriculum were a change in the sequence of pharmacotherapy modules and the additional integration of medicinal chemistry and pharmaceutics along with pharmacology and therapeutics during the second and third years (the previous curriculum primarily integrated pharmacology and therapeutics but no other basic sciences). Because the CT2E occurred after completion of all pharmacotherapy modules in both the previous and revised curricula and did not assess content related to medicinal chemistry or pharmaceutics, the changes in sequence and integration are not expected to have a major confounding effect. Additionally, the Integrated Patient Care Laboratory is the primary series in which top 200 content is practiced and assessed, so changes within other courses were unlikely to have a large impact. Another potential confounder is variation among the individuals in each cohort, since each student is unique in their natural aptitude for this subject. We partially accounted for this by adjusting for PCAT score, which is a standardized metric and similar in format to the CT2E. Although we did not adjust for prepharmacy grade point average, doing so may be misleading given the nonstandardized nature of grade point averages; students matriculate into the program from many institutions with variable undergraduate degrees, each with its own unique course requirements.

Finally, the nature of the examination (questions randomly generated from a bank) results in unique versions of the examination for each student, making a cohort comparison imperfect. Nonetheless, this approach has advantages: it mimics the format of the NAPLEX and increases examination integrity, since students cannot easily discuss examination content with future classes. Further, the allowance of multiple attempts minimizes this concern; a student who encounters a version they perceive as challenging has the opportunity to make additional attempts and would likely encounter a version perceived as less challenging.
CONCLUSION
The combination of repeated retrieval and mastery, alignment with therapeutic coursework, and development of autonomous learning successfully increased student performance on a comprehensive examination. The benefit of this redesign remained evident after analysis of other factors between classes, including demographics and PCAT composite scores. Instructors who aim to increase student knowledge of top 200 medications should consider implementing these learning strategies into their courses. Future studies should evaluate which principle used in the redesign has the largest impact on student learning and whether a similar multifaceted approach is useful in teaching other topics that target the lower cognitive domains.
ACKNOWLEDGEMENTS
The authors acknowledge the Integrated Patient Care Laboratory teaching team (JD Bickel, Colleen Dula, Anna Gehres, Kristy Jackson, Stacy King, Cindy Mann, and Sean Nebergall) for their assistance in implementing the learning activities described in this study.
- Received February 25, 2022.
- Accepted May 20, 2022.
- © 2023 American Association of Colleges of Pharmacy