Abstract
Objective. To compare student performance measures and perceptions of learning in 2 content areas between conventional and integrated pharmacy curricula at a single institution.
Methods. Prospective cohort study of pharmacy students enrolled in either conventional (cohort C) or integrated (cohort I) curricula. Summative examination performance in the neuropsychiatric and infectious diseases courses, student self-rating of confidence and comfort in integrating and applying knowledge, and performance on a delayed knowledge assessment were compared between cohorts.
Results. Cohort I students scored significantly lower on summative assessments than cohort C students (mean 78.4±9.1 vs 84.5±8.3). Prior to the integrated course, cohort I students rated themselves as significantly less confident and comfortable in knowledge integration, application, and communication than cohort C students; these differences were attenuated in a follow-up survey, although some remained significant. There was no difference between cohorts in performance on objective structured clinical examinations (OSCEs) or on a delayed knowledge assessment of neuropsychiatric and infectious diseases content.
Conclusion. Pharmacy students in an integrated curriculum initially performed modestly worse on summative assessments and self-assessed their baseline knowledge as lower than did students in a conventional curriculum. However, differences in self-rated knowledge decreased at follow-up, and the performance of the two cohorts on OSCEs and a delayed examination was similar. As pharmacy curricula shift toward integrated models, institutions should also consider evaluating experiential performance outcomes and student motivation to fully assess the impact of these transitions.
INTRODUCTION
To become a competent pharmacist, students need to apply basic science concepts to clinical practice.1,2 Several learning theories support the use of integrated curricula as effective structures for health professions education.2 Adult learning theory posits that learners are more willing to invest time in learning topics relevant to their future work.3 Theories from cognitive psychology suggest that learners are better able to organize and transfer knowledge if clinical context is provided.2 Therefore, an integrated curriculum, in which basic science concepts are taught within the context of clinical practice, may improve learners’ motivation, retention, and application of knowledge.2
The Accreditation Council for Pharmacy Education (ACPE) recommends promoting integration of content through curricular sequencing.4 A 2014 survey of Doctor of Pharmacy (PharmD) programs in the United States indicated that 70% of respondents integrate the basic and clinical sciences to varying degrees.2 However, few studies have evaluated effects of integrated curricula on student performance.5,6 A few published studies in medical education suggest that integration is noninferior or may offer advantages to conventional curricula for knowledge acquisition and learning patterns.2,7,8 To our knowledge, effects of transitioning from a conventional to an integrated curriculum on pharmacy student performance and perceptions have not been reported in the literature.
In 2018, the University of California San Francisco (UCSF) School of Pharmacy underwent a major transformation in its PharmD curriculum. Compared to the prior curriculum, students in the new curriculum graduate in fewer calendar years, experience concurrent integration of basic and clinical sciences through instruction via integrated blocks, and undergo competency-based assessment. During this curricular transition, 2 cohorts of students graduated in 2021: those admitted to the new curriculum in 2018 and those admitted to the prior curriculum in 2017.
To investigate the impact of an integrated curriculum on learning outcomes, we designed a study comparing the performance of 2 student cohorts, taught under different curricular structures, on identical knowledge and skills assessment items across 2 major content domains: neuropsychiatry and infectious diseases. A secondary objective of this study was to characterize student self-perceptions of learning.
METHODS
We conducted a prospective cohort study among students enrolled in the UCSF PharmD program that compared students’ learning outcomes in an integrated versus conventional curriculum. The UCSF Institutional Review Board certified this study with an exemption.
The conventional curriculum prior to 2018 consisted of a letter-graded 4-year curriculum, with 3 years of didactic instruction and 1 year of advanced pharmacy practice experiences (APPEs). The first year consisted of basic-science courses, such as biochemistry, pharmaceutical chemistry, and anatomy. The second and third years consisted of 10-week courses in foundational sciences, pharmacology, and therapeutic sciences. The curriculum presented course material in a stepwise fashion; training in patient care skills, such as medication counseling, was integrated into the therapeutics courses, as there was no standalone patient care skills course.
In 2018, UCSF launched an integrated, pass-fail, 3-year PharmD curriculum, with 2 years of didactic instruction and 1 year of APPEs. Instead of a total of 8 quarters spread across 3 didactic years, students in the integrated curriculum complete 8 quarters (including summer quarters) across 2 years. The number of credit hours was similar between the curricula (184 units in the new compared to 190 in the prior curriculum), with the difference primarily related to a decreased elective requirement. Instructional blocks in the new curriculum range from 4 to 10 weeks and consist of an Integrated Sciences (IS) course, an Applied Patient Care Skills (APCS) course, and an introductory pharmacy practice experiences (IPPE) course. The IS course includes physiology, pharmacology, pharmaceutical chemistry, scientific inquiry, and therapeutics. The curriculum was designed to be integrated both concurrently, with integration and sequential scheduling of the basic and clinical sciences for each disease state, and longitudinally, with intentional spiraling and longitudinal threads throughout the didactic curriculum. For example, for the topic of depression, students are first introduced to an anchor case of a patient with depression. They then learn the pathophysiology of depression and the pharmacology of antidepressants, followed by the therapeutics of depression. Their APCS session for that week focuses on interviewing a patient with depression. Assessments were also integrated: basic science instructors and clinician instructors worked together to write integrated assessment questions aimed at applying basic science principles to clinical care.
We evaluated student performance and perceptions in the neuropsychiatric (NP) and infectious diseases (ID) courses. For the conventional cohort admitted in 2017 (designated cohort “C”), students took siloed courses in a stepwise manner, culminating in the therapeutics courses in fall 2019 for neuropsychiatric and winter 2020 for infectious diseases (Figure 1). For the integrated cohort admitted in 2018 (cohort “I”), students took the integrated neuropsychiatric block in fall 2019 and the integrated infectious diseases block in winter 2020. We selected these courses for comparison because of similarities in course administration personnel and in the scope and depth of the material between cohorts. In addition, because this coursework was offered near the end of both curricula, the overall background knowledge of students entering the courses was more comparable than if different courses had been selected for comparison.
Figure 1. Timeline of Courses and Questionnaires. aPharm/Pharm Chem = Pharmacology/Pharmaceutical Chemistry.
Evaluation of Student Performance
All students enrolled in both cohorts were invited to participate in the study by completing 4 anonymous, online questionnaires between fall 2019 and fall 2020 (Figure 1). To evaluate student performance, we compared scores on written summative assessments, an objective structured clinical examination (OSCE), and a delayed knowledge assessment between cohorts. The summative examinations contained identical items across both cohorts, written by study investigators who were also the course directors. The assessment items were multiple-choice, fill-in-the-blank, or short-answer questions that emphasized application of basic science concepts to patient cases, a recommended method for assessing curricular integration.2 Examples of item content included explaining relevant physiology and drug mechanisms of action and making a therapeutic recommendation based on a patient case. Most assessments were closed-book; however, due to differences in course administration, the neuropsychiatric assessments for cohort C were open-book. The assessments were administered and graded using ExamSoft (Dallas, TX). All questions were graded using standardized rubrics. Point values for the questions were summed to obtain each student’s score in the individual and combined domains.
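As a rough illustration of this aggregation step, a minimal pandas sketch follows; the file name and column names are hypothetical assumptions, not drawn from the study.

```python
# Minimal sketch of the score aggregation described above. The file and the
# columns (student_id, domain, points) are hypothetical assumptions.
import pandas as pd

items = pd.read_csv("graded_items.csv")  # one row per student per rubric-graded item

# Sum rubric point values to each student's score per domain, then combine domains.
domain_scores = items.groupby(["student_id", "domain"])["points"].sum().unstack()
domain_scores["combined"] = domain_scores.sum(axis=1)
```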
All students in both cohorts participated in an OSCE consisting of a 12-minute encounter with a standardized patient at the end of the neuropsychiatric course. Students were tasked with interviewing, making a therapeutic assessment of, and counseling a patient with depression. Trained faculty and resident assessors evaluated students using a standardized rubric, which included content items (eg, counseling points) and a communication rubric. The UCSF OSCE Subcommittee set passing standards using the Angoff and Ebel methods.9 We conducted independent Mann-Whitney U tests to compare overall, content, and communication scores between cohorts.
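A sketch of how such between-cohort Mann-Whitney U comparisons might be run with SciPy appears below; the data layout and column names are assumptions for illustration, not the study's actual analysis code.

```python
# Illustrative between-cohort Mann-Whitney U tests on OSCE scores. The file
# and columns (cohort, overall, content, communication) are assumed.
import pandas as pd
from scipy.stats import mannwhitneyu

osce = pd.read_csv("osce_scores.csv")

for measure in ["overall", "content", "communication"]:
    scores_i = osce.loc[osce["cohort"] == "I", measure]
    scores_c = osce.loc[osce["cohort"] == "C", measure]
    u_stat, p_value = mannwhitneyu(scores_i, scores_c, alternative="two-sided")
    print(f"{measure}: U = {u_stat:.0f}, p = {p_value:.3f}")
```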
A delayed knowledge assessment was administered approximately 6 months after completion of the infectious diseases course (Figure 1). The course directors who wrote the summative assessments also wrote the delayed knowledge assessment. The assessment included 11 case-based questions, similar in content and format to the summative assessments, and was piloted with pharmacy residents. Those who completed the delayed assessment received a $10 gift card. Course directors, who were blinded to cohort, scored the questions in the same manner as the summative assessments.
An independent samples t test was used to evaluate between-cohort differences in aggregate summative and delayed assessment scores in each domain. To adjust for prior academic performance and to compare survey responders with nonresponders, we obtained student grade point averages (GPA on a 4.0 scale, for cohort C) or total summative examination scores (total summed points received on assessments out of a maximum of 2129 points, for cohort I) across the entire curriculum and converted them to a normalized percentile cohort ranking. Linear regression models were constructed to evaluate the individual and combined effects of curricular cohort, percentile ranking in cohort, and final student self-assessment on each student’s delayed and summative assessment scores in that domain.
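The comparison, normalization, and regression steps described here could be implemented along the following lines; the file name, data frame layout, and variable names are assumptions for the sketch.

```python
# Illustrative sketch of the t test, percentile normalization, and linear
# regression described above. Columns (cohort, summative, prior_perf,
# self_assess) are hypothetical.
import pandas as pd
from scipy.stats import ttest_ind
import statsmodels.formula.api as smf

df = pd.read_csv("scores.csv")

# Independent samples t test on aggregate summative scores between cohorts.
t_stat, p_value = ttest_ind(df.loc[df["cohort"] == "C", "summative"],
                            df.loc[df["cohort"] == "I", "summative"])

# Convert GPA (cohort C) or total summative points (cohort I) to a
# within-cohort percentile ranking so the two metrics are comparable.
df["percentile"] = df.groupby("cohort")["prior_perf"].rank(pct=True) * 100

# Linear regression of summative score on cohort, percentile ranking, and
# final self-assessment rating.
model = smf.ols("summative ~ C(cohort) + percentile + self_assess", data=df).fit()
print(model.summary())
```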
Evaluation of Student Perceptions
To compare student perceptions between cohorts, we administered questionnaires at the beginning and end of each course (Figure 1). Although direct measurement of academic performance is a more valid estimate of student learning than student self-report,10 we sought secondary outcomes to evaluate student learning because of the differences in how the neuropsychiatric examination was administered. The questionnaires were based on items from previous educational studies and were designed to assess proposed benefits of integrated curricula: integration and application of knowledge to patient care and knowledge retention.2 The questionnaires asked students to rate their confidence or comfort on a 5-point, Likert-type scale (1=not confident/comfortable at all, 5=very confident/comfortable) across 4 components: confidence in integrating knowledge, communicating effectively, and making therapeutic recommendations, and comfort in interacting with patients or healthcare providers.
Student responses prior to and after the course were matched using anonymous identifiers. The Wilcoxon signed-rank test for matched pairs was conducted to compare each student’s pre- and post-course self-assessment scores. The Wilcoxon rank-sum test was conducted to evaluate differences between cohorts in median self-assessment scores at baseline and follow-up. To evaluate interactions between cohort and time point in the self-assessments, an ordinal logistic regression model was fit.
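These paired and between-cohort tests, and the interaction model, might look like the following in Python; the data layout is an assumption, and the ordinal model requires statsmodels 0.12 or later.

```python
# Illustrative self-assessment analyses: paired Wilcoxon signed-rank test,
# between-cohort Wilcoxon rank-sum test, and an ordinal logistic regression
# with a cohort-by-time interaction. The file and all names are hypothetical.
import pandas as pd
from scipy.stats import wilcoxon, ranksums
from statsmodels.miscmodels.ordinal_model import OrderedModel

sa = pd.read_csv("self_assessments.csv")  # columns: student_id, cohort, time, rating

# Match each student's pre- and post-course ratings via the anonymous identifier.
wide = (sa.pivot_table(index=["student_id", "cohort"],
                       columns="time", values="rating")
          .reset_index()
          .dropna())  # keep only matched pre/post pairs

# Within-student pre/post comparison (Wilcoxon signed-rank).
stat, p_paired = wilcoxon(wide["pre"], wide["post"])

# Between-cohort comparison of baseline ratings (Wilcoxon rank-sum).
stat, p_baseline = ranksums(wide.loc[wide["cohort"] == "C", "pre"],
                            wide.loc[wide["cohort"] == "I", "pre"])

# Ordinal logistic regression with a cohort-by-time interaction term.
sa["cohort_i"] = (sa["cohort"] == "I").astype(int)
sa["post"] = (sa["time"] == "post").astype(int)
sa["interaction"] = sa["cohort_i"] * sa["post"]
model = OrderedModel(sa["rating"], sa[["cohort_i", "post", "interaction"]],
                     distr="logit").fit(method="bfgs")
print(model.summary())
```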
RESULTS
The demographic characteristics of the 2 cohorts were similar (Table 1). Within cohort I, the mean total number of points on all summative examination items was higher among those who consented (1676 vs 1425, P < .001; maximum possible points 2129); in cohort C, mean GPA was similar between those who consented and those who did not (3.65 vs 3.63, P = .79). When converted to percentile rankings and pooled between cohorts, the mean percentile ranking was higher for those who consented (54th percentile) than for those who did not (37th percentile, P = .004).
Comparison of Two Cohorts of Students Completing Two Content Areas in the Conventional Versus Integrated Doctor of Pharmacy Curricula
For students who consented and completed both examinations, the mean total points on identical summative examination items was significantly higher for cohort C than for cohort I in both domains, separately and combined (cohort C: 84.5±8.3 vs cohort I: 78.4±9.1, P < .001, Table 1). After adjusting for class rank percentile and student self-assessment, cohort remained a significant predictor: enrollment in cohort I was associated with lower summative assessment scores across both domains (coefficient = -6.66, P < .001, Table 2). Student class rank percentile had minimal effect on the relationship between cohort and summative examination score but was itself a significant predictor of summative examination performance. A summed measure of self-assessment ratings from the questionnaire was weakly associated with summative examination scores on unadjusted analysis but did not contribute significantly to a combined model (Table 2).
Linear Regression Models for Summative Assessment Performance and Delayed Assessment Performance by Two Cohorts of Pharmacy Students
All students in both cohorts participated in the neuropsychiatric OSCE. The overall percentage score did not differ significantly between cohort I and cohort C [median (IQR) I: 79% (73%-85%), C: 78% (70%-84%); P = .32]. There was no significant difference in communication [median (IQR) I: 83% (73%-90%), C: 83% (68%-88%); P = .24] or content scores between cohorts [median (IQR) I: 83% (73%-90%), C: 75% (69%-81%); P = .80].
A total of 109 (48%) students who consented to the study completed the delayed assessment: 55 (58%) from cohort I and 54 (42%) from cohort C (P = .02). Mean class percentile did not significantly differ between students who completed the delayed assessment and those who did not (53.9% vs 49.8%; P = .30). Delayed assessment scores did not significantly differ between cohorts for any domains (Table 1). There was no significant correlation between scores in the infectious diseases and neuropsychiatric domains (r=.025; P = .79). Cohort did not have a significant effect on delayed assessment scores as a single predictor or after adjustment for class percentile rank, self-assessment score, or score on summative assessments (Table 2). Percentile rank showed a significant association with delayed assessment score overall in the infectious diseases domain, but not in the neuropsychiatric domain. Student self-assessment scores and summative assessment performance were not associated with scores on delayed assessment.
More than half the students in both cohorts completed baseline and follow-up self-assessments (I: 88/95 [93%], C: 84/130 [65%]; P < .001). Across both domains, students in cohort I rated their confidence and comfort lower than students in cohort C at baseline (Table 3). At the follow-up survey, the gap between cohorts decreased. In the infectious diseases domain, only 1 self-assessment question, confidence in communicating with patients, was rated significantly higher by cohort C at follow-up (Table 3); for the other self-assessments in this domain, there was no significant difference between cohorts at follow-up. Statistical tests of interaction showed proportionally greater gains by cohort I relative to cohort C in the infectious diseases domain. In the neuropsychiatric domain, cohort I also demonstrated gains from baseline but still self-assessed at lower levels than cohort C in 3 of the 4 question areas at follow-up, and tests of interaction did not suggest greater relative gains.
Paired Student Self-Assessment Ratings Before and After Coursework by Cohort
DISCUSSION
We identified differences in performance and perceptions in 2 major content areas between students enrolled in integrated and conventionally organized curricula. Though there was a difference between the cohorts in summative assessment performance, this did not translate to performance on OSCEs or on delayed assessment. This indicates that the type of deep learning necessary for delayed recall and practical applications may be similar between curricula regardless of short-term testing results. These findings align with previous studies of integrated curricula in medical education, which demonstrated similar or improved board examination scores and residency match rates.2,8,11,12
Due to their exposure to the basic sciences (eg, pharmacology, prior to taking therapeutics courses on that content), students in the conventional curriculum may have rated their confidence in integrating knowledge, communicating with patients, and making therapeutic recommendations higher at the beginning of a course. Conversely, students in the integrated curriculum received little instruction in the basic sciences of a content domain prior to entering their integrated block. However, by the end of the second block, the gap between curricula closed, as students in the integrated curriculum reported a greater increase in their self-assessment of confidence and comfort over time, compared to those in the conventional curriculum. These findings are consistent with other studies that report a steeper learning curve in integrated curricula.8
The literature suggests that student self-perceptions of learning are not always accurate representations of actual learning.10 Similarly, our study demonstrated little-to-no association between students’ perceptions of learning and their performance on written summative or delayed assessments. However, we did observe some alignment between students’ perceptions and their performance in patient communication as assessed by the OSCE: self-reported confidence in communicating effectively with patients was similar across both groups by the end of the neuropsychiatric course, and neuropsychiatric OSCE scores did not differ significantly between groups.
Several caveats should be considered when interpreting our study findings. While the response rates for the surveys were adequate, they were significantly lower among cohort C students than among cohort I students. Given that the mean percentile ranking for students who consented was higher than for those who did not, the sample from cohort C may have been less representative of the entire cohort. The response rates for the delayed assessment were lower overall (approximately 50%) and lower still among cohort C students. Cohort C was allowed to use self-prepared resources for the neuropsychiatric assessments, which is a significant confounder and may partially explain why that cohort performed significantly better on the neuropsychiatric assessment. Another meaningful difference between the cohorts was the grading structure: the conventional curriculum was letter-graded, whereas the integrated curriculum was pass-fail. This may explain why cohort C scored consistently higher than cohort I, as those students had more incentive to achieve the highest possible score for a higher GPA. This finding differs from studies of pass-fail versus letter-graded curricula in medicine, which showed no difference in average course score.13 Our study utilized identical assessment questions, whereas previous studies examined total course grades, which may explain the discrepancy. Though this study focused on 2 blocks in the curriculum, the significant relationship between percentile rank and summative assessment score indicates that the assessments utilized in this study were consistent with and indicative of a student’s overall performance in the curriculum. Other limitations of this study include its single-center design and lack of experiential performance outcomes.
This study was not an isolated controlled experiment on the impact of changing the structure of didactic content delivery alone. Rather, it occurred in the context of a comprehensive curricular revision, one that included shortening the duration of training from 4 to 3 calendar years, incorporating a competency-based pass-fail assessment system, and placing greater emphasis on early experiential learning and skills training. These factors may have interacted with the change in content delivery structure, such that the results may be less generalizable to institutions not undertaking these curricular interventions. In this context, it is also notable that the finding of similar assessment-related outcomes between curricula should not necessarily be interpreted in a negative light; rather, being able to provide students with a more time-efficient curriculum with competency-based assessment and a greater experiential focus while achieving similar academic performance is a net positive, in the authors’ view. Furthermore, this study did not evaluate student performance in the real-world clinical setting, which may be more indicative of students’ true abilities to integrate and apply knowledge. Our study also did not evaluate the student experience of the integrated curriculum, which may be another area in which the 2 curricula differ. Our findings suggest that comparing student performance on didactic assessments alone may not be sufficient for evaluating the transition from a conventional to an integrated curriculum. Given that the theoretical advantages of an integrated curriculum are improved learner motivation and transfer of knowledge, future studies should also include comparisons of student APPE performance and motivation to learn.
CONCLUSION
Few studies report student-centered outcomes of the transition from conventional, sequentially organized curricula to integrated block curricula in pharmacy education. Overall, our study demonstrated that pharmacy students in a letter-graded conventional curriculum may initially rate their own knowledge higher and perform better on summative assessments than students in a pass-fail integrated curriculum. However, these differences largely disappear on later follow-up. As pharmacy education moves toward the medical education model of integrated, competency-based curricula, it is imperative to characterize the impact of these changes properly to help guide institutions in deciding whether to commit the resources needed to transition to an integrated curriculum. Future studies should incorporate additional outcomes, such as APPE performance and student motivation and satisfaction, to provide the most complete view of the impact of curricular changes.
ACKNOWLEDGMENTS
The authors would like to acknowledge the UCSF School of Pharmacy Dean’s Office, which provided a Curricular Transformation grant to fund this study.