Abstract
Objective. To model the relationships between common pharmacy education assessment data, including student demographics, pre-pharmacy performance, core didactic performance, and external testing measures, to identify predictors of student readiness for advanced pharmacy practice experiences (APPEs).
Methods. The associations between 23 predictive covariates from 226 graduating students from 2015-2018 (5786 observations) and APPE readiness as measured by midpoint core APPE scores were modeled. Multiple linear and Poisson regression models with backward selection were used. A selection criterion of p >.10 was used for covariate elimination from the model. Three models were evaluated: average of all midpoint core APPE rotation scores; average of midpoint acute care pharmacy practice and ambulatory care APPE rotation scores; and number of midpoint core clerkship failing scores.
Results. The average age of the population at admission was 25.4±4.5 years, 47% were female, and 75.2% had prior degrees. Across the three prediction models, knowledge-retention covariates were the strongest predictors. Total score on the Pharmacy Curriculum Outcomes Assessment was a modest yet consistent predictor across the models. All other significant predictors were unique to the various models.
Conclusion. This four-year, population-based modeling study of the relationship of common pharmacy education assessment data to APPE midpoint scores shows a modest correlation with knowledge-based measures. There is a need for greater innovation in this area of research.
INTRODUCTION
Many of the skills, attitudes, values, and behaviors that pharmacists need to successfully practice in the current health care system are acquired outside of the didactic classroom and within practice settings. This process is referred to as experiential education. The definition of experiential education from the Association for Experiential Education1 is “a philosophy that informs many methodologies, in which educators purposefully engage with learners in direct experience and focused reflection in order to increase knowledge, develop skills, clarify values, and develop people's capacity to contribute to their communities.” The underlying tenets of this definition are that the individual student pharmacist has acquired a set of knowledge, skills, values, and capacity during their didactic pharmacy education, prior to beginning advanced pharmacy practice experiences (APPEs), to prepare them to fully participate in experiential learning.2 To this end, the Accreditation Council for Pharmacy Education requires pharmacy education to assess the readiness of individual students to enter APPEs.2
Within the curriculum of pharmacy education programs, APPEs logically follow successful completion of all required didactic curricular content and introductory pharmacy practice experiences (IPPEs). Based on successful completion of these curricular components and their associated set of competencies, the individual student pharmacist is considered ready for progression to APPEs. However, a functional definition of a student being “APPE ready” depends on having valid and reliable assessment mechanisms that provide pharmacy education programs with insights into which students are ready for APPE learning and which are not.
It is unclear whether students’ performance on assessments contained within the didactic curriculum predicts their APPE performance. A single study assessing the correlation between APPE outcomes and data commonly available to pharmacy education programs identified only weak correlations with admission data and with objective structured clinical examinations (OSCEs) administered prior to students starting APPEs.3 Other colleges of pharmacy have developed capstone experiences,4 simulations,5-6 focus groups using pharmacy preceptors and student pharmacist perceptions of student competence in skill performance, course mapping, course redesign,7 pharmaceutical care plan and course evaluations,8 and a triple jump examination (students access evidence, appraise the information at hand, and apply it to formulate a treatment plan during an OSCE)9 to evaluate the readiness of Doctor of Pharmacy (PharmD) students for APPEs. However, a valid and reliable assessment mechanism with generalizability to other pharmacy education programs remains elusive.
Our hypothesis was that components of our didactic curriculum at the University of Utah College of Pharmacy do assess individual student readiness to begin APPEs. Acquisition of knowledge that would be routinely used on APPEs may be evident in course grades for pathophysiology, therapeutics, pharmacology, and drug literature evaluation. Likewise, long-term retention of these concepts may be reflected by the score on the Pharmacy Curriculum Outcomes Assessment (PCOA) that our students take just prior to entering APPEs. Grades on cases that students complete in teams at the conclusion of their therapeutics course in a module, Problems in Pharmacotherapy, demonstrate the students’ ability to apply information gained over the course of their didactic experience. Performance on OSCEs provides evidence of the more intangible, difficult-to-assess qualities of students that may predict APPE performance, such as professionalism and communication. To determine whether our didactic curriculum could predict APPE performance, we undertook a comprehensive evaluation of four years of student demographics, pre-pharmacy performance, core didactic performance, and external testing measures with the goal of creating a statistical model that might predict student readiness for APPEs.
METHODS
A population-based longitudinal study of all students enrolled in the University of Utah College of Pharmacy from 2015-2018 was performed to statistically model their APPE readiness. The study was conducted in accordance with the 1964 Declaration of Helsinki and its later amendments and received approval from the University of Utah Institutional Review Board.
The primary outcome variable was the APPE midpoint score. The midpoint was chosen because we believed this score more closely reflected the student’s readiness for the APPE at the time they started the experience than did the final APPE score, which is more likely to be impacted by learning, teaching, and progress that occurs throughout the entire APPE. The midpoint score was also selected because we believed it to be less subject to grade inflation than the final score. To this point, across the four years, 19.3% of students received failing grades at the midpoint of their APPE compared to only 3% when the final APPE grade was issued. The core APPEs included in the analysis were ambulatory care pharmacy practice, hospital/health system practice, community, acute care, and clinical information systems (ie, drug information, poison control, pregnancy risk line, and Pharmacy Outcomes Research Center [Utah only]). Every student was required to complete one of each type of APPE. Two community introductory pharmacy practice experiences and one institutional introductory pharmacy practice experience were incorporated into the P1 and P3 curriculum, respectively, with a pass/fail grading schema. They were not included in the modeling exercise because every student passed their respective IPPE.
Three separate models were evaluated: the All Core APPE Rotation Model; the Acute and Ambulatory Care APPE Model; and the Number of Midpoint Core Clerkship Failing Scores Model. The acute care and ambulatory care APPE scores were combined in the second model because they share the same student assessment rubric, which is different from those used for the other core APPEs.
Student demographics and preadmission performance measures included were age at pharmacy program admission, gender, earned degree prior to admission, graduated from University of Utah PharmD program (yes or no), graduation year, pre-pharmacy GPA, pre-pharmacy prerequisite GPA, and Pharmacy College Admission Test (PCAT) total percentile. The individual academic performance covariates included were final grade point average in Pathophysiology (1-5 scale); Pharmacology I and II final grade point averages and the final grade point average of the Pharmacology sequence (one year/two courses) (1-5 scale); Drug Literature Evaluation I and II final grade point averages and the final grade point average of the Drug Literature Evaluation sequence (one year) (1-5 scale); Disease and Drug Therapy I, II, and III final grade point averages (ie, pharmacotherapeutics) and the final grade point average for the therapeutics sequence (1.5 years/three courses) (1-5 scale); and the Problems in Pharmacotherapy Module final grade (ie, capstone, team-based pharmacotherapeutic assessment) (0-100% scale). Students’ final scores on a comprehensive OSCE were included for graduation years 2017 and 2018 only. The OSCE included conducting a medication reconciliation, patient interviewing, patient counseling, constructing a subjective, objective, assessment, and plan (SOAP) note, and communicating a patient case to a health care provider using a situation, background, assessment, recommendation, and question (S-BARQ) format (0-100% scale). The Pharmacy Curriculum Outcomes Assessment (PCOA) total score and the biomedical, pharmaceutical, social and behavioral, and clinical domain percentiles (scaled scores) were also included. All course education was completed prior to any student taking the PCOA examination. Student history of repeating a course within the pharmacy education program was also used as a predictive variable.
The APPE midpoint and final scores were programmatically calculated on a 1-5 scale based on input from preceptors in the areas of general knowledge base, problem-solving skills, professional communication skills, and professional attitudes and behavior, as well as experience projects (if applicable to the site). The system converted the programmed scores to a letter grade in line with the scoring rubric.
Categorical and continuous covariates were summarized with descriptive statistics and compared across graduation years with chi-square tests and Kruskal-Wallis tests, respectively. To model the associations between model outcome variables and predictive variables, multiple linear and Poisson regression models with backward selection were used. A selection criterion of p>.10 was used for variable elimination from the model. Multiple models were developed. The three outcome variable definitions were used to determine the model definition robustness with two sets of predictors: one with all variables including OSCE scores and one with all variables except OSCE scores. Two separate sets were used as OSCE scores were only available for the last two years of data collection and thus resulted in a reduced sample size. To meet the underlying assumptions of linear regression, an extensive data exploration of the dependent and independent variables was undertaken. Data transformations of independent variables were performed to overcome violations of normality. For the count modeling, overdispersion was assessed to determine use of Poisson vs negative binomial regression. The coefficient of determination was calculated for the final linear models. Analyses were performed using SAS, v9.4 (SAS Institute Inc., Cary, NC).
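To make the elimination procedure concrete, the backward-selection loop described above can be sketched in pure Python. This is an illustrative sketch only, not the study's SAS code: it fits ordinary least squares by the normal equations, approximates coefficient p-values with the normal distribution rather than the exact t distribution (reasonable at this sample size), and runs on synthetic data with hypothetical predictor names (`pcoa_total`, `prepharm_gpa`, `noise_var`).

```python
import math
import random

def invert(m):
    """Gauss-Jordan inversion of a small square matrix."""
    n = len(m)
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))  # partial pivoting
        a[col], a[piv] = a[piv], a[col]
        d = a[col][col]
        a[col] = [v / d for v in a[col]]
        for r in range(n):
            if r != col and a[r][col] != 0.0:
                f = a[r][col]
                a[r] = [v - f * w for v, w in zip(a[r], a[col])]
    return [row[n:] for row in a]

def ols_pvalues(X, y):
    """Fit OLS with an intercept; return {predictor index: two-sided p-value}.
    Uses a normal approximation to the t distribution (fine for large n)."""
    n, k = len(X), len(X[0])
    Z = [[1.0] + row for row in X]  # prepend intercept column
    p = k + 1
    XtX = [[sum(Z[i][a] * Z[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Xty = [sum(Z[i][a] * y[i] for i in range(n)) for a in range(p)]
    XtXinv = invert(XtX)
    beta = [sum(XtXinv[a][b] * Xty[b] for b in range(p)) for a in range(p)]
    resid = [y[i] - sum(beta[a] * Z[i][a] for a in range(p)) for i in range(n)]
    sigma2 = sum(r * r for r in resid) / (n - p)  # residual variance estimate
    pvals = {}
    for j in range(1, p):  # skip the intercept
        se = math.sqrt(sigma2 * XtXinv[j][j])
        t = beta[j] / se
        pvals[j - 1] = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2))))
    return pvals

def backward_select(X, y, names, alpha=0.10):
    """Repeatedly drop the least significant predictor while its p-value
    exceeds alpha (the p>.10 elimination criterion used in the study)."""
    keep = list(range(len(names)))
    while keep:
        pv = ols_pvalues([[row[j] for j in keep] for row in X], y)
        worst = max(pv, key=pv.get)
        if pv[worst] <= alpha:
            break
        del keep[worst]
    return [names[j] for j in keep]

# Synthetic example: two informative predictors and one pure-noise column.
random.seed(0)
n = 300
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(n)]
y = [2.0 * row[0] + 0.5 * row[1] + random.gauss(0, 1) for row in X]
kept = backward_select(X, y, ["pcoa_total", "prepharm_gpa", "noise_var"])
print(kept)  # informative predictors survive elimination
```

In the study itself, the equivalent procedure was run in SAS, v9.4, with the same p>.10 retention rule applied across all candidate covariates.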
RESULTS
There were 226 students across the four-year study period for the 23 predictive covariates and three models, incorporating 5876 distinct data values. The average age of the population at admission was 25.4±4.5 years, 47% were female, and 75.2% had prior degrees. Across the four pharmacy classes, there were significant differences in the proportions of students with a bachelor’s degree or any degree upon entry and in average age at admission (Table 1).
Characteristics of the Study Population Included in a Modeling Exercise to Identify Predictors of Student Readiness for Advanced Pharmacy Practice Experiences
The results of the three prediction models are shown in Table 2. Across the three prediction models, the knowledge-based variable of PCOA total score was a significant predictor of APPE scores/readiness. In two of the three modeling approaches, OSCEs dropped out early in the backward selection.
Associations Between Student Readiness for Advanced Pharmacy Practice Experiences and Predictor Variables Using Three Outcome Definitions
In the All Core APPE Rotation Model, the PCOA total score, pre-APPE GPA, and age were the strongest predictors. The R2 value was .16. When OSCEs were incorporated, PCOA total score and Disease and Drug Therapy I, II, and III remained as predictors.
In the Acute and Ambulatory Care Model, the only predictor was the PCOA total score, resulting in an R2 value of .07. However, when the OSCEs were incorporated, the OSCE scores remained significant, along with grades in the Drug Literature course and the Pharmacology course sequence.
In the Number of Midpoint Core Clerkship Failing Scores Model, which modeled the number of students with poor APPE performance (defined as a grade below 3.67 where the maximum score is 5.00), the strongest predictors were PCOA total score, age, and grades in the Therapeutics course sequence and Pharmacotherapy. With OSCEs included, only age and grades in Therapeutics remained significant predictors.
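The overdispersion assessment mentioned in the Methods for this count model can be illustrated with a simple variance-to-mean check. This is a minimal sketch on hypothetical counts of midpoint failing scores, not the study's actual data or its SAS diagnostic; in practice, a formal check (eg, Pearson chi-square divided by degrees of freedom on the fitted Poisson model) would be used.

```python
def dispersion_ratio(counts):
    """Sample variance-to-mean ratio of observed counts. A Poisson outcome
    has variance equal to its mean (ratio near 1); a ratio well above 1
    signals overdispersion and favors negative binomial regression."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return var / mean

# Hypothetical per-student counts of midpoint failing scores (illustrative only)
equidispersed = [1, 2, 0, 3, 1, 2, 0, 1]
overdispersed = [0, 0, 0, 0, 10, 0, 0, 10]
print(round(dispersion_ratio(equidispersed), 2))  # 0.86: near 1, Poisson plausible
print(round(dispersion_ratio(overdispersed), 2))  # 8.57: overdispersed
```

A ratio near 1 supports retaining the simpler Poisson model; a markedly larger ratio would motivate switching to negative binomial regression, as the Methods describe.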
The OSCE scores were further assessed singly, outside of the models, for the two available years to determine their predictive correlation with all core APPE midpoint evaluation scores. For the 2017 class, there was a significant correlation with all APPE rotation scores (R2=0.19, p<.0001). When the 2017 and 2018 classes were combined, the correlation was no longer significant.
Some variables, such as the PCOA test scores, exhibited moderate to strong multicollinearity during data exploration. These variables were all initially left in the model and, pending inclusion in the final model, would have been strategically eliminated based on the relative importance. In this analysis, there was no multicollinearity detected between the independent variables included in the final models. Additionally, no significant heteroscedasticity was detected.
DISCUSSION
Predicting students’ readiness for APPEs remains an elusive goal for pharmacy education programs. The goal of this study was to model common student demographics with admission and didactic performance measures to predict which students might need further education and training before entering APPEs. The results show that commonly available pharmacy education student assessment data are a modest overall predictor of APPE readiness. We found that aggregate pharmacy education knowledge-based variables (ie, PCOA total score, pharmacy didactic performance in therapeutics) and entering age of students were more predictive than skills-based variables (eg, performance in OSCEs), admission measures, or other student demographics, and this was true independent of the type of APPE-readiness outcome definition used in the models.
There is a growing body of APPE-readiness literature that highlights the need for continued evaluation in this area. Our study findings validate those of McLaughlin and colleagues, who examined the relationship of admission and OSCE scores to APPE grades and found that these variables are only modest predictors of APPE grades.3 Other studies have used curriculum changes to assess or develop students to perform at a higher level in APPEs.4-9 None of these, including our data, should be considered a valid, reliable measure for predicting APPE success or failure, emphasizing the need for greater innovation in this area of research and educational pedagogy.
The inability of current research to identify academic markers that can completely predict APPE readiness requires an understanding of all factors that might influence APPE grading. First, communication skills are a key area for student success in APPEs. Although difficult to quantify, the following contribute to student success in APPEs: verbal (eg, English as a second language students) and nonverbal communication, active listening, authenticity, conflict resolution, emotional intelligence, articulation and tone of voice, mirroring others’ expressions, and the ability to ask insightful questions. Second, student professionalism and ethical and legal behaviors can be very important components of APPE readiness, but are not commonly assessed. Third, issues unrelated to educational preparedness could contribute to APPE performance. These might include mental and physical health and family or financial issues. Fourth, there is a lack of standardization in APPE scoring, even when standardized grading rubrics are used. Fifth, the quality of preceptor-student relationships during APPEs is important to how a student is precepted and graded. Finally, it is often a combination of factors that leads to failure in APPEs. A comprehensive and rigorous approach to understanding and standardizing APPE evaluations is needed to fully assess APPE readiness.3
Another area of research into APPE readiness is the potential interaction between pharmacy education’s two major components: the didactic and experiential curricula. Building a learning bridge to integrate didactic learning with experiential learning can enhance student learning in both areas.10 One possible learning bridge is to expose students on a regular basis to both didactic and experiential content by alternating classroom lecture time with time spent in a clinical setting in the same week. This educational philosophy was used in the University of Illinois College of Pharmacy’s post-baccalaureate PharmD program in the 1980s. A learning bridge can effectively assist in self-directed learning and enhance critical-thinking skills.10 We were not able to assess the interaction of didactic and experiential learning during early IPPE experiences to determine their relationship to student readiness for APPEs because all students passed their IPPEs throughout their P1 to P3 years. Nonetheless, this is an area of investigation that should be undertaken to completely understand whether building a learning bridge can contribute to APPE readiness.
This study showed that student retention of knowledge as measured by academic performance in certain core didactic courses and the PCOA examination is modestly important in predicting APPE readiness. However, as with any research, limitations exist. This study was limited to a single institution, which bounds the ability to generalize the results to other pharmacy education programs. However, assessments such as student demographics, pre-pharmacy performance, core didactic performance, and external testing measures are commonly used for programmatic assessments. The study is also limited by differences found in baseline demographics between the graduating classes included in the study. These differences would be expected, as schools do not admit students based on demographic profiles or select enrollment to equalize accumulated GPA across feeder colleges or universities. Finally, the study was limited by the availability of across-the-curriculum student performance data over the four years of the study. Because curricula are constantly changing and assessments often lag behind development, this limitation should be expected. For example, OSCEs were only conducted for the last two years of this analysis, thereby limiting our ability to assess this experience as a predictor of students’ APPE readiness. The APPE grading rubrics used may have also contributed to the inability to correlate traditional student, didactic, and external measures with APPE scores. Clerkships, which vary from traditional community pharmacy to institutional pharmacy and from acute care to ambulatory care, have different grading rubrics. These differences may contribute to the lack of homogeneity in the data, and assigning APPE grades is an inherently subjective process, both of which contribute to the weakness of the model fit.
CONCLUSION
This four-year population-based modeling study of common pharmacy education assessment data against APPE midpoint scores shows a modest correlation with knowledge retention-based measures. This study was not able to model ethical, professional, or behavioral performance or other factors, which may have contributed to students’ APPE readiness and success in pharmacy education programs. This field of research remains challenging and requires innovative longitudinal research incorporating noncognitive factors to fully evaluate pharmacy students’ APPE readiness.
ACKNOWLEDGMENTS
The authors thank the University of Utah College of Pharmacy P1-P4 curriculum stewards for assistance in the design of this study; Heidi Bates, Shawna Hansen, Madeline Marshall, and Carrie Kilpatrick for their assistance with data assimilation; and James Herron, PhD, for study data collection oversight.
- Received July 29, 2019.
- Accepted September 30, 2019.
- © 2020 American Association of Colleges of Pharmacy