Abstract
Objective. To evaluate levels of entrustability and practice readiness in advanced pharmacy practice experience (APPE) students using a pilot instrument designed to assess their competency in performing the entrustable professional activities (EPAs) expected of new pharmacy graduates.
Methods. A pilot instrument was developed directly from EPAs to measure entrustability levels on a scale of one to five. Five APPE preceptors from different practice areas participated. Fourth-year students used the instrument to self-evaluate their knowledge and skills at the beginning, midpoint, and end of the APPE. The preceptors evaluated students using the same instrument at the APPE midpoint and end. The instrument assigned equal weights to each EPA and entrustability level so that the total score was 100 if all items were rated five. If a rating of not applicable was chosen, the score was adjusted accordingly. All students in the graduating class of 2018 were invited to perform a self-evaluation at the end of the fourth (APPE) year using the same instrument that study participants used.
Results. Twenty-eight students and five preceptors completed evaluations during the APPE year. Overall scores from both preceptor evaluations of students and student self-evaluations increased significantly from pre-APPE to midpoint to final. Student self-evaluations were only slightly higher than preceptor evaluations. The mean (SD) preceptor scores for students and student self-assessment scores at the end of each APPE were 85.4% (7.1) and 87.2% (10.3), respectively. One practice manager EPA and three population health EPAs were considered not applicable by preceptors on ≥50% of evaluations. Approximately 94% of all graduating students completed the year-end self-evaluations, with a mean (SD) score of 89% (8.6) and no EPAs marked as not applicable.
Conclusion. Pharmacy students’ proficiency in performing EPAs improved during individual APPEs. According to preceptors, students’ greatest improvement in entrustability was in educating patients and colleagues regarding appropriate use of medications and collecting information to identify medication-related problems.
- advanced practice experiences
- core entrustable professional activities
- entrustability
- performance assessment
- proficiency development
INTRODUCTION
Entrustable professional activities are observable units of work initially developed by the Association of American Medical Colleges for the evaluation of medical students, graduates, and residents.1-3 The core EPAs for new pharmacy graduates were developed by American Association of Colleges of Pharmacy (AACP) committees and stakeholders, and describe tasks that any new pharmacy graduate should be able to perform.4-7 Published in 2017 with suggested stages for assessing student performance and practice readiness, the 15 EPAs are organized into six domains, and each EPA includes supporting tasks for assessing performance (Table 1).5 For example, the EPA “Create a care plan for a patient in collaboration with the patient, caregivers and other health-care professionals” has six examples of supporting tasks, one of which is “Selecting monitoring parameters.”5
Core Entrustable Professional Activities (EPAs) for New Pharmacy Graduates5,6 Included in a Pilot Evaluation Instrument to Assess Students Completing Advanced Pharmacy Practice Experiencesa
Proficiency development and entrustability levels, which range from one (lowest) to five (highest), were published alongside the EPAs and include the role of the supervisor or preceptor at each stage.6 While several published articles provide guidance for applying EPAs in pharmacy education, the ultimate goal is for pharmacy students to perform all EPAs at level three or higher upon graduation.4,6-11 Stakeholders have expressed confidence in the relevance of EPAs to pharmacy education and practice; however, articles reporting on the application of EPAs and evaluating student entrustability levels in didactic or experiential pharmacy education are limited.12-16 We found no literature on how to apply AACP’s EPAs when assessing students’ entrustability levels on advanced pharmacy practice experiences (APPEs).
Given the scarcity of literature on evaluating APPE students using EPAs and the lack of consensus on expected performance levels from previous work,10,14,15 we believed it was important to gather pilot data on this subject to inform future implementation of EPA-based assessment in APPEs in our college and other colleges of pharmacy. The purpose of our project was to examine the use of AACP EPAs to evaluate the entrustability and practice readiness of APPE students. The primary objective was to use EPAs for preceptor evaluation and student self-evaluation of practice readiness for a cohort of students, and then to examine student self- and preceptor evaluation results. Secondary objectives were to assess which EPAs, if any, preceptors or students deemed as not applicable, determine students’ perception of the usefulness of EPA-based preceptor and student evaluations, and measure students’ perception of their EPA proficiency at the end of the APPE year.
METHODS
This project was approved by Mercer University’s Institutional Review Board, and informed consent was obtained from participants. The authors developed a pilot evaluation instrument directly from AACP’s EPAs and adopted the AACP EPA ranking levels of one to five for performance entrustability.5,6 The pilot instrument included 14 of the 15 AACP EPAs, each with a description of the five entrustability levels. Preceptors and students were asked to indicate student proficiency in performing each EPA or to indicate if an EPA was not applicable to that APPE (Table 1). We included the exact language used by Haines and colleagues for the five entrustability levels used to assess practice readiness as well as all AACP example supporting tasks for each of the 14 EPAs included in our instrument.5,6 For example, for “Minimize adverse drug events and medication errors,” the example supporting tasks “Assist in the identification of underlying system-associated causes of errors” and “Report adverse drug events and medication errors to stakeholders” were included.5
The AACP’s EPA 13, “Oversee the pharmacy operations for an assigned work shift,” was omitted from our pilot instrument.5 Pharmacy students are not licensed pharmacists and therefore are not allowed to supervise licensed pharmacists or other interns during APPEs in accordance with state law and college and practice site affiliation agreements.18 With omission of this EPA, AACP’s EPAs 14 and 15 became the 13th and 14th items on our instrument (Table 1).5
The pilot instrument assigned equal weights to each EPA and to entrustability levels one through five, so the total raw score was 70 if all 14 items were ranked level five (ie, 5 x 14 = 70). Total scores were normalized to 100, with adjustments made if a response of not applicable was given. For example, if not applicable was chosen for two EPAs, the maximum possible raw score was 60, and the total score was then converted to a scale of 100. We did not ask preceptors and students to rank the 14 EPAs on the instrument in order of relevance to the specific APPE they were completing. Two APPE preceptors who were not part of the study reviewed the pilot instrument and suggested changes, which were subsequently incorporated into the study instrument.
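To make the scoring arithmetic concrete, the sketch below implements the normalization described above in Python; the function name and example data are hypothetical, and this is an illustration of the calculation rather than the instrument or software the authors actually used.

```python
# Minimal sketch of the scoring described above (illustrative only; not the
# authors' actual instrument or software). Each EPA receives a rating of 1-5,
# or None when the EPA was marked not applicable for that APPE.

def normalized_epa_score(ratings):
    """Return the total score converted to a 0-100 scale, excluding N/A items."""
    applicable = [r for r in ratings if r is not None]
    if not applicable:
        raise ValueError("At least one EPA must be applicable to compute a score.")
    max_possible = 5 * len(applicable)   # eg, 60 when two EPAs are not applicable
    return 100 * sum(applicable) / max_possible

# All 14 EPAs rated level five: raw total 70, normalized score 100.
print(normalized_epa_score([5] * 14))                 # 100.0
# Two EPAs not applicable, remaining 12 rated level three: normalized score 60.
print(normalized_epa_score([3] * 12 + [None, None]))  # 60.0
```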
Four faculty preceptors and one non-faculty preceptor overseeing Advanced Community, Ambulatory Care, Cardiology, Geriatrics, and Infectious Disease APPEs volunteered to participate. Each of the APPE sites represented by a preceptor had an affiliation agreement with the college. The one non-faculty preceptor also served as a community-based pharmacy residency preceptor. The primary investigator demonstrated how to use the pilot instrument and provided examples of concrete tasks for the five study preceptors (Table 1).
All students assigned to these five preceptors during academic year 2017-2018 were asked to voluntarily participate and complete an IRB-approved informed consent form. Once students consented to participate in the study, they used the instrument to perform a self-evaluation at the beginning of the APPE and again at the midpoint and end of the APPE.
Each student’s preceptor reviewed their initial self-evaluation with the student and discussed and established entrustability goals for that APPE as well as the specific supporting tasks the student would be performing. Preceptors then evaluated students at the midpoint and end of the APPE using the instrument. After completion of the APPE, each preceptor and student discussed the student’s performance goals and entrustability ratings based on observed concrete tasks, the student’s self-evaluations, and reasons for any discrepancies. To encourage objectivity in their assessments, the study preceptors did not have access to students’ performance on prior APPEs with study or non-study preceptors.
To determine student perceptions of EPA-based self- and preceptor evaluations, students were invited to voluntarily complete an anonymous four-question survey at the conclusion of the APPE. Students used a four-point Likert scale (responses ranged from strongly disagree to strongly agree) to respond to the following statements:
- Performing the self-evaluation at the beginning of the APPE and discussing it with the preceptor helped to establish performance goals.
- Performing the midpoint self-evaluation with preceptor discussion helped to refine performance goals.
- Having a midpoint evaluation with discussion of student performance levels was valuable in providing a realistic view of practice readiness and the steps needed to improve performance entrustability.
- Performing the final self-evaluation and discussing it along with the preceptor's final evaluation helped to establish performance goals for remaining APPEs.

These four survey questions were not pretested.
To further explore the perceptions of graduating pharmacy students regarding their entrustability and practice readiness after completing all APPE requirements, all fourth-year students in the 2018 graduating class (N=153) were invited to voluntarily perform a self-evaluation. Students were given the same pilot instrument to use and asked to indicate their level of entrustability for each EPA based on their four years in pharmacy school, including all didactic and experiential education and work experience.
Data are presented as mean (standard deviation) or No. (%) and were analyzed using SPSS, version 25 (IBM, Armonk, NY). Preceptor and student evaluation scores were compared using one-way ANOVA with post hoc least significant difference tests. Documentation of EPA items as not applicable, the student opinion survey, and the EPA self-assessment at the end of the fourth year were analyzed using descriptive statistics. A p value <.05 was considered statistically significant.
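For readers who want to see the shape of such an analysis outside of SPSS, the following is a rough, hypothetical sketch in Python with placeholder data; it approximates a one-way ANOVA followed by unadjusted pairwise (least significant difference-style) comparisons and is not the authors' analysis code.

```python
# Hypothetical re-creation of the analysis approach (placeholder data only;
# the study itself used SPSS version 25).
from scipy import stats

# Overall evaluation scores (0-100 scale) at the three time points.
pre_appe = [62, 70, 68, 75, 71]
midpoint = [78, 82, 80, 85, 79]
final    = [88, 90, 86, 92, 87]

# One-way ANOVA across the three time points.
f_stat, p_value = stats.f_oneway(pre_appe, midpoint, final)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Unadjusted pairwise t tests, in the spirit of Fisher's least significant
# difference procedure, run only when the omnibus ANOVA is significant.
if p_value < 0.05:
    pairs = {"pre vs midpoint": (pre_appe, midpoint),
             "pre vs final": (pre_appe, final),
             "midpoint vs final": (midpoint, final)}
    for label, (a, b) in pairs.items():
        t_stat, p = stats.ttest_ind(a, b)
        print(f"{label}: t = {t_stat:.2f}, p = {p:.4f}")
```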
RESULTS
During the 2017-2018 APPE year, all 28 (100%) of the APPE students assigned to the five study preceptors completed the self-assessments (n=60) using the pilot instrument. The five study preceptors completed 90 evaluations, for a combined total of 152 evaluations. Only two students were assigned to more than one preceptor during the study. The results of preceptor midpoint and final evaluations and the students’ pre-APPE, midpoint, and final self-evaluations are reported in Table 2. In the student self-evaluations, a significant increase from pre-APPE to midpoint, pre-APPE to final, and midpoint to final was identified on 11, 14, and seven EPAs, respectively (Table 2). Student scores on final self-evaluations for seven EPAs were slightly higher than the corresponding scores on preceptor evaluations, but none of the differences was significant. Preceptors’ scores were significantly higher on students’ final evaluations than on midpoint evaluations for eight EPAs, with a nonsignificant increase on the remaining four (Table 2). The difference between preceptor scores on the midpoint and final evaluations was greatest for two EPAs: “Educating patients and professional colleagues regarding the appropriate use of medications” and “Collect information to identify a patient’s medication-related problems and health-related needs.” Preceptors’ final evaluation scores for students were equal to or slightly higher than students’ self-evaluation scores for seven EPAs, with the difference in scores for one EPA, “Maximizing the appropriate use of medications applying cost benefit or epidemiologic principles,” being significant (preceptor final evaluation score on entrustability = 4.6 (0.5) vs student final self-evaluation score = 4.0 (0.5), p<.03; Table 2). Scores on student final self-evaluations were significantly higher than scores on pre-APPE student self-evaluations on all EPAs. The greatest increase in self-evaluation scores was on the EPA “Establish patient-centered goals and create a care plan for a patient in collaboration with the patient, caregivers, and other health-care professionals (HCPs)” (pre-APPE score=3.0 (0.9) vs final score=4.4 (0.7), p<.0001; Table 2). Preceptor final evaluations were significantly higher than student pre-APPE evaluations on all 14 EPAs. Overall EPA performance scores from both preceptor evaluations and student self-evaluations increased significantly from pre-APPE to midpoint to final. The mean (SD) preceptor score for students at the end of each APPE was 85.4% (7.1); the mean student self-assessment score at the end of each APPE was 87.2% (10.3) (p=.48).
Evaluation of Pharmacy Students’ Performance on the Core Entrustable Professional Activities for New Pharmacy Graduates While on Advanced Pharmacy Practice Experiences Using a Pilot Evaluation Instrument5,6
The percentages of EPAs marked not applicable by the students and preceptors are shown in Table 3. While some not applicable rankings changed as each APPE progressed (beginning to final evaluation), most stayed consistent throughout the APPE. On final evaluations, one EPA, “Fulfill a medication order,” was marked as not applicable by at least 50% of students, and four EPAs, “Identify patients at risk for prevalent diseases,” “Maximize the appropriate use of medications,” “Ensure that patients have been immunized,” and “Fulfill a medication order,” were marked as not applicable by at least 50% of preceptors (Table 3). The type of APPE did have some impact on how many EPAs were considered not applicable. Preceptors of APPEs in cardiology, geriatrics, and infectious disease indicated that at least one EPA was not applicable on 100% of final student evaluations. In comparison, the preceptor for the ambulatory care APPE marked at least one EPA as not applicable on 60% of final student evaluations, while the preceptor for the advanced community APPE did not mark any EPA as not applicable on final student evaluations. From 80% to 100% of students completing the cardiology, geriatrics, and infectious disease APPEs indicated on final self-evaluations that at least one EPA was not applicable, compared to 40% of students completing a final evaluation for ambulatory care and 0% completing a final evaluation for advanced community.
Percentage of Core Entrustable Professional Activities for New Pharmacy Graduates Marked “Non-Applicable” by Advanced Pharmacy Practice Experience Students and Preceptors
Twenty-eight students (100%) completed the opinion survey at completion of the APPE. All students chose either agree or strongly agree on each of the four survey questions regarding the utility of preceptors and students completing evaluations on entrustability and discussing student progress in entrustability.
At the end of the academic year, immediately prior to graduation, 143 (93.5%) of the 153 students in the class voluntarily completed a self-evaluation using the pilot instrument that the preceptors had used. None selected not applicable for any EPA (Table 3). The mean (SD) self-evaluation entrustability level given by students was greater than 4.0 for all EPAs on the instrument, ranging from 4.2 (1) to 4.7 (0.5). The lowest student self-evaluation average was for “Maximize appropriate use of medications in a population” and the highest was for “Collect information to identify medication-related problems.” The mean (SD) calculated score out of 100 for all self-evaluations completed by members of the graduating class was 89 (8.6).
DISCUSSION
This study examined the use of EPAs in assessing APPE student practice readiness. To our knowledge, this pilot study is the first to describe using AACP’s EPAs with the five levels of entrustability in an APPE curriculum to determine pharmacy students’ practice readiness. Preceptors evaluated students on their entrustability in performing concrete tasks for each EPA applicable to that specific APPE, while students’ perception of their entrustability was captured via self-evaluation using the pilot instrument.
Analysis of preceptor midpoint and final evaluations indicated progression in student proficiency in the EPAs that the preceptor deemed applicable to that setting. The difference between preceptor midpoint and final evaluation scores was greatest for two EPAs, “Educating patients and professional colleagues regarding the appropriate use of medications” and “Collect information to identify patients’ medication-related problems and health-related needs.” This could imply that ample opportunities for students to collect medication histories, discuss patients’ experiences with medications, and educate patients and other health care professionals regarding medications, along with preceptor feedback, help APPE students demonstrate these entrustable professional activities in a variety of practice settings. These opportunities are more limited in introductory pharmacy practice experiences and in the didactic curriculum than in APPEs. In our didactic curriculum, simulated patient encounters are more common than actual patient encounters, and educating peers, residents, or faculty members is more common than educating other HCPs.
Students' final self-evaluation scores on all EPAs were significantly higher than their pre-APPE self-evaluation scores. The greatest increase in student self-evaluation score at the end of an APPE compared to at the beginning was for the EPA “Establish patient-centered goals and create a care plan for a patient in collaboration with the patient, caregivers, and other HCPs,” which implies that working with patients, caregivers, and health care providers during the APPEs included in our study had a positive impact on students’ confidence in their entrustability. Most students had not had extensive experience performing these tasks with actual patients and health care professionals prior to starting their APPEs. Study authors shared these results with college practice experience and assessment directors to use in making appropriate quality improvements. As part of ongoing curricular improvement, each year our college increases student opportunities to work with health care providers and educate patients in simulation laboratories, service-learning activities, interprofessional education, and introductory practice experiences.
Students found the EPA-based evaluation tool to be useful in establishing performance goals and valuable in providing them with a realistic view of their progress during the APPE and their readiness for practice. These results were communicated to the experiential education office. The impact of the preceptor-led discussion of students’ pre-APPE self-assessments on subsequent preceptor and student evaluations for that APPE cannot be measured, although scores on preceptor and student self-evaluations correlated well. For “Maximizing the appropriate use of medications in a population by performing a medication use evaluation or applying cost benefit, formulary or epidemiologic principles to medication-related decisions,” preceptors’ mean final scores for student performance were significantly higher than students’ final self-rating scores, which could suggest that students did not understand how these tasks related to APPE activities rather than that students had relatively low self-confidence in their knowledge of this area. However, this EPA was also the one most often considered not applicable by our subset of preceptors, a finding that was also shared with college practice experience and assessment directors. Students could also indicate that one or more EPAs were not applicable for a particular APPE. During the APPE year, our cohort of students most often perceived the EPA “Fulfill a medication order, maximize the appropriate use of medications and ensure that patients have been immunized” as not applicable; however, this was likely because students on three of the five study APPEs did not fill medication orders as part of their responsibilities. Our students’ perceptions of which EPAs were not applicable during APPEs differed from the perceptions of students at four colleges regarding which EPAs are relevant and expected activities of pharmacists in multiple practice settings.15 In a study by Pittenger and colleagues, APPE students perceived that all 15 EPAs were relevant and applicable to multiple practice settings, but they ranked “Create a written care plan for continuous professional development, implement a care plan and monitor a care plan” lowest in terms of relevance.15 Immediately prior to graduation, when we asked our graduating students to indicate their perceived entrustability level for each EPA using the pilot instrument (Tables 1 and 3), or to indicate any they felt were not applicable, the respondents did not mark any EPA as not applicable, which is more consistent with what would be expected of practice-ready graduates and with the findings from Pittenger’s study.15
The limitations of this study include the limited number of preceptor volunteers, the limited types of APPE settings, and the inclusion of a single graduating cohort. The initial student self-evaluations and discussions with preceptors to establish goals and consider the concrete tasks the students would perform for a particular APPE may also have influenced student performance and preceptors’ assessments of student entrustability. We hope that the discussions between the preceptor and student at the beginning of the APPE, along with the discussions they had at midpoint, positively influenced students’ proficiency in performing concrete tasks that all pharmacy students should be able to perform upon graduation (Table 2). Other factors, such as having completed other APPEs prior to the study, having had prior work experience, and having interacted with health care providers who were not preceptors during the APPE, also may have had an impact on student entrustability. Although the study included only one small cohort (n=28) of students, we also reported on the entrustability self-perceptions (mean score greater than 4.0) of 93.5% of an entire class of pharmacy students just prior to graduation. These self-assessment scores were not dissimilar to those reported by a sample of APPE students at four institutions, who had median self-assessment scores of four on 13 EPAs.15 At the time of our study, the majority of the graduating class was not very familiar with AACP’s EPAs and entrustability levels, and only the 28 study participants had ever used an EPA-based APPE evaluation instrument. Finally, students may have chosen agree or strongly agree on the student opinion survey regarding the utility of preceptor and student entrustability evaluations out of politeness, which is an inherent limitation of perception studies.
CONCLUSION
In this pilot study, EPAs for new pharmacy graduates were used in APPEs by a group of preceptors and students to assess student practice readiness. Students’ levels of entrustability, as measured by preceptor evaluations, improved during APPEs. Per preceptor midpoint and final evaluations, students showed the greatest improvement in entrustability for “Educating patients and professional colleagues regarding appropriate use of medications” and “Collect information to identify a patient’s medication-related problems and health-related needs.” Students’ final self-evaluation ratings were significantly higher on all EPAs than their self-evaluation ratings on the first day of their APPE. Additional studies involving multiple institutions and all types of APPEs are needed on the applicability and assessment of EPA entrustability in APPE students. Prior to the widespread implementation of EPA-based evaluations across the entire college of pharmacy curriculum, including in didactic courses and practice experiences, additional projects using EPA-based assessment instruments and entrustability levels are needed to inform educators and practitioners, ensure end-user efficiency, and further establish validity.
ACKNOWLEDGMENTS
The authors thank Dr. Nader Moniri for his assistance in the preparation of this manuscript.
- Received September 21, 2019.
- Accepted May 25, 2020.
- © 2020 American Association of Colleges of Pharmacy