Abstract
Objective. To examine entrustable professional activities (EPAs) as an assessment tool for student pharmacists completing early practice experiences.
Methods. Students completed a 2-month practice experience upon conclusion of their first year. Student performance on EPAs was assessed by preceptors and by student self-evaluation at the midpoint and conclusion of the experience using a scale that ranged from dependent (1.0) to independent (5.0). The Wilcoxon Signed-Rank Test assessed for differences between the midpoint and final evaluations on both the student self-evaluations and the preceptor evaluations of students. Cronbach’s α assessed the reliability of the EPA assessment instruments.
Results. From May to August 2016, 147 students completed a practice experience. Across EPAs 1-14, student self-evaluations and preceptor evaluations of students approximated median scores of 3.0 (IQR 2) at the midpoint and 4.0 (IQR 3) at the final evaluation. Analyses revealed statistically significant increases from the midpoint to the final evaluation for all constructs on both evaluations. Cronbach’s α yielded scores of 0.98 for the preceptor evaluations and 0.95 for the student self-evaluations.
Conclusion. There was an increase in student performance over time. The EPA statements may be a reliable assessment tool for student performance in pharmacy education.
- entrustable professional activities
- clinical education
- experiential education
- introductory pharmacy practice experience
- performance assessment
INTRODUCTION
Entrustable Professional Activities (EPAs) are units of professional practice, defined as tasks or responsibilities that trainees are entrusted to perform unsupervised once they have obtained sufficient competence. EPAs are independently executable, observable, and measurable in both process and outcome.1-3 By comparison, a competency is an observable ability of a health professional that may include knowledge, skills, values, or attitudes. A milestone represents a behavioral description that marks a level of performance for a given competency. EPA statements require the integration of competencies, usually from multiple domains. Each competency is accompanied by milestones that serve as descriptive narratives of expected behaviors for pre-entrusted learners compared to entrusted learners.1,4,5 EPAs originated in medical education when the Association of American Medical Colleges (AAMC) defined core EPAs for medical students to enter medical residency.2 The American Association of Colleges of Pharmacy (AACP) finalized and published its list of core EPAs for pharmacy graduates in March 2017.3
The authors performed a comprehensive literature search in PubMed using the search term “entrustable professional activity.” The search yielded 143 results, three of which were related to pharmacy practice. Two additional manuscripts relating to the profession of pharmacy, not yet indexed in PubMed, were found in the American Journal of Pharmaceutical Education. Literature about EPAs is largely limited to commentaries regarding the future direction of pharmacy or medical education, the development of EPA statements for medical specialties, the linking of EPA statements to programmatic competencies, or the level of entrustment granted to post-graduate medical residents. Minimal literature describes EPA-related performance assessment in medical students; available data describe entrustment on one or two specific EPA statements in the weeks prior to graduation from the medical curriculum.6,7 Lomis and colleagues described a pilot program involving 10 medical schools that are in the process of implementing the AAMC Core EPAs framework in undergraduate medical education; outcomes data and key learnings will be reported as they become available from each medical school.8 No literature discusses a learner’s journey to entrustment throughout the medical school curriculum, nor specific programmatic improvements implemented to raise a learner’s level of entrustment across the continuum of medical education. Primary literature regarding the use of EPAs in pharmacy education is limited, detailing procedures for creating EPA statements, competencies, and milestones at a single institution.9-11 No data are yet available describing the actual use of EPA statements as a performance assessment tool at any level of pharmacy education.
This project is the first to describe the development of an early pharmacy practice experience that uses EPA statements and a criterion-based clinical evaluation scale as a method of performance assessment for student pharmacists. The objective of this exploratory study was to derive EPA statements for an early practice experience called “Immersion Experience 1,” to develop a reliable tool to evaluate student pharmacist EPA performance, and to examine self-reported and preceptor-reported student performance on professional practice-related activities at the midpoint and conclusion of the practice experience.
METHODS
Upon completion of their first professional year (PY1), students completed a 2-month early practice experience in a community pharmacy or health system pharmacy setting.12 Students prepared for the practice experience by completing foundational coursework during the PY1 year.13 Foundational coursework focused on instructing learners to perform the process of patient care,14 including how to perform a patient work-up, complete a comprehensive medication history interview, identify medication-related problems, communicate recommendations to the pharmacist or provider, perform patient/caregiver medication education, and provide documentation of the encounter. This study was exempted from review by the Institutional Review Board (IRB) at the University of North Carolina at Chapel Hill (UNC).
As part of the curricular transformation occurring at this school, two experiential programs faculty members developed a list of EPA statements that graduates of the program should be entrusted to complete upon graduation. Following the AAMC format, functions were defined for each EPA and mapped to core competencies for the curriculum in collaboration with the school’s director of the Office of Strategic Planning and Assessment (Table 1).1,15 After completion of the mapping process, a draft of the AACP EPA Statements for Pharmacy Graduates was released for comment; however, the list of Core EPAs for Pharmacy Graduates had not been finalized prior to the development of UNC EPA statements. Experiential programs faculty together with the school’s educational researchers also developed a clinical evaluation scale to measure student pharmacist performance on each EPA. The scale was adapted from a criterion-based rating tool that is used to evaluate the performance of clinical activities in nursing practice.16 This tool was chosen because of its wide use in health profession education,17-22 and for its ability to track a learner’s progress across the curriculum. The scale categorizes a learner on one of five levels, ranging from dependent through independent, based on assessment in three domains: professional practice standard, performance quality, and assistance required (Table 2). If a student evaluation fell within different scale levels for the three domains, preceptors were instructed to choose the lowest level for the evaluation score. The EPA statements and the scale were approved for use by the school's Curricular Transformation Steering Committee. The school will use these 14 EPA statements and corresponding scale to assess student readiness for independent practice as they complete practice experiences throughout the PharmD curriculum. At the time of the study, the school had only used EPAs to assess student performance for PY1 students.
Entrustable Professional Activity (EPA) Statements Used by the UNC Eshelman School of Pharmacy
Clinical Evaluation Scale Used by the UNC Eshelman School of Pharmacy
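To illustrate how the Clinical Evaluation Scale was intended to be applied, the following is a minimal sketch, assuming a simple numeric representation of the three domains; the function and label names are illustrative only and are not part of any evaluation software used by the school.

```python
# Sketch of the scoring rule described above: when a learner's ratings for the
# three domains (professional practice standard, performance quality, assistance
# required) fall at different scale levels, the overall score for that EPA is the
# lowest of the three domain levels. Names are illustrative assumptions.

SCALE_LABELS = {1.0: "dependent", 2.0: "marginal", 3.0: "assisted",
                4.0: "supervised", 5.0: "independent"}

def overall_epa_level(practice_standard: float, performance_quality: float,
                      assistance_required: float) -> float:
    """Return the overall level for one EPA: the lowest of the three domain ratings."""
    return min(practice_standard, performance_quality, assistance_required)

# Example: domains rated assisted (3.0), supervised (4.0), and marginal (2.0)
# yield an overall rating of 2.0 ("marginal").
level = overall_epa_level(3.0, 4.0, 2.0)
print(level, SCALE_LABELS[level])
```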
Immersion Experience 1 practice sites were expected to provide students with experience in the medication use process (50% of the experience in pharmacy operations) and in the patient care process (50% of the experience in clinical practice). Health system and community practice sites were selected based upon their willingness to host students and their ability to provide learners with operational and clinical experiences as defined above. All community pharmacies selected to host students for this practice experience offered enhanced services that extend beyond conventional medication dispensing and basic patient education (eg, medication synchronization, comprehensive medication review, immunizations, compliance packaging, medication adherence monitoring, and patient coaching). All practice sites were required to participate in a live webinar hosted by the Immersion Experience 1 course directors prior to the launch of the experience. The webinar described the coursework student pharmacists had completed to date, outlined expectations for Immersion Experience 1, and oriented preceptors to the appropriate use of the UNC EPA Statements and Clinical Evaluation Scale.
Student performance on the early practice experience was assessed by preceptors (“Preceptor Evaluation of Student”) and student self-assessment (“Student Self-Evaluation”) based upon the learner’s ability to perform each of the 14 EPAs according to the scale. Potential scores included dependent (score of 1.0), marginal (score of 2.0), assisted (score of 3.0), supervised (score of 4.0), or independent (score of 5.0) for each EPA statement. Preceptors and students were each required to submit an evaluation of student performance at the end of the first month (referred to as “midpoint”) and the end of the second month (referred to as “final” or “conclusion”) of the experience. Prior to the launch of Immersion Experience 1, course directors established a priori that for a student pharmacist to successfully pass the course, a level of “marginal” (Level 2.0) or greater must be achieved for EPA1 (review and collect pertinent medication and medical information), EPA2 (perform a comprehensive medication history interview), EPA8 (document clinical encounters), EPA11 (provide an oral presentation of a clinical encounter to a pharmacist or health care provider), and EPA12 (form clinical questions and retrieve evidence to advance patient care). These EPA statements were chosen based upon didactic coursework that all student pharmacists encountered during the PY1 year.
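The passing standard described above can be expressed as a simple check, shown in the minimal sketch below; the data structure and function names are hypothetical and are not drawn from the course’s actual grading process.

```python
# Sketch of the a priori passing standard: a student passes Immersion Experience 1
# only if the rating is "marginal" (2.0) or greater on EPA1, EPA2, EPA8, EPA11,
# and EPA12. Names and data structures are illustrative assumptions.

REQUIRED_EPAS = [1, 2, 8, 11, 12]   # EPAs tied to completed PY1 didactic coursework
PASSING_LEVEL = 2.0                 # "marginal" on the Clinical Evaluation Scale

def passes_course(final_scores: dict) -> bool:
    """final_scores maps EPA number (1-14) to the evaluation score (1.0-5.0)."""
    return all(final_scores.get(epa, 0.0) >= PASSING_LEVEL for epa in REQUIRED_EPAS)

# Example: a student rated 3.0 ("assisted") on every EPA meets the standard.
print(passes_course({epa: 3.0 for epa in range(1, 15)}))  # True
```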
Descriptive statistics were used to characterize all data. Assessments of student performance on EPA statements are presented as the median and interquartile range. The Wilcoxon Signed-Rank Test was used to assess for differences between the midpoint and final evaluations on student self-evaluations and between the midpoint and final evaluations on preceptor evaluations. Sub-analyses were performed using the Mann-Whitney U test to assess for differences between preceptor evaluations and student self-evaluations at the midpoint and at the conclusion of the experience, and for differences between the community and health system practice settings on both preceptor final evaluations and student final self-evaluations. Differences in student self-evaluations from midpoint to final, in preceptor evaluations from midpoint to final, between students and preceptors at the midpoint, and between students and preceptors at the final evaluation were stratified by practice setting (community and health system). Holm-adjusted p-values (pHolm) are presented for analyses involving multiple comparisons to reduce the possibility of type I error.23 All sub-analyses were intended to be exploratory and hypothesis generating. Cronbach’s α was used to assess the internal consistency of the EPA statements. Analyses were performed using SPSS v.23 (IBM, Armonk, NY). Face validity of the scale was assessed through vetting by experts at the school. A full evaluation of instrument validity was beyond the scope of this exploratory study and was not performed.
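For readers who wish to reproduce analyses of this kind, the following is a minimal sketch in Python using pandas, SciPy, and statsmodels; the original analyses were performed in SPSS v.23, and the data frames here (one row per student, one column per EPA) are hypothetical stand-ins for the evaluation data.

```python
import pandas as pd
from scipy.stats import wilcoxon, mannwhitneyu
from statsmodels.stats.multitest import multipletests

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of the 14 EPA items on one instrument."""
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

def midpoint_vs_final(midpoint: pd.DataFrame, final: pd.DataFrame) -> pd.DataFrame:
    """Paired Wilcoxon signed-rank test per EPA, with Holm-adjusted p-values."""
    pvals = [wilcoxon(midpoint[col], final[col]).pvalue for col in midpoint.columns]
    reject, p_holm, _, _ = multipletests(pvals, method="holm")
    return pd.DataFrame({"p_raw": pvals, "p_holm": p_holm, "significant": reject},
                        index=midpoint.columns)

def community_vs_health_system(community: pd.DataFrame,
                               health_system: pd.DataFrame) -> pd.DataFrame:
    """Independent-groups Mann-Whitney U test per EPA, with Holm adjustment."""
    pvals = [mannwhitneyu(community[col], health_system[col],
                          alternative="two-sided").pvalue
             for col in community.columns]
    reject, p_holm, _, _ = multipletests(pvals, method="holm")
    return pd.DataFrame({"p_raw": pvals, "p_holm": p_holm, "significant": reject},
                        index=community.columns)
```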
RESULTS
Upon enrollment in the PharmD curriculum, 81% (n=123) of 153 students had a prior degree and the mean cumulative grade point average was 3.5. The majority of students were female (68%, n=104) and the mean age was 22 years (range 19-32 years). Additional characteristics of student pharmacists were not collected in the time immediately preceding Immersion Experience 1. After the completion of PY1 didactic coursework, 147 students progressed to participate in Immersion Experience 1; 67 students completed the experience in May-June 2016 (35 students in community, 32 students in health system) and 80 students completed the experience in July-August 2016 (42 students in community, 38 students in health system). Five major health systems and 34 community pharmacies in North Carolina hosted students during Immersion Experience 1.
Median and interquartile ranges for each EPA statement for students across all practice sites during Immersion Experience 1 are depicted in Table 3. Figures 1 and 2 show the spread of student performance scores according to type of evaluation (ie, student self-evaluation vs. preceptor evaluation of student) at the midpoint and conclusion of the experience.
Comparison of Midpoint vs Final Student Performance on Student Self-Evaluations and Preceptor Evaluation of Student
Comparison of Midpoint vs. Final Student Performance on Student Self-Evaluations.
Comparison of Midpoint vs. Final Student Performance on Preceptor Evaluation of Student.
Student self-evaluation at the midpoint evaluation returned median scores of 2.0 (IQR 2) on one EPA statement, 2.0 (IQR 3) on one EPA statement, 2.5 (IQR 1) on one EPA statement, 3.0 (IQR 2) on seven EPA statements, 3.0 (IQR 3) on one EPA statement, 4.0 (IQR 1) on one EPA statement, 4.0 (IQR 2) on one EPA statement, and 4.0 (IQR 3) on one statement. At the conclusion of the experience, student self-evaluation on EPA statements yielded median scores of 3.0 (IQR 2) on four EPA statements, 3.5 (IQR 2) on one EPA statement, 4.0 (IQR 1) on three EPA statements, and 4.0 (IQR 2) on six EPA statements. Wilcoxon Signed-Rank Test revealed statistically significant increases from midpoint to final evaluation for all EPA constructs on student self-evaluations (pHolm<.01). Preceptor evaluation of student on EPA statements at the midpoint evaluation returned median scores of 3.0 (IQR 1) on six EPA statements and 3.0 (IQR 2) for eight EPA statements. At the end of the experience, preceptor evaluation of student on EPA statements yielded median scores of 3.0 (IQR 1) on three EPA statements, 3.0 (IQR 2) on four EPA statements, 4.0 (IQR 1) on two EPA statements, and 4.0 (IQR 2) on five EPA statements. Wilcoxon Signed-Rank Test revealed statistically significant increases from midpoint to final evaluation for all EPA constructs on preceptor evaluation of student (pHolm<.01). Reliability was high for both instruments, as indicated by Cronbach’s α of 0.98 for the Preceptor Evaluation of Student instrument and 0.95 for the Student Self-Evaluation instrument. Mann-Whitney U sub-analyses did not reveal statistically significant differences between preceptor evaluation of student and student self-evaluation at the midpoint or final (Appendix 1).
When examining results by practice setting, some differences emerged. In comparing final student self-evaluations between community and health system experiences, analyses revealed significantly higher ratings on community experiences for EPA2 (Z=-3.4, pHolm=.01), EPA9 (Z=-2.9, pHolm=0.5), and EPA10 (Z=-3.3, pHolm=.01). On preceptor final evaluations, the Mann-Whitney U test revealed higher ratings on community experiences than on health system experiences for all EPAs except EPA2. When looking at the community setting alone (n=77), preceptor evaluations from midpoint to final increased significantly for all EPAs (pHolm<.01), and student self-evaluations increased significantly for all EPAs except EPA9 (Appendix 1). In contrast, health system preceptor evaluations from midpoint to final increased significantly only for EPA1 (Z=-3.3, pHolm=.015), and health system student self-evaluations increased significantly only for EPA9 (Z=-3.0, pHolm=.03) (Appendix 1).
DISCUSSION
The changing landscape of medical education to include EPAs for entering professional practice merits the integration of an EPA framework into the PharmD curriculum.2,3,24 As pharmacy education embraces the use of EPAs, it is imperative to collect longitudinal, real-time data that examine a learner’s journey to entrustment on professional practice-related activities because entrustment for units of practice can only be demonstrated and achieved over time.25 This study described this school’s first experience with using a set of EPA statements as an assessment tool for student pharmacist performance early in the PharmD curriculum.
As this institution began to embark upon the integration of EPAs into the PharmD curriculum, a final list of EPAs had not yet been defined for the pharmacy profession. However, two experiential programs faculty developed an institution-specific list of EPA statements that were believed to represent the daily work of a pharmacist. These EPA statements were vetted and approved by the school’s Curricular Transformation Steering Committee. After AACP published a final list of Core EPAs for Pharmacy Graduates in March 2017, this institution assessed the UNC EPAs for compatibility with the new AACP EPAs. The UNC EPAs were mapped to one of the five domains represented in the AACP EPAs based upon descriptions of each domain, the activity described in each EPA statement, and example supporting tasks. The authors believe that the UNC EPA statements describe activities and tasks that are generally in alignment with all domains identified in the AACP EPAs. Regarding the “self-developer” domain, self-assessment was used during Immersion Experience 1; other professional development items included in this AACP EPA domain are addressed in other didactic and co-curricular experiences. A summary of the EPA mapping is listed in Table 1.2
These findings are the first to describe the implementation of EPA statements as a performance assessment measure in a PharmD curriculum. The quantitative analysis of evaluations submitted by preceptors at the midpoint and conclusion of the experience may indicate the ability of the learner to progress along the continuum to becoming an independent practitioner through hands-on experience at a practice site. It cannot be ruled out, however, that the increase in student performance scores occurred because preceptors became more comfortable with the students’ abilities rather than because students demonstrated genuine improvement. Similarly, students reported confidence in performance that was in alignment with preceptor perception at the end of the experience for all EPA statements. Cronbach’s α analyses indicate high internal consistency for preceptors and students at the midpoint and final evaluations, suggesting that evaluators used the assessment tools similarly across practice settings and practice sites.
Prior to the launch of the experience, course directors considered the current status of learners in the curriculum and the foundational coursework they had completed when setting an a priori standard of entrustment of “marginal” (Level 2.0) or greater for EPAs 1, 2, 8, 11, and 12. Interestingly, a high yield of assisted, supervised, and independent (Level 3.0 through Level 5.0) ratings was returned on midpoint and final evaluations submitted by both preceptors and students. This is particularly surprising because the cohort of students had completed only one year of pharmacy coursework, and no direct pharmacotherapy content was covered during this time. Despite the preceptor and student training offered on the new evaluation metrics and scales, preceptors and learners may have used a norm-based approach to completing evaluations rather than a criterion-based methodology, which may partially explain the observed results. That is, evaluations may have been completed by comparing student performance to personal expectations of learners at a specific stage of the PharmD curriculum, or to peer performance, instead of using the objective criteria specified in the Clinical Evaluation Scale. This phenomenon is not new, but it does demonstrate the need for additional training on the use of the assessment scales.26
To improve consistency between community and health system practice experiences, course directors selected community pharmacies that offered enhanced services extending beyond conventional dispensing and basic patient education. Additionally, preceptors in both types of practice settings received the same training and were informed of the expectations of students and practice sites (ie, students were to complete 50% of the experience in operations and 50% providing clinical services). Despite these efforts, sub-analyses suggested that student pharmacists completing the practice experience in a community setting received higher performance scores for some EPAs on preceptor final evaluations and final self-evaluations than those in the health system practice setting. Due to the exploratory nature of this research, the available data are insufficient to explain why these differences occurred. These and other differences found between community and health system settings will be examined further and monitored in subsequent immersion experiences.
While the use of EPA statements at this institution yielded key learnings, there are several limitations worth noting. First, the EPA statements used were developed for use at this institution and differ from the newly published AACP Core Entrustable Professional Activities for Pharmacy Graduates.3 While this institution’s EPAs are similar in content and structure, they will likely require adaptation to allow uniformity in pharmacy education nationwide. Second, the high return of assisted, supervised, and independent scores on evaluations demonstrates the need for additional recalibration and training on the appropriate use of the proposed assessment scales. Third, the medical and pharmacy literature contains no EPA assessment data for learners early in the curriculum, limiting the comparability of this study’s data to other institutions or professions. Fourth, while face validity was obtained, full validation was beyond the scope of this exploratory study. Additional work will need to be done to fully validate the EPA statements and Clinical Evaluation Scale for widespread use in pharmacy education.
CONCLUSION
The EPA statements and Clinical Evaluation Scale were developed to introduce a criterion-based assessment of student performance into the PharmD curriculum. Results from this study indicate that EPA statements can be implemented into pharmacy curricula and can be used to categorize a learner’s level of performance on key practice-related activities. The results of this study demonstrate that these EPA statements are a reliable tool for assessing student performance at the midpoint and final evaluations of early practice experiences. Future studies should evaluate the training of practice sites and preceptors in providing criterion-based evaluations of student performance. Additionally, a new method for practice site and preceptor performance assessment is needed that is in greater alignment with the EPA framework and competency-based education.
ACKNOWLEDGMENTS
The authors would like to thank Dr. Tom Angelo of the UNC Eshelman School of Pharmacy for his guidance and recommendations in the creation of the Clinical Evaluation Scale.
Appendix 1. Tables Describing Analyses of EPA Ratings for All Students and Separately for Students in Community Practice and Students in Health System Practice.
Table 1. Comparison of Student Self-evaluation on the Midpoint and Final Evaluations for Each Entrustable Professional Activity

Table 2. Comparison of Preceptor Evaluation of Students on the Midpoint and Final Evaluation for Each Entrustable Professional Activity

Table 3. Comparison of Student and Preceptor Responses on Midpoint Evaluations for Each Entrustable Professional Activity

Table 4. Comparison of Student and Preceptor Responses on Final Evaluations for Each Entrustable Professional Activity

- Received May 12, 2017.
- Accepted September 3, 2017.
- © 2019 American Association of Colleges of Pharmacy