Abstract
Objective. To develop, implement, and validate an entrustable professional activity (EPA) assessment tool that could be used to calculate course grades for experiential students in all practice environments.
Methods. An EPA assessment tool was developed and directly mapped to 18 EPAs, and a criterion, or passing score, for each EPA was established for all practice experiences. The EPA assessment tool was implemented in the college’s experiential program during summer 2018 and comparative outcomes and reliability of the EPA assessment tool were assessed within the core advanced pharmacy practice experiences (APPEs).
Results. The EPA assessment tool reliability was strong (Cronbach’s alpha=0.93), with preceptor-suggested grades and grades calculated using the EPA assessment tool equivalent in 95% of completed APPEs. All nonequivalent grade pairs were evenly split between calculated grades one letter grade higher and one letter grade lower than the preceptor-suggested grade.
Conclusion. The EPA assessment tool is a reliable and valid instrument for assessing EPA achievement in the APPE year. Future work should focus on determining the longitudinal utility of the EPA tool by comparing outcomes in introductory and advanced pharmacy practice experiences.
INTRODUCTION
As pharmacy moves toward provider status, pharmacy educators must ensure that pharmacy graduates can competently perform activities and contribute as a member of the health care team. Stakeholders expect pharmacy graduates to demonstrate and apply requisite knowledge and skills to direct patient care. In response to anticipated stakeholder demand, a core list of entrustable professional activities (EPAs) for pharmacy graduates was developed by the American Association of Colleges of Pharmacy (AACP).1-3 These core EPAs are essential activities that all pharmacy graduates, regardless of the setting in which they intend to practice, must be able to perform without direct supervision.1
In addition to the traditional use of objectives to measure knowledge and progression toward competencies, assessment using EPAs has become necessary to determine students’ preparedness to autonomously perform pharmacist duties. Core EPAs have been shown to be valid and pertinent to pharmacy practice.4 Studies have shown that core EPAs are consistently rated as relevant to most practice settings and are important for determining a student’s readiness for practice.5,6 Students should perform these activities at certain levels of entrustment based upon the depth and maturity of their knowledge, skills, and attitudes as they progress through the professional program.2 However, there is minimal research available that describes EPA assessment in pharmacy education. Scott and colleagues found that quantifying the use of EPAs with supporting tasks within the practice manager domain was a useful method of assessment.7 Entrustable professional activities incorporated into an assessment tool for practice experiences completed by first-year student pharmacists demonstrated a significant increase in the performance of EPAs from midpoint to final evaluations, suggesting student growth.8 Studies reporting on the psychometric properties of EPAs are few; however, those that have been published report moderately strong inter-rater reliability and good content validity.9
While the body of evidence supporting use of EPAs is growing, there is a need to establish a valid approach for their assessment during practice experiences. To help meet the need for a validated instrument to assess EPAs throughout the experiential curriculum, an assessment tool was developed to determine a student’s level of entrustment during each practice experience; calculate scores and letter grades based on student EPA performance; and capture data for personal and professional development (PPD). This paper describes the development and pilot implementation of the EPA assessment within the curriculum’s core advanced pharmacy practice experiences (APPEs), which include general medicine, ambulatory care, institutional, and community settings.
METHODS
During fall 2016, the University of Louisiana Monroe (ULM) College of Pharmacy designed a new experiential assessment tool that incorporated the evaluation of EPAs.3 Tool development focused on content validity, criterion validity, and convergent/discriminant evidence.10 To ensure content and construct validity, 14 EPAs that had been established by the AACP Academic Affairs Committee were included in the tool.1 Four program-specific EPAs (Table 1) were also developed and included to ensure all college objectives were represented. Each EPA was listed on the assessment tool (Appendix 1), along with examples of supporting tasks, expected level of entrustment, and feedback fields. The assessment tool also contained four PPD sections (self-awareness, leadership, innovation, and professionalism), and one section for overall feedback. To determine criterion validity, preceptors were asked to provide a suggested letter grade for each student’s performance; however, the grade was hidden from the student’s view and not factored into the student’s actual grade.
Table 1. Doctor of Pharmacy Students’ Expected Levels of Entrustment on AACP and Program-Specific EPAs by Professional Year Milestones
Development and implementation of the EPA tool were led by faculty members from the Office of Experiential Education (OEE). The Pharmacy Practice Experience (PPE) Committee, the Curriculum and Assessment Committees, and all active preceptors were consulted for input throughout the process. First, the PPE Committee, which included eight faculty members and two non-faculty preceptors representing all core practice experiences, established expected levels of entrustment for each professional year milestone (Table 1) using a modified Angoff method.10-13 Expected levels of entrustment were determined for each EPA, with the minimum for APPE students being level 3 per the AACP recommendation that all students should achieve this level upon graduation.2 Second, a sample of faculty and non-faculty preceptors were polled to ensure that opportunities for students to perform the EPAs at the established levels of entrustment were achievable at their practice sites.
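Because the committee’s exact aggregation procedure is not reproduced here, the following Python sketch is illustrative only: it assumes each panelist proposes the level of entrustment (1-5) that a borderline-acceptable student should reach on a given EPA at a given milestone, and that the expected level is set to the rounded median of those judgments. The panel data and the aggregation rule shown are hypothetical.

```python
import statistics

# Hypothetical modified Angoff-style aggregation: each PPE Committee member
# proposes the level of entrustment (1-5) a borderline-acceptable APPE student
# should reach on an EPA; the expected level is the rounded median of the votes.
panel_judgments = {
    "EPA 1": [3, 3, 4, 3, 3, 4, 3, 3, 4, 3],
    "EPA 2": [4, 3, 4, 4, 3, 4, 4, 4, 3, 4],
}

expected_levels = {epa: round(statistics.median(votes))
                   for epa, votes in panel_judgments.items()}
print(expected_levels)  # {'EPA 1': 3, 'EPA 2': 4}
```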
Historically, ULM students have received letter grades on practice experience evaluations; therefore, the assessment tool (Appendix 1) was required to calculate scores, which were later translated into grades. Each EPA had six options from which the preceptor could select: five levels of entrustment and a not applicable (N/A) option. Students who met or exceeded the expected performance level for a given EPA were awarded 100% of the points for that EPA. Students who did not meet the expected level for an EPA were awarded a proportional percentage of the points for that EPA. For example, if the expected performance level for an EPA was four, a score of three would result in the student receiving 75% of the points. If a student had no opportunity to demonstrate performance on a given EPA, the item was marked N/A. Any EPA marked as N/A did not contribute toward the student’s overall grade for the practice experience. Professionalism was either demonstrated or not demonstrated during the practice experience; a “no” rating for professionalism on the final assessment resulted in automatic failure of the practice experience. Scores for the applicable EPAs and professionalism were equally weighted and combined to arrive at the final grade. The areas of self-awareness, leadership, and innovation were evaluated but not scored. Students were rated as “novice,” “approaching proficiency,” or “demonstrates proficiency” for each of these personal growth areas.
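To make the scoring rule concrete, the sketch below implements the proportional-credit calculation in Python. The function and data structure names are our own, and letter-grade cutoffs are omitted because they are program specific; only the proportional-credit, N/A, and professionalism handling follow the description above.

```python
from typing import Dict, Optional

def practice_experience_score(ratings: Dict[str, Optional[int]],
                              expected: Dict[str, int],
                              professionalism_demonstrated: bool) -> Optional[float]:
    """Illustrative sketch of the EPA scoring rule: each applicable EPA is
    weighted equally, earns full credit at or above its expected level of
    entrustment, and earns a proportional fraction of credit below it."""
    if not professionalism_demonstrated:
        return None  # a "no" professionalism rating means automatic failure
    applicable = {epa: r for epa, r in ratings.items() if r is not None}  # drop N/A items
    fractions = [min(rating / expected[epa], 1.0) for epa, rating in applicable.items()]
    return 100 * sum(fractions) / len(fractions)

# Worked example from the text: expected level 4, observed level 3 -> 75% for that EPA
print(practice_experience_score({"EPA 1": 3}, {"EPA 1": 4}, True))  # 75.0
```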
The proposed assessment tool was beta-tested by select faculty preceptors from various practice settings who provided input on the tool’s design and utilization. Changes were made based on this feedback, resulting in the final version of the instrument. Prior to pilot implementation in May 2018, training on EPAs, milestone expectations, definitions of levels of entrustment, and utilization of the new assessment tool was provided to preceptors (n=397). Training was administered via live seminars at professional meetings, email and telephone correspondence, or through one-on-one educational opportunities. Prior to APPE onset, students were informed about EPAs and mandatory activities that provided additional opportunities for demonstrating entrustment.
Data from May 2018 to May 2019 were collected for eight APPE blocks. All data points that were marked as N/A were treated as missing values within the data file and were replaced through regression imputation to address potential listwise loss of cases during statistical analysis.14 Descriptive statistics were reported for each student demographic and for performance on each of the EPAs. Implementation was assessed using multi-factor analysis of variance (ANOVA). The association of the dependent variable (EPA performance) with various independent variables (APPE type, APPE block, student gender, student ethnicity, student age, and grade point average [GPA] at the end of the third academic year) was analyzed. Bonferroni post hoc analyses were performed where appropriate. Comparisons of EPA assessment reliability between core practice experiences (test-retest reliability) and assessment of internal consistency were made using Cronbach’s alpha. Criterion validity and convergent/discriminant evidence were assessed by comparing preceptor-suggested grades and grades generated by the EPA assessment (chi-square and hit rate).15
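A minimal analysis sketch in Python is shown below, assuming a long-format data set with one row per completed APPE and hypothetical column names; it mirrors the imputation, multi-factor ANOVA, reliability, and grade-agreement steps described above rather than reproducing the actual analysis code.

```python
import pandas as pd
import pingouin as pg
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy.stats import chi2_contingency
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

df = pd.read_csv("appe_epa_scores.csv")  # hypothetical file: one row per completed APPE
epa_cols = [c for c in df.columns if c.startswith("epa_")]  # EPA rating columns

# Regression-based imputation of EPA items marked N/A (stored as missing values)
df[epa_cols] = IterativeImputer(random_state=0).fit_transform(df[epa_cols])
df["epa_mean"] = df[epa_cols].mean(axis=1)

# Multi-factor ANOVA of EPA performance on APPE and student characteristics
model = ols("epa_mean ~ C(appe_type) + C(block) + C(gender) + C(ethnicity) + age + gpa",
            data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Internal consistency across the EPA items
alpha, ci = pg.cronbach_alpha(data=df[epa_cols])
print(f"Cronbach's alpha = {alpha:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")

# Agreement between preceptor-suggested and tool-calculated letter grades
hit_rate = (df["preceptor_grade"] == df["calculated_grade"]).mean()
chi2, p, _, _ = chi2_contingency(pd.crosstab(df["preceptor_grade"], df["calculated_grade"]))
print(f"Hit rate = {hit_rate:.0%}, chi-square p = {p:.3g}")
```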
RESULTS
One hundred forty-two preceptors completed evaluations between May 2018 and May 2019: 22 general medicine preceptors (27% were faculty members), 19 ambulatory care preceptors (47% were faculty members), 36 institutional preceptors (none were faculty members), and 65 community preceptors (none were faculty members). Ninety (63%) preceptors were female. Ninety-two students completed the 428 APPE blocks assessed in this analysis. One hundred thirty-nine general medicine, 104 ambulatory care, 93 institutional, and 92 community APPE blocks were completed. During this study, 7704 data points were collected, of which 704 (9%) were recorded as N/A. No students from this cohort required remediation. Of the 92 students, 70.7% were female and the majority identified their race/ethnicity as Caucasian (71.8% Caucasian, 14.1% African American, and 14.1% other). The mean age was 25.2±2.6 years, and the mean GPA was 3.37±.40.
Multi-factorial modeling included the main effects of APPE type, APPE block, student gender, student ethnicity, student age, and GPA. The model was predictive of EPA outcomes (p<.001). All variables except ethnicity were contributory and retained in the model. A comparison of student EPA outcomes by APPE type is presented in Table 2. General medicine and ambulatory care assessments of student EPA outcomes were similar (mean difference=-.03, 95% CI=-.09 to .03; p=1.0). Community assessment scores were generally higher than institutional assessment scores (mean difference=.12, 95% CI=.06 to .19; p<.001). In addition, both community and institutional APPEs rated outcomes higher than either ambulatory care (mean differences of .53 and .41, respectively) or general medicine APPEs (mean differences of .56 and .44, respectively). All pairwise comparisons were significant at p<.01. Post hoc analysis of rotation block sequence showed no differences in EPA outcomes. There was a significant association (p<.05) of certain EPAs with student gender, student age, and GPA. In general, female students outperformed male students (specifically, female students scored higher on EPAs 1, 2, 3, 5, 7, 9, 10, 13, and 15). A negative correlation between student age and score was found for EPAs 3 and 9, and a positive correlation between student GPA and score was found for EPAs 2, 5, and 16.
Table 2. Doctor of Pharmacy Students’ Performance on Entrustable Professional Activities as Measured Using a Novel Assessment Instrument
Test-retest reliability was strong (Cronbach’s α=.93, p<.001). Similarly, internal consistency was high (Cronbach’s α=.97, p<.001). Preceptor-suggested grades and grades calculated by the EPA assessment tool were congruent in 407 (hit rate=95%, p<.001) of the 428 completed APPEs. Twenty-one grade disagreements (5%) were observed. Grade disagreements were evenly distributed, with 11 calculated grades one letter grade higher than the preceptor-suggested grade and 10 one letter grade lower. No grade disagreement resulted in a student receiving a passing or failing grade when the preceptor indicated the opposite.
DISCUSSION
We designed and implemented a comprehensive tool to evaluate the AACP committee-established EPAs,2 our program-specific EPAs, and the PPD soft skills. Our findings were similar to those of other studies in that scores generated from the tool across the APPE year were associated with APPE type, APPE block, GPA, student gender, and student age.16-22 Consistent with Fay’s work, scores given by our faculty preceptors were lower than those given by non-faculty preceptors.23 In our program, faculty members precept only general medicine APPEs (27% of those preceptors were faculty) and ambulatory care APPEs (47% were faculty), which could explain the finding that non-faculty community and institutional preceptors tended to score students higher. Although a positive and expected correlation between GPA and EPA scores was observed, some of our results may warrant further investigation. Our findings suggested that women performed better on clinical APPEs than men, which had been demonstrated previously by Riese and Haist.20,21 Student age was found to be negatively correlated with EPA outcomes, which has been seen in other areas of medical education.24-26
Strong internal consistency and test-retest reliability suggest the tool may provide a mechanism for consistent assessment of EPAs. Additionally, the findings that EPA tool outcomes were highly congruent with preceptor-suggested grades and produced no pass/fail disagreements lend strength to the validity of the tool. Factors contributing to this successful pilot implementation included the incorporation of required activities that gave students additional opportunities to demonstrate entrustment in a variety of settings. Furthermore, assessment of EPAs demonstrated consistent achievement of the expected levels of entrustment at the onset of and throughout the APPE year, which could imply students were adequately prepared in advance of APPEs. Future investigation should focus on refining the tool’s utility for assessing growth in students within specific APPE blocks and for determining longitudinal growth throughout pre-APPE simulation-based laboratory activities and introductory and advanced pharmacy practice experiences.27
This study has several limitations. A relatively large percentage of individual EPAs were marked as N/A, which was addressed using a valid method of score imputation.14 Also, the study analyzed a single year of APPE data from one program; therefore, the generalizability of our findings may be limited. Though our method of assessing APPEs using EPAs seems to be reliable while providing a mechanism for delivering meaningful feedback, we recognize the need for further refinement of the tool. As further guidance for best practices emerges in the area of EPA assessment, we hope to enhance our methods by using a more holistic and prospective approach to ensure the practice-readiness of our pharmacy students.28
CONCLUSION
This pilot study describes the development and implementation of an EPA assessment tool during a single APPE year. The test-retest reliability of the instrument was strong and internal consistency was high. Outcomes demonstrated high congruency with preceptor-suggested grades and no pass/fail disagreements. Though further investigation is needed for tool refinement and longitudinal application from introductory to advanced pharmacy practice experiences, this tool may be adapted by other programs in the development of EPA assessment methods.
ACKNOWLEDGMENTS
The authors thank all preceptors involved in beta-testing the evaluation tool. The authors also thank Jeffery Evans, PharmD, Seetharama Jois, PhD, Savannah Posey, PharmD, Laurel Sampognaro, PharmD, Paul Sylvester, PhD, and Jamie Terrell, PharmD, for review and feedback during the editing of this manuscript.
Appendix 1. APPE Assessment Tool Example Excerpts