Abstract
Objective. To describe the design and evaluation of a program implemented to ready clinical faculty members to use entrustable professional activities (EPAs) for teaching and assessment in experiential education.
Methods. The school adopted a set of EPAs for faculty members to implement in advanced pharmacy practice experiences (APPEs), and then delivered a two-session faculty development program to ensure faculty members’ readiness to implement the EPAs. To determine the success of the faculty development program, qualitative analysis of the moderated discussion held during the program was conducted, post-program and follow-up surveys were administered, and the results of the pilot implementation of EPAs were analyzed.
Results. Eleven faculty members participated in the development program, and 10 of them completed the pilot implementation of EPAs and the follow-up survey after completing three APPE blocks. On both the post-program and follow-up surveys, all faculty members responded that the program had prepared them to apply what they learned about EPAs in their practice setting. In the follow-up survey, 80% of faculty members reported that they were confident they were correctly applying what they had learned from the program to the EPA pilot, and 100% answered a hypothetical application question correctly. Forty students participated in the pilot implementation of EPAs. Of these, 95% were directly observed by faculty members before an entrustment decision was made, and 100% received feedback on their performance.
Conclusion. The faculty development program was effective in preparing faculty members to implement the EPA framework in experiential teaching and use the entrustment rubric.
INTRODUCTION
Entrustable professional activities (EPAs) were first introduced in the medical literature in 2005 as a method of connecting competency-based education with the professional practice expected of graduating medical residents.1 The EPA framework defines increasing levels of independence and accountability as students gain more experience in performing the clinical work that constitutes the profession of pharmacy.2 The American Association of Colleges of Pharmacy (AACP) Academic Affairs Committee developed 15 core EPAs that all pharmacy students should be entrusted to perform prior to graduation.3 In concert with developing EPA statements, the committee proposed a 10-step implementation roadmap.4 Since that time, despite increased adoption of EPAs in schools and colleges of pharmacy, there has been a paucity of literature on model approaches for fulfilling some steps of the roadmap, particularly around piloting the assessment of select EPAs with students (Step 7) and training faculty members (Step 8). In their commentary, Jarrett and colleagues underscore the importance of faculty development and advise giving faculty members practice applying entrustment levels to different student scenarios.5 No articles in the pharmacy literature specifically describe such faculty development programming.
As medicine was the discipline that pioneered EPAs, the medical literature reports on aspects of EPA curricular integration that are pertinent to faculty development, including the importance of creating a shared mental model, or collective understanding of a common framework for teaching, evaluating, and arriving at entrustment decisions.6 In addition, faculty development should address techniques for direct observation and provision of feedback to students, as well as the diverse development needs of all stakeholders who engage in EPA learning and assessment.7,8 While these articles contain general recommendations regarding good practices for faculty development for EPAs, only one article provides a specific approach by reporting on a development program's activities, including the use of sample student scenarios, and faculty evaluation of the program.9
To address the lack of descriptive approaches to faculty development for EPAs in the pharmacy literature, this article describes the design and evaluation of a program implemented to ready clinical faculty members to use EPAs for teaching and assessment in experiential education.
METHODS
The Doctor of Pharmacy (PharmD) program at Fairleigh Dickinson University responded to the need to re-examine its current experiential program delivery and to explore an EPA framework for curricular enrichment. After attending the AACP 2018 Spring Institute to learn about EPAs and an implementation roadmap, a working group of faculty members, preceptors, and administrators was charged with integrating EPAs into the curriculum. The group proposed a plan for EPA adoption, with the initial steps being to adopt the EPAs provided by AACP verbatim, pilot a subset of EPAs in faculty-led experiential courses, and deliver a meaningful faculty development program to ensure faculty members' readiness to pilot the implementation of EPAs.3 The group planned to use insights from the pilot implementation to inform an eventual rollout of a full EPA program.
The group recommended that the faculty development program held prior to the pilot implementation of EPAs in the curriculum occur over two sessions. This recommendation was based on evidence that repetition of concepts spaced in time, with the additional dimension of application, improves learning and retention.10,11
The first session (1.5 hours) offered a general overview of the EPA framework. The session contextualized how EPAs relate to existing outcome frameworks (namely, the Center for the Advancement of Pharmacy Education outcomes and the Pharmacists' Patient Care Process), and introduced a five-level entrustment rubric (Figure 1).12 Interactive polling questions assessed the audience's comprehension. The EPA pilot program was introduced at the conclusion of the first session. Three EPAs were launched for the pilot program: collect, assess, and information master. These EPAs best aligned with the types of APPEs (ie, ambulatory care and acute care) selected for the pilot implementation of EPAs.
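For readers who want a concrete handle on the rubric's structure, a minimal sketch follows. Only the five-level, ordered structure is taken from the text above; the level labels are illustrative assumptions modeled on commonly published entrustment-supervision scales, and the exact anchors used in the program are those shown in Figure 1.

```python
from enum import IntEnum

class EntrustmentLevel(IntEnum):
    """Illustrative five-level entrustment rubric. Labels are assumptions
    based on commonly published entrustment-supervision scales; the rubric
    actually used in the program is shown in Figure 1."""
    OBSERVE_ONLY = 1          # student observes the activity without performing it
    DIRECT_SUPERVISION = 2    # performs with the supervisor physically present
    REACTIVE_SUPERVISION = 3  # performs with the supervisor available on request
    MINIMAL_SUPERVISION = 4   # performs independently with distant oversight
    SUPERVISES_OTHERS = 5     # able to supervise more junior learners

# IntEnum keeps the levels ordinal, so ratings can be compared directly.
assert EntrustmentLevel.MINIMAL_SUPERVISION > EntrustmentLevel.DIRECT_SUPERVISION
```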
Figure 2. Clinical Faculty Entrustment Ratings Per Case (From a Moderated Discussion During the Second Session, N=11)
The second session (4 hours) was case-based. Six cases were developed by the authors in consultation with the director of experiential education. Each case described a student performing EPA supporting tasks in a specific clinical context. The second session had two goals: to give faculty members experience discerning between levels of entrustment, and to allow for collaborative examination of the factors leading to variability in ratings.
Faculty members were given the cases and instructions in advance and asked to apply the entrustment rubric while working independently. During the session, faculty members posted their trust rating for each case on the whiteboard. After all ratings were revealed, a structured, moderated discussion ensued, allowing faculty members to justify their ratings and propose action plans.
The session concluded with an overview of the pedagogy and implications of EPAs in experiential education. Key themes discussed during the moderated portion of the session included providing ample opportunities to hone each EPA domain, sharing timely and formative feedback for improving performance, and focusing on attainment of the ability to competently perform a work task with minimal supervision.
The pilot was launched with the fourth-year class starting their APPEs in May 2019; data for the first three five-week APPE blocks were collected and analyzed.
Outcome Measures and Statistical Analysis
Data were collected from clinical faculty members from several sources. Individual faculty members' trust ratings for all cases were recorded, and verbal rating justifications were transcribed.
Two survey instruments were administered: a post-program survey completed immediately after the second session and a follow-up survey administered approximately 15 weeks after the start of the pilot implementation of EPAs, by which time three APPE blocks had been completed. The post-program survey ascertained faculty members' perceived readiness to use EPAs in APPEs and their opinions about aspects of program quality. The follow-up survey contained similar items for comparison and to monitor faculty information needs (seven items, multiple-choice and open-ended questions); additionally, three items were incorporated to assess faculty members' factual knowledge and application of key concepts introduced during the sessions.
Actual EPA ratings and faculty-provided rating justifications from the pilot implementation of EPAs were also examined. Using the entrustment rubric, faculty members assigned ratings at the midpoint and at the end of each APPE block for each student they precepted; students rated at level 3 or higher by the end of an APPE block were deemed entrusted. Faculty members also recorded a rationale for each rating and answered questions about patient complexity, clinical context, and whether they had directly observed the student performing the EPA. The EPA ratings were not used in the current APPE grading structure.
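To make the pilot's decision rule concrete, here is a minimal sketch under stated assumptions: the record structure and function names are hypothetical, but the rule mirrors the one described above, in which a student is deemed entrusted on an EPA when the end-of-block rating reaches level 3 on the five-level rubric.

```python
from dataclasses import dataclass

@dataclass
class EpaBlockRating:
    """Hypothetical record of one student's rating on one EPA in one APPE block."""
    epa: str               # eg, "Collect", "Assess", or "Information Master"
    midpoint_level: int    # 1-5 on the entrustment rubric, assigned at midpoint
    end_level: int         # 1-5 on the entrustment rubric, assigned at block's end
    directly_observed: bool
    rationale: str

ENTRUSTMENT_THRESHOLD = 3  # level 3 or higher by the end of a block => entrusted

def is_entrusted(rating: EpaBlockRating) -> bool:
    """Apply the pilot's rule: the end-of-block rating, not the midpoint
    rating, determines whether the student is deemed entrusted."""
    return rating.end_level >= ENTRUSTMENT_THRESHOLD

# Example: a student who progresses from level 2 at midpoint to level 3 by block's end.
example = EpaBlockRating(
    epa="Collect",
    midpoint_level=2,
    end_level=3,
    directly_observed=True,
    rationale="Performed medication histories with reactive supervision.",
)
print(is_entrusted(example))  # True
```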
All survey data and data from the pilot implementation of EPAs were collected using Qualtrics (Qualtrics XM, Provo, UT). Notes from the moderated case discussion, open-ended survey responses, and implementation data were analyzed using the constant comparative method, the most common method for analyzing qualitative data, which involves repeatedly reading through a set of data and grouping and regrouping individual pieces of data into categories to create a coding scheme that addresses the research inquiry.13,14 (Surveys and training cases are available from the first author upon request.) The Fairleigh Dickinson University institutional review board determined that this project did not meet the criteria for human subject research and was therefore exempt.
RESULTS
The majority of the clinical faculty members who teach experiential courses participated in the faculty development program and completed the post-program survey (n=11/12, 92%). Ten faculty members completed the follow-up survey and participated in the pilot implementation of EPAs (n=10/11, 91%). All faculty members had been employed by the school for at least one year (range one to five years).
A summary of the distribution of faculty members' ratings for the six cases is presented in Figure 2. For some cases, there was near agreement on the entrustment level; for others, faculty members varied in their evaluation of a student's ability to perform a task with reactive supervision. Each faculty member contributed to a rich discussion justifying the entrustment levels (Table 1).
Table 1. Key Themes Related to Justification of Entrustment Level During a Faculty Development Program on Using Entrustable Professional Activities in Teaching and Assessing Doctor of Pharmacy Students
On both the post-program and follow-up surveys, all faculty members responded that the program had prepared them to apply what they learned about EPAs in their APPE practice setting. In the follow-up survey, 80% of faculty members reported that they were extremely confident or confident that they were correctly applying what they learned from the program to the EPA pilot (Table 2).
Table 2. Evaluation of a Faculty Development Program on Using Entrustable Professional Activities in Teaching and Assessing Doctor of Pharmacy Students
Half of the faculty members reported that the most important concept learned from the program was gaining a greater awareness of the distinctions between entrustment rubric levels. On the follow-up survey, 30% of faculty members reported being challenged by the additional time involved in explaining EPAs to their students. After the pilot implementation, more faculty members expressed concern about how EPAs would be used in summative evaluation decisions (10% on the post-program survey and 30% on the follow-up survey). The remainder of the open-ended responses did not have a unified theme and are not included here.
Faculty members’ perspectives on aspects of program quality are presented in Table 2. One area for improvement that was suggested by 30% of the faculty members was to quicken the pace of the program.
Three multiple-choice knowledge and application questions about EPAs on the follow-up survey were evaluated for answer correctness (Table 2). One of the three questions proved challenging to half of the faculty members; nevertheless, all faculty members answered the application question correctly.
Further, data from the pilot verified that faculty members applied the concepts learned in this program when assigning EPA ratings during the pilot implementation of EPAs in APPEs at their clinical sites. Forty students completed the pilot; 95% of students were directly observed by faculty members at the midpoint and end of the APPE before an entrustment decision was made. One faculty member reported creating additional learning opportunities for a student in order for the student to meet an EPA.
The results of the pilot implementation of EPAs revealed variability in the complexity of patients and experiences students encountered in clinical practice, mirroring the simulated scenarios used during the program. Rating justifications highlighted the spectrum of contexts in which entrustment decisions were made. Some faculty members based their ratings on performance on a few specific assignments, while others evaluated a more holistic continuum of patient care activities related to an EPA domain.
DISCUSSION
Only one publication outlining a faculty development program about EPAs was identified in the medical literature. Sood and colleagues described a 90-minute program that prepared faculty members to embrace the EPA framework as a useful tool in assessing medical trainees.9 The program incorporated didactic presentation of material and case-based exercises, and covered content applicable to the medical field and generalizable to broader audiences. In developing our program, we used some components of the program developed by Sood and colleagues, but also expanded the application exercises, developed pharmacy-relevant cases, and extended the program to two consecutive sessions with discrete objectives. Additionally, we evaluated the program immediately after delivery and after three APPE blocks had been completed to identify how it was applied in the clinical education environment, as well as to highlight some additional learning needs and rollout modifications identified during the pilot implementation of EPAs.
Program cases were purposefully constructed to address the variable complexity of patients that may be encountered in clinical practice. As expected, more complex cases resulted in more divergent ratings during the program, as cases were intentionally designed to generate discussion around the various considerations that go into entrustment decisions and to attempt to reach consensus. As the EPAs were implemented, we observed the extent of diversity in the types of comorbidities and the acuity of the patients that students were working up, even within similar clinical settings. Through this program and the pilot implementation of EPAs, we confirmed the need to revise the EPA supporting tasks to better define what was expected of students in their fourth professional year, as suggested by ten Cate.15 While faculty development addressed many issues regarding variability in rating, the need to guide faculty members on how to make sound entrustment decisions in uneven clinical contexts remains.16
Based on the two surveys, faculty members responded positively to this development program immediately after its conclusion and used its content during the pilot implementation of EPAs. Overall, faculty members felt confident in their ability to apply the entrustment rubric, and many commented that the most important aspects of the program were the case discussions and hearing other faculty members' perspectives and rationales for their ratings. Faculty members also felt they gained a clearer understanding of EPA statements and supporting tasks from participating in the faculty development program.
More faculty members expressed concern about how EPAs would be used in summative decisions after they had worked with EPAs in the clinical environment. The faculty development program did not address several important policy questions that would need to be answered prior to full implementation. The working group was relying on the results of the pilot implementation of EPAs to inform some policy recommendations, which would be communicated in ongoing programming.
Thirty percent of faculty members reported that students or non-faculty preceptors were not well informed about the concept of EPAs. The school had communicated with students about the pilot implementation prior to the start of the APPE cycle and held a preceptor forum on the same topic. This communication was likely insufficient and will be improved once the EPAs are fully adopted by the school. Lack of preceptor knowledge about EPAs is a challenge that schools nationwide are facing. Based on our experience, we recommend abbreviated preceptor training sessions that include a case-based discussion component. Consideration should be given to using technology to deliver the training to the wide range of school-affiliated preceptors.
Faculty members directly observed their students prior to rating them using the entrustment rubric in 95% of cases. Additionally, all students received feedback regarding their performance. Faculty members' justifications for their ratings included the level of supervision the student required and their trust in the student's ability to perform EPA tasks with reactive supervision. These justifications clearly showed that faculty members applied concepts learned during the faculty development program in their experiential courses.
Only one faculty member reported creating additional learning opportunities for a student in order for the student to meet an EPA. This may suggest adequate preparedness and student progression through the APPE experience. More likely, however, it reflects a limitation of the “fixed-time” model, which relies on a predetermined number of encounters during a five-week timeframe (eg, 10 clinical interventions) rather than on an undefined, flexible number of repeated experiences to achieve EPA mastery (eg, performing as many clinical interventions as needed to meet the required EPA entrustment level). As the pilot implementation addressed only two EPAs per APPE type, the need to generate more practice opportunities and scaffold them based on students' learning needs may become more pressing when all relevant EPAs are rolled out. Additionally, because the pilot EPA program was implemented concurrently with an existing pedagogical approach (ie, a set number of specific assignments per APPE), it was not possible for faculty members to shift to purely competency-based, time-independent teaching and assessment.
Our program focused on assessing students using the EPA framework. An additional insight after program completion was the need to emphasize how teaching approaches would change when an EPA-based curriculum is implemented. Moreover, we concur that Step 8 of the EPA implementation roadmap is essential and, based on our experience, we recommend ongoing faculty development that focuses on clearly delineating EPA tasks and expected levels of student performance on APPEs, and that continues to shift the current teaching paradigm.
This study had several limitations. Our study population was small; however, it involved greater than 90% of our clinical faculty members. Another limitation is that the program evaluators designed and led the program, which could be a source of social desirability bias (where research subjects feel compelled to provide responses that will be viewed favorably by others rather than sharing their true opinions). Finally, our study population was limited to clinical faculty members. A version of the program expanded to include non-faculty preceptors would need to be scaled down in time and content to achieve similar outcomes. We would need to evaluate which components of the program were the most impactful and retain those, and explore whether virtual delivery is essential to reach the large number of school-affiliated preceptors.
CONCLUSION
Our faculty development program was effective in preparing faculty members to use an entrustment rubric and teach and evaluate students during the pilot implementation of EPAs. We plan to deliver similar programming to preceptors, keeping successful elements while addressing the lessons learned from delivering the program to clinical faculty members.
ACKNOWLEDGMENTS
We would like to acknowledge the clinical faculty members who participated and provided their time and effort during this study.
Received October 22, 2019.
Accepted April 17, 2020.
© 2020 American Association of Colleges of Pharmacy