Abstract
Objective. Fidelity metrics can provide insight into the extent to which experiential programs are implemented as designed. The UNC Eshelman School of Pharmacy at the University of North Carolina at Chapel Hill implemented a new curriculum that included a series of three two-month introductory pharmacy practice experiences (IPPEs). The objective of this study was to design a logic model for the first IPPE in the series, identify key variables for program implementation, define fidelity indices and benchmarks, and compute a single fidelity score for each IPPE site.
Methods. Data were collected from the course syllabus, learning outcomes, assignments, and evaluations from students and preceptors for 50 sites that had hosted 147 students for IPPEs. A logic model was defined to describe the inputs, activities, outputs, and outcomes of the IPPE. Data were reviewed for key variables and measures to include in the fidelity framework, and a fidelity score was then generated for each site.
Results. Twenty-four variables were identified across the three components deemed critical for experience implementation (ie, patient care activities, preceptor compliance, and overall site training and evaluation). The mean fidelity score for all sites was 59.1% (SD 16.4%).
Conclusion. A logic model and fidelity framework provided an objective method to assess the extent to which practice sites delivered the IPPE course. This work could be used by schools as a basis for individualizing quality assurance efforts.
Keywords
- implementation fidelity
- clinical education
- experiential education
- introductory pharmacy practice experience
- performance assessment
INTRODUCTION
Experiential education represents a significant component of degree programs for health professionals. The Accreditation Council for Pharmacy Education (ACPE) sets standards for the didactic and experiential education of student pharmacists. However, pharmacy accreditation standards do not articulate specific requirements for quality assurance methods in experiential education.1 Because the experiential curriculum typically occurs at external sites (eg, community pharmacies, clinics, practices, hospitals), schools and colleges of pharmacy are tasked with determining the quality of the practice experiences delivered to learners. Pharmacy practice sites have historically been evaluated through site visits or through the solicitation of student feedback at the conclusion of the practice experience. Some institutions focus quality assurance efforts on preceptor development and training without completing a full evaluation of practice site performance.2-5
Site visit findings alone, or reliance on student-reported perceptions of a practice site, provide limited objective data by which to evaluate how a practice experience is delivered to learners. Educational programs in the health professions are now encouraged to integrate competency-based goals and objectives into curricula. This movement is evidenced by recently revised ACPE standards, the development of core entrustable professional activities for medical and pharmacy graduates, and a call to immerse learners in direct patient care early and frequently throughout the curriculum.1,6-9 With such changes occurring, it is imperative for school faculty and administrators to leverage a criterion-based method to assess student, preceptor, and practice site performance.10
Literature on the assessment of experiential education has concentrated primarily on reporting program design, trends in program components or requirements, preceptor training endeavors, and the use of evaluations of student performance (by preceptors or by students themselves) and site visits.2,11-13 The Academy has not yet identified a single method for accurately and objectively assessing practice site quality. An implementation fidelity framework can provide a criterion-based quality assurance model for experiential education curricula within introductory and advanced pharmacy practice experiences (IPPEs and APPEs). Implementation fidelity is the extent to which a program is implemented as designed. In educational research, the implementation fidelity literature describes four primary steps to creating a fidelity model: articulation of the change model and logic model, identification of critical components, selection of fidelity variables and definition of measures, and computation and application of fidelity scores.14
At the University of North Carolina at Chapel Hill (UNC), student pharmacists complete a series of three IPPEs.15-16 The first IPPE in the series is a two-calendar-month experience that occurs in either a community pharmacy or a health system pharmacy setting. Rhodes and colleagues described assessment data generated from this IPPE, during which student pharmacists who had completed foundational coursework in the first professional year of the Doctor of Pharmacy (PharmD) curriculum were assessed on their ability to perform core entrustable professional activities (EPAs) according to a clinical evaluation scale. The syllabus for this IPPE defined EPAs 1, 2, 8, 11, and 12 as required areas of focus. Full descriptions of the UNC EPA statements have been previously published.17 The intent of this research was to expand upon our previous work and evaluate each pharmacy practice site's implementation of this IPPE as designed by the UNC Eshelman School of Pharmacy. Additionally, the authors sought to determine whether the type of practice setting influenced the fidelity score.
This project describes the creation of an implementation fidelity framework to quantify the extent to which sites implemented an experience as designed by a school of pharmacy. The objective of this study was to develop a logic model for the first IPPE within the school's curriculum, identify key variables for program implementation, define fidelity indices and benchmarks, and compute a single fidelity score for each pharmacy practice site. A secondary objective was to assess whether differences existed between fidelity scores based on pharmacy practice setting.
METHODS
This study was deemed exempt from full review by the UNC Institutional Review Board. Data were collected from experiences completed during the summer of 2016.
The first objective was to formulate a logic model, which the authors completed using methodology described in the educational research literature. The logic model needed to contain four main categories: program inputs, activities, outputs, and short- or long-term outcomes.14 Within each category, we included elements that were necessary for program implementation. Each element contained one or more specific stakeholders or tasks and was mapped to one or more elements in the successive category (ie, inputs linked to activities, activities linked to outputs, and outputs linked to outcomes). Beyond these four categories, two further sets of factors were defined: external factors (factors outside the model with the potential to influence an input, activity, output, or outcome) and assumptions (statements assumed to be true regarding the inputs, activities, outputs, or outcomes of the logic model). The logic model was approved for use by the research team, which included leaders of the Office of Experiential Programs and the Office of Strategic Planning and Assessment.
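To illustrate the structure, a logic model of this kind can be represented as linked elements across the four categories. The sketch below is a minimal, hypothetical rendering in Python; the element names are invented for illustration and are not drawn from the school's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    category: str  # "input", "activity", "output", or "outcome"
    links_to: list["Element"] = field(default_factory=list)  # next-category links

# Each element maps to one or more elements in the successive category
# (inputs -> activities -> outputs -> outcomes). Names are illustrative.
outcome = Element("Progression on an EPA statement", "outcome")
output = Element("EPA Learning Log entries", "output", links_to=[outcome])
activity = Element("Document medication histories", "activity", links_to=[output])
site_input = Element("Trained preceptor and practice site", "input", links_to=[activity])
```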
The investigators used the aforementioned logic model to develop the fidelity framework. The research team, which included the course directors, conducted a series of meetings in which the design of the course was discussed (eg, course syllabus, learning outcomes, assignments, evaluations of students, evaluations of preceptors/practice site). The investigators considered what components of this early practice experience would be critical for experience implementation. Given a lack of published evidence in this area, the investigators defined a critical component as an item that could establish an environment likely to achieve learning outcomes. For example, involvement in patient care activities would be critical to ensure that the student could enhance their skills in conducting a patient interview and documenting medication-related problems in a patient care note.
Next, specific variables were selected from the activities and outcomes described in the logic model. Variables became part of the fidelity framework if they linked directly to the course outcomes or contributed to the student's successful completion of the course. A detailed codebook was created to define how points (expressed as positive, neutral, or negative numeric values) would be allocated for each variable based on performance. Measures were based upon expectations communicated to sites and students by the school, and a measure was assigned to each variable, as defined by the codebook, for each individual practice experience (ie, each completed rotation). In general, higher performance yielded higher point allocations, while lower performance yielded lower or negative point allocations. Where possible, scores were assigned on a gradient to generate adequate spread in the fidelity score (ie, so that a difference could be detected between high-performing and low-performing sites). Negative values were reserved for situations in which failure to execute a task would have added a substantial burden to experiential programs staff or in which the student rated the quality of the site or preceptors as fair or poor. Investigators pulled relevant data from designated sources to assess each variable and assign its measure.
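As a sketch of how such a codebook might be encoded, the example below maps variables to scoring rules; the variable names, thresholds, and point values are hypothetical and do not reproduce the study codebook.

```python
# Hypothetical codebook entries; names, thresholds, and point values are
# illustrative only and do not reproduce the study codebook.
CODEBOOK = {
    # Patient care activity scored on a gradient to spread the fidelity score.
    "patient_counseling_sessions": lambda n: min(n // 5, 3),
    # Preceptor compliance: a missing required evaluation burdens program
    # staff, so it draws a negative value.
    "final_evaluation_submitted": lambda done: 2 if done else -2,
    # Student rating of site quality: "fair" or "poor" is scored negatively.
    "student_site_rating": lambda r: {"excellent": 2, "good": 1,
                                      "fair": -1, "poor": -2}[r],
}

points = [CODEBOOK["patient_counseling_sessions"](12),
          CODEBOOK["final_evaluation_submitted"](True),
          CODEBOOK["student_site_rating"]("good")]
print(sum(points))  # points earned for this experience: 2 + 2 + 1 = 5
```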
A fidelity score was calculated by dividing the total points earned for the practice experience by the total possible points, then multiplying the result by 100. When a practice site hosted multiple learners, the fidelity scores were averaged to yield one overall score per pharmacy site. Descriptive statistics were used to characterize all data. Fidelity scores are presented as the mean and standard deviation unless otherwise noted. An independent samples t test was used to assess for differences in fidelity scores between community sites and health system sites. A p value less than .05 was considered significant. Analyses were performed using SPSS, version 26 (IBM Corp).
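The calculation is straightforward to reproduce. The sketch below, written in Python with scipy standing in for the SPSS analysis, shows the scoring, the per-site averaging, and the t test; all site names and point totals are invented for illustration.

```python
from statistics import mean
from scipy import stats

def fidelity_score(points_earned: float, points_possible: float) -> float:
    """Fidelity score = points earned / total possible points x 100."""
    return points_earned / points_possible * 100

# Hypothetical (earned, possible) point totals for each practice experience,
# keyed by site; values are invented for illustration.
experiences_by_site = {
    "Community Pharmacy A": [(31, 53), (38, 53)],
    "Community Pharmacy B": [(26, 53)],
    "Health System A": [(40, 53), (33, 53)],
    "Health System B": [(29, 53)],
}
site_type = {
    "Community Pharmacy A": "community", "Community Pharmacy B": "community",
    "Health System A": "health system", "Health System B": "health system",
}

# Sites hosting multiple learners receive the mean of their experience scores.
site_scores = {site: mean(fidelity_score(e, p) for e, p in exps)
               for site, exps in experiences_by_site.items()}

# Independent samples t test comparing the two practice settings.
community = [s for site, s in site_scores.items() if site_type[site] == "community"]
health_system = [s for site, s in site_scores.items() if site_type[site] == "health system"]
t_stat, p_value = stats.ttest_ind(community, health_system)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p < .05 considered significant
```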
RESULTS
The final logic model developed for the first IPPE in the series is depicted in Figure 1. The components, variables, and measures that comprise the fidelity framework for this practice experience are presented in Table 1. Three components were deemed critical for experience implementation and were adopted into the fidelity framework: patient care activities, preceptor compliance, and overall site training and evaluation. Of the 24 variables in the fidelity framework, 11 were derived from patient care activities (worth 0 to 34 points), six from preceptor compliance (worth -6 to 10 points), and seven from overall site training and evaluation (worth -7 to 9 points). A complete example of how the fidelity framework was applied to one specific practice experience to generate a fidelity score is presented in Appendix 1.
Figure 1. Logic Model for Introductory Pharmacy Practice Experience 1
Table 1. Components, Variables, and Measures of a Fidelity Framework Implemented to Assess Quality in Experiential Pharmacy Education
Data were available from 147 practice experiences that took place at 50 practice sites (39 community pharmacy sites and 11 health system sites). The mean fidelity score for all practice sites was 59.1% (SD 16.4%).
Sub-analyses comparing fidelity by practice site type are presented in Table 2. No significant difference was found between community pharmacy and health system sites.
Table 2. Comparison of Implementation Fidelity to Assess Quality of Pharmacy Practice Sites for an Introductory Pharmacy Practice Experience
DISCUSSION
We used an implementation fidelity framework to create a standard method for assessing the extent to which an IPPE was implemented by an experiential site as designed. Implementation fidelity is a concept that has been widely used in secondary education (eg, K-12 programs)14; however, to our knowledge, this is the first study of its kind within health sciences education and the first to describe the creation of an implementation fidelity framework to quantify the extent to which sites implemented an experience as designed by a school or college of pharmacy. The investigators believed a fidelity framework could be useful for measuring the extent to which practice sites adhered to expectations regarding implementation of the IPPE, as articulated in preceptor training sessions and the course syllabus.
The increased focus on competency-based education and the required percentage of the PharmD curriculum dedicated to experiential education warrant discussion of how practice experiences are assessed. Historical mechanisms for experiential program evaluation have relied on student perceptions of their practice experience, with or without site visits from experiential faculty.1-4,18 An implementation fidelity framework can be developed to evaluate practice sites through criterion-based metrics. At our institution, we piloted an implementation fidelity framework for the first IPPE of a three-course series. Because this IPPE occurred after the conclusion of the first professional year, the course focused on exposing students to foundational activities related to the aforementioned EPA statements. Repeated exposure to an activity can provide an opportunity for a learner to raise their level of entrustment on a specific EPA statement over time. It was beyond the scope of this model to assess the quality of student learning, as this is assessed through other means in our curriculum. This study describes our institution's first attempt to develop an objective method for evaluating how well external pharmacy practice sites implement an IPPE as designed. Further, this study demonstrates the integration of data from various sources and systems for the purpose of evaluation.
Pharmacy is becoming more accustomed to criterion-based performance measures and to being held to a quality standard for patient care delivery. Using a similar approach, an experiential quality report card could be generated, providing the mean fidelity score along with the mean score for each component (eg, patient care activities, preceptor compliance, and overall site training and evaluation). Fidelity scores can be used to guide practice site and/or preceptor development. Using criterion-based performance scores, a school could leverage site visits to further develop practice sites and raise a given site's implementation fidelity score over time. At our institution, scores were not shared with the sites because of the pilot nature of the model. Additionally, some sites hosted only one student experience for the IPPE, so sharing data could have compromised the anonymity of the student.
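One minimal way such a report card could be assembled, assuming component point totals have already been tallied, is sketched below; the component totals shown are hypothetical.

```python
# A minimal report card sketch; component (earned, possible) totals are
# hypothetical. Returns per-component percentages plus the overall score.
def report_card(component_points: dict[str, tuple[float, float]]) -> dict[str, float]:
    card = {name: round(earned / possible * 100, 1)
            for name, (earned, possible) in component_points.items()}
    total_earned = sum(e for e, _ in component_points.values())
    total_possible = sum(p for _, p in component_points.values())
    card["overall fidelity"] = round(total_earned / total_possible * 100, 1)
    return card

print(report_card({"patient care activities": (24, 34),
                   "preceptor compliance": (6, 10),
                   "site training and evaluation": (4, 9)}))
# {'patient care activities': 70.6, 'preceptor compliance': 60.0,
#  'site training and evaluation': 44.4, 'overall fidelity': 64.2}
```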
Although the mean fidelity score could be perceived as poor (ie, a score below 70% would be a failure in a didactic course), our team views it as a positive because it provides a baseline for our new IPPE. While expectations about the curriculum were discussed, sites were not aware a priori of this specific fidelity framework or how it could be used for assessment. We believe this removes an element of bias, as sites used our training materials to implement the experience to the best of their ability; had they known the grading criteria, they might have implemented the experience differently to influence the results in their favor. The fidelity score may help experiential programs identify where a site struggles with implementing an experience and, as a result, allocate resources to develop the site in its area of need. Finally, experiential programs can determine the success of their efforts by tracking practice site fidelity over time. A potential area for future work is to examine how fidelity scores could be sorted into categories such as acceptable, warrants review, or requires immediate attention, as sketched below. A pharmacy school using an implementation fidelity approach could determine what constitutes an acceptable score for its institution, considering factors such as the overall curriculum, the placement of the specific experiential course within it, and the expectations communicated by the institution to students and sites.
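The cutoffs in this sketch of that categorization idea are entirely hypothetical and would need to be set by each institution.

```python
# Hypothetical cutoffs; each institution would set its own thresholds.
def triage(fidelity_score: float) -> str:
    if fidelity_score >= 75:
        return "acceptable"
    if fidelity_score >= 50:
        return "warrants review"
    return "requires immediate attention"

print(triage(59.1))  # the study's mean score would fall in "warrants review"
```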
While this study provides key learning points, limitations do exist. First, the fidelity framework presented here examines one data set generated from one course in the experiential curriculum. Expanding the study to assess additional IPPEs and APPEs will be necessary to fully examine practice site implementation fidelity. Second, additional work will be needed to link fidelity scores to overall programmatic outcomes for the school. Institutions can consider analyses that track student performance on EPA statements in both the didactic and experiential settings, aiming to determine whether high implementation fidelity at pharmacy practice sites yields higher student performance on professional practice-related activities within the didactic curriculum. Institutions can also consider whether to seek input from internal or external stakeholders on developing the fidelity framework or the codebook. This model was defined by the research team, which included leaders from the school's Office of Experiential Programs and the Office of Strategic Planning and Assessment. A fidelity framework could also be developed by integrating the perspectives of other faculty, administrators, and/or preceptors.
CONCLUSION
A logic model and fidelity framework were developed to introduce a method for criterion-based assessment of an IPPE at the UNC Eshelman School of Pharmacy. Results from this research indicate that leveraging an implementation science approach to assessing quality in experiential education is possible and may provide an objective method to assess how external sites deliver the experiential curriculum. Future studies could expand the fidelity framework to include evaluation of the entire experiential curriculum rather than a single course. Additionally, researchers should consider how the implementation fidelity of pharmacy practice sites contributes to student competence on programmatic standards for schools and colleges of pharmacy. Finally, generating a fidelity score for each site may help a school target its quality assurance efforts.
ACKNOWLEDGMENTS
The authors thank Carlos R. Melendez, PhD, for his contributions to the development and assessment of the implementation fidelity framework.
Appendix 1. Example of Fidelity Framework Application
Under the component of patient care activities, students were expected to demonstrate a progression of skill from dependent (Level 1) to marginal (Level 2) on UNC EPA8 Function 1. Students were expected to document a specific number of medication histories in the electronic health record (EHR) by the end of the practice experience. Students reported the completion of this task back to the School through an EPA Learning Log (Logic Model: Outputs > Evaluations > EPA Learning Log). Investigators exported these evaluation data from the course management software, reviewed the reported number of repetitions from the EPA Learning Log, and assigned a numeric score (measure) for the variable according to the codebook. For example, with EPA8 Function 1, the expectation was to document a minimum of 40 medication histories in the EHR. Points could be earned as follows: 0 points (0-9 documented), 1 point (10-19 documented), 2 points (20-29 documented), 3 points (30-39 documented), or 4 points (40 or more documented). This process was repeated for each variable. Once all variables were assigned a measure, the points were summed, divided by the total possible points that could have been earned, and multiplied by 100.
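For readers who prefer code, the tiered allocation above reduces to a one-line lookup; this small sketch simply restates the thresholds given in the text.

```python
# Tiered point allocation for EPA8 Function 1, per the thresholds above:
# 0-9 -> 0, 10-19 -> 1, 20-29 -> 2, 30-39 -> 3, 40 or more -> 4.
def epa8_f1_points(histories_documented: int) -> int:
    return min(histories_documented // 10, 4)

assert epa8_f1_points(9) == 0
assert epa8_f1_points(27) == 2
assert epa8_f1_points(55) == 4  # meets the 40-history minimum
```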
- Received July 22, 2020.
- Accepted December 24, 2020.
- © 2021 American Association of Colleges of Pharmacy