Abstract
Objective. To describe the composition of an advanced pharmacy practice experience (APPE) readiness assessment plan (APPE-RAP) along with initial findings following retrospective application to a cohort of students.
Methods. The APPE-RAP uses existing summative assessment data within the ExamSoft platform on six skills and 12 ability-based outcomes from the pre-APPE curriculum. Thresholds were created to sort students into three readiness categories for skills and knowledge, determine overall readiness, and identify need for curricular review. Students who completed their third professional year in spring 2021 served as the pilot cohort. The APPE-RAP was applied after the cohort progressed to APPEs to analyze the appropriateness of categorization and revise the plan before full implementation.
Results. The APPE-RAP was applied to 131 students who progressed to APPEs in spring 2021. Overall, 87.9% were APPE ready for all skills and aggregate knowledge. Two skills met criteria for curricular review. Seven students (5.3%) were categorized as red on at least one skill after one remediation attempt. Nine students (7%) were categorized as red on the aggregate evaluation of knowledge-based ability-based outcomes (ABOs). Four students (3.1%) did not pass one of their first two experiential rotations. Using a red categorization on aggregate knowledge as a risk indicator identified APPE failure with 94% specificity and a 98% negative predictive value.
Conclusion. Existing assessment data may be leveraged to identify assessment targets to help quantify APPE readiness. Further research is warranted to identify additional assessment thresholds that enhance quantification of APPE readiness as well as the impact of focused remediation on attainment of APPE readiness.
INTRODUCTION
Ensuring pharmacy graduates are practice-ready clinicians has been a consistent focus throughout the transition to the Doctor of Pharmacy (PharmD) as the only degree leading to pharmacist licensure.1 Robust experiential education, including advanced pharmacy practice experiences (APPEs), allows students dedicated time to deepen and apply classroom knowledge in a practical way. Success requires a foundation of knowledge and skills taught during the pre-APPE curriculum. Unaddressed deficiencies in knowledge and skills prior to APPEs may detract from clinical experiences designed to prepare students for practice. Predictors of APPE success have been reported, highlighting the importance of proactively identifying students with foundational deficiencies in knowledge, skills, and professionalism to mitigate APPE course failures.2,3
Consensus definitions of APPE readiness have been lacking, with limited clarity available in the literature or accreditation guidance.4-7 Early reports have also identified discordance in assessment strategies and thresholds, necessitating a data-driven approach to guide practice.4
Historically, our program has considered any student who passed all courses in the pre-APPE curriculum to be APPE ready. Few programmatic failures during APPEs and a strong passing rate on licensure examinations implied this approach was effective.8 However, successful course completion may overlook individual core outcome deficiencies given the multitude of factors considered in overall course grades.9 As such, progression to APPEs may occur despite an absence of minimal competence in some skills and knowledge needed for APPEs.
A data-driven APPE readiness assessment plan (APPE-RAP) was developed to complement existing programmatic assessment processes by identifying students underperforming on individual ability-based outcomes (ABOs) and patient care skills. Identified deficiencies will serve to guide early intervention for students as well as curricular modification, if necessary. The purpose of this Brief is to describe the composition of the APPE-RAP along with initial findings following its retrospective application to a cohort of students.
METHODS
The APPE-RAP was developed by a faculty subcommittee with membership from the college’s curriculum and assessment committees. The APPE-RAP consisted of skills and knowledge assessment components. Skills-based abilities were assessed through an APPE readiness objective structured clinical examination (OSCE) delivered at the end of the pre-APPE curriculum. Knowledge was assessed through evaluation of performance on summative examinations in the pre-APPE curriculum. Skills and summative knowledge assessments, except for one biochemistry course, were administered through ExamSoft (ExamSoft Worldwide LLC).
Six core patient care skills taught and assessed throughout the pre-APPE curriculum were included (Table 1).10 Standardized rubrics, developed by faculty, are used throughout the pre-APPE curriculum with four consistent proficiency categories (proficient, competent, novice, unacceptable) per item. Readiness for each skill was determined by summative, individual performance on an APPE readiness OSCE administered over three days, with one remediation attempt if needed. Each skill is taught and assessed at least twice prior to the summative assessment. Per standard practices, remediation was required if a student had any unacceptable ratings on a given skill rubric.
Table 1. Six Core Patient Care Skills and Twelve Knowledge Areas Taught to an Entering Class of Pharmacy Students and Assessed Throughout the Pre-APPE Curriculum (N=131)
The knowledge component includes 12 college-approved ABOs within domain 1 of Standards 2016 (Table 1).7 These outcomes focus on foundational knowledge exclusively assessed within the pre-APPE curriculum. Each individual examination question within ExamSoft must be tagged with college-approved ABOs. All questions from assessments that faculty categorized as summative examinations in the pre-APPE curriculum, including remediation attempts, were aggregated for the analysis.
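As an illustration of this aggregation step, the following is a minimal sketch rather than the college's actual pipeline; the column names (student_id, abo_tag, points_earned, points_possible) are hypothetical stand-ins for the ExamSoft export layout.

```python
# Hypothetical sketch: roll question-level ExamSoft results up to
# per-student, per-ABO percent-correct scores. Column names are
# placeholders, not the actual export schema.
import pandas as pd

def aggregate_abo_scores(questions: pd.DataFrame) -> pd.DataFrame:
    """Sum points across all summative questions (remediation attempts
    included) that share the same ABO tag, then compute percent correct."""
    totals = (
        questions.groupby(["student_id", "abo_tag"])[
            ["points_earned", "points_possible"]
        ]
        .sum()
        .reset_index()
    )
    totals["pct_correct"] = totals["points_earned"] / totals["points_possible"]
    return totals
```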
Within the APPE-RAP, readiness for each skill and knowledge area was represented as three distinct categories (green, yellow, red) (Table 2). Skills categorizations aligned with predetermined passing criteria for the APPE readiness OSCE. Knowledge categorizations were created under the assumption that each student cohort represented a normal distribution of performance. Aggregate average performance of the cohort throughout the pre-APPE curriculum served as the benchmark. Red categorization, assigned to students performing more than two standard deviations below the mean, represents significant underperformance compared to peers. Using individual ABO categorizations, students were then given an overall aggregate categorization (Table 2).
Table 2. Thresholds for Determining First-Year Pharmacy Students’ Readiness Category for Skills and Knowledge
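A sketch of how these knowledge thresholds could be applied follows. Only the red cutoff (more than two standard deviations below the cohort mean) is stated explicitly above; the yellow cutoff of one standard deviation and the roll-up rule of two or more red ABOs for an aggregate red categorization are assumptions standing in for the actual Table 2 criteria.

```python
# Illustrative only: cutoffs marked "assumed" are stand-ins for the
# actual Table 2 criteria.
import statistics

def categorize_abo(score: float, cohort_scores: list[float]) -> str:
    """Categorize one student's score on one ABO against the cohort."""
    mean = statistics.mean(cohort_scores)
    sd = statistics.stdev(cohort_scores)
    if score < mean - 2 * sd:   # stated cutoff: >2 SD below the mean
        return "red"
    if score < mean - 1 * sd:   # assumed intermediate (yellow) band
        return "yellow"
    return "green"

def aggregate_category(abo_categories: list[str]) -> str:
    """Assumed roll-up rule: red overall when red on two or more ABOs."""
    if sum(c == "red" for c in abo_categories) >= 2:
        return "red"
    if any(c in ("red", "yellow") for c in abo_categories):
        return "yellow"
    return "green"
```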
To determine whether curricular modification was needed, university-level assessment guidance was used, which suggests curricular intervention if less than 85% of the cohort meets the passing threshold. For skills, the aggregate first-attempt pass rate was reviewed.
Students in the 2018 entering class who progressed to APPEs on time were evaluated, as this entering class was the only cohort to have completed the revised pre-APPE curriculum at the time of analysis. The APPE-RAP was retrospectively applied once all eligible students completed two APPEs. The APPE-RAP was not used to intervene in student progression during this pilot. The focus was to determine the appropriateness of categorization and identify opportunities for improving the APPE readiness plan.
Categorization appropriateness was determined by manual review of each student’s performance on their first two assigned APPEs compared to their readiness categorization. Because knowledge-based data provide early intervention opportunities, we used these data to estimate predictive values, sensitivity, and specificity. Positive predictive value represents the probability that students categorized as red (not APPE ready) would fail at least one of their first two APPEs. Negative predictive value represents the probability that students categorized as yellow or green (APPE ready) would not fail at least one of their first two APPEs. At our institution, there is no required order in which APPEs must be completed. As a result, some students may have had electives as their first two APPEs, while others may have had two required direct patient care experiences.
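In standard screening terms, treating a red categorization as a positive screen and failure of at least one of the first two APPEs as the outcome of interest, these quantities are defined as:

```latex
% TP: red and failed; FP: red and passed;
% FN: yellow/green and failed; TN: yellow/green and passed.
\[
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP},
\]
\[
\text{PPV} = \frac{TP}{TP + FP}, \qquad
\text{NPV} = \frac{TN}{TN + FN}.
\]
```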
All analyses were performed using Microsoft Excel. This quality improvement project was reviewed and approved by the Ferris State University Institutional Review Board.
RESULTS
One hundred thirty-one students completed the pre-APPE curriculum in three years and progressed to APPEs in spring 2021. Overall, 87.9% of students were categorized as green or yellow (APPE ready) for aggregate knowledge and skills.
First-attempt skills performance on the APPE readiness OSCE is shown in Table 1. Two skills did not meet the 85% first-attempt pass threshold, indicating the need for curricular review. Most students (94.7%) were categorized as green or yellow for all six skills after one remediation attempt. Fewer than three students (2.3%) received novice ratings on more than 50% of the items assessed for any individual skill.
Aggregate student performance on each ABO is shown in Table 1. Evaluation of aggregate knowledge categorization identified nine students (7%) as red on multiple ABOs (five on two, one on three, two on four, and one on five). Six of the nine students classified as red (66.7%) were assigned to elective APPEs in their first two blocks, which traditionally have a low failure rate.
Four students (3.1%) did not pass one of their first two APPEs. For aggregate knowledge, one student was classified as red, while the other three were classified as yellow. A more detailed overview of these students’ APPE readiness results is shown in Table 3. Each of these failures occurred on a required, direct patient care APPE (two ambulatory care, two adult inpatient medicine). Based on data obtained from auto-fail criteria on the standard APPE evaluation, failures were due to the following (a single student could meet more than one criterion): student’s performance on four or more outcomes was novice (n=4), student failed to meet rotation-specific criteria (n=3), student’s performance was unacceptable on one or more outcomes (n=2), and student’s lack of knowledge prohibited their ability to provide patient care (n=1).
Table 3. Individual APPE Readiness Results for Each Student Who Failed One of the First Two APPEs
A validation analysis using a red categorization on aggregate knowledge as the risk indicator for APPE failure identified APPE failure with 25% sensitivity, 94% specificity, 11% positive predictive value, and 98% negative predictive value.
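These estimates can be reconstructed from the counts reported above: of the 131 students, nine were red on aggregate knowledge and four failed an APPE, one of whom was red (Table 3), giving TP=1, FP=8, FN=3, and TN=119.

```latex
\[
\text{Sensitivity} = \tfrac{1}{1+3} = 25\%, \qquad
\text{Specificity} = \tfrac{119}{119+8} \approx 94\%,
\]
\[
\text{PPV} = \tfrac{1}{1+8} \approx 11\%, \qquad
\text{NPV} = \tfrac{119}{119+3} \approx 98\%.
\]
```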
DISCUSSION
This paper describes an APPE-RAP with retrospective application to a cohort of students. The APPE-RAP serves as a tool to guide focused remediation as well as inform the college of potential needs for curricular revision. Analysis of data within ExamSoft allows feasible evaluation of student-level deficits without the use of novel assessments (eg, mile marker examinations), potentially avoiding additional workload strain on faculty and students. Prior to implementing the APPE-RAP for programmatic progression decisions, our institution chose to review the data for a cohort of students who had already begun APPEs to determine the appropriateness of the initial categorizations. Because the frequency of students with more than 50% novice ratings on individual skills was low (2.3%), the skills thresholds were revised to ensure underperforming students undergo additional remediation. The revised threshold for red categorization is now a rating of ≥1 unacceptable or ≥50% of the ratings as novice for any individual skill. Future use of the APPE-RAP will include enhanced student self-monitoring and/or earlier programmatic remediation interventions.
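As a minimal sketch, the revised red threshold for an individual skill could be expressed as follows; the rating labels mirror the four rubric categories described in the Methods.

```python
# Sketch of the revised red threshold for one skill: red when a student
# has at least one "unacceptable" rating, or when novice ratings make up
# 50% or more of the rubric items for that skill.
def skill_is_red(ratings: list[str]) -> bool:
    """ratings: one entry per rubric item, each one of
    "proficient", "competent", "novice", or "unacceptable"."""
    if "unacceptable" in ratings:
        return True
    return ratings.count("novice") / len(ratings) >= 0.5

# Example: 2 of 4 items rated novice -> red under the revised threshold.
print(skill_is_red(["proficient", "novice", "novice", "competent"]))  # True
```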
A review of first-attempt pass rates for skills revealed that a significant number of students required remediation to improve their ability to write SOAP (subjective, objective, assessment, plan) notes, which was suggestive of curricular deficiencies and resulted in course modification. Similar modifications were discussed but not implemented for patient counseling because the assessment of patient counseling occurred virtually due to teaching restrictions related to the COVID-19 pandemic. Several students shared that they perceived a different level of stress when counseling skills were learned and practiced via distance-based videoconferencing technology but assessed in person, which may have produced spurious results.11
Knowledge-based deficits were more difficult to generalize, as student-specific deficiencies varied. These findings are consistent with the overall infrequent APPE failures observed (5.4 failures per year over the last four years), but they also suggest that a simple APPE-RAP designation of red may be insufficient to capture those with an increased likelihood of failing an APPE. While knowledge thresholds remain unchanged, a threshold of multiple yellow ABOs may need to be considered in addition to other supportive interventions to ensure success on APPEs.2
Using red as the risk indicator yielded a low positive predictive value and a high negative predictive value. Being categorized as green or yellow predicted passing APPEs with good accuracy (98% negative predictive value); however, a high proportion of students categorized as red also passed APPEs (11% positive predictive value). Using the red categorization as the threshold would conservatively identify students at risk of failure, and early allocation of intervention resources to such students may still be prudent.
Individual APPE-RAP reports are now provided to students, beginning in their second year of pharmacy school, to give them insight into their performance. These data will allow student self-monitoring and help guide academic advising efforts, potentially leading to targeted remediation prior to APPEs to improve readiness. This could prevent students categorized as red from spending a disproportionate amount of time during their APPEs reviewing or remediating content, which would limit their advancement of knowledge beyond the pre-APPE level. Our initial analysis demonstrates a way to identify not only those who are failing but also those who share similar deficiencies and may benefit from earlier assistance. Our goal is that early, focused intervention will optimize APPE growth.
Focusing on programmatic ABOs and essential skills aligns well with published strategies.2-6 Identifying a consistent passing threshold for item-level achievement proved difficult due to varying assessment philosophies across the college. For example, faculty may prefer to offer APPE-level examination questions to second-year students, using grade adjustments to achieve a more normal distribution. However, preadjustment item-level statistics may indicate poor performance on that ability-based outcome. Instead of using an arbitrary performance threshold, average performance of the cohort was leveraged to identify relative outliers in student performance. Given that more real-time tracking and dissemination to students is possible, early intervention and/or remediation options could become part of a final APPE readiness plan. Additionally, consideration has been given to adding measures of professionalism to future iterations of the APPE-RAP.2
This analysis represents the work of one institution with one cohort of students, so the results might not be generalizable to other institutions. Reliability needs to be evaluated as additional cohorts complete the pre-APPE curriculum. Also, the timing of skills and knowledge assessments differed, which may have introduced issues with equal weighting throughout the pre-APPE curriculum in the APPE-RAP. However, the use of relative performance criteria, rather than a preset threshold, to categorize readiness minimized these issues. To simplify data aggregation and build an initial plan, professionalism and early experiential data were not considered in our assessment matrix, although both have previously been associated with APPE failures.6 Additionally, not all students were assigned to a direct patient care APPE within the first two blocks, which may have impacted our results. Given the paucity of literature, our intention is for this approach to stimulate further discussion regarding the utility of APPE readiness assessment to improve both individual and aggregate student outcomes.
CONCLUSION
Existing assessment data may be leveraged to identify assessment targets to help quantify APPE readiness. Observed deficiencies can be triaged to focused remediation or curricular revisions based on objective criteria. Further research is warranted not only to identify additional assessment thresholds that enhance quantification of APPE readiness but also to describe the impact of focused remediation on attainment of APPE readiness.
ACKNOWLEDGEMENTS
Kali M. VanLangen, PharmD, BCPS, served as the lead and corresponding author on this paper. She led the development of the APPE-RAP and the preliminary literature review, facilitated the writing of the first draft, and provided critical review and approval of the final version. All authors contributed equally to the writing of the first draft and subsequent revisions.