Abstract
Objective. To describe a monitoring and early intervention process for students at risk of substandard performance on advanced pharmacy practice experiences (APPEs).
Methods. Using a dashboard of key indicators, students with potential deficits in knowledge, skills, or noncognitive attributes were identified as at risk of substandard performance on APPEs and placed on a list of students to be monitored during the APPE year. Employing a traffic light–based approach, at-risk students were initially designated with a monitoring status of red. If no issues were identified, students were de-escalated to yellow status and, subsequently, to green status. Monitored students who had issues or received a substandard evaluation on APPEs had a deficit-specific action plan implemented.
Results. For the 2018-2019 and 2019-2020 academic years, 87 of 499 students entering APPEs were monitored. Of those 87 students, 77 (88.5%) completed all experiences successfully on the first attempt, but 66 (75.9%) required extended higher-level (red or yellow) monitoring. Over these two years, 54 (62.1%) of the 87 students deemed at risk did not have a substandard performance on APPEs: 26 in the 2018-2019 year and 28 in the 2019-2020 year.
Conclusion. A student monitoring and early intervention process may be beneficial in assisting at-risk students to successfully complete APPEs.
INTRODUCTION
Transitioning from didactic to experiential learning comes with a significant increase in responsibility and performance expectations.1 This may present a challenge for some student pharmacists. According to the Accreditation Council for Pharmacy Education (ACPE) Standards 2016, colleges of pharmacy should assess students’ knowledge, skills, and competencies in providing direct patient care and participating on an interprofessional team prior to starting advanced pharmacy practice experiences (APPEs).2,3 The ACPE calls upon individual institutions to construct their own assessment processes to determine APPE readiness. Determining factors that accurately predict which students may perform poorly on APPEs has proven challenging, and studies of predictors of performance in pharmacy school have yielded varied results.4-6 A study by Heldenbrand and colleagues supported the idea that prepharmacy grades, Pharmacy College Admission Test (PCAT) scores, and multiple mini-interviews are predictive of pharmacy school performance: students with a grade point average (GPA) <3.25, a PCAT score <60th percentile, or a multiple mini-interview score <4.5 were 12, seven, and three times more likely, respectively, to have academic difficulty (a grade of D or F or delayed progression).6 In a study by Schauner and colleagues, factors predictive of students obtaining a D or lower in their first year were lower PCAT composite and subcategory scores, a lower prepharmacy GPA, lower prepharmacy math and science GPAs, a lower interview score, and fewer cumulative prepharmacy coursework hours. A study by Meagher and colleagues also found that PCAT scores and prepharmacy GPA were predictive of success. These studies have largely focused on overall academic performance rather than experiential performance.1 Additionally, applicability beyond the study institution can be limited because some schools employ curricular strategies, such as readiness and capstone courses, that may not be generalizable.7
Progress has been made in recent years, though, as seen in the 2020 study by Nyman and colleagues that focused on factors specifically affecting APPE readiness.7 The authors identified knowledge retention covariates, such as course grades in pathophysiology, pharmacology, therapeutics, and drug literature evaluation, as well as scores on the Pharmacy Curriculum Outcomes Assessment, as positive predictors in three models. Factors with more modest predictive value included objective structured clinical examination scores and age, both of which were positively correlated with readiness.7 Also published in 2020, a study by Call and colleagues examined predictive factors of poor performance or failure on APPEs.1 The study had a large cohort (N=669) and found that poor academic performance (specifically, failure of pharmacotherapy courses and low professional GPAs) correlated with poor performance or failure on APPEs. The investigators also found a negative correlation between professionalism infractions on introductory pharmacy practice experiences and APPE success.1
These existing studies may help identify students who are not ready for APPEs based on specific variables, but they do not provide insight on how to support those students who are at risk of substandard performance on APPEs. Even when appropriate pre-APPE remediation of curriculum milestones takes place, some learners deemed ready for APPEs may still need extra support throughout the APPE year to ensure success. If a structured support system is not in place, a learner support gap can arise when these students embark on APPEs.
While incorporating a variety of predictive factors into an assessment model may allow colleges to identify those at risk of failure or poor performance on APPEs, there is no definitive tool for identifying APPE readiness.1 The University of Florida College of Pharmacy recognized the need for a comprehensive, integrated tool that collects information from multiple sources to provide a holistic view of a student’s overall abilities. The tool evaluates didactic knowledge, skills, and noncognitive attributes, such as professionalism and teamwork. Details on the development and implementation of this dashboard tool are described elsewhere.8 In implementing the dashboard tool, the college found that students fell into three groups: ready for APPEs, ready but potentially at risk of substandard performance, and not ready.
When students are accurately classified, an appropriate support plan can be created. Based on the results of the dashboard process, and to be proactive in helping struggling learners in the experiential environment, the experiential program team developed a monitoring and support program for at-risk students identified by the dashboard tool. The monitoring system allowed for early detection of substandard performance and development of student-centric intervention plans. The primary objective of this study is to describe the design and utility of a monitoring and support system for students on APPEs who have been deemed at risk of substandard performance.
METHODS
Prior to the start of the 2018-2019 and 2019-2020 APPE years, a readiness evaluation committee used a dashboard of key indicators to identify students with potential deficits in knowledge, skills, or noncognitive attributes.8 Table 1 shows the indicators assessed to determine knowledge, skill, or noncognitive attribute deficits over the two academic years evaluated. Through a holistic evaluation, students with at least three deficits were deemed at risk of substandard performance on APPEs. Substandard performance was defined by the criteria outlined in Table 2. Students deemed at risk were placed on a monitoring list, and regional coordinators from the Office of Experiential Programs were responsible for implementing the monitoring plan. Regional coordinators are pharmacists with faculty appointments deployed around the state who serve as the local college contacts supporting sites, preceptors, and students in their area.
Table 1. Knowledge, Skills, and Noncognitive Attribute Indicators Assessed
Table 2. Substandard Performance Criteria (One or More of the Following)
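For illustration, the at-risk determination can be sketched as a simple threshold check. The sketch below is not the college’s dashboard code: the indicator names and data structure are hypothetical, only the three-deficit threshold comes from the process described above, and the actual determination also involved holistic committee judgment.

```python
# Hypothetical per-indicator deficit flags for one student; the real
# dashboard draws on the knowledge, skill, and noncognitive attribute
# indicators listed in Table 1.
indicator_deficits = {
    "pharmacotherapy_grade": True,   # knowledge (hypothetical indicator)
    "osce_performance": True,        # skills (hypothetical indicator)
    "professionalism_flag": True,    # noncognitive (hypothetical indicator)
    "pcoa_score": False,             # knowledge (hypothetical indicator)
}

def is_at_risk(deficits: dict[str, bool], threshold: int = 3) -> bool:
    """Flag a student as at risk of substandard APPE performance when the
    dashboard shows at least `threshold` deficits (three, per the holistic
    evaluation described above)."""
    return sum(deficits.values()) >= threshold

if is_at_risk(indicator_deficits):
    print("Place student on the monitored list")
```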
The frequency of monitoring for each learner was designed around a traffic light–based, color-coded system. High-risk students, placed on red monitoring status, had the most intensive monitoring, with an early, midpoint, and final check-in for each APPE. The Office of Experiential Programs regional coordinators performed a check-in with the student and preceptor within the first week of each experience. The check-in consisted of the regional coordinator asking the preceptor for general feedback on the student’s performance and obtaining the student’s perception of their own level of performance. In addition, the regional coordinator reviewed the midpoint evaluation within a week of submission. If concerns were noted on the midpoint evaluation, the regional coordinator worked with the preceptor and student on a plan for corrective action; intervention was on an as-needed basis. If students on red status had no noted concerns or substandard performance after completing two APPEs, their monitoring status was changed to yellow. For students on yellow status, the regional coordinator decreased the monitoring frequency but continued a timely review of the midpoint evaluation; if the midpoint evaluation was not submitted on schedule, the regional coordinator contacted the preceptor to request midpoint feedback. If students on yellow status had no noted concerns or substandard performance after completing two APPEs, their monitoring status was changed to green. On green status, students remained on the monitored student list, but regional coordinators employed no additional proactive monitoring. Maintaining students on green status facilitated re-escalation of monitoring should challenges arise.
Whenever monitored students had significant issues (concerns about performance that might impact their ability to pass the rotation) or received substandard evaluations, the Office of Experiential Programs developed deficit-specific, evidence-based action plans, and the students were retained at or reinstated to red status. Students who had minor issues (for example, difficulty with one aspect of the rotation but able to improve with feedback) were retained at their current level of monitoring for an additional APPE and then de-escalated if there were no issues on the subsequent experience. Regional coordinators worked with preceptors and students to triage and determine the etiology of the challenge. They then led the development of student-specific action plans in conjunction with the site preceptor. Plans were developed based on recommendations from the text Remediation in Medical Education: A Mid-Course Correction.9 Leadership from the Office of Experiential Programs and the Office of Student Affairs were included in development of the plan if the issues warranted it (eg, medical concerns). Example challenges and associated plans are described in Table 3. The regional coordinators documented the challenges, plans, and outcomes in the student profile using the Salesforce platform (Salesforce Inc). Over the course of each year, students could be added to the monitored list if they experienced challenges and needed increased monitoring and support.
Table 3. Example Assessments and Plans
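Taken together, the de-escalation and re-escalation rules above form a small state machine. The following is a minimal sketch of that logic under the stated rules; the function and parameter names are hypothetical, and this is not the tooling the college used.

```python
from enum import Enum

class Status(Enum):
    RED = "red"        # most intensive: early, midpoint, and final check-ins
    YELLOW = "yellow"  # reduced frequency: timely midpoint evaluation review
    GREEN = "green"    # remains on list; no proactive monitoring

def next_status(current: Status, clean_appes_at_level: int,
                significant_issue: bool, minor_issue: bool) -> Status:
    """Update a monitored student's status after a completed APPE.

    significant_issue: performance concerns that might prevent passing, or
        a substandard evaluation; triggers a deficit-specific action plan
        and retention at or reinstatement to red status.
    minor_issue: difficulty with one aspect of the rotation that improved
        with feedback; holds the student at the current level for an
        additional APPE.
    clean_appes_at_level: consecutive issue-free APPEs at the current
        level; two are required before de-escalation.
    """
    if significant_issue:
        return Status.RED
    if minor_issue:
        return current
    if clean_appes_at_level >= 2:
        if current is Status.RED:
            return Status.YELLOW
        if current is Status.YELLOW:
            return Status.GREEN
    return current
```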
In the first academic year (2018-2019), all at-risk students were initially placed on red monitoring status. After two APPEs, students without significant issues were de-escalated to yellow. Continued progress without significant issues over the subsequent two APPEs resulted in de-escalation to green status. In the second year (2019-2020), students deemed at risk were initially placed on red or yellow status at the discretion of the evaluation committee. Students placed on yellow status typically met the criteria for monitoring, but their indicators were scattered across domains or had less data supporting them as predictors. Students were managed by the same process employed in the first academic year. Monitoring was considered de-escalated on schedule if a student progressed to yellow or green after two experiences and extended if a student stayed at red or yellow for more than two experiences.
The impact of the dashboard and student monitoring process was evaluated by assessing the number of students with substandard performance; the number of students who had issues and experienced delayed de-escalation, no de-escalation, or regression; and the number of students who progressed on track through their APPEs. Descriptive statistics were used to describe the outcome parameters, and chi-square tests were used to compare outcomes between years one and two of the monitoring program. This study was approved by the University of Florida Institutional Review Board.
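As an illustration of the year-to-year comparison, the chi-square test on on-schedule de-escalation can be reproduced from the counts reported in the Results (16 of 44 students in 2018-2019 vs five of 43 in 2019-2020). The sketch below uses scipy and is an illustrative recomputation, not the authors’ analysis code.

```python
from scipy.stats import chi2_contingency

# Rows: academic year (2018-2019, 2019-2020).
# Columns: de-escalated on schedule vs required extended monitoring.
# Counts taken from the Results section (16 of 44 vs 5 of 43).
observed = [[16, 28],
            [5, 38]]

chi2, p, dof, expected = chi2_contingency(observed)
# For a 2x2 table, scipy applies the Yates continuity correction by
# default, giving chi2 ≈ 5.98, p ≈ .014 for these counts.
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```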
RESULTS
Over the first two years of the student monitoring program, the dashboard of key indicators was used to evaluate the readiness of 499 students about to enter APPEs. Overall, 87 students were deemed at risk for substandard performance on APPEs and placed on the monitoring list. Of the 87 students, 77 (88.5%) completed all APPEs successfully on the first attempt, and 54 (62.1%) had no substandard performance (Table 4). While the vast majority completed all experiences successfully on the first attempt, 66 (75.9%) required extended red- or yellow-level monitoring. The percentage of students who required extended monitoring (ie, who did not de-escalate on the established schedule) was significantly higher in the second year. A description of the de-escalation timeline outcomes by year is provided in Figure 1. The percentage of students who completed experiences without a substandard performance was not significantly different between the two years of the monitoring program.
Table 4. Outcomes for Students Monitored in Each Academic Year and Combined for Both Years
Figure 1. Outcomes for the 87 students who were monitored during their advanced pharmacy practice experience year.
For the 2018-2019 APPE year, 39 students were initially identified as at risk using the readiness dashboard. Two students who were off track in the 2017-2018 APPE year were carried over from the previous experiential cycle and added to the 2018-2019 monitored student list. Three additional students were placed on the list during the year, for a total of 44 students. Students were added to the monitored student list if they had a substandard performance and there was concern that they would experience another substandard performance without monitoring and early intervention. Of the 44 monitored students, 39 (88.6%) completed all APPEs successfully on the first attempt, and the monitoring level for 16 students (36.4%) was de-escalated on schedule. Of the 28 students who had delayed de-escalation, 15 exhibited substandard performance on one or more APPEs. English-language barriers were cited as a factor for six of the students with substandard performance. Of the 15 students with substandard performance, two were placed on remedial experiences and three failed or were removed from an APPE. Remedial experiences were rotations with preceptors specifically selected to help students improve in an identified area of weakness (remediate) before starting or continuing traditional APPEs.
For the 2019-2020 year, 39 students were initially identified as at risk and placed on the monitored student list, and four students were added over the course of the year. A total of 38 of the 43 students (88.4%) completed all APPEs successfully on the first attempt. Of the 43 monitored students, the monitoring level for five (11.6%) was de-escalated on schedule. Of the 38 students who did not de-escalate on schedule, 15 had a substandard performance on APPEs. Five students failed an experience, and two students were placed on a remedial APPE. English as a second language was cited as a factor in two of these cases.
Of the seven students added to the monitoring list with unanticipated concerns over the two-year program period, two were added due to health issues that impacted performance, two had professionalism infractions, one had a knowledge gap, and two had both knowledge and noncognitive attribute concerns.
DISCUSSION
The percentage of monitored students who completed all experiences successfully on their first attempt was noteworthy, at 88.5%. A significantly greater percentage of students required extended monitoring in the second year of the program. One possible cause is that the dashboard tool used to identify at-risk students was modified in the second year; as such, it may have done a better job of identifying students truly at risk. Another possible cause was the escalating COVID-19 pandemic, which reached a crisis point in the spring of the second study year. The stresses and logistical issues associated with the pandemic may have impacted student performance and increased the need for extended monitoring and support. This idea is supported by data from Elbeshbeshy and colleagues, which suggest that the well-being of students on APPEs was impacted during the pandemic.10 Notably, while the number of students who required extended monitoring increased, the number of substandard performances did not.
The program allowed students who experienced unanticipated challenges on APPEs to be added to the monitored student list. Of the seven students added to the list, only one had a substandard grade after being added. While the outcomes of the added students were similar to those of the initial cohort, the need to add students during the year suggests there is still opportunity to improve the dashboard tool used to identify students for the monitored list. An evaluation is currently underway to assess the characteristics of students who were not initially identified by the dashboard tool and were added to the monitoring list due to substandard performance. This information will help determine opportunities for further refinement of the dashboard metrics.
While the percentage of students who successfully completed an experience on the first attempt approached 90%, there may be opportunity to enhance the monitoring and intervention process to further increase the first-attempt success rate and decrease substandard performances. The Office of Experiential Programs will be evaluating the characteristics of students who were monitored yet were not successful on their first attempt or had substandard performances to determine whether the monitoring and intervention process can be further improved. Potential opportunities include conducting check-ins earlier in the rotation, evaluating the effectiveness of intervention plan elements and standardizing them for certain situations, and seeking feedback from regional coordinators on where they may need more support or training. Although it was an incidental finding, another area of opportunity may be supporting students with English as a second language, as language barriers were cited in several cases. These students may have unique challenges, and approaches to support them may differ from those that help native English speakers. Diaz-Gilbert and colleagues suggested that some students with English as a second language may have deficits in understanding medical terminology and lack essential writing skills.11,12 Giving students the opportunity to teach back to the preceptor to demonstrate understanding and to practice writing might be helpful in these situations.
This evaluation has potential limitations. For example, the dashboard tool and the monitoring process were implemented concurrently. Because identifying a group of students who were ready for APPEs but at risk prompted development of the monitoring program in real time, it was not possible to compare the outcomes of students in the monitoring program to a cohort deemed at risk but not monitored. Evaluating a retrospective cohort defined as at risk using the dashboard was also challenging, as some of the metrics used to determine risk were new at the institution and had not been evaluated in prior student cohorts.
CONCLUSION
A monitoring and early intervention process that used evidence-based action plans for students identified via a dashboard screening tool as at risk of poor performance resulted in the vast majority of at-risk students completing experiences successfully on their first attempt. Such monitoring and intervention processes may be beneficial in assisting at-risk students. An evaluation of those who were not successful on the first attempt or had substandard performances will be conducted to identify whether the intervention process can be improved.
Received December 23, 2021. Accepted June 20, 2022.
© 2023 American Association of Colleges of Pharmacy