Abstract
Objective. To explore whether metacognition can be improved in Doctor of Pharmacy (PharmD) students through routine self-assessment over a year-long advanced pharmacy practice experience (APPE) sequence.
Methods. Differences between self-assessment scores and preceptors’ scores for three cohorts of pharmacy students between 2015 and 2018 were compared across the first, second, and third trimesters to determine whether students more accurately evaluated their performance over time. The primary endpoint was the change in the absolute difference between student and preceptor evaluations (rubric and composite scores) between trimesters.
Results. Of 2577 student and preceptor evaluations eligible for inclusion, 1713 were completed, matched, and analyzed. Using the same rubric as preceptors, students overestimated their performance by an average of 16 points during the first trimester, followed by 14 and 12 points during the second and third trimesters, respectively. This reflected a significant improvement over time. No significant difference was found between student and preceptor composite scores. Faculty preceptorship, students’ pre-APPE grade point average, and type of APPE were not associated with any difference in rubric or composite scores.
Conclusion. This analysis revealed that the difference between student self-evaluation grades and preceptor evaluation grades was greatest during the first trimester and significantly decreased in the second and third trimesters. This could reflect students’ development of metacognitive processes over time. Metacognition is a vital skill for pharmacy students to learn, and opportunities to develop this skill should be incorporated throughout the pharmacy curricula.
INTRODUCTION
Metacognition, described as “thinking about thinking,” refers to one’s ability to regulate thinking and learning through three self-assessment skills: planning, monitoring, and evaluating.1 In health professions, metacognition is an important consideration in the process of becoming a professional and practicing as a clinician. Health professionals must subscribe to lifelong learning, and metacognition guides this process by making the learner or clinician aware of what they do not know and prompting them to bridge the knowledge gaps. Self-awareness of one’s knowledge, skills, abilities, beliefs, motivation, and emotions has been recognized by the Accreditation Council for Pharmacy Education (ACPE) in Standards 2016 as a key element of personal and professional development (Standard 4).2
Self-awareness is a non-cognitive skill that may be viewed as difficult to foster and assess in health professions learners, but it is a skill that can be developed.3 Self-evaluation assignments have been used in health professions curricula to stimulate reflection on didactic and experiential learning and to assess learners’ self-awareness of knowledge and skills. Change in accuracy of self-evaluations is one measure that can be used to indicate change in the metacognitive process as self-evaluation is an important component of metacognition.1 In several studies, pharmacy learners have been found to overestimate their performance when compared to the results on evaluations conducted by faculty members or preceptors.4-7 This is consistent with findings in medical and nursing students.8,9 Additionally, top performers have been found to more accurately predict or evaluate their performance than bottom (third or fourth quartile) performers in studies of pharmacy and medical students, and underperformers tend to highly overestimate their knowledge and skills.4,5,8 Findings in some studies suggest that while overall performance scores might be similar between instructor and learner, particular areas of rubrics tend to have larger discrepancies in scores. Examples of rubric items reported with higher score discrepancies include empathy, team relationships, industriousness and enthusiasm, personal attitudes, and global knowledge.5,10-11
Approaches to developing metacognition in health professions students have been described in the literature. Steuber and colleagues evaluated implementation of metacognition processes in student pharmacists taking a semester-long elective course. Strategic feedback was provided by faculty members and peers, which resulted in improved student prediction of performance and self-awareness. Despite the improvement seen, this study found that one semester was not enough time to fully realize the effects of interventions on metacognition.4 Mort and Hansen studied the impact of video review on first-year student pharmacists’ self-assessment of communication skills following a patient counseling encounter.6 Using the same rubric as faculty evaluators, student pharmacists evaluated their encounter before and after viewing the recording. Students’ self-evaluations were more highly correlated with faculty scores after viewing their video than before, suggesting that video review increased student pharmacists’ awareness of skill achievement.6 However, students’ overestimation of their performance remained after video review, although to a lesser extent. Conversely, use of video-recorded encounters in a small cohort of physical therapy students did not lead to significant changes in performance between mid-term and final examinations, providing conflicting evidence about the use of video recording and review of performance.12 Baxter and Norman evaluated senior-year nursing students’ ability to self-assess their performance during a simulated emergency situation.13 While the simulation did increase their perceived confidence and competence, it had no effect on their perceived ability to collaborate or communicate.
The authors went on to state that they believe self-assessments do not provide any additional benefit in the development of a student’s metacognition and may in fact provide a false sense of confidence.13 This conflicting evidence highlights the difficulty of developing metacognition in learners over the course of a semester or during individual events.
This study was designed to explore whether metacognition could be developed over a year-long advanced pharmacy practice experience (APPE) sequence through repeated use of self-assessment at the end of each APPE. Differences between self-assessment scores and instructor scores for three cohorts of student pharmacists were compared between the first, second, and third trimesters to determine whether students more accurately evaluated their performance near the end of the APPE sequence as compared to the beginning. We hypothesized that repetition of self-assessment, along with preceptor feedback throughout the APPE year, would lead to a smaller difference between student and preceptor evaluation scores, indicating increased self-awareness and metacognitive change throughout the APPE year.
METHODS
Completed student and preceptor matched final evaluations for all APPEs between September 2015 and April 2018 were eligible for analysis. These dates were chosen because of the implementation of identical student and preceptor evaluation rubrics in September 2015. Students who failed to complete a self-evaluation or did not receive an evaluation from their preceptor were excluded. The Wingate University School of Pharmacy APPE curriculum consists of the following required five-week rotations: one adult internal medicine, one ambulatory care, one advanced institutional, one long-term care, and two community experiences. The three remaining APPEs are student-selected electives. In total, students completed 1800 hours of APPEs over three trimesters of rotations. Students received a graded evaluation, which summed to 100 points, with the following breakdown: A=89.5-100 points; B=79.5-89.4 points; C=69.5-79.4 points; F=less than 69.5 points.
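The grade scale above maps the 100-point evaluation total to a letter grade by simple thresholds. As a minimal sketch, a hypothetical helper (not part of the school’s actual evaluation system) could express the mapping as:

```python
# Hypothetical helper illustrating the grade breakdown described above;
# the school's actual grading was handled by its evaluation software.
def letter_grade(points: float) -> str:
    """Map a 0-100 evaluation score to a letter grade per the stated scale."""
    if points >= 89.5:
        return "A"
    if points >= 79.5:
        return "B"
    if points >= 69.5:
        return "C"
    return "F"

print(letter_grade(89.5))  # "A" (lower bound of the A range)
print(letter_grade(69.4))  # "F" (below the 69.5 passing threshold)
```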
Each APPE was curated by the student’s preceptor but usually encompassed supplemental activities beyond patient care, including journal clubs, patient case presentations, and drug information requests. Preceptors and students completed identical evaluations at the midpoint and the conclusion of each experience through an online system. Evaluations consisted of a detailed rubric broken into different sections (Table 1). Supplementary activities (eg, journal clubs, patient case presentations) were evaluated separately using standardized rubrics and manually added to the midpoint and final evaluations as dictated by APPE type or preceptor. Supplementary activities occurred at timepoints dictated by the preceptor. As such, students may have received their grade and verbal feedback for an activity prior to the final evaluation, allowing its inclusion in their self-assessment. If no grade had been received prior to completing the final evaluation, students manually entered a predicted score for each activity. Student completion of identical supplementary activity grading rubrics was not mandated through the electronic grading system. The combination of the scored rubric and individual supplementary activity grades received was considered the composite score for the rotation.
Percentage of Individual Preceptor Evaluation Components by APPE Type
Each participant received electronic reminders prior to due dates with final evaluations due one week after the APPE ended. To encourage both student self-reflection and student provision of feedback to the preceptor, students received a 3% grade reduction for failure to complete the preceptor and self-assessments beginning in May 2016. The grading rubric Likert scale (Table 2) used in each evaluation was electronically converted to a numeric grade.
Likert Ranking Terminology within Evaluations
Student self-evaluations were paired with assigned preceptor evaluations, comprising the data for this study. Data were extracted from the electronic evaluation software and de-identified prior to analysis. Beyond self-evaluation and preceptor scores for each individual rotation pair, investigators captured the type of rotation, rotation trimester, and grade point average (GPA) prior to the APPE year.
The primary endpoint was the change in the absolute difference between student and preceptor scores over trimesters. The absolute difference in rubric scores and composite scores was considered independently over each trimester. Secondary endpoints included the impact of pre-APPE GPA, faculty preceptors, and rotation type. A one-way ANOVA was performed to identify any significant difference in student and preceptor rubric and composite scores between trimesters. A Tukey HSD post-hoc analysis was performed to identify which trimester comparisons differed significantly. A logistic regression analysis was completed to identify factors contributing to the statistical difference found in the primary endpoint. All analyses were performed using SPSS, Version 25 (IBM, Armonk, NY).
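The study’s analyses were run in SPSS, but the primary-endpoint computation can be illustrated in a short sketch: take the absolute student-preceptor score difference per evaluation, group by trimester, and compare the groups with a one-way ANOVA. The sketch below is pure Python with invented, illustrative score pairs (not the study’s data), and implements the textbook F statistic directly:

```python
# Illustrative sketch of the primary endpoint: absolute differences between
# matched student and preceptor scores, compared across trimesters with a
# one-way ANOVA. Data values are invented; the study used SPSS v25.

def abs_differences(pairs):
    """pairs: list of (student_score, preceptor_score) tuples."""
    return [abs(s - p) for s, p in pairs]

def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of sample groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares (each group mean vs the grand mean)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares (each value vs its own group mean)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    return ms_between / ms_within

# Hypothetical matched (student, preceptor) scores per trimester
t1 = abs_differences([(95, 80), (92, 75), (98, 83)])
t2 = abs_differences([(90, 78), (91, 79), (94, 80)])
t3 = abs_differences([(88, 77), (90, 80), (92, 79)])
print(one_way_anova_f([t1, t2, t3]))
```

A large F here would motivate a post-hoc test (such as Tukey HSD, as in the study) to locate which trimester pairs differ.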
RESULTS
A total of 2577 student and preceptor evaluations were eligible for inclusion during the study period. Completed student and preceptor evaluations were matched, resulting in 1713 evaluations for analysis. The inclusion of only trimesters two and three resulted in 282 matched evaluations for the graduating class of 2016. There was an increase in completed evaluations by the graduating classes of 2017 and 2018 (696 and 735 evaluations, respectively).
The combined data points by trimester showed a reduction in the absolute difference between preceptor and student rubric scores over time (Table 3). This resulted in a significant difference between student and preceptor scores for the third trimester in comparison to each of the first two trimesters. There was no significant difference between trimesters when comparing the absolute difference in composite scores. No significant difference was seen in either rubric or composite scores across APPE types (Table 4).
Absolute Difference in Evaluation Scores by Trimester
Rubric and Composite Evaluation Score Difference Between Students and Preceptors by Rotation Type
In a regression analysis, the following factors were considered predictors of a significant absolute difference in student and preceptor rubric scores: long-term care and ambulatory care APPEs and first trimester scores (R 0.32, p<.0001). Faculty preceptors, other APPEs, and graduation year were not significant in the regression analysis. Finally, students were divided into top, middle, and low performers by GPA prior to APPE year. No significant difference in the primary endpoint was found between any GPA ranking.
DISCUSSION
This analysis revealed that the absolute difference between pharmacy students’ self-evaluation grades and preceptor evaluation grades was largest during the first trimester. In addition, the difference in grades significantly decreased during the third trimester compared with the first and second trimesters. This could be the result of students developing a metacognitive process from repeated exposure to the grading rubric over time. As part of the program, students must continually monitor and reflect upon their performance in order to thoughtfully complete self-evaluations. With significant overlap of evaluation components across APPE types, this repeated exposure is likely to “sharpen” students’ ability to think critically about their performance. Unfortunately, during all trimesters, students overestimated performance on the self-evaluation rubric. Significance was lost when non-patient care activities (eg, journal clubs and presentations) were incorporated into the final composite scores. While the supplementary activities all had rubrics available for individual grading use, the consistent use of these paper forms was not documented electronically. Likewise, this section of the evaluation required manual grade entry, and students likely had received their grade prior to the final evaluation for incorporation into their self-assessment. These factors may have contributed to the smaller difference in preceptor and student composite scores by the end of each trimester.
Previous studies have shown that learners frequently have poor self-assessment skills and thus, likely poor metacognition skills.4-7,13 This is often demonstrated by a poor correlation between student self-assessments and preceptor evaluations and has been demonstrated in many of the health professions, including medicine, nursing, and pharmacy.4,10,13,14 There are limited data assessing the development of metacognition over time, particularly during APPEs. As previously described, Steuber and colleagues assessed the development of metacognition during a third-year pharmacy elective.4 In that study, the authors implemented repetitive faculty and peer feedback and creation of action plans for future work into the course to enhance learner cognition. Despite these interventions, students’ ability to self-predict performance did not improve over time and there was a trend of overconfidence in self-assessment. The authors advocate for longitudinal continuous feedback in pharmacy curricula to help learners improve self-assessment.4 Hill and colleagues compared student self-assessment of knowledge and skills during the last APPE with preceptor evaluations.7 At the end of APPEs, students rated themselves significantly higher than preceptors in six areas of drug knowledge. Students also overrated their proficiency in 16 skill areas compared to preceptors’ evaluations of the students’ proficiency. The authors concluded that as self-assessment is integral to students’ professional development, pharmacy educators and program curricula should teach students how to improve the accuracy of their self-assessment.7 This can be accomplished by providing ample opportunities for students to receive feedback on their self-assessment skills. Our study demonstrated a decrease in grade differences over time, which may indicate improvement in metacognition over time. This supports the idea that longitudinal continuous assessment can help learners’ metacognitive skills.
Notably, in our study, the type of preceptor (faculty vs non-faculty) a student was assigned to during an APPE was not associated with a significant difference in either student or preceptor rubric scores. We hypothesized that there might be a smaller difference between student and faculty rubric scores because faculty members may be more likely to provide frequent, specific, and constructive feedback, allowing students to better self-assess. As this was not seen, preceptor development that encourages preceptors to provide feedback that incorporates learner self-assessment may be useful. A possible explanation for the “assessment gap” not improving while students were on faculty-precepted APPEs could be the interpersonal effects of students working with the same faculty members during their didactic studies or introductory pharmacy practice experiences (IPPEs) in earlier years of the PharmD program. Put simply, a student may not have wanted to admit weaknesses to a faculty member who knew the knowledge and skills the student should have acquired earlier in the PharmD curriculum.
Though students’ self-assessment skills improved during the second and third trimesters, there was still an 11-point overestimation on the rubric score when compared with preceptor scores. This difference may have been due to the wording of the rubric. By its very nature, evaluation during experiential education is fraught with subjectivity. Rubrics attempt to remove as much subjectivity as possible, but they are, at best, an imperfect tool. Anecdotally, several preceptors have described, through commentary to the experiential department and their use of the evaluation tool, a likelihood to take the Likert scale guidance quite literally; that is, they fail to award scores of 5 to students on the premise that “no one is perfect” and that, per the school’s rubric, a score of 5 indicates perfect performance. As might be expected, students assigned to these preceptors provided feedback to the experiential department on how they were being evaluated. Based on this feedback and the research findings described above, edits to rubrics and anchors were implemented beginning with the 2018-2019 academic year. Edits to the Likert scale language were made to remove absolutes and to allow for more open interpretation of when to award higher (or lower) scores. The most illustrative edits come at the top and bottom of the scale: a score of 5 is now described as “almost always,” and a score of 1 is now described as “rarely.” These changes come in tandem with other improvements, such as more explicit mention and assessment of the Pharmacists’ Patient Care Process and documentation of experiential interprofessional education (IPE) activities. Moving forward, comparisons of student self-evaluations and preceptor evaluations will be conducted to assess whether these language changes improve reliability.
This analysis was the first of its kind to review a metacognitive process in final-year APPE students over time. The large sample size and the ability to evaluate differences over multiple years strengthen the findings. However, limitations exist. The data were gathered from an electronic database, which limited the amount of granular detail we were able to examine. For example, we were unable to “drill down” to individual rubric components for each APPE. A more detailed analysis of individual rubric sections might yield significant results. Additionally, students had neither an incentive to accurately predict their grade nor a penalty for failing to do so. Thus, students may have overpredicted their rubric grades because they were not attempting to accurately self-assess, possibly because of a lack of understanding of the importance of self-evaluation in personal and professional development. Students and preceptors are expected to meet for a final discussion at the conclusion of each APPE. However, there is no formal process in place to encourage discussion about differences in student and preceptor scoring. Our findings highlight the need for preceptor-student discussions regarding the value of accurate self-evaluation. Indeed, periodic structured feedback from the experiential department to students regarding their submitted self-evaluations may be a future intervention to study. Another approach to consider is reflective exercises in which students are asked to consider feedback from preceptors in past APPEs.
Several strategies beyond routine self-assessment exist to increase metacognition and should be used throughout the pharmacy curriculum.1,3 These strategies include providing high-quality, frequent external feedback to guide learners as they develop their own self-assessment skills, reflective-thinking assignments, and documentation of the clinical thought process. Implementation of multiple strategies such as these on a long-term, continuous basis can improve learners’ metacognitive skills.
CONCLUSION
Metacognition is a vital skill for pharmacy students to develop because of the ever-changing nature of the pharmacy profession, which requires life-long learning and ongoing self-assessment of knowledge and skills. Requiring pharmacy students to complete the same grading rubric as their preceptor and to reflect on differences in scoring resulted in improvement in students’ self-assessment skills over time. This simple strategy may dovetail with larger initiatives to improve metacognition in pharmacy students.
ACKNOWLEDGMENTS
The authors acknowledge Christy Inge and Daniela Rodriguez Reynaga for their assistance with data preparation.
- Received January 5, 2019.
- Accepted June 10, 2019.
- © 2020 American Association of Colleges of Pharmacy