Abstract
Objective. A virtual educational innovation was designed and implemented to have student pharmacists simulate insurance processing. This article describes the impact of this third-party payer simulation on student knowledge and confidence and reports student perceptions of the activity.
Methods. First-, second-, and third-year pharmacy students (P1, P2, and P3 students, respectively) at four institutions completed the self-paced simulation. Knowledge was assessed by comparing results of multiple-choice questions on the pre- and post-assessments and evaluated by the Wilcoxon signed rank test. Confidence was assessed by students’ change in self-reported confidence scale measurements and compared using the chi-square test.
Results. The simulation had a significant impact on student knowledge. The largest improvement was in P1 students, with a pre- to post-assessment average score difference (scale 0-100) of 16.6 compared to 7.2 for P2 and 10.2 for P3 students. Significant improvement was seen on most of the knowledge questions, with variations for certain questions between groups. All groups had significantly improved self-rated confidence in their abilities. Most students agreed that they would recommend this activity to other students (91.7%) and that it encouraged them to think about the material in a new way (85%).
Conclusions. Through an innovative simulation on prescription insurance processing, positive results were seen across all three levels of learners. Knowledge assessments significantly improved, and student confidence increased across all groups and all confidence items. Participants would recommend this activity to other students and felt it was an effective way to learn about insurance adjudication.
INTRODUCTION
Approximately half of all pharmacists in the United States practice in community-based retail settings, which include chain pharmacies, independent pharmacies, and supermarkets. 1 At least 50% of a community pharmacist’s time is spent dispensing medications and providing patient counseling, and pharmacists are heavily involved in the insurance claims adjudication process. 2 Prescription drug claims are reimbursed from a variety of sources including employer-sponsored plans, private insurance, and Medicare/Medicaid. According to the National Community Pharmacists Association, the percentage of prescriptions filled at independent pharmacies billed to a third party has risen from 44% in 1990 to 90% in 2012. 2
Patients rely on pharmacists to correctly submit claims for their prescription insurance benefits. 3 The Accreditation Council for Pharmacy Education (ACPE) and the Center for the Advancement of Pharmacy Education (CAPE) have included medication use systems management as a necessary skill for pharmacists. 4, 5 Additionally, recently established entrustable professional activities (EPAs) include the domain of the practice manager, for which the task of fulfilling a medication order is described by examples such as “determine the patient co-pay” and “ensure that formulary preferred medications are used when clinically appropriate.” 6
While third-party insurance adjudication is an important part of the medication dispensing process and of patient advocacy, no published manuscripts address this educational topic. In one published poster abstract, simulated prescription insurance processing exercises were shown to increase students' confidence compared with traditional lectures, but a full description of the activity and the associated statistical impact has not been published. 7 To date, the majority of the literature focuses on services for Medicare Part D plan selection and benefits but overlooks the essential process of adjudicating prescription third-party payer claims. 8-10
Adjudication of insurance claims taught in a traditional didactic classroom will likely reach only the lowest levels of Bloom’s taxonomy (remember and understand). 11 For students to be prepared for community introductory and advanced pharmacy practice experiences (IPPEs and APPEs), educational content should target higher levels of learning, at or above application. While this concept has not been specifically studied in the literature, it is reasonable to expect that insurance adjudication is best learned through simulation, as many curricula focus on application of skills for similar community pharmacist roles such as patient counseling, compounding, and dispensing. Simulation within pharmacy education is widely accepted as an effective instructional method to improve student knowledge and confidence as well as educational outcomes. 12-15
A key element of student success is formative feedback, which has been demonstrated across application-based simulations and objective structured clinical examinations. 16 Ryan and colleagues found that “response-oriented and conceptually focused feedback was superior to traditional right/wrong feedback.” 17 Integrated formative step-by-step feedback focuses on improvements in the student’s process instead of completion of the task and provides elaborate rationale in manageable units to enhance learning. 18 Thus, to further enhance students’ understanding of key concepts related to insurance claim adjudication, faculty from four institutions developed a platform upon which to host a simulation to immerse PharmD students in insurance claim adjudication with individualized, formative feedback embedded in every correct and incorrect answer. This article evaluates the impact of this virtual insurance simulation on participating students’ knowledge and confidence and reports students’ perception of the activity.
METHODS
The virtual insurance simulation was developed as part of a collaborative effort by a postgraduate year two (PGY-2) academic pharmacy resident, a fourth-year pharmacy student in an academic APPE, and faculty from four institutions. The simulation was developed based on prescription insurance-related issues seen in the community pharmacy practice setting. The education activity was designed to address high levels of Bloom’s taxonomy, involve EPAs, and incorporate the Pharmacists’ Patient Care Process (PPCP), which were linked to the activity’s learning objectives (Appendix 1). 6, 11, 19
A PowerPoint slide deck was created with several patient cases and associated multiple-choice questions with hyperlinked answer choices. Based on the choice selected, students were directed to a slide stating “correct” or “incorrect, try again,” with detailed justification provided for each correct and incorrect choice. Students who answered correctly progressed to the next question; students who answered incorrectly were directed back to the same question, with unlimited attempts allowed for each question. Probing questions and statements were included for all incorrect choices to guide students’ critical thinking (Figure 1). As they worked through the slide deck, students completed a worksheet containing the same questions, which could be submitted as a graded assignment assessed for completion.
Figure 1. Choose Your Own Adventure (CYOA) insurance processing simulation activity for pharmacy students.
The activity was designed to teach students through immediate, formative feedback on correct and incorrect answers as they progressed so that students at any level of previous employment and academic experience could complete the activity without participating in a didactic lecture component. The formative feedback component for all correct and incorrect choices was embedded in the activity and gave detailed justification so each student received individualized guidance throughout the entirety of the simulation, reflecting an added value beyond lecture-based delivery of this educational content. An optional reading from the textbook Medical Insurance for Pharmacy Technicians was included as a suggested prereading. 20 Given the learn-as-you-go nature, little faculty facilitation was required during the simulation.
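For programs that wish to adapt the learn-as-you-go branching outside of PowerPoint, the sketch below models the same flow in code: each answer choice carries embedded formative feedback, a correct choice advances to the next question, and an incorrect choice returns the student to the same question with unlimited attempts. The question text, answer options, and feedback strings are illustrative placeholders, not the actual case content of the simulation.

```python
# Minimal sketch of the branching, feedback-driven question flow used in the
# CYOA simulation; all content below is placeholder text, not the activity's cases.

QUESTIONS = [
    {
        "prompt": "A claim rejects with 'Refill too soon.' What is the best first step?",
        "choices": {
            "A": ("Tell the patient to pay cash.",
                  False, "Incorrect: review the last fill date before offering a cash price."),
            "B": ("Check the last fill date and the plan's refill window.",
                  True, "Correct: most plans allow a refill once most of the day supply has elapsed."),
            "C": ("Return the prescription to stock.",
                  False, "Incorrect: the rejection can usually be resolved without abandoning the fill."),
        },
    },
    # ... additional cases would follow the same structure
]

def run_simulation(questions):
    """Present each question, give formative feedback, and advance only on a correct answer."""
    for q in questions:
        while True:
            print(q["prompt"])
            for key, (text, _, _) in q["choices"].items():
                print(f"  {key}. {text}")
            answer = input("Your choice: ").strip().upper()
            if answer not in q["choices"]:
                print("Please choose one of the listed options.")
                continue
            _, is_correct, feedback = q["choices"][answer]
            print(feedback)          # detailed justification for every choice
            if is_correct:
                break                # progress to the next question
            # incorrect answers loop back to the same question (unlimited attempts)

if __name__ == "__main__":
    run_simulation(QUESTIONS)
```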
The activity took place within a longitudinal pharmacy skills-based laboratory sequence for first-, second-, and third-year pharmacy (P1, P2, P3) students at four institutions. This simulation was aligned in the skills laboratory course at each institution with curricular topics relevant to community pharmacy practice.
To ensure ease of use, the virtual insurance simulation was piloted in skills-based laboratory courses of two Doctor of Pharmacy programs. Fourth-year APPE students at school A tested the activity for completeness, accuracy, and ease of use. P3 students at one institution and P2 students at another then piloted the activity. Students who completed the pilot were asked to provide feedback via an anonymous online survey on the length of the simulation and suggestions for improvement. Based on this feedback, cases were adjusted to ensure P1 students could complete the activity without prior therapeutics knowledge, and directions were provided for making a medication recommendation to a provider, including a sound clip with an example. Updated materials were distributed for implementation in three different class years (P1, P2, P3) across four institutions for the 2020-2021 academic year.
The activity was implemented virtually across all cohorts and delivered asynchronously by posting materials to each institution’s learning management system. Materials included the PowerPoint presentation, the activity worksheet and key, an insurance formulary for one of the cases, the suggested reading citation, and student instructions. This study was exempted from full review by South Dakota State University’s institutional review board, with reliance agreements at the other institutions.
Differences in demographic characteristics between groups were determined by the chi-square test, Friedman’s test, and t tests. Changes between pre- and post-assessment knowledge scores were evaluated by the Wilcoxon signed rank test due to violations of normal distribution. Differences in correct answers were compared for each question for each year of study using the chi-square test, and differences in mean pre- and post-assessment scores by characteristic were also compared using the chi-square test. Results were considered significant for p<.05. Missing values were excluded from analyses, and outliers were retained.
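As an illustration of how the paired and categorical comparisons described above can be run, the sketch below uses SciPy with hypothetical data; it is not the authors’ analysis code, and the scores and counts are assumptions for demonstration only.

```python
# Illustrative sketch of the statistical comparisons described above, using
# hypothetical data; not the study's actual analysis code or dataset.
from scipy.stats import wilcoxon, chi2_contingency

# Hypothetical paired knowledge scores (0-100) for the same students pre and post.
pre_scores = [57, 64, 71, 50, 79, 64, 86, 71, 64, 79]
post_scores = [79, 79, 86, 71, 86, 79, 93, 86, 71, 93]

# Wilcoxon signed rank test for non-normally distributed paired differences.
stat, p_value = wilcoxon(pre_scores, post_scores)
print(f"Wilcoxon signed rank: statistic={stat:.1f}, p={p_value:.4f}")

# Chi-square test comparing correct vs incorrect answers on one question across
# the three class years (rows = P1/P2/P3, columns = correct/incorrect counts).
contingency = [
    [150, 30],   # P1
    [120, 20],   # P2
    [115, 27],   # P3
]
chi2, p, dof, expected = chi2_contingency(contingency)
print(f"Chi-square: chi2={chi2:.2f}, df={dof}, p={p:.4f}")
```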
Pre- and post-assessments were embedded within the simulation to be completed by students using an electronic survey via QuestionPro (QuestionPro Inc). The preassessment included questions to gather demographic information and baseline knowledge related to the educational topic. The post-assessment included the same knowledge-based questions as well as items addressing students’ self-assessment of confidence and items on students’ perceptions of the learning activity.
The knowledge assessment included 14 multiple-choice questions that were developed by the faculty and pilot tested prior to the study. Each question had four answer options and was worth one point for a correct answer and zero points for an incorrect answer. Answers to the questions were not provided, to decrease recall bias. The knowledge assessment questions captured variations of insurance adjudication problems seen in practice and were created to be similar to, but not direct copies of, the problems covered in the simulation cases. Total knowledge assessment scores were summed for each student, and percentages were calculated to compare differences between students’ pre- and post-assessments. All knowledge assessment questions were mapped to learning objectives, Bloom’s taxonomy levels, case descriptions, EPAs, and the Pharmacists’ Patient Care Process (PPCP) (Appendix 1).
Before and after the activity, students rated their confidence on a scale of 0-100 for five items related to their ability to process and handle insurance problems addressed by pharmacists (Table 3). The mean scores were calculated and compared before and after completing the training module. Additionally, four items from a previously published educational innovation perception tool were modified and used to measure overall perceptions of the simulation using a five-point Likert scale ranging from 1=strongly disagree to 5=strongly agree. 21 Student perceptions were assessed as frequencies and percentages of the Likert-scale questions.
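The confidence and perception summaries described above amount to straightforward descriptive statistics. The sketch below shows the intended calculations (mean pre/post confidence and Likert frequencies with percentages) in pandas using made-up responses; the values are not study data.

```python
# Illustrative summary of confidence and perception items using made-up data;
# not the study dataset.
import pandas as pd

# Hypothetical 0-100 confidence ratings for one item, before and after the simulation.
confidence = pd.DataFrame({
    "pre":  [40, 55, 30, 62, 48, 70, 35, 58],
    "post": [68, 75, 60, 80, 66, 85, 55, 74],
})
print("Mean confidence pre/post:", confidence["pre"].mean(), confidence["post"].mean())
print("Mean change:", (confidence["post"] - confidence["pre"]).mean())

# Hypothetical five-point Likert responses for one perception item
# (1 = strongly disagree ... 5 = strongly agree), summarized as counts and percentages.
likert = pd.Series([5, 4, 4, 5, 3, 4, 5, 2, 4, 5])
counts = likert.value_counts().sort_index()
percentages = (counts / len(likert) * 100).round(1)
print(pd.DataFrame({"n": counts, "%": percentages}))
```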
RESULTS
The study participants (N=462) were grouped by their year of academic study as P1, P2, and P3 students. Table 1 displays the study population characteristics. Overall, several expected significant differences existed between the groups when compared by year of study; for example, P3 students were more likely to have completed insurance processing training (32%, compared to 19% for P1 and 28% for P2 students). The P1 students were less likely to have worked in any pharmacy setting (88%, compared to 95% of the P2 and P3 cohorts; p<.001). Among students who indicated they had processed insurance rejections, differences existed between the groups (p=.001); most first-year students indicated they had done this fewer than 20 times, while P3 students more frequently indicated they had done this more than 100 times. Unexpected differences involved uninsured rates and race and ethnicity: P2 students were less likely to have been uninsured and were more likely to be White.
Table 1. Characteristics (N=462) of Pharmacy Students Who Participated in an Insurance Adjudication Simulation Conducted Across Multiple Institutions and Levels of Learners
The pre- and post-assessments among pharmacy students by year of study (P1, P2, P3) showed a significant increase in knowledge across all groups for the total assessment score (Table 2). The largest improvement was seen in the P1 cohort. The data for each individual question show that students significantly improved on most of the 14 knowledge questions, with variations for certain questions between the groups (Appendix 2). The P1 students displayed significant improvements on 10 of the 14 knowledge assessment questions, P2 students on seven, and P3 students on eight. For all groups, significant improvements were seen on knowledge assessment questions 1, 2, 3, 5, and 6. On the post-assessment, question 9, which addressed determining a patient’s co-pay, was answered with 100% accuracy by P1 students.
Table 2. Mean Performance on Insurance Adjudication Knowledge Pre- and Post-assessmentsa by Pharmacy Student Year (N=462)
Students in all three years were significantly more likely to answer question 4 incorrectly after the training. For this question, students were given a case of an antibiotic ear drop with the rejection message “Max daily dose exceeded: Max daily dose = 0.5 mL/day.” The correct resolution was to adjust the day supply from seven days to 10 days per the insurance requirement, even though the prescription read “for 7 days.” In total, 66.8% of P1 students, 43.8% of P2 students, and 43.2% of P3 students chose to call the prescriber for a medication change rather than adjust the day supply (the correct answer).
Additionally, question 7 for both P2 and P3 students and question 12 for P2 students showed a decrease in correct answers on the post-assessment, although the decreases were not significant. Question 7 referred to processing a rosuvastatin prescription with a rejection that read “Prior authorization required. Nonpreferred product.” Students should have recognized that the insurance required an alternative statin from the same class; however, many students selected the option that the entire medication class would require prior authorization (27.1% of P2 and 20.6% of P3 students on the post-assessment). The P2 students provided fewer correct answers for question 12 after completing the simulation, which focused on a patient acquiring medication through a patient assistance program (from 75% correct on the preassessment to 71.9% correct on the post-assessment). The majority still chose the correct answer, ie, a patient assistance program, but a manufacturer coupon (22%) was the next most commonly chosen answer on the post-assessment.
The P1, P2, and P3 cohorts demonstrated significantly different preassessment average scores when assessed per question. This difference was no longer significant on the post-assessment for knowledge question 1 (p=.003 vs .37), question 5 (p≤.001 vs .699), question 9 (p=.004 vs .119), and question 10 (p≤.001 vs .124). Differences in mean pre- and post-assessment scores by characteristic showed variation for students who had previously received formal training in processing insurance rejections or had personally processed an insurance rejection. Students reporting previous experience processing insurance rejections demonstrated significantly greater knowledge on the preassessment (73.4%, SD=16.9) and the post-assessment (80.2%, SD=12.9) and had higher confidence scores before and after the activity (58.6% [SD=25.3] and 72.9% [SD=20.3]) compared to their colleagues without previous experience (p<.001). Among students reporting experience processing insurance rejections, those answering “20 or more times” had higher pre- and post-assessment means for both knowledge and confidence than those who answered “less than 20 times” (p<.001).
Knowledge questions that focused on a lower level of Bloom’s taxonomy (application), on the early stages of the PPCP, and on the EPA practice manager’s task to “fulfill a medication order” demonstrated more significant improvement after the virtual simulation (Appendix 1). Questions 8 and 9, which were linked to the PPCP assessment and plan steps, improved for all groups but only significantly for the P1 group. Notably, the P2 and P3 students scored high on these questions in the preassessment, making a significant change difficult. Questions 6 and 13 were rooted in the PPCP steps of implementing the care plan and monitoring and evaluating its effectiveness and were written at the highest level of Bloom’s taxonomy (creation); each showed an increase, but only question 6 increased significantly.
Confidence self-ratings from before to after the simulation increased significantly among the students in all groups. The largest increases were seen among P1 students for all indicators. Interestingly, improvements differed by category for P2 and P3 students: P2 students had higher gains for items 2, 3, and 5, while P3 students had larger increases for items 1 and 4 (Table 3).
Table 3. Change in Mean Student Confidencea in Processing Insurance Rejections Pre- and Post-assessment for the Insurance CYOA Simulation by Pharmacy Student Year
Results on student perceptions using the modified perception scale were positive. Most students (85%) agreed or strongly agreed that the virtual insurance simulation encouraged them to think about the material in a new way. Additionally, 91.7% of the participants agreed or strongly agreed that they would recommend this activity to other students, and 87.8% agreed or strongly agreed that it was an effective way to learn new information. More variation was seen in students’ perception of the statement “I learn better in this format than in a classroom lecture,” with 14.4% in disagreement, 24.8% neutral, and 60.8% in agreement. This variation may reflect students’ general comfort with lecture-based learning rather than a response specific to this educational topic or innovation.
DISCUSSION
The virtual insurance simulation was transferable to multiple institutions, was delivered virtually, and increased students’ knowledge and confidence in all program years (P1, P2, and P3). The structure allowed students to learn as they proceeded through the cases in the PowerPoint slideshow, requiring minimal faculty facilitation. The simulation and associated materials are available for download at https://tinyurl.com/4cezwpzt as article supplements to encourage seamless transferability for interested faculty. These materials include the complete simulation file (Supplement 1), the student worksheet (Supplement 2), a learning management system announcement posting with a description of the activity (Supplement 3), and additional case materials (Supplement 4). The activity did not require grading and instead provided immediate, formative feedback throughout the simulation.
Participants’ knowledge on the overall 14-question knowledge assessment significantly increased between the pre- and post-assessment for all program years. This may indicate that programs can place the simulation wherever it fits best within their curriculum. Students with formal training or IPPE experience consistently displayed higher knowledge scores on both the pre- and post-assessments than their peers, as did students with more insurance-processing experience compared with those with less. Interestingly, group comparisons by question displayed more significant variation on the preassessment than on the post-assessment for certain questions. This may mean that the activity can help students with no or limited experience in claim adjudication attain a knowledge base similar to that of their more experienced peers; however, this study was not designed to demonstrate that.
Of note, each group displayed worsening scores on certain questions, of which question 4 was significantly worse (Appendix 2). This question was developed to assess simulation learning objective 1, namely “Describe third-party entry into an electronic dispensing system.” While students performed well on questions 1 through 3 for this learning objective, question 4 was a practical question on day supply (Appendix 2). Day supply has been taught in skills-based laboratory courses across multiple institutions using the duration of therapy (ie, for seven days) as the day supply, since dispensing in the laboratory setting typically assumes cash-paying patients without the ability to process claims. Future iterations will include examples of how educational settings can differ from the realities of practice.
Simulation learning objective 2, “Identify errors for third-party processing adjudication claims,” was aligned with questions 5, 9, and 10. All student groups significantly improved on question 5, and P1 students significantly improved on all three questions. The P2 and P3 students improved on questions 9 and 10, but only question 10 was significantly improved for P3 students. This variation in significance was largely due to high preassessment scores, with all post-assessment scores above 97% for these two questions.
Simulation learning objective 3, “Develop resolution for third-party processing adjudication claims rejections,” was assessed with questions 7, 8, 11, 12, and 14. The P1 students improved significantly on all knowledge questions mapped to this learning objective except question 12. The P2 students significantly increased their scores only on question 11, and P3 students only on questions 11 and 14. Question 7 showed a decrease for P2 and P3 students and involved an extrapolation error similar to that seen in question 4, where students applied a specific case example to the knowledge assessment. For question 7, students were to call the provider and change the medication within the class, but many chose the option stating that a prior authorization would be needed, as they had determined in the simulation case. Additional education on determining when a prior authorization or a medication change is appropriate will be included in the future.
Simulation learning objective 4, “Discuss recommendations to the prescriber and/or patient,” was assessed with questions 6 and 13. All student groups improved significantly on question 6 and increased, though not significantly, on question 13.
Confidence increased significantly across all groups between the pre- and post-assessments for all items (Table 3). Mean post-assessment confidence ranged from approximately 50 to 80 on the 0-100 scale, with 100 signifying complete confidence. As a pre-APPE simulation, a one-time immersion in third-party claims adjudication would not be expected to produce complete confidence at any level of learner. Furthermore, while student self-reported confidence is not always a predictor of competency, when self-ratings are used as an adjunct to competency or knowledge gains, they contribute a critical element to evaluating educational interventions. 22
Students’ perspectives on the educational activity were overall positive, with disagreement arising only when students compared the simulation to traditional lectures. Still, over half of the students agreed that they learned better in this format. With 92% of students recommending this activity to other students, faculty could consider using the virtual insurance simulation with immediate, formative feedback.
This study is not without limitations. The knowledge assessment questions were created to be similar to the patient cases, incorporating variations seen in practice, rather than direct copies. However, statistical analysis of individual questions revealed gaps between the simulation and the knowledge assessment. Upon review, the authors suggest that future offerings consider additional information on common dispensing adjudication messages. Knowledge and skill retention and IPPE/APPE readiness were not objectives of this study and were, therefore, not measured. These would be valuable avenues for future research.
This activity requires no additional cost and minimal faculty resources and is transferable to multiple institutions. Future cohorts at each program will continue to use this virtual simulation. Programs will see the largest impact with P1 students, but adding this educational intervention within any portion of the didactic curriculum will benefit both student knowledge and student confidence in insurance adjudication skills.
CONCLUSION
The virtual insurance adjudication simulation was implemented across P1, P2, and P3 students at four institutions. Knowledge assessments significantly improved across all three groups of students, while analysis between each question demonstrated variations in student performance. Student confidence increased across all groups and all confidence items. Participants would recommend this activity to other students and felt it was an effective way to learn about insurance adjudication.
Appendix 1
Objectives of the Choose Your Own Adventure Insurance Processing Simulation for Pharmacy Students Mapped to Learning Outcomes and Assessments
Appendix 2
Percentage of Correct Answers on the Knowledge Pre- and Post-assessmenta by Learning Objective, Question, and Pharmacy Student Year
Received June 3, 2021. Accepted October 15, 2021.
© 2022 American Association of Colleges of Pharmacy