Abstract
Objective. To assess the effectiveness of optional online quizzes written by peer tutors in a pharmacology course for doctor of pharmacy students.
Methods. Online quizzes were written by peer tutors for second-year pharmacy students. Quizzes reflected the material taught during lecture and were in a format similar to that of the examinations. Data related to performance on each quiz and each examination were collected throughout the semester. At the end of the semester, students and peer tutors were surveyed to gather information on the utility and success of the quizzes.
Results. Students taking online quizzes performed significantly better on examinations than those who did not take quizzes. In addition, students scored higher on examinations than on the corresponding practice quizzes. Survey responses suggest that students liked the quizzes and felt they increased their confidence and performance on examinations.
Conclusion. The quizzes were beneficial to student performance on examinations as well as student perception of performance and confidence going into the examinations. Quizzes were also beneficial learning experiences for peer tutors.
INTRODUCTION
Students are continually seeking study tools and materials that utilize new formats and emerging technologies as a means of delivering content.1 Using emerging technologies to provide learning resources, such as online programs and course management systems, is useful and popular among students.2 Self-testing, a type of formative assessment, provides students with a study tool that can help identify areas of weakness that require focus. Additionally, the availability of formative assessments such as self-testing contributes to the development of skills related to self-directed learning.3 This type of engagement in course material is categorized as active learning, which can promote life-long learning activities.4 In the short term, self-testing is linked to increases in both individual course examination scores and overall academic performance.4,5 Repetition and frequent use of quizzes have also been linked to better retention of course material.6
The combination of self-testing and technology through the use of online quizzes is popular among students as a study method and effective at increasing examination scores.4 This method provides students with instant access to study materials that they can use on their own schedule and at their own learning pace. It also allows students to receive instant feedback on their performance, including correction of misconceptions and identification of areas for focus. Despite their usefulness, these additional learning resources can be time consuming for faculty members to produce, limiting their successful use across multiple courses. Creating a database of questions, promoting material use, and monitoring student scores place an additional burden on already over-taxed faculty members.
Peer tutors in higher education can serve as resources for student learning. The use of peer tutors provides advantages to both students and tutors. Peer tutoring can significantly increase student performance compared with performance when no peer tutoring is offered, and it is at least as effective as faculty-led tutoring, in some cases more so.7 Additionally, students may feel more comfortable asking for help from peers than from faculty members, and peer tutors, having also taken the course, may better identify areas of difficulty from a student perspective.8 Peer tutors themselves can benefit by reinforcing their knowledge of the material they are teaching and increasing their overall confidence.9,10 Often, peer tutors take on this role because of their interest in academia and may benefit from gaining experience with teaching and other instructional methods. Educating peer tutors to be involved in the academic process by preparing study materials for students, such as self-testing quizzes, may increase their involvement in the learning and teaching experience.
At the Wegmans School of Pharmacy, peer tutors were underutilized, so a program was developed to involve them in the development of online weekly quizzes made available to students as an optional formative assessment tool for use in examination preparation. The primary objectives of this study were to use peer tutor-generated formative assessments to improve student performance on examinations and to increase student confidence and perceived readiness for examinations. We hypothesized that use of the quizzes would improve examination performance and favorably affect the students’ confidence and perceived readiness for examinations. Further goals of the described intervention were to increase the use of peer-tutoring services and increase the value of the peer-tutoring program.
METHODS
Online self-testing was conducted in a team-taught, 4-credit, second-year (P2) pharmacology course. Formal course assessments included four noncumulative semester examinations, each worth 18% of the final course grade, and one final examination worth 28% of the final course grade. The final examination consisted of 25% noncumulative material and 75% cumulative material. All examinations contained a mixture of multiple-choice, fill-in-the-blank, and short-answer questions. Prior to implementation, some review materials were available to students (which varied by tutor), but no formal practice quiz or examination questions were provided.
The peer tutor program had formally been in place for one year prior to using online self-testing. Peer tutors for each class are students one to two years ahead in the curriculum, chosen by faculty members after an application process. In previous semesters, peer tutors held office hours every week and conducted review sessions prior to most examinations. After observing the underutilization of peer tutors, two third-year (P3) students were chosen to implement changes in the P2 pharmacology course. Peer tutors were still required to hold two office hours per week and to conduct review sessions prior to most examinations. In addition to these requirements, peer tutors prepared one 10-question quiz per week (five questions each) based on material taught in the class that week. Peer tutors were provided with all course materials as well as access to the course Blackboard online learning site (Blackboard Inc., Washington, DC). At the beginning of the semester, peer tutors met with the course coordinator, reviewed the requirements for building online quizzes, and were provided with examination question writing resources. During the semester, peer tutors communicated with course faculty members on a weekly basis.
Online quizzes were developed and conducted using ExamSoft online testing software (ExamSoft Worldwide, Dallas, TX), which is used for all course examinations. The format of the quizzes was developed to be as similar to course examinations as possible. Questions consisted primarily of multiple-choice and fill-in-the-blank questions. Peer tutors were provided restricted access to ExamSoft to enter questions into the database each week. Along with questions, tutors also provided a rationale for the correct and incorrect answers. After tutors entered the questions, faculty members teaching the related content reviewed them. To maintain the integrity of the peer tutor-generated content, questions were not directly edited by faculty members. Instead, an internal feedback mechanism in ExamSoft was used by faculty members to communicate with tutors regarding the accuracy, clarity, or quality of questions. Peer tutors were then responsible for editing the content based on feedback. Using this mechanism, faculty members were able to ensure that quizzes and examinations were consistent in their difficulty, coverage of course learning objectives, and format. Although quizzes reflected the topics covered on examinations, there were no identical questions or questions with significant overlap.
After reviewing quiz questions, faculty members assembled the questions into a weekly quiz and made the quiz available to students. Quizzes were voluntary and could be taken at any time and in any location; students only needed access to a computer. Once a quiz was made available, students had access to it until the date of the corresponding examination. Students were able to take each quiz up to two times and, after completion, could see their quiz score and review all questions and correct answers, as well as the rationale for correct and incorrect answers. Similar to the format of the course examinations, the quizzes were secured, meaning that students were not able to access the Internet or other computer programs while taking the quiz. However, to maximize the accessibility and learning benefit of the quizzes, students were able to use class notes or other printed resources while taking the quizzes, as desired. In addition, while in-class examinations were 50 minutes in length (the final examination was 120 minutes), students had unlimited time to complete the quizzes. Students were also able to track their performance on quizzes throughout the semester. Most course examinations had 2-3 corresponding tutor-generated quizzes associated with the same material.
During the semester, the following data from quizzes and examinations were collected for analysis: average class performance for the assessment, individual scores for each student, and the percent of students who answered each question correctly. Each quiz and examination question was mapped to a course learning outcome so that performance could be examined both in aggregate and by course topic. In addition to quiz and examination scores, information was collected on the use of peer tutor services, including number of students taking each quiz, number of visits to office hours, and attendance at review sessions.
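To illustrate the structure of these data, the question-to-outcome mapping and topic-level aggregation can be sketched as follows. This is a minimal sketch using pandas; the table layout, column names, and values are illustrative assumptions rather than the study's actual data schema.

```python
# Minimal sketch (pandas assumed): map per-question results to learning
# outcomes, then aggregate percent-correct by outcome and assessment type.
# Column names and values are illustrative, not the study's actual data.
import pandas as pd

questions = pd.DataFrame({
    "question_id": [1, 2, 3, 4],
    "assessment":  ["quiz", "quiz", "exam", "exam"],
    "outcome":     ["LO4", "LO5", "LO4", "LO5"],   # mapped course learning outcome
    "pct_correct": [72.0, 68.5, 81.0, 76.4],       # % of students answering correctly
})

# Average performance per learning outcome, split by assessment type
by_outcome = (questions
              .groupby(["outcome", "assessment"])["pct_correct"]
              .mean()
              .unstack("assessment"))
print(by_outcome)
```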
To measure student perceptions of the online quizzes, students were given a voluntary survey at the end of the semester. The survey was administered using Qualtrics online survey software (Qualtrics LLC, Provo, UT). Using a numeric scale from 1-5 (1=strongly disagree; 2=disagree; 3=neutral; 4=agree; 5=strongly agree), students were asked to rate their level of agreement with items related to the value of the online quizzes, the influence the quizzes had on study technique, perceptions of quiz value in increasing confidence and performance on examinations, how well the quizzes reflected examination material, whether the quizzes should continue to be used, and whether the quizzes increased the likelihood of using other tutoring services in the future. Peer tutors were surveyed using the same scale to measure their perceptions of the experience. Peer tutors rated the overall peer instruction experience, as well as their perception of the added benefit to students of completing online quizzes. With respect to writing quiz questions, peer tutors were asked to rate the value of the experience in reinforcing content knowledge and providing teaching experience.
All data were analyzed using Microsoft Excel and SigmaPlot (Systat Software, San Jose, CA). Four major comparisons were made using percentage scores from the 12 quizzes and five examinations. This comparison was suitable because quizzes and examinations both measure content knowledge, and considerable effort was made to ensure consistency between the two types of assessments. Only data from the first quiz attempt were used for analysis. Students completing at least 50% of the 2-3 quizzes for a given examination were considered "quiz takers" for that examination; students completing less than 50% were considered "non-quiz takers" for that examination. For the semester average, individuals taking 50% or more of the quizzes during the entire semester were considered quiz takers. With the exception of two comparisons, all data were distributed normally. When data were not distributed normally, nonparametric tests were used, as indicated below.
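For illustration, the 50% classification rule can be expressed programmatically. The sketch below assumes a pandas table of first-attempt quiz records; the table, its column names, and the quiz count are hypothetical.

```python
# Minimal sketch of the 50% "quiz taker" classification rule.
# One row per student per quiz (first attempts only); data are illustrative.
import pandas as pd

attempts = pd.DataFrame({
    "student": ["A", "A", "B", "C", "C", "C"],
    "quiz":    [1, 2, 1, 1, 2, 3],
    "score":   [80.0, 75.0, 60.0, 90.0, 85.0, 88.0],
})

quizzes_for_exam = 3  # e.g., 2-3 quizzes preceded each examination

taken = attempts.groupby("student")["quiz"].nunique()
# "Quiz takers" completed at least 50% of the quizzes for the examination
is_quiz_taker = taken >= 0.5 * quizzes_for_exam
print(is_quiz_taker)  # A: True (2/3), B: False (1/3), C: True (3/3)
```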
The first analysis sought to determine whether quiz takers performed better on examinations than non-quiz takers. To do this, average examination scores for quiz takers were compared with average examination scores of non-quiz takers. Scores were examined individually and as an aggregate for the whole semester. To determine significance, a t test was used. Second, to determine whether individual students performed better on examinations than on the related quizzes, quiz and examination scores for each quiz taker were recorded and paired analyses were conducted. In this manner, each student's average quiz score was compared with his or her score on the related examination. To determine the significance of any differences in performance between quizzes and examinations, a paired t test was used for normally distributed samples and the Wilcoxon signed rank test was used in the two instances when data were not normally distributed.
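These two comparisons could be reproduced in a scripting environment as sketched below. The study used SigmaPlot; SciPy is shown here as an assumed substitute, with illustrative scores and a Shapiro-Wilk test standing in for the normality assessment.

```python
# Minimal sketch of the two comparisons described above, using SciPy.
import numpy as np
from scipy import stats

# (1) Quiz takers vs non-quiz takers: independent-samples t test
takers     = np.array([88.0, 91.5, 85.0, 90.0, 87.5])
non_takers = np.array([82.0, 85.5, 84.0, 81.0])
t_ind, p_ind = stats.ttest_ind(takers, non_takers)

# (2) Each quiz taker's average quiz score vs related exam score (paired)
quiz_avg = np.array([78.0, 82.5, 71.0, 90.0, 85.5])
exam     = np.array([85.0, 88.0, 80.5, 93.0, 91.0])
diffs = exam - quiz_avg
if stats.shapiro(diffs).pvalue > 0.05:              # differences look normal
    t_pair, p_pair = stats.ttest_rel(exam, quiz_avg)  # paired t test
else:
    t_pair, p_pair = stats.wilcoxon(exam, quiz_avg)   # nonparametric fallback
print(f"between-groups p={p_ind:.4f}, paired p={p_pair:.4f}")
```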
The third comparison used course-level mapping data for each question. All quiz and examination questions were mapped to course learning outcomes and performance was compared to examine whether the quizzes affected examination performance similarly for each learning outcome. Differences in performance were examined for significance using a t test. Finally, to determine if there was a difference in the benefit to quiz takers in the top and bottom of the class, a comparison was made between quiz and examination performance for the top 25% and lowest 25% of performers in the overall course (based on final course grades). A mixed-effects analysis of variance (ANOVA) was used to determine significance. Examination and quiz scores are all expressed as the average with the 95% confidence interval. For all statistical analyses, a p value less than 0.05 was considered significant.
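The mixed (between-within) design of the final comparison can be sketched as follows, using the pingouin package's mixed_anova function as an assumed substitute for SigmaPlot; the long-format data are illustrative. A nonsignificant quartile-by-assessment interaction would indicate that the quiz-to-examination gain did not differ between the top and bottom quartiles.

```python
# Minimal sketch of the quartile (between) x assessment type (within)
# mixed ANOVA. Data are illustrative, in long format (one row per
# student per assessment type).
import pandas as pd
import pingouin as pg

long = pd.DataFrame({
    "student":    ["A", "A", "B", "B", "C", "C", "D", "D"],
    "quartile":   ["top25", "top25", "top25", "top25",
                   "bottom25", "bottom25", "bottom25", "bottom25"],
    "assessment": ["quiz", "exam"] * 4,
    "score":      [85.3, 93.7, 86.0, 94.0, 74.1, 80.8, 73.0, 80.0],
})

aov = pg.mixed_anova(data=long, dv="score", within="assessment",
                     subject="student", between="quartile")
# The "Interaction" row tests whether the quiz-to-exam gain differs
# between the two quartiles.
print(aov[["Source", "F", "p-unc"]])
```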
For both surveys, the average response to each item was tabulated. All values are expressed as mean ± standard deviation (SD), and the percent of students responding strongly agree or agree (SA/A) is reported. This study was approved by the St. John Fisher College Institutional Review Board as exempt from review.
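The survey summary statistics reduce to a short computation, sketched below with illustrative responses (NumPy assumed).

```python
# Minimal sketch of the per-item survey summary: mean, sample SD, and
# percent responding 4 (agree) or 5 (strongly agree). Data are illustrative.
import numpy as np

item_responses = np.array([5, 4, 5, 3, 4, 5, 5, 4])  # one item, 1-5 scale

mean = item_responses.mean()
sd = item_responses.std(ddof=1)                  # sample standard deviation
pct_sa_a = 100 * np.mean(item_responses >= 4)    # % strongly agree/agree
print(f"{mean:.2f} ({sd:.2f}); SA/A = {pct_sa_a:.0f}%")
```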
EVALUATION AND ASSESSMENT
Seventy-eight students were enrolled in the course in which the quizzes were used. Over the course of the 15-week semester, 12 quizzes were made available (2-3 quizzes per examination). All but four students (5%) took at least one quiz during the semester. This represents a large increase in the use of peer instruction services, from fewer than 10 students in previous semesters to 74 students in the test semester. The average number of quizzes taken by each student was 8.5 (3.9), with 21 students (27%) taking all 12 quizzes. On average, 70% of the students took each quiz. This number dropped slightly toward the end of the semester but was never below 50%. Students taking quizzes were equally distributed among high, middle, and low course performers. Students often took quizzes within one day of the examination, and approximately one-third of quiz takers took each quiz twice. Table 1 summarizes the design, utilization, and overall performance on quizzes and examinations during the semester. Quizzes were designed to be as close in format and difficulty to the examinations as possible without overlapping any specific questions.
Table 1. Assessment Parameters for Online Quizzes and Course Examinations
Data collected from quizzes and examinations were analyzed in a number of ways to determine the impact of quizzes on student performance. First, performance on quizzes was compared with performance on examinations. Examination averages ranged from 83.2% to 89.5%, with an average examination score for the semester of 87.4 (0.66%). The average quiz score was 79.9 (1.0%), with individual quiz averages ranging from 56.5% to 86.7%. As multiple quizzes were given during most examination periods, quiz performance was averaged for each examination period for further analysis. For every examination, performance was better than on the corresponding quizzes (p<0.001 for examinations 1, 2, 4, and 5; p<0.05 for examination 3; Figure 1). Differences between examination and quiz performance ranged from 2.4% (examination 3) to 26.7% (examination 5). The average examination score was 7.5% higher than the average quiz score.
Figure 1. Quiz Performance vs Examination Performance. Quiz averages were compared with examination averages for each individual examination and for all five examinations combined. Sample size for quiz takers ranged from 40-62, depending on the examination. For the semester average, n=62. Data are expressed as average with the 95% confidence interval. *p<0.05; **p<0.001
Next, the examination performance of quiz takers was compared with that of non-quiz takers (Figure 2). Examination performance of quiz takers was significantly higher than performance of non-quiz takers on three of the five semester examinations (1, 2, and 4; p<0.05), with quiz taker scores averaging 4.5% higher. Additionally, the semester examination average was significantly higher for quiz takers (88.2%) than for non-quiz takers (83.7%; p<0.001).
Figure 2. Examination Performance of Quiz Takers vs Non-quiz Takers. Quiz and examination performance was compared between quiz takers and non-quiz takers for each individual examination and for all five examinations combined. Individuals taking 50% or more of the quizzes for each examination were considered quiz takers for that examination. For the semester average, individuals taking 50% or more of the quizzes during the total semester were considered quiz takers. Sample size ranged from 40-62 for quiz takers and from 14-38 for non-quiz takers, depending on the examination. For the semester average, n=62 (quiz takers) and n=16 (non-quiz takers). Data are expressed as average with the 95% confidence interval. *p<0.05; **p<0.001
To assess the impact of quizzes on the course learning outcomes, all quiz and examination questions were mapped to the related outcomes. Because course learning outcomes varied in content and difficulty level, quiz and examination performance could be compared for each individual outcome (Figure 3). Examination performance was significantly higher than quiz performance for learning outcomes 2, 4, and 5 (p<0.05). These learning outcomes pertain to pathophysiology, drug mechanisms of action and side effects, and pharmacokinetic and pharmacodynamic properties, respectively. The average difference between quiz and examination scores for each learning outcome was 4.6%. Examination and quiz performance were similar for outcomes related to anatomy and physiology (learning outcome 1) and medicinal chemistry (outcome 3).
Figure 3. Quiz and Examination Performance on Each Course Learning Outcome. Quiz averages were compared with examination averages for each course learning outcome. The learning outcomes are as follows: (1) Describe the anatomy and physiology of the endocrine system, pain pathway, gastrointestinal system, and the sensory system (eg, eyes, ears, nose, skin); (2) Describe the pathophysiology and representative symptoms of endocrine, pain, gastrointestinal, and sensory disease states; (3) Identify the medicinal chemistry that characterizes drugs belonging to each of the pharmacological classes presented; (4) Based on site and mechanisms of action, predict the therapeutic and side effects associated with pharmacological agents used to treat endocrine, pain, gastrointestinal, and sensory disease states; (5) Based on pharmacokinetic and pharmacodynamic properties, explain why certain pharmacological agents are preferred for treating endocrine, pain, gastrointestinal, and sensory disease states. Sample size for quiz questions ranged from 6-38 per learning outcome. Sample size for examination questions ranged from 13-82 per learning outcome. Data are expressed as average with the 95% confidence interval. *p<0.05
Finally, differences in quiz and examination scores were compared between the highest 25% and lowest 25% of scorers in the course to see if the differences observed could have been a result of the population of students taking the quizzes. Relatively equal numbers of the top, middle, and bottom performers in the course were part of the quiz taker population. However, the top 25% of scorers took more quizzes on average (11.4) compared to the bottom 25% (9.4; p<0.01). Average quiz performance for the top 25% of scorers in the course was significantly higher than average quiz performance for the bottom 25% (85.3% and 74.1%, respectively, p<0.001). Despite this, there was no significant difference in the benefit of quizzes between the two populations. On average, the top 25% of the class had examination scores 8.4% higher than quiz scores, while the bottom 25% had examination scores 6.7% higher than quiz scores (p=0.485).
Results of student and peer tutor surveys were positive. Thirty-eight of the 74 students who utilized quizzes completed the voluntary survey (52% response rate; Table 2). A majority of responding students perceived the quizzes to be beneficial, with 95% strongly agreeing or agreeing that the quizzes were a valuable resource, 92% that the quizzes increased confidence, and 81% that the quizzes increased examination performance. In addition, 73% of responding students said the quizzes influenced how they studied for examinations, and 92% felt that the quizzes accurately reflected the material that was on the examinations. Supporting their continued use, 98% strongly agreed or agreed that the quizzes should be continued in future semesters, and 69% said using the quizzes would increase their likelihood of using other tutoring services (eg, office hours, review sessions, or review sheets). Attendance at other peer tutor services supports this last response: attendance at both tutor office hours and tutor-led review sessions increased during the studied semester. Visitors to office hours rose from fewer than five in previous semesters to approximately 10 in the studied semester. Attendance at review sessions was not closely monitored, but was only 5-10 students in previous semesters; peer tutors in the studied semester reported groups of 15-25 students.
Table 2. Student Perceptions of Online Quizzes (N=38)
Peer tutors provided similar feedback related to the benefit of quizzes on students they were tutoring (Table 3), with tutors strongly agreeing that the quizzes were a valuable resource for students. Perhaps equally important is the impact of the question-writing experience on the peer tutors themselves. Peer tutors were provided with numerous resources and were mentored by course faculty members on a continuous basis. Question feedback was left for peer tutors in ExamSoft so they could review their performance each week. On the survey, peer tutors strongly agreed or agreed that writing the quizzes increased their knowledge of the course topics covered and that learning to write questions was a valuable experience. Finally, tutors reported that their experience as peer tutors was excellent.
Table 3. Peer Tutor Perceptions of Online Quizzes (N=2)
DISCUSSION
Self-testing has been used with success in higher education for years.4,5,11,12 While previous studies independently discussed the benefits of self-testing and peer tutoring, this study combined the approaches by illustrating the advantage of utilizing peer tutors to develop a formative assessment resource. Additionally, the comparisons made allowed for a deeper analysis of the impact of self-testing on course performance. Finally, this study describes an easily implemented program to increase self-directed learning and formative assessment in pharmacy curricula.
We observed many quantitative benefits for students from the use of online quizzes. In general, class performance on examinations was higher than performance on practice quizzes, both on individual examinations and in the semester averages. This pattern may reflect students using the quizzes to test their preparedness on examination material. Student survey data support this, as 73% of students reported that the quizzes influenced the way they studied for examinations. Using the quizzes as practice identified areas of strength and weakness, leading to better performance when the students encountered similar topics on the examinations. Additionally, repeated quizzing is beneficial as a type of formative assessment. For example, Roediger and colleagues demonstrated that students who took quizzes to reinforce material retained information better than those who read or undertook other traditional study techniques alone.13,14 This "test-enhanced learning" improved learning in a number of settings, especially when used after studying to gauge knowledge and practice recall.15,16 Larsen et al documented this long-term impact even in postgraduate neurologists and also found it superior to repeated studying alone.16
The benefit of using formative self-assessments is also seen when comparing populations of students who take quizzes and those who do not routinely use quizzes. Examination scores for quiz takers averaged 4.5% higher than those for non-quiz takers. Even though the sample size of the non-quiz taker population was small, this difference was still significant. With the plus/minus grading system used in the course, 4.5% can mean an increase in a student's letter grade for the course. Despite the potential for a self-selection bias in the population, analysis revealed that the highest, middle, and lowest performers in the class were equally represented in the population of quiz takers. Additionally, all students appeared to have benefitted from the quizzes, as no significant difference was seen between the benefit to the highest and lowest 25% of scorers in the course. However, the sample size and power of the analysis may have limited the ability to detect a difference.
A third analysis was completed to assess the impact of quizzes on the different course learning outcomes. The pharmacology courses at Wegmans School of Pharmacy are offered as a 5-course sequence that covers anatomy and physiology, pathophysiology, and medicinal chemistry, in addition to the traditional pharmacology topics. To reflect this content, the learning outcomes are broken down into five categories (Figure 3). Pharmacology is divided into two learning outcomes, reflecting, first, the mechanisms of action, therapeutic effects, and side effects, and, second, the pharmacokinetic and pharmacodynamic properties of the drugs. In this pharmacology course, the areas students struggle with most are those related to pharmacology (learning outcomes 4 and 5). When analyzing the benefit of quizzes on content broken down by learning outcome, we observed that the largest gains were in these two learning outcomes. Gains were also high for the pathophysiology content. These data indicate that quizzes may be most beneficial to students on the most difficult material.
Benefit to student assessment scores is important, but perhaps equally important is the way students perceived the resource. Based on usage alone, a majority of the class found the quizzes useful. Although usage declined slightly toward the end of the semester, it stayed above 50%. Survey responses from students led to similar conclusions: 95% of responding students felt that the quizzes were valuable, and 98% recommended continuing to offer the quizzes. The ultimate goal of an educational intervention should be to improve learning outcomes, but perceived benefits to student performance and confidence are also important. Students reported that the quizzes not only increased their confidence but also made them feel that their performance was better. These perceptions may have been reinforced by the students' sense that the quizzes accurately reflected the examinations. Faculty members were careful when reviewing the quiz questions to ensure there was no direct overlap of content; however, they did make sure that the depth and difficulty of questions were comparable. If questions do not accurately reflect the examinations, students may not perceive as much of a benefit. Quiz usage and survey data also support the study goal of increasing the use of tutoring services. Students used tutoring services at a higher rate, valued the resource, and reported that they were more likely to use other tutoring services in the future. The value of the self-assessments, in terms of benefits for examination scores and increased confidence, was clearly apparent to the student population.
Although perceptions of the format of quizzes were not directly measured, the online format made the quizzes easily accessible and similar to the testing environment. Quizzes were taken using the same software as all in-course examinations. Students were allowed to take the quizzes at any time and location and were given unlimited time to complete them, but otherwise the format was identical to that of the examinations. This extra practice with both material and format may lead to a reduction in test anxiety.13 After completing the quizzes, students could use ExamSoft to review the quiz, read rationales for questions, and track their progress. This approach allows for self-reflection in addition to providing a low-stakes environment for students to make mistakes and learn from them. Making errors facilitates learning even if students are not aware of it, especially when mistakes are followed by corrective feedback,17 as was available in this study.
In addition to the benefit students gained from involving peer tutors in quiz development, question writing benefited the peer tutors themselves. Peer tutors gain confidence and knowledge from the tutoring experience,9,10 and by incorporating peer tutors into higher-level teaching activities, this benefit may be even more pronounced. Peer tutors received specific question-writing training at the beginning of the semester, as well as continuous feedback on questions throughout the semester. Feedback from peer tutors was positive. Peer tutors felt that writing the questions improved their knowledge of the topics and that learning to write questions was a valuable learning experience. Although the population of peer tutors was small (n=2), the insight is valuable. Future studies will more closely examine the impact of this program on larger populations of peer tutors.
While this study represents data from one semester of the self-assessment program, the results support its continued use. Despite the strengths and positive impact, some limitations do exist. Demographic information, with the exception of final course grades, was not collected from students in order to maintain anonymity in the small sample population. Characteristics of quiz takers and non-quiz takers may therefore have contributed to differences in scores. In addition, a psychometric validation of the survey instrument was not conducted. A third limitation is that, because of the team-taught nature of the course and turnover of faculty members, it was not possible to compare examination scores between the test semester and previous course offerings. This restricted comparisons to one semester and one cohort of students. While this approach avoided variations that could arise between semesters, it did not allow for a comparison of examination scores before and after implementation of the online self-assessments. Additionally, the program requires extra time of faculty members for training peer tutors, reviewing questions, and administering quizzes. The time required was heavier toward the beginning of the semester and decreased once both the peer tutors and faculty members became more experienced with the process. Overall, faculty members should expect a time investment of 30 to 45 minutes per week for question review and quiz distribution, although this is less than the time required to write the resource themselves.
Some challenges also arose with the peer tutors chosen for each course. A structured application process is recommended to ensure students are qualified and invested in the process. While all peer tutors will have to learn a new skill, some require more guidance than others. Finally, although efforts were made to make the quizzes and examinations consistent, there were obvious differences in the number of questions and the testing environment that may have contributed to differences in performance. Any differences resulting from this were expected to artificially inflate quiz scores (because of the accessibility of notes, unlimited time, and lower-stress environment); however, this was not observed.
The success of this program supports further implementation in other courses. In future semesters, the program will be implemented in all courses of the pharmacology sequence. This will allow for expanded analyses of the impact of quizzes on larger populations of students and at different levels of study. After full implementation, it will be interesting to examine whether the quizzes vary in their impact in different courses, different grade levels, and/or different colleges and schools of pharmacy.
CONCLUSION
The use of online self-testing resources created by peer tutors in this course was beneficial to students as a formative assessment of learning in graduate pharmacy courses. In addition, peer tutors themselves benefited from the skills obtained during question-writing exercises. In other courses where students struggle, or where peer tutors are already employed, the implementation of online self-testing may be a useful addition that benefits students and peer tutors alike.
ACKNOWLEDGMENTS
The authors wish to thank Dr. Jane Marie Souza for her technological support and assistance in program development and Dr. David Hutchinson for his guidance in study design and statistical analysis.
Received June 18, 2015.
Accepted October 31, 2015.
© 2016 American Association of Colleges of Pharmacy