Abstract
Objective. Faculty at Massachusetts College of Pharmacy and Health Sciences University’s School of Pharmacy-Worcester/Manchester are engaged in continuous quality improvement of their teaching and assessment methods to prepare students for successful careers in pharmacy. This study evaluated the impact of a formative mock examination on student performance on a main summative examination (main examination) administered in a pharmaceutical calculations course during the spring 2020 semester.
Methods. A retrospective analysis of student test scores on a summative assessment (main examination) was performed across two cohort years (2019 and 2020), in which a formative mock examination was not offered and was offered, respectively. Central tendency measures and comparative analyses were performed to assess differences in student performance.
Results. Of the 237 students enrolled, 221 participated in the optional mock examination, and all 237 participated in the main examination; average scores for the mock examination and the main examination were 67% and 94%, respectively. The 92 students who received a grade of C or better on the mock examination had a significantly higher main examination average score (98%) than those who received a D or F (n=129, average score of 92%). Further, the average score on the 2020 main examination was significantly higher than on the 2019 main examination, for which no mock examination was offered (94% vs 77%, respectively).
Conclusion. This was a descriptive, cross-sectional study of differences in student performance on a summative assessment across two cohort years, with and without a formative mock assessment. The results indicate that the formative mock examination was associated with better student performance but do not establish a causal relationship.
INTRODUCTION
The School of Pharmacy-Worcester/Manchester at the Massachusetts College of Pharmacy and Health Sciences University prepares students for careers in pharmacy through a 34-month accelerated Doctor of Pharmacy (PharmD) curriculum. The curriculum is divided into three academic terms (fall, spring, and summer) during the first two years of professional pharmacy training, whereas the third and final year is dedicated to advanced pharmacy practice experiences (APPEs). The pharmaceutical calculations course is a foundational science course offered to students during their first academic spring term (January to April) over a 15-week period. The course design relies on students’ background knowledge in college-level mathematics (arithmetic and calculus) as the foundation for accurately performing calculations across a range of pharmaceutical/pharmacy concepts, including but not limited to calculating the amount of medication/dosage form to be dispensed, calculating ingredients for compounding, altering drug product concentration/strength, and converting doses and units of measurement. Assessment of student performance in the course is determined through three midterm summative assessments/examinations and a final cumulative examination.
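To illustrate the type of problem the course targets, consider a representative concentration-alteration calculation based on the standard dilution relationship; the numbers below are illustrative only and are not drawn from the course’s examinations.

```latex
% Illustrative dilution problem (hypothetical values, not from the course):
% How much of a 10% w/v stock solution is needed to prepare
% 120 mL of a 2.5% w/v solution?
\[
  C_1 V_1 = C_2 V_2
  \quad\Longrightarrow\quad
  V_1 = \frac{C_2 V_2}{C_1}
      = \frac{2.5\% \times 120~\text{mL}}{10\%}
      = 30~\text{mL},
\]
% so 30 mL of stock is combined with 90 mL of diluent.
```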
Successful completion of courses covering foundational mathematics principles, such as math, computer science, calculus, and statistics, is a prerequisite for admission into the PharmD program.1 However, students matriculating into the program demonstrate different levels of knowledge and proficiency in these foundational mathematics principles, which affects their level of understanding and performance in the pharmaceutical calculations course. Additionally, poor study skills, internalized fears of mathematics and mathematics-related subjects, insufficient opportunities for practicing examination-type problem-solving, and potential lack of familiarity with an examination format can contribute to poor student performance in pharmaceutical calculations.2-5 Upon careful review and analysis of past student performance on the first midterm summative assessment, hereafter referred to as the main examination (on which the class averaged 70%-75%, compared to average class scores of 90%-95% on each of the second and third midterm assessments), the course instructor proposed introducing a mock examination as a supplement to other formative assessment methods (eg, in-class quizzes, dedicated sessions for practicing problem-solving, and homework assignments with multiple attempts allowed) to improve students’ learning, testing familiarity, and performance. This intervention was also informed in part by student feedback from prior years that the examination format differed from what students encountered during in-class practice sessions and homework assignments. The purpose of the mock examination, therefore, was to familiarize students with the examination format to best prepare them for the main examination. Because of the ethical challenge of administering the mock examination (ie, the intervention) to only a subset of the class, participation in the mock examination was open to all students enrolled in the Pharmaceutics II spring 2020 course; administering the intervention to a select group of students, as would have been done in a randomized controlled study, could potentially enhance the academic performance and progression of one group to the detriment of the other. The mock examination was designed to closely recapitulate the summative examination in number of questions, level of difficulty, duration, examination platform, physical space, and proctoring methods. The mock examination was optional and not included in the course grade, and students were under no obligation to participate.
Formative assessment tools are essential for enhancing student learning by reinforcing and improving retention of important concepts.6 Formative assessments, such as mock examinations, can be used as summative assessment preparation tools to help improve student performance, and their impact is greater the more closely the formative assessment (mock examination) parallels the summative assessment (main examination).7-8 Additionally, formative assessment opportunities can help students identify areas for improvement as well as provide insight into how questions on an examination will be structured.9 Whereas mock examinations have been shown to be effective for improving student performance on application-based and short-essay examinations, the role of mock examinations in improving students’ examination preparedness and performance in pharmaceutical calculations has not been demonstrated.10 Our working hypothesis was that administering a mock examination simulating the summative examination format would build students’ familiarity with test taking, enhancing their examination preparedness by clarifying expectations and identifying areas for improvement and, ultimately, improving their learning and examination performance. Importantly, demonstrating the effectiveness of mock examinations would provide inferential evidence in support of efforts toward effective quality improvement in teaching, learning, and student performance assessment. This study retrospectively evaluated the impact of a mock examination on student performance on the main examination during the spring 2020 term. The study also compared student performance on an examination preceded by a mock examination (2020) versus a similar examination without a mock examination component (2019). The results of this study provide suggestive evidence of the potential usefulness of mock examinations in improving student learning, examination preparedness, and overall performance and competency in pharmaceutical calculations.
METHODS
This was a retrospective observational study that reviewed student test scores in a pharmaceutical calculations course at the School of Pharmacy-Worcester/Manchester and examined the impact of a formative mock examination on student performance on a summative examination. The Pharmaceutics II course offered in spring 2019 included a summative assessment (here called the main examination) that was preceded only by in-class exercises and quizzes as well as homework assignments as formative assessments. In spring 2020, an optional mock examination was introduced as a supplemental formative assessment. The mock examination was designed to simulate the main examination in number of questions, level of difficulty, duration, topical content, and examination-delivery and examination-taking conditions. Both examinations were administered as closed-book, proctored assessments. The expectation was that the mock examination would serve as a prospective examination review tool, familiarizing students with the examination format and helping them identify content and topical areas of weakness so they could refine their study habits and test-taking strategies to best prepare for and improve their performance. The mock examination and main examination each consisted of 16 calculation-type, multiple-choice questions, with the mock examination administered the day before the main examination. While the questions on the mock examination and main examination differed in content, they were similar in format and topical selection. Both assessments were administered via the electronic, computer-based ExamSoft testing platform (ExamSoft Worldwide LLC) with a testing duration of 55 minutes. The complete mock examination file was released to all students immediately following the mock examination. The file contained all 16 questions with the correct answers selected but without worked solutions, and it showed each student’s score as well as central tendency measures, including the mean, median, and class percentile. Although the mock examination score did not count toward a student’s final grade, scores from both the mock examination and the main examination were collected and analyzed. Data analysis was performed using Excel. Central tendency measures (mean/average, median, and range) and comparative statistics (p value and Pearson correlation coefficient) were calculated. An average test score was calculated for each examination, and averages were compared using a t test; p values less than .05 were considered significant. To assess whether the mock examination was associated with any change in performance, the main examination averages from 2019 and 2020 were compared.
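Although the analysis was performed in Excel, the same computations can be expressed compactly in code. The following Python sketch is illustrative only: the score lists are hypothetical placeholders rather than study data, and the scipy library is assumed to be available.

```python
# Illustrative re-implementation of the comparative analysis described above;
# the study itself used Excel. All score lists are hypothetical placeholders.
from statistics import mean, median
from scipy import stats

# Hypothetical paired scores (%) for students who took both examinations.
mock_scores = [55, 62, 70, 81, 90, 48, 75, 67]
main_scores = [88, 91, 95, 97, 99, 85, 96, 94]

# Hypothetical main examination scores (%) for a prior-year cohort.
prior_cohort = [72, 68, 80, 75, 77, 70, 79, 74]

# Central tendency measures reported in the study: mean, median, and range.
print(mean(main_scores), median(main_scores),
      max(main_scores) - min(main_scores))

# Two-sample t test comparing cohort averages (significant if p < .05).
t_stat, p_value = stats.ttest_ind(main_scores, prior_cohort)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Pearson correlation between paired mock and main examination scores.
r, r_p = stats.pearsonr(mock_scores, main_scores)
print(f"r = {r:.2f} (p = {r_p:.4f})")
```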
The retrospective nature of the study did not allow for the collection of participant demographic information as would have been done in a prospectively designed study. We recognized the potential impact on the study outcomes of various demographic confounding factors, such as age, gender, type of college attended for prerequisite course credits (eg, four-year university vs two-year community college), previous degree earned (bachelor’s vs associate’s), and when prerequisite courses were taken (math and science courses must be taken no more than 10 years prior to matriculation). However, we considered both cohorts to be generally comparable because they met the same passing grade requirement (grade C or better) for three math-related admission prerequisites (eg, calculus, statistics, and math/computer science). On the same basis, we considered students who participated in the mock examination (n=221) and those who did not (n=16) within the 2020 cohort to be generally similar and comparable. All students who were enrolled in the course and participated in the mock examination were included in the study. There were no exclusion criteria. This research project was deemed exempt by the Massachusetts College of Pharmacy and Health Sciences University Institutional Review Board.
RESULTS
Of the 237 students enrolled in the pharmaceutical calculations course during the spring 2020 semester, 221 (93%) took the optional mock examination, which was administered the day before the main examination. Because the main examination was mandatory and counted toward students’ final grades, all 237 students completed it. The mock examination average score was 67%, and the main examination average score was 94% (Table 1).
Table 1. Grades and Scores Earned by Pharmacy Students on the Mock and Main Examinations Administered in a Pharmaceutical Calculations Course
The 92 students (42%) who received a passing grade (letter grade C or better) on the mock examination had a main examination average score that was significantly higher than that of students who received a failing grade (letter grade D or F) on the mock examination (p<.001) (Table 2). There was a moderate anticorrelation (Pearson correlation, r = −.65)11 between mock examination and main examination scores, with the proportion of students with strong performance (grade B or better) skewing higher on the main examination (92%) than on the mock examination (30%) (Table 2, Figure 1).
Table 2. Cross-tabulation Analysis Comparing Statistics Between Pharmacy Students Who Passed or Did Not Pass the Mock and Main Examinations in a Pharmaceutical Calculations Course
Figure 1. Examination scores of pharmacy students who completed a mock examination and main examination in a pharmaceutical calculations course. When student examination scores were stratified by letter grade, Pearson correlation coefficient analysis of the association between mock examination and main examination grades showed a moderate anticorrelation, with students earning grade B or better far more often on the main examination than on the mock examination.
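For reference, the correlation statistic reported here is the standard sample Pearson coefficient; for paired mock and main scores it is computed as shown below.

```latex
% Sample Pearson correlation coefficient for n paired scores (x_i, y_i),
% where \bar{x} and \bar{y} are the mock and main examination means.
\[
  r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}
           {\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,
            \sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}
\]
```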
When student examination scores were stratified by letter grade, there was an approximately eightfold increase in the percentage of students earning a letter grade of A (defined as an examination score of 90%-100%) on the main examination compared to the mock examination (78% vs 10%). Overall, individual student performance was higher on the main examination than on the mock examination (Table 3). A majority of students (69.7%) outperformed their mock examination grades (Table 3). A total of 25 students (11.3%) saw no grade change between the mock examination and the main examination (A, n=20 [8.4%]; B, n=3 [1.3%]; and D, n=2 [0.8%]). Only one student who participated in the mock examination underperformed on the main examination. Of the 16 students who did not participate in the mock examination, 15 (94%) earned a grade of C or higher on the main examination, while one received a failing grade.
Table 3. Letter Grades Earned by Pharmacy Students on Main Examinations Versus Mock Examinations in a Pharmaceutical Calculations Course
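The grade stratification in Table 3 amounts to binning percentage scores into letter grades and cross-tabulating each student’s mock-versus-main grade pair. A minimal Python sketch of that tabulation follows; only the A cut point (90%-100%) is stated in the text, so the remaining cut points, and all scores shown, are assumptions for illustration.

```python
# Minimal grade-stratification sketch. Only the A cut point (90%-100%) is
# stated in the text; the remaining cut points are conventional assumptions,
# and the paired scores are hypothetical.
from collections import Counter

def letter_grade(score: float) -> str:
    if score >= 90:
        return "A"  # defined in the study as 90%-100%
    if score >= 80:
        return "B"  # assumed conventional cut point
    if score >= 70:
        return "C"  # assumed conventional cut point
    if score >= 60:
        return "D"  # assumed conventional cut point
    return "F"

# Hypothetical paired (mock, main) percentage scores for six students.
pairs = [(55, 88), (62, 91), (70, 95), (81, 97), (90, 99), (48, 85)]

# Cross-tabulate mock-grade -> main-grade transitions (cf. Table 3).
transitions = Counter(
    (letter_grade(mock), letter_grade(main)) for mock, main in pairs
)

# Letters sort alphabetically ("A" < "B" < ... < "F"), so a main grade that
# compares less than the mock grade represents an improvement.
improved = sum(
    n for (mock_g, main_g), n in transitions.items() if main_g < mock_g
)
print(transitions)
print(f"{improved} of {len(pairs)} students improved their letter grade")
```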
The spring 2019 Pharmaceutics II course had 215 students enrolled, all of whom took the main examination, which was not preceded by a mock examination. The spring 2019 main examination average score was significantly lower than the spring 2020 average (77% vs 94%, p<.001) (Table 4). To determine how student performance was sustained after the first examination, we compared student performance on the subsequent summative assessments between the 2019 and 2020 cohorts. The 2019 cohort performed nominally better on examination 2 (95.1% ± 8.5), examination 3 (95.8% ± 8.8), and the final cumulative examination (72.9% ± 15.4) than the 2020 cohort (92.4% ± 10.9, 90.9% ± 11.98, and 72% ± 15.8, respectively). However, an objective comparison between the cohorts on these examinations is complicated by the disruption caused by the COVID-19 pandemic, which occurred just before these three examinations were administered. Unlike 2019, when all teaching and examinations were conducted in person and on campus, in 2020 only examination 1 (referred to here as the main examination) reflected the same conditions of teaching and assessment.
Table 4. Comparison of Pharmacy Students’ Main Examination Average Scores in a Pharmaceutical Calculations Course Between Spring 2019 and Spring 2020
DISCUSSION
Because of its observational, noninterventional design, this study provides some logical basis to infer that administering a mock examination prior to the main examination may be associated with (but not necessarily the cause of) improved examination performance for students enrolled in a pharmaceutical calculations course. The hypothesis of the study was that administering a mock examination with the chief goal of building testing familiarity would improve students’ examination preparedness by clarifying expectations and identifying areas for improvement and, thus, help improve student grades. The theoretical framework of our working hypothesis is best captured by the concept of test-wiseness, which Millman and colleagues defined as “a subject’s capacity to utilize the characteristics and formats of the test and/or the test-taking situation to receive a higher score.”12 Millman and colleagues also described test-wiseness as “logically independent of the examinee’s knowledge of the subject matter for which the items are supposedly measured.”12 Central to our understanding of test-wiseness is that a student’s poor performance on a test may not be due to a lack of knowledge of the core subject matter; a number of factors (eg, time management and testing anxiety) other than the variable the student is being tested on can affect performance.2,4 The goal of the mock examination, therefore, was to simulate a test-taking situation that would otherwise not be fully addressed by any of the other formative assessment methods (eg, repeated self-testing, quizzes, classroom tests, and assignments).13-17 This simulated test-taking situation allows students to add to their test-taking competency a skill that is not only intellectual (knowledge based) but also deeply experiential, down to their emotive and psychological state, which makes this application of the concept novel.
A majority (93%) of the students enrolled in the 2020 pharmaceutical calculations course participated in the mock examination as a practice tool for the main examination. Students who performed better on the mock examination (grade C or higher) had significantly higher average scores on the main examination than those who received a grade of D or F on the mock examination. These findings are consistent with those of studies using other forms of formative assessment in PharmD curriculum courses.18-20 A retrospective study by Stewart and colleagues found that 76 of 79 pharmacy students (>96% of the class) used the opportunity to self-test and demonstrated a correlation between student performance on self-testing practice quizzes and subsequent examination scores.19 Ultimately, the results of this study align with studies on student self-testing in other disciplines (pathophysiology, pharmacokinetics, and clinical pharmacokinetics), which underscored the impact of formative assessment methods (eg, preexamination quizzes, student self-assessment, and immediate feedback assessment) on improving student learning outcomes.19-21
There was a moderate anticorrelation (r = −.65) between mock examination and main examination scores, with the proportion of students with strong performance (grade B or better) skewing higher on the main examination (92%) than on the mock examination (30%). The significance of this statistic is that it reinforces the observation that students who underperformed on the mock examination did much better on the main examination. In quantitative terms, the improvement in student performance in the 2020 cohort was significant compared to the 2019 cohort (p<.001). It is worth noting that most students who did not participate in the mock examination performed well on the main examination. We speculate that the overall strong performance of students who did not participate in the mock examination may be attributable to this group comprising those who felt adequately prepared for the main examination and presumed they would not gain additional benefit from a practice opportunity. It is also possible that these nonparticipants were confident in their high-performing abilities, relied on other self-testing tools, or did not consider the stakes high enough to participate in the mock examination. However, because the complete examination file was released to all students after the mock examination, it is also possible that these nonparticipants still benefitted from their own informal self-testing, albeit without the constraints of the mock examination environment. Indeed, metadata from the examination platform showed that all students had accessed the examination file.
Overall, the strength of the study rests on the data from the 221 students who participated in the mock examination, which provided a large sample size for appropriate analyses to be performed with high confidence (at the 95% confidence level). This allowed for appropriate inferences to be made about the potential impact of the mock examination, as a form of formative assessment, on student performance in the pharmaceutical calculations course. These data support previous findings demonstrating the benefits of preexamination assessments on overall student performance.9,22-25 Another potential strength of the study was the timing of the mock examination. Because students may tend to wait until an examination is imminent before preparing, scheduling the mock examination a day before the main examination ensured that its results were as accurate an indicator of students’ level of preparedness as possible. Taken together, the results underline how a nominal addition to the teaching and assessment design of a pharmaceutical calculations course, in the form of a mock examination, was associated with a real and positive impact on learning and student performance.
Several limitations were identified in this study. First, this was a retrospective observational study that lacked randomized and controlled study measures. However, while the 2019 main examination data did not represent the same student population as in 2020, the 2019 cohort serves as a useful quasi-control group because the student population demographics at the macro level and the examination design were similar. It is worth noting that historic student examination data from 2015 to 2019 showed that the central tendency measures (mean and median) for the main examination (examination 1) were consistently in the range of 70%-75% (ie, grade C) from year to year, thereby serving as a useful baseline for predicting future student performance. We relied on this evidentiary assumption as a useful internal control to allow for comparisons between groups. However, we are of the opinion that a thoughtfully designed prospective study accounting for additional participant variables (eg, age, gender, degree earned, grades on math-related prerequisites) would enhance the study’s goals and further strengthen its conclusions. Finally, while the COVID-19 pandemic is an important confounder for many recent studies, it did not limit the comparative analyses of data from the 2020 cohort, because both spring 2020 assessments (mock examination and main examination) were administered prior to the COVID-19 shutdown (on January 29 and 30, 2020, respectively).
Future studies could benefit from a multi-institutional, multiprogram design (eg, accelerated curriculum [2 + 3 years] vs regular curriculum [2 + 4 years]) that incorporates other student variables (eg, age, gender, degree earned, grades on math-related prerequisites) to address how these variables may affect student performance on both mock (formative) and main (summative) examinations. Additionally, a study design that allows a greater time interval (at least one week) between examinations would add rigor to the data by clarifying expectations and facilitating students’ retention of content, and it would also allow monitoring of students’ emotive and psychological state at two independent time points. Finally, including a survey questionnaire that assesses students’ perspectives on the content and format of the tested materials (eg, number of questions, appropriateness of questions, level of difficulty, and test duration) would yield data useful for reinforcing the study results and provide valuable feedback for quality improvement of future assessments and studies.
CONCLUSION
For students enrolled in the spring 2020 Pharmaceutics II course, the administration of a mock examination as a form of formative assessment prior to the main examination gave students an opportunity to understand the scope of the examination and to self-assess their level of preparedness. Within the many limits that retrospective, observational (noninterventional) studies such as ours present, the study results provide suggestive evidence of the potential usefulness of formative mock assessments in reinforcing learning and improving performance. While a prospective, randomized controlled design may not be ethically appropriate for such studies, we hope that further studies focusing on this specific type of formative assessment will provide a sufficient evidentiary basis to support a pedagogical framework for the thoughtful design of mock assessments that facilitate student learning and self-assessment and help improve academic performance in pharmaceutical calculations courses.
Received March 23, 2021.
Accepted January 6, 2022.
© 2023 American Association of Colleges of Pharmacy