Abstract
Objective. To assess the impact on student performance of increased active learning strategies in a foundational pharmacokinetics course and a clinical pharmacokinetics course over an 8-year period.
Design. A foundational pharmacokinetics course with a lecture-with-active-learning (LAL) format was redesigned to a recitation (REC) format using smaller groups of students (ie, the class divided into thirds) and eventually to a team-based learning (TBL) format. The lecture-based clinical pharmacokinetics course was redesigned to a case-based learning (CBL) format to encourage preclass preparation, with class time used for application; this course underwent minor redesigns over an 8-year period. An analysis of covariance (ANCOVA) was performed on examination scores in the clinical course based on foundational course format changes. End-of-semester student evaluations of the courses were used as a secondary measure of impact.
Assessment. The highest grades in the clinical course were associated with the TBL format within the foundational course compared to LAL format (effect size 0.78). The REC format in the foundational course compared to LAL was associated with higher performance in the clinical course (effect size 0.50). Examination performance in the clinical course had a small increase when the foundational course was transitioned from the REC format to the TBL format (effect size 0.27). There was a trend within the foundational course that overall student ratings of the course decreased with enhanced self-directed learning; there was no change in overall ratings of the clinical course.
Conclusion. Increasing the amount of active learning within the foundational pharmacokinetics course increases performance in the clinical course but this increase in performance may be associated with decreases in student evaluations of the foundational course.
INTRODUCTION
Within higher education and health science education, students need skills beyond content knowledge, such as critical, creative, and practical thinking; written and oral communication; application of knowledge; and teamwork. This need has led to courses off-loading content onto more self-directed learning and refocusing class time on helping students develop these skills. This format is often referred to as the “flipped classroom” (aka, inverted classroom, blended learning, hybrid learning). While most studies compare the initial change to a control such as “lecture,” few longitudinal studies (ie, following a flipped course over time) examine the impact of a flipped course on work in subsequent courses.
The “flipped classroom” is a new term for an old concept. The main premise of the flipped model is that class time is used to apply information introduced outside of class. We previously used this model and showed improvements in student attitudes and performance.1-3 The first part of a flipped course uses various supportive structures for self-directed learning to impart foundational content. Self-directed learning has an effect size of 0.40; that is, the score of the average student in the experimental group (ie, self-directed learning) is 0.4 standard deviations above the score of the average student in the control group (eg, lecture).4 One strategy used within courses, once students have acquired some foundational level of understanding outside of class, is cooperative learning (ie, students teaching other students in class). This strategy has an average effect size of 0.90.5 Active engagement within class also occurs when instructors use in-class questioning techniques to reinforce outside learning. Questioning techniques have an effect size of 0.40.6,7 Team-based learning (TBL), popular in health science education, is a model of the flipped classroom. Studies on TBL found positive effects on student engagement, communication skills, and critical-thinking skills.8 Finally, the US Department of Education found that blended or hybrid learning environments had an average effect size of 0.35,9 “blended” and “hybrid” being earlier terms for “flipped.”
Pharmacokinetics is part of all pharmacy curricula. It involves characterizing drug behavior and using these characteristics to inform future decisions (eg, whether to initiate or adjust dose regimens). In traditional formats (eg, lecture), students' opportunities to apply the information are deferred either to other courses, such as a clinical pharmacokinetics course, or to work outside of class. Many curricula include 2 pharmacokinetic courses: a foundational course that develops underlying concepts and mathematical constructs, and a clinical course that applies previously learned material to individual patients based on specific medications, disease states, or special populations. Previously, pharmacokinetics courses were changed to increase the application of concepts within foundational pharmacokinetics courses2,10-14 and within clinical pharmacokinetics courses.1,15 However, there are few data relating the 2 courses in terms of whether performance or learning in the foundational course translates to performance in the clinical course. Persky previously examined this correlation and found clinical course performance was increased based on changes in the foundational course.11
The primary objective of this manuscript is to describe changes made over an 8-year time period to foundational and clinical pharmacokinetics courses and the impact that the foundational course changes had on performance in the clinical course. Secondary objectives include examining the relationship between course changes and course evaluations and comparing flipped course work to other active-learning strategies—a more novel comparison than comparing a flipped approach to a traditional unidirectional lecture.
DESIGN
The foundational pharmacokinetics course was a 3-credit course in the fall semester of the second professional year and a prerequisite for the clinical course; the clinical pharmacokinetics course was a 3-credit course in the spring semester of the second professional year. Both courses were synchronously video-conferenced to satellite campuses. Each year, there were approximately 140 students on the main campus and 10 to 20 students on the satellite campuses. Table 1 and Figure 1 summarize the course format changes in both courses over time. Briefly, the foundational course used lecture-with-active-learning (LAL) strategies (eg, think-pair-share, cases, questioning techniques) and introduced immediate-feedback assessments (ie, students receive immediate feedback on correct or incorrect answers). The course was then transitioned to a recitation (REC) format in which students were asked to prepare before class and class time was used for discussion or case studies in smaller groups (eg, 50 students per group vs 150 students in class).2 In the REC format, the class was divided into thirds, and each third attended class once a week for 90 minutes instead of 3 times a week for 50 minutes. The course was again transitioned to a TBL format.11 For the first 14 weeks of the semester, students prepared before class and were held accountable for that preparation; class time was used for cases. The last 1-2 weeks were used for capstone cases (ie, cases that integrated topics within the course and represented realistic clinical scenarios) and review. A further change was made during the last year, in which TBL was used for the first 9 weeks of class, covering all material.16 The remainder of the course used cases to review material. In addition, low-stakes cumulative assessments were used (ie, assessments that cover all the course content and contribute a small fraction of the overall course grade). The number of examinations and assessments varied from year to year (Table 2).
Table 1. Summary of Changes within the 2 Pharmacokinetics Courses. N is the number of students in the foundational and clinical course, respectively
Figure 1. Depiction of the 2 pharmacokinetics courses over time based on format and assessment.
Table 2. Examination Scores and Effect Size in the Clinical Pharmacokinetics Course Based on Format of Both the Foundational and Clinical Pharmacokinetics Courses
The clinical course transitioned from an LAL format to a case-based learning (CBL) format.1 The format remained largely intact, with minor alterations made over time. Changes included replacing case-dedicated days with abbreviated overviews and case illustrations, and piloting a peer-review writing software to support consult note writing (SWoRD, Panther Learning, Pittsburgh, PA). The number of examinations was constant, but the number of graded assessments increased over time (Table 2). For the most part, the same instructors taught the courses over the 8-year period.
Preclass preparation utilized a variety of resources. For the foundational course, students had a choice between an instructor-developed e-book or instructor-developed animated module.10 Previously, we reported on student preferences for this material.17 In the later weeks of the TBL format, students were asked to complete cases prior to class. In the clinical course, resources included annotated PowerPoint slides, abridged readings, textbook chapters with reading guides, and primary literature; previously, we also showed student preferences for these materials.1 For approximately two-thirds of the course, students were asked to complete cases prior to class and then discuss the cases in class.
An analysis of covariance (ANCOVA) was performed on examination scores in the clinical course when CBL was used, controlling for grades and format in the foundational course. Standardized effect sizes (ie, the difference in means between control and experimental groups normalized to the standard deviation) and confidence intervals were calculated using Hedges' approach.18 Effect sizes were estimated for course formats and for the effect of changes in the foundational course on clinical course performance. Course evaluations were used as a secondary measure of impact, and a correlation analysis was performed with examination scores. Significance was set at p<0.05. Effect sizes over 0.40 are considered educationally significant interventions.19 This study was exempted by the Institutional Review Board.
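For readers unfamiliar with the metric, the bias-corrected effect size used above can be computed in a few lines. The following is a minimal sketch of Hedges' g with an approximate 95% confidence interval; the score lists are hypothetical and are not the study's data.

```python
import math
from statistics import mean, variance

def hedges_g(treatment, control, z=1.96):
    """Hedges' g: Cohen's d corrected for small-sample bias,
    with an approximate large-sample confidence interval."""
    n1, n2 = len(treatment), len(control)
    df = n1 + n2 - 2
    # Pooled standard deviation (sample variances weighted by df)
    s_pooled = math.sqrt(((n1 - 1) * variance(treatment) +
                          (n2 - 1) * variance(control)) / df)
    d = (mean(treatment) - mean(control)) / s_pooled
    j = 1 - 3 / (4 * df - 1)  # small-sample correction factor
    g = j * d
    # Standard error approximation for g
    se = math.sqrt((n1 + n2) / (n1 * n2) + g ** 2 / (2 * df))
    return g, g - z * se, g + z * se

# Hypothetical examination scores, for illustration only
tbl_scores = [88, 90, 92, 89, 91]
lal_scores = [85, 87, 86, 84, 88]
g, lo, hi = hedges_g(tbl_scores, lal_scores)
```

With whole-cohort sample sizes like those in this study (roughly 150 students per year), the correction factor j is close to 1 and g differs little from Cohen's d; the correction matters mainly for small groups.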
EVALUATION AND ASSESSMENT
The primary outcome was examination performance in the clinical pharmacokinetics course. Using an ANCOVA to control for performance in the foundational course, the highest grades in the clinical course were seen when the foundational course used the TBL format compared to LAL format (p<0.001, effect size 0.78, 95% confidence interval 0.58-0.97) (Table 2). The REC format of the foundational course was associated with a moderate effect size in the clinical course compared to LAL format (p<0.001, effect size 0.50, 95% confidence interval 0.31-0.69). In the clinical pharmacokinetics course, there was a small increase in performance when transitioning from the REC format to the TBL format (p<0.001, effect size 0.27, 95% confidence interval 0.14-0.40). There was a significant format by campus interaction within the TBL format, with examination scores being higher on the main campus compared to the satellite campuses (89.4 vs 85.0). Further analysis of the TBL years showed that the clinical course examination scores only differed between campuses during 1 year (year 1: 90.7 vs 89.1, ns; year 2: 88.4 vs 88.4, ns; year 3: 89.0 vs 81.1, p<0.001), which may explain the overall observed difference between campuses.
In the clinical course, there was a moderate increase in examination performance when comparing the CBL format to LAL format (Table 3). There were also significantly more opportunities on average to demonstrate learning in the CBL format; in the lecture format there were 3 examinations per year but, during the CBL format, there were on average 6 quizzes, 7 cases, and 3 examinations (Table 1). There were other cases submitted but not for grades. These additional practices may have helped explain the improvement when comparing CBL to the LAL format.
Table 3. Examination Performance in the 2 Pharmacokinetics Courses by Format
In the foundational course, there was no difference in examination performance among the LAL, REC, and TBL formats (Table 3). The number of graded assessments varied; on average, there were 3 examinations during the LAL format, 3 examinations during the REC format, and 5 quizzes, 7 cases, and 2 examinations during the TBL format (Table 1). During this 8-year period, examinations shifted toward more higher-order questions according to Bloom's Taxonomy. When comparing campuses, a significant difference favoring the main campus was noted only during the TBL years (90.9 vs 86.3, p<0.001); there were no differences during the other 2 course formats. Further analysis of the 3 years in which TBL was used revealed that only 1 of the years demonstrated a difference between campuses (year 1: 90.4 vs 88.7, ns; year 2: 90.2 vs 88.5, ns; year 3: 91.9 vs 83.8, p<0.001). This single year may be considered an outlier, as the other 2 years did not show a significant difference; regardless, it did affect the overall average of the 3 years.
There was a trend in the foundational pharmacokinetics course that overall course rating based on student evaluations decreased as the self-directed and cooperative learning format increased (r=-0.682, p<0.05); there was no change in students’ overall rating of the clinical course (r=0.230) (Figure 2).
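The correlations reported here (eg, r=-0.682) are plain Pearson coefficients between yearly overall course ratings and examination performance. A minimal sketch, using hypothetical yearly medians and ratings rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: median exam scores rising while overall ratings fall
median_exam_scores = [85, 87, 88, 90, 91]
overall_ratings = [4.6, 4.5, 4.4, 4.2, 4.1]
r = pearson_r(median_exam_scores, overall_ratings)  # strongly negative
```

With only a handful of yearly data points (8 years, 3-4 formats), each correlation rests on few observations, which is one reason the authors treat the evaluation trend as secondary rather than definitive.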
Figure 2. Relationship between overall evaluation of courses and median examination score for lecture-with-active-learning (LAL), recitation (REC), team-based learning (TBL), and case-based learning (CBL) formats.
DISCUSSION
Learning can be defined by both “retention” and “transfer.” Retention is the ability to use information for a period of time after the initial learning, and transfer is the ability to use information in a slightly different context than the original learning.20 The model of a course sequence (in this case, foundational followed by clinical) can help elucidate the impact on student learning because what is learned in one course must be utilized in a subsequent course (ie, retention) and within a slightly different context (ie, transfer). Here, we have reported on several years of experience with off-loading content and re-framing class time to apply that information. Our study data indicated that the better prepared students were for the foundational course, the better they performed in the clinical course. Increasing the amount of active learning and accountable self-directed learning appeared to translate to performance improvement, especially in course examinations. However, this might have come at the cost of positive student evaluations of the course.
When comparing educational interventions, we believe the control should be the best-practice format; however, traditional lecture is often used as the control. Our investigation used the LAL format as the control and found that examination performance did not differ among this format, a REC format using active learning and smaller class sizes, and a TBL format. Including more examination questions associated with higher orders of Bloom's Taxonomy (eg, application and above) may partly explain the lack of change. Even though examination performance was unaffected, changes in performance in the subsequent course were noted. In addition, off-loading in-class content onto students via self-directed learning provided more opportunities to apply concepts to patient cases, find/read/interpret sources, communicate pharmacokinetics concepts, and develop team skills.
Use of active-learning strategies increases student learning, but the effects of these strategies on attitudes toward courses can vary. In this study, student evaluations were used to assess overall student satisfaction with the courses. While the course evaluations for the clinical course remained stable over time, there was a significant negative correlation between examination performance and course evaluations in the foundational course. As course grades were related to course format, it could be inferred that some aspects of the course resulted in less student satisfaction. This could be related to the amount of self-directed learning, the cooperative learning format, or the instructor. The course director (ie, the individual charged with managing the course) remained the same throughout the study period, but the percent of the course taught by 1 instructor did increase over time (eg, from 50% of the course to 100%). Other studies found a decrease in course satisfaction as the amount of self-directed or active learning increased, even though course performance increased.21 In previous work, we found that students self-reported spending approximately 3 hours preparing for class per week (interquartile range 2-3.5 hours).17 Students also self-reported that they were not spending more time on this course than on others.11 We also showed that preference for team learning was lower among students scoring higher on the introvert scale;16 because approximately 50% of the students in this cohort leaned toward introversion, team activities may have been less preferred. Although satisfaction in course evaluations remained high, more work is needed to elucidate the relationship between grades, course format, and course evaluations. Over time, the number of flipped courses offered earlier in the curriculum increased, and students had experienced at least 2 flipped formats prior to the pharmacokinetics courses.
In this study, we reported the effect size and the confidence interval around the effect size. The effect size is a standardized index that quantifies the magnitude of the difference between treatment and control groups independent of sample size. This standardized index allows easier comparison to studies that have used similar metrics. Hattie investigated numerous effect sizes for educational interventions mostly in K-12 education.19 He reported that effect sizes of 0.40 or above would be effective instructional interventions and that a student’s prior cognitive ability has an effect size of 1.0.19 In relation to our study, Hattie’s findings suggest that if students come in with a better understanding of the foundational material, their performance in subsequent work should be better.
The first strength of our study was the use of several years of data to calculate effect sizes and instructional impact. Examination scores can vary from year to year as new assessments are used; thus, it is important to account for test-to-test variability. By combining years or using correlations, this variability can be considered, avoiding judgments based on a single year's performance. Second, the study attempted to evaluate retention of information by examining performance in a subsequent course; other studies have tended to focus on immediate learning performance rather than longer-term retention and application. Third, the clinical course remained relatively consistent over time with respect to the number and type of assessments, instructors involved, and content.
In terms of limitations, this study was retrospective, which led to varying student numbers, potential differences in students over time, minor variations in content and faculty members, and newly constructed examinations from year to year. Over the 8-year period, new examinations were developed based on a common set of learning outcomes. In some cases, variations in examinations included changes to numeric values (eg, creatinine clearance, serum concentration) or asking the opposing view from prior years (eg, most appropriate course of action vs least appropriate course of action); the format and length remained relatively unchanged. Student composition remained relatively constant over time based on entrance grade point average and PCAT scores, although experience with active learning in undergraduate education could vary. In the foundational course, there was 1 year in which only 1 examination was used, compared to prior years that had 2 to 3 examinations; this reduction could impact the sampling of student knowledge and skills (ie, the number of questions on a given topic). The number of examinations in the clinical course was constant (n=3). In addition, the number of instructors changed: there were typically 2 instructors during the LAL and REC formats but only 1 instructor during the TBL format. Content in the clinical course did vary slightly from year to year based on lost or added class days, but this accounted for at most 2 days. There was some faculty member turnover in the clinical course, partly related to the availability of instructors, including pharmacy residents.
SUMMARY
Increasing the amount of active learning in the foundational and clinical courses resulted in increased learning as measured by examination performance. Students were able to successfully learn the foundational aspects of the discipline through self-directed learning but this did require student-friendly material that could be completed efficiently with opportunities to self-assess. Increasing student accountability or using cooperative learning may come at a cost in terms of overall student satisfaction with courses, but evaluations can still remain positive.
- Received February 5, 2014.
- Accepted April 8, 2014.
- © 2014 American Association of Colleges of Pharmacy