Abstract
Objective. To adapt a classroom assessment technique (CAT) from an anthropology course to a diabetes module in a clinical pharmacy skills laboratory and to determine student knowledge retention from baseline.
Design. Diabetes item stems, focused on module objectives, replaced anthropology terms. Answer choices, coded to Bloom’s Taxonomy, were expanded to include higher-order thinking. Students completed the online 5-item probe 4 times: before the prelaboratory lecture, immediately after the laboratory, and 6 and 12 months after the laboratory. Statistical analysis used a single-factor, repeated-measures design applied to rank-transformed responses, with Mann-Whitney-Wilcoxon tests for comparisons between time points.
Assessment. The CAT revealed a significant increase in knowledge from the prelaboratory measurement to all postlaboratory measurements (p<0.0001). Knowledge was retained for basic terms but declined significantly for complex terms between 6 and 12 months.
Conclusion. The anthropology assessment tool was effectively adapted using Bloom’s Taxonomy as a guide and, when used repeatedly, demonstrated knowledge retention. Minimal class time was devoted to administering the probe, making it an easily adaptable CAT.
Keywords: classroom assessment techniques, diabetes, skills lab, background knowledge probe, knowledge retention, CAPE domains
INTRODUCTION
In 2011, the Accreditation Council for Pharmacy Education (ACPE) updated the Accreditation Standards and Guidelines for the Professional Program in Pharmacy Leading to the Doctor of Pharmacy Degree.1 Standard No. 15, which addresses Assessment and Evaluation of Student Learning and Curricular Effectiveness, denotes that curricular evaluation should include varied formative and summative assessment methods that are systematically and sequentially administered to determine students’ achievement at different levels and foster experimentation and innovation.
Based on this guideline, educators might consider implementing formative and summative assessment techniques in tandem with a course, module, or lecture. These techniques should be incorporated frequently throughout a course, making assessment part of the learning process.2 Doing so allows instructors to use analyzed assessment data to make immediate improvements to the educational modality rather than waiting until later time points, such as the final evaluation, when adjustments would only benefit students in subsequent semesters. Additionally, using frequent assessment techniques could demonstrate changes in student learning by comparing data from several time points. Systematic and sequential administration of assessments will either validate the educational modality or highlight improvements needed relating to long-term learning gains. Classroom assessment techniques (CATs) are brief formative evaluations that are available in a wide variety of constructs. When administered at various time points, they may be used to track student achievement at different levels while fostering instructor experimentation and innovation. Therefore, CATs could be used to meet the need set forth by Standard No. 15.
CATs augment clearly defined teaching objectives by allowing instructors to formatively determine what, how much, and how well students are achieving learning goals.3-5 Important characteristics of effective CATs include being learner-centered, teacher-directed, mutually beneficial, formative, context-specific, and well integrated into the teaching and learning process.6 CATs allow instructors to gauge students’ knowledge or perceived knowledge of various topics before, during, or after learning experiences. In addition to supporting instructional development, CATs allow instructors to focus valuable class time on knowledge-deficient areas rather than topics with which students are already comfortable.3,6 Because of the formative nature of CATs, educational adjustments can be made immediately rather than waiting until the following semester. CATs also present an opportunity for students to provide anonymous feedback about their learning. Students who are hesitant to ask questions aloud in class can communicate more comfortably with instructors by completing a CAT and may also learn that other classmates share similar concerns.3 Additionally, by providing welcome feedback, students become more involved in their learning, motivated to successfully complete the course, and self-directed.3,4
CATs may be used to determine students’ prior knowledge or ability to recall, apply, analyze, synthesize, create, or critically consider material. The CATs most commonly used to assess knowledge and recall are the “background knowledge probe,” the “one minute paper,” and the “muddiest point.” With the one minute paper, students respond in writing to two questions during the final few minutes of class: “What was the most important thing you learned today in class?” and “What are you still confused about?”7 This technique allows instructors to compare course or module objectives with students’ perceptions of important topics and their learning. For the muddiest point, students again respond in writing at the conclusion of a lecture. This knowledge and recall CAT mirrors half of the one minute paper by asking students to identify the most confusing aspect of the instruction provided, which helps the instructor identify points to emphasize or explain more fully before the next course lecture.3,6
Quite different from the one minute paper and muddiest point, the background knowledge probe CAT allows instructors to easily collect and analyze student preparedness about a particular topic. Gathered information can then be applied to determine the most appropriate starting point and instruction level for a given lesson.3,6 Highlighting essential information when using a background knowledge probe not only allows review of previously presented material, but also provides direction for future topics of study within a course. Background knowledge probes require students to reflect on current knowledge and assess their understanding, often through either short answer responses or selection of answer choices.3,6 They are most commonly administered prior to beginning a new course, lesson, or topic.6 However, they may also be administered immediately following an event to gain a rough sense of improvement in knowledge base or familiarity.3,6
Although CAT examples are described in the literature, including the arena of Allied Health, most offer anecdotal support of their impact rather than sufficient, well-articulated detail to determine objective, outcome-related changes.8-10 A study by Wise is one exception, describing the process and impact of implementing the muddiest point CAT in a 2-hour lecture-based course in physical therapy for third-year students.11 Confidentially completed evaluations showed significant improvements in student perception of the course and instructor when a CAT was administered throughout the semester compared to previous semesters, when no CAT was used. Students responded that course assignments were reasonable and contributed to learning (p=0.002), the course was well organized (p=0.007), and the instructor provided students with opportunities for questions (p=0.004).11
CATs have also been developed and implemented in various doctor of pharmacy programs, although few studies describe specific outcomes. Van Amburgh and colleagues assessed the impact of various active-learning techniques integrated into pharmacy classroom lectures, including CATs, but did not specifically outline the benefits gained from the individual assessment measures.12 Other assessment review articles mention that CATs, especially the muddiest point, are often used by pharmacy faculty, but again, few specifics are provided.8,13,14 Bartlett and Morrow, however, provide a comprehensive description of adapting and implementing the one minute paper in a first-year Biochemical Basics of Drugs and Disease pharmacy course. They expanded the 2-question CAT to include a third question: “What was the most interesting fact you learned today?” Course feedback gathered through the mid-term evaluation and a student-completed survey indicated that inclusion of the one minute paper improved student-faculty relationships, student understanding of difficult material, and student likelihood of asking questions in class (with women feeling more encouraged than men to ask questions in class) (p<0.01).15 Additionally, students believed the CAT was an effective use of class time.15
While limited literature outlines the implementation and assessment process of including a CAT in pharmacy or other health-related disciplines, findings tend to point to improvements in student perception of the course, content, or instructor. The current study investigates the process of translating a background knowledge probe CAT from another discipline (anthropology6) to a pharmacy-specific course. Additionally, it is the first assessment to objectively measure changes in student knowledge retention over a 12-month interval using this type of background knowledge probe.
DESIGN
Third-year pharmacy students attended (or later viewed the recording of) a 1-hour prelaboratory lecture on type 1 diabetes at Auburn University in a large classroom setting. The off-campus instructor used a video conference system (Polycom) to present background content to students. Whether students attended the real-time prelaboratory lecture or later watched the recorded presentation, they received identical content. Over the following 7 days, students completed a homework assignment of photographing their dinner, counting carbohydrates in the meal, and calculating a dose of rapid-acting insulin based on a given insulin-to-carbohydrate ratio. A week later students attended a 2-hour clinical pharmacy skills laboratory (approximately 32 students per section), where they participated in hands-on, active learning, case-based education. The laboratory included 4 carbohydrate counting exercises, presentation of 4 meal photographs per laboratory section with discussion, and 10 patient cases requiring insulin dosing. Each exercise was conducted through the think-pair-share method.
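The homework assignment and the laboratory cases turn on carbohydrate-counting arithmetic: dividing the grams of carbohydrate in a meal by the insulin-to-carbohydrate ratio, with an optional correction based on the insulin sensitivity factor. The sketch below illustrates that calculation; the meal size, ratio, glucose values, and sensitivity factor are illustrative assumptions, not values taken from the course materials.

```python
def rapid_acting_dose(carb_grams, grams_per_unit,
                      current_glucose=None, target_glucose=None,
                      sensitivity_factor=None):
    """Estimate a rapid-acting insulin dose from carbohydrate counting.

    carb_grams         -- total carbohydrate in the meal (g)
    grams_per_unit     -- insulin-to-carbohydrate ratio expressed as grams
                          covered by 1 unit (15 for a 1:15 ratio)
    sensitivity_factor -- insulin sensitivity factor (mg/dL drop per unit),
                          used only when a correction dose is requested
    """
    meal_bolus = carb_grams / grams_per_unit
    correction = 0.0
    if None not in (current_glucose, target_glucose, sensitivity_factor):
        # Correction dose lowers glucose toward target; never negative.
        correction = max(0.0, (current_glucose - target_glucose) / sensitivity_factor)
    return meal_bolus + correction


# A 60 g carbohydrate dinner with a 1:15 ratio needs 4 units of insulin.
print(rapid_acting_dose(60, 15))                # 4.0
# Adding a correction: glucose 220 mg/dL, target 120 mg/dL, ISF 50 adds 2 units.
print(rapid_acting_dose(60, 15, 220, 120, 50))  # 6.0
```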
After completing the 1-hour prelaboratory and 2-hour clinical pharmacy skills laboratory, students were expected to have met the following learning outcome objectives: (1) Comprehension: identify the foods most likely to increase blood glucose; (2) Application: apply carbohydrate counting to food labels; (3) Comprehension: describe the plate method; (4) Knowledge: define “basal-bolus” insulin regimen; (5) Analysis: compare and contrast “basal-bolus” regimens to pre-mixed insulin products; (6) Knowledge: define the term “insulin sensitivity factor;” (7) Knowledge: define the term “insulin-to-carbohydrate ratio.” The above learning outcome objectives were used through the 2011 fall semester (2013 graduating class). The learning outcome objective regarding the plate method (objective 3) was removed from the module in the fall of 2012 (2014 graduating class) and inserted into a laboratory module occurring later in the fall semester. Therefore, content regarding the plate method was also removed from the prelaboratory and laboratory sections of the 2012 fall module.
The original anthropology background knowledge probe contained 50 items and was administered on the first day of a semester-long course.6 Tested topic items were recommended by prerequisite and lower-level course instructors. Each item consisted of a term (for example “The Weimar Republic,” “Senator Joseph McCarthy,” or “The Golden Triangle”), followed by 4 possible answer choices indicating varying degrees of familiarity, which were nearly identical for each item. Prior to administering the in-class, paper-based probe, students were informed that it was not a test and would not be graded. The results were used to improve student learning during the semester.6 (Table 1)
Anthropology Item with Coded Ranking Classifications per Bloom’s Taxonomy
Because the background knowledge probe was adapted from a semester-length course to a brief module, the number of items tested was decreased from 50 to 5. Tested item topics were selected from and focused on module objectives and were presented in order of appearance during the prelaboratory and laboratory sections and in increasing complexity. The wording of answer choices was kept as close as possible to the original anthropology examples. To more accurately translate to applied aspects of patient care, 1 additional option was added to the first 3 items and 2 additional options were added to the last 2 items. To accurately analyze the anthropology probe, each answer choice was coded to Bloom’s Taxonomy to indicate degree of higher-order thinking (Table 1). The first answer choice (“Have not heard of this”) was not assigned a level of Bloom’s Taxonomy, as it indicates that no knowledge had been acquired on the topic. The second and third answer choices were both coded as “Knowledge,” while the fourth answer choice was coded as “Comprehension.” The same method of coding answer choices to Bloom’s Taxonomy was applied to the adapted diabetes probe (Table 2). Two individuals, an education assessment expert and a course coordinator skilled in assessment techniques, independently assisted in coding the answers. Answer choices were listed in order of increasing level of understanding. Four fourth-year pharmacy students participated in cooperative inquiry to validate the utility of the survey and to ensure accurate interpretation. The 5-item probe was reformatted as a Vovici online questionnaire (Vovici Corporation, Herndon, VA), in which answer choices were ranked to represent participants’ knowledge of each diabetes term.
Student Self-Reported Familiarity with Diabetes Terms Using the Background Knowledge Probe and Coded to Bloom’s Taxonomya – Number (%)
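To make the coding scheme concrete, the sketch below shows how ordered answer choices can be mapped to Bloom’s Taxonomy levels and ordinal ranks for analysis. The choice wording and the added higher-order option are modeled on the anthropology probe and are hypothetical; the actual diabetes-probe wording appears in Table 2.

```python
# Hypothetical answer choices for one probe item, listed in order of
# increasing understanding and coded to a Bloom's Taxonomy level plus an
# ordinal rank used for analysis. Wording is modeled on the anthropology
# probe, not copied from Table 2.
ANSWER_CODING = [
    ("Have not heard of this",                                  None,            0),
    ("Have heard of this, but don't really know what it means", "Knowledge",     1),
    ("Have some idea what this means",                          "Knowledge",     2),
    ("Have a clear idea of what this means and can explain it", "Comprehension", 3),
    ("Can use this to make patient-specific recommendations",   "Evaluation",    4),
]

def rank_response(choice_text):
    """Return the ordinal rank assigned to a selected answer choice."""
    for text, _bloom_level, rank in ANSWER_CODING:
        if text == choice_text:
            return rank
    raise ValueError(f"Unrecognized answer choice: {choice_text!r}")

print(rank_response("Have some idea what this means"))  # 2
```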
All third-year pharmacy students enrolled in the mandatory clinical pharmacy skills laboratory during the fall of 2011 and 2012 were e-mailed an informational letter 2 days prior to the prelaboratory lecture explaining the voluntary, anonymous nature of participation and containing a link to the online 5-item questionnaire. The letter explained that participation in the questionnaire would not affect laboratory or course grades and would be used for teaching purposes only, so using outside resources to complete the questionnaire would not benefit students. The questionnaire was closed and data were analyzed 30 minutes before the prelaboratory lecture. Results were shared with students during the 1-hour lecture period. Students were e-mailed second, third, and fourth requests to complete identical questionnaires immediately after the 2-hour laboratory section (which occurred 1 week later), at 6 months, and at 12 months, respectively. Students were allowed 10 days to complete the follow-up questionnaires and were sent 3 reminder e-mails to improve response rates. The average response rate for each familiarity statement was calculated for all 4 phases of measurement.
Data for each familiarity statement consisted of ranking classifications based on Bloom’s Taxonomy. To account for the nonparametric nature of these outcomes, rank transformations were applied to the knowledge responses for the 5 terms. Using these ranks, a single-factor repeated-measures analysis was applied to examine student-reported familiarity and retention over the 4 time-point measurements from the prelaboratory assessment through the 12-month follow-up. SAS version 9.4 (SAS Institute, Cary, NC) was used for all analyses. The project received Institutional Review Board approval through exempt procedures at Auburn University.
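The analyses were performed in SAS; the snippet below is only a minimal illustration of the same idea in Python, rank-coding responses and comparing time points pairwise with a Mann-Whitney-Wilcoxon test (unpaired, since anonymous responses cannot be matched across time points). The data frame layout and values are hypothetical, and the full single-factor repeated-measures model is not reproduced.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical layout: one row per completed questionnaire, holding the
# Bloom-coded ordinal rank for a single term and the collection time point.
responses = pd.DataFrame({
    "time_point": ["prelab"] * 3 + ["postlab"] * 3 + ["6mo"] * 3 + ["12mo"] * 3,
    "rank":       [0, 1, 1,         3, 4, 3,          3, 2, 3,      2, 2, 1],
})

def compare(time_a, time_b):
    """Mann-Whitney-Wilcoxon comparison of coded ranks between two time points."""
    a = responses.loc[responses["time_point"] == time_a, "rank"]
    b = responses.loc[responses["time_point"] == time_b, "rank"]
    return mannwhitneyu(a, b, alternative="two-sided")

for later in ["postlab", "6mo", "12mo"]:
    result = compare("prelab", later)
    print(f"prelab vs {later}: U={result.statistic:.1f}, p={result.pvalue:.3f}")
```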
EVALUATION AND ASSESSMENT
Responses from both graduating classes were combined for analysis to evaluate student learning. A total of 281 third-year pharmacy students were enrolled in the 2011 and 2012 fall courses. Twelve student e-mails were returned in the fall of 2011 during the prelaboratory and immediate postlaboratory surveys because of changes in the university’s e-mail system. Response rates for the 4 time points of measurement were 72.1%, 85.5%, 72.2%, and 78.6%, respectively. Results from the student sample showed a dramatic improvement in self-reported term familiarity from the prelaboratory assessment to each time point after the laboratory, including a shift to higher-order learning per the application of Bloom’s Taxonomy to the background knowledge probe (Table 2). When changes in familiarity coded to Bloom’s Taxonomy were analyzed with a Mann-Whitney-Wilcoxon test, significant increases (p<0.0001) were noted for all 5 terms over the 12-month assessment period when compared to the prelaboratory measurement (Table 3). This shift was most pronounced for more complex terms such as “insulin-to-carbohydrate ratio” and “insulin sensitivity factor,” which overlay each other in Figure 1, versus the most basic term, “carbohydrate.” The modest, but still significant, improvement from prelaboratory for “carbohydrate” was likely due to stronger baseline familiarity.

Additionally, loss of knowledge retention was most notable with more complex terms. Table 4 denotes change in knowledge retention over time. Significant knowledge improvements were noted for all 5 terms from the prelaboratory assessment through the 12-month assessment point (p<0.0001). This change in knowledge (per the negative difference values) demonstrated a steeper improvement with more complex terms, possibly due to a lower Bloom’s Taxonomy baseline starting point (Table 4 and Figure 1). The most basic term, “carbohydrate,” showed no significant change in familiarity from postlaboratory through 12 months (p=1), indicating no significant loss of knowledge. On the other hand, the most complex terms, “insulin-to-carbohydrate ratio” and “insulin sensitivity factor,” demonstrated a significant decline in knowledge retention (per the positive difference values) when the assessment time points occurring after the prelaboratory measurement were compared to each other (p<0.0001). Knowledge retention for the term “basal-bolus” displayed a more balanced change across time points, with a significant decline seen when comparing the 12-month assessment to the postlaboratory (p=0.017) and 6-month (p<0.0001) assessments (Table 4).
Single Factor Repeated Measures for Background Knowledge Probe
Difference between Wilcoxon Scores for Time Points of Measurement
Change in Familiarity Over the Two 12-Month Long Assessment Intervals.
Change in knowledge and familiarity with the term “plate method” exhibited a uniquely different trend over time. This was the only term for which knowledge continued to increase from postlaboratory through the 6-month follow-up, with a nonsignificant knowledge decline from the 6-month to the 12-month assessment. Removing “plate method” content from the module in the fall 2012 semester and placing it in a module later in the semester explained this knowledge change. When the data for “plate method” were separated by year of collection, a stark difference was noted in Wilcoxon scores at prelaboratory, postlaboratory, 6 months, and 12 months, with the 2013 graduating class scoring 59.3, 248.2, 221.8, and 214.5, respectively, and the 2014 class scoring 99.8, 229.3, 313.9, and 307, respectively. The removal of “plate method” content from the module allowed the term to function as a control by comparing data between the 2 years, demonstrating the sensitivity of the background knowledge probe to capture the difference when term-related education was removed from the module.
Collectively, these data indicated that a significant knowledge gain occurred from the prelaboratory to postlaboratory assessment time points, most dramatically for more complex terms. Conversely, knowledge regression was steeper with more complex terms, but knowledge was retained with basic terms. Finally, the probe demonstrated sensitivity to change in knowledge when new information was gained at later time points.
DISCUSSION
The highest level of understanding measured in the anthropology probe, per Bloom’s Taxonomy and regardless of actual achievement, was “Comprehension” (Table 1). This limited assessment range may not accurately represent baseline levels of understanding that occur at higher orders. Second, as the anthropology knowledge probe stood, it would have been difficult to measure much depth in understanding improvements for the same reason; thus, the utility of the original probe was further limited for follow-up efforts to measure changes. While objectives for the anthropology course were unknown, the objectives for the diabetes laboratory pushed students toward “Application” and “Analysis” in addition to lower orders of understanding. This further supported the importance of altering the probe to better assess higher orders of understanding. For these reasons, it was essential, during adaptation of the probe from anthropology to clinical pharmacy, to add more answer choices to better measure higher orders of understanding. Furthermore, it has been recommended that instructors strategically map learning activities, objectives, and learner knowledge for CATs to Bloom’s Taxonomy to stimulate higher-order thinking.9 While the adapted background knowledge probe met this need, it failed to specifically measure “Analysis,” although it did reach “Synthesis” and “Evaluation.”
The background knowledge probe presented here is not a traditional multiple choice quiz with one correct answer in addition to several distractors. Rather, through the answer selection process, students could reflect and consider their level of understanding and abilities. Furthermore, completing the background knowledge probe periodically gave students a way to reflectively consider improvements in their learning and abilities throughout the 12-month assessment interval.
Unlike many studies analyzing knowledge retention, the present investigation demonstrated a trend in knowledge enhancement at several time points. Other studies demonstrate immediate increases in student understanding directly following a specific educational intervention16,17 or document knowledge or skill retention from baseline to a single second time point months after the educational intervention.18,19 Study designs that collect data at only 2 time points limit the ability to evaluate knowledge retention trends, whereas assessing knowledge more often over the same or longer intervals provides more meaningful data regarding knowledge retention. Two pharmacy student investigations assessed knowledge retention at 3 time points. Kopacek and colleagues aimed to assess P2 pharmacy students’ retention of knowledge about automated external defibrillator use following didactic training and a simulated experience, using identical questionnaires administered at baseline, 3 weeks, and 4 months after the intervention.20 While significant improvements were noted in 2 of 5 knowledge measures and all 6 performance measures at 3 weeks, the significant differences declined to zero knowledge measures and 2 performance measures by 4 months.20 The authors concluded that the intervention was not sufficient to improve student knowledge at 4 months and recommended incorporating short refresher courses into the curriculum to enhance knowledge retention.20 Morello and colleagues aimed to evaluate P1 pharmacy students’ confidence and knowledge retention regarding diabetes self-care education using a performance case-based knowledge test administered at baseline, immediately after, and 9 months after the educational intervention, which consisted of lectures, active-learning assignments, and workshops.21 While no inferential statistical tests were performed, the average overall percent correct on the knowledge test nearly doubled from baseline (39.5%) to the test administered immediately following the educational intervention (85%), but declined by 9 months postintervention (76%).21 Collectively, both studies indicated that while knowledge improved as expected immediately following an educational intervention, retention declined over time.20,21 The more extensive design of the present investigation, which used the background knowledge probe at 4 time points (3 following the educational intervention), more fully illustrated that knowledge decline is not always stable or consistent throughout the time frame (Figure 1). Certainly, additional investigations are warranted in the realm of knowledge retention, but in our study, knowledge did appear to decay at a steeper rate for more complex terms.
Although the number of terms evaluated with the adapted clinical pharmacy probe was decreased to 5, compared with the 50 terms evaluated in the anthropology course, the probe addressed understanding for each key concept outlined in the module objectives. Use of an online survey was likely a more efficient use of technology than the paper method used for the anthropology probe. Lastly, providing the survey prior to the prelaboratory lecture and after the laboratory module reserved class time for course work.
There are 2 drawbacks to using this assessment. First, a considerable amount of time was devoted to the upfront development of the probe, which is comparable to another investigation in which 71% of instructors reported that more preparation time was needed to develop active-learning teaching efforts.12 In translating the probe from an anthropology class, time was predominantly devoted to the application of Bloom’s Taxonomy and to expanding the probe’s ability to assess deeper levels of thinking. If adapted for another clinical pharmacy course, the primary effort would likely be devoted to selection of the key terms, resulting in a relatively short development stage. Second, the probe provided subjective measurements, which may have varied from student to student. However, intra-subject variability is likely consistent, so the probe would provide a reasonable prediction of true change in student familiarity.
There are several benefits to using this background knowledge probe. After the translation process was complete, administering the probe had little impact on workload. Formatting the probe in an electronic survey tool took minimal effort and would likely do so for anyone using relatively simple, familiar software. It was easy to implement because students completed the assessment via an e-mail link. Because it was only 5 multiple-choice items long, it placed little burden on students who chose to participate. Because it was administered electronically, analyzing results and returning them to students was easy. Finally, because the probe was administered prior to class, results were analyzed and reported to students during the prelaboratory lecture, which also took little time. The minimal time required to administer this probe is significant because, when asked, most pharmacy instructors elect not to incorporate active-learning or assessment tools into didactic course work: their belief that such tools are too time intensive and come at the cost of lecture content outweighs their belief that the tools improve knowledge retention and student engagement.12 This concern extends beyond pharmacy faculty members to nursing faculty members, who also perceive time as a barrier to implementing critical-thinking strategies.22
Instead of conducting the probe as a preclass assignment, instructors could administer the assessment at the beginning of class using electronic “clickers,” which provide instant responses. This method would likely take little time from the lecture, possibly increase the response rate (assuming all students attend the lecture), and potentially increase student satisfaction, attentiveness, and involvement.23 However, implementing an in-class probe with “clickers” would only be reasonable for the baseline time point and the immediate follow-up measurement.
Adding more time between completion of the probe and the prelaboratory lecture could allow instructors to adjust content to target knowledge gaps revealed in the baseline results. Alternatively, instructors could use baseline results to adjust laboratory (versus prelaboratory) content to target knowledge deficits. However, course coordinators require faculty members to turn in laboratory content several weeks in advance, which hinders acute adjustment of material. Still, the results allow content in future years to focus on past student deficits in baseline knowledge, which, in this case, appeared to involve every term except “carbohydrate.”
SUMMARY
The anthropology background knowledge probe was effectively adapted to a clinical, diabetes-focused pharmacy skills laboratory using Bloom’s Taxonomy as a guide. The probe showed improvements in, and retention of, student familiarity and understanding of 5 diabetes-related terms. This is the first objective assessment of knowledge retention using this type of background knowledge probe CAT.
ACKNOWLEDGMENTS
The authors thank Kristen Helms, PharmD, BCSP, Associate Clinical Professor at Auburn University, and Sharon McDonough, PhD, at the University of Tennessee, for their expertise in education assessment techniques, and Paul Jungnickel, PhD, RPh, Associate Dean and Professor for Academic and Student Affairs, for his expertise in CAPE domain interpretation and application.
- Received January 13, 2014.
- Accepted March 13, 2014.
- © 2014 American Association of Colleges of Pharmacy