Abstract
Objectives. To compare the effectiveness of 2 case question formats (multiple-choice and open-ended) in prompting faculty members and students to explore multiple solutions and use factual evidence to defend their solutions.
Methods. Doctor of pharmacy (PharmD) faculty members and students responded to 2 pharmacy law/ethics cases, one followed by a case question prompt in multiple-choice format and the other by a question in open-ended format. The number of conclusions and the quality of the arguments generated were assessed using general linear modeling.
Results. PharmD faculty members outperformed students on every outcome variable measured, demonstrating expert problem-solving skills. All participants provided better quality arguments when the case question prompt was in multiple-choice format.
Conclusions. The better quality arguments prompted by multiple-choice case questions suggest that this format should be used when constructing case question prompts.
- argument analysis
- case-based learning
- active learning
- question format
- problem solving
INTRODUCTION
Standard 11 in the Accreditation Council for Pharmacy Education (ACPE) Standards 2007 requires that faculty members foster the maturation of students' problem-solving skills from novice toward expert.1 Navigating students from novice (those lacking training and/or experience) toward expert is desirable because experts possess more organized and accessible knowledge, and solve problems more rapidly and accurately in their fields of expertise.2,3 Experts also are more likely to defend their decisions with evidence and verifiable facts, leading to more accurate and defensible decisions, in contrast to novices who are more likely to rely on opinions not supported by evidence.4,5 However, expert problem-solving skills are not immediately realized upon graduation because experts require at least a decade of intensive preparation and practice in the profession to build the specialized knowledge and skills that facilitate decision making and problem solving.2
To help students mature as problem solvers, many faculty members use active-learning techniques such as case studies. However, using cases to promote active learning does not guarantee the advancement of students' problem-solving skills because the effective use of cases relies on the skills of the instructor executing the activities to reduce barriers to active learning (Table 1). Although faculty training is required to overcome these barriers, the way a case is written (the case format) may be the first step in helping faculty members use cases to promote active learning and advance students' problem-solving abilities.
Five Barriers to Promoting Active Learning with Cases
A case's format can support students' ability to problem solve when it emphasizes: (1) exploring several alternative solutions rather than focusing on a single correct answer; (2) using quality evidence to defend proposed solutions; and (3) reflecting on a solution's strengths and weaknesses.6–9 The format in which a case is presented influences students' contemplation of and response to the case, problem solving for the case, and ability to generate differential diagnoses.10–12 If faculty members overlook the importance of the format when writing and teaching with cases (such as requiring 1 correct answer rather than encouraging students to explore several solutions), the development of active learning and problem-solving skills may suffer, ultimately affecting students' lifelong learning and their progress toward becoming experts.
One specific aspect of the case format that may be instrumental to problem solving is how the question at the end of the case, or the case question prompt, is written, because it directs students' thinking.13 One type of case question prompt is the open-ended question, such as “what would you do, or what would you recommend?” Another type is the multiple-choice question, such as “select the best response from the following choices.” Case question prompts commonly are written as open-ended questions; however, using a multiple-choice question at the end of a case may help shape students' explanations because this question format helps learners identify the important issues in the case and may reinforce the necessary steps for problem solving.14,15 The multiple-choice format also may help students raise more questions, view the case from multiple perspectives, and consider alternative solutions, and ultimately may enrich case analysis.16–19 Students may overlook problem-solving steps if a case question prompt is limited to a simple open-ended question like “what would you do?” compared to a detailed prompt such as “Explore the possible solutions to this case and explain why your solution is better than the other available solutions.” However, a multiple-choice question may constrain students' thinking and their ability to answer because their choices are limited to the available options.
The objectives of this study were to compare how well 2 different case question prompt formats (multiple-choice and open-ended questions) supported PharmD faculty members and first- through fourth-professional-year (P1-P4) pharmacy students in the following: (1) creating and exploring multiple solutions (conclusions) for each case response, and (2) using quality evidence to defend solutions. We hypothesized that faculty members would serve as the control group, demonstrating expert problem-solving skills by exploring multiple solutions (objective 1) as measured by the number and quality of the conclusions provided. We also hypothesized that students would generate better arguments when given a multiple-choice case question prompt. Fourth-year students' responses were compared to those of first- through third-year students because it was hypothesized that P4 students would use better evidence to support their answers due to their work in advanced pharmacy practice experiences (APPEs), which require knowledge and skill application.
METHODS
Two pharmacy law/ethics cases that were similar in length, structure, difficulty, reading ease, and detail were used in the study. Case A (suicide case) involved a patient who possibly was stockpiling a medication for a suicide attempt; thus, a strong answer to the question that followed was expected to address the importance of communicating with the prescribing physician and applying the Beers criteria.20 Case B (police officer case) involved a police officer filling a prescription for an antipsychotic medication; thus, a strong answer to the question that followed was expected to include a discussion of patient privacy and Health Insurance Portability and Accountability Act (HIPAA) laws, knowledge of which is vital for practicing pharmacists.21 Both open-ended and multiple-choice case question prompts were constructed for each case. Both question formats asked for a 100-word minimum response so that participants' answers would be equivalent in length. The open-ended question asked participants to assume the role of a pharmacist, explain how they would respond to the situation, and describe why their response was better than other possible responses. The multiple-choice questions asked participants to select the best response option from 4 possible responses, justify their choice, and explain why the response they chose was better than the other 3 options. The question formats and cases were paired together in a counterbalanced manner so that participants were randomly assigned one case (A or B) with an open-ended question, and the other case (A or B) with a multiple-choice question. During the pilot study, 5 pharmacy students (from the P1-P4 years) and 3 PharmD faculty members responded to the open-ended questions for each case and reviewed the clarity and intent of the cases. The open-ended responses gathered in the pilot study were then used to construct the response options for the multiple-choice questions for cases A and B.
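The pairing scheme lends itself to a simple illustration. The sketch below shows one way to implement the counterbalanced assignment described above; it is a minimal, hypothetical reconstruction (the condition labels, function name, and seed are ours), not the software used in the study.

```python
import itertools
import random

# The 4 counterbalanced conditions: (first case, format), then (second case, format).
CONDITIONS = [
    (("A", "multiple_choice"), ("B", "open_ended")),
    (("A", "open_ended"), ("B", "multiple_choice")),
    (("B", "multiple_choice"), ("A", "open_ended")),
    (("B", "open_ended"), ("A", "multiple_choice")),
]

def assign_conditions(participant_ids, seed=0):
    """Shuffle participants, then cycle through the 4 conditions so each
    case/format/order combination is assigned about equally often."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: cond for pid, cond in zip(ids, itertools.cycle(CONDITIONS))}
```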
After receiving institutional review board approval, all P1 through P4 students and full-time and adjunct PharmD faculty members who worked with patients were invited to participate in the study. PhD faculty members who did not hold a PharmD were excluded from the study due to the clinical nature of the study cases. Separate student and faculty data collection sessions were held in the university computer laboratory. At the beginning of the testing session for each group (students and faculty members), all participants were asked to provide signed consent and then were given verbal instructions and assigned a unique user name and password. Testing began and ended at the same time for all participants. All participants received their first randomly assigned case and question combination (case A/multiple-choice question; case A/open-ended question; case B/multiple-choice question; or case B/open-ended question) on their computer screen via the Internet at the same time. Participants had no time limit for answering each case question. As soon as a participant submitted his/her first answer, the second randomly assigned case and question combination appeared on the screen. Participants could not modify their response to a question once it was submitted. Upon completion of both case questions, all participants completed a demographics questionnaire (year in the pharmacy program, amount of paid pharmacy work experience in years, current work setting, and length of time in a pharmacy work setting).
Participants' answers were assessed based on the number of words written and length of time taken to respond to each question. Answers were also reviewed using a case response scoring system called argument structure analysis, where responses are divided into reasons, conclusions, and arguments.5 Conclusions are the belief/point of view offered and constitute the “what” of the argument. Reasons are linked to the conclusion and constitute the “why” of the argument. An argument required 1 conclusion and at least 1 reason linked to the conclusion (more than 1 reason could be offered). Submitting a conclusion without providing any reason did not qualify as an argument. Some statements did not fit the above categories and were coded as irrelevant information.
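Because argument structure analysis reduces each response to a small set of coded units, it can be represented compactly. The sketch below is a hypothetical data model (our names, not the study's instrument) that encodes the rule that an argument requires a conclusion plus at least 1 linked reason.

```python
from dataclasses import dataclass, field

@dataclass
class Conclusion:
    """A belief/point of view offered in a response (the 'what')."""
    text: str
    reasons: list = field(default_factory=list)  # linked reasons (the 'why')

@dataclass
class CodedResponse:
    conclusions: list  # list of Conclusion objects
    irrelevant: list   # statements fitting no category

def arguments(response):
    """Arguments = conclusions with at least 1 linked reason; a bare
    conclusion with no reasons does not qualify."""
    return [c for c in response.conclusions if c.reasons]

def n_conclusions(response):
    """Objective 1 outcome: how many solutions a participant explored."""
    return len(response.conclusions)
```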
Once each participant's statement was coded as a conclusion, reason, or argument, each reason and the overall quality of the response (Table 2) were scored on a scale of 0 to 3 (0 = irrelevant or false; 1 = weak, eg, a matter of opinion; 2 = moderate, ie, evidence implied but not explicitly stated; and 3 = strong, ie, evidence explicitly stated, such as stating a law to defend a conclusion).5 Specific reason strength criteria were reviewed by a practicing PharmD who also held a juris doctorate degree. An example of a weak reason is, “I would not follow up with the doctor because I assume that the doctor already knows the patient's situation.” An example of a moderate reason is, “I would not fill the prescription because of my religious and ethical beliefs.” An example of a strong reason is, “I would fill the prescription if the patient filled the medication on a regular basis, which I confirmed by reviewing the patient's medication refill history in the computer.”
Argument Quality Scoring Guide #1 in Case Question Prompt Study
Once each argument was identified and scored using the guide in Table 2, an average argument quality score was calculated for each participant's response; this average served as the first measure of overall response quality (overall response quality 1). A second score of overall response quality (overall response quality 2) was generated using a 7-question checklist to evaluate each response, where each yes counted as 1 point and a total score was calculated (Table 3).5 The number of conclusions generated reflected the problem-solving step of exploring multiple solutions. Overall response quality 1 and 2 reflected the problem-solving step of using quality evidence to defend solutions.
Argument Checklist Total: A 7-Question Checklist in Case Question Prompt Study
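Both overall quality measures reduce to simple arithmetic, as the sketch below shows. The function names are ours, and treating a response with no scored arguments as 0 is an assumption.

```python
def overall_quality_1(argument_scores):
    """Overall response quality 1: mean of the 0-3 quality scores assigned
    to each argument in a response (Table 2 guide)."""
    if not argument_scores:
        return 0.0  # assumption: a response with no scored arguments scores 0
    return sum(argument_scores) / len(argument_scores)

def overall_quality_2(checklist_answers):
    """Overall response quality 2: each yes on the 7-question checklist
    (Table 3) counts as 1 point, for a 0-7 total."""
    return sum(1 for answer in checklist_answers if answer)

# Example: arguments scored 3, 2, and 2 average 2.33; 5 yes answers total 5.
print(overall_quality_1([3, 2, 2]))                                      # 2.33...
print(overall_quality_2([True, True, False, True, False, True, True]))  # 5
```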
The analyses were designed to (1) compare each participant's response to an open-ended question with his/her response to a multiple-choice question, and (2) compare faculty (expert) responses to student (novice) responses. Ten percent of the total case responses were randomly selected and reviewed by 2 independent raters, not affiliated with the project, who had backgrounds in educational methods and pharmacy education. Interrater reliability for overall response quality 1 and 2 was 80%.
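The article reports interrater reliability as a percentage without naming the statistic; assuming it is simple percent agreement, the calculation would look like the sketch below (hypothetical function name).

```python
def percent_agreement(rater1_scores, rater2_scores):
    """Percent agreement: the share of sampled responses on which the 2
    independent raters assigned the same score."""
    if len(rater1_scores) != len(rater2_scores):
        raise ValueError("Both raters must score the same responses")
    matches = sum(a == b for a, b in zip(rater1_scores, rater2_scores))
    return 100.0 * matches / len(rater1_scores)
```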
RESULTS
Demographics
Sixty-five percent (197 of 303) of students (novice group) completed the study (P1 = 45 students, P2 = 50 students, P3 = 53 students, and P4 = 49 students). The average pharmacy school grade point average for all participants was 3.1 ± 0.5 on a 4.0 scale (Table 4). Fifty-four pharmacy faculty members (37 full-time and 17 adjunct) with varying amounts of work experience (> 60% had 11 or more years of experience, which is considered expert) also enrolled in the study.
Descriptive Statistics for Faculty (Full-time and Adjunct) and Students (P1 – P4) in Case Question Prompt Study
Objective 1: Exploring Multiple Solutions
To measure participants' ability to explore multiple solutions, the number of conclusions given by each participant for each case was tallied. Faculty members provided significantly more conclusions than students (p < 0.05). However, there was no significant difference between the mean number of conclusions provided by P1-P3 students and the mean number provided by P4 students. The number of conclusions provided by participants who were given the open-ended question for case A (the patient stockpiling drugs potentially for suicide) and the multiple-choice question for case B (the police officer taking an antipsychotic medication) was significantly higher (p < 0.05) than the number provided by participants given the opposite combination (multiple-choice question for case A and open-ended question for case B). Within-subject differences also occurred: a significant format-by-participant interaction (p < 0.05) indicated that faculty members gave more conclusions for the multiple-choice question than for the open-ended question, whereas students gave more conclusions for the open-ended question than for the multiple-choice question.
Objective 2: Using Quality Evidence to Defend Solutions
A general linear model (GLM) repeated measures design was also used to evaluate objective 2. Faculty members generated a higher average argument quality score (overall response quality 1) on the 2 cases than the P1-P4 students (p < 0.05). The average argument quality score of P4 students was not significantly different from that of P1-P3 students. The argument quality score was greater among those who were given case A first and case B second than among those who were given case B first and case A second (p < 0.05), resulting in a significant case effect. However, there was no significant difference in average argument quality score between the open-ended and multiple-choice question formats (p > 0.05).
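The article names a GLM repeated measures design without giving the model specification. One common way to fit data with this structure (question format varying within subjects, faculty/student group between subjects) is a linear mixed model; the sketch below is an approximation under assumed column names and input file, not the authors' analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant-by-case response.
# Assumed columns: subject, group (faculty/student), fmt (open_ended/
# multiple_choice), oe_case (which case carried the open-ended question),
# quality_1 (average argument quality score).
df = pd.read_csv("case_responses.csv")

# A random intercept per subject accounts for the repeated measurements.
model = smf.mixedlm("quality_1 ~ fmt * group + oe_case", data=df,
                    groups=df["subject"])
print(model.fit().summary())
```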
On the argument checklist total (overall response quality 2), faculty members had better quality responses than the P1-P4 students (p < 0.05). There were no significant differences between the quality of the P4 students' responses and that of the P1-P3 students. The argument checklist total was significantly higher (p < 0.05) for participants who were given the open-ended question for case A (the patient stockpiling drugs potentially for suicide) and the multiple-choice question for case B (the police officer taking an antipsychotic medication) than for participants given the opposite combination (multiple-choice question for case A and open-ended question for case B).
Multiple-Choice Option Selection Results
There were no significant differences between faculty members and students in which of the 4 multiple-choice options they selected for the police officer and patient suicide cases, regardless of when the case was received. However, the majority of participants chose option 3 for both cases (59.8%), and 32.1% of participants chose option 4.
Unanticipated Results
Originally, we thought that the suicide case (case A) and the police officer case (case B) were equivalent: both were presented as ethical dilemmas, were described using a similar number of paragraphs, words per sentence, and total number of words, and were written at a similar Flesch-Kincaid grade level. However, the suicide case appeared to elicit stronger feelings/decisions and responses from participants. The suicide case also produced greater differences in responses depending on whether the participant was given the open-ended question or the multiple-choice question, and whether the case was presented first or second. Because of these differences, scores on responses to the open-ended questions could not be merged across the police and suicide cases, nor could scores on responses to the multiple-choice questions. To take the case type difference into account, a variable indicating whether the case with the open-ended question was case A (suicide) or case B (police) was added to the final model in the analyses.
DISCUSSION
The data from this study pertaining to case formatting considerations may provide direction to faculty members creating and using cases to promote active learning in the instructional environment. The study resulted in 5 findings. First, faculty members outperformed students on every case response outcome variable (Table 5), yielding significant differences in their ability to explore multiple solutions and use quality evidence to defend solutions, and supporting our hypothesis that faculty members would serve as experts or the control group in the study. This supports the findings of previous studies that experts possess problem-solving abilities that differ significantly from those of novices. One study limitation was that approximately 40% of the faculty members did not meet the criterion for being an expert (defined as 10 or more years of experience) but were included in the study analyses in order to adequately power the study. Although significant differences were seen in the current design, greater differences may have been observed if more stringent exclusion criteria had been used.
Expert (Faculty) vs. Novice (P1 – P4 Student) Differences on all Study Variables in Case Question Prompt Study
Second, faculty members explored multiple solutions (a problem-solving step indicative of experts) for both the open-ended and multiple-choice case questions, as indicated by the number of conclusions generated. It was originally hypothesized that multiple-choice options might constrain the thinking of experts, who had extensive knowledge and experience to draw from when considering a solution, but this did not occur; it is possible that the opposite is true and the multiple-choice questions triggered consideration of additional options by the experts. Students generating more conclusions when responding to the open-ended case questions was also an unanticipated outcome. This finding may indicate that the number of conclusions was not an effective measure of whether participants explored multiple solutions: the variable quantifies the number of solutions explored but does not account for their quality. A better option may have been the argument checklist total, since it evaluates whether alternative points of view are considered and fairly presented. In fact, participants had higher argument checklist totals when given multiple-choice case question prompts than when given open-ended case question prompts, suggesting that all participants benefitted from the multiple-choice question format when responding to a case. Therefore, the third finding is that the argument checklist total may be a useful tool for measuring responses to cases. This tool may increase interrater reliability and facilitate giving students structured feedback about their case responses to increase learning, but this would need further exploration in future studies. Fourth, the average argument quality score did not detect overall response quality or differences between the multiple-choice and open-ended question formats as well as the argument checklist total did. In spite of the extensive guidelines created for evaluating the quality of arguments generated, this measure was time-intensive to use when assessing the participants' responses, which may be prohibitive if using this measure to assess student performance on classroom assignments or tests.
Fifth, as indicated by the argument checklist total, participants benefitted from the multiple-choice format for case question prompts, which supports the hypothesis that the case question prompt format affects how all students (and even experts) think about and respond to a case, which could ultimately impact problem-solving skills. Faculty members who use cases in their teaching should consider using the multiple-choice format for case question prompts to help shift students from focusing on finding a single correct answer to the case to exploring multiple solutions and using the best evidence to defend solutions. The multiple-choice format may facilitate problem-solving skill development that is sustainable during coursework and beyond.
There are limitations to these study findings. Only 2 cases were used in the study, which may have resulted in a case specificity effect (ie, an individual's ability to solve problems varies across cases), implying that the results from these 2 cases may not apply to other cases.22 In addition to the low number of cases used in the study, an unanticipated yet significant case effect was found that impacted the results and highlighted the need for using factual evidence when responding to highly sensitive topics and legal/ethical dilemmas.
CONCLUSION
In summary, cases are a teaching tool commonly used in pharmacy education to promote active learning. Including cases in the curriculum does not guarantee that students will become actively engaged, explore multiple solutions, or use quality evidence to justify their solutions. Designing cases with a multiple-choice case question prompt appeared to help students structure their responses and focus on using quality evidence to defend their solutions, an important skill in problem solving. This format also can increase active learning because instructors can ask all students to select a response and then raise their hands to indicate the option they chose, making students' thinking visible to the instructor. Faculty members can also ask individual students to explain their selected answer and ultimately facilitate debate within the class over the various options.
ACKNOWLEDGMENTS
The contributions of all of the faculty members, staff, administrators, and students involved in the study are gratefully acknowledged. Special recognition goes to Angela M. O'Donnell, PhD, Clark A. Chinn, PhD, and Susan Goodin, PharmD, BCOP, for their thoughtful review and guidance with this work.
- Received July 1, 2009.
- Accepted August 27, 2009.
- © 2010 American Journal of Pharmaceutical Education