Research Article

Relationship Between Case Question Prompt Format and the Quality of Responses

Melissa S. Medina
College of Pharmacy, University of Oklahoma Health Science Center

American Journal of Pharmaceutical Education March 2010, 74 (2) 29; DOI: https://doi.org/10.5688/aj740229

Abstract

Objectives. To compare the effectiveness of 2 case question formats (multiple choice and open ended) to prompt faculty members and students to explore multiple solutions and use factual evidence to defend their solutions.

Methods. Doctor of pharmacy (PharmD) faculty members and students responded to 2 pharmacy law/ethics cases, one followed by a case question prompt in multiple-choice format and the other by a question in open-ended format. The number of conclusions and the quality of the arguments generated were assessed using general linear modeling.

Results. PharmD faculty members outperformed students on every outcome variable measured, demonstrating expert problem-solving skills. All participants provided better quality arguments when the case prompt question was in multiple-choice format.

Conclusions. The better quality arguments prompted by multiple-choice case questions suggest that this format should be used when constructing case question prompts.

Keywords:
  • argument analysis
  • case-based learning
  • active learning
  • question format
  • problem solving

INTRODUCTION

Standard 11 in the Accreditation Council for Pharmacy Education (ACPE) Standards 2007 requires that faculty members foster the maturation of students' problem-solving skills from novice toward expert.1 Navigating students from novice (those lacking training and/or experience) toward expert is desirable because experts possess more organized and accessible knowledge, and solve problems more rapidly and accurately in their fields of expertise.2,3 Experts also are more likely to defend their decisions with evidence and verifiable facts, leading to more accurate and defensible decisions, in contrast to novices who are more likely to rely on opinions not supported by evidence.4,5 However, expert problem-solving skills are not immediately realized upon graduation because experts require at least a decade of intensive preparation and practice in the profession to build the specialized knowledge and skills that facilitate decision making and problem solving.2

To help students mature as problem solvers, many faculty members use active-learning techniques such as case studies. However, using cases to promote active learning does not guarantee the advancement of students' problem-solving skills because the effective use of cases relies on the skills of the instructor executing the activities to reduce barriers to active learning (Table 1). Although faculty training is required to overcome these barriers, the way a case is written (the case format) may be the first step in helping faculty members use cases to promote active learning and advance students' problem-solving abilities.

Table 1. Five Barriers to Promoting Active Learning with Cases

A case's format can support students' ability to problem solve when it emphasizes: (1) exploring several alternative solutions rather than focusing on a single correct answer; (2) using quality evidence to defend proposed solutions; and (3) reflecting on a solution's strengths and weaknesses.6–9 The format in which a case is presented influences students' contemplation of and response to the case, problem solving for the case, and ability to generate differential diagnoses.10–12 If faculty members overlook the importance of the format when writing and teaching with cases (such as requiring 1 correct answer rather than encouraging students to explore several solutions), the development of active learning and problem-solving skills may suffer, ultimately affecting students' lifelong learning and their progress toward becoming experts.

One specific aspect of the case format that may be instrumental to problem solving is how the question at the end of the case, or the case question prompt, is written, because it directs students' thinking.13 One type of case question prompt is the open-ended question, such as “What would you do, or what would you recommend?” Another type is the multiple-choice question, such as “Select the best response from the following choices.” Case question prompts commonly are written as open-ended questions; however, using a multiple-choice question at the end of a case may help shape students' explanations because this format helps learners identify the important issues in the case and may reinforce the necessary steps for problem solving.14,15 The multiple-choice format also may help students raise more questions, view the case from multiple perspectives, and consider alternative solutions, ultimately enriching case analysis.16–19 Students may overlook problem-solving steps if a case question prompt is limited to a simple open-ended question like “What would you do?” compared to a detailed prompt such as “Explore the possible solutions to this case and explain why your solution is better than the other available solutions.” However, a multiple-choice question may constrain students' thinking and their ability to answer because their choices are limited to the available options.

The objectives of this study were to compare how well 2 different case question prompt formats (multiple-choice and open-ended questions) supported PharmD faculty members and first- through fourth-professional-year (P1-P4) pharmacy students in the following: (1) creating and exploring multiple solutions (conclusions) for each case response, and (2) using quality evidence to defend solutions. We hypothesized that faculty members would serve as the control group, demonstrating expert problem-solving skills by exploring multiple solutions (objective 1) as measured by the number and quality of the conclusions provided. We also hypothesized that students would generate better arguments when given a multiple-choice case question prompt. Fourth-year students' responses were compared to those of first- through third-year students because it was hypothesized that P4 students would use better evidence to support their answers due to their work in advanced pharmacy practice experiences (APPEs), which require knowledge and skill application.

METHODS

Two pharmacy law/ethics cases that were similar in length, structure, difficulty, reading ease, and detail were used in the study. Case A (suicide case) involved a patient who possibly was stockpiling a medication for a suicide attempt; thus, a strong answer to the question that followed was expected to address the importance of communicating with the prescribing physician and applying the Beers criteria.20 Case B (police officer case) involved a police officer filling a prescription for an antipsychotic medication; thus, a strong answer to the question that followed was expected to include a discussion of patient privacy and Health Insurance Portability and Accountability Act (HIPAA) laws, knowledge of which is vital for practicing pharmacists.21 Both open-ended and multiple-choice case question prompts were constructed for each case. Both question formats asked for a 100-word minimum response so that participants' answers would be equivalent in length. The open-ended question asked participants to assume the role of a pharmacist, explain how they would respond to the situation, and describe why their response was better than other possible responses. The multiple-choice questions asked participants to select the best response option from 4 possible responses, justify their choice, and explain why the response they chose was better than the other 3 options. The question formats and cases were paired together in a counterbalanced manner so that participants were randomly assigned one case (A or B) with an open-ended question, and the other case (A or B) with a multiple-choice question. During the pilot study, 5 pharmacy students (from P1 – P4 years) and 3 PharmD faculty members responded to the open-ended questions for each case and reviewed the clarity and intent of the cases. The responses to the open-ended question gathered in the pilot study were then used to construct the responses for the multiple-choice questions for cases A and B.
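
As an illustration of the counterbalanced assignment just described, the following minimal Python sketch pairs each participant with one case per question format and randomizes presentation order. The participant labels and seed are invented for the example; this is not the study's software.

import random

# The two counterbalanced case/format pairings from the study design:
# each participant answers one case as open-ended and the other as
# multiple-choice.
PAIRINGS = [
    [("A", "open-ended"), ("B", "multiple-choice")],
    [("A", "multiple-choice"), ("B", "open-ended")],
]

def assign_pairing(rng):
    """Randomly pick a pairing and randomize which case appears first."""
    pairing = list(rng.choice(PAIRINGS))
    rng.shuffle(pairing)
    return pairing

rng = random.Random(2010)  # hypothetical seed for a reproducible example
for participant in ["P1-001", "P4-027", "FAC-12"]:  # invented labels
    print(participant, assign_pairing(rng))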

After receiving institutional review board approval, all P1 through P4 students and full-time and adjunct PharmD faculty members who worked with patients were invited to participate in the study. PhD faculty members who did not hold a PharmD were excluded from the study due to the clinical nature of the study cases. Separate student and faculty data collection sessions were held in the university computer laboratory. At the beginning of the testing session for each group (students and faculty members), all participants were asked to provide signed consent and then were given verbal instructions and assigned a unique user name and password. Testing began and ended at the same time for all participants. All participants received their first randomly assigned case and question combination (case A/multiple-choice question; case A/open-ended question; case B/multiple-choice question; or case B/open-ended question) on their computer screen via the Internet at the same time. Participants had no time limit for answering each case question. As soon as a participant submitted his/her first answer, the second randomly assigned case and question combination appeared on the screen. Participants could not modify their response to a question once it was submitted. Upon completion of both case questions, all participants completed a demographics questionnaire (year in the pharmacy program, amount of paid pharmacy work experience in years, current work setting, and length of time in a pharmacy work setting).

Participants' answers were assessed based on the number of words written and length of time taken to respond to each question. Answers were also reviewed using a case response scoring system called argument structure analysis, where responses are divided into reasons, conclusions, and arguments.5 Conclusions are the belief/point of view offered and constitute the “what” of the argument. Reasons are linked to the conclusion and constitute the “why” of the argument. An argument required 1 conclusion and at least 1 reason linked to the conclusion (more than 1 reason could be offered). Submitting a conclusion without providing any reason did not qualify as an argument. Some statements did not fit the above categories and were coded as irrelevant information.

Once each participant's statement was coded as a conclusion, reason, or argument, each reason and the overall quality of the response (Table 2) were scored on a scale of 0 to 3 (0 = irrelevant or false; 1 = weak, eg, a matter of opinion; 2 = moderate, ie, evidence implied but not explicitly stated; and 3 = strong, ie, evidence explicitly stated, such as stating a law to defend a conclusion).5 Specific reason strength criteria were reviewed by a practicing PharmD who also held a juris doctorate degree. An example of a weak reason is, “I would not follow up with the doctor because I assume that the doctor already knows the patient's situation.” An example of a moderate reason is, “I would not fill the prescription because of my religious and ethical beliefs.” An example of a strong reason is, “I would fill the prescription if the patient filled the medication on a regular basis, which I confirmed by reviewing the patient's medication refill history in the computer.”
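
To make the coding scheme concrete, here is a minimal Python sketch of argument structure analysis. The data structures, the sample response, and its scores are hypothetical illustrations, not the study's instrument or data.

from dataclasses import dataclass, field

@dataclass
class Reason:
    text: str
    strength: int  # 0 = irrelevant/false, 1 = weak, 2 = moderate, 3 = strong

@dataclass
class Conclusion:
    text: str
    reasons: list = field(default_factory=list)  # Reasons linked to this conclusion

def count_arguments(conclusions):
    """An argument is a conclusion with at least one linked reason."""
    return sum(1 for c in conclusions if c.reasons)

# Hypothetical coded response to the police officer case:
response = [
    Conclusion("Fill the prescription",
               [Reason("Refill history confirms regular, appropriate use", 3)]),
    Conclusion("Decline to discuss the prescription with the officer's colleague",
               [Reason("Disclosure would violate HIPAA", 3),
                Reason("It would feel wrong to me", 1)]),
    Conclusion("Call the prescriber"),  # no linked reason: not an argument
]
print(count_arguments(response))  # prints 2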

Table 2. Argument Quality Scoring Guide #1 in Case Question Prompt Study

Once each argument was identified and scored using the guide in Table 2, an average argument quality score was calculated for each participant's response; this was the first measure of overall response quality (overall response quality 1). A second score of overall response quality (overall response quality 2) was generated using a 7-question checklist to evaluate each response, where each yes counted as 1 point and a total score was calculated (Table 3).5 The number of conclusions generated reflected the problem-solving step of exploring multiple solutions. Overall response quality scores 1 and 2 reflected the problem-solving step of using quality evidence to defend solutions.
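
A brief sketch of how the two overall-quality measures could be computed from a scored response; all scores below are hypothetical (the actual checklist items appear in Table 3).

from statistics import mean

# Overall response quality 1: the mean of the per-argument quality scores.
argument_scores = [3, 2, 1]        # hypothetical 0-3 scores for one response
quality_1 = mean(argument_scores)  # 2.0

# Overall response quality 2: 1 point per "yes" on the 7-item checklist.
checklist_answers = [True, True, False, True, False, True, True]
quality_2 = sum(checklist_answers)  # 5 of a possible 7

print(quality_1, quality_2)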

Table 3. Argument Checklist Total: A 7-Question Checklist in Case Question Prompt Study

The analyses were designed to (1) compare individual participants' responses to an open-ended question with their responses to a multiple-choice question, and (2) compare faculty (expert) responses to student (novice) responses. Ten percent of the total case responses were randomly selected and reviewed by 2 independent raters with backgrounds in educational methods and pharmacy education who were not affiliated with the project. Interrater reliability for overall response quality 1 and 2 was 80%.
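
The paper does not state the agreement formula; simple percent agreement is one plausible reading. A minimal sketch with invented paired ratings that happen to agree 80% of the time:

def percent_agreement(rater1, rater2):
    """Fraction of responses on which the two raters gave the same score."""
    matches = sum(a == b for a, b in zip(rater1, rater2))
    return matches / len(rater1)

# Hypothetical checklist totals from two raters on the 10% subsample:
r1 = [5, 4, 6, 3, 5, 2, 7, 4, 4, 6]
r2 = [5, 4, 5, 3, 5, 2, 7, 4, 3, 6]
print(f"{percent_agreement(r1, r2):.0%}")  # 80%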

RESULTS

Demographics

Sixty-five percent (197 of 303) of students (the novice group) completed the study (P1 = 45 students, P2 = 50 students, P3 = 53 students, and P4 = 49 students). The average pharmacy school grade point average for all participants was 3.1 ± 0.5 on a 4.0 scale (Table 4). Fifty-four pharmacy faculty members (37 full-time and 17 adjunct) with varying amounts of work experience (more than 60% had 11 or more years of experience, which is considered expert) also enrolled in the study.

Table 4. Descriptive Statistics for Faculty (Full-time and Adjunct) and Students (P1 – P4) in Case Question Prompt Study

Objective 1: Exploring Multiple Solutions

To measure participants' ability to explore multiple solutions, the number of conclusions given by each participant for each case was tallied. Faculty members provided significantly more conclusions than students (p < 0.05). However, there was no significant difference between the mean number of conclusions provided by P1-P3 students and the mean number provided by P4 students. The number of conclusions provided by participants who were given the open-ended question for case A (the patient stockpiling drugs potentially for suicide) and the multiple-choice question for case B (the police officer taking an antipsychotic medication) was significantly higher (p < 0.05) than the number provided by participants given the opposite combination (multiple-choice question for case A and open-ended question for case B). Within-subject differences also occurred: a significant format by participant effect (p < 0.05) indicated that the number of conclusions given by faculty members for the multiple-choice question was greater than the number given for the open-ended question. For students, however, the number of conclusions given for the open-ended question was greater than the number given for the multiple-choice question.

Objective 2: Using Quality Evidence to Defend Solutions

A generalized linear model (GLM) repeated measures design was also used to evaluate objective 2. Faculty members generated a higher average argument quality score (overall response quality score 1) on the 2 cases than the P1-P4 students (p < 0.05). The average argument quality score of P4 students was not significantly different from that of P1-P3 students. The argument quality score was greater among those who were given case A first and case B second compared to those who were given case B first and case A second (p < 0.05), resulting in a significant case effect. However, there was no significant difference in average argument quality score between the open-ended and multiple-choice formats (p > 0.05).
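
The paper does not report its model syntax. As a rough analogue of a repeated-measures analysis of format (within subjects) by group (between subjects), one could fit a linear mixed model with a random intercept per participant. The sketch below uses entirely synthetic data and statsmodels; it is an illustration, not the authors' procedure.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120  # hypothetical participants, each giving two responses
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n), 2),
    "group": np.repeat(rng.choice(["faculty", "student"], size=n), 2),
    "fmt": np.tile(["open_ended", "multiple_choice"], n),
})
# Synthetic quality scores with small faculty and format effects plus noise
df["quality"] = (
    2.0
    + 0.8 * (df["group"] == "faculty")
    + 0.3 * (df["fmt"] == "multiple_choice")
    + rng.normal(0, 0.5, size=2 * n)
)

# Random intercept per participant accounts for the repeated measures
model = smf.mixedlm("quality ~ fmt * group", df, groups=df["participant"])
print(model.fit().summary())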

On the argument checklist total (overall response quality score 2), faculty members had better quality responses than the P1-P4 students (p < 0.05). There were no significant differences between the quality of the P4 students' responses and that of the P1-P3 students. The argument checklist total was significantly higher (p < 0.05) for participants who were given the open-ended question for case A (the patient stockpiling drugs potentially for suicide) and the multiple-choice question for case B (the police officer taking an antipsychotic medication) than for participants given the opposite combination (multiple-choice question for case A and open-ended question for case B).

Multiple-Choice Option Selection Results

There were no significant differences between faculty members and students in which of the 4 multiple-choice options they selected for the police officer and patient suicide cases, regardless of when the case was received. However, the majority of participants chose option 3 for both cases (59.8%), and 32.1% of participants chose option 4.

Unanticipated Results

Originally, we thought that the suicide case (case A) and the police officer case (case B) were equivalent, as both were presented as ethical dilemmas, were described using a similar number of paragraphs, words per sentence, and total number of words, and were written at a similar Flesch-Kincaid grade level. However, the suicide case appeared to elicit stronger feelings/decisions and responses from participants. The suicide case also produced greater differences in responses depending on whether the participant was given the open-ended question or the multiple-choice question, and whether the case was presented first or second. Because of these differences, the scores on the open-ended responses to the two cases could not be merged, nor could the scores on the multiple-choice responses. To take the case type difference into account, a variable indicating whether the case with the open-ended question was case A (suicide) or case B (police) was added to the final model in the analyses.

DISCUSSION

The data from this study pertaining to case formatting considerations may provide direction to faculty members creating and using cases to promote active learning in the instructional environment. The study resulted in 5 findings. First, faculty members outperformed students on every case response outcome variable (Table 5), yielding significant differences in their ability to explore multiple solutions and use quality evidence to defend solutions, supporting our hypothesis that faculty members would serve as experts or the control group in the study. This supports the findings of previous studies that experts possess problem-solving abilities that differ significantly from those of novices. One study limitation was that approximately 40% of the faculty members did not meet the criterion for being an expert, defined as 10 or more years of experience, but were included in the study analyses in order to adequately power the study. Although significant differences were seen in the current design, greater differences may have been observed if more stringent exclusion criteria had been used.

Table 5. Expert (Faculty) vs. Novice (P1 – P4 Student) Differences on all Study Variables in Case Question Prompt Study

Second, faculty members explored multiple solutions (a problem-solving step indicative of experts) for both the open-ended and multiple-choice case questions, as indicated by the number of conclusions generated. It was originally hypothesized that multiple-choice options might constrain the thinking of experts, who had extensive knowledge and experience to draw from when considering a solution, but this did not occur; it is possible that the opposite is true and the multiple-choice questions triggered consideration of additional options by the experts. It was also unanticipated that students generated more conclusions when responding to the open-ended case questions. This finding may indicate that the number of conclusions was not an effective measure of whether participants explored multiple solutions: this variable quantifies the number of solutions explored, but it does not account for their quality. A better option may have been the argument checklist total, since it evaluates whether alternative points of view are considered and fairly presented. In fact, participants had higher argument checklist totals when given multiple-choice case question prompts than when given open-ended case question prompts, suggesting that all participants benefitted from the multiple-choice question format when responding to a case. Therefore, the third finding is that the argument checklist total may be a useful tool to measure responses to cases. This tool may increase interrater reliability and facilitate giving students structured feedback about their case responses to increase learning, but this would need further exploration in future studies. Fourth, the average argument quality score did not appear to detect overall response quality, or differences between multiple-choice and open-ended question formats, as well as the argument checklist total did. In spite of the extensive guidelines created for evaluating the quality of arguments generated, this measure was time intensive to use when assessing the participants' responses, which may be prohibitive if using this measure to assess student performance on classroom assignments or tests.

Fifth, as indicated by the argument checklist total, participants benefitted from the multiple-choice format for case question prompts, which supports the hypothesis that the case question prompt format affects how all students (and even experts) think about and respond to a case, which could ultimately impact problem-solving skills. Faculty members who use cases in their teaching should consider using the multiple-choice format for case question prompts to help shift students from focusing on finding a single correct answer to the case to exploring multiple solutions and using the best evidence to defend solutions. The multiple-choice format may facilitate problem-solving skill development that is sustainable during coursework and beyond.

There are limitations to these study findings. Only 2 cases were used in the study, which may have resulted in a case specificity effect (ie, an individual's ability to solve problems varies across cases), implying that the results from these 2 cases may not apply to other cases.22 In addition to the low number of cases used in the study, an unanticipated yet significant case effect was found that affected the results and highlights the need for using factual evidence when responding to highly sensitive topics and legal/ethical dilemmas.

CONCLUSION

In summary, cases are a teaching tool in pharmacy education commonly used to promote active learning. Including cases in the curriculum does not guarantee that students will become actively engaged, explore multiple solutions, or use quality evidence to justify their solutions. Designing cases with a multiple-choice case question prompt appeared to help students structure their responses to the case and focus on using quality evidence to defend their solutions, which is an important skill in problem solving. This format increases active learning because instructors can encourage all students to select a response and then raise their hand to indicate the option they chose, making students' thinking visible (more apparent) to the instructor. Faculty members can also ask individual students to explain their selected answer and ultimately facilitate debate within the class over the various options.

ACKNOWLEDGMENTS

The contributions of all of the faculty members, staff, administrators, and students involved in the study are gratefully acknowledged. Special recognition goes to Angela M. O'Donnell, PhD, Clark A. Chinn, PhD, and Susan Goodin, PharmD, BCOP, for their thoughtful review and guidance with this work.

  • Received July 1, 2009.
  • Accepted August 27, 2009.
  • © 2010 American Journal of Pharmaceutical Education

REFERENCES

  1. Accreditation Council for Pharmacy Education. Accreditation Standards and Guidelines for the Professional Program in Pharmacy Leading to the Doctor of Pharmacy Degree 2007. http://www.acpe-accredit.org/pdf/ACPE_Revised_PharmD_Standards_Adopted_Jan152006.DOC. Accessed February 4, 2010.
  2. Ericsson KA, ed. The Road to Excellence: The Acquisition of Expert Performance in the Arts and Sciences, Sports, and Games. Mahwah, NJ: Lawrence Erlbaum Associates; 1996:1-50.
  3. Halpern DF, Wai J. The world of competitive Scrabble: novice and expert differences in visuospatial and verbal abilities. J Exp Psychol Appl. 2007;13(2):79-94.
  4. Govier T. What is an argument and what is not? In: Govier T. A Practical Study of Argument. Belmont, CA: Wadsworth Publishing Company; 1997:1-18.
  5. Halpern DF. Analyzing arguments. In: Thought & Knowledge: An Introduction to Critical Thinking. 4th ed. Mahwah, NJ: Lawrence Erlbaum Associates; 2003:182-228.
  6. McKeachie WJ. Problem-based learning: teaching with cases, simulations, and games. In: McKeachie's Teaching Tips: Strategies, Research, and Theory for College and University Teachers. 12th ed. Boston, MA: Houghton Mifflin; 2006:222-225.
  7. Ertmer PA, Stepich DA. Case-Based Instruction in Post-Secondary Education: Developing Students' Problem-Solving Expertise. 1999. ERIC Document Reproduction Service No. ED 435375. Retrieved August 10, 2009, from EBSCO Host ERIC database.
  8. Eshach H, Bitterman H. From case-based reasoning to problem-based learning. Acad Med. 2003;78(5):491-496.
  9. Kelson ACM. Epilogue: assessment of students for proactive lifelong learning. In: Evenson DH, Hmelo CE, eds. Problem-Based Learning: A Research Perspective on Learning Interactions. Mahwah, NJ: Lawrence Erlbaum Associates; 2000:315-345.
  10. Kamin CS, O'Sullivan PS, Younger M, Deterding R. Measuring critical thinking in problem-based learning discourse. Teach Learn Med. 2001;13(1):27-35.
  11. Lohman MC, Finkelstein M. Designing cases for problem-based learning to foster problem solving skill. Eur J Dent Educ. 2002;6(3):121-127.
  12. Sutyak JP, Lebeau RB, Spotnitz AJ, O'Donnell AM, Mehne PR. Role of case structure and prior experience in a case-based surgical clerkship. Am J Surg. 1996;172(3):286-290.
  13. Ilaria DR. Questions that engage students in mathematical thinking. Paper presented at: annual meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education; October 2002; Athens, GA. ERIC Document Reproduction Service No. ED471774. Retrieved March 1, 2010, from EBSCO Host ERIC database.
  14. Chi MTH, Bassok M. Learning from examples via self-explanations. In: Resnick LB, ed. Knowing, Learning and Instruction: Essays in Honor of Robert Glaser. Hillsdale, NJ: Lawrence Erlbaum Associates; 1989:252-282.
  15. Kleinfeld J. Learning to think like a teacher. In: Shulman J, ed. Case Methods in Teacher Education. New York, NY: Teachers College Press; 1992:33-49.
  16. Barrows HS. A taxonomy of problem-based learning methods. Med Educ. 1986;20(6):481-486.
  17. Gallagher SA. Problem-based learning: where did it come from, what does it do, and where is it going? J Educ Gifted. 1997;20:332-362.
  18. Norman GR, Schmidt HG. The psychological basis of problem-based learning: a review of the evidence. Acad Med. 1992;67(9):557-565.
  19. Shulman J. Teacher-written cases with commentaries. In: Shulman J, ed. Case Methods in Teacher Education. New York, NY: Teachers College Press; 1992:131-152.
  20. Fick DM, Cooper JW, Wade WE, Waller JL, Maclean JR, Beers MH. Updating the Beers criteria for potentially inappropriate medication use in older adults: results of a US consensus panel of experts. Arch Intern Med. 2003;163(22):2716-2724.
  21. Veatch RM, Haddad A. Case Studies in Pharmacy Ethics. New York, NY: Oxford University Press; 1999:125.
  22. Wimmers PF, Splinter TAW, Hancock GR, Schmidt HG. Clinical competence: general ability or case-specific? Adv Health Sci Educ. 2007;12(3):299-314.