LETTERS

Summative Evaluations When Using an Objective Structured Teaching Exercise

Michael J. Peeters, Conor P. Kelly and M. Kenneth Cor
American Journal of Pharmaceutical Education May 2015, 79 (4) 60; DOI: https://doi.org/10.5688/ajpe79460
Michael J. Peeters and Conor P. Kelly: University of Toledo College of Pharmacy and Pharmaceutical Sciences, Toledo, Ohio
M. Kenneth Cor: University of Alberta Faculty of Pharmacy and Pharmaceutical Sciences, Edmonton, Alberta, Canada

Sturpe and Schaivone’s primer on objective structured teaching exercises (OSTEs) was a timely addition to the pharmacy education literature.1 The article cogently described the need for more effective faculty development and laid out many “how-to” elements of using OSTEs for pedagogical faculty development. Building on these ideas, we would like to add to the conversation by expanding on the reliability needs (ie, consistency and fairness) of this type of assessment.

The OSTE is an elegant extension of the objective structured clinical examination (OSCE) technique. Such examinations are typically used to summatively assess pharmacy students’ clinical abilities, and in an OSCE’s high-stakes context, achieving high levels of reliability is imperative. Generalizability theory (G-theory) is a gold-standard means of quantifying reliability for this type of testing. It provides a framework for teasing apart the sources of variation, such as raters, scoring-instrument components, and the specific context of each case, that contribute to total score variability.2,3 G-theory also demonstrates how context specificity leads to variation in performance based solely on differences in how students experience, or are treated in, one context versus the next (ie, different raters and/or station scenarios). In recent decades, notable work describing and examining context specificity within assessments has accumulated.4-6
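For readers less familiar with G-theory, the decomposition described above can be written out. This is the standard formulation for a fully crossed person (p) by station (s) by rater (r) design, not an equation reproduced from the letter:

```latex
% Observed-score variance decomposes into facet and interaction components:
\sigma^2(X_{psr}) = \sigma^2_p + \sigma^2_s + \sigma^2_r
  + \sigma^2_{ps} + \sigma^2_{pr} + \sigma^2_{sr} + \sigma^2_{psr,e}

% Generalizability coefficient for relative decisions, averaging scores
% over n_s stations and n_r raters:
E\rho^2 = \frac{\sigma^2_p}
  {\sigma^2_p + \dfrac{\sigma^2_{ps}}{n_s} + \dfrac{\sigma^2_{pr}}{n_r}
   + \dfrac{\sigma^2_{psr,e}}{n_s\, n_r}}
```

Because the person-by-station, person-by-rater, and residual terms are divided by the number of stations and raters sampled, averaging over more of each shrinks the error variance relative to true person variance.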

For example, G-theory analyses of data from OSCEs and OSTEs show that increasing the number of stations and/or examiners in a scoring scheme reduces the score variation attributable to these design elements and thereby substantially improves reliability.2,5-7 Taken together, these findings suggest that if colleges and schools of pharmacy move toward using OSTEs for summative purposes, OSTE designers must pay careful attention to the number of stations and raters used to produce overall OSTE scores. Pharmacy education should therefore move away from single-rater, single-station models of performance assessment toward models with more stations and more raters.

We commend Sturpe and Schaivone for discussing how an OSTE can be used for both formative assessment (ie, ongoing feedback and faculty development) and summative assessment (ie, faculty/preceptor evaluation). In a setting of formative faculty development, feedback is more important than high-level reliability,8 and Sturpe and Schaivone eloquently describe this formative development goal. However, if an OSTE were used in evaluation or as an outcome in research, high-level reliability and the avoidance of measurement error would become imperative.3 Ultimately, we emphasize that, as Sturpe has noted elsewhere with summative OSCEs, more stations should be used to achieve acceptably high reliability.9 Because OSTEs are properly viewed as a version of OSCEs, the same principle applies: if an OSTE is to be used for summative evaluation, multiple stations and raters are needed, possibly more than 3 to 5 stations.7 In general, fewer stations require more raters per station (numerous raters if only 3 stations are used), though for reliability, adding stations is often much better than adding more raters within each station.10
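The station-versus-rater tradeoff can be illustrated with a small D-study projection. The variance components below are hypothetical, chosen only to reproduce the pattern commonly reported for OSCE-style examinations, in which person-by-station variance (context specificity) dominates person-by-rater variance:

```python
def g_coefficient(var_p, var_ps, var_pr, var_res, n_stations, n_raters):
    """Generalizability coefficient for a fully crossed person x station x
    rater design under relative decisions (a D-study projection)."""
    rel_error = (var_ps / n_stations
                 + var_pr / n_raters
                 + var_res / (n_stations * n_raters))
    return var_p / (var_p + rel_error)

# Hypothetical variance components: person-by-station (context specificity)
# is large relative to person-by-rater, as is typical for performance exams.
components = dict(var_p=0.30, var_ps=0.40, var_pr=0.05, var_res=0.25)

one_station = g_coefficient(**components, n_stations=1, n_raters=1)
more_raters = g_coefficient(**components, n_stations=3, n_raters=4)
more_stations = g_coefficient(**components, n_stations=6, n_raters=2)

print(f"1 station,  1 rater:  {one_station:.2f}")
print(f"3 stations, 4 raters: {more_raters:.2f}")
print(f"6 stations, 2 raters: {more_stations:.2f}")
```

With 12 rated performances in both multi-station designs, spending them on 6 stations with 2 raters each yields a higher projected coefficient than 3 stations with 4 raters each, illustrating why adding stations typically buys more reliability than adding raters within a station.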

  • © 2015 American Association of Colleges of Pharmacy

REFERENCES

  1. Sturpe DA, Schaivone KA. A primer for objective structured teaching exercises. Am J Pharm Educ. 2014;78(5):Article 104.
  2. Norman G, Eva KW. Quantitative research methods in medical education. In: Swanwick T, ed. Understanding Medical Education: Evidence, Theory and Practice. 2nd ed. Chichester, UK: Wiley Blackwell; 2014:349-369.
  3. Peeters MJ, Beltyukova SA, Martin BA. Educational testing and validity of conclusions in the scholarship of teaching and learning. Am J Pharm Educ. 2013;77(9):Article 186.
  4. van der Vleuten CP. When I say … context specificity. Med Educ. 2014;48(3):234-235.
  5. Eva KW. On the generality of specificity. Med Educ. 2003;37(7):587-588.
  6. Peeters MJ, Serres ML, Gundrum TE. Improving reliability of a residency interview process. Am J Pharm Educ. 2013;77(8):Article 168.
  7. Quick M, Mazor K, Haley HL, et al. Reliability and validity of checklists and global ratings by standardized students, trained raters, and faculty raters in an objective structured teaching exercise. Teach Learn Med. 2005;17(3):202-209.
  8. Cox CD, Peeters MJ, Stanford BL, Seifert CF. Pilot of peer assessment within experiential teaching and learning. Curr Pharm Teach Learn. 2013;5(4):311-320.
  9. Sturpe DA. Objective structured clinical examinations in doctor of pharmacy programs in the United States. Am J Pharm Educ. 2010;74(8):Article 148.
  10. van der Vleuten CPM, Swanson DB. Assessment of clinical skills with standardized patients: state of the art. Teach Learn Med. 1990;2(2):58-76.