Research Article

Objective Structured Clinical Examinations (OSCEs) Compared With Traditional Assessment Methods

Stewart Brian Kirton and Laura Kravitz
American Journal of Pharmaceutical Education August 2011, 75 (6) 111; DOI: https://doi.org/10.5688/ajpe756111
School of Pharmacy, University of Hertfordshire, United Kingdom

Abstract

Objectives. To compare objective structured clinical examinations (OSCEs) and traditional assessment methods among recent pharmacy graduates.

Methods. Individual student performance in OSCEs was compared with performance on traditional pharmacy-practice examinations at the same level of program study.

Results. A moderate correlation was found between individual attainment in OSCE examinations and on traditional pharmacy practice examinations at the same level.

Conclusions. OSCEs add value to traditional methods of assessment because the 2 evaluation methods measure different competencies.

Keywords
  • objective structured clinical examination
  • assessment
  • pharmacy practice
  • student performance
  • clinical competency

INTRODUCTION

Training of pharmacy undergraduates in the United Kingdom (UK) in preparation for their careers as pharmacists is undergoing change. The seminal White Paper1 by Anne Galbraith illustrates the need for the discipline to evolve to accommodate the changing demands of the population. Without community pharmacists playing a more prominent, clinical role in the routine management of chronic and lifestyle-related diseases, economic pressures place the National Health Service (NHS) at risk of being unable to provide appropriate quality of care. As highlighted in the white paper, the biggest risks to the nation's health include the prevalence of obesity, smoking, sexually transmitted infections, and alcohol abuse. The aging population also poses a significant threat to the provision of quality healthcare. By 2029, the UK population 65 to 74 years old will increase by an estimated 40%, and the population 75 to 84 years old will increase by 50%. Over the same period, the proportion of people over 85 years old is expected to double, placing the greatest burden on the healthcare sector.2

There are approximately 12,500 community pharmacies across the United Kingdom.3 With increasing frequency, these pharmacies are offering a range of extended services, such as smoking cessation clinics, cholesterol and blood-pressure testing, and screening for chlamydia.4 Services such as these, coupled with community pharmacists providing sound advice on lifestyle choices, aim to tackle the biggest risks to public health and ease the burden on the healthcare sector as a whole in the near future. To prepare future graduates for the challenges facing modern practicing pharmacists, however, undergraduate training has to adapt accordingly.

The UK undergraduate master's degree in pharmacy is a 4-year university-based degree, followed by a 1-year vocational “preregistration” year, after which students take a set of examinations. Candidates who have displayed competence in practice and pass the examinations are able to work as pharmacists in the United Kingdom.

The undergraduate program can be divided into 4 disciplines: pharmaceutical chemistry, pharmaceutics, pharmacology, and pharmacy practice. Traditionally, the emphasis in undergraduate pharmacist training has been on science, with pharmacology, pharmaceutical chemistry, and pharmaceutics accounting for the majority of the academic content. Because the pharmacist's role has predominantly been dispensing medications prescribed by the doctor and advising on their administration, this theoretical bias was adequate to prepare undergraduates for a career in the field. While a solid foundation in the scientific basis of medicines and the way in which they interact with the body remains paramount to any pharmacist's training, the evolution of the pharmacist as a clinician warrants a concomitant change in the emphasis placed on pharmacy practice in undergraduate degree programs.

Research by The Royal Pharmaceutical Society of Great Britain (RPSGB), in conjunction with the University of East Anglia, has highlighted the poor correlation between academic achievement and performance during the preregistration year.5 To meet the necessary high standards of professional practice, the RPSGB advocates the inclusion of competency-based learning and assessment in the form of objective structured clinical examinations (OSCEs), alongside traditional methods of assessment such as written examinations. However, OSCEs should take into account that the measure of competence is contextual and that the assessment of competence (ie, what the student is able to do under examination conditions) should ideally reflect what the student will habitually do when not being observed.6

The master of pharmacy (MPharm) degree at the University of Hertfordshire (UH) integrates the clinical and scientific aspects of a pharmacist's training from day 1 of the program. Students are co-taught with nurses and paramedics in several modules to put their clinical role in the context of actual practice. The program incorporates experiential learning through weeklong clinical practice placements in each year of study, ensuring that each student gains hands-on experience in a clinical environment at every stage of the degree. UH also addresses the need for clinical training by including formative and summative OSCEs at all levels of the program.

OSCEs were first introduced in the 1970s as training tools and as a means of assessing the practical skills that medicine and nursing students would need in their future professions.7 OSCEs are intended to assess whether students are competent as practicing professionals by using multiple OSCE stations. Each station presents a different scenario designed to test a range of clinical competencies and takes between 5 and 15 minutes to complete. Stations are categorized as manned or unmanned. Manned stations require students to interact with an assessor, who subsequently awards grades based on their performance. Unmanned stations involve submission of a written report, which is then assessed according to a predetermined grading scheme. The use of multiple OSCE stations leads to an increase in student performance, with the optimum number being around 15 stations (or learning outcomes) per phase of assessment. This may be because the use of multiple stations removes reliance on a student's familiarity with a single case study to pass the competency tests.8 Increasing the number of learning outcomes beyond 15 does not correlate with an increase in performance but significantly increases the cost of assessment.9-11

The relative merits of manned versus unmanned stations, the potential for examiner bias at manned stations, and the economic cost institutions incur when running OSCEs are all important factors to consider before using OSCEs as an assessment and teaching tool.12 Given the current economic climate and proposed cuts to funding for UK higher-education institutions, it is pertinent to analyze how the inclusion of OSCEs has thus far impacted undergraduate training at UH.

METHODS

All data used in this investigation were taken from the cohort of 39 students who graduated from the UH School of Pharmacy in the summer of 2009 and began their preregistration placements. At the time of writing, this was the only cohort of students to have graduated from the school and the only complete set of data available. In order to preserve anonymity, each student was assigned a number from 1 through 39.

OSCEs had been an integral part of these students' undergraduate training at all levels of the program. The research described below investigates whether there is a correlation between the students' success with OSCEs and their success in other aspects of the course.

The format of the OSCE examinations at UH is the same at every level of the program; however, the complexity and clinical content of the task associated with a given station reflect the students' level of education at the time of the assessment. Students complete 15 stations, each of which takes 5 minutes and is associated with a distinct learning outcome. The OSCEs in year 1 are formative, while the OSCEs in years 2, 3, and 4 are both formative and summative. Performance in the summative OSCEs contributes 5% to 10% toward the overall coursework grade for the specialist pharmacy-practice module at that level of study. There are no formal pass/fail grades associated with OSCEs at UH.

Because OSCEs build on academic theory, the initial hypothesis was that performance in OSCEs would correlate highly with performance in the academic modules associated with pharmacy practice. A weaker or no correlation was expected between OSCEs and aspects of the program with no pharmacy-practice content. Several reports in the literature indicate that success in competency-based assessments is related to an undergraduate's experience in clinical situations.10,12,13 Thus, as a cohort, students were expected to perform better in their final-year OSCEs than in first-year examinations. The null hypotheses were that there would be no correlation between OSCEs and pharmacy-practice examinations and that there would be no improvement in individual OSCE grades as the student progressed through the program.

The following examination scores for the students were identified:

  • Overall OSCE grade attained by the student in the year 1 assessment.

  • Overall OSCE grade attained by the student in the year 3 assessment.

  • Overall OSCE grade attained by the student in the final (year 4) assessment.

  • The grade attained by the student on the examination for the year 3 Medicines and Pharmacy Practice course.

In the Medicines and Pharmacy Practice 3 (MPP3) course, students evaluated a range of clinical and therapeutic data to develop skills for supporting patients with diverse medical conditions. The course also developed an understanding of the role of the pharmacist as an allied health professional and covered certain aspects of social pharmacy (eg, how social pharmacy has informed patient-focused pharmacy practice).

The data for OSCEs in year 2 of the program were incomplete and thus were omitted from this analysis. Pearson's correlation coefficient (r) is a measure of the strength of the linear association between 2 variables; the relationship can be visualized by plotting the 2 data sets against one another in a scatter plot. For these investigations, the chronological order in which examinations took place was used to determine whether the data were dependent or independent: the examination occurring first chronologically was always taken as the dependent data, as it would inform the student's success on subsequent examinations.
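
For reference, Pearson's coefficient for paired observations (x_i, y_i), i = 1, ..., n, is the standard formula:

\[
r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}
\]

where \bar{x} and \bar{y} are the means of the 2 grade sets. Note that r is symmetric in x and y, so the dependent/independent designation described above does not affect its value.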

All values for r were calculated using the correlation option in the Data Analysis ToolPak within Microsoft Excel 2007, which was also used to generate the graphs depicting the correlations. A Pearson correlation coefficient was calculated for the year 3 OSCE grade (dependent) and the MPP3 examination grade (independent).
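
The same calculation can be reproduced outside Excel. The following is a minimal sketch (not the authors' code) using Python's scipy; the grade lists are hypothetical stand-ins for the 39 anonymized students:

    # Minimal sketch of the correlation analysis described above.
    # Grades are hypothetical placeholders, not the study data.
    from scipy import stats

    osce_year3 = [62.0, 71.5, 55.0, 68.0, 49.5]  # year 3 OSCE grades (%)
    mpp3_exam = [64.0, 70.0, 58.5, 66.0, 54.0]   # MPP3 examination grades (%)

    # pearsonr returns the coefficient r and a two-sided p-value;
    # r**2 gives the coefficient of determination reported in Figure 1.
    r, p_value = stats.pearsonr(osce_year3, mpp3_exam)
    print(f"r = {r:.2f}, r^2 = {r**2:.2f}, p = {p_value:.2g}")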

RESULTS

Although the initial hypothesis predicted a strong correlation between student grades on the year 3 OSCE and attainment in the related pharmacy-practice module for that year, Medicines and Pharmacy Practice 3, the correlation data showed no such relationship (Figure 1). For a strong linear correlation, we would have expected a Pearson correlation coefficient of r ≥ 0.8. The actual value is 0.6, which, at best, describes only a moderate correlation between the 2 data sets. The mean grades and standard deviations for both assessments (OSCE 62.9% ± 10.9%; MPP3 63.7% ± 9.0%) showed no great disparities between the data sets, and hence the statistical model used was appropriate. The difference between the 2 assessments becomes evident when they are mapped onto the levels described by Miller's Pyramid of Competence.15

Figure 1. Scatterplot showing the correlation between year 3 objective structured clinical examination (OSCE) grades and the grades attained for MPP3. Mean grade for OSCE = 62.9% ± 10.9%; mean grade for MPP3 examination = 63.7% ± 9.0%. Pearson correlation coefficient r = 0.6; r² = 0.4; p = 1.1 × 10⁻⁵.

Students' OSCE grades are expected to increase as their experience with clinical situations increases; therefore, improvements in OSCE grades from year to year should be observed. Comparing the relative performance of the students in year 1 and year 3 OSCEs, 100% of the students performed better in year 3 (Figure 2). Comparing the results at year 1 and year 4 (final year) shows that only 80% attained higher grades in their final-year OSCEs than in their year 1 examinations (Figure 3).

Figure 2. Line graph showing the trends in individual student performance in objective structured clinical examinations (OSCEs) at year 1 (circles) and year 3 (diamonds). The average grade for the year 1 OSCE was 50.6% ± 10.7%; the average grade for the year 3 OSCE was 62.9% ± 10.9%.

Figure 3. Line graph showing the trends in individual student performance in objective structured clinical examinations (OSCEs) at year 1 (circles) and year 4 (triangles). The average grade for the year 1 OSCE was 50.6% ± 10.7%; the average grade for the year 4 OSCE was 55.6% ± 11.4%.

DISCUSSION

These figures suggest that the bias toward inflated OSCE grades observed among faculty members elsewhere13 does not hold at UH. The lack of a strong correlation reflects 2 possible scenarios: some students performing better in their OSCEs than on their MPP3 examination, and some students performing better on the MPP3 examination than in their OSCEs.

Analysis of the correlation plot (Figure 1) shows instances of both scenarios. Students whose scores were significantly above the trend line performed better on their MPP3 examinations than in their OSCEs; those below the trend line performed better in their OSCEs than on their MPP3 examination. These differences may be attributable to the different skill sets the OSCEs and the MPP3 examination were designed to test.

The MPP3 examination is a traditional written examination consisting of a series of multiple-choice and essay questions. It primarily assesses the lowest levels of Miller's pyramid: “knows” and “knows how.” While OSCEs also can be used to assess the 2 lower levels of the pyramid,16 their strength as an assessment tool is that they also can assess the higher levels (“shows how” and, to some extent, “does”), which are the most important for gaining an understanding of actual clinical competence. Although there is evidence to support the premise that a strong knowledge base will improve clinical competence,16 a correlation between the MPP3 examination and year 3 OSCE performance should not be expected because the assessments measure 2 distinct capabilities.

The Cambridge Model17 for assessing competence expands on Miller's model of competency by examining the top 2 levels of Miller's Pyramid in greater detail. This model, which accounts for the influence of individual-specific environment and circumstances on competency and performance, is useful when rationalizing the results of the statistical analysis.

Students may perform poorly in OSCEs relative to their performance on the MPP3 examination because of an inability to cope with the stressful nature of the multiple-station assessment. Moving from one station to another at 5-minute intervals may not give some students enough time to recover from a poor performance at a previous station, resulting in a poor overall performance. Several studies show that poor performance on an OSCE examination is not indicative of a student's lack of competence.18-20 Eighteen students performed markedly better on their written MPP3 examination than in their OSCEs (Figure 1). This finding may be attributable to their ability to handle the stress of a more familiar written examination compared with that of the relatively unfamiliar OSCEs (ie, an influence relating to the system). Another possible factor is the importance associated with passing the MPP3 examination, which is required for progression to the final year of the undergraduate degree program. In contrast, passing the OSCEs is not crucial for progression (ie, students may perform poorly in this component but still progress because of credit earned elsewhere). The knowledge that passing OSCEs is not critical to progression may affect student attitudes toward the assessment (an influence relating to the individual), causing some not to prepare as thoroughly for the OSCEs as they would for the MPP3 examination. While there is anecdotal evidence that this hypothesis holds true for a small number of students, further research is needed before concrete conclusions can be drawn regarding the impact of this attitude on performance.

As studies have shown that performance in OSCEs is not necessarily an indicator of competence,18-20 we feel that student success in this individual element need not be crucial for progression toward a degree. This is largely because poor performance resulting from external factors, as opposed to lack of ability, could hinder an otherwise capable student from moving on to the next stage of the program. This problem could be ameliorated by running summative OSCEs more than once during any given year, as performance over a series of OSCEs would enable staff members to assess the competence of individual students more accurately. However, it is not feasible to give individual students multiple attempts at an OSCE in any single year because of the resource-intensive nature of these examinations. Given the cost of OSCEs and their inconsistency in accurately measuring clinical competence, an argument could be made for eliminating them from the program as a means of assessment, even though doing so would mean relying solely on student performance in traditional examinations (eg, MPP3) as a metric for progression. This research, however, shows that the correlation between performance in traditional examinations and performance in OSCEs is at best moderate, suggesting that the assessments examine different skills. Hence, we advocate the continued use of traditional examinations in conjunction with OSCEs in training pharmacy students.

There is a general expectation that an increase in experience will lead to a concomitant increase in clinical ability and, as a consequence, in OSCE marks.10,12,13 To assess the extent to which this assumption held for the 2009 cohort, we compared the relative achievements of individuals in OSCEs at years 1, 3, and 4 of the undergraduate program. Although increased experience resulted in improved OSCE marks for the vast majority, this did not hold true for 20% of the cohort, despite their improvements at year 3. Further investigation revealed that the final-year OSCEs took place at the same time that the final-year project dissertation was due, a prime example of system-related influences impacting student ability to perform in clinical examinations. This explanation is supported in part by the comparison of year 1 with year 3, where there were no such conflicts and 100% of students improved on their first-year mark. Other possible factors affecting year 4 performance include personal external pressures experienced by individual students and the increased complexity of the year 4 OSCEs. The relatively small contribution of the year 4 OSCEs toward the overall coursework grade also may have made them seem unimportant to a small number of students, which may have affected how much effort those students invested in preparing. While there is anecdotal evidence to support this hypothesis, more research is needed before definitive conclusions can be reached regarding the impact of attitude toward OSCEs on performance in them.

CONCLUSIONS

The findings of this study do not support the hypothesis that students who perform well on examinations covering the theoretical aspects of pharmacy practice will also perform well in the clinical aspects. This may be attributed to the examinations assessing different areas of expertise, according to Miller's Pyramid of Competence, and as such a strong correlation should not be expected. This conclusion lends credence to the argument that OSCEs are not only an invaluable tool for assessing clinical competency, which cannot be gauged merely by examining academic ability, but also an important methodology for preparing undergraduates for clinical practice. The research also supports the argument that success in OSCE examinations is generally proportionate to a candidate's level of clinical experience, although individual performance may be negatively influenced by constraints placed on individuals by the system.

ACKNOWLEDGMENTS

The authors thank Miss Reshma Patel for her assistance in collating the raw data, and Drs. Richard O'Neill and Andrzej Kostrzewski for providing helpful comments with respect to the manuscript.

  • Received February 21, 2011.
  • Accepted April 15, 2011.
  • © 2011 American Association of Colleges of Pharmacy

REFERENCES

  1. Department of Health. Pharmacy in England: building on strengths – delivering the future. 2008. http://www.official-documents.gov.uk/document/cm73/7341/7341.pdf. Accessed May 27, 2011.
  2. Office for National Statistics. Population trends. 2008. http://www.statistics.gov.uk/downloads/theme_population/Population_Trends_131_web.pdf. Accessed May 27, 2011.
  3. PharmaLife. Why choose community pharmacy? 2010. http://www.pharmalife.co.uk/npa/community_pharmacy.php. Accessed May 27, 2011.
  4. Hepler CD, Strand LM. Opportunities and responsibilities in pharmaceutical care. Am J Hosp Pharm. 1990;47(3):533-543.
  5. Wright D, Loftus M, Christou M, Eggleton A, et al. RPSGB – healthcare professional education and training: how does pharmacy in Great Britain compare? Royal Pharmaceutical Society of Great Britain; 2006.
  6. Epstein RM. Assessment in medical education. N Engl J Med. 2007;356(4):387-396.
  7. Harden RM, Stevenson M, Downie WW, et al. Assessment of clinical competence using an objective structured examination. Br Med J. 1975;1(5955):447-451.
  8. Wass V, McGibbon D, Van der Vleuten C. Composite undergraduate clinical examinations: how should the components be combined to maximize reliability? Med Educ. 2005;35(4):326-330.
  9. Austin Z, O'Byrne C, Pugsley J, et al. Development and validation processes for an objective structured clinical examination (OSCE) for entry-to-practice certification in pharmacy: the Canadian experience. Am J Pharm Educ. 2003;67(3):Article 76.
  10. Quero Munoz L, O'Byrne C, Pugsley J, et al. Reliability, validity, and generalisability of an objective structured clinical examination (OSCE) for assessment of entry-to-practice in pharmacy. Pharm Educ. 2005;5(1):33-43.
  11. McRobbie D, Fleming G, Ortner M, et al. Evaluating skills and competencies of pre-registration pharmacists using objective structured clinical examinations (OSCEs). Pharm Educ. 2006;6(2):133-138.
  12. Corbo M, Patel JP, Abdel Tawab R, et al. Evaluating clinical skills of undergraduate pharmacy students using OSCEs. Pharm Educ. 2006;6(1):53-58.
  13. Sloan DA, Donnelly MB, Schwartz MD, et al. The objective structured clinical examination. Ann Surg. 1995;222(6):735-742.
  14. Rodgers JL, Nicewander WA. Thirteen ways to look at the correlation coefficient. Am Stat. 1988;42(1):59-66.
  15. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9 Suppl):S63-S67.
  16. Wilkinson TJ, Frampton CM. Comprehensive undergraduate medical assessments improve the prediction of clinical performance. Med Educ. 2004;38(10):1111-1116.
  17. Rethans JJ, Norcini JJ, Barón-Maldonado M, et al. The relationship between competence and performance: implications for assessing practice performance. Med Educ. 2002;36(10):901-909.
  18. Ernesto L. Clinical skills assessment: limitations to the introduction of an “OSCE” (objective structured clinical examination) in a traditional Brazilian medical school. Sao Paulo Med J. 2004;122(1):12-17.
  19. Major DA. OSCEs – seven years on the bandwagon: the progress of an objective structured clinical examination programme. Nurse Educ Today. 2005;25(6):442-454.
  20. Hodges B. Validity and the OSCE. Med Teach. 2003;25(3):250-254.