Research Article | Instructional Design and Assessment

Intergroup Peer Assessment in Problem-Based Learning Tutorials for Undergraduate Pharmacy Students

Vicky S. Kritikos, Jim Woulfe, Maria B. Sukkar and Bandana Saini
American Journal of Pharmaceutical Education May 2011, 75 (4) 73; DOI: https://doi.org/10.5688/ajpe75473
University of Sydney, Australia

Abstract

Objective. To develop, implement, and evaluate a process of intergroup peer assessment and feedback using problem-based learning (PBL) tutorials.

Methods. A peer-assessment process was used in a PBL tutorial setting for an integrated pharmacy practice course in which small groups of students graded each other's PBL case presentations and provided feedback in conjunction with facilitator assessment.

Assessment. Students' quantitative and qualitative perceptions of the peer assessment process were triangulated with facilitator feedback. Students became more engaged, confident, and motivated, and developed a range of self-directed, life-long learning skills. Students had mixed views regarding the fairness of the process and grade descriptors. Facilitators strongly supported the peer assessment process.

Conclusions. Peer assessment is an appropriate method for assessing PBL skills and was endorsed by students as useful.

Keywords
  • pharmacy students
  • peer assessment
  • problem-based learning

INTRODUCTION

The academic and practice components of pharmacy courses are increasingly geared toward developing therapeutic expertise as well as skills in critical thinking, problem solving, teamwork, reflection, and negotiation.1 This change in direction is driven largely by a shift in the professional practice of pharmacy over the last 20 years from traditional product supply toward the provision of primary care services, including patient education, medication and lifestyle management, health promotion, disease monitoring, screening, and prevention.2-4 Because the delivery of these services requires effective interdisciplinary cooperation,5 pharmacists must develop skills that foster interprofessional relationships. To adequately equip students with the diverse skills required for pharmacy practice today, traditional models of didactic teaching are being replaced with student-centered and group-based teaching methods, such as problem-based learning.6-8 This changing scope of pharmacy practice is reflected in the Accreditation Standards and Guidelines for the Professional Program in Pharmacy Leading to the Doctor of Pharmacy Degree, which stipulate that students should be encouraged to participate in the education of others, including fellow students and healthcare providers.9

PBL fosters a deep approach (ie, not just surface learning) to learning and promotes self-directed, life-long learning skills.10 It encourages active learning and collaboration between students and provides a context designed to promote internal motivation through the provision of pragmatic goals. As PBL emphasizes the development of proficiency in the “real-time” resolution of clinical problems, it would be appropriate for the assessment of student skills, processes, and attitudes to take place in tutorials at the same time that the problem is presented and solved rather than by means of formal examinations and tests conducted much later.11 However, assessment of student progress within PBL tutorials has remained a challenge because more traditional forms of assessment are not aligned with and do not readily assess what is being learned in PBL tutorials.11

Peer assessment in higher education is a process whereby students engage with criteria and standards and apply them to evaluate the work of their peers.12-15 The process can be formative or summative and also can include qualitative feedback relating to the grading criteria used rather than a quantitative focus on the actual grade.13 Peer assessment can occur in the context of individual or group work, the latter taking 1 of 3 forms: intragroup, wherein each member of a group rates the performance or contribution of the other individual group members to the shared product; intergroup, wherein 1 or more members in a group rate the performance or product of another group; and extragroup, wherein individuals who are not group members assess the performance or product of 1 of the groups.13

Peer assessment provides a powerful avenue for students to receive feedback on their learning.12-27 In the context of group work, peer assessment improves student learning and increases confidence in future collaborative work by contributing to the development of a variety of skills, such as self-directed learning, critical reasoning, reflection, negotiation, professional judgment, teamwork, and self-awareness.17-24 Peer assessment also can benefit teaching staff members by reducing their workload,13 providing new insights into student learning processes,24 and encouraging staff members to provide greater transparency regarding assessment objectives and grading criteria.25,27 Given that both peer assessment and PBL focus on group collaboration and share key objectives and philosophies, peer assessment seems an appropriate evaluative process for the PBL tutorial setting.

PBL techniques are used in 18 of the 48 credit hours that students must attain during the final year of the bachelor of pharmacy (BPharm) program at the University of Sydney, Australia. Assessing learning in these PBL courses has been a challenge. Problems posed to senior students working in small groups are usually highly complex, often presenting incomplete data as in real life; involve many interrelated factors, such as pathology results, polypharmacy, psychosocial determinants of medication use, and prescribing or medication use errors; and often have more than 1 reasonable solution or approach. In this cohort, peer assessment was considered an innovative method of assessing higher-order learning in PBL tutorials.

Although peer assessment by small groups has been applied in different settings encompassing a diversity of study designs,28 no previous study has investigated the use of intergroup peer assessment within the PBL setting, particularly in pharmacy undergraduate curricula. Whereas other studies have examined small groups' assessment of a process, group assessment of a discrete product or performance remains to be studied.13 This study aimed to apply the peer-assessment process in a PBL tutorial setting in which small groups of students grade each other's PBL case presentations and provide feedback in conjunction with facilitator assessment. The specific objectives of this study were to implement and evaluate a process of intergroup peer assessment and feedback in the PBL tutorial setting. It was hypothesized that students undertaking PBL tutorials would be able to understand and engage in group peer assessment and the PBL process.

DESIGN

At the University of Sydney, Australia, all BPharm students take Integrated Pharmacy Practice, a 12 credit-hour course, in the first semester of their fourth year. An overview of this course is provided in Table 1. Integrated Pharmacy Practice integrates 3 components: clinical chemistry, experiential learning, and applied therapeutics. Applied therapeutics is delivered through a mix of lectures and PBL tutorials. Within each PBL tutorial, students work in 2 groups of 6 to 8 students and undertake two 2-hour sessions of PBL tutorial time each week throughout a 13-week semester. The structure of the PBL tutorials and cases is described in Figure 1. Working in a collaborative environment within their small groups, students analyze a case, formulate hypotheses, identify issues in the management of the patient's disease, and make recommendations for addressing the identified issues. The whole group carries out the PBL tasks each week and, on an alternating weekly basis, half the group is responsible for giving the corresponding 12- to 15-minute clinical case presentation. Conventionally, the facilitator-assessed clinical case presentations account for 20% of the final grade for the course.

Table 1.

Overview of the Integrated Pharmacy Practice Course

Figure 1.

PBL Case Structure.

In recent years, facilitators have observed that students were passive and uninterested in their peers’ presentations. Facilitators of the PBL cases felt it was necessary to develop methods to keep students motivated and engaged during clinical case presentations, as these presentations are not only part of the overall assessment but also a vehicle for further learning, especially regarding alternative approaches to case management. The facilitators determined that 1 possible way to reduce this passivity and lack of interest would be to actively engage students in the process by requiring them to assess the clinical case presentations using established criteria, just as the facilitators do. This would allow immediate and transparent feedback both for the presenters and for the peers assessing the clinical case presentation.

These observations provided the impetus for the current study, which was conducted during the first semester of 2009 within the Integrated Pharmacy Practice course. Ethics approval to conduct the study was obtained from the Human Research Ethics Committee of the University of Sydney (HREC Approval Number 11707).

Subgroup clinical case presentations were assessed by all members of the other group. Peer assessment was done in conjunction with facilitator assessment for cases 3 to 8, thus accounting for 15% of each student's final course grade (Table 1).

Assessment Criteria

Assessment criteria and grade descriptors were developed for use in peer evaluations of clinical case presentations based on: (1) an extensive review of the literature on peer assessment and clinical reasoning skills, (2) academic staff and clinical practitioner teachers’ experience in conducting PBL in the same course, and (3) input by a panel of experts from the Faculty's education unit and the University of Sydney's Institute of Teaching and Learning. The grading criteria (Appendix 1) assessed all domains of Bloom's Taxonomy of Learning (cognitive, affective, and psychomotor)29 and were framed along 4 key areas of assessment: clinical reasoning skills (cognitive), reflection on practice (cognitive/affective), teamwork (affective), and presentation (psychomotor). Grade descriptors were detailed and established standards for a clinical case presentation assessment of high distinction (>85%), distinction (75% to 84%), credit (65% to 74%), pass (50% to 64%) and below pass (<50%) levels (Appendix 2).
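
For readers implementing a similar scheme, the percentage bands translate directly into a threshold lookup, as in the brief Python sketch below (purely illustrative). Because the published bands (>85%, 75% to 84%, and so on) leave exact boundary values such as 85% unassigned, resolving them upward is our assumption, not a rule from the course.

    # Minimal sketch: map a presentation score (percentage) to the grade
    # descriptor bands above. Cutoffs follow the published bands; assigning
    # a score of exactly 85% to high distinction is an assumption, since
    # the published bands (>85%, 75%-84%) leave that boundary unassigned.
    def grade_band(percent: float) -> str:
        if percent >= 85:
            return "high distinction"
        if percent >= 75:
            return "distinction"
        if percent >= 65:
            return "credit"
        if percent >= 50:
            return "pass"
        return "below pass"

    assert grade_band(88) == "high distinction"
    assert grade_band(63) == "pass"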

Training in Peer Assessment

Eleven facilitators who were unfamiliar with peer assessment were trained by Institute of Teaching and Learning experts to facilitate peer assessment using the developed criteria and grade descriptors within the PBL component of the course. Facilitators were mostly experienced in teaching the PBL component in this course and were asked to review and comment on the grading criteria and descriptors prior to implementation. During the first week, the peer assessment process was explained to students in an introductory lecture, the use of the assessment criteria and grade descriptors was demonstrated, and the students were asked to give their consent to participate during the first PBL tutorial. All materials (grade descriptors, assessment criteria, and clinical case presentation examples) were posted on the course e-Learning Web site. The first 2 PBL tutorials were devoted to a practice case (case 0, which did not contribute toward the final assessment for the course), during which students were initiated into the peer assessment process, provided with tips on how to give constructive peer feedback by their PBL facilitators, and provided with a solved template example of what would be expected in their clinical case presentations. The tips on constructive feedback were based on published references, which were made available to students in their course handbook.30,31 Students were instructed on how to use the grading criteria and feedback form and encouraged to write constructive comments in the space provided. Over the following 2 weeks, facilitators led clinical case presentation assessments for cases 1 and 2, which exemplified the use of the grading criteria and feedback form.

Peer-Assessment Process Design

For cases 3 through 8, students assessed clinical case presentations delivered by student members of the other group. In an attempt to eliminate individual bias, students were asked to provide peer assessment as a group rather than as individuals. After each presentation, students were given 10 minutes as a group to negotiate and agree on a final grade for the clinical case presentation delivered by their peers. Facilitators independently assessed the clinical case presentations at the same time. After both presentations were delivered and assessed, facilitators allowed each group an additional 5 minutes to provide reciprocal verbal feedback based on their written comments. This was followed by facilitator feedback and case debriefing. The students' assessment was required to be in grade format following the grading criteria form, ie, the assessment result was given by peers as credit, pass, etc, as opposed to numerical marks such as 1 out of 5 or 2 out of 5 (Appendix 2). When grades awarded by peers were not consistent with the facilitator's grade, peers were required to justify the grade they assigned and negotiate with the facilitator about the final grade. If the students could not clearly justify their grade, the facilitator was allowed to override it with one that was justifiable. Grade override was an exceptional measure and had to be brought to the attention of the course coordinator (B.S.). All group members received the same grade, unless otherwise determined by their assessing peers. Grades awarded were converted by facilitators into marks for each presenting member of either group.
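
The rule facilitators used to convert band grades into numerical marks is not specified in the text; the Python sketch below illustrates one plausible, entirely hypothetical conversion via band midpoints, scaled to a 20-mark component.

    # Hypothetical conversion of a peer-awarded band grade into a mark for
    # a 20-mark component. The midpoints and the scaling rule are
    # illustrative assumptions only; the course's actual conversion rule
    # is not stated in the paper.
    BAND_MIDPOINT = {
        "high distinction": 0.92,  # midpoint of 85%-100%
        "distinction": 0.80,
        "credit": 0.70,
        "pass": 0.57,
        "below pass": 0.25,
    }

    def band_to_mark(band: str, component_max: float = 20.0) -> float:
        return round(BAND_MIDPOINT[band] * component_max, 1)

    print(band_to_mark("distinction"))  # 16.0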

EVALUATION AND ASSESSMENT

All processes were implemented according to plan with 16 PBL groups (total N = 235) and 11 facilitators. No occasions in which facilitators had to override peer-assessed grades were reported to the coordinator, nor were any other complaints made to the coordinator by either students or facilitators during the semester.

Student Peer Assessment Feedback Questionnaire

Students were invited to complete a voluntary, anonymous 7-item questionnaire in the final week of the semester to assess their perceptions of peer assessment (6 items) and satisfaction with the group work (1 item). These 7 items were adapted from a measure originally developed by Gatfield32 and used a 5-point Likert scale to record responses (1 = strongly agree, 5 = strongly disagree; or 1 = extremely satisfied, 5 = extremely dissatisfied). To these 7 items the research team added an item that assessed respondents’ perceptions of the grading criteria on a 5-point Likert scale (1 = extremely easy to use, 5 = extremely difficult to use), and 2 open-ended questions that allowed respondents to provide information about the assessment and feedback process. A section at the end of the questionnaire requested respondents’ gender, age group, nationality, hours worked in pharmacy per week over the past year, and prior experience in peer assessment of individual and group work.

All data collected were deidentified. Quantitative data analyses were performed using SPSS, Version 17. Mean ratings for each item in the questionnaire were calculated. Likert scale responses 1 and 2 (strongly agree and agree) and responses 4 and 5 (disagree and strongly disagree) were combined for each item, and the proportion of student responses to each item was calculated. Exploratory analyses were undertaken using Student t tests for continuous variables. A 2-tailed, 5% (0.05) level of significance was used for all statistical procedures.
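
Although the original analyses were run in SPSS, the per-item summaries described here are straightforward to reproduce elsewhere; the Python sketch below uses hypothetical responses for a single item (item-level study data are not published).

    # Sketch of the per-item questionnaire summaries described above, using
    # hypothetical Likert responses (1 = strongly agree ... 5 = strongly
    # disagree). The study's actual analyses were run in SPSS, Version 17.
    import numpy as np

    responses = np.array([1, 2, 2, 3, 1, 4, 2, 5, 2, 1])  # hypothetical item data

    mean_rating = responses.mean()
    agree = np.isin(responses, [1, 2]).mean()     # responses 1 and 2 combined
    disagree = np.isin(responses, [4, 5]).mean()  # responses 4 and 5 combined

    print(f"mean {mean_rating:.1f}; agree {agree:.0%}; disagree {disagree:.0%}")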

Of the 235 students invited to participate in the survey, 220 returned a completed questionnaire (94% response rate). Sixty-four percent of the respondents were female, 95% were aged 21 to 25 years, and 90% were born in Australia. Eighty-two percent had previously peer-assessed either individual student work or group work. Participating students had worked a mean of 9.4 ± 6.5 (SD) hours per week in pharmacy over the previous year. No correlations were found between age, gender, nationality, or work experience and perceptions of peer assessment on the questionnaire items. The majority of respondents (96%) indicated that they understood the peer-assessment process (item 1), and more than 70% agreed that it is an appropriate group assessment method (item 2) and that students should assess their peers (item 3). Mean ratings across questionnaire items 1 through 3 (Table 2) show a positive acceptance of the peer-assessment process by students. In contrast, less than 50% agreed that the peer-assessment process is a fair way to divide grades (item 4), that grades are a fair reflection of students’ efforts (item 5), and that peers are capable of assessing fairly (item 6). Mean ratings across items 4 to 6 indicate that, on average, students held an approximate level of neutrality (ie, a score of 3 on the 5-point Likert scale). Eighty-two percent of students were satisfied with the group work process (mean rating 2.1 ± 0.7). Seventy percent of students found the grading criteria extremely easy or easy to use in grading their peers’ clinical case presentations (mean rating 2.3 ± 0.8), with only 18% indicating ambivalence about the ease of use.

Table 2.

Student Mean Ratings and Level of Agreement to Items Regarding Peer Assessment of Clinical Case Presentations (n=220/235)

Peer Assessment Marks Versus Facilitator Marks in Previous Year

Facilitator-assessed average course grades for the 2008 clinical case presentations of students undertaking the same course with the same structure were compared with peer-facilitator coassessed average course grades for this component of the 2009 course. This was the only element of the course that related solely to the applied therapeutics component.

The mean grades (out of 20) for the clinical case presentation component attained by students undertaking the same course with the same structure in 2008 (facilitator assessment: 17.1 ± 2.1, n = 230) and in 2009 (peer-facilitator coassessment: 17.3 ± 1.3, n = 235) showed no significant difference between the 2 cohorts’ average class marks for this component of the course (P = 0.22).
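
Because a two-sample t test is fully determined by group means, standard deviations, and sizes, the reported comparison can be checked directly from these summary statistics. A minimal sketch follows (assuming a Welch-type unequal-variance test; the paper does not state which variant was used):

    # Recomputing the 2008 vs 2009 comparison from the summary statistics
    # reported in the text. Welch's unequal-variance test is an assumption;
    # a pooled-variance test gives essentially the same result.
    from scipy import stats

    t, p = stats.ttest_ind_from_stats(
        mean1=17.1, std1=2.1, nobs1=230,  # 2008, facilitator assessed
        mean2=17.3, std2=1.3, nobs2=235,  # 2009, peer-facilitator coassessed
        equal_var=False,
    )
    print(f"t = {t:.2f}, p = {p:.2f}")    # p = 0.22, matching the reported value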

Qualitative Feedback

To gauge levels of satisfaction with the peer-assessment process and to understand whether students felt it had helped them engage with their peers’ presentations, qualitative methods were employed to explore the perceptions and observations of both students and facilitators. At the end of the semester, students were invited to attend a focus group session designed to elicit further comments about the peer-assessment activity. A focus-group session guide used an open-ended approach to query students about the overall peer-assessment experience, what components were liked or disliked, whether the process should be retained in future courses, and, if retained, how it could be improved. Focus groups were facilitated by 1 of the researchers. PBL facilitators were invited to attend a debrief session facilitated by 2 members of the research team; feedback was sought regarding their overall teaching experience in the PBL tutorials, their specific experiences and observations in implementing the peer-assessment process, and suggestions to change or improve the process.

Thirty students (13% of the total class) participated in 4 focus groups at the end of the semester. Seven of the 16 PBL tutorial classes were represented by these 30 students. Although nonrespondents may have held different views, focus group students provided both positive and negative feedback, and content analyses of the focus group conversations revealed a saturation of ideas and feedback.

The focus group sessions were tape-recorded, transcribed, and thematically summarized. To ensure consistency, coding of the focus group transcripts, session notes from student sessions, and qualitative commentary from the questionnaires were undertaken independently by 2 researchers.

The focus group participants stated that the peer-assessment process gave them a clear understanding of the standards expected of them and made subsequent learning easier. Students stated that peer assessment in PBL tutorials helped them “learn from each other” and become more engaged, attentive, reflective, analytical, critical in reasoning, confident, and self-aware. Consistent with the responses on the feedback questionnaire, the main negative aspects of the assessment process reported by students were ambivalence about the fairness of the process and lack of confidence in their own ability and the ability of their peers to assess fairly. Some focus group participants stated that the grade descriptors were too detailed and difficult to use and needed to be simplified, while others stated that they were quite useful and did not need to be improved, as “it was just a matter of becoming familiar with them.” All found the grading criteria easier to use than the grade descriptors.

Other themes obtained from focus groups indicated that the peer assessment and feedback process helped students become better team members by improving their skills in professional judgment, assertiveness, negotiation, oral presentation, leadership, engagement, time-management, and delegation. Students valued the feedback they received from peers, explaining that it was a “different kind of feedback” because their peers had researched the same information and were on the “same plane” as the students they were assessing. Negative aspects of the feedback process reported by students included that feedback was initially “taken a bit personally,” resulting in students reacting defensively, and that feedback provided toward the end of the semester became “a bit picky.”

Facilitator Feedback

At the end of the semester, the 11 facilitators participated in a debriefing session. The facilitator sessions were tape-recorded, transcribed, and thematically summarized, just as the focus group sessions were. To ensure accurate coding of the debriefing session transcripts, notes from facilitator sessions were reviewed by 2 researchers.

All facilitators rated the training provided to them highly and supported the peer-assessment process. They reported that the process kept students engaged and motivated, resulting in better-quality clinical case presentations, and that there were few episodes of bias, such as reciprocal marking and collusion between student groups. Facilitators revealed that there were few occasions of mismatch between facilitators’ and students’ assessments and that their grades often coincided exactly with those of the peer assessors. Most facilitators suggested that students should receive more guidance in the tone, style, and wording of the feedback they provided. Other suggestions for improvement included giving students more time to complete the feedback process and limiting peer feedback to qualitative feedback without grades. Most facilitators felt that the grading criteria and descriptors had worked well and may have accounted for their grades often coinciding with the peer-assessed grade. They reported that students mostly referred to the criteria rather than the descriptors when assessing.

DISCUSSION

This study is the first to investigate the use of peer assessment in the PBL tutorial setting for small groups of fourth-year pharmacy students who graded each other's presentations and provided feedback in conjunction with facilitator assessment and feedback. The characteristics of this study have been described in a framework28 originally developed by Topping,25 which covers the full scope of details relating to assessment structure and design. This framework enables researchers and educators to accurately replicate and compare our study with other studies and synthesize the results, an important step that should be part of any study reporting on peer-assessment research. The current study, which uses a posttest design, is strengthened by the use of items from previously validated questionnaires about peer assessment and by the triangulation of quantitative and qualitative data. Based on facilitator feedback, the intergroup peer assessment and feedback process was shown to be effective in reducing student passivity and lack of interest, thus addressing the problem for which it was initiated. Students reported that the peer-assessment process increased their level of confidence, motivation, satisfaction, and exposure to feedback, as well as promoted collaboration, teamwork, and a broad range of self-directed, life-long learning skills that are aligned with the PBL method's key objectives.10 No differences were found in average class grades between the facilitator-assessed and peer-facilitator coassessed cohorts for the same component of the course over 2 consecutive years. Peer assessment, therefore, is an appropriate assessment method for skills taught in PBL tutorials and works well with final-year undergraduate pharmacy students. Study participants mostly valued the experience and endorsed the appropriateness of the method; hence, the hypothesis proposed can be reasonably accepted.

We believe that the highly positive feedback from students may be a result of the carefully designed process. In this study, considerable effort was expended in training. The development of the structures to scaffold student learning about how to assess their peers’ work was time-consuming and intricate, involving didactic descriptions, case examples, and a show-and-tell technique in which facilitators demonstrated the use of grading criteria and descriptors (cases 1 and 2). This effort seems to have been well spent, given that an overwhelming majority of students understood the process well.

A unique feature of the study design was the parallel facilitator coassessment, which balanced the allocation of grades by student assessors. The success of this design feature is illustrated by the few discrepancies between peer and facilitator assessments and by the fact that the peer grade usually stood as the final grade. The easiest way to reconcile mismatches between facilitator and peer grades would be either to average the grades or to have the facilitator grade override the peer grade. However, our study used a collaborative coassessment process that involved negotiation and discussion between the students and facilitators. This provision probably resulted in fewer episodes of collusion or reciprocal marking, as reported by facilitators. Also, students were allowed to self-select into groups, making it more likely that friends would work together in the same group and less likely that students in one group would have influential relationships with students in another group.33

Other key design features of the assessment included intergroup rather than individual assessment. Intergroup grading involves an entire group of students grading another group, making the process less threatening to individuals while providing instant feedback on a delivered product. This approach is valuable because it delivers the benefits of peer assessment without necessitating elaborate methods to ensure anonymity for individuals evaluating their peers. Further, students were aware that their assessment of 6 cases was summative and accounted for 15% of the total grade for the course (facilitator-assessed cases 1 and 2 were worth 5%), which may have led students to engage actively and be diligent in their peer assessments.34,35 Knowing that each student's grade would be the same as that of the group enhanced positive interdependence, which has been associated with greater individual accountability and task ownership.36 The use of the clinical case presentation as the product to be peer-assessed was also a good choice. A study of Nigerian medical students learning pathophysiology through group clinical presentations revealed that most students found the presentations to be fun, informative, creative or innovative, and, most importantly, beneficial to their learning.37 The majority of students felt that this exercise improved their understanding of pathophysiology, taught them to research independently, and encouraged better class interactions and group learning.37 In the current study, facilitators remarked on the creativity and innovative presentation methods used by the groups, which made the process more interesting to assess. Thus, students possibly were engaged not only with the peer-assessment process but also with the idea of clinical case presentations.

Benefits of the peer assessment and feedback process reported by students are consistent with those of other related studies supporting the appropriateness of this process in intergroup settings.16-27 However, roughly a third of the students responding to the feedback questionnaire expressed concerns about the fairness of the process. This finding is consistent with those of other studies showing that students are equivocal about the fairness of the peer-assessment process because they often lack confidence in their own ability or the ability of their peers to assess fairly for a number of reasons.17 These reasons include students feeling unqualified to assess others’ work,33 finding it difficult to assign grades to their peers’ work,12 disliking a cognitively challenging assessment process,24 having difficulty being objective, tending to award higher grades to friends,38 being reluctant to award low grades to peers even when deserved,12 lacking the ability to provide constructive feedback,20 being skeptical about their peers’ ability to grade fairly,20 and questioning the value of their peers’ comments.33 In our study, inexperience or lack of confidence is the more probable reason for ambivalence or concerns about the fairness of peer assessment. This problem could be addressed by increasing students’ confidence in their ability to assess fairly through dedicating more time to the student preparation phase. For studies that already include a thorough preparatory phase, as ours did, another possible way to boost confidence would be to increase student awareness of the benefits and problems associated with peer assessment identified in this and previous investigations. Another possibility is to introduce peer assessment earlier in the pharmacy undergraduate program so that students have more opportunities over the course of their education to gain experience, master their skills, and build their confidence. Because some students reported that the grade descriptors were rather complicated to use, students could be included in the development of grading criteria.12-14,34 Future research on the peer-assessment exercise should include student feedback about how to simplify the terminology of the grade descriptors to a less academic style.

There are several potential limitations to the current investigation. This study did not use a control group design because such designs are difficult to accomplish in naturalistic research settings. To avoid respondent fatigue, the peer-assessment questionnaire was not administered both before and after the peer-assessment exercise.38 Agreement between peer and facilitator grades was assessed only qualitatively. Grading criteria and descriptors were customized specifically for the course, and students were not involved in their development. Because the Integrated Pharmacy Practice course is not a standard pharmacotherapy course, standard criteria for measuring either pharmacotherapeutic knowledge or presentation skills may not be applicable. Further, not all students attended the debriefing focus groups. These possible limitations should be addressed when implementing the peer-assessment process at other institutions. The peer-assessment process highlighted in our study can be used in any course that depends on group work and self-directed learning and in which presentations are part of the course evaluation.

CONCLUSION

A structured, quality-controlled peer-assessment process in a nonthreatening, collaborative PBL tutorial setting is an appropriate and effective assessment method for student-centered teaching approaches in pharmacy. Based on student endorsement of this process and the value of feedback from their peers, peer assessment is an appropriate method for evaluating skills taught in PBL tutorials and works well with final-year undergraduate pharmacy students. Future investigations should address students’ perceptions regarding the fairness of their peers’ assessments, provide more guidance to students on giving and receiving feedback, and simplify the grade descriptors.

ACKNOWLEDGEMENTS

We acknowledge the Teaching Improvement and Equipment Scheme of the Faculty of Pharmacy, University of Sydney, Australia, for providing financial support for this project. All participating facilitators and students are acknowledged for their time and support.

Appendix 1. Grading Criteria Form Used by Student Groups and Problem-Based Learning (PBL) Facilitators


Appendix 2. Grading Descriptors Used by Student Groups and PBL Facilitators to Assign a Relevant Grade


  • Received December 29, 2010.
  • Accepted February 22, 2011.
  • © 2011 American Association of Colleges of Pharmacy

REFERENCES

1. Blouin RA, Joyner PU, Pollack GM. Preparing for a Renaissance in pharmacy education: the need, opportunity, and capacity for change. Am J Pharm Educ. 2008;72(2):Article 42.
2. Holland RW, Nimmo CM. Transitions, part 1: beyond pharmaceutical care. Am J Health-Syst Pharm. 1999;56(17):1758-1764.
3. Roberts AS, Benrimoj SI, Chen TF, Williams KA, Aslani P. Implementing cognitive services in community pharmacy: a review of facilitators. Int J Pharm Pract. 2006;14:163-170.
4. Reid LD, Posey LM. The changing face of pharmacy. J Am Pharm Assoc. 2006;46(3):320-321.
5. Dolovich L, Pottie K, Kaczorowski J, et al. Integrating family medicine and pharmacy to advance primary care therapeutics. Clin Pharmacol Ther. 2008;83(6):913-917.
6. Marriott JL, Nation RL, Roller L, Costelloe M, Galbraith K, Stewart P, Charman WN. Pharmacy education in the context of Australian practice. Am J Pharm Educ. 2008;72(6):Article 131.
7. Hubball H, Burt H. Learning outcomes and program-level evaluation in a four-year undergraduate pharmacy curriculum. Am J Pharm Educ. 2007;71(5):Article 90.
8. Beck DE. Where will we be tomorrow? We need a 2020 vision. Am J Pharm Educ. 2002;66(2):208.
9. Accreditation Standards and Guidelines for the Professional Program in Pharmacy Leading to the Doctor of Pharmacy Degree. Chicago, IL: Accreditation Council for Pharmacy Education; 2006. http://www.acpe-accredit.org/standards/default.asp. Accessed October 10, 2010.
10. Kelson AC, Distlehorst LH. Groups in problem-based learning (PBL): essential elements in theory and practice. In: Everson DH, Hmelo CE, eds. Problem-Based Learning: A Research Perspective on Learning Interactions. Mahwah, NJ: Lawrence Erlbaum Associates; 2000.
11. Eva KW. Assessing tutorial-based assessment. Adv Health Sci Educ. 2001;6(3):243-257.
12. Falchikov N. Peer feedback marking: developing peer assessment. Innovat Educ Teach Int. 1995;32(2):175-187.
13. Falchikov N. Involving students in assessment. Psych Learn Teach. 2003;3(2):102-108.
14. Falchikov N. Group process analysis: self and peer assessment of working together in a group. Educ Tech Train Int. 1993;30:275-284.
15. Boud D, Cohen R, Sampson J. Peer learning and assessment. Assess Eval Higher Educ. 1999;24(4):413-426.
16. Dochy F, Segers M, Sluijsmans D. The use of self-, peer and co-assessment in higher education: a review. Stud Higher Educ. 1999;24(3):331-350.
17. Ballantyne R, Hughes K, Mylonas A. Developing procedures for implementing peer assessment in large classes using an action research process. Assess Eval Higher Educ. 2002;27(5):427-441.
18. Vickerman P. Student perspectives on formative peer assessment: an attempt to deepen learning? Assess Eval Higher Educ. 2008;1:1-10.
19. Papinczak T, Young L, Groves M. Peer assessment in problem-based learning: a qualitative study. Adv Health Sci Educ. 2007;12:169-186.
20. McDowell L. The impact of innovative assessment on student learning. Innovat Educ Train Int. 1995;32(4):302-313.
21. Hanrahan SJ, Isaacs G. Assessing self- and peer-assessment: the students’ views. Higher Educ Res Dev. 2001;20(1):53-70.
22. Searby M, Ewers T. An evaluation of the use of peer assessment in higher education: a case study in the school of music. Assess Eval Higher Educ. 1997;22(4):371-383.
23. Somervell H. Issues in assessment, enterprise and higher education: the case for self-, peer- and collaborative assessment. Assess Eval Higher Educ. 1993;18(3):221-233.
24. Topping KJ, Smith EF, Swanson I, Elliot A. Formative peer assessment of academic writing between postgraduate students. Assess Eval Higher Educ. 2000;25(2):149-166.
25. Topping K. Peer assessment between students in colleges and universities. Rev Educ Res. 1998;68(3):249-276.
26. Papinczak T, Young L, Groves M, Haynes M. An analysis of peer, self, and tutor assessment in problem-based learning tutorials. Med Teacher. 2007;29:122-132.
27. Boud D. Enhancing Learning Through Self-Assessment. London: Kogan Page; 1995.
28. Gielen S, Dochy F, Onghena P. An inventory of peer assessment diversity. Assess Eval Higher Educ. 2010;1:1-19.
29. Bloom BS. Taxonomy of Educational Objectives. The Classification of Educational Goals. Handbook 1: Cognitive Domain. New York, NY: McKay; 1956.
30. Ende J. Feedback in clinical medical education. JAMA. 1983;250(6):777-781.
31. Westberg J, Jason H. Collaborative Clinical Education: The Foundation of Effective Health Care. New York, NY: Springer Publishing; 1993.
32. Gatfield T. Examining student satisfaction with group projects and peer assessment. Assess Eval Higher Educ. 1999;24(4):365-377.
33. Magin D. Reciprocity as a source of bias in multiple peer assessment of group work. Stud Higher Educ. 2001;26(1):53-63.
34. Orsmond P, Merry S. The importance of marking criteria in the use of peer assessment. Assess Eval Higher Educ. 1996;21(3):239-250.
35. Swanson D, Case S, VanderVleuten C. Strategies for student assessment. In: Boud D, Feletti G, eds. The Challenge of Problem Based Learning. London: Kogan Page; 1991.
36. Prins FJ, Sluijsmans DMA, Kirschner PA, Strijbos J. Formative peer assessment in a CSCL environment: a case study. Assess Eval Higher Educ. 2005;30(4):417-444.
37. Higgins-Opitz SB, Tufts M. Student perceptions of the use of presentations as a method of learning endocrine and gastrointestinal pathophysiology. Adv Physiol Educ. 2010;43(2):75-85.
38. Cheng W, Warren M. Having second thoughts: student perceptions before and after a peer assessment exercise. Stud Higher Educ. 1997;22(2):233-240.