Abstract
Objective. To utilize a skills-based workshop series to develop pharmacy students’ drug information, writing, critical-thinking, and evaluation skills during the final didactic year of training.
Design. A workshop series was implemented to focus on written (researched) responses to drug information questions. These workshops used blinded peer-grading to facilitate timely feedback and strengthen assessment skills. Each workshop was aligned to the didactic coursework content to complement and extend learning, while bridging and advancing research, writing, and critical thinking skills.
Assessment. Attainment of knowledge and skills was assessed by rubric-facilitated peer grades, faculty member grading, peer critique, and faculty member-guided discussion of drug information responses. Annual instructor and course evaluations consistently revealed favorable student feedback regarding workshop value.
Conclusion. A drug information workshop series using peer-grading as the primary assessment tool was successfully implemented and was well received by pharmacy students.
INTRODUCTION
Evaluation, interpretation, and dissemination of medical literature to guide evidence-based use of drug therapy are vital skills in pharmacy practice that should be fostered within didactic curricula. The Center for the Advancement of Pharmaceutical Education 2013 Educational Outcomes specifically address drug information skills as part of the foundational knowledge needed by pharmacy graduates. The Outcomes state that learners should be able to “critically analyze scientific literature related to drugs and disease to enhance clinical decision making.”1 Furthermore, within the Accreditation Council for Pharmacy Education Standards, drug information skills are identified as a cornerstone of competently practicing pharmacy. The Standards state that graduates must be able to: “retrieve, analyze, and interpret the professional, lay, and scientific literature to provide drug information and counseling to patients, their families or care givers, and other involved health care providers.”2
To provide timely, evidence-based and effective responses to drug information questions, pharmacists need knowledge of and access to literature resources, sound literature evaluation skills, and strong written communication skills.3 In addition to the development of foundational knowledge and skills, students need practice opportunities to ensure they can apply their abilities. The need for application-based skills workshops in pharmacy education is well established,4-8 and creates the environment for formative assessment of critical thinking skills. A major barrier to written assignments in education is the labor-intensive nature associated with faculty member assessment of the work, especially when class size exceeds 100 students.4,9 The use of peer-assessment to facilitate timely scoring, reduce faculty member workload, engage the learner in reflection on their own work, and foster constructive feedback skills has been successfully used in higher education including pharmacy curricula.9-14 Incorporating drug information workshops using peer-grading for formative and summative assessment throughout a curriculum can help students improve their skills over time as their clinical knowledge is simultaneously enhanced.
The Midwestern University College of Pharmacy – Glendale (CPG) curriculum included a required, skills-based, 8-course sequence titled Professional Skills Development (PSD) in which students were exposed to various elements of pharmacy practice through direct instruction and workshops. The PSD course sequence provided 1.5-credit hours per quarter for the 4 quarters in which the workshop series occurred. Workshops included consultations with standardized patients, simulated provider interventions, mock prescription order verification, subjective, objective, assessment, and plan (SOAP) note writing, high-fidelity simulation using mannequins, formal topic presentations, debates on therapeutics, and written drug information responses. Each workshop was aligned to the didactic content delivered in the integrated sequence course to complement and extend learning through application and integration of the knowledge, skills, and attitudes required of a pharmacist. The integrated sequence was organized by organ system (eg, cardiovascular, renal, central nervous system) and delivered pathophysiology, medicinal chemistry, pharmacology, and therapeutics in a sequenced fashion beginning in the third of 8 didactic quarters. The drug information workshop series was added to CPG’s curriculum in 2010 in the second year of a 3-year PharmD program to complement the 2 drug information courses: Research Methods and Epidemiology, and Evidence-Based Health Care. This workshop started as a 2-part series and grew to a 4-part series starting with the class of 2015 and was incorporated into each quarter during the second didactic year of the program.
The objective of this manuscript is to describe the development and implementation of the drug-information workshop series, which used peer-grading to build drug information, writing, critical-thinking, and constructive feedback skills.
DESIGN
The 3 ability-based workshop outcomes for the drug information workshop series required students to utilize all 6 cognitive processes of Bloom’s Taxonomy and were founded in procedural and metacognitive application of knowledge. Students who completed all 4 workshops were expected to achieve the following outcomes: (1) provide an evidence-based evaluation of medical literature as it pertains to posed drug information questions; (2) credibly and persuasively educate health care professionals and/or patients about their evaluation and interpretation of medical literature; and (3) utilize reflection to learn from the approaches of their peers and provide quality constructive feedback. To achieve these outcomes, students had to accomplish the following workshop objectives: (1) expand knowledge of (drug information content area) through exploration, evaluation, and interpretation of medical literature; (2) strengthen drug information, written communication, and evaluation skills; and (3) hone analytical skills with newly gained didactic information (ie, pathophysiology, pharmacology, medicinal chemistry, and therapeutics of drug information content area) as taught in the integrated sequence course. With each successive workshop, the therapeutic question complexity was enhanced, which allowed for continued advancement of skills.
Each drug information workshop consisted of 4 parts: preparation, writing, peer-grading, and grade-review sessions, all taking place over 3 weeks during each 10-week quarter. This formative workshop series required each student to evaluate a case-based drug information question, establish the essential question to be answered, conduct a Medline search to identify and obtain evidence-based literature supporting their argument, and formulate a referenced written response. The writing session for the first workshop in the series was conducted outside of class, while the remaining 3 were conducted as timed, in-class sessions using the campus computer laboratory and facilitated by workshop faculty members. Peer-grading using both general and content-specific rubrics was led by a faculty member and conducted anonymously. The final workshop component was a grade-review and “challenge” session where students could review their peer-graded response, scored rubrics, and peer critique; during this session students could submit their responses to faculty members for grade review. Students were split into 2 groups for the writing and grading sessions to maintain anonymity during peer-grading and to accommodate the large class size. In total, each student spent 4 to 6 in-class hours on each of the 4 workshops.
As a result of the complexity of the workshop, including its occurrence over several class periods and integration of multiple skills, a “toolkit” was created with detailed instructions for students. The toolkit provided the ability-based workshop outcomes and objectives, requirements and standards for the response format, response submission details, discussion of the university plagiarism policies, “hints” for commonly encountered obstacles, a copy of the general rubric, and an example response. The required response structure and standards remained constant in all 4 workshops while the case-based question topic, intended audience, and/or required references differed between workshops. Differences from workshop to workshop were highlighted at the beginning of each toolkit. Sample workshop materials including the student preparatory toolkit, case-based questions, general and content-specific rubrics, and weighted scoring rubrics are available upon request.
The preparation session for the first workshop of the series involved a detailed explanation of workshop purpose and components. Additionally, a candid discussion regarding plagiarism was held to educate students about and preempt such practices. Students were encouraged to consider their confidence with the skills required of the assignment prior to the workshop so they could gather resources (eg, class notes, earmarked electronic references) and/or seek additional instruction prior to the writing session. Preparation sessions for this workshop ranged in length from 0.5 to 1 hour.
To encourage critical thinking and to simulate clinical practice, case-based questions covered therapeutic dilemmas not specifically addressed in didactic lecture. The questions required assessment of primary literature to compensate for the lack of specific recommendations in clinical guidelines or tertiary sources, rapidly changing practice, or newly emerging data. For the 3 workshops with in-class, timed writing sessions, the question was posed in a scenario requiring an immediate response (ie, within a couple of hours) to create a realistic situation in which a more thorough review of the literature was impractical.
Workshop faculty members, consisting of the content rubric originator, content area expert, course coordinator, and/or pharmacy residents, were present for in-class writing sessions held in the campus computer laboratory. Faculty members were available to answer questions and guide students through obstacles (eg, primary question clarification, Medline search strategy, retrieval of electronic full-text articles from the university library, adherence to the required referencing standard). Students were advised not to place their name anywhere within their response document and to upload their submission to the course learning management system, Blackboard Learn (Blackboard, Washington, DC), at the end of the timed session.
In the first offerings of this workshop, students worked in pairs to complete the assignment so they could learn alongside their peers and work through obstacles together. This practice also reduced the number of responses faculty members would need to grade. Ultimately, we moved away from paired work as peer-grading became the standard for peer interaction.
The use of peer-grading for this workshop series evolved over time and was first instituted in fall 2011 with the graduating class of 2013. The 2 workshop sessions for the prior graduating class were graded by faculty members alone, which was time intensive and arduous, requiring approximately 50 hours over a 4-week period. A streamlined approach was needed that would reduce faculty member grading time and hasten the time to student receipt of feedback. Peer-grading had been successfully implemented within the college’s PSD course sequence for SOAP notes and seemed a viable option.9 Additionally, peer-grading would incorporate another layer of critical appraisal and reflection, which would create a more learner-focused experience.
In fall 2011, when the peer-grading session was implemented, a faculty member scanned the responses for content, checked the scored rubrics for accuracy, and reviewed the written peer critique. Students were awarded up to 5 “bonus points” for providing quality constructive feedback. The average workshop grade exceeded 100% because of this grading structure. To prevent overall course grade inflation, 2 changes were made to the workshop structure: (1) allocation of points for being an engaged participant of the peer-grading session in lieu of bonus points, and (2) use of grade-review sessions in which students self-assessed the accuracy of the peer-scored rubrics.
For each drug information response, both a general and a content-specific rubric were applied. The general rubric remained unchanged for all workshops and assessed the quality of the students’ Medline search, response format (ie, structure, language, grammar), and referencing skills. The content-specific rubric, worth 30 points (60% of total points), assessed relevant components (ie, background, clinical evidence, recommendation) and breadth of “acceptable” content. Because the posed questions were based on therapeutic dilemmas that did not have a singular predetermined answer, the content-specific rubric was created in consideration of the clinical scenario and the literature indexed in Medline that was accessible to students. Preparation of the rubric took 20 hours for the inaugural offering of the question; in subsequent years, 3 to 5 hours were spent making revisions based on feedback from the prior year and newly published literature.
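The two-rubric scoring scheme described above can be illustrated with a short sketch. The article specifies only that the content-specific rubric was worth 30 of the total points (60%); the item names, individual point weights, and function name below are hypothetical, chosen solely to show how “complete/discussed” versus “missing” marks translate into a weighted score.

```python
# Hypothetical sketch of the workshop's two-rubric scoring scheme.
# Item names and point weights are illustrative, not the actual rubrics.

def score_rubric(marks: dict, weights: dict) -> float:
    """Sum the weights of items marked complete/discussed (True)."""
    return sum(weights[name] for name, present in marks.items() if present)

# General rubric (assumed here to be worth 20 points, 40% of a 50-point total)
general_weights = {"medline_search": 8.0, "format": 6.0, "referencing": 6.0}
general_marks = {"medline_search": True, "format": True, "referencing": False}

# Content-specific rubric (worth 30 points, 60% of the total, per the article)
content_weights = {"background": 8.0, "clinical_evidence": 14.0, "recommendation": 8.0}
content_marks = {"background": True, "clinical_evidence": True, "recommendation": True}

general_score = score_rubric(general_marks, general_weights)  # 14.0
content_score = score_rubric(content_marks, content_weights)  # 30.0
total = general_score + content_score
print(f"{total}/50 ({total / 50:.0%})")  # 44.0/50 (88%)
```

Under this sketch, a binary complete/missing judgment on each item (as peer-graders made) maps directly to a weighted point total, which mirrors how the weighted content rubric could be applied after the peer-grading session.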
Peer-grading sessions were held in 2 sections (2 hours each) on the same day to accommodate half of the class at a time. Each submission was assigned a random number and graded by a peer in the opposite group to allow for candid evaluation and open discussion of response content. In addition to the peer-grader being blinded to the identity of the response’s author, the author was blinded to the identity of the peer-grader. The course coordinator kept track of this information but did not disclose it to students.
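The double-blinded, cross-group assignment described above could be sketched as follows. The group rosters, ID range, and function name are hypothetical, and the sketch assumes the two groups are equal in size (as when a class is split in half); only the course coordinator would retain the mapping from random submission numbers back to authors.

```python
# Illustrative sketch of blinded, cross-group peer-grader assignment.
# Rosters and ID ranges are hypothetical; assumes equal-sized groups.
import random

def assign_blinded_graders(group_a, group_b, seed=None):
    """Give each submission a random number and a grader from the opposite group.

    Returns (id_map, grading_map): id_map links random submission numbers
    back to authors (held by the course coordinator only); grading_map links
    each submission number to its assigned peer-grader.
    """
    rng = random.Random(seed)
    authors = list(group_a) + list(group_b)
    ids = rng.sample(range(1000, 9999), len(authors))  # random submission numbers
    id_map = dict(zip(ids, authors))

    graders_for_a = list(group_b)  # group A's work is graded by group B
    graders_for_b = list(group_a)  # and vice versa
    rng.shuffle(graders_for_a)
    rng.shuffle(graders_for_b)

    grading_map = {}
    for sub_id, author in id_map.items():
        pool = graders_for_a if author in group_a else graders_for_b
        grading_map[sub_id] = pool.pop()
    return id_map, grading_map

id_map, grading_map = assign_blinded_graders(["a1", "a2"], ["b1", "b2"], seed=42)
# Every group-A author is graded by a group-B student, and vice versa;
# neither author nor grader can see the other's identity in grading_map alone.
```

Because graders see only a random submission number and authors see only a scored rubric, anonymity is preserved in both directions while the coordinator can still resolve disputes via id_map.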
The faculty member leading the session went through the rubric in a stepwise fashion, which allowed students to discuss content-specific points, extend or hone knowledge, and clarify requirements and standards of the assignment and general rubric. Peer-graders used the general rubric to rate required components as “complete” or “missing.” They used the content-specific rubric to assess the incorporation of clinical content themes (whether expressly or implicitly stated), noting whether each item was “discussed” or “missing.” This system guided allocation of points for each component.
An essential component of this workshop was “quality constructive feedback” from peers. At the beginning of the session, students were asked to read through the peer response, notating directly on the response any feedback or critique, including their “first impressions” of the work (ie, approach, flow, therapeutic recommendation). To avoid having these first impressions influenced by the substance of the grading rubrics, the general and content rubrics were not provided to the peer-graders until after this step was completed. At the end of the session, students were asked to give summative feedback to their peers and were prompted to provide constructive, explicit, and concrete descriptions of what was done well and why and what could be improved and how. Students were eligible to earn 10 points for being an engaged participant of the peer-grading session. Engaged participation was defined as providing “fair and accurate grading according to the rubric and faculty guidance.” The course coordinator could deduct peer-grading points for gross inaccuracy of rubric grading or absent/derogatory feedback.
At the completion of the peer-grading session, the weighted content rubric was applied by the teaching assistant. The weighted content rubric was not made available to students during peer-grading to avoid tangential or disparaging commentary regarding expectations or validity of the instrument to assess the intended content or outcomes.
The grade review sessions were held 1-7 days after the peer-grading session and provided the opportunity for students to evaluate their peer-assigned grade and review the written feedback. During these sessions, faculty members were available to answer questions, and students were able to submit their responses for faculty member review and grade adjustments. Attendance at these sessions was optional; however, students could also review their work outside of class in a meeting with the course teaching assistant. To streamline faculty member workload, submission for consideration of additional points was only allowed during this grade-review session.
EVALUATION AND ASSESSMENT
This case-based drug information workshop series provided the opportunity to develop and sharpen skill sets that were stated curricular outcomes for doctor of pharmacy programs including critical analysis of scientific literature and written communication. This was accomplished through active-learning, receipt of formative and summative assessment, and the opportunity for students to reflect on their work and that of their peers.
Overall, students performed well in these workshops (Table 1). During the 4 years, over 560 students completed at least 1 of the workshops in the series. The average aggregate scores on completed assignments for the classes of 2012, 2013, 2014, and 2015 were 88.4%, 89.3%, 86.0%, and 88.7%, respectively. During the peer-grading sessions, a significant amount of time was spent discussing the content and approach of the responses. Students were encouraged to read excerpts of the responses to the group to facilitate class input. This environment had the potential to advance learning through reflection on individual work in light of the content rubric and group discussion. The faculty facilitator highlighted key concepts as they related to the specific case-based question or general principles regarding the evaluation of medical literature (eg, retrospective study findings are able to show association but not causation). Additionally, students had the opportunity to individually discuss their response and have it assessed by faculty members in terms of any/all parts of the general/content rubric during the grade-review session. Students could also request consideration for points on material covered in their response that was not included in the content rubric.
Peer Grades vs Faculty Grades for Each Drug Information Response
Student evaluations of the course and workshop instructor occurred at the end of each course offering (Table 2). Open-response comments revealed recognition of the value of the workshop and appreciation for clarity of workshop structure and expectations even though they found that the timed writing assignments were “intense.” Students commented that at times they found it difficult to assess the content included in their peers’ responses if it was not expressly stated. While most students saw the value the workshops offered in their professional development, some felt their time would be better spent focusing on “everyday” pharmacist tasks such as filling prescriptions or counseling patients. In contrast, several students contacted workshop faculty members to suggest that additional drug information workshops be incorporated into the curriculum to further improve efficiency, confidence, and clinical writing skills.
Course and Instructor Evaluations for Workshop Series over 4 Graduating Classes
This workshop series was created by a single faculty member, but input from content experts and additional faculty members was solicited and used to create detailed, evidence-based content rubrics. Prior to the use of each case-based question, a minimum of 1 resident or third-year student was asked to complete the assignment adhering to the 2-hour writing session timeframe to gauge feasibility and to provide constructive feedback to case writers. Content experts reviewed the body of literature used to create the content rubric and provided guidance regarding key concepts and level of detail to include. Using the standardized general rubric across the entire series reinforced the required response components and brought attention to the importance of providing an organized, complete, and well-written response. Faculty member peer review of the workshop and its instructor validated this assessment, citing that the use of the created rubrics resulted in grading that was consistent, measurable, and grounded in the application of evidence-based literature.
DISCUSSION
The drug information workshop series successfully integrated practice-based opportunities to build literature evaluation, writing, critical-thinking, and constructive feedback skills using peer-grading. By creating a consistent structure across all workshops and using a standardized general rubric for each assignment, students found value in the workshop by mastering the skills necessary to competently gather, analyze, and disseminate evidence-based information. Being efficient in accessing high-quality information and formulating a response rooted in evidence-based medicine prepares graduates for advanced pharmacy practice that meets the needs of an evolving health care system.1
Critical thinking is an important skill for pharmacy graduates. In the context of pharmacy practice, critical thinking can be viewed as the ability to link foundational knowledge and evaluation of evidence-based medicine with clinical scenarios to meaningfully impact patient outcomes. One method of fostering critical thought is to challenge individuals to examine information and defend their evaluation of the information.9,10 De Sousa et al stated, “Effective learning occurs when students are able to correlate theory with practice. This means that they are able to make connections between the knowledge gained in one setting and apply it to another.”8 The structure and design of this workshop allowed students to use critical thought through the writing, peer-grading, and grade review sessions.
The use of peer-grading to facilitate both peer-assessment (scored rubrics) and peer-feedback (summative critique) provided the opportunity to advance the knowledge, skills, and objectivity of students in a bidirectional fashion.15,16 A student-peer mentoring program for drug information responses at the Ohio State University improved the perceptions of both the mentees (first-year pharmacy students) and mentors (second-year pharmacy students) of their ability to compose drug information responses.17 Our workshop faculty members found the peer-feedback provided by students to be appropriate, meaningful, and valued. The confidence and constructiveness of peer feedback evolved over the workshop series. This is likely due to several factors, including the progression of the workshop series over the course of an entire didactic year to allow for skill maturation and the purposeful incorporation of self and peer feedback within the workshops. Students were asked to assess the written work based on response focus, flow, tone, and use of evidence to support a rationale. Through assessment of their peers’ work, the peer-grader is likely to build objectivity in consideration of their own work, which drives self-reflection.10,15,16 Self-reflection can improve the quality of thinking, which is a hallmark of the metacognitive application of knowledge.18 This intellectual engagement can help clarify what constitutes high-quality performance and may fuel life-long learning.16 This interchange of knowledge and peer-feedback in higher education has been shown to develop “transferable skills” desired by employers, skills that are rooted in advanced communication abilities and a willingness to take responsibility for one’s own learning.15,19 Therefore, peer-grading may have extended learning opportunities beyond the workshop series and prepared graduates for the assessment required in post-graduate training and employment, or in the manuscript peer review process.12
Students performed well in the workshops and provided insightful commentary to faculty members through personal discussion and on course evaluations regarding the value of the series. Student comments regarding the difficulty of the writing assignment were not unexpected given the high demands of this workshop. In analyzing the course and instructor evaluations, we noted that students generally agreed that the evaluation methods used within the course adequately sampled the information they were expected to learn. The majority of students felt the faculty member explained concepts clearly and used effective teaching methods.
The overall design of this workshop series followed the 7 principles for good practice in education described by Chickering and Gamson.20 The principles state that good practice in education: (1) encourages contact between students and faculty members, (2) develops reciprocity and cooperation among students, (3) uses active learning techniques, (4) gives prompt feedback, (5) emphasizes time on task, (6) communicates high expectations, and (7) respects diverse talents and ways of learning. In our workshop series, student-faculty contact was achieved through all 4 parts of each workshop (ie, preparation, writing, peer-grading, grade-review sessions). The writing and peer-grading sessions encouraged active learning and cooperation among students while emphasizing time on task. The peer-grading and grade-review sessions allowed for prompt feedback and communicated high expectations of a pharmacy graduate. The use of the general and content-specific rubrics provided guidance yet respected different approaches to writing style and approach through weighted allocation of points and through the opportunity to gain credit for including pertinent information outside of the content-specific rubric.
The class averages for faculty member-graded versus peer-graded responses did not differ greatly and, in fact, for most cases where students challenged their peer grade, additional points were granted by the faculty member reviewer. This trend is consistent with previous research done at our institution, in which significantly lower grades were associated with peer-graded versus faculty-graded SOAP notes using a rubric.9 While other pharmacy education research found the opposite trend,11,14 the author of this paper hypothesizes that the combination of a blinded peer-grading process with points awarded for providing accurate grading and quality constructive feedback may have resulted in enhanced engagement and attention to detail. On average, 27.9% of students took advantage of the grade challenge opportunity, indicating that most were satisfied with the score provided by their peer. However, the relatively low percentage of students submitting challenge forms could conversely indicate over-scoring of the peer-graded rubrics. Yet, when considering that the average scores for the 2 sessions graded solely by a faculty member were similar to or higher than peer-graded scores, this seems less of a concern.
There are several advantages of using the peer-grading process for both students and faculty members. For students, peer-grading promotes critical appraisal of one’s own work while creating an open forum for discussion of different, yet successful, approaches to the same complex question. This exchange of information is in alignment with the learner-focused principles of andragogy. For the faculty member, grading 150 responses of several hundred words each is time intensive, and it can be challenging to maintain consistency and objectivity.4 In addition to reducing faculty member workload, the preparation of content-specific rubrics encourages faculty members to establish clear and consistent grading criteria.9
The structure of this workshop was not without disadvantages. It was difficult to craft questions that challenged students but did not exceed what they could reasonably accomplish given the writing session time constraints, their clinical knowledge, and their research and writing skill sets. The logistics and planning required to align the question content to the didactic lecture sequence for a workshop spanning 3 to 4 weeks were complex and required the collaboration of multiple faculty members. Creation of the content-specific rubrics was tedious and required a thoughtful balance of being detailed and focusing on key concepts without being overly stringent. Often in writing, ideas are implied through context and not stated explicitly, which can create frustration among students when trying to determine if content-specific rubric items are covered within the response. To overcome this, the faculty member leading the peer-grading session needed to guide and empower students to make judgment calls regarding content coverage. The use of detailed rubrics sometimes resulted in cumbersome grading sessions and required redirection when discussion became tangential. Fortunately, with repeated offerings of this workshop, the logistics and preparation became easier.
As we continue to reflect on this workshop series, we are encouraging faculty members and students to provide suggestions to balance its academic nature (eg, requiring adherence to specific referencing guidelines) with professional practice.
SUMMARY
A skills-based workshop series focused on written responses to drug information questions and using peer-grading as the primary assessment tool was successfully implemented. This workshop provided students opportunities to apply and extend information delivered in the didactic curriculum. Aggregate assignment grades showed achievement of these skills and student evaluations revealed acknowledgement of the utility of the workshop. From a student perspective, the integration of the knowledge and skills required to accomplish the written responses was a challenging yet valuable experience. The structure of this workshop series could be used to develop novel, application-based workshops encompassing the delivery of pharmaceutical care.
- Received January 30, 2014.
- Accepted April 3, 2014.
- © 2014 American Association of Colleges of Pharmacy