Abstract
Objective. To evaluate attitudes toward peer review of teaching and its impact on teaching practices and perceptions.
Methods. The University of Waterloo School of Pharmacy implemented a peer-review process for its teaching program in 2015. Those reviewed were invited to complete an electronic survey that captured their attitudes toward teaching, attitudes toward peer review, and changes in teaching practices, and to participate in semi-structured follow-up interviews for more in-depth discussion of these issues.
Results. Twenty-six (76%) instructors completed the survey. Instructors agreed that peer reviews of teaching are a development opportunity (96%), and 73% were comfortable with the idea of peer review. Over half (58%) indicated that the review made them feel more confident that their teaching strategies were effective, and the same percentage indicated that they planned to make changes to their teaching as a result of the feedback received from the peer review. Fewer instructors indicated that peer review changed their attitudes toward teaching (12%) or increased the value they placed on teaching (34%).
Eight instructors (23.5%) participated in the semi-structured interviews. Themes that emerged included: attempts to make the reviewee comfortable during the peer review were successful; the feedback provided to instructors regarding their teaching was positive but not critical enough; there was lack of clarity as to the purpose of the feedback; and instructors planned to make only minor changes to their teaching as a result of the review.
Conclusion. Peer review of teaching was well received and feedback was confirmatory in nature but had minimal impact on teaching practices as it was not deemed to be critical enough. Changes to the peer review program are needed to increase its impact on teaching practices.
INTRODUCTION
Since its inception in 2008, the University of Waterloo School of Pharmacy has placed an emphasis on excellence in the classroom. Early teaching retreats and workshops encouraged instructors to adopt innovative teaching methods and to share them with colleagues while creating an integrated curriculum. As early as 2009, the school implemented a Teaching Squares program in which groups of four instructors observed one another’s classes and then reflected on their own performance. While not a peer evaluation, the program promoted a culture of sharing and openness to giving and receiving feedback among faculty members. The school also follows the standard supervisory evaluation process used as part of the university’s performance evaluation plan. Supervisory evaluations are summative assessments and are not intended to facilitate formative learning. To meet this need, the school implemented a formal peer review of teaching process to promote formative learning about teaching in a lower-stakes, non-evaluative environment.
Peer review of teaching offers several potential benefits for both the person being reviewed and the reviewer.1 For example, as a feedback tool it may promote reflective practices and professional development and increase confidence in teaching.1-4 Reviewers have an opportunity to observe others in the classroom and consider modifications to their own teaching practices.5,6 Furthermore, a comprehensive assessment of teaching that is used as part of merit review or for tenure and promotion process must include multiple evaluations from several sources, including students, administrators, peers, and self.7-9 Thus, while peer review of teaching is intended to be used for formative purposes, it can also be used as a component of faculty evaluations, and its inclusion has been reported by many American pharmacy schools.9
Many models for peer review of teaching have been described in the literature.2,3,5-7,9-20 Some models involve only classroom observation (a single visit or multiple visits), while others involve a more comprehensive review of materials, including student work, assessments, and syllabi. Feedback tools may include simple checklists, rating scales, written feedback, and/or verbal feedback. Reviewers may or may not receive specialized training prior to conducting the reviews. Reviewers may consist of a small group of individuals chosen for the task or of pairs that agree or are selected to conduct reciprocal reviews. Pre- and/or post-observation meetings may be included as part of the review process.
Given these variations, it is not clear whether the benefits attributed to peer review are generalizable in all contexts. Therefore, the University of Waterloo School of Pharmacy sought to evaluate faculty members’ attitudes toward the peer review program that the school had established and to assess its impact on their teaching practices and perceptions.
METHODS
A peer review of teaching program was implemented in the University of Waterloo School of Pharmacy in fall 2015. Evaluations were completed by the school’s peer review team, which consisted of three individuals who taught on a regular basis (two faculty members and one teaching staff member). Each review was completed by one peer review team member, although in the early stages of the study, two or three members reviewed the same instructor to ensure consistency in expectations and feedback. The choice of which peer review team member completed an instructor’s review was typically based on reviewer availability; reviewers were not matched with reviewees on other factors such as the reviewee’s years of teaching or content expertise. Reviews were based on a single classroom observation of the subject’s teaching. Visits were typically one hour in duration and took place at a time mutually agreed upon by the reviewer and the individual being reviewed. Instructors were encouraged to send course materials (such as the course syllabus) and any teaching materials (such as slides and pre-class reading material) to the peer reviewer. A standardized tool was developed for the peer review process that provided feedback via rating scales on learning objectives, content expertise, instructional design, and instructional delivery, and included reviewer comments and suggested future professional development (available from the corresponding author upon request). After the peer review, instructors received a copy of the completed tool via email. In addition, written feedback highlighted strengths and suggestions for improvement and, where appropriate, suggested follow-up development (such as workshop attendance or reviewing resources provided by the university’s Centre for Teaching Excellence). While the review was designed to be formative in nature, a copy was also provided to the school’s dean to add to the instructor’s file so that the results could be incorporated into annual performance reviews and, if appropriate, tenure and promotion packages. Upon request, instructors could follow up with the peer review team for clarification or further discussion. In terms of frequency, instructors were evaluated on a continuous cycle every three to five years. The goal for tenure-track faculty members was to have three peer reviews completed prior to submission of their tenure package. Reviewees were identified each term by the school’s assessment committee based on teaching assignments and time elapsed since the last review.
In order to evaluate the impact of the peer review on teaching practices and perceptions at the school, instructors were invited via email to complete a 25-item anonymous survey that included 19 Likert-scale questions (1=strongly disagree to 5=strongly agree) and six yes/no questions capturing attitudes toward teaching, attitudes toward peer review, and changes in teaching practices (Tables 1 and 2). The survey was administered through SurveyMonkey (SurveyMonkey, Inc, Palo Alto, CA). Survey results were analyzed using descriptive statistics and presented in aggregate, with mean, median, and standard deviation values calculated for the five-point Likert-scale questions using Excel.
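For readers who wish to reproduce this type of summary programmatically rather than in Excel (which is what was used in this study), the sketch below illustrates the kind of per-item descriptive statistics described above. The item names and response values are hypothetical and are included only to show the calculation; they are not the study data.

```python
# Minimal sketch of a per-item descriptive summary for 5-point Likert responses.
# Item names and response values are illustrative only, not the study data.
from statistics import mean, median, stdev

likert_responses = {
    "peer_review_is_development_opportunity": [5, 4, 5, 4, 5, 3, 4, 5],
    "comfortable_with_idea_of_peer_review":   [4, 4, 3, 5, 4, 2, 4, 4],
}

for item, scores in likert_responses.items():
    agree = sum(1 for s in scores if s >= 4)  # responses of "agree" or "strongly agree"
    print(f"{item}: mean={mean(scores):.2f}, median={median(scores)}, "
          f"sd={stdev(scores):.2f}, %agree={100 * agree / len(scores):.0f}%")
```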
Table 1. Survey Responses by Pharmacy School Instructors Who Underwent Peer Review (5-Point Likert Scale Questions)
Table 2. Survey Responses by Pharmacy School Instructors Who Underwent Peer Review (Yes/No Questions)
To gain a more in-depth understanding of participants’ attitudes toward and experiences with the peer review process, we also designed a qualitative component to the analysis. A semi-structured interview guide was developed that included questions probing instructors more specifically about how the peer review process had affected their teaching practices and their attitudes toward peer review and teaching. The interview also sought suggestions for how to improve the peer review process. Questions included: “Did the peer review process help you identify areas of your teaching that you would like to target for improvement?” and “If you could only make one change to the peer review process, what would it be?” Faculty members who had undergone a peer review of their teaching were invited by email to participate. All interviews were conducted by an independent research assistant. The interviews were audio recorded and transcribed verbatim, and each speaker was identified in the transcript by an alphanumeric code to preserve their anonymity. Transcripts were analyzed using NVivo 10 software (QSR International, Burlington, MA). An inductive approach to analysis was used, beginning with a round of descriptive coding to gain insight into themes and relationships in the data, followed by a second round of coding. Emerging themes and the relationships between them were interpreted and agreed upon by the investigators. Ethics approval for this study was obtained from the Office of Research Ethics at the University of Waterloo.
RESULTS
By the time the study was completed, 18 months after the peer review program was introduced, 34 instructors had been reviewed, including 19 faculty members, three teaching staff, six adjunct faculty members, and six sessional instructors. The survey was sent to these 34 eligible instructors, and 26 completed it, for a response rate of 76%. Results from the survey are presented in Tables 1 and 2. In terms of feedback on the review process, most respondents agreed or strongly agreed that they received enough notice regarding the peer review (92%), the purpose of the review was clear (88%), feedback from the review was provided in a timely manner (96%), feedback was clear and understandable (96%), and feedback was specific (92%). All respondents agreed or strongly agreed that they were comfortable with the person chosen to be their peer reviewer, and 73% were comfortable with the idea of being reviewed by a peer before the classroom observation occurred. However, 31% of respondents indicated that they felt nervous during the class in which the peer reviewer was present. Over half of the respondents agreed or strongly agreed that they would feel more comfortable with another peer review of their teaching now that they had completed one.
The survey included questions that assessed instructors’ attitudes toward teaching and peer review of teaching. While 96% of respondents agreed or strongly agreed that peer reviews of teaching are a development opportunity and 81% agreed or strongly agreed that peer reviews are a requirement of teaching, fewer respondents indicated that peer review changed their attitudes toward teaching (12%) or increased the value they placed on teaching (34%). However, 58% of respondents agreed or strongly agreed that the peer review process caused them to think more about their teaching process and feel more confident that their teaching strategies were effective, and that their attitude toward peer review of teaching was more positive as a result of the process.
In terms of teaching practices, 77% of respondents agreed or strongly agreed that the peer review provided feedback that would help improve their teaching. Additionally, 58% of respondents planned to make changes to their use of classroom time based on the peer review of teaching. At the time of survey completion, 23% of survey respondents indicated they had sought professional development opportunities or consulted teaching-specific resources as a follow-up to the peer review. This included attending workshops, reviewing online materials, and meeting with teaching liaisons affiliated with the university’s Centre for Teaching Excellence.
The peer review team had some concern that their presence in the classroom might cause instructors to alter their usual teaching practices or styles. However, only 12% of survey respondents indicated that they altered their teaching style as a result of being observed.
Eight of the 34 instructors (23.5%) consented to participate in semi-structured interviews. Upon analysis of the interview transcripts, four major themes emerged: the attempts made to make the reviewee comfortable were perceived as successful; feedback was confirmatory but not critical enough to be useful; there was a lack of clarity among participants about the purpose of the feedback; and the instructors made few changes to their teaching approach as a result of the review (Appendix 1).
The reviewees who were interviewed were asked how comfortable they were with the peer review process as a whole. The majority of respondents felt that there was a clear and successful attempt by the review team to cater to participants as much as possible. This was reflected in the perceptions of reviewees that attempts were made to accommodate them, both from a scheduling perspective and by allowing them to select a lecture to be reviewed. For example, one participant noted, “…I was very comfortable… I specifically like how they asked us to specifically indicate when we would like this [the review] to happen. It was topics that you were obviously very comfortable in and that’s what you selected.” Some of the reviewees appreciated the efforts to make them comfortable but felt that the review would perhaps be better served by reducing these efforts in favor of making the review more spontaneous. Reviewees still experienced some nervousness despite the efforts of reviewers to make the process as worry-free as possible, but this was generally seen as positive by those reporting it.
The majority of the reviewees believed that the feedback they received from the peer review process was confirmatory and supportive of their current teaching practices. However, several participants expressed disappointment over the lack of constructive criticism regarding their teaching and noted that few tangible suggestions for improvement were offered. For example, “I did expect to receive a bit more constructive feedback on areas that I could develop or improve in. It wasn’t so much that, it was more a validation that the things I was doing were correct.”
The interviews explored why reviews did not focus more on providing critical feedback. Some participants posited that the peer-based nature of the review, the fact that the reviewers had to work side by side with the reviewees, and the hierarchy of positions within the school may have contributed to reluctance on the part of the reviewers to be critical. One participant stated, “I will say that given my position, maybe there would be a little reluctance for [a peer] to come in and say [critical comments]. There’s always those dynamics, I think, within a relatively small unit like the School of Pharmacy.” Also, the pleasant personality of the reviewers was, in several instances, thought to be an impediment to their providing criticism to the reviewees.
There were, however, participants who felt that although there was a minimum of critical feedback provided, the confirmatory feedback was useful and gave them confidence that what they were doing in terms of teaching strategies was valid. A participant commented, “The person who was reviewing me teaches a similar course… [it was comforting] that confirmation of how I was approaching things appeared to be fine and very much congruent with what that individual was doing.”
Some of the reviewees expressed the opinion that a reviewer from outside the school might help to address the lack of constructive criticism seen in the reviews and lead to more practical suggestions being provided. These reviewees believed an external reviewer might reduce the awkwardness that arises when the reviewer is also a close work colleague. In addition, some felt that individuals specifically trained in pedagogy might be better suited to provide useful feedback.
There was a considerable degree of confusion among reviewees with respect to the purpose of the feedback. Most of those interviewed were comfortable with the idea of the feedback being used for personal growth and for formative purposes, but several comments reflected a lack of understanding about whether there would be a summative aspect to the review that would affect performance evaluations. “I wasn’t entirely sure of what was going to happen with the feedback… where does it go?” one participant asked. “What do they do with it, if it’s good or bad. If you’re doing something well, who knows about this now?”
Some reviewees felt that the feedback should have a strong summative component for the purposes of merit-based evaluation and promotion, rather than simply being a formative tool.
The reviewees were asked directly whether they felt they would change their approach to teaching based on the peer review process. Overwhelmingly, the participants said that they would not be making any significant changes to teaching strategies. In many cases, this related to theme two, part of which was that the reviewers’ suggestions were not substantive enough to warrant change. As one participant expressed, “I don’t think it [identified areas for change] which sort of was disappointing… I didn’t feel there was enough constructive feedback.”
Despite the lack of constructive feedback and its effect on the ability of reviewees to envision changes to their teaching strategies, a few of the participants described minor suggestions made by the peer review team that were ultimately adopted. “She [the reviewer] did state that I should probably have learning outcomes for each section,” a participant commented. “I think that was the only suggestion of the whole thing. It was a good one, though.”
DISCUSSION
The University of Waterloo School of Pharmacy introduced a peer review of teaching program in fall 2015. While student evaluations are still the most common form of teaching assessment in pharmacy schools, peer review of teaching has become widespread and offers many benefits, such as the promotion of more reflective teaching practices.1,4,9 Our peer review of teaching program involved single classroom observations performed by one member of a three-member peer review team. The program was similar in scope to the program described by Sullivan and colleagues.11 Each peer review was followed by feedback provided through a standardized tool that included rating scales and the reviewer’s written comments. The tool was similar to the template described by Davis, capturing domains similar to those described by Wellein and colleagues.6,7 Our evaluation of the impact of our peer review program on teaching practices and perceptions identified three key findings.
First, we found that the peer review of teaching process used at the school was well-received, which was consistent with what has been observed elsewhere.21 There is debate within the Academy as to whether peer review of teaching should be imposed or voluntary.12 Despite the imposed nature of our peer review program (all instructors participated in the program), there was a high degree of comfort with the process, with many instructors indicating that they would feel more comfortable with a subsequent peer review because of their experience. The comfort with the process arose in part from allowing instructors to choose the time for the classroom observation. Additionally, most instructors indicated that they viewed peer review of teaching as a development opportunity and a requirement of teaching. The participants’ positive attitude toward peer review of their teaching may have arisen in part from the high value that instructors at the school already placed on teaching.
Second, there was limited evidence that the peer review of teaching program impacted attitudes toward teaching or peer review. While over half of survey respondents indicated they intended to make changes to teaching practices as a result of the peer review, some of the participants in the semi-structured interviews stated that they did not plan to make substantial changes. Many instructors cited already holding teaching in high regard, and this was not altered by the peer review process, similar to what has been reported by Bernstein.21 The changes that participants anticipated making appeared to be relatively minor, such as incorporation of learning objectives, and in some instances related to the specific class that was observed rather than to their overall teaching approach. The lack of critical feedback provided in the peer review appeared to be the main reason cited by instructors who were not planning to change their teaching practices. The confirmatory nature of the feedback led instructors to believe that changes were not necessary. Other investigators have suggested that peers who know each other well may focus only on positive aspects of the evaluation; they may want to avoid confrontation and maintain friendships.12 However, similar to our study, instructors reported that only focusing on positive feedback did not provide insight on how to enhance their teaching.12 Other pharmacy schools have reported higher percentages of instructors who planned to make changes to their teaching practices after peer review compared to what we found in our study; contributing factors may have included a larger number of reviews conducted (resulting in more opportunity for feedback), training for observers, post-observation meetings, and a balance of positive and constructive feedback.13 Nonetheless, positive confirmatory feedback may build confidence in instructors, and peer review of teaching provides a good opportunity for that as teaching is usually done in a solitary fashion without peers present in the classroom.12,19 Indeed we found that over half of our instructors indicated that the peer review process increased their confidence in their teaching strategies.
Finally, there was confusion and some disagreement regarding the intent of peer review of teaching, specifically whether it was to be (or should be) used for formative or summative purposes. When a peer observation and evaluation program was established in one pharmacy school, over half of survey respondents agreed that results should be included as part of their yearly performance evaluations.13 However, another pharmacy school reported that faculty members preferred that review should be a formative process as this was deemed to be nonthreatening.22 Peer review as a summative assessment is desirable as it provides an alternative to student evaluations, which may be influenced by factors such as grade expectations.23 Furthermore, students may not have appropriate pedagogical training to be able to accurately evaluate teaching quality. Factors that may increase the validity of peer review for its use in summative assessment include reviewer training, multiple classroom visits, the use of standardized instruments, and some opportunity for verbal as well as written feedback.8,18
This study has identified several changes that can be made to the existing peer review of teaching program at the University of Waterloo School of Pharmacy. First, reviewers will be encouraged to provide more critical feedback. This may be facilitated by face-to-face meetings between reviewers and reviewees before and after the classroom visit, where instructors can comment on what they think they did well and could improve, and can ask for feedback in specific areas.2,10,11,18 Additional training in providing meaningful feedback has also been planned for an upcoming instructor development workshop; such training has been considered useful by other institutions and will benefit our peer review team.2,12 We will also consider changes to the feedback tool that may allow for more constructive feedback. Second, the involvement of additional reviewers could be beneficial, possibly including instructors from other departments who may feel more comfortable providing critical feedback; the current peer review team is often described as being “too nice” with its feedback. Anecdotally, the three members of the peer review team found that the opportunity to observe other instructors in the classroom was highly beneficial, as has been reported by other research teams, so extending this opportunity to additional individuals may be of value and seen as a development opportunity.5,6 Adoption of peer review pairs (using a reciprocal process with the observer and observee) has been suggested in the literature, as has the use of cross-disciplinary reviewers who can focus more on pedagogy than content.2,10 Finally, the school will clarify whether the review is to be used for formative or summative purposes. Other institutions have suggested that it be used primarily for formative purposes, but that individual instructors can opt to include the review in promotion packages.6,22 Alternatively, Mager and colleagues described a model where peer review of teaching was both formative and summative, using a rubric that gave the reviewee several options to choose from for inclusion in the review; this was suggested to increase the faculty member’s trust in the process.10
This study had some limitations that may affect the applicability of the findings. First, as previously mentioned, several models of peer review have been described in the literature, many of which involve multiple visits, more comprehensive examination of course materials, and consultation between reviewer and reviewee. The effectiveness of and response to the review may relate more to the breadth of the review than to the process itself. Second, because of the relatively small size of the University of Waterloo School of Pharmacy, we did not collect demographic information, such as number of years teaching and academic rank, in order to maintain confidentiality. Findings may have differed if subgroups could have been evaluated. Third, the rate of participation in semi-structured interviews was fairly low, although there did appear to be consistency in the themes generated across the eight interviews. Self-selection bias may also have led those most interested in and passionate about teaching to be the most likely to participate in the interviews. Finally, the ultimate goal of peer review is to improve student learning; this study assessed self-reported changes in teaching practices but was not designed to directly assess student learning.
CONCLUSION
The peer review of teaching program implemented at the University of Waterloo School of Pharmacy was well received and viewed as helpful in reassuring instructors of their competence, but it had minimal impact on teaching practices or attitudes toward teaching. As the program continues to evolve, consideration should be given to maintaining its focus on building confidence while also providing feedback that is specific and comprehensive enough to promote improvements in student learning and instructor skill.
ACKNOWLEDGMENTS
We would like to thank the following individuals and groups: Mary Power (Centre for Teaching Excellence) for her initial contributions to the design for this study; Caitlin Carter for completing a literature search for this manuscript; the School of Pharmacy’s Assessment Committee for the creation of the terms of reference and standardized instrument used for the peer review process; and Kelly Grindrod for providing advice on coding. Funding for this project was provided through a University of Waterloo Learning Innovation and Teaching Enhancement Seed Grant.
Appendix 1. Examples of Instructor Comments Captured During a 45-Minute Interview After the Instructors Had Undergone Peer Review
Theme One: The attempts made to make the reviewee comfortable were perceived as successful
“…I had a bit of an idea what to expect before, so that was helpful…I think the lead-up and when I asked some questions ahead of the time that the lecture they came in [sic], the information back about the process was comforting. They didn’t make it as a huge, scary thing.” [R3]
“…I was very comfortable… I specifically like how they asked us to specifically indicate when we would like this [the review] to happen. It was topics that you were obviously very comfortable in and that’s what you selected.” [R4]
“[I was] very comfortable… my person’s really, really nice. Very warm. Very welcoming. Very, you know? The person’s very, very… they’re just very good at putting you at ease.” [R6]
There was still some nervousness experienced by reviewees despite the efforts of reviewers to make the process as worry-free as possible, but this was generally seen as positive by those reporting it:
“…I always have a baseline level of… nervousness, going into a lecture. I lecture quite a bit, but I still have it. I always have it. It keeps me on top of it, keeps me going.” [R1]
“Anytime you know that you’re going to be evaluated, you’re going to be a little bit more nervous. I think I’m nervous anytime I lecture. I once had somebody who had been teaching for decades, he also said he was nervous. He said the reason he’s nervous is because he cares. If he stopped being nervous, it means that he stopped caring. It’s okay to be nervous.” [R5]
Some of the reviewees appreciated the efforts to make them comfortable, but felt that the review would perhaps be better served by reducing these efforts in favor of making the review more spontaneous:
“It’s probably giving too much leeway to instructors… personally, I think the peer review team should be able to pop in whenever they want to… if someone just came into my lecture I would be so much more uncomfortable, but they would actually see me in the real form.” [R1]
“I can see the value of having it done both ways… I like the way it’s done now… but I could also see the value in not disclosing that info [the lecture] beforehand and just having them drop in and see that.” [R4]
“Random attendance, that’s something I would probably consider adding on a trial basis… to see if the benefits over weigh the issues.” [R8]
Theme Two: Feedback was confirmatory but not critical enough to be useful
“I would say [the feedback] was more confirmatory than helpful.” [R1]
“I did expect to receive a bit more constructive feedback on areas that I could develop or improve in. It wasn’t so much that, it was more a validation that the things I was doing were correct.” [R4]
“I think in terms of the output of it, I wish I had gotten a little bit more constructive criticism… I got lots of compliments. I didn’t feel like I got a lot of ‘this is something you can do differently that might be helpful.’” [R6]
“It was very positive feedback. I think there were some attempts made for constructive criticism, but they weren’t particularly critical. [I would have preferred] some more, sort of, aggressive level of criticism.” [R7]
“I will say that given my position, maybe there would be a little reluctance for [a peer] to come in and say [critical comments]. There’s always those dynamics, I think, within a relatively small unit like the School of Pharmacy.” [R2]
“I also think that the particular person who did my evaluation is well known for being a very nice person… it is not easy to give constructive criticism to someone and then go and sit in a meeting with them.” [R6]
“The person who was reviewing me teaches a similar course… [it was comforting] that confirmation of how I was approaching things appeared to be fine and very much congruent with what that individual was doing.” [R3]
“I do think having a peer reviewer, as it were, who’s not from the School of Pharmacy would be very helpful, provide maybe a little different perspective then how we do things… and also is somebody you see on a day-to-day basis, so might feel a little more comfortable providing stronger feedback if that’s something that was necessary.” [R2]
“I think somebody from outside would be better able to give you constructive criticism. I also think somebody who actually has a lot of experience with teaching and pedagogy would provide more meaningful feedback and there would be a credibility behind that feedback.” [R6]
Theme Three: There was a lack of clarity among participants about the purpose of the feedback
“I don’t know where the info… what is done with that data? Do I get something formal? Does it go into my employee file?” [R3]
“I wasn’t entirely sure of what was going to happen with the feedback… where does it go? What do they do with it, if it’s good or bad… if you’re doing something well, who knows about this now? Where is this going?” [R5]
“We have this peer review process that gives them [instructors] every opportunity to do their best, and it’s not used in merit scores… I think it’s an opportunity and it’s not there, and it should be.” [R1]
“Given that there’s not a lot of weight behind it [peer review] in terms of using it for performance review or tenure and promotion… I’d like to see it weighted more heavily.” [R2]
Theme Four: The instructors made few changes to their teaching approach as a result of the review
“[The feedback] didn’t state anything that needed to improve. I don’t have much to answer about that [teaching improvements].” [R1]
“I can’t think of any changes as a result of the peer review.” [R2]
“I don’t think it [identified areas for change] which sort of was disappointing…I didn’t feel there was enough constructive feedback.” [R6]
“I think not really… I do some curriculum but it’s all related to teaching. So I already have a really high value of teaching.” [R6]
“I hold it [attitude toward teaching] pretty high… so it [the peer review] didn’t affect it.” [R7]
“She [the reviewer] did state that I should probably have learning outcomes for each section… I think that was the only suggestion of the whole thing. It was a good one, though.” [R1]