Abstract
Objective. To assess a previously described peer observation and evaluation program 2 years after implementation.
Methods. A pre-implementation survey assessed faculty needs and attitudes related to peer evaluation. Two years after implementation, the survey was repeated with additional questions regarding adherence to peer observation and evaluation policies and procedures, feedback received, and impact on teaching.
Results. Faculty attitudes towards peer evaluation stayed the same or improved post-implementation. Adherence to the initial 3 steps of the process was high (100%, 100%, and 94%, respectively); however, step 4, which required a final discussion after student assessments were finished, was completed by only 47% of the respondents. All faculty members reported receiving a balance of positive and constructive feedback; 78% agreed that peer observation and evaluation gave them concrete suggestions for improving their teaching; and 89% felt that the benefits of peer observation and evaluation outweighed the effort of participating.
Conclusions. Faculty members adhered to the policies and procedures of peer observation and evaluation and found the peer feedback beneficial.
INTRODUCTION
Teaching effectiveness is assessed in many ways in colleges and schools of pharmacy. Student evaluations of teaching are commonly used, but they are limited in breadth and depth and in their ability to capture and comment on the total teaching experience.1-4 Faculty peer evaluations offer another method of obtaining constructive feedback about the quality of teaching and can be used in conjunction with student evaluations to improve teaching methods, as well as to inform merit, promotion, and tenure decisions.4 The 2007 Accreditation Council for Pharmacy Education Accreditation Standards and Guidelines state that faculty members who teach should be evaluated annually and that assessment procedures should include self-assessment and “…appropriate input from peers, supervisors, and students. The use of self-assessment and improvement tools, such as portfolios, by faculty and staff members is encouraged.”5 The success of a peer observation and evaluation program relies heavily on a clear, evidence-based design, as well as administrative support and leadership, but ultimately rests on faculty engagement in and reflection on the process.6
Improving and assessing the teaching-learning process is an essential mission of the Department of Pharmacy Practice and the School of Pharmacy at Northeastern University. A formal peer observation and evaluation program for classroom teaching was developed and implemented by this department in January 2008, and the development of its process and the peer observation and evaluation tool (POET) have been previously published.7 The program was formally assessed 2 years after implementation. The objectives of this study were to determine: (1) faculty attitudes regarding peer observation and evaluation at both pre- and post-implementation points; (2) the degree of adherence among faculty members to the peer observation and evaluation policies and procedures; (3) the type and nature of peer observation and evaluation feedback received; and (4) faculty perceptions of the peer observation and evaluation program’s value and impact on their teaching.
DESIGN
During the 2005-2006 academic year, a task force representative of pharmacy practice faculty members was charged with developing a formalized, comprehensive, peer-driven teaching assessment program to promote the improvement of teaching in classes with large enrollments. Departmental faculty members approved a teaching philosophy statement that emphasized teaching strategies to foster active learning and student engagement. Based on this philosophy, and with the assistance of the director of the University’s Center for Effective University Teaching (CEUT), the task force used Webb and McEnerney’s8 stepwise approach to developing a successful peer observation system. To elicit faculty input during this process, the task force also surveyed faculty members to determine their perceptions of the need for and value of a peer assessment program. Faculty comments helped shape the design, development, and implementation of the program; overall, faculty members indicated that the ensuing program should be mandatory and used for formative evaluation purposes.
A process for peer observation and evaluation of teaching was approved by the department for implementation in fall 2007 and consisted of the following 4 steps:
Step 1: Pre-Observation Meeting. A pre-observation meeting was to occur between the trained peer evaluator and the faculty member to discuss the objectives of the class session, the placement of the session in the development of the course content, and any class-specific issues. In addition to the mandated elements of the process, the observed faculty member could request that the peer evaluator provide specific feedback on any topics that might be helpful. This meeting was to occur approximately 1 week before the observation.
Step 2: Classroom Observation. For classroom observation, the peer evaluator was to attend 1 of the faculty member’s class sessions. Using a teaching observation record, the evaluator was to take written notes during the class session concerning the positive aspects of the faculty member’s teaching performance as well as areas for improvement. After the classroom visit, using the teaching observation record as a resource, the peer evaluator was to complete the POET developed by the task force.
Step 3: Post-Observation Meeting. The post-observation meeting was to occur approximately 1 week after the observation. During the interim, the observed faculty member was to complete a self-reflection on his/her teaching performance in the class, and the peer evaluator was to complete the POET and summarize the faculty member’s major strengths and areas for improvement. Based on the assumption that any class session could be improved, the peer observation and evaluation process required the evaluator to provide preliminary written feedback on 2 to 3 areas for improvement as well as 2 to 3 strengths. The pair was then to discuss these findings, the faculty member’s self-reflection, and strategies for improvement.
Step 4: Post-Student Assessment Meeting. After class content was assessed through analysis of quizzes, examinations, and/or other student assessments, the observer was to meet with the faculty member to discuss students’ achievement of learning outcomes. This step was to complete the process by incorporating the results of student learning.
The director of the CEUT conducted a 4-hour, mandatory faculty training session to standardize the peer observation and evaluation process and maximize the value of the peer evaluations. During the training session, the purpose and value of peer evaluation were discussed, faculty members were oriented to the peer observation and evaluation process and tool, mock peer observations and evaluations were role-played, and feedback strategies were reviewed.
Department-wide implementation of the peer observation and evaluation process began in January 2008. The content and process of the program and the content of the POET are described in more detail in a previous publication.7 All faculty members hired prior to 2008 were required to undergo peer evaluation of 1 lecture annually. All faculty members who received training were also asked to serve as peer observers. At the beginning of each calendar year, faculty members identified a lecture (first choice and alternate) for peer evaluation, as well as 3 possible observers from a list of trained faculty members. Matches were made based on observer availability and with an effort to distribute the workload. Newly hired faculty members were briefly oriented to the peer observation and evaluation process by the assistant dean for academic affairs and were to participate in peer observation and evaluation per departmental policy, but could not serve as peer observers.
The pre-implementation survey was conducted in 2007, before the peer observation and evaluation process was implemented. As indicated above, the results of the survey were used to shape the design, content, and implementation of the peer observation and evaluation process and POET form.
The invitation to participate in the program evaluation survey was sent to pharmacy practice faculty members at the end of the second implementation year, with the objectives of establishing the degree of adherence to the peer observation and evaluation policies and procedures, faculty members’ experience with the peer observation and evaluation program, and their attitudes regarding peer observation and evaluation. Faculty members were asked about their overall participation in peer observation and evaluation, both as instructors and as observers. For each class session evaluated, instructors were asked about adherence to the peer observation and evaluation processes, the type and value of the feedback received, the changes that they made or planned to make in the class session as a result of the feedback, and evidence of changes in student evaluations or student learning. To control the length of the survey instrument, questions were repeated for a maximum of the 2 most recent lectures evaluated, even though some faculty members had more than 2 lectures evaluated during the study period. Attitude questions, similar to the pre-implementation survey questions, were repeated. Faculty members who had served as observers were asked questions about adherence to the peer observation and evaluation policies and the impact of the program on their time commitment and workload. Recommendations for improving the program and providing additional training were also solicited.
EVALUATION AND ASSESSMENT
Faculty members were asked to use Survey Monkey (SurveyMonkey.com, LLC, Palo Alto, CA) to complete both the pre-implementation and program evaluation survey instruments. Participation in the surveys was anonymous and no unique identifiers were used; therefore, individual responses could not be matched between the 2 survey instruments. The Northeastern University Institutional Review Board approved evaluation of the program. An independent-samples Mann-Whitney U test was used to compare Likert scale responses on the pre- and post-survey instruments. Responses were converted numerically as follows: strongly agree = 4, agree = 3, disagree = 2, strongly disagree = 1, unable to comment = 0. Significance was set at p < 0.05.
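For readers who wish to reproduce this type of analysis on their own survey data, the following minimal sketch illustrates the numeric coding and the independent-samples Mann-Whitney U test in Python using scipy.stats.mannwhitneyu. The response vectors shown are hypothetical placeholders; the raw survey data from this study are not reproduced here.

from scipy.stats import mannwhitneyu

# Numeric coding used in the study: strongly agree = 4, agree = 3,
# disagree = 2, strongly disagree = 1, unable to comment = 0.
CODING = {"strongly agree": 4, "agree": 3, "disagree": 2,
          "strongly disagree": 1, "unable to comment": 0}

# Hypothetical pre- and post-implementation responses to one attitude item
# (illustrative only; not the study's actual data).
pre = [CODING[r] for r in ["agree", "disagree", "unable to comment",
                           "agree", "strongly agree", "disagree"]]
post = [CODING[r] for r in ["strongly agree", "agree", "agree",
                            "strongly agree", "agree", "disagree"]]

# Independent-samples (unpaired) Mann-Whitney U test, two-sided,
# with significance set at p < 0.05 as in the study.
statistic, p_value = mannwhitneyu(pre, post, alternative="two-sided")
print(f"U = {statistic:.1f}, p = {p_value:.3f}, significant = {p_value < 0.05}")

Because the surveys were anonymous and responses could not be matched between instruments, an unpaired test such as this one is appropriate; a paired test (eg, Wilcoxon signed-rank) would require linked pre/post responses from the same individuals.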
In 2007, 19 faculty members (76% of eligible faculty members) responded to the pre-implementation survey instrument; of these, 7 were members of the task force that developed the peer observation and evaluation process and instrument.
Eighty-six percent of respondents expressed interest via the survey instrument in having a peer evaluate their lecture in the next year, and 63% wanted to attend a formal training session on peer observation and evaluation. Seventy-four percent of respondents strongly agreed or agreed that peer observation of classroom teaching should be mandatory for all faculty members, and 90% strongly agreed or agreed that the same process should be used for tenure-track/tenured and non-tenure-track faculty members. Forty-seven percent of respondents strongly agreed or agreed that peer evaluation results should be shared with the department chair, and 58% strongly agreed or agreed that the results should be included as part of yearly performance evaluation documents. Over 80% strongly agreed or agreed that the appropriate frequency of peer evaluation was once per year. Faculty members were asked to rate their desired level of feedback in 5 areas using a scale of 1 (do not need feedback) to 4 (need the most feedback). Assessment of learning received an average rating of 2.8, while the use of active learning, lecture content, presentation style, and classroom atmosphere each received an average rating of 2.6. Sixty-three percent of respondents reported having at least 1 lecture peer evaluated in the past 5 years, and 63% reported serving as a peer evaluator for a colleague during the same time period.
In 2010, 22 faculty members (76% of eligible faculty members) responded to the program evaluation survey instrument. Of these, 16 faculty members (73%) were pre-2008 hires and had been trained as peer observers. Twenty (91%) respondents reported participation in the peer observation and evaluation process. The 2 faculty members who were hired in 2008 and 2009 did not participate. Over 2 years, 39 observations took place (average per faculty member = 2 observations; range, 1-4). Because the survey asked respondents to report on the 2 most recent evaluations, faculty members reported the details of 32 independent peer observations (91.4% of allowed reports). Faculty adherence to 3 of the 4 steps of the process was high (100%, 100%, and 94% for steps 1 through 3, respectively); however, faculty members adhered to step 4, the post-student assessment discussion, only 47% of the time (Table 1). Faculty members adhered to the suggested timelines and meeting formats in the majority of observations; however, only 62% held the post-observation meeting within the recommended timeframe of 1 week.
Table 1. Pharmacy Practice Faculty Members’ Adherence to Components of a 4-Step Peer Observation and Evaluation Process (N = 32 Observations)
Table 2 summarizes the types of feedback observers provided to instructors and the resulting changes to teaching that respondents reported on the program evaluation survey. Most commonly (43% of observations), faculty members received balanced feedback addressing all aspects of lecture delivery and assessment. As a result, 87% of faculty members made changes to lecture content, teaching methods, assessment, or all elements of their teaching.
Table 2. Feedback Received From Faculty Peer Observation and Types of Changes Made by Faculty Members as a Result (N = 30)
All faculty members reported receiving a balance of positive and constructive feedback. One hundred percent agreed with the observer’s assessment of strengths, and 94% agreed with the observer’s assessment of areas for improvement. Overall, 72% reported that peer observation and evaluation made them more aware of their strengths, and 72% stated that it identified areas for improvement in their classroom teaching. Seventy-eight percent of faculty members strongly agreed or agreed that peer observation and evaluation gave them concrete suggestions for ways to improve their teaching, and 71% incorporated their reflection regarding peer evaluation into their annual performance review submission. Of those who participated in the training session, all agreed or strongly agreed that they received appropriate initial observer training, and all respondents expressed interest in reinforcement and further training. Finally, 89% strongly agreed or agreed that, overall, the benefits of the peer observation and evaluation process outweighed the effort that participation required.
Overall, 82% of faculty members responding to the program evaluation survey reported being trained (100% of those hired before 2008 and 33% of those hired after). Fourteen faculty members (88% of the 16 trained prior to implementation) reported serving as peer observers during the 2-year evaluation period. Sixty-four percent reported adhering to all 4 steps in the process and 71% to all timelines. The most frequently missed step was the post-student assessment discussion, with most faculty members listing workload/time issues as a barrier. On average, peer observers reported spending 4.3 hours on each peer observation and evaluation review.
On the pre-implementation survey, faculty members indicated that they were generally positive about peer evaluation; collectively, their opinions either stayed the same or improved after peer observation and evaluation training and participation (Table 3). For example, more faculty members strongly agreed or agreed that peer observation and evaluation positively impacted student learning (100% post-implementation vs. 61% pre-implementation), was a better measurement of teaching effectiveness than student evaluations (83% post-implementation vs. 63% pre-implementation), and would improve their ability to get promoted (67% post-implementation vs. 53% pre-implementation). Only a few faculty members thought that their peer reviewer needed content expertise to serve as a peer observer (22% post-implementation vs. 47% pre-implementation). None of the comparisons of item responses between the 2 survey instruments were statistically significant.
Table 3. Faculty Attitudes Towards a Peer Observation and Evaluation Program
DISCUSSION
The survey results were positive and encouraging, indicating that peer observation and evaluation was accepted, practiced, and adopted into the culture of the department as a formative tool for ongoing feedback to improve teaching. This new practice was reinforced by the high value faculty members placed on the program after the 2-year pilot project. The pre-observation meeting, classroom observation, and post-observation meeting were strongly adhered to, perhaps because of the perceived benefits of in-person, telephone, and/or e-mail feedback. Faculty members also reported that they received a balance of positive and constructive feedback, which further demonstrated adherence to the post-observation discussion process.
We also saw a change in attitudes towards peer observation and evaluation that we believe was influenced by program implementation. While no significant changes in faculty responses between the pre-implementation and post-implementation surveys were found, explained in part by the small sample size, we did see some positive trends. After the peer observation and evaluation program was implemented, 100% of faculty members agreed or strongly agreed that peer assessment positively impacts student learning, compared to 61% before the program was implemented. Interestingly, 33% of faculty members were unable to comment on that statement prior to program implementation, suggesting an initial lack of understanding of the role and benefits of peer assessment in teaching and learning. The percentage of faculty members unable to comment on the statement “peer assessment should be conducted by a colleague with content expertise” also fell from 16% pre-implementation to 5% post-implementation, with strongly disagree/disagree responses rising from 37% pre-implementation to 72% post-implementation. These changes suggest a positive perception of the peer observation and evaluation process by faculty members.
Previously, the department used a check-off evaluation form for faculty members who requested peer feedback; however, its use was not mandated, nor was formal observer training provided. Only 1 member of the department had received formal training before the department-wide training session. Data from the initial survey assisted the task force in developing a departmental policy mandating that all faculty members obtain formative peer evaluation of their classroom teaching once a year. Participation in peer observation and evaluation is documented on a form that is included in performance evaluation documents; however, the specific feedback from the observer to the instructor is not reported to anyone else.
Based on the survey results, the area where continued professional development is most needed is the final step of the process, the assessment of student learning. While time constraints and difficult timing at the end of the semester were cited as contributing factors to faculty members not completing this step, further analysis is needed to determine why this step is so difficult to accomplish, especially as the pre-implementation survey instrument indicated this was the area where faculty members wanted the most feedback.
An additional area for continued study is the type of change faculty members actually make to their teaching practices based on the feedback received as part of the peer observation and evaluation process. Changing a faculty member’s perception of how he/she teaches is complex. Having a faculty member incorporate self-reflection about his/her teaching, as well as peer feedback, into his/her perceptions about teaching is a key element of “teacher change” (ie, faculty members making changes to how they teach).9 Training may be needed to help peer observers learn how to elicit faculty members’ perceptions of their role in the classroom as part of the pre-observation meeting, as well as how to provide feedback that increases the proportion of faculty members who make changes to their teaching after meeting with their peer observer.
Additional efforts will be spent on discussing how our peer evaluation process and self-reflections can be triangulated with students’ evaluations and incorporated into faculty portfolios. All of the courses taught in the school undergo formative mid-semester feedback10 and summative end-of-semester course evaluations. However, many of the courses are taught by more than 1 faculty member, and depending on the lecture that a faculty member chooses for peer evaluation, he/she may or may not receive formal student feedback on it. Collecting lecture-specific student evaluations could be useful for some faculty members, while aligning peer evaluation with a course for which formal student feedback is provided could be useful for others.
This study has limitations, including its small sample size and the slightly different cohorts responding to the pre- and post-implementation survey instruments. Additionally, the data presented are self-reported, with no objective evidence to support improvements in faculty teaching and student learning. Estimating the impact of peer feedback on student learning outcomes is challenging, as there are variables affecting a student’s learning that cannot be controlled by a faculty member. Examples include the time of day when the class is offered, the size of the class, and the student’s previous knowledge of the material and preconceived notions about the lecture topic. These variables affect how and what a student will learn, making the assessment of student learning difficult. Also, this study included only pharmacy practice faculty members and not faculty members from the pharmaceutical sciences department. In addition, we do not believe that our process and tool would be useful outside of the classroom; a separate process and tools will need to be identified or developed for peer evaluations in experiential settings, laboratories, and seminars.
Our next steps are to continue reinforcing the ideas and concepts behind the peer observation and evaluation process, as well as to validate the quality of the process, thereby encouraging more participation and confidence in it. After we shared our experiences with colleagues in the Department of Pharmaceutical Sciences, they formed a small task force to adapt the peer observation and evaluation process and develop their own policies and procedures. The goal is to make the peer observation and evaluation program a school-wide practice. We also plan to develop a manual that will include the purpose and value of a peer observation and evaluation review and clear descriptions of the roles, responsibilities, and timelines involved in participation. As part of the manual, we will provide additional information about the importance of adhering to step 4 and its value to student learning. Additionally, we will continue to monitor faculty members’ perceptions of the role of peer assessment in the tenure and promotion process, as only 53% strongly agreed or agreed prior to implementation that participation in this process would improve their ability to get promoted, and 67% did so after implementation. We believe the improvement was small because of the short period between the survey instruments (2 years), the formative nature of the peer observation and evaluation program, and insufficient time for the impact of the program on faculty promotion to be assessed.
As the process moves out of the pilot stage, final policies will need to be developed, vetted, and approved by the faculty members. As confidence in the process grows, we will consider how to adapt it to provide a summative assessment of teaching excellence that can become part of the tenure and promotion process.
Our peer observation and evaluation process can be transferred to other programs seeking to implement formal peer evaluation, in pharmacy education or in other disciplines. Each program will need to consider its teaching philosophy and whether the POET would be applicable in its academic culture. While we believe the 4-step peer observation and evaluation process is universally applicable, each program should develop its own policies and procedures regarding whether the peer observation and evaluation is formative, summative, or both; who will participate as instructors and observers; the frequency of participation; and the nature of the training program.
CONCLUSIONS
Based on feedback from pharmacy practice faculty members, we conclude that our 4-step peer observation and evaluation program for large classroom teaching was comprehensive, feasible, and sustainable, and could be adapted by other colleges and schools of pharmacy, keeping in mind the many variables creating the climate for learning that differ from institution to institution. Although the first 3 steps were followed in a majority of cases, faculty members had the most difficulty complying with step 4 (the post-student assessment meeting). Through improved faculty training, including suggestions for adhering to this step, we believe the overall peer observation and evaluation process can foster quality teaching through peer mentorship and promote improved student learning in the large classroom setting.
ACKNOWLEDGEMENTS
The authors acknowledge Jennifer Kirwin, Thomas Pomfret, and Mark Douglass for their service on the original peer observation and evaluation task force.
Received September 23, 2011.
Accepted December 8, 2011.
© 2012 American Association of Colleges of Pharmacy