Abstract
Objective. To examine and summarize policies and procedures for peer evaluation of teaching/instructional coaching (PET/IC) programs within departments, schools, and colleges of pharmacy and to identify opportunities for improving these based on best practices.
Methods. A survey was sent to all Accreditation Council for Pharmacy Education (ACPE)-accredited pharmacy programs to collect information regarding procedures to support and evaluate PET/IC programs across institutions. Descriptive statistics were used to summarize the general features of PET/IC programs, and inferential statistics were used to make group comparisons based on institutional control (public, private) and institution age (0-10 years, older than 10 years).
Results. Surveys for 91 institutions were completed (response rate=64.5%). Most institutions (78.4%) reported having a PET/IC program. Most institutions with PET/IC programs reported using a combination of formative and summative evaluations (57.4%). The top purposes for PET/IC programs were faculty development (35.8%) and improving teaching (35.8%). Almost half of the PET/IC programs (46.3%) were mandatory for all faculty at the institutions. Most institutions (66.7%) had one standardized instrument used in their PET/IC program. Few institutions (11.9%) reported evaluating or being in the process of evaluating the effectiveness or success of their PET/IC program. Private institutions were more likely to incentivize observers than public institutions (17.1% vs 0%).
Conclusion. Overall, PET/IC programs are needed to assess and provide feedback to instructors about their teaching practices. While most institutions report having a PET/IC program, wide variability exists in how the programs are implemented. Opportunities exist for institutions to evaluate the effectiveness of their program and identify best practices.
INTRODUCTION
Effective teaching is critical to prepare the next generation of pharmacists. Therefore, the need to evaluate teaching effectiveness exists in all Doctor of Pharmacy (PharmD) programs at colleges and schools of pharmacy. As the role of pharmacists in health care continues to expand and with an emphasis on team-based, multidisciplinary models of care, pharmacy education must continue to evolve. Doing so requires committed, knowledgeable, and competent pharmacy educators.1-3 Studies indicate that although health professions educators are experts in the content they teach, they rarely receive training on effective teaching practice.1,2
Commonly, teaching effectiveness is assessed through student course evaluations. However, there is increasing recognition that this should not be the sole method of informing teaching effectiveness.4-6 Student course evaluations may be influenced by an instructor’s charisma or communication style rather than the instructor’s ability to improve knowledge, understanding, and application of material through evidence-based teaching strategies.7-9 Student course evaluations may also be affected by factors that are beyond the instructor’s control, such as logistical and scheduling issues and learner motivation, or even racial and gender bias.7,10
Peer evaluation of teaching offers another method to assess and provide feedback to instructors about their teaching practices for improving the quality of teaching and/or informing personnel decisions. Generally, two types of peer evaluation exist: formative evaluation and summative evaluation.9 Formative evaluation is designed to provide feedback that informs teaching with an intention for personal use; it is also known as instructional coaching, peer feedback, or peer observation. This contrasts with summative evaluation, which is designed to provide information that informs decision-making by the institution (eg, promotion, reappointment, merit awards) and is typically intended for use by others. This summative type of peer evaluation is often also known as peer review.
Within the Academy, several institutions have published on the implementation and impact of peer evaluation of teaching programs.11-18 Based on these publications and others in higher education, it is possible to develop suggested peer evaluation of teaching process considerations (Table 1).11-19 However, as neither the Accreditation Council for Pharmacy Education (ACPE) nor the American Association of Colleges of Pharmacy (AACP) provides guidance on recommended components and processes for peer evaluation of teaching, there exists variability in the presence, purpose, and operationalization of peer evaluation programs. The purpose of this research was to examine and summarize policies and procedures for peer evaluation of teaching within departments, schools, and colleges of pharmacy and to identify opportunities for improvement of such programs based on best practices reported in the literature.
Table 1. Suggested Considerations for a Peer Evaluation of Teaching/Instructional Coaching Process
METHODS
Using our expertise and the results of a literature search focused on best practices in peer evaluation of teaching and instructional coaching, we developed a web-based survey (Qualtrics International Inc). For the purposes of the study, we used the terms peer evaluation of teaching (PET) and instructional coaching (IC) interchangeably. Peer evaluation of teaching/instructional coaching (PET/IC) was defined as the review of teaching performance by colleagues, usually in the same or a similar discipline, with the purpose of assessing and improving the quality of teaching.20
As part of the survey development, we identified objectives for the study and created items related to each objective. These objectives included, first, identifying the breadth and depth of ongoing peer evaluation of teaching/instructional coaching across institutions; second, determining common procedures to support such evaluation/coaching; and, third, determining whether institutions have evaluated the success and impact of such programs on teaching and learning. Our research team generated items, iterated to improve item clarity, and met to finalize the survey. After the team agreed upon the final survey instrument, five faculty at different ranks at two institutions pretested the questionnaire and provided feedback. The final survey instrument used skip logic and consisted of a maximum of 41 questions plus two demographic questions. Definitions were given for terms, including peer evaluation of teaching/instructional coaching, formative evaluations, and summative evaluations, to increase the consistency in responses.
Northeastern University Institutional Review Board deemed this research exempt. In October 2020, the survey invitation was emailed to 141 pharmacy program assessment leads. They were asked to contact investigators if the survey should be sent to another individual and to complete only one survey per program, consulting with colleagues as needed. A PDF of the questionnaire was provided to assist with this request. Due to COVID-19 disruptions, the survey remained open through March 2021, with numerous Qualtrics-based reminders, emails, and phone call follow-ups to attain a satisfactory response rate. Responses were linked to pharmacy programs but analyzed collectively. All survey items are described using frequency and percentage. Differences according to institutional control (ie, publicly controlled vs privately controlled) and institution age (ie, established 10 years ago or less vs more than 10 years ago) were explored using chi-square tests, with the Fisher exact test where appropriate (ie, expected cell count <5). Statistical significance was established at p<.05.
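The small-cell comparison described above can be sketched in a few lines of Python. The following is a minimal illustration of a two-sided Fisher exact test for a 2x2 table (the function name is ours, and the example table mirrors the private vs public incentive comparison reported in the Results); it is not the analysis code used in the study:

```python
from math import comb

def fisher_exact_2x2(table):
    """Two-sided Fisher exact test for a 2x2 contingency table.

    Illustrative pure-Python helper: sums the hypergeometric
    probabilities of every table with the same margins that is no
    more probable than the observed one (the same two-sided
    convention scipy.stats.fisher_exact uses).
    """
    (a, b), (c, d) = table
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def prob(k):
        # P(top-left cell = k) under fixed row and column totals
        return comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # small tolerance so floating-point ties still count as ties
    return sum(p for p in (prob(k) for k in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Example table (from the Results): 6 of 37 private vs 0 of 32 public
# programs incentivized peer observers.
p = fisher_exact_2x2([[6, 31], [0, 32]])
print(round(p, 2))  # 0.03
```

With cell counts this small, the chi-square approximation is unreliable, which is why the exact test is substituted; the computed p value reproduces the p=.03 reported for the incentive comparison.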
RESULTS
Surveys for 91 institutions were completed (response rate=64.5%). Most were publicly funded (n=49, 54%), and the others were privately funded (n=42, 46%). The years schools were established ranged from 1823 to 2017. Most institutions (n=69, 78.4%) had a PET/IC program, with fewer that were developing a program (n=4, 4.5%) or did not have one at all (n=15, 17%). Institutions with PET/IC programs (designated as “n”) were proportionally representative of the institutional control for all 141 US pharmacy schools (designated as “N”) (publicly funded, n=32, 46% vs N=68, 48% and privately funded n=37, 54% vs N=73, 52%; p=0.80) and representative of the age of all schools (up to 10 years, n=7, 10% vs N=18, 13% and 10+ years n=62, 90% vs N=123, 87%; p=0.58).
As seen in Table 2, most institutions with PET/IC reported using a combination of formative and summative evaluations (n=39, 57.4%). Fewer used only formative evaluations (n=19, 27.9%) or only summative evaluations (n=10, 15.0%). The top purposes for having PET/IC programs were faculty development (n=24, 35.8%) and improving teaching (n=24, 35.8%). Less common purposes consisted of meeting requirements for tenure and promotion (n=14, 20.9%) and improving student learning (n=5, 7.5%). Fulfilling requirements for accreditation and providing data for merit awards were not considered primary purposes for PET/IC programs. Most institutions did not have additional centers or services at the university to support PET/IC (n=52, 77.6%). Most also did not make any changes to their process/policy during COVID-19 (n=41, 60.3%).
Table 2. Characteristics of PET/IC Programs at US Schools and Colleges of Pharmacy
Of those institutions with PET/IC programs, almost half of the PET/IC programs were mandatory for all faculty at the institution (n=31, 46.3%). Of the 31 institutions that required faculty participation in PET/IC programs, over one-third (n=12, 38.7%) stated there were consequences for not participating. Nearly a quarter (n=16, 24.2%) of the institutions reported having a unified method of instruction. In most institutions, faculty picked when their class would be observed (n=53, 79.1%). When asked to indicate core aspects of the PET/IC review process, institutions most commonly responded with classroom observations (n=67, 97.1% of 69) and post-class observation meetings (n=39, 56.5%). Thirty institutions (43.5%) reported including a pre-class meeting in their process, 30 (43.5%) included self-reflection by the instructor, and only 18 (26.1%) included a post-assessment meeting. The PET/IC review processes most often included review of instructional materials (eg, handouts, presentation slides, assignments, homework) (n=60, 87%) and student reactions/perceptions during peer evaluation (n=29, 42%), and they rarely included review of a teaching portfolio (n=9, 13%) or results of assessments (ie, student performance) (n=13, 18.8%).
Regarding PET/IC instruments, most PET/IC programs had one standardized instrument (n=44, 66.7%), while others had multiple standardized (n=10, 15.2%) or no standardized (n=12, 18.2%) instruments. Of those institutions that had at least one instrument, most developed their own (n=29, 53.7%). Nine institutions (16.7%) reported validating their instrument. Nine out of 10 institutions reported not incentivizing faculty to conduct peer observations (n=59, 90.8%).
When asked to select all training options offered, participants most commonly reported that no formal training process was available to peer observers (n=47, 68.1%). When asked to select all methods of sharing observation results, participants most often indicated that they shared the completed instrument (n=51, 73.9%), engaged in a verbal discussion about the results (n=46, 66.7%), and provided a letter summarizing the results (n=25, 36.2%).
The primary responsibility for overseeing the processes associated with PET/IC programs (eg, communication of policies) was most often held by department chairs (n=29, 43.3%). As indicated by write-in responses, other faculty committees, faculty administrators, and staff members frequently oversaw evaluation processes as well (n=23, 34.3%). Chi-square results indicated that privately controlled institutions incentivized peer observers more frequently (n=6) than publicly controlled institutions (n=0, p=.03). Eight institutions (11.9%) reported having evaluated the effectiveness or success of their PET/IC program, and another eight (11.9%) were in the process of doing so. In other words, most institutions (n=51, 76.1%) had not evaluated the effectiveness of their PET/IC program.
DISCUSSION
To prepare faculty to meet the dynamic needs of the student population and to provide a premier student experience, schools and colleges of pharmacy must consistently evaluate their teaching effectiveness and quality. Over the decades, PET has become a prevalent way for faculty to measure and improve their teaching quality to better meet the needs of students, especially in countries with the most widespread use of PET, namely the United States, Australia, and the United Kingdom.20 Because PET has become an additional component of the comprehensive assessment of teaching (in addition to other measures of teaching effectiveness),21 teaching has moved from a singular to a community enterprise for improvement.22 While ACPE Standard 10 requires that the curriculum be delivered via teaching methods that incorporate strategies to address the learning needs of students and facilitate the achievement of course expectations and outcomes, it does not mandate a quality assurance process for faculty to provide feedback on the teaching and learning process.3
Our study found that most institutions have a PET/IC program. However, we found broad variability in PET/IC programs across the Academy. While this may be appropriate because different schools have different pedagogical models and incentive structures for faculty, the lack of a universal standard for how to analyze and foster peer review is also potentially problematic, as it may result in a lack of awareness about whether instructors are using evidence-based teaching practices. Thus, opportunities exist for the Academy to improve PET.
Historically, PET has been applied using a formative or summative approach in higher education. In our study, most institutions with PET/IC programs reported using a combination of formative and summative evaluations. Both assessment approaches to PET have been shown to help faculty reflect on their teaching, increase their confidence, feel less isolated, enhance student learning experiences, improve their teaching, create community and collegiality, and critically reflect upon the social constructs that play a role in teaching.23-30 Thus, PET programs should be viewed as a holistic process where pedagogy is able to be discussed, critiqued, transformed, and preserved.
In pharmacy education, limited literature has aimed to establish best practices for PET, but other disciplines have published guidelines. Successful PET programs need an explicit framework to ensure success. Evaluations should incorporate appropriate evidence and processes to ensure fairness, consistency, and the reliability of data.31 Evaluation methods should be reflective of the college’s mission, values, and practices. Rubrics and other assessment methods should be identified and validated so that the measure can accurately assess teaching impact and allow faculty to track their progress for continual improvement of instruction. This is a particular area in which the pharmacy Academy can improve, as only 16.7% of institutions surveyed reported using a validated instrument in their PET/IC process. Reynolds and colleagues suggested that rubrics should be developed to assess four distinct criteria: disciplinary expertise, design and development skills, instructional practices and performance, and teaching environment.32 Each criterion is further broken down into subcategories with descriptive statements.
Many higher education institutions provide a central teaching and learning center to support faculty instruction regardless of discipline. However, colleges of pharmacy are encouraged to develop their own center that reconnects teaching to the discipline. By doing this, efforts can be made to value and reconnect teaching as a vital part of the student experience by engaging faculty in the scholarship of their profession.23,33
After reviewing the literature, we have developed some suggested best practices for creating and maintaining a PET program, in conjunction with the steps recommended by Trujillo and colleagues, outlined in Table 1.34 Reflection and feedback should be incorporated and seen as a valuable part of the review process, preferably by including a meeting to foster discussion rather than by only providing written feedback. Feedback should be delivered in a nonthreatening manner, and there should be sufficient time to discuss findings.35 Creating this safe space will encourage faculty to experiment with new and innovative ideas without feeling threatened, anxious, or embarrassed and without causing negative internal dialogue.35 For example, some programs make PET as transparent as possible by video recording the reviewee as well as the pre- and post-observation discussions between the evaluators and the faculty member being reviewed.36 Other programs allow the faculty being evaluated to observe and assess their own teaching prior to the post-review meeting, giving them the opportunity to identify challenges and solutions, which creates the conditions for a two-way dialogue and for the feedback session to be more conversational.29 In other programs, feedback is provided only to the faculty member being evaluated, who then has the choice to share the feedback with others (eg, their department chair).19 In our study, it was apparent that in many institutions, opportunities exist to improve PET by enhancing the feedback provided. This can be done by incorporating additional elements as part of the process, such as a pre-observation meeting and consistent timelines and expectations for post-observation meetings and feedback. Additionally, programs should consider incorporating the review of assessments as part of the process to ensure the quality of both teaching and learning.
Few formal training programs exist that prepare faculty to conduct peer reviews. Most faculty receive informal mentoring from committee members or staff.36 In some instances, more experienced faculty are selected as observers.37,38 Regardless of previous teaching experience or type of faculty appointment, peer reviewers should receive guidance and training to better understand pedagogical approaches. As each institution is unique, we recommend that each college of pharmacy and department encourage a formal training process based on best practices and establish a culture where performance measures are created through faculty consensus, ongoing mentorship is provided, and evaluations are linked to meaningful outcomes.36 In our study, only 31.9% of the institutions reported formal training for peer observers. Peer reviewers should receive appropriate training and orientation on how to rate peers against a professional standard, collect evidence, and provide feedback. Training should be developed by senior faculty in conjunction with educational experts. All of these measures help to increase trust, enhance mutual respect, account for sociocultural differences, and minimize bias.
The lack of a thoughtful and comprehensive approach toward evaluation standards can often result in biases from evaluators. This can present disadvantages to historically underrepresented groups where structural and institutional barriers exist. To combat this, the reliability of PET can be improved by using standardized evaluation tools, training evaluators, engaging in reflections and discussions, and intentionally recognizing and celebrating sociocultural norms, values, and conventions. Some literature recognizes the impact of sociocultural contexts in the peer review process. Sociocultural perspectives should be recognized and discussed during the process, as they contribute to the department and college’s norms, rules, and attitudes as well as shape the ways that faculty communicate, support, and engage with each other and students. A climate built on trust, support, and common goals facilitates open communication to allow for peer review to be successful.37 As institutions have increasingly been encouraged to acknowledge and separate themselves from parts of their discriminatory history, the review process should articulate, embrace, value, and respect faculty who have different cultural intersections within the department or institution. Incorporating peer reviewers who belong to historically underrepresented groups is one way to mitigate patterns of discrimination that may appear in the review process.
Our study revealed that very few institutions have evaluated the effectiveness and impact of their PET/IC programs, which remains one of the largest opportunities to identify how these programs can have a meaningful impact on teaching and learning across the Academy. At one institution that did evaluate its PET/IC program, the evaluation first involved administering a pre-implementation survey assessing faculty needs and attitudes related to peer evaluation. Then, two years after implementation, the survey was repeated and additional questions were asked regarding adherence to peer observation policies and procedures, feedback received, and impact on teaching.13 Overall, little is known about the direct connection between PET/IC programs and student attainment of knowledge and skills. Along with evaluating PET/IC programs, we also suggest that PET be used in combination with and to supplement other teaching effectiveness methods, such as student evaluations, self-assessments, and teaching artifacts, as teaching is multidimensional and cannot be captured by one or two measures.
Several important limitations to our study exist. First, out of 141 institutions contacted, 91 completed the survey for a response rate of 64.5%. Despite having a response rate of less than 80%, our respondents were proportionally representative of the institutional control (p=0.80) and age of pharmacy schools (p=0.58). Second, the study was limited to the responses obtained from the respondent for each institution and may not reflect all elements of that institution’s PET/IC program accurately, particularly those aspects that the respondent was not aware of. To mitigate this potential limitation, when the survey was sent to respondents, they were asked to consult others as necessary and/or forward the survey to another individual more knowledgeable of the PET/IC program at their institution to complete, as appropriate. Third, the survey did not meaningfully distinguish between PET and IC. Future research should explore the differentiation between summative and formative approaches to improving teaching effectiveness in pharmacy education.
CONCLUSION
The use of PET/IC programs is essential to determining the effectiveness of teaching practices across the pharmacy Academy. While most pharmacy schools have a PET/IC program, wide variability within these programs exists, and gaps are evident in the use of PET/IC best practices. Many institutions could evaluate the effectiveness of their existing programs to implement best practices that lead to meaningful impact on teaching and learning. Additional research is needed to further the development of PET/IC programs and the instruments used to ensure that teaching practices are effectively preparing graduates within pharmacy education.
- Received September 9, 2021.
- Accepted December 23, 2021.
- © 2022 American Association of Colleges of Pharmacy