“Student Evaluations: Feared, Loathed and Not About to Go Away” was the headline of a recent article in the Chronicle of Higher Education. The article opens by describing a variety of techniques that faculty members use to increase student ratings of their teaching, such as bringing homemade cookies to class, offering low-stakes assessments prior to the evaluation, and not providing adequate time to complete surveys, all to ensure they achieve the desired scores on their faculty evaluations.1 Pharmacy education is not immune to the “fear and loathing” of student evaluations. The utility of faculty and course evaluations completed by students has long been a contentious issue at many universities. However, these data are often considered in annual performance evaluations and promotion and tenure decisions, so their impact is not trivial.
There are 3 central issues commonly discussed regarding students' evaluations of faculty members and courses. First, faculty members routinely cite the low response rate and question the credibility of the data. Survey burden may be a real issue for students and something worth considering. Our society is rife with surveys. For example, a weekend getaway trip may elicit several surveys: from the hotel, TripAdvisor, the airline, and the restaurants where you dined. In the academic world, faculty members often use their students as research subjects, and surveys are a useful and frequently used method of collecting data. Faculty members may also use surveys as a teaching tool. Then the end of the term arrives, and students are asked to complete faculty and course evaluations, which are themselves most often surveys.
At my institution, which uses a quarter system for its academic calendar, a typical student may be asked to complete 19-20 course evaluations and 57-60 faculty evaluations annually. Olson presents the thesis that the overuse of surveys diminishes their effectiveness as a data collection tool.2 Simply put, our students are tired of filling out surveys. Olson goes on to illustrate that nonresponse error stemming from low response rates may be more damaging to the credibility of results than a small sample size.2 Our common practice of surveying all students in all classes within a short window of time may produce survey fatigue, and students may simply stop completing surveys. A better approach may be to apply sound sampling estimation and survey a representative sample of students each academic term.
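As a rough sketch of that sampling approach (the cohort size, confidence level, and margin of error below are hypothetical illustrations, not figures from this commentary), a standard sample-size calculation for estimating a proportion, with a finite population correction for a small class, might look like:

```python
import math

def sample_size_for_proportion(population, margin_of_error=0.05,
                               z=1.96, p=0.5):
    """Sample size needed to estimate a proportion within the given
    margin of error at ~95% confidence (z = 1.96), applying a finite
    population correction. p = 0.5 is the most conservative choice
    because it maximizes the required sample."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Finite population correction for a small class or cohort
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# A hypothetical class of 120 students:
print(sample_size_for_proportion(120))  # 92
```

Note that for small classes the correction still demands most of the cohort; sampling yields its largest savings across a full program or large course, where surveying a subset each term could meaningfully reduce survey burden.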
Typically, faculty and course evaluations employ a Likert scale. Students are asked to select one of a range of responses to a statement, and these data are often reported as means. There is a long-standing debate about whether calculating averages of ordinal data is statistically valid. Classic biostatistics textbooks simply state that “common arithmetic cannot be performed on ordinal data in a meaningful way.”3 Others argue that parametric statistics can be applied to ordinal data with robust results.4 While there may be no conclusive answer to this debate, presenting the data in multiple ways (median, mode, mean, interquartile range, and standard deviation) may provide a more comprehensive picture of what the data are telling us.
Finally, there is the question of who is best positioned to evaluate effective teaching. No one will dispute that feedback is critical to improvement, but are students the best source? Some argue that students are in the best position to evaluate effective teaching; Benton and Cashin provide ample evidence of the reliability and validity of student evaluations of teaching.5 On the other hand, Stark and Freishtat state: “We will never be able to measure teaching effectiveness reliably and routinely.” They offer a menu of alternatives, ranging from peer observation and review of teaching materials for currency and relevancy to review of student outcomes and assessments.6 In other words, a more complete picture of effective teaching may be gained by triangulating data from multiple perspectives through multiple means.
Effective teaching is essential to effective learning and to our mission of educating tomorrow's pharmacists. Effective teaching also is multifaceted and cannot be measured by reducing it to a single number at the end of an academic term. Time, place, subject matter, and multiple student variables all affect the learning environment, and the teacher is just one part of that environment. Constructive and systematic feedback regarding our teaching and our courses provides us with a framework for our own continuous professional development. We need to ensure that this feedback is reliable, valid, and from multiple perspectives, including those with expertise in effective teaching.
Pharmacy education has demonstrated a steady expansion of teaching evaluation, using various approaches and perspectives.7 We need to continue to share best practices, use multiple perspectives, choose appropriate statistical analysis, learn from esteemed colleagues, and consider alternative ways to collect data from students, such as using samples. A multi-dimensional approach to evaluating effective teaching may truly provide us with a more complete picture of our instruction.
© 2015 American Association of Colleges of Pharmacy