Research Article

Predictive Relationships Between Students’ Evaluation Ratings and Course Satisfaction

Catherine B. Terry, Keri L. Heitner, Leslie A. Miller and Clea Hollis
American Journal of Pharmaceutical Education April 2017, 81 (3) 53; DOI: https://doi.org/10.5688/ajpe81353
Catherine B. Terry, Lipscomb University, Nashville, Tennessee
Keri L. Heitner, All Aspects Research, Amherst, Massachusetts
Leslie A. Miller, University of Phoenix, Tempe, Arizona
Clea Hollis, University of Phoenix, Tempe, Arizona

Abstract

Objective. To assess the reliability and validity of course evaluation data.

Methods. A correlational study was conducted using archival data from pharmacy student course evaluations. We analyzed bivariate relationships between eight course-rating items and an overall rating item, as well as the extent to which course type, course level, and grade point average moderated these relationships.

Results. Significant positive bivariate relationships were found between each of the eight course evaluation rating variables and the overall course rating variable. Pharmacy practice course type significantly moderated the relationships between the predictor variables and the criterion variable.

Conclusion. Pharmacy school administrators should consider individual course evaluation item ratings when making decisions regarding course offerings or faculty promotion and tenure.

Keywords
  • curriculum evaluation
  • program evaluation
  • instructor evaluation
  • validity of course evaluations
  • reliability of course evaluations

INTRODUCTION

Pharmacy education has undergone a major transformation since the 1980s with the adoption of the doctor of pharmacy (PharmD) degree.1 The Accreditation Council for Pharmacy Education (ACPE) is responsible for accrediting professional programs in pharmacy. The ACPE requires college and school of pharmacy administrators to evaluate the curriculum systematically and sequentially.2 Institutional administrators must evaluate the structure, content, and organization of each course offered and must use and analyze course evaluation data to aid in continuous improvement of the curriculum. By fostering data-driven continuous quality improvements of the curriculum, institutional administrators are able to meet ACPE standards.

To maintain academically strong, effective programs that train student pharmacists to respond to the changing health care needs of society, institutional leaders must meet growing demands for accountability.2 The accreditation standards set forth by regional accreditors, the ACPE, and the US Department of Education require transparency and accountability to all stakeholders. The ACPE provides guidance through its accreditation standards and descriptive guidelines.

Student course evaluations are often the only data source available regarding a course’s effectiveness.3-8 Yet faculty members and administrators often ignore or misunderstand course evaluation results.5,9-12 Administrators use course evaluations in almost all higher education institutions but often use these assessment tools incorrectly.13 Despite faculty protests, higher education administrators use student evaluations in making administrative decisions, including promotion, tenure, termination, and course offerings.13-17 Although higher education administrators have succeeded in developing processes to measure course effectiveness through course evaluations, their understanding of how to use these tools, and of who should use the data to make judgments and decisions, is poor.5,9,12,18,19

Moreover, most previous course evaluation research has focused on traditional undergraduate degree programs and settings. Pharmacy schools have a different structure, a mixture of didactic, laboratory, and experiential courses focused on practice and clinical education, which introduces unique influences on evaluations.

Medical and allied health educational administrators often look to the general higher education literature for guidance when developing course evaluations.20 Because course evaluations in medical education serve to assess teaching effectiveness, medical education administrators must decide whether to use a global assessment of teaching or a multidimensional scale, and the medical and allied health education literature has addressed both approaches.20 The same choice between global and multidimensional assessment is also important in pharmacy education.

No published research exists on the predictive relationship between the identified course evaluation rating variables and the overall course evaluation rating. Although many studies have focused on the relationship between course evaluations and student grades, none has identified which components of a course predict student pharmacists’ overall course rating.21-28

Because higher education administrators and instructors are likely to continue using the overall course evaluation score as a single rating for comparing courses, knowing the extent to which the course evaluation variables in the current study predict overall course ratings could be useful. By identifying which aspects are most predictive, administrators and faculty members can better understand student course ratings and use them to improve courses, course offerings, and the faculty promotion and tenure processes.

No published studies address which extraneous variables moderate student course evaluation ratings in a professional school of pharmacy. One previous study demonstrated that a single rating of teaching effectiveness is reliable for identifying instruction that needs improvement.29 Most previous research has focused on traditional undergraduate degree programs and university settings.30,31 Medical schools, like pharmacy schools, have a different structure from traditional degree programs and therefore introduce unique influences on evaluations.20,31,32

Beckman and colleagues noted that course evaluations in medical and allied health education should cover two areas: interpersonal teaching and clinical teaching.32 In contrast, several scholars determined that a multidimensional approach is most appropriate for evaluating medical and allied health education.33-35 Within medical and allied health education, multiple scholars have described the characteristics of instructional effectiveness and how best to measure them.20 Instruments such as the Course Experience Questionnaire are used in medical and allied health settings to produce valid and reliable results,20 and some instruments in the field focus on the effectiveness of teaching technical skills.20 Litzelman and colleagues designed an instrument for evaluating clinical education organized around seven categories of teaching effectiveness: establishing a positive learning environment, controlling the teaching session, communicating goals to the learner, promoting understanding, evaluating achievement of goals, providing feedback to the learner and promoting self-directed learning, and promoting retention.33

The most common technique for measuring course effectiveness in colleges of pharmacy is student pharmacists’ ratings of courses.36,37 Although pharmacy education administrators focus on course evaluation items collectively, they should not ignore variables that moderate those items.20 If a factor such as course type significantly contributes to more favorable course evaluation ratings, pharmacy administrators should compensate for the lower scores of less popular course types when using course evaluation ratings to evaluate faculty members.37

The goal of the current study was twofold: first, to use archival data to assess evidence of the course evaluation instrument’s reliability (precision of measurement) and validity (fitness for intended use); second, to quantify the relationships among the predictor, criterion, and moderating variables through statistical analysis of those data. The method was quantitative; the research design was descriptive correlational.

METHODS

A correlational research design supported examining bivariate relationships among numerical variables measured on the course evaluation instrument in the archival data set (Figure 1), as well as the influence of the moderator variables on those relationships.

Figure 1. College of Pharmacy Course Evaluation Instrument.

Using a quantitative descriptive correlational design, we examined the bivariate relationships between students’ ratings of eight predictor variables (quantity of material covered, relevance of material, textbook selection, intellectual challenge, effort put into the course, interest and stimulation of course assignments, appropriateness of assigned coursework for meeting the learning objectives, and grading method) and one criterion variable (overall course evaluation score). We also examined the extent to which type of course, grade point average (GPA), and course level moderated the relationship between the predictor variables and the overall course evaluation.

We used Spearman’s rho to test eight null hypotheses pertaining to the bivariate relationship between each predictor variable and the overall course evaluation rating (criterion variable), and ordinal logistic regression to test a null hypothesis pertaining to the extent to which course type, GPA, and course level moderated the relationship between the predictor variables and the overall course evaluation.
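To make the analytic plan concrete, the following is a minimal sketch of how these two analyses could be carried out with open-source statistical tools. The data file, the evals DataFrame, and all column names are hypothetical placeholders rather than the study’s actual materials, and the original analysis may have used different software.

```python
# Minimal sketch of the two analyses; `evals` and all column names are
# hypothetical placeholders, not the study's actual data set.
import pandas as pd
from scipy.stats import spearmanr
from statsmodels.miscmodels.ordinal_model import OrderedModel

predictors = [
    "quantity_of_material", "relevance_of_material", "textbook_selection",
    "intellectual_challenge", "effort_put_into_course", "assignment_interest",
    "coursework_appropriateness", "grading_method",
]

evals = pd.read_csv("course_evaluations.csv")  # hypothetical archival extract

# Spearman's rho between each predictor item and the overall rating
# (one test per null hypothesis).
for item in predictors:
    rho, p = spearmanr(evals[item], evals["overall_rating"])
    print(f"{item}: rho={rho:.2f}, p={p:.3f}")

# Ordinal logistic regression of the overall rating on the eight items plus
# dummy-coded moderators; drop_first leaves one attribute per moderator as
# the reference category (its parameters are set to zero).
X = pd.get_dummies(
    evals[predictors + ["course_type", "course_level", "gpa_band"]],
    columns=["course_type", "course_level", "gpa_band"],
    drop_first=True,
).astype(float)
y = evals["overall_rating"].astype(int)  # ordinal rating codes

result = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(result.summary())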

This study used archival data from all 3,697 course evaluations completed at Lipscomb University College of Pharmacy in Tennessee between fall 2008 and spring 2012. The course evaluation instrument was administered electronically at the end of every semester, and pharmacy students were required to complete all course evaluations before being allowed to register for the following semester. The archival data consisted of private records from alumni, originally collected by staff members at the college using a course evaluation instrument completed by students enrolled in the college of pharmacy. Institutional review board approval was obtained before the study began.

The college provided the GPAs for the students who had completed the course evaluations. College personnel stripped the data set of student names and identification numbers and assigned unique identifiers matching the course evaluations to the grades before providing us with access to the data. We obtained information regarding course type and course level from the academic catalog.
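As an illustration only, a de-identification step of the kind described above might look like the following sketch. The file and column names are hypothetical, and the college’s actual procedure was not published.

```python
# Illustrative sketch of the de-identification workflow; file and column
# names are hypothetical, not the college's actual procedure.
import uuid
import pandas as pd

evaluations = pd.read_csv("raw_evaluations.csv")
gpas = pd.read_csv("student_gpas.csv")

# Map each real student ID to an opaque identifier, then drop direct
# identifiers so only the unique key links evaluations to grades.
id_map = {sid: uuid.uuid4().hex for sid in evaluations["student_id"].unique()}
for df in (evaluations, gpas):
    df["anon_id"] = df["student_id"].map(id_map)
    df.drop(columns=["student_id", "student_name"], errors="ignore", inplace=True)

deidentified = evaluations.merge(gpas, on="anon_id", how="left")
deidentified.to_csv("deidentified_evaluations.csv", index=False)
```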

In total, 298 student pharmacists’ evaluation ratings across 39 different courses were analyzed; 13 courses were first-year didactic courses, 13 were second-year didactic courses, and 13 were third-year didactic courses. Twelve of the didactic courses covered basic science content, and the other 27 covered pharmacy practice content.

RESULTS

The results indicated the extent to which ratings of individual course aspects predicted the overall course evaluation rating.

Each of the eight course evaluation items was positively and significantly correlated with the overall course evaluation item (Table 1). A power analysis conducted with G*Power 3.1.9.2 (Heinrich-Heine-Universität Düsseldorf) indicated that a sample of 59 would suffice to detect an effect size of r=.3 for logistic regression at a standard alpha level of .05 with a recommended power of .80, supporting the adequacy of the current study’s sample size.38

Table 1. Bivariate Correlations Between Each Predictor Variable and the Overall Course Rating Criterion Variable
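The reported sample size of 59 comes from G*Power’s logistic regression module, whose internal procedure is not reproduced here. As a rough cross-check only, the required sample for detecting a bivariate correlation of r=.3 at the same alpha and power can be approximated with the Fisher z transformation, as sketched below; this is a different test, so it yields a somewhat larger estimate and is not expected to match the reported figure.

```python
# Rough cross-check of the power analysis using the Fisher z approximation
# for a two-tailed test of a bivariate correlation. This is NOT the G*Power
# logistic regression procedure the paper used, so it does not reproduce n=59.
import math
from scipy.stats import norm

alpha, power, r = 0.05, 0.80, 0.30
z_alpha = norm.ppf(1 - alpha / 2)      # two-tailed critical value (about 1.96)
z_beta = norm.ppf(power)               # about 0.84 for 80% power
c = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z transform of r

n = ((z_alpha + z_beta) / c) ** 2 + 3
print(f"approximate required n = {math.ceil(n)}")  # about 85 under these assumptions
```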

Table 2 presents the results of the ordinal logistic regression analyses. GPA bands and course level did not significantly moderate the relationships between the eight individual course evaluation items and the overall course evaluation item. Of the two course type attributes, only pharmacy practice course type significantly moderated these relationships (r=-.25, p=.003).

Table 2. Ordinal Regression Models
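One conventional way to test moderation of this kind is to add item-by-course-type interaction terms to the ordinal model, sketched below under the same hypothetical names as the earlier Methods sketch. The paper does not publish its exact model specification, so this is an illustration rather than a reconstruction.

```python
# Hedged sketch of testing moderation by course type via interaction terms;
# reuses the hypothetical `evals` frame and `predictors` list defined in the
# earlier sketch. The study's exact specification was not published.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

is_practice = (evals["course_type"] == "pharmacy practice").astype(float)

X = evals[predictors].astype(float).copy()
X["practice"] = is_practice
for item in predictors:
    # A significant coefficient on an interaction term means the item-overall
    # relationship differs for pharmacy practice courses.
    X[f"{item}_x_practice"] = X[item] * is_practice

y = evals["overall_rating"].astype(int)
result = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(result.summary())
```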

DISCUSSION

In higher education, evaluation has always been critical to the effective delivery of education and a crucial part of personnel and programmatic decision-making.39 Given this importance, administrators monitor course evaluation results to assess course and instructor performance,39 and, in theory, instructors and administrators use the results as practical tools for improving the program. However, if the course evaluation instrument is poorly designed, or if other variables contribute to the relationships among the variables measured, the tool’s usefulness is compromised. The usefulness of course evaluation items depends on their content and coverage in conjunction with the variables contributing to the relationships among the variables measured.39,40 Higher education administrators therefore need to consider other factors, such as GPA, course type, and course level, that may explain or influence the relationships among the course evaluation variables.39

Although higher education administrators focus on overall course evaluations, administrators and faculty members can use individual course evaluation items to identify ways to improve course performance. The results support prior research indicating that individual course evaluation item ratings are predictive of how well a course performs.40-44

Although higher education administrators focus on course evaluation items collectively, they should not ignore the possibility that a factor such as GPA, course type, or course level may moderate course evaluation items. The current results both supported and differed from prior research indicating that GPA, course level, and course type moderate the relationship between ratings of various course evaluation components and the overall course evaluation rating. If a factor such as course type contributes to more favorable student course evaluation ratings, administrators must compensate for course type differences when using course evaluation ratings to evaluate faculty members.37

Driscoll and Cadden indicated that course evaluation ratings differed significantly across departments.31 Type and level of course can affect students’ ratings of courses.32,34 In the current study, pharmacy practice course type somewhat negatively moderated the relationship between the eight individual course evaluation items and the overall course evaluation item. Higher ratings were associated with pharmacy practice courses; thus, higher education administrators and faculty members must take into account how course type may influence students’ ratings. However, Subramanya found that faculty members did not believe course factors such as course level, course type, and class size needed to be considered when interpreting course evaluation data.13 Faculty members and administrators must assign appropriate weights to account for inherent course differences when interpreting course evaluation data.13

Critical limitations of the current study include the inability to draw causal inferences, the lack of control over predictor variables, and the inability to examine the contribution of other variables, such as faculty and student characteristics. Another limitation was the focus on one pharmacy school. Additionally, higher education administrators continue to use student evaluations of teaching as an essential tool, despite the controversy over their use. In higher education institutions, standardized course evaluation forms are often available for use within all courses, and therefore administrators will continue to use them.39 Because programmatic accreditation requirements include assessment, faculty members in health professions education have embraced assessment as a useful tool to provide feedback about instructional practices.39 Further research into the factors that influence how student pharmacists rate courses is needed.


In the ordinal logistic regression model, each moderating variable had one attribute (the reference category) whose parameters were set to zero. Thus, another limitation was the inconclusive results regarding the specific contributions of the GPA, course level, and course type variables in explaining the significant relationships found between the eight individual course evaluation items and the overall course evaluation item. Replicating the study with a larger data set, so that each attribute has sufficient numbers, would provide a clearer picture of the moderating relationships.

CONCLUSION

Because no published research existed on the predictive relationship between the identified course evaluation rating variables and the overall course evaluation rating, understanding the extent to which course level, student GPA, and course type moderate that relationship helps to address a gap in the literature. The information gleaned may help faculty members develop professionally, assist administrators in improving the quality of curricular offerings, and aid promotion and tenure committee decisions. Understanding the relationship between course evaluation variables and the overall course evaluation will aid faculty development when integrated with a structured program of improvement, such as consultation with an educational specialist.45,46 The results might also help administrators determine what professional development opportunities should be available to faculty members for improving instruction.46

  • Received October 13, 2015.
  • Accepted January 20, 2016.
  • © 2017 American Association of Colleges of Pharmacy

REFERENCES

1. Mackinnon GE. Evaluation, assessment, and outcomes in pharmacy education: the 2007 AACP Institute. Am J Pharm Educ. 2008;72(5):Article 96.
2. Accreditation Council for Pharmacy Education. Accreditation standards and guidelines for the professional program in pharmacy leading to the doctor of pharmacy degree. 2011. https://www.acpe-accredit.org/pdf/S2007Guidelines2.0_ChangesIdentifiedInRed.pdf. Accessed March 28, 2017.
3. Braskamp LA, Ory JC. Assessing Faculty Work: Enhancing Individual and Institutional Performance. San Francisco, CA: Jossey-Bass; 1994.
4. Centra JA. Reflective Faculty Evaluation: Enhancing Teaching and Determining Faculty Effectiveness. San Francisco, CA: Jossey-Bass; 1993.
5. Jones J, Gaffney-Rhys R, Jones E. Handle with care! An exploration of the potential risks associated with the publication and summative usage of student evaluation of teaching (SET) results. J Further Higher Educ. 2014;38(1):37-56.
6. Matos-Díaz H. Student evaluation of teaching, formulation of grade expectations, and instructor choice: explorations with random-effects ordered probability models. Eastern Econ J. 2012;38(3):296-309.
7. Wachtel HK. Student evaluation of college teaching effectiveness: a brief review. Assess Eval Higher Educ. 1998;23(2):191-212.
8. Weinberg BA, Fleisher BM, Hashimoto M. Toward a more effective and useful end-of-course evaluation. Aust Univ Rev. 2009;40(3):227-261.
9. Gates K, Wilkins D, Conlon S, Mossing S, Eftink M. Maximizing the value of student ratings through data mining. In: Educational Data Mining: Applications and Trends. Springer International; 2014:379-410.
10. Gigliotti RJ, Buchtel FS. Attributional bias and course evaluations. J Educ Psychol. 1990;82(2):341-351.
11. Hilt D. What students can teach professors: reading between the lines of evaluations. March 16, 2001.
12. Talukdar J, Aspland T, Datta P. Australian higher education and the course experience questionnaire. Aust Univ Rev. 2013;55(1):27-35.
13. Subramanya SR. Toward a more effective and useful end-of-course evaluation. J Res Innov Teach. 2014;7(1):143-157.
14. Anderson HM, Cain J, Bird E. Online student course evaluations: review of literature and a pilot study. Am J Pharm Educ. 2005;69(1):Article 5.
15. Cannon R. Broadening the context for teaching evaluation. New Dir Teach Learn. 2001;2001(88):87-97.
16. Crews TB, Curtis DF. Online course evaluations: faculty perspective and strategies for improved response rates. Assess Eval Higher Educ. 2011;36(7):865-878.
17. McKeachie WJ. Student ratings: the validity of use. Am Psychol. 1997;52(11):1218-1225.
18. Mullins GP, Cannon RA. Judging the quality of teaching: report to the Department of Employment, Education and Training, Evaluations and Investigations Program. Canberra, Australia: Australian Government Publishing Service; 1993.
19. Johnson T. Course Experience Questionnaire 1996: a report prepared for the Graduate Careers Council of Australia. 1997.
20. Kogan JR, Shea JA. Course evaluation in medical education. Teaching Teacher Educ. 2007;23(3):251-264.
21. Addison WE, Best J, Warrington JD. Students’ perceptions of course difficulty and their ratings of the instructor. Coll Stud J. 2006;40(2):409-416.
22. Centra JA. Will teachers receive higher student evaluations by giving higher grades and less course work? Res Higher Educ. 2003;44(5):495-518.
23. Eiszler CF. College students’ evaluations of teaching and grade inflation. Res Higher Educ. 2002;43(4):483-501.
24. Ewing AM. Estimating the impact of relative expected grade on student evaluations of teachers. Econ Educ Rev. 2012;31(1):141-154.
25. Isely P, Singh H. Do higher grades lead to favorable student evaluations? J Econ Educ. 2005;36(1):29-42.
26. Krautmann AC, Sander W. Grades and student evaluations of teachers. Econ Educ Rev. 1999;18(1):59-63.
27. Spooren P, Mortelmans D. Teacher professionalism and student evaluation of teaching: will better teachers receive higher ratings and will better students give higher ratings? Educ Stud. 2006;32(2):201-214.
28. Williams RL. Course evaluations: a strategy for improving instruction. ED449759. 2001.
29. Shores JH, Clearfield M, Alexander J. An index of students’ satisfaction with instruction. Acad Med. 2000;75(10 Suppl):S106-S108.
30. Arah OA, Heineman MJ, Lombarts KM. Factors influencing residents’ evaluations of clinical faculty member teaching qualities and role model status. Med Educ. 2012;46(4):381-389.
31. Billings-Gagliardi S, Barrett SV, Mazor KM. Interpreting course evaluation results: insights from think-aloud interviews with medical students. Med Educ. 2004;38(10):1061-1070.
32. Beckman TJ, Ghosh AK, Cook DA, Erwin PJ, Mandrekar JN. How reliable are assessments of clinical teaching? J Gen Intern Med. 2004;19(9):971-977.
33. Litzelman DK, Stratos GA, Marriott DJ, Skeff KM. Factorial validation of a widely disseminated educational framework for evaluating clinical teachers. Acad Med. 1998;73(6):688-695.
34. Hayward RA, Williams BC, Gruppen LD, Rosenbaum D. Measuring attending physician performance in a general medicine outpatient clinic. J Gen Intern Med. 1995;10(9):504-510.
35. James PA, Kreiter CD, Shipengrover J, Crosson J. Identifying the attributes of instructional quality in ambulatory teaching sites: a validation study of the MedEd IQ. Fam Med. 2002;34(4):268-273.
36. Heckert TM, Latier A, Ringwald A, Silvey B. Relation of course, instructor, and student characteristics to dimensions of student ratings of teaching effectiveness. Coll Stud J. 2006;40(1):195-203.
37. Nargundkar S, Shrikhande M. Norming of student evaluations of instruction: impact of noninstructional factors. Decis Sci J Innov Educ. 2014;12(1):55-72.
38. Faul F, Erdfelder E, Lang A-G, Buchner A. G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. 2007;39(2):175-191.
39. Gravestock P, Gregor-Greenleaf E. Student Course Evaluations: Research, Models and Trends. Toronto, Ontario, Canada: Higher Education Quality Council of Ontario; 2008.
40. Greenwald AG, Gillmore GM. No pain, no gain? The importance of measuring course workload in student ratings of instruction. J Educ Psychol. 1997;89(4):743-751.
41. Davies M, Hirschberg JG, Lye JN, Johnston CG, McDonald IM. Systematic influences on teaching evaluations: the case for caution. Aust Econ Papers. 2007;46(1):18-38.
42. Gal Y, Gal A. Knowledge bias: is there a link between students’ feedback and the grades they expect to get from the lecturers they have evaluated? A case study of Israeli colleges. J Knowl Econ. 2014;5(3):597-615.
43. Marsh HW, Roche LA. Effects of grading leniency and low workload on students’ evaluations of teaching: popular myth, bias, validity, or innocent bystanders? J Educ Psychol. 2000;92(1):202-228.
44. Svanum S, Aigner C. The influences of course effort, mastery and performance goals, grade expectancies, and earned course grades on student ratings of course satisfaction. Br J Educ Psychol. 2011;81(4):667-679.
45. Hitchcock MA, Stritter FT, Bland CJ. Faculty development in the health professions: conclusions and recommendations. Med Teach. 1992;14(4):295-309.
46. Ewing AM. Estimating the impact of relative expected grade on student evaluations of teachers. Econ Educ Rev. 2012;31(1):141-154.
