Abstract
Objective. To assess the reliability and validity of course evaluation data.
Methods. A correlational study was conducted using archival data from pharmacy student course evaluations. Bivariate relationships between eight course-rating items and an overall rating item, as well as the extent to which course type, course level, and grade point average moderated these relationships, were analyzed.
Results. Significant positive bivariate relationships were found between each of the eight course evaluation rating variables and the overall course rating variable. Pharmacy practice course type significantly moderated the relationships between the predictor variables and the criterion variable.
Conclusion. Pharmacy school administrators should consider individual course evaluation item ratings when making decisions regarding course offerings or faculty promotion and tenure.
Keywords:
- curriculum evaluation
- program evaluation
- instructor evaluation
- validity of course evaluations
- reliability of course evaluations
INTRODUCTION
Pharmacy education has undergone a major transformation since the 1980s with the adoption of the doctor of pharmacy (PharmD) degree.1 The Accreditation Council for Pharmacy Education (ACPE) is responsible for accrediting professional programs in pharmacy. The ACPE requires college and school of pharmacy administrators to evaluate the curriculum systematically and sequentially.2 Institutional administrators must evaluate the structure, content, and organization of each course offered and must use and analyze course evaluation data to aid in continuous improvement of the curriculum. By fostering data-driven continuous quality improvements of the curriculum, institutional administrators are able to meet ACPE standards.
To maintain academically strong and effective programs responsible for training student pharmacists to be responsive to the changing health care needs of society, institutional leaders must meet the greater demand for accountability.2 The accreditation standards that regional accreditors, the ACPE, and the US Department of Education set forth necessitate transparency and accountability for all stakeholders. Guidance is provided by ACPE through its accreditation standards and descriptive guidelines.
Student course evaluations are often the only data source available regarding a course’s effectiveness.3-8 Faculty members and administrators often ignore or misunderstand course evaluation results.5,9-12 Administrators use course evaluations in almost all higher education institutions but often use these assessment tools incorrectly.13 Despite faculty protests, higher education administrators use student evaluations in making administrative decisions, including those regarding promotion, tenure, termination, and course offerings.13-17 Although higher education administrators have been successful in developing processes to measure course effectiveness through course evaluations, their understanding of how to use these tools, and of who should use the data to make judgments and decisions, is poor.5,9,12,18,19
Some higher education administrators lack a strong understanding of how to use course evaluation data to make judgments and decisions, and most previous course evaluation research has focused on traditional undergraduate degree programs and settings. Pharmacy schools differ in structure, offering a mixture of didactic, laboratory, and experiential courses focused on practice and clinical education, and this structure introduces unique influences on evaluations.
Medical and allied health educational administrators often look to the general higher education literature for guidance when developing course evaluations.20 Because course evaluations in medical education serve to assess teaching effectiveness, medical education administrators must determine whether to use a global assessment of teaching or a multidimensional scale.20 The medical and allied health education literature has addressed both approaches.20 The decision between global and multidimensional assessment is also important in pharmacy education.
No published research exists about the predictive relationship between the identified course evaluation rating variables and the overall course evaluation rating. Although many studies have focused on the relationship between course evaluations and student grades, none identified which components of a course predict student pharmacists’ overall rating for the course on the course evaluation.21-28
Because higher education administrators and instructors are likely to continue using the overall course evaluation score as a single rating for comparing courses, knowing the extent to which the course evaluation variables in the current study predict overall course ratings could be useful. By identifying which aspects are most predictive, administrators and faculty members will better understand student course ratings and can use the ratings to improve courses, the faculty promotion process, the tenure process, and course offerings.
No published studies address which extraneous variables moderate student course evaluation ratings in a professional school of pharmacy. One previous study demonstrated that a single rating of teaching effectiveness is reliable for identifying instruction that needs improvement.29 Most previous research has focused on traditional undergraduate degree programs and university settings.30,31 Medical schools, like pharmacy schools, have a different structure from traditional degree programs and therefore introduce unique influences on evaluations.20,31,32
Beckman and colleagues noted that medical and allied health education course evaluations should consist of two areas: interpersonal teaching and clinical teaching.32 In contrast, several scholars determined that a multidimensional course evaluation approach is most appropriate when evaluating medical and allied health education.33-35 Within medical and allied health education, multiple scholars have described characteristics of instructional effectiveness and how best to measure it.20 Instruments such as the Course Experience Questionnaire are used in medical and allied health settings to produce valid and reliable results.20 Some evaluation instruments in the field focus on the effectiveness of teaching technical skills.20 Litzelman and colleagues designed an instrument to evaluate clinical education based on seven categories of teaching effectiveness: establishing a positive learning environment, controlling the teaching session, communicating goals to the learner, promoting understanding, evaluating achievement of goals, providing feedback to the learner and promoting self-directed learning, and promoting retention.33
The most commonly used technique for measuring course effectiveness in pharmacy colleges is student pharmacists’ ratings of courses.36,37 Although pharmacy education administrators focus on course evaluation items collectively, they should not ignore variables that moderate course evaluation items.20 If a factor such as type of course significantly contributes to more favorable course evaluation ratings, pharmacy administrators should consider compensating for lower evaluation scores resulting from a less popular type of course when using course evaluation ratings to evaluate faculty members.37
The goal of the current study was twofold: first, to assess evidence of the reliability (precision of measurement) and validity (appropriateness for intended use) of the course evaluation survey instrument using archival data; second, to quantify the relationships among predictor, criterion, and moderating variables through statistical analyses of the archival data. The method was quantitative, and the research design was descriptive correlational.
METHODS
A correlational research design supported examining both the bivariate relationships among numerical variables measured on the course evaluation instrument in the archival data set (Figure 1) and the influence of the moderator variables on these relationships.
Figure 1. College of Pharmacy Course Evaluation Instrument.
Using a quantitative descriptive correlational design, we examined the bivariate relationships between students’ ratings of eight predictor variables (quantity of material covered, relevance of material, textbook selection, intellectual challenge, effort put into the course, interest and stimulation of course assignments, appropriateness of assigned coursework for meeting the learning objectives, and grading method) and one criterion variable (overall course evaluation score). We also examined the extent to which type of course, grade point average (GPA), and course level moderated the relationship between the predictor variables and the overall course evaluation.
Using Spearman’s rho, eight null hypotheses were tested pertaining to bivariate relationships between each of the predictor variables and the overall course evaluation rating (criterion variable). Ordinal logistic regression was used to test a null hypothesis pertaining to the extent to which type of course, GPA, and course level moderated the relationship between the predictor variables and the overall course evaluation.
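To make this analysis concrete, the bivariate step might be carried out as in the following minimal sketch in Python; the file name and column names are hypothetical stand-ins, not the study’s actual variables.

```python
# A minimal sketch of the bivariate step, assuming the archival ratings
# sit in a CSV file; the file name and column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

PREDICTORS = [
    "quantity_of_material", "relevance_of_material", "textbook_selection",
    "intellectual_challenge", "effort_put_into_course", "assignment_interest",
    "coursework_appropriateness", "grading_method",
]

df = pd.read_csv("course_evaluations.csv")  # hypothetical file

# Spearman's rho suits ordinal rating items; test each of the eight
# null hypotheses against the overall course rating.
for item in PREDICTORS:
    rho, p = spearmanr(df[item], df["overall_rating"])
    print(f"{item}: rho = {rho:.2f}, p = {p:.4f}")
```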
This study used data from all course evaluations completed at Lipscomb University College of Pharmacy in Tennessee between fall 2008 and spring 2012. The archival data, consisting of private records from alumni, comprised 3,697 course evaluations. The course evaluation instrument was administered electronically at the end of every semester, and pharmacy students were required to complete all course evaluations before being allowed to register for the following semester. Staff members at the college originally collected these data using a course evaluation instrument completed by students enrolled in the college of pharmacy. Institutional review board approval was obtained before the study began.
The college provided the GPAs for the students who had completed the course evaluations. College personnel stripped the data set of student names and identification numbers and assigned unique identifiers matching the course evaluations to the grades before providing us with access to the data. We obtained information regarding course type and course level from the academic catalog.
In total, evaluation ratings from 298 student pharmacists across 39 different courses were analyzed: 13 courses were first-year didactic courses, 13 were second-year didactic courses, and 13 were third-year didactic courses. Twelve of the didactic courses covered basic science content, and the other 27 covered pharmacy practice content.
RESULTS
The results indicated the extent to which ratings of specific course aspects were predictive of the overall course evaluation rating. Each of the eight course evaluation items was positively and significantly correlated with the overall course evaluation item (Table 1). An a priori power analysis using G*Power 3.1.9.2 (Heinrich-Heine-Universität Düsseldorf) indicated that a sample size of 59 would be sufficient to detect an effect size of r=.3 in logistic regression, given an alpha level of .05 and power of .80, supporting the adequacy of the current study’s sample size.38
Table 1. Bivariate Correlations Between Each Predictor Variable and the Overall Course Rating Criterion Variable
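For readers who wish to sanity-check sample-size figures, the sketch below computes the n required to detect a bivariate correlation using the Fisher z approximation. Note that G*Power’s logistic-regression procedure, which the authors used, takes different inputs (eg, odds ratios), so its result of 59 is not expected to match this correlation-based approximation.

```python
# A sanity-check sketch of a priori sample size for detecting a bivariate
# correlation of r = .3 (alpha = .05, power = .80, two-tailed) via the
# Fisher z approximation; this is not the G*Power logistic-regression
# procedure the authors used, so the numbers differ.
import math
from scipy.stats import norm

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_alpha = norm.ppf(1 - alpha / 2)             # two-tailed critical value
    z_beta = norm.ppf(power)
    fisher_z = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z-transform of r
    return math.ceil(((z_alpha + z_beta) / fisher_z) ** 2 + 3)

print(n_for_correlation(0.3))  # about 85 under this approximation
```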
Table 2 presents the results of the ordinal logistic regression analyses. These analyses revealed that GPA bands and course level did not significantly moderate the relationship between the eight individual course evaluation items and the overall course evaluation item. One of the two course type attributes, pharmacy practice course type, significantly moderated the relationship between the eight individual course evaluation items and the overall course evaluation rating (r=-.25, p=.003).
Table 2. Ordinal Regression Models
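The moderation step could be probed as in the following minimal sketch, again with hypothetical file and column names; the authors’ exact model specification is not reproduced here. Because categorical moderators are dummy-coded, one attribute of each moderator serves as the reference category with its parameter fixed at zero.

```python
# A minimal sketch of testing moderation with ordinal logistic regression;
# file and column names are hypothetical, and only one of the eight rating
# items (relevance of material) is shown as an illustration.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("course_evaluations.csv")  # hypothetical file

# Dummy-code the pharmacy practice course type and build an interaction
# term between it and the rating item of interest.
df["practice"] = (df["course_type"] == "pharmacy practice").astype(int)
df["relevance_x_practice"] = df["relevance_of_material"] * df["practice"]

exog = df[["relevance_of_material", "practice", "relevance_x_practice"]]
endog = df["overall_rating"].astype(pd.CategoricalDtype(ordered=True))

result = OrderedModel(endog, exog, distr="logit").fit(method="bfgs", disp=False)
print(result.summary())  # a significant interaction coefficient suggests moderation
```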
DISCUSSION
In higher education, evaluation has always been critical to the effective delivery of education and a crucial part of personnel and programmatic decision-making.39 Given this importance, administrators monitor course evaluation results to assess course and instructor performance.39 In theory, instructors and administrators use evaluation results as practical tools for improving the program. However, if the design of the course evaluation instrument is flawed, or if other variables contribute to the relationships among the variables measured, then the usefulness of the tool is compromised. The usefulness of course evaluation items depends on the content and coverage of the items in conjunction with the variables contributing to the relationships measured.39,40 Higher education administrators therefore need to consider other factors that may explain or influence the relationships among course evaluation variables, such as GPA, course type, and course level.39
Although higher education administrators focus on overall course evaluations, administrators and faculty members can use individual course evaluation items to identify ways to improve course performance. The results support prior research indicating that individual course evaluation item ratings are predictive of how well a course performs.40-44
Although higher education administrators focus on course evaluation items collectively, they should not ignore the possibility that variables such as GPA, course type, or course level may moderate course evaluation items. The current results both supported and differed from prior research indicating that GPA, course level, and course type moderate the relationship between ratings of various course evaluation components and the overall course evaluation rating. If a factor such as type of course contributes to more favorable student course evaluation ratings, administrators must compensate for course type differences when using course evaluation ratings to evaluate faculty members.37
Driscoll and Cadden found that course evaluation ratings differed significantly across departments.31 Type of course and level of course can affect students’ ratings of courses.32,34 In the current study, the pharmacy practice course type negatively moderated the relationship between the eight individual course evaluation items and the overall course evaluation item, and higher ratings were associated with pharmacy practice courses; thus, higher education administrators and faculty members must take into account how course type may influence students’ ratings. However, Subramanya found that faculty members did not believe differences in course factors such as course level, course type, and class size needed to be considered when interpreting course evaluation data.13 Faculty members and administrators must assign appropriate weights to account for inherent course differences when interpreting course evaluation data.13
Critical limitations of the current study include the inability to draw causal inferences, the lack of control over predictor variables, and the inability to examine the contribution of other variables, such as faculty and student characteristics. Another limitation was the focus on a single pharmacy school. Nevertheless, higher education administrators continue to use student evaluations of teaching as an essential tool despite the controversy over their use, and because standardized course evaluation forms are often available for use in all courses, administrators will continue to use them.39 Because programmatic accreditation requirements include assessment, faculty members in health professions education have embraced assessment as a useful tool for providing feedback about instructional practices.39 Further research into the factors that influence how student pharmacists rate courses is needed.
In the ordinal logistic regression model, each moderating variable had one attribute (the reference category) with its parameter set at zero. Thus, another limitation was the inconclusive results regarding the specific contributions of the GPA, course level, and course type variables in explaining the significant relationships found between the eight individual course evaluation items and the overall course evaluation item. Replicating the study with a larger data set, ensuring each attribute has sufficient numbers, would help to provide a clearer picture of the moderating relationships.
CONCLUSION
Because no published research existed about the predictive relationship between the identified course evaluation rating variables and the overall course evaluation rating, this study’s examination of the extent to which course level, student GPA, and course type moderate that relationship helps to address a gap in the literature. The information gleaned may help faculty members develop professionally, assist administrators in improving the quality of curricular offerings, and aid promotion and tenure committee decisions. Understanding the relationship between course evaluation variables and the overall course evaluation will aid faculty development when integrated with a structured program of improvement, such as consultation with an educational specialist.45,46 The results might also aid administrators in determining what professional development opportunities should be made available to faculty members for improving instruction.46
- Received October 13, 2015.
- Accepted January 20, 2016.
- © 2017 American Association of Colleges of Pharmacy