Abstract
Objective. To determine how changes to the student evaluation of teaching (SET) survey instrument and process at a college of pharmacy contributed to improved student response rates and to understand how the process could be further refined.
Methods. Pharmacy students from the class of 2018 who had experienced both the old and new SET processes were recruited to participate in one of four focus group interviews. An inductive approach was used for data collection and analysis. A focus group guide was created based on two major domains: comparing changes between the old and new SET process and survey form, and determining how the new SET process could be further refined.
Results. Twenty-seven students participated in the four focus groups across the two campuses: in South Jordan, UT, six students participated in one focus group and seven in the other, while in Henderson, NV, seven students participated in each of the two focus groups. Students stated that reducing the number of questions on each SET survey instrument and using a 5-point rather than a 10-point Likert scale were positive changes. The changes also motivated them to complete the surveys, which improved overall response rates. Although students reported that the monetary incentive (contributions toward the cost of the class banquet) that had been added to the new SET process was a strong motivator, the incentive alone would likely have been insufficient without the other changes. Several participants stated that receiving feedback from faculty members on changes made to teaching materials based upon previous student evaluations was also an important motivator for students to continue completing the surveys.
Conclusion. Students identified several motivators for SET participation. Improving the survey completion process is essential to raising response rates so that results more accurately represent the feedback of the entire student body. Additionally, the evaluation process must ensure that the data gathered are robust, accurate, and insightful in order to make good use of student and faculty time.
INTRODUCTION
Student evaluation of teaching (SET) is a widely used component of the assessment of teaching effectiveness and is currently required by 2016 Accreditation Council for Pharmacy Education Standard 25.4.1 Student evaluation of teaching is used at many US colleges of pharmacy for various purposes, including improving teaching effectiveness, evaluating faculty performance, determining assignment of teaching responsibilities, and making decisions regarding tenure, promotion, and merit awards.2-7 Across US pharmacy programs, online surveys using ordinal scales are the most commonly used method of SET and are typically administered after each semester of teaching.2 As institutions have moved towards online surveys, response rates have decreased from an average of 56% for paper-based surveys to 33% for online surveys.8 Multiple studies have assessed student and faculty perceptions of SET and have identified both motivators and barriers to completion rates for student evaluations.9-16 Students report being motivated to complete evaluations when they believe that their responses will be used to improve teaching and will result in real-time improvements in the course or in faculty behavior, understand how evaluations are used at their institution, are provided incentives for completing evaluations, do not receive evaluations in proximity to examinations, and are given fewer and shorter surveys.6,9-11,17,18 Student-reported barriers include survey fatigue related to frequency, length, question ambiguity, and having multiple surveys administered at the same time.10,13,18,19 In addition to these barriers, student concerns regarding anonymity and confidentiality, issues with computer access, and incompatibility of software with delivery of online surveys may also limit response rates.10,13
At Roseman University of Health Sciences College of Pharmacy (RUCOP), the accelerated three-year program’s didactic curriculum is delivered in a block system, with a summative assessment of all material covered in that block conducted every two weeks, usually on a Friday. Each assessment includes didactic material presented by one to three faculty members, with some assessments covering lectures by as many as five faculty members. On the Tuesday after the assessment, an email invitation is sent to students asking them to complete an online evaluation of each faculty member who taught during that block. As a result, students complete evaluations every two weeks for multiple faculty members, which could result in survey fatigue and low SET survey response rates.
Prior to 2016, RUCOP had poor response rates on evaluations, varying from 4% to 67% per block and averaging 24%. Class size was approximately 240 students, spanning two campuses. The RUCOP Assessment Committee (which has student representation from all classes) conducted a modified Delphi process in 2015 to identify barriers and devise solutions to improve response rates on SET, with the goal of improving the validity and utility of SET. This resulted in a series of changes to the SET online survey instrument and the SET process as a whole, implemented in the 2016-2017 academic year. The changes to the SET survey instrument included reducing the number of questions on each SET survey from 20 to 11, changing the rating scale from a 10-point to a 5-point Likert scale, eliminating the comment boxes after every question in favor of a single comment box at the end, and decreasing the ambiguity of questions by adapting previously validated questions from the literature. Changes to the online SET process included changing the delivery platform from SurveyMonkey (San Mateo, CA) to Qualtrics (Provo, UT), educating students regarding the importance of completing evaluations, and allocating a monetary incentive towards the class graduation banquet. The monetary incentive consisted of $85 for a minimum response rate of 80% on each evaluation after an assessment, for a total of up to $1360 per class per year. In the year after implementing these changes, response rates on SET improved from an average of 24% to 66%. These response rates included all students who completed the survey, either partially or entirely, in the first and second professional year (P1 and P2) pharmacy classes on both Roseman University campuses (Nevada and Utah).
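As a point of reference, the maximum annual incentive follows directly from the biweekly assessment schedule. The worked arithmetic below assumes 16 assessed blocks per class per year, a figure implied by the reported totals ($1360 / $85 = 16) rather than stated explicitly:

\[
\$85 \text{ per assessment} \times 16 \text{ assessments per year} = \$1360 \text{ per class per year}
\]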
After observing a quantitative improvement in response rates, a qualitative assessment was conducted to determine which changes contributed to the improved response rate, which further changes needed to be made, and which barriers to students completing SET surveys remained. Thus, the primary objective of this study was to determine how the changes to the SET survey instrument and the SET process contributed to improved response rates. The secondary objective was to understand how the new SET process could be further refined for improved efficiency and utility.
METHODS
The RUCOP used the old SET process for the 2015-2016 academic year and introduced the new SET process in the 2016-2017 academic year. Thus, students in the class of 2018 had the opportunity to use both the old and new SET processes during their two didactic years and were in a unique position to appraise the changes and assist RUCOP in determining their impact. The study used a qualitative research design with focus group interviews to determine the major changes students perceived or experienced with the revisions to the SET process. Because the research question was to understand which changes were successful and to determine which barriers to students completing evaluations remained, qualitative methodology was used to gain a better understanding of the issue. Compared to personal interviews, focus groups encouraged conversation by students building on each other’s ideas, and allowed students to remind each other about the changes that had been made and barriers that still existed. Additionally, the group setting was intended to encourage students to be more candid and comfortable in voicing their opinions about the SET process. The study sample was selected from the class of 2018, who were in their third professional (P3) year (2017-2018) and completing advanced pharmacy practice experiences (APPEs). With RUCOP having two campuses (one in Henderson, NV, and another in South Jordan, UT), two focus groups were planned on each campus, with six to eight students in each of the four groups. The institutional review board at Roseman University of Health Sciences approved the study.
The Consolidated Criteria for Reporting Qualitative Research (COREQ) and the checklist for authors and reviewers of qualitative research published by the Journal were used to ensure the quality of the research methods.20,21 Inductive approaches were used in data collection and analysis. A focus group guide (Appendix 1) was created by the authors based on two major domains: comparing changes between the old and new SET process, and how the new SET process could be refined for increased efficiency and utility. For the first domain, the focus group guide questions centered on the changes the students had observed with the new SET process, especially the changes that made completing evaluations a positive experience. For the second domain, the questions focused on further changes needed to the new SET process, including strategies for administering evaluations in real time during class and changes needed to increase the efficiency of the SET process.
Participants were recruited via a convenience sample. A recruitment announcement was sent to the email listserv for the P3 class, which included a brief summary of the purpose of the study and the following inclusion criteria: ability to attend focus group sessions on campus, completion of at least three old and three new SET surveys, and ability to communicate well in English. Students expressed interest by participating in a Doodle poll. Because this strategy did not yield sufficient student responses, additional recruitment emails were sent directly to RUCOP faculty preceptors to encourage their students to participate in the focus groups. The purpose of this strategy was solely to promote awareness of the study, and student participation was entirely voluntary. To avoid coercion, students who were completing an APPE with one of the study investigators at the time of the study, or who had future rotations scheduled with one, were not eligible for participation in the focus groups. Once dates were confirmed, the scheduled date and time of the focus group in which they were to participate were emailed to the students. Participation in the focus groups was incentivized by provision of a free meal to all participants, which was funded by the institution’s general research funds. The purpose of the study was explained to the participants in the recruitment email, again at the beginning of the focus groups, and once more in the informed consent.
All the study investigators were RUCOP faculty members, and all four focus group interviews were moderated by the primary investigator, who had been trained in how to conduct focus group interviews. All the interviews were audio recorded, and a second investigator was present to take field notes during the interviews, including students’ nonverbal cues, if pertinent. The four focus groups took place in a conference room on campus, either during lunchtime with permission from APPE preceptors to excuse students or in the early evening after completion of APPEs.
Prior to the focus group session, lunch or dinner was provided to the students. During this time, students were asked to sign a consent form. Before the session started, each student was assigned a random number and asked to state their number prior to speaking, both to maintain subject anonymity and confidentiality and to make it easier to track and review each speaker’s comments in the transcription. In addition, students were asked to refer to other students by their numbers rather than names to ensure no names were audio recorded. During the focus group interviews, the students were provided a copy of the old and new SET survey instruments and given 10 minutes to collect their thoughts about the SET survey and process. The moderator then initiated a discussion based on the questions prepared from the focus group guide, which lasted approximately 60 minutes. Each student was encouraged to comment on the questions posed.
Transcription was performed by a team of P1 and P2 student volunteers who double-checked and verified each other’s transcriptions. The data were then independently analyzed by manual content analysis by three parties: the primary investigator, a co-investigator, and two PharmD students who analyzed the data collectively.
The analysis started with open coding and repeated reading to achieve data immersion. From the open coding, the codes were grouped into themes, and the themes were then abstracted to form categories. The three analysis teams worked independently to achieve data immersion, create initial codes, and make notes, adding further notes with repeated reading and either revising existing codes or creating new ones. These codes were further analyzed to identify emerging themes based on similarities, and categories were formed based on the relationships between themes. The investigators met and discussed their independent findings and any discrepancies. Discrepancies were discussed in further detail, with the original transcripts referenced as necessary, until agreement was reached.
RESULTS
Twenty-seven students from the class of 2018 participated in the four focus group sessions. Seventeen of the students were male and 10 were female. The mean age of the students was 28.5 years (range, 21 to 42 years). Additional demographics for each focus group are provided in Table 1. Analysis of students’ comments during the focus groups revealed 15 codes (Table 2), which were grouped into four major themes: motivators to SET survey completion; barriers to SET survey completion; proposed solutions to improve rates and quality of SET responses; and student perception of the importance of SET. Findings from each of the themes are detailed below.
Table 1. Characteristics of Doctor of Pharmacy Students Who Participated in Focus Groups to Discuss Changes to the Student Evaluation of Teaching Survey Instrument and Process
Table 2. Themes and Associated Quotes from Analysis of Focus Group Transcripts of Pharmacy Students Regarding Changes to the Student Evaluation of Teaching Survey Instrument and Process
The most commonly referenced motivator resulting from the changes to the SET process was the decrease in survey length from 20 questions to 11 questions. There was unanimous consensus among students that decreasing the number of questions decreased survey burden, especially when multiple faculty members were being evaluated at the same time. Another motivator was the simplified Likert rating scale, on which the number of possible responses had been decreased from 10 to five. The majority of students preferred the revised 5-point Likert scale, which ranged from strongly disagree to strongly agree, and felt that it was adequate to appropriately evaluate faculty members. The majority of students also preferred the single comment box at the end of the new evaluation form, rather than having comment boxes after each of the questions as the old form did. In general, students felt having only one comment box made the survey form simpler and shorter.
When asked about the switch from SurveyMonkey to Qualtrics, most students stated that they had not noticed the switch, but a few commented that the Qualtrics survey site appeared to be “more professional.” Changes that some students noticed and appreciated about the new SET process were that the new evaluation form was more compatible with mobile phones and that all survey questions could fit on one screen. However, these perceived improvements were likely indirect effects of shortening the survey (ie, deletion of comment boxes) and using different survey software that was more compatible with mobile phones.
Although students liked the decrease in the overall number of questions in the new SET survey form as a way to decrease survey burden, two students in the first focus group felt that the older survey form was more detailed and that important questions, such as those related to assessments and to the accessibility of faculty members to students, had been omitted from the new survey form. Similarly, although most students preferred the 5-point Likert scale and liked having a single comment box at the end of the SET survey form, a few students preferred the 10-point rating scale on the old survey form because it gave them more options when evaluating faculty members. There were also a few students who liked being able to provide comments after each question, especially in cases where they wanted to provide constructive feedback on an item that they had not rated positively.
The majority of students agreed that the school allotting money towards a class banquet as a reward for an 80% student completion rate on an evaluation was an important motivator for students to complete the SET survey form. However, there was some divergence between the two campuses in focus group responses related to the financial incentive. The students on the UT campus had a clear understanding of the financial incentive and had a higher SET survey completion rate than the students on the NV campus, where there was some confusion regarding the financial incentive. Although students valued a monetary incentive with funds dedicated toward the graduation banquet, some commented that the amount was too small or that the money could have been better used for other incentives that would benefit all students and not just those attending the banquet; examples included graduation gowns and additional money toward printing costs. However, none of the focus groups could reach agreement on a better incentive. Most importantly, students consistently agreed that the financial incentive by itself would not have been a strong enough motivator if the survey burden (ie, length and rating scale) had not been decreased. A financial incentive might have prompted students to open the SET survey instrument, but the length would have deterred them from actually completing it.
Most students agreed that having a class spokesperson to increase awareness of the SET process and its associated financial incentive was influential in encouraging students to complete the SET form and achieve an 80% response rate as a class. Students stated that when encouraged by peers in the classroom, they felt more inclined to complete the evaluation as a courtesy to their classmates.
Survey fatigue related to completing multiple SET surveys (one for each person who taught a portion of the curriculum) every two weeks was a common barrier to completion. Furthermore, students relayed that survey fatigue was exacerbated in blocks where three or more faculty members were being evaluated or when faculty members or a pharmacy resident taught for only a few hours. In those instances, while a few students said they had filled out the SET survey for some of those faculty members or pharmacy residents without much thought, a few others disagreed, explaining that if they felt burdened by the number of surveys, they preferred not to complete them rather than completing them arbitrarily. The timing of the surveys was also a contributing factor to survey fatigue. This was most evident at the end of an academic year when, along with the biweekly block evaluations, students also received surveys related to longitudinal skills-based courses and other end-of-year surveys from the college and university. The anonymity of the responses and the one-week period allotted to complete the SET surveys were not barriers to survey completion. Although the topic did not arise spontaneously, when asked, students stated they were adequately equipped to constructively evaluate faculty members and provide feedback.
Another barrier that many students consistently recognized was a lack of communication by faculty members and administrators regarding the processes related to SET. Students recommended enhanced communication between faculty members and students, via email or during class time, to acknowledge that faculty members had received the results of the evaluation and to share their plan, if any, for action based upon the feedback. However, students also acknowledged that this communication could become contentious, so faculty members should use judgment in determining the best course of action for communicating with students. Students also suggested increased communication about the overall RUCOP processes related to the use of SET by faculty members and administration.
One proposed solution identified in all four focus groups concerned the timing of the block evaluations. Rather than receiving the link to the evaluation form on the Tuesday following their Friday assessment, almost all students wanted to complete the evaluation as soon as possible after the block assessment was completed. Receiving the SET survey on the Tuesday after the assessment was a hindrance because, in their minds, they had already moved on to the next block and found it difficult to reflect. However, in all four groups, this proposal (ie, to send the evaluation form sooner) was followed by debate on how it could be implemented, and no consensus was reached.
While students agreed that having dedicated class time to complete the evaluation form would enhance response rates, they also discussed the challenges associated with the idea. These included the inability to reach all students because not every student attended class, the possibility that students would use the allotted time for other activities, and the risk of feeling rushed to fill out the SET survey in the time allotted.
Although students generally agreed that the SET survey instrument should be further simplified, there was no consensus on how to achieve this. Students provided many suggestions for simplifying the survey instrument, including changing the response scale to include only “agree” and “disagree,” having even fewer items on the survey, having only open-ended questions, having separate comment boxes for positive and constructive comments rather than just one comment box, completing only one evaluation for each block with the option to write comments individually for each faculty member, and adding a “not applicable” option on Likert scales. However, none of these ideas reached consensus among the students. One idea that garnered agreement among most students in two focus groups was a consolidated survey instrument, in which the student would answer the same question for all faculty members being evaluated at the same time. In other words, rather than having a separate survey form for each faculty member, there would be one survey with all faculty members listed under each question. Most students felt that this option would decrease the burden of re-reading each question when completing SET for multiple faculty members. However, some students felt that this change could make the process confusing and might create a competitive environment in which students would compare one faculty member against another to rank them, rather than each faculty member being evaluated on his or her own merits.
Students generally agreed that SET is an important mechanism for providing feedback to help faculty members make improvements. Even students who admitted to not regularly completing SET surveys recognized their significance and agreed that they should be completing them.
Students emphasized the importance of having their feedback, both positive and constructive, reviewed by administrators and of requiring faculty members to create actionable items for self and course improvement based on constructive feedback from the surveys. Students also speculated that SET evaluations may be used by administrators for adjudication of awards, promotions, or salary adjustments, adding to the sentiment that the surveys should be taken seriously by students.
Students in all four focus groups agreed that having their voices heard was an important factor in completion of SET surveys. Seeing changes as a result of the feedback that they had provided was a common motivator for students to continue filling out SET surveys. In contrast, students who perceived that their feedback did not result in any improvements felt discouraged about continuing to provide constructive feedback. Students also agreed that they were more likely to complete surveys for those faculty members who personally requested students to provide feedback to help them improve as educators.
DISCUSSION
The primary objective of this study was to determine how the changes to the SET process contributed to improved response rates. The secondary objective was to understand how the new SET process could be further refined for improved efficiency and utility. The results of this study demonstrate that the changes implemented to the SET process in the 2016-2017 academic year were well received by students, who felt that the changes generally increased the likelihood of student participation in the SET process.
Students stated that survey fatigue was an important barrier to SET completion. The study results demonstrated that the changes implemented decreased the survey burden by reducing the number of questions, using a 5-point rating scale, and including only one comment box. Although no studies have assessed the actual impact of implementing a new SET based on previously recognized barriers, studies have previously identified similar barriers related to survey fatigue.11,17-19 Given the nature of the block curriculum at RUCOP, with SET surveys occurring every two weeks, survey fatigue is a particularly difficult challenge to overcome. However, none of the participants stated that the SET process was not valuable or worth their effort, which indicates that the students understood the importance of SET despite the survey burden. Indeed, one student even acknowledged that the high volume of surveys at RUCOP was inevitable but necessary to consistently provide opportunities for student feedback for all faculty members. Additionally and importantly, students felt confident in their capability to evaluate faculty members and provide constructive feedback. Thus, survey fatigue will likely continue to be the most challenging barrier RUCOP faces in increasing SET survey response rates, which emphasizes the importance of motivators to offset this barrier.
One such key motivator was shortening the survey instrument. Achieving a balance between reducing the number of questions and ensuring that all important information would still be captured was a delicate task. As can be seen from the results, this was a concern raised by the students. For example, a few students recognized that the new SET form had only one question related to assessments, and they would have liked to see additional questions, as assessments are a sizeable component of RUCOP’s block curriculum. Another key motivator identified by the focus groups was the simplified Likert scale. There is a lack of consensus on the ideal number of options in a rating scale; however, it is generally agreed that bidirectional scales (such as the strongly disagree to strongly agree scale used in the revised instrument) should have an odd number of options to allow for a truly neutral position.22 Among these, 5-point and 7-point Likert scales are most common. However, the reliability of Likert scales is highest when each number on the scale is clearly labeled with its meaning in words,23 and because it is easier to develop five clearly delineated descriptors than seven, we used a 5-point scale on the revised SET survey instrument. In addition, a few studies have shown that shorter rating scales may improve response rates, particularly in the setting of repeated surveys of the same population.24,25 Although this has not been described previously in the pharmacy literature, our results are consistent with this theory, as students almost unanimously agreed that the 5-point scale used on the new SET survey instrument was less confusing, more easily viewable, and increased the likelihood of survey completion.
Issues related to student computer access and software incompatibilities have previously been recognized as barriers to completion of SET.10,11,13 Although software platform compatibility was not directly raised by students, the discussion on this topic highlighted the importance of a mobile-friendly platform for SET completion. However, it is difficult to ascertain whether the positive feelings related to the new software platform were actually attributable to the platform itself or rather to the decrease in the number of questions, which allowed students to view all the questions on one screen.
Students recognized the financial incentive for achieving at least an 80% response rate on SET as a motivator. Previous studies have assessed incentives such as extra credit points for completing SET surveys, class points assigned to the task of evaluation completion, and withholding grades from students until evaluations were completed.10,18,26 In this case, the idea of rewarding the class with a set amount of money toward the class graduation banquet for meeting a student response rate of 80% per assessment came from the incentive offered as part of a previous quality improvement project at the university. Although data were still being collected for the 2017-2018 year at the time of this writing, we noted a trend towards further increased response rates for the SET surveys. As this is the second year after implementing incentives, there is also increased student awareness of the incentive system, as student leaders have been encouraging class participation.
Previous literature has shown an increase in response rates of approximately 5% with the use of other incentives.27 The substantial increase in response rate observed in the internal analysis suggests that the incentive was not the only motivator. This was evident in the focus group discussions, in which students identified that the incentive may have encouraged students to open the survey, but that if the length and rating scale of the survey instrument had not been reduced, students would have been unlikely to complete the SET surveys. In addition, students on the Nevada campus lacked an adequate understanding of the incentive, including the criteria for achieving the benchmark and how the money would be allocated. This was evident from the internal analysis of 2016-2017 data, which showed that response rates from students on the Utah campus increased from 32% to 82%, whereas rates on the Nevada campus increased only from 27% to 41%. This highlights the importance of proper communication from faculty members and administrators to students regarding the incentive, to ensure that students fully grasp the direct benefit to them and their class of completing SET surveys.
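To make the magnitude of these changes explicit, the following is a brief worked comparison using the response rates reported above, expressed as percentage-point increases:

\[
\begin{aligned}
\text{Overall average:} \quad & 66\% - 24\% = 42 \text{ points}\\
\text{Utah campus:} \quad & 82\% - 32\% = 50 \text{ points}\\
\text{Nevada campus:} \quad & 41\% - 27\% = 14 \text{ points}
\end{aligned}
\]

Even the smaller Nevada increase was roughly three times the approximately 5-point gain previously attributed to incentives alone, while the Utah increase was an order of magnitude larger.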
Another motivating factor was the presence of a class spokesperson, typically the class president or a similar student leader, who personally spoke to students and encouraged them to complete the SET surveys. Appointing a class spokesperson was not directed by faculty members; rather, student leaders took the initiative to improve class response rates in an effort to earn money towards their graduation banquet. Focus group participants repeatedly mentioned that having a spokesperson was a motivating factor and emphasized the sense of accountability it generated. This was an important finding as it implies that a collective incentive, such as funds towards a class banquet, may evoke peer support for completing surveys and may be a stronger motivator than individual incentives. Further research would be needed to confirm this hypothesis.
Although students agreed that the timing of SET survey delivery was an important factor in completing the evaluation, there was no consensus among the students as to the best time for sending out the link to the survey instrument. Though students agreed that receiving the SET survey soon after a block finished was the best strategy because the information was fresh in their minds, they also identified the potential for bias in responses depending on how students performed on the assessment. However, a few students felt that strong feelings might elicit more genuine responses from students and be a truer reflection of their learning. Previous literature has found mixed results regarding whether examination performance influences SET survey responses.6,9
In contrast to previous literature, most students in this study did not feel that anonymity was a barrier to SET survey completion.10,11,13 Additionally, a few students stated that they would feel comfortable putting their names on the SET survey instrument, as they wanted their evaluations to be considered seriously by faculty members and felt that students in a graduate program should be able to provide critiques in a professional, respectful manner. As demonstrated in other studies, there was a general consensus among students that SET surveys were valuable for providing feedback to help faculty members make improvements.11,16,18,28 Students in this study mentioned that there was a lack of understanding among students about RUCOP’s processes related to SET: they were unsure whether the information was received by faculty members or administrators, when and in what format results were distributed to faculty members, and how those results were used by faculty members. A previous study found that students liked when faculty members communicated the importance of gathering student feedback.18 This was also demonstrated in this study, as students were more willing to provide constructive feedback to faculty members who conveyed the importance of student feedback and requested it to make improvements. Similar to other studies, students felt that enhanced communication between faculty members and students, acknowledging receipt of SET survey results and the faculty member’s plan for modifications to improve the course, was a strong motivator for students to continue completing SET surveys.6,10,17,18 Although increased engagement with students regarding SET surveys was a priority for RUCOP, it seems that this communication is still fragmented and needs further refinement.
Our study was strengthened by a student sample representing a wide range of SET completion rates, from students who had seldom filled out SET surveys during the two years of the didactic curriculum to those who had filled out the majority, if not all, of the SET surveys. This ensured capture of a wide array of motivators, barriers, and proposed solutions. However, self-disclosed SET completion rates were not specifically quantified in our analyses. Coding of the focus group data was conducted independently by three analysis teams before meeting, which promoted internal consistency and agreement across individuals. Our study was also well timed to capture the only class that had experienced both the old and new SET processes, thereby providing the unique perspective of a before-and-after comparison between their first and second professional years under the two SET surveys. We also conducted a total of four focus groups representing both campuses, which allowed us to gather the opinions of a representative sample of students and to achieve data saturation.
This study is not without limitations. The majority of focus group volunteers were male even though the class of 2018 was composed mostly of female students. The reason for this discrepancy is not clear; however, the focus group analyses did not identify gender differences in the themes that emerged. As the participants were recruited via a convenience sample, the focus group sample may not be representative of the class. There may also have been selection bias in recruitment, as students who were more willing to participate in the focus groups may also have been more likely to complete SET surveys. Another limitation of our study was that the focus group guide was not pilot tested prior to implementation, although additional questions were added as the need arose during the focus groups. Additionally, student responses may have been influenced by a faculty member’s presence. However, each focus group discussion was prefaced with an introduction in which honest but professional opinions were encouraged, and students signed an agreement acknowledging that no adverse consequences would occur to them as a result of participating in the research. Faculty members involved in the study were not in any formal supervisory relationship with the students in the focus groups, were not their preceptors at the time, and would not be their preceptors in the remainder of their experiential year. Finally, the RUCOP program and SET process have some unique characteristics, such as the block curriculum, assessments every two weeks, and the financial incentive for SET survey completion, that may limit how the results can be extrapolated to other institutions. Nevertheless, we believe the underlying themes identified by this study can be adapted to apply to other institutions.
CONCLUSION
We found that a variety of factors affected student participation in SET surveys and that a combination of interventions is needed to encourage student participation. Targeting motivators for SET participation, namely shorter survey instruments with 5-point rather than 10-point Likert scales, a collective financial incentive that benefits the entire class, and communication by faculty members of changes implemented based on student feedback, may help institutions improve SET response rates. Further refinement of the SET process can occur by educating students about how SET survey results are used and how the incentive is allocated. Additionally, improved communication by faculty members regarding changes implemented based on student feedback will further help improve the efficiency and utility of the SET process. The results also showed that students generally understand the importance of SET as a means of providing constructive feedback. Future research is needed to identify the most cost-effective combination of interventions to improve SET survey response rates.
ACKNOWLEDGMENTS
The authors would like to acknowledge PharmD students Danijela Andric and Cedric Baraoidan, who contributed to data transcription and analysis of this research.
Appendix 1. Discussion Guide of a Focus Group of Pharmacy Students to Discuss Improvements to the Student Evaluation of Teaching Process
Received April 13, 2018.
Accepted June 17, 2018.
© 2020 American Association of Colleges of Pharmacy