Abstract
Objective. To assess the multiple mini-interview (MMI) as an admission tool for a satellite campus.
Methods. In 2013, the MMI was implemented as part of a new admissions model at the UNC Eshelman School of Pharmacy. From fall 2013 to spring 2015, 73 candidates were interviewed by 15 raters on the satellite campus in Asheville, North Carolina. A many-facet Rasch measurement (MFRM) with three facets was used to determine the variance in candidate ratings attributable to rater severity, candidate ability, and station difficulty. Candidates were surveyed to explore their perceptions of the MMI.
Results. Rasch measures accounted for 48.3% of total variance in candidate scores. Rater severity accounted for 9.1% of the variance, and candidate ability accounted for 36.2% of the variance. Eighty percent of survey respondents agreed or strongly agreed that interviewers got to know them through the questions they answered.
Conclusion. This study suggests that the MMI is a useful and valid tool for candidate selection at a satellite campus.
Keywords: multiple mini-interview; many-facet Rasch measurement; multifaceted Rasch measurement; admissions; evaluation
INTRODUCTION
Pharmacy education has experienced significant growth since 2000, with the number of pharmacy schools rising from 80 to 132 as of July 2015 and the number of first-professional PharmD graduates topping 13,800 in 2014.1 As the number of pharmacy schools has increased, the number of satellite campuses associated with these schools has also increased.2 According to data from the Accreditation Council for Pharmacy Education (ACPE), there are currently 44 pharmacy school satellite campuses spread across 32 pharmacy schools.3 Satellite campuses can serve a number of institutional needs, such as expanding physical space to allow for larger class sizes, growing class size without a significant increase in faculty needs, and serving regional needs for pharmacists by training them in underserved areas.4-6
The effectiveness of satellite campus education relative to traditional education in pharmacy is well-established; however, students and faculty members on satellite campuses can have experiences unique to distance education.5,7 These can include transactional and cultural-social distance and varied experiences associated with local health systems, politics, and traditions.8-10 As such, providing candidates with the opportunity to engage with satellite campus faculty members and students, explore the surrounding community, and experience other elements of the campus during the admissions process may enable candidates to make more informed decisions about enrollment while helping institutions identify students who are a good fit for their campus.
The UNC Eshelman School of Pharmacy has two campuses: the main campus in Chapel Hill, North Carolina, and a satellite campus in Asheville, North Carolina. The Asheville satellite campus was established in 2011 and is home to approximately 65 doctor of pharmacy (PharmD) students and seven faculty members. Along with the Chapel Hill campus, the Asheville satellite campus transitioned in fall 2015 to a new curriculum designed to better prepare students for an ever-changing health care system, with an increased emphasis on problem solving and teamwork. In preparation for this transition, the school's admissions model was redesigned in 2012. The redesigned process includes a more intensive interview day, known as Candidates' Day, which is now held on both campuses simultaneously.11 Inviting candidates to interview on the campus to which they are applying is intended to give them a feel for that campus's culture.
To more effectively select candidates who would be successful in the new curriculum and as health care professionals, the multiple mini-interview (MMI) was implemented as part of a new admissions process on both campuses. Developed at McMaster University for medical school admissions, the MMI is logistically similar to an objective structured clinical examination (OSCE).12 Candidates move through a series of stations, each of which requires the candidate to respond to a short scenario during an independent interaction with a single interviewer. Since 2004, the MMI has been implemented across various disciplines, including medicine, pharmacy, and veterinary medicine, as well as in medical and pharmacy residency programs.11-15 Generally, the MMI is considered fair, reliable, valid, pleasant, and transparent.16-21
The purpose of this article is to describe the design and implementation of the MMI on a satellite campus, explore candidate perceptions of the MMI, and examine sources of variability in candidates' MMI scores using a three-facet many-facet Rasch measurement (MFRM). Models from medical education using MFRM account for approximately 30% to 35% of total variance.22,23 This is the first paper to address the use of an MMI on a satellite campus as part of the admissions process.
METHODS
The MMI was designed and first implemented in fall 2013 following a thorough review of published literature, consultation with experts, and consensus among stakeholders. Training for interviewers consisted of an online portion, in-person training prior to the day of the MMI, and a review of information on the day of the MMI. A more detailed description and evaluation of the MMI and interviewer training is published elsewhere.11 ProFit HR (ProFitHR, Hamilton, ON, Canada) was used to identify previously validated scenarios and training materials for the MMI.24 Some scenarios were adapted to better fit pharmacy and the desired outcomes of this MMI model.
A 7-station circuit was used, with each station assessing a single construct: adaptability, empathy, integrity, critical thinking, teamwork (receiving instructions), teamwork (giving instructions), and "why UNC." At each station, candidates were allotted two minutes to read the scenario before entering the room and six minutes to respond during a one-on-one interaction with a single interviewer. During candidates' responses, interviewers used a standardized list of three to five probing questions to elicit further information about the construct. Candidates were unaware of the specific constructs evaluated in each scenario, and interviewers were blinded to all information about candidates other than their names.
Six interview days were held on the Asheville satellite campus during the 2013-2014 and 2014-2015 admissions cycles. Seventy-three candidates were interviewed across the six days, each completing seven MMI stations. Two circuits were run on each interview day, using the same stations and scenarios as on the Chapel Hill campus. To enable psychometric evaluation of the MMI using a 3-facet MFRM, efforts were made to ensure that some interviewers were present on multiple interview days and that these interviewers were assigned to different stations on different days.
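This assignment pattern links raters, stations, and days so that all facets can be calibrated on a common scale. As an illustration, such linkage can be verified by treating the assignments as a network and checking that it is connected (a minimal sketch with a hypothetical data layout, not a tool used by the school):

```python
import networkx as nx

def is_linked(assignments) -> bool:
    """Return True when rater-station-day assignments form one connected
    network; the MFRM requires this linkage to calibrate every rater,
    station, and candidate onto a common logit scale.
    `assignments` is an iterable of (rater, station, day) tuples."""
    g = nx.Graph()
    for rater, station, day in assignments:
        # Tag nodes by facet so a rater named "3" and a station named "3" stay distinct.
        g.add_edge(("rater", rater), ("station", station))
        g.add_edge(("rater", rater), ("day", day))
    return g.number_of_nodes() > 0 and nx.is_connected(g)

# Example: rater "A" appears on two days and at two stations, linking the design.
print(is_linked([("A", 1, "day1"), ("A", 2, "day2"), ("B", 2, "day1")]))  # True
```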
Demographic data describing the candidates were collected from the admissions office, including gender, race, incoming grade point average (GPA), and composite Pharmacy College Admission Test (PCAT) score. At each MMI station, the candidate was rated by the interviewer on four criteria: the construct of interest, communication, critical thinking, and overall performance at the station. All ratings were made on a 10-point scale ranging from 1 (less suitable) to 10 (outstanding), with a higher score indicating better performance. Each station therefore had a maximum score of 40, except the critical-thinking station, where the construct of interest and the standing critical-thinking criterion coincided, leaving three criteria and a maximum score of 30. In addition, candidates were surveyed about their experience at Candidates' Day and with the MMI within one week of the interview. The survey included 18 multiple-choice items and was administered electronically via Qualtrics (Qualtrics, Provo, UT). Open text boxes were included to allow for additional comments. All responses were collected anonymously prior to the end of the admissions cycle, and no incentive was provided for completion.
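As an illustration of the station-level scoring described above (a minimal sketch; the function and field names are ours, not the school's scoring system):

```python
# Minimal sketch of station-level score aggregation; illustrative only.

def station_score(ratings: list[int]) -> dict:
    """Aggregate one candidate's criterion ratings (1-10 each) at a station.
    Most stations have four criteria (max 40); the critical-thinking
    station has three (max 30)."""
    if not all(1 <= r <= 10 for r in ratings):
        raise ValueError("Each criterion rating must fall on the 1-10 scale.")
    return {
        "total": sum(ratings),
        "average": sum(ratings) / len(ratings),  # station average used as the MFRM input
    }

# Example: construct of interest, communication, critical thinking, overall performance
print(station_score([7, 8, 6, 7]))  # {'total': 28, 'average': 7.0}
```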
Fifteen interviewers, 73 candidates, and seven stations were included in the study, resulting in 511 ratings. No data were missing. To assess interviewer severity, station difficulty, and candidate ability, a 3-facet MFRM was conducted; this type of model evaluates sources of variability in situations where the performance of subjects on various tasks is rated by observers.25 The analysis was based on candidates' average scores for each station. Minifac, v3.71.4 (Winsteps.com, Beaverton, OR) was used to calibrate the three facets simultaneously onto a single logit scale. The MFRM provides measures describing the severity of each interviewer, the difficulty of each station, and the ability of each candidate, and it can adjust candidate scores based on these facets to provide a more accurate reflection of candidate performance. It also enables calculation of the percentage of total variance in candidate scores accounted for by the three facets included in the model.
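For reference, the rating-scale form of the many-facet Rasch model on which this analysis rests is commonly written as

$$\ln\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - C_j - D_i - F_k$$

where P_nijk is the probability that candidate n receives rating k from interviewer j at station i, B_n is candidate ability, C_j is interviewer severity, D_i is station difficulty, and F_k is the threshold at which ratings k-1 and k are equally likely. (This is the standard formulation used by Facets-family software such as Minifac; the exact parameterization may differ in detail.)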
Minifac provides mean-square (MnSq) fit statistics describing the degree to which each interviewer, candidate, and station fits the MFRM. These come in two forms: unweighted Outfit MnSq scores, which are sensitive to outliers, and information-weighted Infit MnSq scores, which are less sensitive to outliers. A MnSq value of 1 indicates that the facet behaved exactly as the model expected; values greater than 1 indicate more variability than expected, and values less than 1 indicate less variability than expected. Mean-square values greater than 2.0 can disrupt the MFRM by introducing excessive variability, while MnSq values less than 0.5 represent too little variability but do not destabilize the model.26
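To illustrate how these fit statistics are computed (a sketch of the standard Rasch formulas operating on model residuals, not Minifac's internal implementation):

```python
import numpy as np

def fit_mean_squares(observed, expected, variance):
    """Compute Rasch infit and outfit mean-square statistics for one facet
    element (e.g., one interviewer) from its ratings.

    observed : ratings actually given
    expected : model-expected ratings
    variance : model variance of each rating
    """
    observed, expected, variance = map(np.asarray, (observed, expected, variance))
    z_sq = (observed - expected) ** 2 / variance      # squared standardized residuals
    outfit = z_sq.mean()                              # unweighted: sensitive to outliers
    infit = (variance * z_sq).sum() / variance.sum()  # information-weighted: dampens outliers
    return infit, outfit

# Both statistics equal 1.0 when observed variability matches model expectations.
infit, outfit = fit_mean_squares([7.0, 5.5, 8.2], [6.8, 5.9, 7.7], [1.1, 1.2, 1.0])
print(round(infit, 2), round(outfit, 2))
```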
After the initial MFRM was run, MnSq values were examined by hand for each candidate. Because values greater than 2.0 have the potential to destabilize the model, candidates with Infit or Outfit MnSq values of this magnitude were identified as outliers, and the corresponding anomalous data points were removed from the analysis. Five data points (0.98% of the total) were removed, leaving 506 data points in the final analysis.
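A sketch of this screening step might look like the following (the data layout and the column names `candidate_id` and `z_resid` are hypothetical):

```python
import pandas as pd

def drop_anomalous_points(ratings: pd.DataFrame, fits: pd.DataFrame,
                          mnsq_cutoff: float = 2.0, z_cutoff: float = 2.0) -> pd.DataFrame:
    """Flag candidates whose infit or outfit MnSq exceeds the cutoff, then
    drop only their most anomalous ratings (largest standardized residuals
    from the initial fit) before refitting the MFRM."""
    flagged = fits.loc[(fits["infit"] > mnsq_cutoff) | (fits["outfit"] > mnsq_cutoff),
                       "candidate_id"]
    anomalous = (ratings["candidate_id"].isin(flagged)
                 & (ratings["z_resid"].abs() > z_cutoff))
    return ratings.loc[~anomalous]
```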
All survey data were analyzed using descriptive statistics in SPSS for Windows, v21 (IBM, Armonk, NY). The study was reviewed by the institutional review board of the University of North Carolina at Chapel Hill and deemed exempt from further review.
RESULTS
As seen in Table 1, 73 candidates were assessed with the MMI during Candidates' Days. Fifty (69%) of the MMI participants were female, 33 (45.2%) were white, and the mean (SD) age was 22.1 (2.8) years. The mean composite PCAT score of MMI participants was 80.6 (17.7), and the mean incoming GPA was 3.5 (0.3). Table 2 lists average MMI scores by station, along with the standard deviation and range of candidate scores at each station. Some station ranges include a decimal, such as station 3, because average station scores were used for the MFRM. Detailed psychometric analyses of the MMI are reported elsewhere and include seven factors with high factor loadings and low intercorrelations, high internal consistency for each station, and weak correlations with all academic factors.11
Table 1. Candidate Demographics at the Satellite Campus (n=73)
Table 2. Multiple Mini-Interview Station Scores
Rasch measures from the 3-facet MFRM accounted for 48.3% of the total variance in candidates' MMI scores, leaving 51.7% of the variance unaccounted for by the model. Rater severity accounted for 9.1% of the variance, candidate ability for 36.2%, and station difficulty for 3.0% (Figure 1).
Figure 1. Sources of Variability in Candidates' MMI Scores Based on the Three-Facet Many-Facet Rasch Measurement (MFRM).
All 15 interviewers had Infit and Outfit MnSq values less than 1.7, meaning that no interviewer displayed a significantly unexpected degree of variability in scoring candidates. Three of the 15 interviewers (20%), however, had Infit and Outfit MnSq values less than 0.5. These low values indicate low variability in their ratings of candidates, suggesting their ratings did not discriminate between candidates to the expected degree.
Of the 73 candidates included in the final analysis (506 data points), six (8.2%) had Infit or Outfit MnSq values greater than 1.7, and 11 (15.1%) had Infit or Outfit MnSq values less than 0.5. When transformed to the logit scale, candidate ability measures spanned -2.1 to 1.3 logits (Figure 2).
Figure 2. Variable map showing noncognitive ability measures for the 73 candidates estimated by the many-facet Rasch measurement (MFRM) using multiple mini-interview (MMI) scores, interviewer severity, and station difficulty measures. Distributions of interviewer, or rater, and station data are also displayed in their respective columns. All data points are plotted on a common equal-interval logit scale from -3 to 2. The horizontal dotted lines in the "Scale" column indicate the scale category thresholds, which illustrate the point at which the likelihood of receiving the next higher rating is equivalent to the likelihood of receiving the next lower rating.
Of the 73 candidates who participated in the MMI, 45 (62%) completed the online survey about their experience with and perceptions of Candidates' Day. Twenty percent of respondents indicated having participated in an MMI prior to interviewing at UNC. On a 5-point Likert scale (1=strongly disagree, 5=strongly agree), 80% agreed or strongly agreed that "The interviewers got to know me through the questions I answered" [4.1 (0.9)], and 78% agreed or strongly agreed that "Overall, I thought I did well in the MMI" [4.1 (0.8)]. When asked to consider the entire Candidates' Day, including the MMI, 100% of respondents agreed or strongly agreed that "I had positive interactions with current students, faculty, interviewers, and staff" [4.8 (0.4)], and 97.5% agreed or strongly agreed that "Candidates' Day was a positive experience" [4.6 (0.5)]. Following Candidates' Day, 97.5% of respondents indicated that they were still interested in attending the school.
DISCUSSION
As the number of pharmacy school satellite campuses increases, it is important to describe the methods used to maintain the quality of admissions and to evaluate how those methods perform. In addition, conducting interviews on the satellite campus where students plan to enroll allows schools to ensure that prospective students are introduced to the faculty members, culture, and community of that campus. The MMI is an increasingly popular interview tool supported by a growing body of literature.20 Based on a 3-facet MFRM analysis of candidates' MMI scores, the data from this study suggest that the MMI as used on a satellite campus is valid and reliable, and survey results suggest it is an acceptable interview method for satellite campus admissions.
The analysis presented in this paper supports previous findings from other settings that the MMI can reliably separate candidates based on measures of ability. In addition to the relatively large proportion of variance explained by candidate ability, a comparatively small proportion of variability was attributable to raters, approximately one quarter of the percentage attributable to candidates. As in previous MFRMs, minimal variability was associated with station difficulty. Some positive findings of this model may be attributable to the thorough training of the interviewers involved in the MMI. Combining data from two admissions cycles may also have contributed, as interviewers may have gained experience during the first cycle, increasing the consistency of their ratings.
Although rater severity accounted for an amount of variance similar to that reported in other MFRMs,22,23 some raters' MnSq statistics suggested less variability in ratings than expected. Three raters' Infit and Outfit MnSq values were less than 0.5, indicating they likely did not sufficiently discriminate between candidates when scoring their performance. Low MnSq values are not ideal, but unlike values greater than 2.0, they do not destabilize the model. No rater had a MnSq greater than 1.7, suggesting that no rater scored candidates with an unexpected degree of variability. To discriminate effectively between more and less qualified candidates, raters should use the entire rating scale (1-10); as seen in Table 2, raters appear to have done so, with most station ranges nearly spanning the scale. Further interviewer training and experience may improve rater scoring patterns and the ability of the MMI to discriminate between candidates.
The model reported in this paper accounts for a relatively high proportion of variance compared with other published analyses of the MMI, yet it leaves just over 50% of the variability in candidate scores unaccounted for. This suggests there is room for improvement in the MMI. Further research on the MMI and on techniques for decreasing variability in scoring may prove useful for improving the process. Refining scenarios to better target the intended constructs, employing ongoing interviewer training, and using experienced interviewers could also improve consistency. It is also important to consider that this MFRM used a relatively small sample because of the smaller size of the satellite campus MMI, and that the sample pooled data from two admissions cycles.
The MFRM offers insight into the performance of the MMI by providing statistics that describe interviewer rating patterns, station appropriateness, and sources of variability in the process. It also provides adjusted scores for each candidate, or "fair scores," calculated from interviewer rating patterns and station difficulty. While this functionality is unique to the MFRM, the school uses a holistic approach to admissions that considers multiple factors, not just MMI scores. Because of this holistic process, it is difficult to determine whether any admissions decision would have changed had adjusted MMI scores been used. Regardless of whether a school chooses to use "fair scores" in its admissions decisions, the MFRM still has utility in assessing the MMI, and it provides evidence that the MMI can be effectively implemented on a satellite campus as part of the admissions process. This study supports the validity and reliability of the process and provides insight into how it can be improved. Future research will examine whether MMI scores correlate with academic performance, advanced pharmacy practice experience performance, and placement after graduation.
Surveys administered to candidates shortly after Candidates' Day revealed generally favorable impressions of the MMI. This is important to recognize because the MMI is a more rigorous, and less personal, process than a traditional structured interview; other parts of Candidates' Day provide opportunities for questions and more personal interactions. Because candidates rotate through multiple stations with different interviewers, they can recover from a bad first impression or a misstep in a single scenario, which may also contribute to favorable impressions. It is worth noting that candidates may have completed the survey before being offered admission, which could introduce bias into their responses despite the surveys being anonymous.
The MMI can be used on a satellite campus to reliably separate candidates by ability while engaging them in an experience they perceive as positive. This matters when balancing a fair and equitable admissions process with opportunities for candidates to experience the culture of the satellite campus, which can differ from that of the main campus. Candidates' Days on the two campuses were identical, but faculty members, staff, and students from the satellite campus led its sessions. This allowed candidates to interact with individuals from the satellite campus, which could help them better determine whether the campus was a good fit for them. Offering candidates the opportunity to interview and interact with faculty members and students from that campus may also increase the rate at which offers to the campus are accepted.
CONCLUSION
This study suggests that the MMI is a valid and reliable method of interviewing candidates on a satellite campus, adding to a body of evidence supporting its use as an admissions tool. Survey results also suggest the MMI is acceptable to candidates. A 3-facet MFRM found that the largest share of variability in candidates' scores was attributable to candidate ability, although approximately half of the variability was unexplained by the model. The MMI may be improved with further interviewer training and greater interviewer experience, and continued research will aid in identifying factors that can improve its reliability.
- Received July 14, 2015.
- Accepted December 11, 2015.
- © 2016 American Association of Colleges of Pharmacy