Abstract
Objective. To evaluate how pharmacy programs administer and evaluate American Association of Colleges of Pharmacy (AACP) curriculum quality perception surveys for continuous quality improvement, and to compare usage across the Academy with the recommendations in the Principles of Good Use: AACP Curriculum Quality Perception Surveys document.
Methods. A 27-item survey instrument examining how schools used the curriculum quality survey was created and administered between March and June 2019 to assessment contacts of accredited schools and colleges of pharmacy. Descriptive statistics were performed for each survey item.
Results. Of the 140 programs invited to participate, 88 (62.8%) responded. Programs triangulated curriculum quality survey data with additional existing data (39.8%) or collected additional data sources for triangulation with the survey data (54.5%). Programs reported making modifications in the following areas: curriculum (85.2%), communication (75.0%), student services (68.2%), policy and process (61.4%), and professional development (53.4%). Most programs reported that the assessment lead was responsible for oversight of the curriculum quality survey.
Conclusion. Of respondents, 66% were familiar with the AACP Principles of Good Use document, and the results indicate that institutions are generally following its recommendations. Survey analysis revealed that a substantial number of programs use curriculum quality survey data to make meaningful programmatic improvements. Future work should center on further development of best practices to help schools and colleges of pharmacy use curriculum quality survey data effectively for continuous quality improvement.
INTRODUCTION
The American Association of Colleges of Pharmacy (AACP) provides US schools and colleges of pharmacy with curriculum quality perception surveys to administer to faculty, graduating students, alumni, and preceptors in their Doctor of Pharmacy (PharmD) programs. The survey data are used as evidence during a school or college’s Accreditation Council for Pharmacy Education (ACPE) accreditation self-study process. Additionally, Standard 24.2 of ACPE Standards 2016 requires that every pharmacy school’s assessment plan include standardized and comparative assessments and lists the AACP surveys as an example of assessments that can be used.1
Schools and colleges of pharmacy use the curriculum quality perception surveys as part of the self-study process in compliance with ACPE standards, but it is unclear whether the surveys are examined only in preparation for an upcoming self-study or are also used in ongoing assessment efforts, such as continuous quality improvement of pharmacy programs.1 The AACP provides schools with the Principles of Good Use: AACP Curriculum Quality Perception Surveys (hereafter referred to as the “good use” document) to guide the administration of the curriculum quality perception surveys and the subsequent analysis of the resulting data.2 The good use document makes recommendations on the timing of assessments, the target audience, the relationship between survey administration timing and the self-study, how to interpret data and results, and how to turn results into action, and it states that the intent of the surveys is both accreditation and continuous quality improvement.2 However, the document gives limited guidance regarding use of the surveys for continuous quality improvement.
From the literature, it appears that other health professions do not use a standardized approach to curriculum quality perception surveys in the way that pharmacy programs do. Further, there is limited published information on use of the AACP curriculum quality perception surveys. Two articles describe the use of data from the curriculum quality perception surveys to evaluate and improve programs.3,4 Haines and colleagues developed a mentoring program in response to low scores on the faculty survey.3 Lyons and colleagues used graduating student survey responses to assess student- and school-level predictors of pharmacy residency attainment.4 However, there is a lack of literature to drive best practices for how schools should use these data to improve their programs. Instead, most information available to schools resides in the “good use” document, but the extent to which schools use this tool was unknown.
The objective of this study was to examine how schools administer and evaluate each of the four AACP curriculum quality perception surveys within their institution. This included evaluating whether programs were aware of the “good use” document and comparing their reported use of survey data to the guidance it outlines. This inquiry served to inventory how schools are using the curriculum quality perception surveys and whether that usage aligns with AACP’s recommendations in the “good use” document. This information benefits the Academy by helping to identify the overall usefulness of the surveys, areas for improvement, and areas where additional guidance may be needed. In addition, the information can assist programs in developing a systematic approach to using the AACP curriculum quality perception surveys for continuous quality improvement.
METHODS
The Ferris State University Institutional Review Board (IRB) determined this research did not meet the federal definition of human subjects research and was, therefore, exempt from further approval. After a review of the literature, survey items were created to ask how schools were using the curriculum quality perception surveys at their institution, including response rates, incentives for participation, approach to analysis, interpretation, utilization, and responsibilities for each step of the surveys. The “good use” document served as a framework for survey development, and the major components outlined in the document were used: sampling, administration, and data use.2 Ten individuals from nine institutions, comprising faculty and administrative members of the AACP Assessment Special Interest Group (SIG) who serve in various assessment-related roles, identified a mutual interest in the curriculum quality perception surveys at the 2018 AACP Annual Meeting. These individuals met biweekly over several weeks to create an initial list of survey items. The team then dispersed into small groups to work on sections until consensus was achieved, after which the entire team met again to finalize the survey items. Team members were asked to review and pilot the items using information from their institutions. The revised survey was then piloted by select assessment contacts within the Academy, and questions were adjusted based on feedback. Survey items were primarily closed-ended categorical items requiring a single response or “select all that apply”; some items included an “other, please specify” option.
A list of assessment contacts at each school is maintained by the AACP Assessment SIG and includes individuals with assessment oversight responsibility at each institution (ie, academic affairs or assessment dean, assessment coordinator, or chair of the assessment committee). The research team obtained this list and updated it, as necessary, through existing contacts and publicly available information on pharmacy school websites. Where more than one individual was listed as having assessment responsibility, the highest-ranking individual was chosen to receive the survey. This list was used to distribute the survey to each of the 140 institutions in the ACPE accreditation process. The final survey was administered via QuestionPro (QuestionPro, Inc) in an anonymous format between March and June 2019. The email contained a brief overview of the research project, an invitation to complete the survey, a PDF copy of the survey to aid in gathering information needed to answer specific items, and a recommendation to forward the email if another individual would be a better respondent. Schools were sent an initial email in March and reminders at approximately one and two months. After two months, schools were individually contacted by a member of the research team via email or phone to further increase the response rate. Demographic data were obtained from participants but were not linked to specific schools, in accordance with the IRB-approved protocol. Obtaining demographic information allowed us to compare respondents to the Academy as a whole.
Data were analyzed in SPSS (IBM Corp), and descriptive statistics were performed for each survey item. For open-response items, such as “other, please specify,” a small group of the research team (three members with experience coding questions in prior research and with SPSS) convened to determine an approach to categorizing responses. The responses were then categorized and included in the other quantitative results. In some cases, such as descriptors of “other” responses, the information is presented in the text of the results.
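To make the analysis concrete, the following is a minimal sketch of the kind of frequency summary performed for each closed-ended item, shown in Python rather than SPSS for illustration; the item wording and response options are hypothetical and are not drawn from the study instrument.

```python
import pandas as pd

# Hypothetical responses to one closed-ended categorical item,
# eg, "Who oversees the curriculum quality perception surveys?"
responses = pd.Series([
    "Assessment lead", "Assessment committee", "Assessment lead",
    "Curriculum committee", "Assessment lead", "Other",
])

# Descriptive statistics per item: frequency counts and percentages.
summary = pd.DataFrame({
    "n": responses.value_counts(),
    "percent": (responses.value_counts(normalize=True) * 100).round(1),
})
print(summary)
```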
RESULTS
Of the 140 programs invited to participate, 88 responded (62.8% response rate). Characteristics of the survey respondents are presented in Table 1. Overall, the responding schools and colleges were demographically representative of all accredited schools and colleges in the United States in terms of location, public or private status, program length, and accreditation status.
Table 1. Characteristics of All Accredited US Schools and Colleges of Pharmacy Included in a Survey to Determine Administration and Evaluation of the American Association of Colleges of Pharmacy Curriculum Quality Surveys
The administration details and response rates for each of the surveys (graduating student, faculty, preceptor, alumni) are presented in Table 2. Response rates were high for the graduating student and faculty surveys but were very low for the preceptor and alumni surveys. Nine institutions noted that the graduating student survey was a requirement for graduation or for admission to a required event such as a board review session. Incentives offered for the graduating student survey included a drawing for an annual membership in a professional organization, a raffle for a $50 Amazon gift card, North American Pharmacist Licensure Examination (NAPLEX) preparation materials, other gift card drawings, and tickets for parking at graduation. Gift cards were also used as incentives for the faculty, preceptor, and alumni surveys. A raffle for a state pharmacy association membership was offered as an incentive for the preceptor and alumni surveys, and school-branded items were offered for the alumni survey.
Table 2. Use of the American Association of Colleges of Pharmacy Curriculum Quality Surveys by US Schools and Colleges of Pharmacy (N=88)
Information was also obtained on the sampling approaches for the preceptor and alumni surveys. Most institutions sent the survey to all preceptors of introductory pharmacy practice experiences (IPPEs) and advanced pharmacy practice experiences (APPEs) (n=69, 78.4%), with the remainder using a sample (n=17, 19.3%) or APPE preceptors only (n=2, 2.3%). Institutions that used a sample of preceptors based the sample on those who precepted multiple students in the past year (at least three, five, or six), a random selection of 50% of IPPE and APPE preceptors, those who precepted at least one student in the past three to five years, or the 200 or 300 preceptors who take the most students (either IPPE or APPE). For the alumni survey, approaches varied and included all alumni who graduated in the past year (n=2, 2.3%), two years (n=6, 6.8%), three years (n=31, 35.2%), four years (n=5, 5.7%), or five years (n=25, 28.4%). Some institutions used a sampling approach (n=5, 5.7%) or other approaches (n=14, 15.9%), which included all alumni for whom they had contact information; all alumni who graduated in the last two years and the last four years; alumni who graduated one and three years prior; all alumni who graduated two years ago; all alumni who graduated no less than three and no more than five years ago; or all alumni except the graduating class.
The results of the benchmark process and review for each of the surveys are presented in Table 3. In most institutions, the director or dean of assessment or the assessment committee determined actionable benchmarks, and these benchmarks were updated annually or at least every two to five years. In addition to the predominant approaches of comparing with internal trends, peer data, and national data, several institutions described other ways data are organized and prepared for review, including a peer and competitor trend report, comparison between campuses, statistical analysis, line graphs generated from raw data, and data given to a third-party vendor for analysis and display.
Table 3. Responsibility for Setting Thresholds for the American Association of Colleges of Pharmacy Curriculum Quality Surveys (N=88)
The results for office and committee roles for each of the surveys are presented in Table 4. Most programs reported that the assessment lead (director or dean of assessment) or the assessment committee was responsible for oversight of the curriculum quality perception surveys. Nearly all schools reported that their approach to the curriculum quality perception surveys was one of both compliance and quality improvement, with the data obtained being used to make programmatic changes. Nearly two-thirds of respondents were familiar with the “good use” document. In addition to the areas noted in the table as having been affected by curriculum quality perception survey data, other responses included recruitment and civility.
Table 4. US Pharmacy Schools’ Oversight of the American Association of Colleges of Pharmacy Curriculum Quality Survey (CQS) Process (N=88)
The results for office and committee roles in identifying problematic areas are presented in Table 5. Again, the assessment lead (dean or director of assessment) or the assessment committee was consistently identified as responsible. Others listed as responsible for identifying problematic areas included the self-study committee or its chair, a biostatistician, or the director of development.
Table 5. US Pharmacy Schools’ Responsibility for Identifying Problematic Items Associated With the American Association of Colleges of Pharmacy Curriculum Quality Surveys (N=88)
DISCUSSION
The “good use” document provides guidance on the administration and use of the AACP’s curriculum quality perception surveys.2 Although only 66% of respondents indicated that they were familiar with the guidance document, alignment was seen between the guidance statements and their responses. Many of the items within the “good use” document follow common approaches used in programmatic assessment and research methodology. Thus, because of the integration of the surveys into ACPE standards and the enhanced focus on robust assessment practices within Standard 24, institutions may follow best practices without explicitly referencing the document. Nevertheless, promoting the document to programs and individuals, particularly those in assessment positions, would be beneficial and may foster a greater understanding of the surveys and how they can be used for continuous quality improvement as well as for accreditation.
The “good use” document specifies that the graduating student survey must be administered every year, while the faculty, preceptor, and alumni surveys should be administered every three to four years.2 The survey administration frequency reported in our study demonstrated adherence to these recommendations.2 The majority of schools deploy the graduating student survey annually and the faculty, preceptor, and alumni surveys at least every three to four years, as recommended. In our study, the graduating student and faculty surveys were associated with the highest response rates, which were generally greater than 60%. However, the preceptor and alumni response rates were lower (generally less than 60%), with only 9% of schools reporting a response rate greater than 60% on the preceptor survey and 3.4% on the alumni survey. The response rates reported in our study are similar to those reported nationally in 2019, where the national response rate was 75.1% (ranging from 4.9% to 100%) for the graduating student survey; 74.2% (ranging from 4.3% to 100%) for the faculty survey; 17.2% (ranging from 3.5% to 55.2%) for the preceptor survey; and 56.3% (ranging from 0.7% to 61.0%) for the alumni survey.5-7 The “good use” document notes that response rates greater than 60% generally provide greater confidence in the survey data. Thus, given the low national and individual school response rates for the preceptor and alumni surveys, institutions may not have great confidence in those results. This should be of concern to the Academy as well as to ACPE.
The higher response rates on the graduating student and faculty surveys compared with the preceptor and alumni surveys could be due to institutions’ closer proximity to graduating students and faculty, which allows in-person reminders and/or a set time for respondents to complete the survey (eg, as part of graduation activities). Additionally, these two groups may have a greater stake in completing their survey because of their proximity to the program and interest in seeing the program succeed. Maintaining accurate and current contact information for alumni after they graduate is considerably more challenging. Preceptors are often tasked with completing multiple student evaluations and providing programmatic feedback, which may lead to survey fatigue and lower response rates. We examined the survey responses of pharmacy schools that had achieved higher response rates on the preceptor and alumni surveys to determine whether their written responses revealed any strategies used to increase the response rate (eg, incentives, telephone or personalized follow-up); however, there was no indication that such strategies had been used.
The “good use” document clearly specifies that all students graduating from the PharmD program should be surveyed. However, further clarification in the document of the inclusion criteria for faculty, preceptors, and alumni may be needed. The document specifies that all full- and part-time faculty responsible for teaching in the PharmD program should complete the faculty survey, yet our study did not examine how schools define part-time faculty (eg, based on a fraction of a full-time equivalent, adjunct status, adjunct preceptor, or time spent teaching). This lack of clarity may lead schools to use different criteria.
The “good use” document suggests that an appropriate “representative sample” may be used for the preceptor and alumni surveys, but it does not share guidance on how best to sample.2 The document cites the example of the alumni survey, which has historically had low response rates, and states that using a smaller, representative sample with more aggressive follow-up may reduce nonresponse bias. For the preceptor survey, the document specifies that “introductory and advanced preceptors, or an appropriate representative sample, who have been assigned sufficient students to make informed judgements about student performance and education as well as had an opportunity to form an opinion of your institution” can be surveyed. In our study, few schools used sampling for the preceptor survey (19.3%), and even fewer used it for the alumni survey (5.7%). Furthermore, the composition of those sampled varied across schools. The lack of clarity on what constitutes an acceptable sample may discourage schools from using a representative sample, whereas agreement on appropriate methods or selection criteria might encourage them to do so. Yet the volume of IPPE and APPE preceptors may reduce response rates, as a vast number of participants are invited to complete the preceptor survey; similarly, the size of each graduating class over multiple years may contribute to lower overall response rates on the alumni survey. Clarification or examples of appropriate sampling within the “good use” document would be beneficial; alternatively, agreement on appropriate sampling could be established within the Academy.
It should also be noted that data interpretation and reaction are more difficult when a smaller sample responds to the survey.8 Factors to consider when determining whether a representative sample has responded to the preceptor survey include number of years as a licensed pharmacist; practice setting; precepting of IPPEs, APPEs, or both; number of years serving as a preceptor for the school; number of students precepted; and location where the preceptor survey was completed. For the alumni survey, the year of graduation, on-time vs delayed graduation, learning environment, ethnicity, pursuit of postgraduate education/training, and employment setting would help determine whether a representative sample was selected.
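As an illustration of how a program might operationalize such factors, the sketch below draws a stratified preceptor sample in Python so that the sample mirrors the roster on one factor; the roster, column names, and 50% sampling fraction are hypothetical assumptions for the example, not recommendations from the “good use” document.

```python
import pandas as pd

# Hypothetical preceptor roster; columns reflect two of the factors above.
roster = pd.DataFrame({
    "preceptor_id": range(1, 201),
    "practice_setting": ["community", "hospital", "ambulatory", "other"] * 50,
    "years_precepting": [1, 3, 5, 10] * 50,
})

# Sample 50% within each practice setting so the sample keeps the
# roster's proportions on that stratum (an illustrative fraction).
sample = (
    roster.groupby("practice_setting", group_keys=False)
          .apply(lambda g: g.sample(frac=0.5, random_state=42))
)

# Compare stratum proportions to check that the sample is representative.
print(roster["practice_setting"].value_counts(normalize=True))
print(sample["practice_setting"].value_counts(normalize=True))
```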
The AACP also gives schools and colleges of pharmacy the option to include additional survey items in each of the curriculum quality perception surveys. Survey burden, timing, and redundancy should be considered when deciding whether to add items. Few schools use this feature (less than 30% of schools add items to the graduating student survey, and less than 26% add items to any of the other surveys). Future research could examine what information is sought through the added items as well as how schools are using data from the curriculum quality perception surveys. The “good use” document suggests that comparing school data with national averages and peer groups can be valuable. Our study suggests that this is a typical practice for most schools and provides some insight into the responsible parties. While our survey obtained information on schools’ general approaches, the specifics of how they identify “useful” data or the benchmarks used to “flag” data were not obtained. Although ACPE standards require that schools administer the AACP surveys, additional insight into best practices across the Academy on how schools use them would be helpful.
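For example, a minimal sketch of such benchmarking might flag items whose school mean falls below the national mean by more than a set margin; the item labels, means, and flagging threshold here are invented for illustration and do not come from the study or the “good use” document.

```python
import pandas as pd

# Hypothetical item-level means; a real program would use its AACP report.
data = pd.DataFrame({
    "item": ["Curriculum prepared me to practice",
             "Faculty were accessible",
             "Advising met my needs"],
    "school_mean": [3.1, 3.6, 2.9],
    "national_mean": [3.4, 3.5, 3.3],
})

# Flag items more than 0.2 points below the national mean
# (an illustrative threshold, not a published benchmark).
margin = 0.2
data["flagged"] = data["school_mean"] < (data["national_mean"] - margin)
print(data[data["flagged"]])
```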
A 2019 article in the Journal noted the need for assessment operations to consider high- vs low-impact assessment strategies.9 The authors supported a movement toward “slowing down” processes in order to manage the volume of data collected throughout the year. One suggestion was elongating timelines for the acquisition and assessment of data sets. This approach may apply not only to the collection of the surveys but also to their interpretation. We do not know how many work hours schools dedicate to the delivery, assessment of, and response to each of AACP’s curriculum quality perception surveys, regardless of whether the data are deemed useful. This is an area in which identification of best practices is needed moving forward. Pharmacy appears to require standardized surveys that medical and nursing education do not, which may or may not indicate the necessity of ensuring that these surveys are triangulated with other data before action steps are taken. Further, comparisons of accreditation-related survey requirements between health care professions are not possible at this time. If response rates for the preceptor and alumni surveys cannot be improved, the value of continuing to administer them should be evaluated in the future.
Our study provided insight into the general use of AACP’s curriculum quality perception surveys by schools and colleges of pharmacy. However, specific details on the changes or improvements resulting from the survey responses are absent and may vary from program to program. While the survey asked how often schools review their predetermined benchmarks, the specific benchmarks used were not captured. Despite numerous reminders and personal contacts, we were not able to obtain responses from all US pharmacy schools; instead, a 62.8% response rate was achieved, which limits generalizability. Although the response rate was lower than expected, the respondents were fairly similar to the Academy as a whole. The aim of the study was to gain an understanding of how the curriculum quality perception surveys were being used; however, simultaneously assessing the four survey instruments presented unique challenges. Respondents could seek assistance from other individuals within their pharmacy school to complete the study survey for any areas with which they were less familiar, as we included a copy of the survey to allow data to be gathered prior to completion. Furthermore, because assessment leads were chosen from each institution, and given their role in using the curriculum quality perception surveys for improvement and for reporting to ACPE, respondents were likely well positioned to provide accurate information. While an overview of the current approaches used by pharmacy schools to administer the curriculum quality perception surveys was captured, the schools’ level of satisfaction with these approaches is unknown. Future research should examine this further and determine ideal and effective practices. However, having a baseline understanding of pharmacy schools’ use of the AACP curriculum quality perception surveys does provide a foundation for deeper analysis to develop best practices for continuous quality improvement.
CONCLUSION
Although only 66% of respondents stated that they were familiar with the document Principles of Good Use: AACP Curriculum Quality Perception Surveys, the results of this study indicate that institutions are generally following its recommendations. Institutions should work to ensure that a representative number of responses is received, especially on the preceptor and alumni surveys. Additionally, survey results should be shared with staff, students, and external stakeholders in addition to faculty and administration. Future work should center on the development and dissemination of best practices for US schools and colleges of pharmacy regarding the effective use of curriculum quality perception survey data for continuous quality improvement.
Received February 26, 2020.
Accepted November 16, 2020.
© 2021 American Association of Colleges of Pharmacy