Abstract
Objective. To determine the validity and reliability of the Pharmacist Interprofessional Competencies Tool (PICT).
Methods. Faculty members at Ferris State University, College of Pharmacy developed the PICT, which has five interprofessional criteria (collaboration, ownership, respect, engagement, and application) and four competency levels (unacceptable, novice, competent, and proficient) to assess the interprofessional competencies of pharmacy students. Fourteen pharmacy faculty members were trained to use the PICT and then used it to assess students’ behaviors in four to six video-recorded interprofessional education (IPE) learning activities. A subset of these faculty members also evaluated the video-recorded IPE learning activities using two other previously validated interprofessional assessment tools. Psychometric analysis of the PICT, including internal consistency and inter-rater reliability, was conducted, along with correlation and factor analyses, and the results were compared with those from the other validated assessment tools.
Results. The overall internal consistency of the PICT was excellent, and item-total correlations of the individual criteria were fair to good, with the exception of the respect criterion. The PICT demonstrated excellent overall inter-rater reliability, and individual criteria rated fair to excellent, again with the exception of the respect criterion. Specific dimensions of the PICT showed high convergence with previously validated interprofessional assessment tools.
Conclusion. The PICT exhibited overall validity and reliability as an assessment tool for measuring the interprofessional competencies of pharmacy students. Although the overall validity and reliability of the assessment tool were established, the respect criterion could not be shown to be reliable or valid. Additional training and slight modifications to the PICT and associated IPE learning activities are planned to support longitudinal assessment of student performance across the curriculum.
INTRODUCTION
Interprofessional education (IPE) continues to expand within pharmacy education following the implementation of the Accreditation Council for Pharmacy Education (ACPE) Standards 2016 for Doctor of Pharmacy (PharmD) program accreditation.1 Specifically, Standards 24.3 and 25.6 describe the level of data needed for IPE, including individual and aggregate assessment of student readiness to “contribute as a member of an interprofessional collaborative patient care team” and “preparedness of all students to function effectively and professionally on an interprofessional health care team.” Appropriate and accurate assessment of student-level IPE learning activities may offer several benefits, such as providing individualized and constructive feedback and inspiring continuous improvement in educational design. Assessing students’ mastery of objectives related to IPE requires more complex techniques, including patient care simulations involving multiple disciplines. Working in concert with other health professions programs provides an efficient way for each program to deploy IPE learning activities. With this deployment comes the need to establish the competency measures associated with key interprofessional constructs that are important to each discipline’s practice.2
There are several published review articles that evaluate the available interprofessional assessment tools.3,4 Furthermore, repositories containing interprofessional assessment tools, such as those of the National Center for Interprofessional Practice and Education and the University of Toronto Centre for Interprofessional Education, are easily accessible. However, most of these tools rely on student self-reported data, including students’ attitudes toward IPE, readiness for IPE, and/or self-assessment of interprofessional competencies.5,6 Although self-report assessment tools can be useful for identifying domains when measuring interprofessional competencies, simply gathering student self-reported data is not sufficient for a school to comply with Standards 24.3 and 25.6, which require that individual student-level data be obtained to demonstrate achievement of team-readiness and interprofessional preparedness.1 While there are a handful of observer-based assessment tools for assessing the interprofessional competencies of individual health professions students, there is no one-size-fits-all assessment tool that can be used across all health professions programs, nor is there one designed specifically to document the interprofessional competencies of pharmacy students.7-12 Oates and Davidson conducted a critical appraisal of instruments used to measure outcomes in IPE.13 To be critically appraised, an instrument needed to: be available through publication or inquiry; be designed for use with health professions students in a learning situation involving more than two health professional groups; measure one or more interprofessional outcomes from the framework of educational outcomes; and have at least one peer-reviewed publication reporting the instrument’s development, reliability, and validity. Of the 140 instruments reviewed, only nine met the criteria for critical appraisal. The authors concluded that there was limited psychometric evidence supporting the use of even the qualifying instruments in assessing health professions students. Additionally, an international consensus statement on IPE published by a group of universities identified a suggested domain array for the assessment of interprofessional outcomes.14 The statement noted that learner-completed tools focusing on learners’ attitudes or perceptions are not recommended for summative assessment of outcomes. It also pointed out the need for further scholarly work to define the suite of IPE domains and the optimal timing of assessment across a professional curriculum to ensure valid and reliable verification of interprofessional competencies.
At Ferris State University (FSU) College of Pharmacy, expansion and standardization of IPE learning activities across the curriculum have taken place since 2003.15 Efforts have been made to develop an assessment tool that can be used across multiple IPE learning activities, provides useful student-level feedback, and tracks student progress in achieving interprofessional competencies.16 The final assessment tool had to allow evaluators to assess students quickly and easily during IPE learning activities, and its dimensions had to be broad enough to cover many IPE learning activities yet specific enough to yield useful data and feedback. After an iterative revision process that incorporated diverse feedback, the researchers agreed on an assessment tool, the Pharmacist Interprofessional Competencies Tool (PICT), for pilot-level implementation across the curriculum. A detailed description of the PICT’s development has been published previously.16 The PICT underwent four revisions and three pilot-testing phases. We determined that the fourth version would be the final version of the assessment tool and that it should undergo psychometric analysis to establish further measures of validity and reliability. The assessment tool can be mapped to curricular ability-based outcomes, the Interprofessional Education Collaborative (IPEC) Core Competencies for Collaborative Practice, and Entrustable Professional Activities.17,18 A full version of the PICT can be found in Appendix 1. The aim of this study was to determine the validity and reliability of the PICT in a population of pharmacy students for the purpose of documenting achievement of interprofessional competencies. As part of the validation of the PICT, an additional aim was to examine the domains associated with its measurements, in support of a continuing understanding of the array of constructs relevant to IPE.
METHODS
The study was reviewed by the Institutional Review Board at FSU and determined to be a non-human subjects research quality improvement project. The study investigators developed a training process for faculty evaluators to complete before testing the PICT. The college maintained a repository of video-recorded IPE learning activities in which third professional year pharmacy students worked with students from medicine, nursing, and physical therapy to interview a standardized patient, determine a discharge plan, and communicate the plan to the patient. Two of the study investigators (hereafter referred to as the “training faculty”) selected one of the video-recorded IPE learning activities as the training video and evaluated the student case using the PICT. The training faculty then distributed the PICT and the training video to the 14 faculty members who would be involved with data collection (hereafter referred to as the “data collection faculty”). The data collection faculty were asked to watch the video independently, evaluate the student, and return the evaluation to the training faculty. The training faculty then compiled the data and initiated a discussion with the data collection faculty about each assessment tool criterion. The goal of the discussion was to establish consensus on the competency level expected of a pharmacy student, ensuring consistent evaluation, and to answer any questions about the assessment tool.
The training faculty then watched and evaluated additional video-recorded IPE learning activities and selected six student cases spanning a wide spectrum of performances. The video recordings were approximately 40 minutes in duration. Each of the 14 data collection faculty members watched and evaluated four of the six pre-identified video-recorded IPE learning activities using the PICT. A subset of eight faculty members was then asked to watch and evaluate the remaining two video-recorded IPE learning activities in the same manner. These eight faculty members also evaluated the videos using the Interprofessional Collaborator Assessment Rubric (ICAR) and the Modified McMaster-Ottawa (MMO) four-item scale to test the convergent and divergent validity of the PICT.11,12 The ICAR and MMO were chosen for comparison with the PICT because of the overlap in the interprofessional criteria each measures, even though other aspects made them less than ideal for use at our college. For instance, the MMO has only three competency levels, whereas the other observer-based assessment tools used for skills-based assessment at the college had four. The ICAR is a long assessment tool that cannot be completed quickly, eg, during an IPE learning activity, which is necessary at our college because of the high volume of students who need to be assessed. In addition, previously published work has established the psychometric properties of both instruments, which is important when selecting instruments for this type of validity testing. A principal components analysis (PCA) with varimax rotation was conducted on the ICAR and PICT results to identify relevant factors in the measurement of IPE and for comparison to the PICT. This included the PCA diagnostics of the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Bartlett’s test of sphericity, which checks for an appropriate relationship between variables. Some dimensions of the ICAR have multiple rows of criteria for evaluation. For example, the dimension “respectful communication” assesses three different aspects of communication (communication with others, communication of opinions, and response to requests) to evaluate a student’s ability to communicate respectfully. When this is the case, the individual aspects are numbered according to the order in which they appear in that section (eg, respectful communication 3).
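To make these diagnostics concrete, the following is a minimal sketch in Python of the workflow just described, using the open-source factor_analyzer package rather than the Stata and SPSS software actually used in the study; the file name icar_ratings.csv and the data layout are hypothetical placeholders for the evaluators’ ratings.

```python
# Minimal sketch of the PCA diagnostics and varimax-rotated extraction;
# "icar_ratings.csv" is a hypothetical file with one column per ICAR
# criterion and one row per completed evaluation.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

icar_scores = pd.read_csv("icar_ratings.csv")

# Bartlett's test of sphericity: a significant result indicates the
# correlation matrix is not an identity matrix, ie, the variables are
# related enough to justify running the PCA.
chi_square, p_value = calculate_bartlett_sphericity(icar_scores)
print(f"Bartlett chi-square={chi_square:.1f}, p={p_value:.4f}")

# KMO measure of sampling adequacy, per variable and overall
# (values closer to 1 indicate more adequate sampling).
kmo_per_item, kmo_overall = calculate_kmo(icar_scores)
print(f"Overall KMO={kmo_overall:.2f}")

# Principal extraction with varimax rotation; the eigenvalues can be
# inspected to apply the greater-than-1.0 retention rule.
fa = FactorAnalyzer(n_factors=5, method="principal", rotation="varimax")
fa.fit(icar_scores)
eigenvalues, _ = fa.get_eigenvalues()
print("Eigenvalues:", eigenvalues.round(2))

# Rotated structure matrix (loadings) for the retained components.
loadings = pd.DataFrame(fa.loadings_, index=icar_scores.columns)
print(loadings.round(2))
```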
Internal consistency was measured using Cronbach alpha and interpreted as follows: <.70, unacceptable; .70-.79, fair; .80-.89, good; and ≥.90, excellent.19 Inter-rater reliability was measured using the intraclass correlation coefficient (ICC). A mixed-effects model based on consistency using single measures was used to derive the ICC. The ICC values were interpreted as follows: <.40, poor; .40-.59, fair; .60-.74, good; and .75-1.00, excellent.19 Correlation coefficients were calculated using Kendall’s tau, and absolute values of the coefficients were interpreted as follows: .00-.25, little if any; .26-.49, low; .50-.69, moderate; .70-.89, high; and .90-1.00, very high. Statistical analysis was performed using Stata 12 (StataCorp, College Station, TX) and IBM SPSS (Armonk, NY).
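As an illustration only (the study itself used Stata and SPSS), the reliability and correlation statistics described above could be computed in Python with the pingouin and scipy libraries; the small data frames below are hypothetical stand-ins for the study ratings.

```python
# Illustrative computation of the reliability and correlation statistics;
# all inline data are hypothetical stand-ins for the study ratings.
import pandas as pd
import pingouin as pg
from scipy.stats import kendalltau

# Wide format: one row per evaluation, one column per PICT criterion
# (competency levels coded 1=unacceptable through 4=proficient).
pict = pd.DataFrame({
    "collaboration": [3, 4, 2, 3, 4, 1],
    "ownership":     [3, 3, 2, 4, 4, 2],
    "respect":       [4, 4, 3, 4, 4, 3],
    "engagement":    [3, 4, 2, 3, 4, 1],
    "application":   [2, 3, 2, 3, 4, 2],
})

# Internal consistency across the five criteria
# (<.70 unacceptable; .70-.79 fair; .80-.89 good; >=.90 excellent).
alpha, ci = pg.cronbach_alpha(data=pict)
print(f"Cronbach alpha={alpha:.2f} (95% CI {ci})")

# Inter-rater reliability from long-format data; ICC3 in pingouin is
# the two-way mixed-effects, consistency, single-measures ICC.
long = pd.DataFrame({
    "case":  [1, 1, 2, 2, 3, 3],              # student case evaluated
    "rater": ["A", "B", "A", "B", "A", "B"],  # faculty evaluator
    "score": [3, 3, 2, 3, 4, 4],              # PICT rating
})
icc = pg.intraclass_corr(data=long, targets="case",
                         raters="rater", ratings="score")
print(icc.loc[icc["Type"] == "ICC3", ["ICC", "pval"]])

# Correlation between two criteria via Kendall's tau.
tau, p = kendalltau(pict["collaboration"], pict["engagement"])
print(f"Kendall tau={tau:.2f}, p={p:.3f}")
```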
RESULTS
The internal consistency of the PICT as measured by Cronbach alpha was .91. Additional item-total statistics are shown in Table 1. The results of the ICC-based analysis of inter-rater reliability are shown in Table 2; the overall single-measure ICC was .85 (p<.001). Because both the PICT and the MMO are built with single-criterion arrays, the comparison between the two assessment tools was done with a correlation analysis. The correlation matrix comparing the two scales is shown in Table 3. The PICT collaboration criterion correlated highly with MMO roles and responsibilities and approached a high correlation with MMO collaboration. The PICT ownership and engagement criteria correlated highly with MMO roles and responsibilities.
Table 1. Item-Total Correlations for the Pharmacist Interprofessional Competencies Tool
Table 2. Intraclass Correlation Coefficient Results for Inter-rater Reliability for the Pharmacist Interprofessional Competencies Tool
Table 3. Correlation Matrix for the Pharmacist Interprofessional Competencies Tool
The results of the PCA for the ICAR are shown in Table 4. The overall KMO measure of sampling adequacy was .82, indicating that the sampling was adequate. The result of Bartlett’s test of sphericity was significant (p<.001), indicating an appropriate relationship between variables for the purpose of running the PCA. Individual-level KMO values were all greater than .7, with the exceptions of ICAR active listening (.64) and conflict management (.68). Five components with eigenvalues greater than 1.0 were retained, explaining 76.7% of the total variance. The five-component rotated structure matrix for the ICAR is shown in Table 4.
Table 4. Interprofessional Collaborator Assessment Rubric Rotated Structure Matrix for PCA With Varimax Rotation (Five Components)
The five-component model was carried forward, and the PICT criteria were added to the PCA. With the PICT criteria added, the overall KMO measure of sampling adequacy was .82, indicating that the sampling remained adequate, and Bartlett’s test of sphericity remained significant (p<.001). Individual-level KMO values were all greater than .7, with the exceptions of ICAR active listening (.64), PICT respect (.70), and PICT application (.65).
The PICT criteria were added to the factor analysis using the same extraction parameters outlined previously. All five PICT criteria loaded on component 1, with loadings as follows: collaboration=.87; engagement=.84; ownership=.83; application=.77; and respect=.63. The ICAR criteria that loaded on component 1 remained mostly unchanged and included: collaborative relationships (loading=.82), information sharing 1 (loading=.82), respectful communication 2 (loading=.80), shared leadership (loading=.79), team discussion 1 (loading=.79), integration of information from others (loading=.78), respectful communication 3 (loading=.75), respectful communication 1 (loading=.71), team discussion 2 (loading=.70), respect for different perspectives 1 (loading=.67), communication strategies 1 (loading=.60), and communication strategies 2 (loading=.56).
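A minimal sketch of this combined extraction, again in Python with factor_analyzer and with hypothetical file names standing in for the actual rating data, is shown below; the final step lists the variables loading on component 1.

```python
# Sketch of the combined PICT/ICAR extraction; both CSV file names are
# hypothetical placeholders for rating data aligned row-by-row on the
# same evaluations.
import pandas as pd
from factor_analyzer import FactorAnalyzer

icar_scores = pd.read_csv("icar_ratings.csv")
pict_scores = pd.read_csv("pict_ratings.csv")
combined = pd.concat([icar_scores, pict_scores], axis=1)

# Same extraction parameters as the ICAR-only analysis: five components,
# principal extraction, varimax rotation.
fa = FactorAnalyzer(n_factors=5, method="principal", rotation="varimax")
fa.fit(combined)

loadings = pd.DataFrame(
    fa.loadings_,
    index=combined.columns,
    columns=[f"component_{i}" for i in range(1, 6)],
)

# Variables loading on component 1 (here using a .50 cutoff), the
# component on which all five PICT criteria loaded in this analysis.
component_1 = loadings.loc[loadings["component_1"].abs() >= 0.50,
                           "component_1"]
print(component_1.sort_values(ascending=False).round(2))
```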
Given these loading results, a correlation matrix was run between the PICT criteria and the ICAR criteria that loaded on component 1; it is shown in Table 5. The PICT collaboration, ownership, and engagement criteria correlated highly with ICAR respectful communication 1, 2, and 3; collaborative relationships; integration of information from others; and information sharing 1. The PICT collaboration and ownership criteria additionally correlated with ICAR shared leadership. Although the PICT respect and application criteria loaded on the same component as these ICAR criteria, they did not correlate highly with any of them; there was moderate correlation between the PICT respect and application criteria and several of the ICAR criteria (Table 5).
Table 5. Correlation Matrix for the Pharmacist Interprofessional Competencies Tool and Relevant Interprofessional Collaborator Assessment Rubric Criteria (N=48, Kendall’s Tau)
DISCUSSION
The literature has established the need for assessment tools examining both educational outcomes and IPE knowledge and skills.20 However, most of the available tools rely on student self-assessment of attitudes toward IPE, readiness for IPE, or interprofessional competencies.3-6 A 2015 report by the Institute of Medicine recognized the need to move assessment beyond knowledge and skills and toward competency development.21 The development and subsequent validation of the PICT adds an instrument to the assessment suite for measuring the interprofessional competencies of pharmacy students in an efficient manner.
There was a need to develop an assessment tool that was applicable to IPE learning activities and could be completed quickly by evaluators. The assessment tool also needed to provide robust data to measure the ability-based outcomes of the curriculum and to address ACPE Standards 24.3 and 25.6.1 The PICT accomplishes these aims in a reliable and valid manner.
The PICT demonstrated excellent overall internal consistency. The PICT criteria collaboration, ownership, and engagement demonstrated good item-total correlations, while application demonstrated a fair item-total correlation. The respect criterion demonstrated an unacceptable item-total correlation, which is confirmed by the slight improvement in Cronbach alpha when it is removed. With regard to inter-rater reliability, the PICT scored excellent overall. Collaboration, ownership, and engagement showed good to excellent inter-rater reliability, application showed fair, and respect showed poor. The weaker performance of the respect criterion on both item-total correlation and inter-rater reliability indicates the need for further examination of that component.
The PICT collaboration, ownership, and engagement criteria showed high convergence with MMO collaboration and roles and responsibilities. This was predicted because of the similarities in question design between the two instruments. Divergence was expected for the MMO collaborative patient-family centered approach and conflict management/resolution items, as these were not part of the video-recorded IPE learning activities or assessed on the PICT. Divergence was also expected for the PICT application criterion because it relates to pharmacist-specific knowledge that would not be relevant to the MMO. The final divergence, in the PICT respect criterion, was unexpected; we had anticipated that the respect criterion on the PICT would correlate with the collaboration criterion on the MMO.
The PICT criteria collaboration, ownership, and engagement showed high convergence with the ICAR measures of respectful communication 1, 2, and 3; collaborative relationships; integration of information from others; and information sharing 1. This was anticipated because of the similarities in question design between the two instruments. Additionally, the PICT collaboration and ownership criteria showed high convergence with ICAR shared leadership. Shared leadership involves sharing and alternating leadership when appropriate for each discipline, which helps to explain the convergence with the PICT collaboration and ownership criteria. Two areas of divergence were identified: the PICT criteria application and respect. Divergence for the PICT application criterion was expected because the ICAR does not assess knowledge and the IPE learning activity was not designed to assess student application. However, the PICT respect criterion was a surprising area of divergence, as we had predicted it would correlate with the ICAR criterion respectful communication.
The PICT respect criterion showed an unacceptable item-total correlation and poor inter-rater reliability, which was unexpected. Furthermore, it was a divergent criterion on both the MMO and ICAR assessments. This finding could be explained, and corrected, through enhanced faculty training, assessment tool modification, and IPE learning activity revision. Faculty members were not trained to observe or document students’ nonverbal cues indicating respect, such as head nodding, eye contact, or body language; therefore, students may have demonstrated respect that was missed because of faculty members’ lack of attention to nonverbal communication. Finally, the IPE learning activity in its current format is not designed to adequately assess respect; however, because respect is an important component of IPE, modifications are being considered.
The development of the PICT was a lengthy and intentional process, as described previously in the literature.16 Multiple versions were developed, tested, and refined using a systematic approach, ultimately leading to the final product. Intentional training of evaluators was conducted to ensure consistent and appropriate evaluation of both the students and the assessment tool. While we established the overall validity and reliability of the PICT, this study does have limitations. First, the PICT underwent psychometric analysis at a single institution using only third professional year pharmacy students. Furthermore, only pharmacy faculty members were used for data collection; using evaluators from other health professions would have broadened the scope of individuals who might assess pharmacy students during IPE learning activities. The PICT is also specific to pharmacy students, so it is not generalizable to other health professions students; if multiple health professions students, including pharmacy students, were being evaluated during an IPE learning activity, then multiple assessment tools might need to be used. Beyond refining the respect criterion of the PICT, next steps for research include identifying how the PICT performs in other IPE learning activities, with pharmacy students in other stages of the PharmD program, and at other institutions. Additionally, identifying how the PICT performs in a longitudinal fashion across multiple IPE learning activities and years of the pharmacy program would benefit PharmD programs.
CONCLUSION
The PICT, an assessment tool used to assess pharmacy students’ ability to perform as part of a simulated interprofessional health care team, proved reliable and valid. In establishing the overall reliability and validity of the assessment tool, we were not able to establish the reliability or validity of the respect criterion. Additional faculty training in administering the PICT and slight modifications to the assessment tool and associated IPE learning activities are planned. Because its validity and reliability have been established, the PICT will be used to assess pharmacy students participating in various IPE learning activities throughout the curriculum.
ACKNOWLEDGMENTS
The authors acknowledge the following individuals for the time they spent watching the video-recorded IPE learning activities to assist us with data collection: Teresa Bailey, PharmD, BCPS, BCACP; Allison Bernknopf, PharmD, MSMI, BCPS; Deeb Eid, PharmD; Heather Girand, PharmD, BCPPS; Annie Ottney, PharmD, BCPS; Mary Frances Ross, PharmD, MPH; Kyle Schmidt, PharmD, BCCCP; Dane Shiltz, PharmD, BCPS; and Curtis Smith, PharmD, BCPS.
Appendix 1. The Pharmacist Interprofessional Competencies Tool
Received April 26, 2019; accepted December 16, 2019.
© 2020 American Association of Colleges of Pharmacy