Abstract
Objective. To use an expert consensus-building process to develop a rubric that can be used by multiple evaluator types to assess Doctor of Pharmacy students’ patient communication skills.
Methods. Faculty and staff members from six schools and colleges of pharmacy collaborated on a multi-step expert consensus-building process to create the final version of a communication rubric. First, faculty and patient content experts evaluated each item in the rubric for its relevance, criticality, and global comprehensiveness using a five-point Likert scale (0=not at all, 4=to a high extent). Descriptive statistics were used to analyze the resulting data. Faculty members evaluated the results and came to a consensus on the second version of the rubric. A corresponding codebook was developed and refined through a two-phase process.
Results. The initial communication rubric was evaluated by 13 expert reviewers. Mean global comprehensiveness on the rubric was 3.83 for faculty experts and 3.5 for patient experts. After evaluating results from the expert consensus-building process, 14 items on the rubric did not change, five items were revised, three items were removed, and two items were added. The second version of the instrument included 20 items in six topic areas. A codebook was finalized to increase scoring consistency for the 20 communication items.
Conclusion. Overall, content experts concluded that the rubric had high global comprehensiveness. Collaboration involving faculty members from multiple schools of pharmacy resulted in a 20-item communication rubric and codebook that can be used to increase consistency in scoring student pharmacists’ patient communication skills.
INTRODUCTION
Effective patient care communication by pharmacists has been shown to improve medication adherence and reduce medication errors.1,2 As such, colleges of pharmacy use a variety of strategies to teach patient care communication skills.3,4 It is imperative that these communication skills be critically assessed and that feedback be provided to improve and refine student performance.
Communication rubrics have been created within pharmacy education and are used to evaluate skills in a multitude of scenarios, including actual and simulated patient care experiences.5-8 Within the pharmacy profession, there is currently no universally accepted communication rubric for assessing student pharmacists’ professional communication skills across a variety of evaluators. A standardized communication rubric was needed that could be used in multiple settings (experiential, laboratory, simulation, etc) by multiple evaluator types (preceptors, faculty, residents, patients, and students) and that is reliable, valid, and free of interrater variability. Such a rubric could help pharmacy educators establish benchmarks to ensure that all students graduating from a college of pharmacy are proficient in their communication skills and that students at other colleges and schools of pharmacy are held to similar evaluation standards.
The aim of the current project was to develop a rubric using an expert consensus-building process that could be implemented by multiple users to assess student performance of patient communication.
METHODS
A multifaceted approach, as illustrated in Figure 1, was applied to develop the Professional Communication in a Patient Encounter Rubric (communication rubric) and corresponding codebook. The Big Ten Academic Alliance-Performance Based Assessment Collaborative (BTAA-PBAC) is a group of faculty members and instructors from nine schools, including eight Big Ten schools and colleges of pharmacy and the University of Illinois at Chicago (UIC) that teach skills-based laboratory courses. The mission of the BTAA-PBAC is to work towards evolving and shaping performance-based assessment practices for excellence in evidence-based pharmacy education. To begin the process, each BTAA-PBAC school submitted rubrics they had used in the evaluation of students during performance-based assessments. Communication-focused components of each rubric were extracted and compiled into one rubric. Members of the BTAA-PBAC who taught Doctor of Pharmacy (PharmD) students communication skills vetted the rubric. Each item on the rubric was analyzed, discussed, and refined until group consensus was reached. The resulting draft of the communication rubric was piloted by several schools, which led to additional minor changes being made.
Figure 1. Expert consensus-building process used to evaluate a universal evaluator rubric to assess pharmacy students’ communication skills.
Key: BTAA-PBAC: Big Ten Academic Alliance – Performance-Based Assessment Collaborative, UW: University of Wisconsin – Madison, IA: University of Iowa, UIC: University of Illinois at Chicago, MN: University of Minnesota, RU: Rutgers University, OSU: The Ohio State University
Subsequently, a multi-step expert consensus-building process was undertaken to develop the finalized rubric. External content experts completed a survey to assess content validity. Content experts were defined as either faculty members or instructors teaching in a required pharmacy communications course at a Big Ten pharmacy school who were not involved in the development of the rubric (faculty experts) or as individuals employed as standardized patients (patient experts). To provide context for the survey, experts were given two scenarios in which they played the role of the patient. They evaluated every item on the rubric for relevance (How well does this item relate to the purpose of the evaluation?) and criticality (How crucial is it that the item be evaluated?) using a five-point Likert scale (0=not at all, 4=to a high extent). Additional questions were asked regarding the global comprehensiveness of the rubric (assessed on the same Likert scale) and identification of important items that should be evaluated by a rater to assess student communication skills. Finally, reviewers responded to three short-answer questions regarding omitted items that should be considered for inclusion, organization of the rubric, and additional comments.
Quantitative data from the consensus-building process were analyzed using descriptive statistics. Themes were identified and reported for the three qualitative questions. Members of the BTAA-PBAC applied results of the expert consensus-building process to make final edits to the rubric (Table 2).
Next, three pharmacy faculty members from the University of Wisconsin-Madison (UW) School of Pharmacy developed a codebook to standardize the scoring of individual communication items. The codebook described the content and structure of the rubric and provided guidance on the scoring of each item. A draft of the codebook was vetted by the nine schools represented within the larger BTAA-PBAC group, and additional edits were made based on group consensus.
The communication rubric and codebook were piloted in two phases to evaluate student communication during a patient education session. In phase one, seven laboratory faculty members and one fourth-year student pharmacist, each representing one of three pharmacy schools (UW, University of Iowa [IA], and UIC), evaluated students during live skill performance at their home institutions. These individuals subsequently brought suggested edits to an in-person meeting of the BTAA-PBAC. Codebook revisions were discussed and implemented after group consensus was achieved. Phase two pilot testing was designed to increase consistency of item scoring through clear interpretation of the codebook. Two simulated patient education sessions were video-recorded and made available to faculty members at six BTAA-PBAC schools (UW, IA, UIC, University of Minnesota [MN], Rutgers University [RU], and The Ohio State University [OSU]). Phase two faculty evaluators used the rubric and codebook to score student communication in each of the videos. Rubric item scoring was compared between faculty evaluators, and discrepancies were identified. Faculty members and instructors from the BTAA-PBAC schools then reconvened to evaluate the results of the phase two pilot and came to consensus on a final version of the codebook (Appendix 1).
This project was approved by the institutional review board (IRB) at each of the six schools that participated in piloting the communication rubric and codebook. Students participating in video recording agreed to participate and signed informed consent documents as required by the IRB.
RESULTS
The initial communication rubric was evaluated by 13 content experts, including six faculty members and one graduate student (faculty experts) from seven pharmacy schools and six standardized patients from UW (patient experts). All available data were included in the analysis; however, not all experts completed every survey item. The relevance and criticality ratings by faculty and patient experts for each rubric item are included in Table 1. Mean global comprehensiveness of the communication rubric was 3.83 (n=6) for faculty experts and 3.5 (n=6) for patient experts. Common themes in the short-answer responses included suggestions to add omitted items on patient follow-up, a summary of the interaction, and the use of open-ended questions. Both groups of experts also encouraged the evaluation of only one item per line and the development of detailed descriptions for each item to support consistent use of the rubric. One prominent theme that emerged from the patient expert feedback was that the communication rubric was comprehensive and well organized.
Table 1. Expert Review by Faculty and Patient Experts of a Universal Evaluator Rubric to Assess Pharmacy Students’ Communication Skills
After the BTAA-PBAC evaluated and discussed the results from the consensus-building process, 14 rubric items were left unchanged. Five items were revised: in item 11, “spoke clearly” was changed to “spoke clearly and confidently”; in item 12, “used words that patient could understand” was changed to “used patient friendly language”; in item 18, “respected patient’s time” was changed to “utilized time efficiently”; in item 19, “achieved mutual understanding and if applicable agreement with plan” was changed to “achieved mutual agreement with plan”; and in item 21, “concluded encounter smoothly” was changed to “provides closure to encounter”. Three items were removed: item 2, “addressed patient by name”; item 15, “listened to patient responses”; and item 16, “appeared confident in abilities.” Two items were added: “listened to and engaged with patient” and “used teach-back.” The second version of the instrument included 20 items that covered six topic areas (Table 2).
Based on feedback from the content experts, a comprehensive codebook was developed and piloted to provide scoring guidance on the 20 communication items evaluated to enable rater consistency. Throughout the three phases of internal codebook consensus building (Figure 1), scoring guidance for all items was refined and finalized, including the addition of descriptive examples for items and how each should be scored.
Table 2. Professional Communication in a Patient Encounter Rubric (Post-revisions Implemented during Expert Consensus Process)
DISCUSSION
The ability to communicate with patients is a vital component of health care and is emphasized in the Accreditation Council for Pharmacy Education (ACPE) Standards and the Pharmacists’ Patient Care Process (PPCP).9,10 Numerous learning activities designed to improve health care provider communication skills are described in the pharmacy education literature.4 Simulated and standardized patient encounters are common activities used to evaluate progression in student communication skills. Despite recognition of the importance that communication plays in patient care, there is a high degree of subjectivity in determining what exemplifies effective communication and in measuring the appropriate progression of pharmacy students’ communication skills. There is a deficit of standardized rubrics that can be reliably used to evaluate those skills, specifically in student pharmacists. Consequently, individual pharmacy educators often develop and use home-grown assessment tools. Members of the BTAA-PBAC reported having used rubrics that were revised over time without historical context, only used internally, or had not been validated. It is important to utilize a communication assessment tool that limits ambiguity, bias, subjectivity, and scoring inconsistency across assessors, activities, students, and institutions.
Validated tools for assessing communication skills exist, but few have been developed or studied for pharmacy education. The Patient-Centered Communication Tool (PaCT) is one validated tool for assessing student pharmacist communication with patients; however, two limitations of this rubric are that it was validated within a single institution and among similar cohorts of students.8
Having faculty members from multiple institutions collaborate to develop a communication assessment instrument through an expert consensus-building process should help to minimize institutional bias and result in an instrument that provides more consistent feedback. The Professional Communication in a Patient Encounter Rubric is the product of such a collaboration and reflects a shared vision of effective communication among stakeholders, as supported by the high mean global comprehensiveness ratings from faculty and patient experts. The accompanying codebook provides a mechanism for improved scoring consistency across multiple assessor types (students, faculty members, simulated patients, and preceptors).
The Professional Communication in a Patient Encounter Rubric was designed to assess student communication throughout a curriculum for all types of simulated and real patient encounters. Use of this rubric may allow faculty members to easily identify challenges in student mastery of communication and to consistently assess student communication over time; however, the assessment of clinical content will require a separate evaluation.
The process described in this manuscript required numerous meetings, communications, and revisions involving faculty members located across the country. Challenges encountered throughout this process included developing shared terminology with consistent interpretation and managing BTAA-PBAC membership turnover. A strong desire by all members to proceed with the research and strong collegial support were necessary for the success of this project. Further research by the BTAA-PBAC will establish interrater reliability and validation of the rubric and codebook across multiple rater groups.
CONCLUSION
Patient communication skills must be taught and assessed throughout the PharmD curriculum. The BTAA-PBAC applied a multifaceted approach, including an expert consensus-building process, to develop and evaluate the Professional Communication in a Patient Encounter Rubric and corresponding codebook. While the rubric does not include assessment of clinical knowledge, this research demonstrates the importance and ability of multiple colleges and schools of pharmacy to collaborate on the development of a universally accepted communication rubric. External faculty and patient content experts provided valuable input necessary to finalize the rubric and the corresponding codebook.
Appendix 1. Codebook for Professional Communication in a Patient Encounter Rubric
- Received February 4, 2020.
- Accepted July 19, 2020.
- © 2020 American Association of Colleges of Pharmacy