Abstract
Objective. To design an assessment of practice readiness using blended-simulation progress testing.
Design. A five-station, blended simulation assessment was developed to evaluate patient care outcomes in first- and third-year pharmacy (P1 and P3) students, as well as first-year postgraduate (PGY1) pharmacy residents. This assessment of practice readiness included knowledge and performance evaluations administered as a progress test.
Assessment. Knowledge and performance data were captured for 18 PGY1 residents, 108 P3 students, and 106 P1 students. P3 students scored significantly higher than P1 students across all evaluations. Third-year pharmacy students scored significantly lower than PGY1 residents in interprofessional communication and attitudes of accountability in a standardized colleague/mannequin model station, and in patient communication in a standardized patient station.
Conclusion. Learners demonstrated evolving skills as they progressed through the curriculum. A blended simulation integrated progress test provides data for improvement of individual student clinical skills, informs curricular advancement, and aligns curricular content, process, and outcomes with accreditation standards.
INTRODUCTION
Quality assurance in health science curricula is facilitated through assessment of student competence related to educational and professional outcomes.1,2 Standards developed by the Liaison Committee on Medical Education (LCME) require medical schools to demonstrate curricular quality through assessments of students’ fundamental abilities including problem solving, interprofessional collaboration, and communication, among others, that will prepare them for the contemporary practice of medicine.2 The Accreditation Council for Pharmacy Education (ACPE) mandates pharmacy curricula be structured to facilitate student achievement of outcomes essential to the practice of the profession, including patient-centered care, problem-solving, patient advocacy, interprofessional collaboration, and communication.3
While traditional approaches to assessment, such as multiple-choice examinations, may reliably test students’ fundamental knowledge, important aspects of curricular quality and practice integrity may go underemphasized, including interpersonal skills, lifelong learning, professionalism, and integration of core knowledge into decision making.4 The most valid examinations for assessing competence in these outcomes emulate actual practice activities.5-7 Authentic assessments are defined as performance assessments deployed under realistic conditions in which students are asked to perform real-world tasks that demonstrate meaningful application of essential knowledge and skills.6,8,9 Authentic assessments emphasize the ability to apply knowledge and skills in practical contexts and settings.10 In health professions curricula, simulation is often used to mimic real-life practice because of the limitations of using real patients in actual settings, such as patient availability, safety, and the need to standardize student experiences. Simulation strategies include, but are not limited to, standardized patients and colleagues, virtual patients, and high-fidelity mannequin models.11,12 However, reports on authentic assessments used to evaluate learner readiness to handle patient care responsibilities, and on the use of those assessments to inform curricular quality, are limited.
D’Angelo and colleagues developed a simulation-based exit examination to assess medical resident readiness for operative independence.13 Authors reported a wide range of errors and procedure outcomes across study participants and highlighted the need for assessments to improve programmatic awareness of residents’ learning needs. Ragan et al used standardized patients during an assessment examination to identify doctor of pharmacy (PharmD) students at risk of underperforming at advanced pharmacy practice experience (APPE) sites.14 Authors concluded that the process efficiently assessed students’ professional performance in realistic practice settings and was able to predict a substantial percentage of the students likely to perform poorly in practice.
To our knowledge, the use of blended simulation (defined as the use of multiple simulation modalities) to assess practice readiness has not been reported. This manuscript describes the design, curricular integration, and evaluation of a novel authentic assessment deployed as a blended simulation progress test of learner readiness for practice. The Readiness Assessment was developed to promote student learning, curricular quality assurance, and alignment with accreditation standards. Progress testing is defined here as a test administration procedure in which multiple equivalent assessments are administered to learners at distinct points in their professional development.15-17
DESIGN
A taskforce of eight faculty and staff members at the University of Pittsburgh School of Pharmacy was charged with investigating and integrating learning strategies into the PharmD curriculum that might more rapidly propel students to practice-ready levels of clinical competence (ie, reach the level of expert faster). To meet this charge, an authentic competence assessment of practice readiness was developed to assess the progress of students in each year of the curriculum in their mastery of key professional outcomes. Design of the Readiness Assessment incorporated three elements: development of a blended simulation assessment to evaluate practice readiness regarding clinical decision-making skills, interprofessional/patient communication, and attitudes of accountability for patient outcomes; integration of the assessment as a progress test across the curriculum; and analysis of the resulting data to improve individual student clinical skills, inform curricular modification and advancement regarding student “readiness” for practice, and further align curricular content, process, and outcomes with accreditation standards.
As part of the curriculum at the University of Pittsburgh School of Pharmacy, students are routinely taught and evaluated through patient simulation, including the use of standardized patients and colleagues, virtual patients, and high-fidelity mannequin models.18-22 The Readiness Assessment integrated these pedagogies to represent the most realistic, valid, and reliable patient simulation experiences achievable within the school, while ensuring the assessment could be adequately controlled. Diabetes was chosen as the primary focus of the assessment given that it is threaded throughout the curriculum in the form of extensive teaching and learning opportunities. Additionally, diabetes management is a core patient care emphasis across patient care environments and therefore a content area in which all graduates should demonstrate expertise.
The five-station assessment incorporated multiple modes of patient simulation (Figure 1) structured to correspond with Miller’s pyramid of clinical competence,23 similar to previous studies using simulation for assessment.11,14 The assessment was a multi-station simulation of the experiences of one index patient, Theodore Lyton. The same case and assessment were used for all learners regardless of experience. Knowledge and performance evaluations within this assessment were set to a practice-ready level of competence, defined by the taskforce as the knowledge and skills of a PGY1 resident. Five members of the taskforce were experienced preceptors within the PGY1 program and thus capable of determining levels of difficulty. Foundational diabetes knowledge and translation of that knowledge to practice were included.
Figure 1. Description of the 5-station Readiness Assessment with Miller's pyramid delineation23
Station 1 was designed to mirror clinical practice “pre-rounding” as learners reviewed patient data within an electronic health record (EHR) training domain (PowerChart, Cerner Millennium System; Cerner Corporation, Kansas City, MO) to complete an initial patient “work-up” of Theodore Lyton. The EHR case was developed to detail a thorough inpatient experience, including vital signs, clinical notes, laboratory data, medication administration, microbiology data, allergies, and intakes/outputs. At the onset of the EHR review, learners were also provided clipboards containing pertinent patient findings coinciding with the EHR. The clipboards were meant to simulate a clinical pharmacist’s assessment of the EHR and provided the learner a reference to consult through the remainder of the assessment. The clipboards contained all patient data needed to make justified clinical decisions during subsequent evaluations within the Readiness Assessment. Clipboards were not intended to replace EHR review, but simply to serve as notes for the learners and to ensure that all learners were provided the patient information necessary for success.
At station 2, functional knowledge of the pharmacology and pathophysiology of diabetes was assessed via multiple-choice questions. The questions were developed from predefined learning objectives by clinical faculty members who were experts in the management of diabetes mellitus and in diabetes education. Learning objectives were derived from contemporary guidelines and consensus statements on the pathophysiology, risk factors, signs and symptoms, consequences, and management of diabetes.24,25 Questions were then evaluated and approved by the taskforce for appropriate level of difficulty. Multiple-choice questions were administered via an online virtual patient platform (vpSim/DecisionSim, LLC, Chadds Ford, PA, http://vpsim.pitt.edu/shell/Login.aspx) to maximize the efficiency of the assessment because learners would next encounter a virtual patient within this platform. The virtual patient platform also allowed for more efficient collection and analysis of data on student performance.
Station 3 was the virtual presentation of Theodore Lyton as part of the ongoing assessment. Application of functional diabetes knowledge was assessed through clinical decision-making within the virtual patient platform. Short clinical scenarios related to Theodore Lyton’s care were constructed in a “key-features” approach26,27 by the same clinical faculty experts using identical predefined learning objectives developed for the multiple-choice questions. Learners were asked to choose the most correct clinical recommendations from a list of possibilities to maximize the care of the patient. Questions following each scenario concentrated only on the decisions critical to the solution of the problem (ie, key features). The short clinical scenario structure allowed for a greater number of cases and decisions to be assessed within a shorter timeframe. All learners encountered identical virtual patient experiences.
Demonstration of practice skill was evaluated by two subsequent simulations involving Theodore Lyton at stations 4 and 5. A standardized colleague and mannequin model simulation at station 4 targeted student performance in pharmacist-physician interactions. A standardized patient experience targeted student performance in pharmacist-patient interactions at station 5. Unique scripts were designed for each station to place learners in specific situations in which successful resolution of pharmacy problems depended on the display of practice-ready competence. All standardized patients and colleagues were hired actors and were unknown to learners.
Evaluations for stations 4 and 5 were predicated on student achievement of learning objectives and performance indicators developed around three patient care outcomes selected to represent practice readiness (Table 1). These three clinical metrics included clinical decision-making (proficiency), interprofessional and patient communication (collaboration), and attitudes of accountability (professional covenant). These metrics were developed from educational and professional outcomes1,28 and confirmed through a survey of faculty members with precepting responsibilities for PharmD students and PGY1 residents at the University of Pittsburgh. Surveyed faculty members were asked to list the attributes separating the clinical skills of PGY1 learners from those of fourth-year PharmD students.
Table 1. Learning Objectives Developed for Readiness Assessment Stations 4 and 5
Rubrics were developed to assess learner demonstration of performance indicators and practice-ready competence tied to the three selected patient care outcomes throughout stations 4 and 5. Performance indicators for each outcome were developed through critical evaluation of the literature1,29,30 by clinical faculty experts and were approved by the taskforce. Rubric scores were captured by pharmacy faculty evaluators using iFormBuilder (Zerion Software, Inc, Herndon, VA) for iPad. Uploading the rubric to this application permitted faculty members to grade online in real time, while allowing for instantaneous posting of results to a learning management system. Faculty evaluators were not located in the same room as the learner and therefore did not interfere with the integrity of the experience. Student learning levels (ie, P1, P3) were not disclosed to faculty members prior to evaluation.
Rubric validity and reliability were determined through pilot testing conducted the previous year. Seventeen learners across the pharmacist training spectrum (ie, P1, P2, P3, P4, PGY1) were included in the pilot and were evaluated in real time by pharmacy faculty members. Student performance at stations 4 and 5 was video-recorded in the pilot, allowing taskforce members to reevaluate students using the rubric after the assessment was completed. Taskforce members then compared these scores to the scores submitted by faculty evaluators in real time to ensure reproducibility of rubric results and to determine interrater reliability.
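As a concrete illustration of this interrater check, pairwise Pearson correlations between reviewers' rubric scores could be computed as sketched below (the correlation approach is described with the other statistical methods later in this section). This is a minimal sketch only; the reviewer labels, scores, and scale are hypothetical placeholders, not study data.

```python
# Minimal sketch: pairwise Pearson correlations between reviewers' rubric
# scores for the same pilot learners. All values below are illustrative
# placeholders, not data from the study.
from itertools import combinations
from scipy.stats import pearsonr

reviewer_scores = {
    "reviewer_1": [18, 22, 15, 20, 17, 21, 19],
    "reviewer_2": [17, 21, 16, 19, 18, 22, 18],
    "reviewer_3": [19, 22, 14, 20, 16, 21, 20],
}

# Compare every pair of reviewers; lists are assumed to be ordered by learner.
for (name_a, a), (name_b, b) in combinations(reviewer_scores.items(), 2):
    r, p = pearsonr(a, b)
    print(f"{name_a} vs {name_b}: r = {r:.2f} (p = {p:.3f})")
```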
The Readiness Assessment was integrated into the PharmD curriculum and administered as a progress test in that the assessment was administered to learners at various points in their professional development. Through this test administration procedure, less-experienced learners (ie, P1 level) received identical assessments as those more experienced (ie, P3, PGY1 level). Through this assessment model, faculty members could gauge “progress” of clinical skill development across the curriculum, and learners could gauge their individual “progress” towards practice-ready clinical competence.
The assessment was administered at the simulation education center affiliated with the University of Pittsburgh (WISER, 2015, http://www.wiser.pitt.edu/). Three groups of learners (ie, P1, P3, PGY1) were assessed across two consecutive days (eight hours per day). The P3 pharmacy class and half of the PGY1 class were assessed on one day, while the P1 class and the remainder of the PGY1 class were assessed the next day. Each learner was given 30 minutes to complete the assessment. Individual results were emailed to all learners, detailing class aggregate results as well as individual strengths and areas for improvement. While students did not receive specific answers to assessment questions and scenarios, recommendations were provided as to how to improve on key ability outcomes. The assessment was considered a “hurdle” requirement, ie, students had to complete it in order to progress through the curriculum.
Median scores and interquartile ranges were calculated for each learner level at each station to describe the data. The data were then analyzed for differences in learner scores (P1, P3, PGY1) at each station, and scores for each learner level were also compared for specific performance indicators within the objectives for stations 4 and 5. To compare the three groups with regard to the final score for each of the four scored stations and the specific indicators at stations 4 and 5, ANOVAs were conducted. However, the Bartlett test was significant, indicating that the assumption of equal variances could not be made. Therefore, a Kruskal-Wallis equality-of-populations rank test was performed to determine whether the final scores of the three groups differed at each of the four scored stations and for the specific indicators at stations 4 and 5. If the Kruskal-Wallis test result was significant (p<.05), a Mann-Whitney test was conducted to determine where the difference lay among the three groups. Each indicator on the rubric, a dichotomous variable of “completed task” or “did not complete task,” was then evaluated: the P1 class was compared to the P3 class, and the P3 class was compared to the PGY1 residents. The chi-square test or Fisher’s exact test was used, as appropriate, to determine differences between learner levels for each rubric performance indicator. To evaluate rubric interrater reliability, three reviewers scored all students during pilot testing, and a Pearson correlation was conducted to assess the strength of the linear association between reviewers’ scores.
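The following is a minimal sketch of this analysis pipeline in Python, assuming (hypothetically) that scores are stored in a long-format file named readiness_scores.csv with columns level, station, and score; the file name, column names, and example counts are illustrative assumptions rather than details reported in the study.

```python
# Sketch of the analysis described above: Bartlett check, Kruskal-Wallis test,
# pairwise Mann-Whitney follow-up, and chi-square/Fisher's exact tests for
# dichotomous rubric indicators. Data layout and names are hypothetical.
import pandas as pd
from scipy import stats

# Hypothetical long-format file: one row per learner per scored station with
# columns "level" (P1/P3/PGY1), "station", and "score".
scores = pd.read_csv("readiness_scores.csv")

for station, df in scores.groupby("station"):
    groups = [g["score"].values for _, g in df.groupby("level")]

    # Bartlett test: a significant result means equal variances cannot be
    # assumed, so the nonparametric Kruskal-Wallis test is used instead of ANOVA.
    _, p_bartlett = stats.bartlett(*groups)
    _, p_kw = stats.kruskal(*groups)

    if p_kw < 0.05:
        # Pairwise Mann-Whitney tests to locate the difference
        # (P1 vs P3 and P3 vs PGY1, mirroring the comparisons reported here).
        p1 = df.loc[df["level"] == "P1", "score"]
        p3 = df.loc[df["level"] == "P3", "score"]
        pgy1 = df.loc[df["level"] == "PGY1", "score"]
        _, p_p1_p3 = stats.mannwhitneyu(p1, p3, alternative="two-sided")
        _, p_p3_pgy1 = stats.mannwhitneyu(p3, pgy1, alternative="two-sided")
        print(station, round(p_bartlett, 3), round(p_kw, 3),
              round(p_p1_p3, 3), round(p_p3_pgy1, 3))

def compare_indicator(completed_a, total_a, completed_b, total_b):
    """P value for one dichotomous rubric indicator compared between two cohorts."""
    table = [[completed_a, total_a - completed_a],
             [completed_b, total_b - completed_b]]
    chi2, p, dof, expected = stats.chi2_contingency(table)
    if (expected < 5).any():  # small expected counts: use Fisher's exact test
        _, p = stats.fisher_exact(table)
    return p

# Illustrative example: an indicator completed by 60 of 106 P1 students
# versus 95 of 108 P3 students (counts are made up for demonstration).
print(compare_indicator(60, 106, 95, 108))
```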
EVALUATION AND ASSESSMENT
A proposal submitted to the institutional review board at the University of Pittsburgh regarding investigation into the effectiveness of the Readiness Assessment was approved under exempt status. Interrater reliability analysis performed during pilot testing for the station 4 and 5 rubrics demonstrated a large positive relationship between the scores of all reviewers. Correlations ranged from r=.57 to r=.953 for the three outcomes: clinical decision making, attitudes of accountability, and communication. Twenty-five PGY1 residents, 108 P3 students, and 111 P1 students completed the Readiness Assessment. Knowledge and performance data were captured for 18 (72%) PGY1 residents, 108 (100%) P3 students, and 106 (96%) P1 students. Scores on multiple-choice questions at stations 2 and 3, as well as rubric scores at stations 4 and 5, demonstrated an advancing skillset from P1 learner to PGY1 trainee (Figure 2). Specifically, P3 scores were significantly higher than P1 scores across all evaluations (p<.05). The PGY1 residents scored better than P3 students on every evaluation, and significantly better than P3 students in interprofessional communication and attitudes of accountability at station 4 and in patient communication at station 5 (p<.05 for all).
Figure 2. Learner scores per station presented as median (line), interquartile range (box), and 10-90% confidence intervals (whiskers)*
At station 4, P3 students scored significantly higher than P1 students across five of seven clinical decision-making performance indicators, four of five attitudes of accountability performance indicators, and all six interprofessional communication indicators (Table 2). At this station, P3 student scores were comparable to PGY1 scores across five of seven clinical decision making performance indicators, but significantly inferior to PGY1 scores across three of the five attitudes of accountability performance indicators, and five of six interprofessional communication indicators.
Table 2. Readiness Assessment Station 4 and 5 Learner Scores per Outcome and Performance Indicatora
At station 5, P3 students scored significantly higher than P1 students across six of 11 clinical decision-making performance indicators, four of eight attitudes of accountability indicators, and six of seven patient communication performance indicators. At this station, P3 student scores were comparable to PGY1 scores across nine of 11 clinical decision-making performance indicators and six of eight attitudes of accountability indicators, but significantly inferior to PGY1 scores across three of seven patient communication indicators.
First- and third-year learners were surveyed regarding their perceptions of the assessment (Table 3). Response rates were 86% and 50% for the P1 and P3 classes, respectively. The P1 survey responses were highly positive and favorable towards the assessment, despite their lower performance. A majority of students indicated that the assessment met its objectives and was organized, informative, and valuable to their learning. They noted that the simulations were realistic, challenging, and valuable for assessing practice readiness in the curriculum. As expected, P1 students were divided when asked if the assessment and its simulations were appropriate for their level of learning. The P3 students indicated that the assessment was appropriate for their learning level. They found the assessment and its simulations to be organized, logically sequenced, realistic, and challenging. However, responses were divided among the P3 class regarding their level of agreement that the assessment had value, effectiveness, and utility.
Table 3. Post-assessment Student Survey Responses
DISCUSSION
The goal of this project was to develop an authentic assessment of patient care competence and readiness for practice. To meet this goal, an assessment incorporating multiple simulation modalities was employed to capture knowledge- and performance-based evaluations of key patient care outcomes: clinical decision making, interprofessional communication, and attitudes of accountability. This authentic assessment of practice readiness was administered as a progress test and integrated into the PharmD curriculum to measure progressive student development. Resultant data provided evidence of learner achievement in aggregate and at the individual level.
Validity of the assessment as a measure of patient care readiness was maintained through multiple approaches. Indirectly, patterns of results were consistent with expectations that scores would improve with advancing training levels. Because practice-ready competence was tested, learner cohorts were expected to, and did, demonstrate evolving skills in the three clinical metrics as they progressed through the curriculum, with further gains following residency training. Key performance indicators improved significantly from P1 to P3 levels, and then again from P3 to PGY1 learning levels. Additionally, student perceptions of the assessment were consistent with expectations. Only 40% of surveyed P1 students indicated that the assessment was appropriate for their level of learning, while the majority (74%) of P3 students found the assessment appropriate. Regarding simulation scenarios within the assessment, 43% and 80% of P1 and P3 students, respectively, indicated that the simulations were appropriate for their level of education. These were encouraging findings given that the assessment was constructed at a practice-ready competence level.
Construct validity of the assessment was enhanced through informed selection, development, and design. Assessment outcomes were developed from educational and professional outcomes.1,28 Use of progress testing as an assessment administration method was chosen based on literature demonstrating its effectiveness regarding student knowledge and skill acquisition throughout a curriculum.16,17 High-fidelity simulations were used to enhance authenticity and real-world application.10,12,14,31 Learning objectives and performance indicators developed to evaluate each outcome were constructed from diabetes education and management competencies.24,25 Finally, multiple-choice questions were used to assess knowledge; “key-features” clinical scenarios were developed for knowledge application and problem-solving; and observation-based scores in patient simulations were captured to demonstrate competency in professional practice outcomes requiring direct patient or physician interactions.17
Authenticity of the assessment further contributed to its content and predictive validity, as the fidelity of the testing environment matched well with the tasks required in actual clinical pharmacy practice, including EHR review, clinical problem-solving, professional accountability, and patient and/or physician interactions. The assessment relied on simulation to maximize fidelity and authenticity, an approach that has been shown to improve patient care skills in pharmacy students12,14,19,31-33 and has even replaced clinical time in pharmacy and nursing curricula.3,33 Surveyed P1 and P3 students indicated that the simulations were realistic (83% and 67%, respectively), stressful (63% and 69%, respectively), and challenging (91% and 80%, respectively).
Global reliability, defined as the generalizability of the assessment to other assessment situations (eg, P4 student evaluations), was maximized. Assessment formats and, more importantly, the specific assessment tasks required of students within each format were configured to coincide with Miller’s pyramid to measure a student’s ability to “know,” “know how,” and “show how.”23 This assessment strategy is expected to better predict the student’s likelihood of performing these skills in practice.23,34 Scores from performance indicators within stations 4 and 5 depict an advanced P3 learner who is practice-ready in many aspects but likely requires additional experience to improve levels of confidence and engagement with patients and physicians. Furthermore, learners perceived the assessment to be a valuable way to assess readiness for practice (73% and 50% of P1 and P3 learners, respectively). Relative to unstructured assessments of student clinical skills, such as the creation of a SOAP note following a patient case review, generalizability of results was further enhanced by testing learners on a greater number and range of knowledge- and performance-based clinical scenarios. Also, assessment outcomes, formats, performance indicators, and evaluation tools were predefined and standardized to better target clinical competence. Furthermore, simulated experiences mirrored clinical practice, thereby adding authenticity and an additional layer of global reliability. A majority of surveyed P1 and P3 students found the assessment to be coordinated in a logical sequence (88% and 69%, respectively) and agreed that it should be further incorporated into the curriculum (67% and 52%, respectively).
To enhance the objectivity of the direct observation evaluations integrated throughout the assessment, only practice-based pharmacy faculty members were selected as evaluators. This enhanced the reliability of the performance-based assessments, as these assessments required a judgment of quality regarding alignment with clinical outcomes. Faculty members were not visible to learners; however, all learners were informed that faculty members would be evaluating their performance. Faculty members were trained prior to the assessment using a detailed, standardized evaluation rubric. Rubrics were pilot tested prior to the assessment to ensure interrater reliability.
Because assessment drives learning, the Readiness Assessment was designed to act as a catalyst for student learning; it incorporates key aspects of learning that can reasonably be built into effective teaching, such as active engagement, stress, reward and reinforcement, and multitasking.35,36 Learners must learn to perform the meaningful tasks they will encounter as pharmacists, and this assessment required learners to integrate and apply their knowledge. A majority of P1 students indicated that simulation within the assessment improved their clinical decision-making (55%), attitudes towards accountability for patient outcomes (67%), and interprofessional communication skills (61%). It was apparent from these data that early learners appreciated the opportunity to practice meaningful pharmacist skills. Furthermore, the Readiness Assessment allowed learners to construct meaning around what they had learned in the curriculum as it applies to patient care, better preparing them for direct patient care responsibilities. All learners were provided multiple opportunities to demonstrate their abilities throughout the assessment and received faculty feedback regarding their strengths and areas for improvement. These multiple learning opportunities enhanced the educational impact of the assessment. More P3 students agreed with the statements that simulation improved clinical decision-making (41%), attitudes towards accountability for patient outcomes (48%), and interprofessional communication skills (48%) than disagreed with or were neutral about each statement.
Educational impact was further enhanced through using the progress testing method. Progress testing minimizes the negative steering effect an assessment system can have on student learning and encourages students to adopt deep levels of thinking and self-direction.17 Early learners were able to appreciate the knowledge and skills of the profession through actual practice, while more seasoned learners could gauge their progress towards key patient care outcomes. Progress testing afforded learners independent, practical opportunities to develop and refine key skills required by the school and the profession.
All simulations within the assessment were common formats and tools used throughout the PharmD curriculum, which enhanced faculty and student comfort with this format. In perception surveys, P1 students indicated that the assessment was helpful to their learning (66%) and informative (67%), and considered it a valued experience (76%). Furthermore, specific advantages to the learner were integrated into the assessment, such as minimizing time to completion and ensuring that assessment formats, content, and tasks appeared relevant, realistic, and fair. Surprisingly, P3 students were divided in their level of agreement with the helpfulness, informative nature, and value of the assessment, with similar percentages of students agreeing and disagreeing with each variable. This survey finding may be related to a lack of enthusiasm among many P3 students about having to perform in a new assessment just prior to the end of their didactic curriculum. Even though all students were given six months’ advance notice, P3 students had not been required to participate in this assessment previously, and some likely struggled to appreciate the value of this opportunity.
The Readiness Assessment aligns with pharmacy accreditation standards that urge programs to evaluate characteristics desired of pharmacists, including problem solving, patient advocacy, and professional communication.3 Current standards specifically advocate for curricular assessments identifying student readiness to enter APPEs; provide direct patient care in a variety of health care settings; and contribute as a member of an interprofessional collaborative patient care team. Accreditation standards also call on colleges to use the analysis of assessment measures to improve student learning and the level of achievement of the educational outcomes.3 Results of the Readiness Assessment demonstrated progressive student development and are being used to ensure curricular quality and drive curricular modification. Assessment data, as part of the continual curricular quality improvement process, were presented to both the curriculum committee and the curricular assessment committee chairs to inform the work of these committees and drive curricular advancement. Specific focus on curricular outcomes surrounding professional accountability and the pharmacist-physician interaction is currently under discussion.
Efforts were taken to maximize feasibility and efficiency in the design of the Readiness Assessment. It is administered annually to the P1, P3, and PGY1 levels and is capable of rotating 125 learners through in an eight-hour day; an assessment of any individual learner lasts just 30 minutes. The assessment is also efficient with regard to data collection and analysis. All data generated throughout the assessment were collected online and instantaneously uploaded to a learning management system, where they were organized so that immediate access and analysis of results were possible. The immediacy and availability of learner scores greatly enhanced the feasibility and sustainability of the assessment. Using technology and simulation in assessment presents numerous advantages from an evaluation authenticity and efficiency perspective. However, these strategies are heavily reliant on software, computer systems, and Internet connectivity. For example, a temporary loss of Internet connectivity caused a loss of evaluation data for 11 learners in this assessment.
Assessment formats and simulations used in the Readiness Assessment are accepted teaching and learning strategies within health professions education; thus, the assessment has potential for use by other colleges or schools. Simulations could be adapted to accommodate locally available technology. For example, while an EHR was used for the first station in this assessment to maximize authenticity, any simulated patient information delivery method (ie, virtual chart, paper chart) could be used. The virtual patient platform added fidelity and efficiency for administering the knowledge-based evaluations and analyzing results; however, the platform could be substituted with paper tests or audience response technology. The mannequin model added fidelity but was not an absolute necessity to gauge the pharmacist-physician relationship in the fourth station. Standardized patients and colleagues are available for hire at most academic centers, and classroom and meeting room space could be creatively arranged for use in place of a simulation center.
This assessment will be administered annually as part of the school’s assessment system. It will add to existing curricular assessments of practice readiness required of students, including Capstone cases and the Pharmacy Curricular Outcomes Assessment (PCOA).37 After all students in the curriculum (ie, P1-P4) have been administered the same assessment (ie, every two years), index cases and disease states will change. The Readiness Assessment helps faculty members identify students who may not be “ready,” while allowing students to self-assess for areas of improvement, prior to APPEs and clinical practice. In future administrations of the assessment, students will receive immediate results and faculty feedback on all evaluations. Feedback will allow students to see all scores, along with associated levels of practice-ready competence, and will include faculty recommendations as to how to improve. Also, a comprehensive remediation process will be designed to identify learners not attaining acceptable competence levels and to provide individualized remediation plans that maximize student opportunities for improvement as they progress through the curriculum. This remediation plan will be specific to the student’s need (ie, clinical decision making, attitudes of accountability, interprofessional communication) and will engage the student’s professional development advisor and/or portfolio review process.
Several limitations exist regarding the use of a blended simulation progress test to assess clinical readiness. Developing rubrics to assess practice-ready competence was complex: it was difficult both to define a realistic level of expected competence and to describe that level accurately and concisely. To maximize content and construct validity, rubrics were developed by clinical experts using contemporary evidence and validated through pilot testing. We emphasize that a student’s ability to successfully manage one case, consisting of only five stations and centered on one disease state, is not predictive of the student’s ability to solve any other problem or case (and vice versa). This assessment should be viewed as part of a systematic assessment strategy to measure progressive student development within the curriculum rather than as a definitive, final marker of competence. There are also limitations to using a demonstration-type format like simulation as a blueprint for progress testing. The majority of the test could be perceived as futile for earlier students, who would only be guessing at correct responses. From a faculty perspective, extensive time, training, and resources were allocated to learners who may not fully understand and/or appreciate the assessment’s implications. However, administering this assessment to early learners “sets the culture” of the curriculum and helps early learners identify curricular and professional expectations. Finally, it should be noted that this assessment was intended to evaluate learner clinical competence regarding skills of the profession rather than learner clinical performance; clinical performance could be evaluated only with actual patients in actual clinical settings. Future research to correlate results of this assessment with experiential preceptor evaluations is warranted.
CONCLUSION
An authentic assessment of patient care competence and readiness for practice incorporating multiple simulation modalities was employed to capture knowledge- and performance-based evaluations of patient care outcomes. This authentic assessment of practice readiness was administered as a progress test and integrated into the PharmD curriculum to measure progressive student development. Resultant data demonstrated that learners’ skills in the three clinical metrics evolved as they progressed through the curriculum and advanced further following residency training. Data were used to improve learner progress towards practice-ready competence, inform curricular modification and advancement, and further align the curriculum with pharmacy education accreditation standards.
ACKNOWLEDGMENTS
The authors gratefully acknowledge the WISER facility for its continued collaboration, support, and expertise. We thank and acknowledge the University of Pittsburgh Laboratory for Education Technology and UPMC eRecord CareNet/Core PowerChart for access into vpSim and CERNER EHR, respectively.
- Received March 14, 2016.
- Accepted May 5, 2016.
- © 2017 American Association of Colleges of Pharmacy