Abstract
Objective. To determine whether there is a difference in student pharmacists’ learning or satisfaction when standardized patients or manikins are used to teach physical assessment.
Design. Third-year student pharmacists were randomized to learn physical assessment (cardiac and pulmonary examinations) using either a standardized patient or a manikin.
Assessment. Performance scores on the final examination and satisfaction with the learning method were compared between groups. Eighty and 74 student pharmacists completed the cardiac and pulmonary examinations, respectively. There was no difference in performance scores between student pharmacists trained using manikins vs standardized patients (93.8% vs 93.5%, p=0.81). Student pharmacists trained using manikins indicated that they would probably have learned to perform cardiac and pulmonary examinations better had they been taught using standardized patients (p<0.001) and that they were less satisfied with their method of learning (p=0.04).
Conclusions. Standardized patients and manikins are equally effective methods for learning physical assessment, but student pharmacists preferred standardized patients.
INTRODUCTION
In the 1980s, approximately 33% of clinical pharmacist faculty members and 20% of clinical pharmacist adjunct faculty members reported performing physical assessment in patient monitoring.1 Since then, the role of pharmacists in direct patient care has expanded, and clinical pharmacists now routinely use a variety of patient assessment techniques to ensure optimal medication therapy outcomes.2 Colleges and schools of pharmacy must look beyond current practice, envision the expectations and duties of the pharmacist of the future, and prepare students to become pharmacists capable of managing medication therapy. The Accreditation Council for Pharmacy Education (ACPE) requires that physical assessment be taught in both classroom (patient assessment laboratory) and experiential settings.3 While the need to teach physical assessment has been established, the method for teaching it has not.
In 2007, Spray and colleagues distributed a 20-item questionnaire to US pharmacy practice department chairs to determine the content, extent, and design of physical assessment training and teaching in the pharmacy curriculum (the TOPAS study).4 Ninety-six percent of the responding programs indicated that patient assessment skills were taught, mostly for the advancement of the profession of pharmacy and because it was required to meet ACPE standards. The most common topics covered were pulmonary examination, cardiovascular examination, and assessment of vital signs; these results were consistent with previous findings.4,5 The majority (76%) of colleges and schools used the laboratory setting for instruction; 27% of those in a “simulated clinical setting” and 63% didactically. The survey report did not clearly define the parameters of the simulated clinical setting or whether manikins or standardized patients were used.
When standardized patients are used to teach physical assessment, 95% of students successfully demonstrate the ability to perform the skills.6 Using standardized patients can provide a more realistic setting and can be less embarrassing than practicing physical assessment on peers or faculty members.6,7 In general, student pharmacists prefer to practice physical assessment techniques on the following (in decreasing order of preference): standardized patients, peers, instructors, and staff members.7 In addition to using standardized patients, it is important to have a medical facilitator explain and demonstrate the skills and respond to questions while the students practice on the standardized patient.6 Many programs also use trained standardized patients for evaluation purposes, or at a minimum, patients who are able to provide feedback directly to the student.
While manikins do not offer the same benefits as standardized patients, they are highly effective in training student pharmacists.8 Seybert and colleagues demonstrated that most student pharmacists (88%) participating in a critical care simulation that used pre-programmed manikins were extremely satisfied with the experience. Specific benefits of this approach included that the facilitator could control the learning environment, adapt the simulation to the students’ performance level, and debrief with the students immediately after the session.
Student pharmacists overall find learning physical assessment to be valuable (especially when taught by medical faculty members vs pharmacy faculty members), despite not feeling confident in performing these techniques.9,10 Given the increasing importance of pharmacists having this skill and that it is an ACPE requirement, determining the best technique for teaching physical assessment is important for colleges and schools of pharmacy.
Student pharmacists at St. Louis College of Pharmacy were required to take a 1-credit course focused on learning basic physical assessment and patient interviewing skills. The course had used several methods of instruction in the past to teach physical examinations, and given the increasing cost of using standardized patients, the course instructors wanted to determine whether there was any advantage to one instructional method over another. The objective of this study was to compare 2 simulation methods for teaching physical assessment to third-year student pharmacists: standardized patients vs manikins. The hypotheses were: (1) there is no difference between the 2 methods of learning in student performance of physical assessment, and (2) there is no difference between the 2 methods of learning in student satisfaction.
DESIGN
The course comprised lectures on physical examination techniques, laboratory sessions with hands-on experience learning and performing examinations, practice patient interview sessions, multiple quizzes to assess knowledge, and a skill-based final examination. Ten pharmacy practice faculty members were involved in teaching the course. Four faculty members taught and assessed cardiac examinations, and 4 taught and assessed pulmonary examinations. To minimize bias, each faculty member taught students using only 1 simulation method, either manikins or standardized patients. Two additional faculty members served as substitute instructors during the semester, and all 10 faculty members served as assessors at the end of the course. Before teaching the course, each instructor participated in a 1-hour training session led by an expert lecturer on physical examination. Faculty members taught the students during 30-minute laboratory sessions at a local medical school simulated patient center and evaluated students during the patient assessment final.
Self-selected pairs of student pharmacists were randomized using the software program The Hat (Harmony Hollow Software, Covington, LA) to 1 of 2 groups to learn how to perform the cardiac examination and the pulmonary examination; group A (n=76) used manikins while group B (n=78) used standardized patients (Figure 1). Students had one 30-minute laboratory session on the cardiac examination and one 30-minute laboratory session on the pulmonary examination.
Figure 1. Comparison of Two Methods of Teaching Physical Assessment Skills to Pharmacy Students.
The learning process and patient assessment final were identical between group A and group B. The simulation center trained the standardized patients used for each laboratory session while faculty members brought the 2 Nursing Anne VitalSim Capable Manikins (Laerdal, Wappingers Falls, NY) from the college to use in the examinations. These manikins were capable of producing normal and abnormal breath, bowel, heart, and lung sounds. They were also equipped with a blood pressure training arm, heart rhythm variations for electrocardiogram interpretation training, and an intravenous accessible arm for training, and came with optional tools to assist in conducting more advanced nursing procedures. The cost to rent the simulation space, use the standardized patients, and cover all facility fees for the semester (laboratory sessions and final examination) was approximately $7,000. The one-time purchase cost for each manikin was approximately $2,600.
EVALUATION AND ASSESSMENT
The cardiac examination consisted of 7 objectives: (1) identifying common topographical markers on the chest (vertical and horizontal landmarks), (2) identifying the location of the point of maximal impulse, (3) identifying the location of the 4 designated auscultatory areas, (4) describing where S1 and S2 (the first and second heart sounds marking the beginning of systole and diastole, respectively) are heard the loudest, (5) describing where/when S3 and S4 (the abnormal third and fourth heart sounds associated with cardiac disease) would be heard, (6) verbalizing the phase, auscultatory area, and pitch of each murmur and what side of the stethoscope to use for each, and (7) examining for the presence or absence of carotid bruits. The pulmonary examination also consisted of 7 objectives: (1) identifying common thoracic landmarks and where the lungs are located with respect to these landmarks, (2) identifying respiratory rate and pattern, (3) using palpation to note the quality of the tactile fremitus, (4) using palpation to note thoracic expansion, (5) auscultating and differentiating between vesicular, bronchovesicular, and bronchial breath sounds, (6) verbalizing the 4 adventitious breath sounds and differentiating between them, and (7) assessing the quality of vocal resonance.
During the patient assessment final, each student (whether trained using a manikin or a standardized patient) independently interviewed a standardized patient in a simulated examination room and completed a cardiac or pulmonary examination on that patient based on the chief complaint. The chief complaints for the cardiac cases included chest pain, palpitations, or chest tightness. The chief complaint for the pulmonary cases was shortness of breath. Faculty members evaluated all performances live, observing remotely via video monitors.
Performance scores for the physical examination component of the patient assessment final were compared between groups using a two-tailed independent samples t test. To detect a moderate effect size of 0.5 with an alpha of 0.05 and a power of 0.8, a minimum of 64 students per group was needed.
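The sample size figure above can be approximated with a standard normal-approximation power calculation for a two-sample t test. The sketch below is illustrative only (the function name is ours, not from the study), and it uses only the Python standard library:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate sample size per group for a two-sided, two-sample t test,
    using the normal approximation with Cohen's d = effect_size."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value (~1.96 for alpha=0.05)
    z_beta = z.inv_cdf(power)           # quantile for the desired power (~0.84 for 0.80)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_group(0.5))  # 63
```

The normal approximation gives 63 per group; the exact calculation based on the noncentral t distribution rounds up to the 64 per group reported in the study.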
A 9-question satisfaction survey instrument was developed that used a 5-point Likert scale (1=strongly disagree, 2=disagree, 3=neutral, 4=agree, 5=strongly agree). Survey questions asked about satisfaction with the learning method and self-perceived confidence, comfort, and accuracy in conducting physical assessments. Students completed the survey instrument at the end of the semester, 2 weeks after the patient assessment final. The instrument asked each student to read each question carefully and answer it in light of the type of simulation used during the cardiac and pulmonary laboratory sessions. Space was also provided at the bottom of the instrument for students to offer other comments on their learning experience. Survey responses were analyzed using the Cochran-Armitage trend test. The project was funded by an internal creative teaching award grant and approved by the St. Louis College of Pharmacy Institutional Review Board.
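The Cochran-Armitage test mentioned above tests for a linear trend in the proportion belonging to one group across ordered categories (here, the Likert responses). As an illustration only (not the authors' analysis code; the function name and example counts are hypothetical), a minimal implementation for a 2 x k table might look like this:

```python
from math import sqrt
from statistics import NormalDist

def cochran_armitage(row1, row2, scores=None):
    """Cochran-Armitage trend test for a 2 x k contingency table.

    row1, row2 : counts for the two groups across k ordered categories
    scores     : numeric scores for the categories (defaults to 1..k)
    Returns (z, two_sided_p).
    """
    k = len(row1)
    t = scores if scores is not None else list(range(1, k + 1))
    n = [a + b for a, b in zip(row1, row2)]  # column totals
    r1, r2 = sum(row1), sum(row2)            # row totals
    big_n = r1 + r2
    # Score-weighted difference between the two rows
    stat = sum(ti * (a * r2 - b * r1) for ti, a, b in zip(t, row1, row2))
    # Variance of the statistic under the null hypothesis of no trend
    var = (r1 * r2 / big_n) * (
        sum(ti ** 2 * ni * (big_n - ni) for ti, ni in zip(t, n))
        - 2 * sum(t[i] * t[j] * n[i] * n[j]
                  for i in range(k) for j in range(i + 1, k))
    )
    z = stat / sqrt(var)
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Hypothetical Likert counts (strongly disagree .. strongly agree) for two groups:
z, p = cochran_armitage([30, 30, 20, 10, 10], [10, 10, 20, 30, 30])
# z = -6.0: a strong trend toward higher agreement in the second group
```

A significant result indicates that the proportion of responses from one group rises (or falls) systematically across the ordered Likert categories, which is why the test suits group comparisons of ordinal survey data.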
Three student pharmacists were excluded for not completing the final examination, leaving 154 student pharmacists’ performances for comparison. There were no significant differences in mean final examination scores between group A (n=76) and group B (n=78) (93.8% vs 93.5%, p=0.81), between the cardiac (n=80) and pulmonary (n=74) examinations (94.1% vs 93.1%, p=0.37), or between any subgroups (Table 1).
Table 1. Student Pharmacist Performance Scores
There was no difference between groups in recommending that their own method of learning be continued in the course: 27.3% of group A, which used manikins, recommended that the course continue to use manikins, and 32.7% of group B, which used standardized patients, recommended that the course continue to use standardized patients (p=0.23). More student pharmacists using manikins indicated they would probably have learned to perform the examinations better had they used standardized patients than vice versa (22.7% vs 5.3%, p<0.0001). Student pharmacists in group A were also less satisfied with their method of learning (61% agreed/strongly agreed that they were satisfied with the learning method in group A vs 71% in group B, p=0.04). There was no difference in how either group felt about the ability of their learning method to effectively portray a patient with specific medical conditions (22% agreed/strongly agreed in group A vs 24% in group B, p=0.59) or to help them learn physical assessment skills (32.7% of group A agreed/strongly agreed vs 37.3% of group B, p=0.11). Finally, there was no difference between groups A and B, respectively, in their comfort (24% vs 26.7%, p=0.24), confidence (30% vs 29.3%, p=0.84), or accuracy (41% vs 34.7%, p=0.23) in performing physical assessments on the patient.
DISCUSSION
The results of this study were consistent with previous literature showing value in both simulation methods for teaching physical assessment and went further to show no difference between the 2 methods in achieving student performance outcomes. The similarity in learning is likely explained by how well the manikins portrayed the necessary anatomy and landmarks, as well as the sounds present at the various landmarks. Manikins could be considered to have an advantage over standardized patients in that they allow the user to experience abnormal physical assessment findings that are difficult to reproduce with standardized patients. We also found low student confidence in their skills, similar to that reported in the existing literature, regardless of the method of instruction used. This lack of confidence is potentially explained by the “staged” nature of the simulated environment and students’ difficulty perceiving how their skills will translate to examinations of actual patients. Confidence scores were surprisingly similar between students who learned using standardized patients and those who used manikins, despite the potential for standardized patients to be perceived as closer to “real” patients, a perception we believe explains the difference in satisfaction between the groups.
There were some limitations to our study that may have affected the results. One was the use of pharmacy faculty members rather than medical faculty members to teach the course, given that previous studies have shown that students prefer learning these examinations from medical faculty members.9,10 Though this had the potential to affect our results, it is more likely to limit the external validity of the findings than the comparison between the 2 groups, as both were taught by pharmacy faculty members. The results may also have been influenced by the type of manikins used. Other, more sophisticated manikins are available that might have provided a more realistic experience for the students and therefore increased student ratings on the survey questions, but they probably would not have affected the results of the final examination. Student pharmacists trained on the manikins might also have felt more satisfied had they been assessed using the same method by which they were taught (ie, on a manikin). Ultimately, accurate student performance of these techniques on humans is the goal and therefore a reasonable assessment expectation within the curriculum.
The comparison between the groups may have been limited by the use of 2 separate examinations (cardiac and pulmonary). Using only 1 examination would have given a more accurate comparison between the 2 simulation methods, but might have influenced the results of the final assessment if students knew ahead of time which type of examination to expect. Another potential limitation was the small sample size. With a small sample size and a small effect size, the chance of a type II error increases significantly. Even if significant differences had been found, we believe the “clinical” significance of those differences would have been nonexistent. A final consideration is that students knew their final examination scores when they completed the survey instrument, which could have biased their responses depending on how well they performed. Despite knowing their examination scores, and despite there being no difference in final examination scores between methods of learning, students still reported lower satisfaction with learning the techniques using a manikin.
This study shows that teaching physical assessment skills to student pharmacists is effective using either standardized patients or manikins. Given the cost of using manikins relative to that of using standardized patients and the lack of a difference in achieving the desired competencies, it seems feasible to use manikins for physical assessment training. Depending on faculty resources and the number of student pharmacists, however, other programs may need more than 2 manikins. Student satisfaction should always be kept in mind, but satisfaction in this case appears tied to a perceived benefit of standardized patients that was not reflected in assessment scores.
CONCLUSIONS
Physical assessment is a required element of the pharmacy curriculum, but the best method for teaching physical assessment skills has not been determined. While we demonstrated that standardized patients and manikins are equally effective methods for learning physical assessment, student pharmacists preferred standardized patients. Programs should weigh these findings along with other variables when determining the most appropriate teaching method for their curriculum.
ACKNOWLEDGEMENTS
We thank Suzanne G. Bollmeier, Brooke E. Hollands, Teresa D. Azzopardi, Jennifer S. Hardesty, Christin M. Snyder, Katashia M. Partee, and Charles Taylor for their assistance and support in conducting this study. We acknowledge funding received from the St. Louis College of Pharmacy Creative Teaching Award program, which funded the facility fees and standardized patients through Washington University.
- Received October 27, 2012.
- Accepted December 15, 2012.
- © 2013 American Association of Colleges of Pharmacy