Abstract
The most common assessments of human patient simulation are satisfaction and/or confidence surveys and tests of knowledge acquisition. There is an urgent need to develop valid, reliable assessment instruments related to simulation-based learning. Assessment practices for simulation-based activities in pharmacy curricula are highlighted, with a focus on human patient simulation. Examples of simulation-based assessment activities are reviewed according to the type of assessment or domain being assessed. Assessment strategies are suggested for faculty members and programs that use simulation-based learning.
INTRODUCTION
Simulation-based learning activities, including use of standardized patients, role-playing exercises with peers, and skills-based evaluations such as prescription checking or extemporaneous compounding, have been used in the pharmacy curriculum for at least the past decade. More recently, human patient simulation (HPS) has been incorporated into doctor of pharmacy (PharmD) curricula across the United States. In HPS, high-fidelity manikins function as simulated patients in healthcare scenarios. Many activities focus on increasing student pharmacists’ exposure to and confidence in managing situations that occur in clinical practice. Simulation activities provide opportunities for students to practice their skills and integrate knowledge, communication, professionalism, and clinical application.
The majority of simulation activities incorporate formative feedback. Few studies have objectively quantified the extent of learning outside of pre- and post-simulation knowledge testing. Pharmacy educators have not used simulation extensively for summative assessment. Assessment in pharmacy education needs to expand beyond administering student satisfaction and self-efficacy survey instruments and knowledge assessments to encompass assessment of clinical performance and critical thinking. Paramount to this will be the future development of reliable, valid assessment tools and techniques to accurately evaluate the effectiveness of learning through simulation.
The Accreditation Council for Pharmacy Education (ACPE) Standards and Guidelines state that colleges and schools of pharmacy are expected to ensure and/or demonstrate that their curricula are successful in graduating pharmacist practitioners who are professionally competent to provide patient care. According to Standard 15, Assessment and Evaluation of Student Learning and Curricular Effectiveness, assessment activities must be systematic, sequential, and ongoing, with data collection and analyses used to improve student learning and attain professional competencies.1 Assessment practices continue to be driven by governmental policies on higher education, accreditation standards, and public demands for accountability and transparency.2 This view is reflected in the ACPE policy that allows simulation-based activities to account for up to 20% of total introductory pharmacy practice experience (IPPE) time.
Pharmacy education is not alone in its quest to develop sound assessment practices for improvement of student learning and for programmatic evaluation. Medical residency programs must ensure that their graduates meet 6 domains of clinical medical competence (patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice) as defined by the Accreditation Council for Graduate Medical Education (ACGME).4 Simulation is included in the ACGME “Toolbox” of Assessment Methods.5 Baccalaureate and graduate nursing programs also are required to evaluate student performance in a manner consistent with expected individual student learning outcomes.6 Similar accreditation standards for outcomes assessment exist in many other professional programs including veterinary medicine and dental hygiene.7,8 The degree to which other health professions have officially recognized simulation as a strategy for assessment varies because of rapidly changing simulation technology and the capability of other programs to effectively integrate it into their curricula.
Beyond accreditation standards, educators teaching in PharmD and other healthcare programs are strongly committed to ensuring graduates are professionally competent to provide high-quality patient care. According to Epstein and Hundert, professional competence is “the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and community being served.”9 Similar to the ACGME domains, Jungnickel and colleagues suggest that professional competencies in pharmacy should be organized into 3 specific domains: patient-centered care, population health, and pharmacy systems management. Five crosscutting abilities necessary for successful practice also are suggested: professionalism, self-directed learning, leadership, interprofessional collaboration, and cultural competency.10 Evaluation of professional competencies presents challenges to educators for which simulation may provide unique opportunities when coupled with effective assessment tools.
How do we know whether student pharmacists have achieved professional competence? Adoption of an assessment process that illuminates the degree to which students have achieved curricular learning outcomes is one way to measure success. Use of valid, reliable tools is recognized as the foundation for robust assessment of professional competency. However, development of these tools is challenging, in large part because of the demands of the validation process. When validating an assessment tool, many complex factors must be considered, such as choosing the most appropriate analysis and enrolling enough subjects to adequately power the statistical analyses. Additional barriers to instrument validation include limited resources and faculty members who are not equipped with the necessary background or skills. Because of these factors, a sufficient number of valid and reliable tools for evaluating simulation assessments is lacking. A review article in the nursing simulation literature indicated that of 22 selected representative tools, which included evaluations of clinical simulations in the cognitive, psychomotor, and affective domains, only 9 reported validity or reliability data. The authors concluded that the adoption and progress of human patient simulation in nursing curricula might be delayed by the lack of valid, reliable instruments. Even with well-defined medical competencies from the ACGME, traditional medical education assessment models focus primarily on assessing knowledge acquisition and less on assessing performance, skills, and attitudes via valid and reliable instruments. Moving toward a competency-based education model requires a paradigm shift for both educators and learners in which the focus on structure and process (traditional model) is replaced with a focus on competencies, particularly in the context of health care delivery.12 A comparison of traditional and competency-based education models is presented in Table 1.12,13
Comparison of Traditional and Competency-Based Educational Models12,13
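To illustrate the sample-size barrier noted above, the sketch below uses the statsmodels library to estimate how many subjects a simple two-group validation comparison would require; the effect size, significance level, and power are conventional illustrative choices, not recommendations drawn from the literature cited here.

```python
# Illustrative power analysis for planning an instrument-validation study:
# subjects per group needed to detect a medium standardized effect
# (Cohen's d = 0.5) in a two-group comparison. All inputs are
# conventional, illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # assumed standardized mean difference
    alpha=0.05,       # two-sided significance level
    power=0.80,       # desired probability of detecting the effect
)
print(f"Required sample size per group: {n_per_group:.0f}")  # ~64
```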
This paper will explore elements to consider when developing instruments and strategies for assessing simulation-based learning in pharmacy education. How simulation-based teaching and assessment can be used to objectively evaluate student pharmacist performance and provide formative and summative feedback also will be discussed, as well as how the data can be used for global curricular assessment.
ASSESSMENT METHODS FOR HUMAN PATIENT SIMULATION IN PHARMACY EDUCATION
Although use of HPS is relatively new in pharmacy education compared with medicine and nursing, the number of published articles on assessment methodology and tools used to evaluate simulation-based learning in pharmacy education is increasing. Assessment of pharmacy-based HPS activities can be stratified into 5 categories: surveys of satisfaction and/or confidence/self-efficacy, assessments of knowledge, assessments of performance-based skills, demonstrations of problem-solving abilities, and evaluations of team-based behaviors.
Satisfaction and Confidence/Self-efficacy Surveys
Survey instruments can be used to obtain student pharmacists’ perceptions of learning with simulation and their perceptions of self-efficacy based on simulation activities. These surveys are easy to conduct and thus readily provide data for research and programmatic evaluation purposes. The results help pharmacy educators understand students’ perspectives and satisfaction levels regarding teaching methods. A major limitation of survey instruments is that the data obtained from them typically do not provide insight into the impact of simulation on students’ achievement of learning outcomes and do not permit evaluation of students’ knowledge, performance skills, or professional behaviors. Nevertheless, the majority of assessment data generated by pharmacy educators has been obtained from surveys of student pharmacists.
Findings from satisfaction surveys conducted following HPS scenarios, such as advanced cardiac life support (ACLS) and pediatrics simulation exercises, indicate that student pharmacists feel positively about this type of learning experience.15,16 Student pharmacists enjoyed patient simulation and believed HPS should be further incorporated into the curriculum. A majority agreed or strongly agreed that they learned things in the simulation that would be useful in practice and noted that HPS improved their learning of clinical patient care compared with that from standardized lectures.17,18
Not all survey data are positive. Mieure and colleagues reported that 12% (14/119) of student pharmacists disagreed that the simulation experience helped them understand how to prepare medications in an ACLS situation and 21% (25/119) disagreed that the ACLS simulation improved their understanding of how to apply dosage calculations in an ACLS situation. The authors concluded that because only one student pharmacist had the opportunity to perform the calculation during the simulation, the other students did not believe they had benefitted from the simulation in terms of learning medication preparation and dosage calculations.15
In other studies, student pharmacists reported improved self-confidence to successfully perform specific professional skills or tasks after participating in HPS scenarios. Pre- and post-simulation survey responses in a pharmacotherapy course that used HPS indicated that student pharmacists’ confidence significantly improved in areas such as problem solving and patient assessment.19 In a study of an interprofessional healthcare team, participants completed a self-assessment before and after participating in an HPS scenario and the results indicated significant improvement in their perceived ability to perform behaviors such as engaging in difficult conversations with patients, coping with emotional fallout from a patient during a difficult conversation, and functioning in imperfect and ambiguous situations.20 A majority of student pharmacists who participated in an HPS simulation series as part of an IPPE felt more confident in their ability to make clinical recommendations to a healthcare provider.21
Much like satisfaction survey instruments, self-efficacy survey instruments are relatively easy to administer and provide information related to participants’ self-assessed confidence in various domains. Data from these types of assessments should be interpreted cautiously. Self-assessed confidence does not always translate to competency as measured by external observations.22
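To make the pre/post survey analyses described above concrete, the following is a minimal sketch, in Python, of how paired Likert-scale confidence ratings might be compared; because such ratings are ordinal and paired, a Wilcoxon signed-rank test is a common choice. The ratings below are fabricated for illustration and are not data from the cited studies.

```python
# Hypothetical paired analysis of pre- and post-simulation confidence
# ratings on a 5-point Likert scale (1 = not at all confident,
# 5 = very confident); one pair of ratings per student.
from scipy.stats import wilcoxon

pre =  [2, 3, 2, 4, 3, 2, 3, 1, 2, 3, 2, 4]  # made-up data
post = [4, 4, 3, 5, 4, 3, 4, 3, 3, 4, 4, 5]

stat, p_value = wilcoxon(pre, post)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")
```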
Knowledge and Retention
The most common study design for assessing the impact of simulation on student pharmacist knowledge is use of a pre- and post-simulation quiz or test. Examples of test instruments that evaluate knowledge in the pharmacy education literature are listed in Table 2.
Pharmacy Education Strategies and/or Tools for Assessment of Simulation
Fourth-year student pharmacists enrolled in a 5-year PharmD program participated in 3 separate simulation exercises (asthma, decompensated heart failure, and infective endocarditis) and completed pre- and post-simulation examinations that demonstrated significant improvement in knowledge for all disease states. A 3-month follow-up examination was administered to evaluate knowledge retention. Participants in the simulation group scored significantly higher than those in a control group of student pharmacists who did not participate in the simulation exercises.21 Student pharmacists enrolled in a pediatrics elective completed a pre- and post-simulation examination and demonstrated significantly improved knowledge related to treating a specific medical condition and administering medications.16 Significant improvement in student pharmacists’ knowledge of hypertension and blood pressure measurement, and management of dysrhythmia and myocardial infarction was demonstrated by Seybert and colleagues in studies that used written examinations to assess knowledge resulting from participation in simulation activities.19,23 While a predominantly positive impact on knowledge has been reported, in one study knowledge retention was not demonstrated. Student pharmacists who participated in an ACLS simulation workshop demonstrated a median score of 25% on a knowledge-based anonymous evaluation at the end of the semester. The authors concluded that the poor results may have been due to limited exposure to the simulation (30 minutes) during the learning experience and/or to inadequate student motivation because the knowledge evaluation did not greatly impact their course grade.15
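As an illustration of the pre/post and retention study designs summarized above, the sketch below analyzes fabricated examination scores; the use of t-tests assumes roughly normally distributed scores, and none of the numbers come from the cited studies.

```python
# Hypothetical analysis of knowledge-test scores (percent correct).
from scipy.stats import ttest_rel, ttest_ind

# Paired design: the same students before and after the simulation.
pre_test =  [55, 60, 48, 70, 62, 58, 65, 52]
post_test = [72, 78, 66, 85, 80, 74, 82, 70]
t_paired, p_paired = ttest_rel(pre_test, post_test)
print(f"Pre vs post: t = {t_paired:.2f}, p = {p_paired:.4f}")

# Retention design: 3-month follow-up, simulation group vs control group.
retention_sim =     [68, 74, 63, 80, 75, 70, 77, 66]
retention_control = [58, 62, 55, 67, 60, 59, 64, 57]
t_ind, p_ind = ttest_ind(retention_sim, retention_control)
print(f"Retention: t = {t_ind:.2f}, p = {p_ind:.4f}")
```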
Evaluation of student pharmacist knowledge is essential to an assessment program and allows student pharmacists to identify gaps in their knowledge of foundational material. Additionally, poor performance may indicate a programmatic concern, making this assessment valuable from a curricular standpoint. The drawback of knowledge assessment is the potential that performance is confounded by various circumstances. Mieure and colleagues noted in their study that inadequate preparation and repetition and a perceived lack of value may have contributed to student pharmacists’ poor knowledge retention. Demonstration of knowledge retention also may be impacted by the presence (or absence) of higher-level cognitive domains of application, synthesis, and evaluation.27 Students may have retained facts or information learned during the simulation but perform poorly on an assessment due to lack of contextual knowledge, clinical application, and perceived benefit associated with the knowledge-based assessment activity or evaluation. Successful performance on knowledge-based assessments does not ensure competency in application, synthesis, or evaluation domains.
Performance-Based Clinical Skills Assessments
Assessment strategies and instruments have been developed for simulated pharmacy activities such as prescription filling and checking, intravenous compounding, extemporaneous compounding, and patient counseling. In pharmacy education, there are few examples of using HPS to assess psychomotor or performance-based skills. This gap highlights differences between the role of pharmacists and the roles of practitioners in more procedure-oriented disciplines such as nursing or medicine. Simulation is commonly used in nursing and medical education for learning and practicing skills such as starting intravenous lines, inserting chest tubes, injecting medications, and applying oxygen devices. In a medical residency training program, HPS was used to train internal medicine residents on the proper technique and procedure for inserting central venous catheters. Patient outcomes improved: the incidence of catheter-related bloodstream infections was lower in the intensive care unit (ICU) where residents were trained with simulation than in another ICU in the same institution where residents were not.28 This type of study is compelling because it evaluates the use of HPS for education and training of health care professionals in the workplace and evaluates an important patient outcome.
Another skills-based study evaluated medication error rates in a cardiac intensive care unit (CICU). The authors concluded that simulation-based learning provided to the nursing staff significantly reduced medication error rates in the CICU from 30.8% to 4%. In contrast, medication error rates did not decrease in the institution's medical intensive care unit where a control group of nurses had completed traditional classroom lecture-based education about medication errors and error rates.29 This study also is important because it evaluated the impact of HPS on a specific patient-related outcome as opposed to focusing solely on a curricular performance-based skill. Expanding beyond the academic arena into improvement of patient care is of great interest to stakeholders such as hospital administrators and funding agencies, and presents unique scholarship opportunities for faculty members in clinical settings.
In one study in pharmacy education in which specific skills were assessed, student pharmacists’ ability to accurately determine blood pressure significantly improved following completion of practical sessions on auscultating blood pressure using a high-fidelity manikin.23 Other pharmacy performance-based skills that could be formally evaluated via HPS include personal protection and safety procedures, cardiopulmonary resuscitation, pharmacy calculations, and verifying intravenous pump settings. Scenarios that include some of these skills, such as cardiac arrest team training, have been developed, but the assessments were not focused specifically on evaluation of skills performance.
Assessment of Critical-Thinking and Problem-Solving Skills/Abilities
As with evaluation of performance-based skills, published methods and tools for assessment of problem-solving and critical-thinking skills learned through HPS in pharmacy education and other health care disciplines are limited.11,29 Examples of tools from pharmacy education used to assess critical thinking or problem-solving skills learned in HPS are presented in Table 2. One study evaluated the performance of student pharmacists who worked in groups to develop a pharmacotherapy plan for a patient case scenario. Of the 8 distinct domains evaluated, the groups scored highest on verbal communication, introduction to the patient, patient counseling, and development of a problem list.19 Reliability and validity data for the assessment tool were not reported. Another study evaluated the performance of groups of student pharmacists who treated a simulated patient experiencing a medical emergency in a community pharmacy setting. Ninety-three percent of the groups correctly identified the emergency.24 Like the previous study, this one did not evaluate the reliability or validity of the evaluation tool. Opportunities clearly exist for pharmacy education to develop valid, reliable instruments for assessing student pharmacists’ problem-solving and critical-thinking abilities.
Assessments for Behavior and Team Interaction
Simulation provides a method for teams to practice working together and to explore professional overlap and distinction to gain mutual respect, clarity about professional roles and responsibilities, and cultural sensitivity. Interprofessional education is intended to orient health professional team members to these higher-level aspects of team collaboration. Some federal initiatives to facilitate training in interprofessional teamwork have been developed, such as the joint effort of the Agency for Healthcare Research and Quality and the Department of Defense to create Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS).32 Simulation provides an instructional tool to facilitate practice and allows for immediate feedback regarding interprofessional teamwork and collaboration. The approach to assessment may vary depending on the type of feedback desired: one method focuses on individual performance, while the other focuses on overall team function. In the TeamSTEPPS training program, several instruments are used to measure interprofessional communication. The Team Performance Evaluation contains 25 items within the categories of team structure, leadership, situation monitoring, mutual support, and communication.32 Numerous other instruments have been developed for observers to measure an individual's performance on a team or a team's performance as a unit. Some are context specific, such as performance in an operating room,33 during neonatal resuscitation,34 during simulated obstetrical emergencies,35 and in response to intimate partner violence,36 while other instruments can be used to assess performance in general clinical simulations or practice.37-39 Two nonclinical instruments have been used to assess the impact of a training program on emergency room physicians and nurses.40 Some instruments have been designed to measure self-reported attitudes toward interprofessional education and practice.41-43 For additional reviews of team assessment instruments, readers can refer to the book Team Performance in Health Care: Assessment and Development.44
Opportunities for healthcare practitioners to integrate and perform within group dynamics and shared responsibilities should be incorporated into professional education long before practitioners are licensed and working in healthcare settings. Simulation offers a valuable learning activity for interprofessional education. As pharmacy assessment methods and instruments continue to evolve, programs should explore interprofessional opportunities and use proven assessment instruments to evaluate the effectiveness of simulated activities in enhancing interprofessional learning.
DEVELOPING ASSESSMENT STRATEGIES AND EVALUATION INSTRUMENTS
Educational assessment of student pharmacists’ performance in simulation exercises is essential and the pharmacy profession is striving to improve in this area; thus, determining how pharmacy education should move forward to develop sound assessment practices for simulation-based learning is the next challenge. Outlined below is a suggested approach for pharmacy faculty members and programs to take as they integrate simulation-based learning and associated assessments into their curricula. Important areas to consider include sound assessment practices, identification of an approach for assessment, and identification of valid, reliable tools.
Adhering to Principles of Good Assessment Practices
The American Association for Higher Education (AAHE) outlined principles of good practice for assessing student learning that capture key elements to consider when developing student assessments (Table 3).45 Principles specific to teaching and learning with simulation come from Issenberg's comprehensive review of the simulation literature.30 Issenberg's principles for teaching with simulation include: provide feedback during the learning experience, require learners to engage in repetitive practice, integrate simulation throughout the curriculum, increase the levels of difficulty, adapt simulation to complement multiple learning strategies, ensure clinical variation, establish a controlled environment for learning to occur, provide individualized (in addition to team) learning, and clearly define outcomes and benchmarks. There are similarities between the principles outlined by Issenberg and his team and those proposed by AAHE. Their guidance is relevant for effective learning in general, but is especially important for simulation because it represents the collective wisdom of successes and failures from use of simulation-based learning in other disciplines. These principles will resonate with faculty members who have used simulation in their teaching or participated in simulation exercises as students.
Principles of Good Practice for Assessing Student Learning45
The AAHE principles contend that using a variety of assessment activities over time can “reveal change, growth, and increasing degrees of integration” and provide a more accurate picture of a student's learning experience and potential deficits in this experience.45 An assessment of student pharmacists’ perceptions of a simulated experience provides only one layer of assessment data for a particular experience and does not objectively evaluate the change and growth in participants’ knowledge and skills or the level of knowledge integration they achieved. The principles also note that while learning outcomes are “the destination,” it is the process or “journey” to an outcome that is essential to understanding how cumulatively the curricula, teaching, experiences, and expectations have positively or negatively impacted an outcome. Sound assessment practices provide the opportunity to evaluate learning at multiple levels. A well-developed global assessment plan is essential to the success of simulated assessment (summative or formative) activities. Without a clearly outlined “road,” performance on a single simulated assessment may be negatively impacted by lack of context or student preparation, and establishing why the simulated activity was not successful may be difficult. Finally, the principles reinforce what is already intuitive: “assessment makes a difference when meaningful data are collected, connected, and applied creatively to illuminate questions and provide a basis for decision making.”46 The rationale for developing and implementing a simulation exercise and using it to assess student pharmacists requires first identifying a clear and measurable objective. The objective(s) should be simple and limited to those that are essential. Assessing too many objectives or skills at one time may result in overwhelmed students who do not perform well, rubrics that are too complex, and/or recall bias among evaluators. The results from the assessment must then be evaluated in the context of other assessments conducted, and the findings combined to assess individual student performance and overall curriculum effectiveness.
Identifying an Approach for Assessment
Prior to developing an assessment to evaluate specific content and/or a specific skill set, the target or goal of the assessment should be determined. For most programs, a reasonable goal is achieving college-wide competency-based learning or abilities-based outcomes. Once outcomes have been identified, matching an assessment strategy to a specific learning outcome prior to developing an evaluation tool may be helpful.
The most commonly reported assessments of simulation in the pharmacy education literature are knowledge and recall following a simulation exercise. Examples of team and behavior assessment tools are available from work in other disciplines and interprofessional collaborations. Pharmacy programs and faculty members should focus more on assessment of higher-level cognitive domains (analysis, synthesis, and application) as well as evaluation of the psychomotor domain (clinical performance) following participation in simulation exercises.
Other conceptual frameworks exist that may be beneficial to consider when choosing an available instrument or designing a simulation assessment tool. Miller's pyramid suggests that a medical learner should be assessed at 4 different levels: knows (content), knows how (competence), shows how (performance), and does (action).47 Disciplines adopting simulation for assessment strategies have most commonly used the first 3 levels of learning for evaluation of students.30 For pharmacy education, a fundamental step related to Miller's pyramid is to determine the purpose of the assessment, such as to assess whether a student pharmacist knows, is competent, can perform, or can do. A working group from the Canadian Network for Simulation in HealthCare proposed a taxonomy and conceptual framework for simulation instructional design and media that includes 4 levels: instructional medium (level 1), simulation modalities (level 2), instructional methods (level 3), and presentation (level 4) (Chiniara G. Canadian Network for Simulation in HealthCare, April 2011). Applying these step-wise tools can provide a logical, objective method for selecting the appropriate media and simulation modality. Another approach is to include assessment design earlier in the educational process instead of developing the assessment method or tool as the last step. Once the learning outcome or competency has been identified, determine the best assessment tool to evaluate the outcome and then design the educational activity itself.
Identifying the Purpose of Assessment
Another important element to consider when determining the most appropriate assessment method is to clarify the purpose of the assessment. Some assessments are intended to provide individualized student feedback through a formative process. Formative feedback typically includes written or verbal comments that are shared directly with students during the debriefing session, and often includes the opportunity for a student to demonstrate improvement by repeating the assignment or activity.
Valid, reliable instruments are particularly important if the assessment “score” will be used for summative decision making such as high-stakes (pass/fail) evaluations that can determine a student pharmacist's progression in the program. For summative assessments (eg, final examination, licensure examination), students typically do not receive any feedback about their performance other than whether they passed. Simulation activities can be used effectively for formative and summative assessment purposes. Pharmacy curricula that use high-stakes assessment practices for simulation or other curricular areas must ensure that they are conducting the assessments with valid and reliable instruments.
Aggregate data and subsequent analyses generated from either formative or summative assessments of simulation exercises can be used effectively in assessment initiatives to evaluate curricula and programs. Data used for these purposes should be de-identified and careful consideration should be given to the policies and procedures surrounding methods and security for data collection, storage, and evaluation.
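As one concrete illustration of such handling, the sketch below de-identifies rubric scores and aggregates them by cohort and domain; the column names, hashing scheme, and scores are hypothetical, and any real implementation would need to conform to institutional data-security policies.

```python
# Hypothetical preparation of simulation assessment data for
# programmatic evaluation: replace student identifiers with one-way
# hashes, then aggregate scores by cohort and rubric domain.
import hashlib
import pandas as pd

records = pd.DataFrame({
    "student_id": ["S001", "S002", "S001", "S002"],
    "cohort": ["P2", "P2", "P2", "P2"],
    "domain": ["communication", "communication",
               "problem_solving", "problem_solving"],
    "score": [4, 3, 2, 4],  # rubric scores (made-up data)
})

# De-identify: a salted hash or a separately secured key table would be
# preferable in practice; an unsalted hash is shown only for brevity.
records["student_id"] = records["student_id"].map(
    lambda sid: hashlib.sha256(sid.encode()).hexdigest()[:10]
)

# Aggregate for curricular review: mean score per cohort and domain.
summary = records.groupby(["cohort", "domain"])["score"].agg(["mean", "count"])
print(summary)
```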
Development of Valid, Reliable Instruments
To truly evaluate the effectiveness of HPS in pharmacy education, development and/or use of available reliable, valid instruments or tools must be a priority. The ACPE policy allowing simulation activities to account for up to 20% of IPPE time makes the need for valid assessment tools even more urgent.3 Programs that elect to use simulation for IPPEs must demonstrate that student pharmacists are achieving professional competency and curricular outcomes through a valid and reliable assessment process.
While a complete discussion of the development of reliable, valid tools is beyond the scope of this paper, some basic steps support the development of high-quality assessment tools and processes for the assessment of simulation-based learning. Performance assessment, which according to Banta “requires students to display their learning…actively practice their skills, synthesize their knowledge…”, should include the following steps48,49:
(1) Ensure content and construct validity of rating scales (rubrics) by comparing them to professional standards (competencies) and obtaining feedback from faculty experts and practicing clinicians;
(2) Hold orientation sessions for actors involved in the scenario so they can practice their roles;
(3) Conduct group training sessions for faculty raters;
(4) Hold debriefing sessions to gather information about the process;
(5) Determine inter-rater reliability and predictive validity.
Validity is an instrument's accuracy in measuring what it is intended to measure. As a starting point, according to Kardong-Edgren and colleagues, simulation instruments should include measurements of “content validity (the appropriateness of sample items and comprehensiveness of the measurement) and construct validity (the process of establishing that a particular action adequately represents the concept being evaluated).”11 One way to demonstrate content validity for simulation rating instruments is to have clinical experts review the criteria described in the tool. The experts can provide feedback regarding the relevance or importance of the items and indicate whether all important items are included.
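One widely used way to quantify such expert review is the item-level content validity index (I-CVI): each expert rates each item's relevance on a 4-point scale, and the I-CVI is the proportion of experts rating the item 3 or 4. The sketch below illustrates the calculation; the rubric items, ratings, and revision threshold are hypothetical.

```python
# Hypothetical I-CVI calculation for a 4-item simulation rubric
# reviewed by 5 clinical experts; relevance rated 1 (not relevant)
# to 4 (highly relevant).
ratings = {
    "identifies the problem": [4, 4, 3, 4, 4],
    "selects therapy":        [3, 4, 4, 4, 3],
    "communicates the plan":  [4, 3, 4, 2, 4],
    "documents follow-up":    [2, 3, 2, 3, 4],
}

for item, scores in ratings.items():
    i_cvi = sum(1 for s in scores if s >= 3) / len(scores)
    # 0.78 is a commonly cited acceptability threshold for I-CVI.
    flag = "" if i_cvi >= 0.78 else "  <- candidate for revision"
    print(f"{item:24s} I-CVI = {i_cvi:.2f}{flag}")
```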
Reliability refers to the consistency of an instrument's results; the goal is for the instrument to yield similar results each time the targeted measurement is made. Verification of inter-rater reliability also is important, particularly when simulation activities are evaluated by many different graders. This is often the case when large numbers of student pharmacists participate in simulation scenarios with multiple faculty members evaluating their skills.
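To make inter-rater reliability concrete, the sketch below computes Cohen's kappa, one common agreement statistic for two raters, on fabricated pass/fail ratings of the same students; kappa corrects the raw agreement rate for agreement expected by chance.

```python
# Hypothetical inter-rater reliability check: two faculty raters score
# the same 8 students pass/fail on one simulation checklist item.
from sklearn.metrics import cohen_kappa_score

rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")  # interpretation benchmarks vary
```

For rubric scores on an ordinal scale, a weighted kappa or an intraclass correlation coefficient would be more appropriate than the unweighted statistic shown here.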
Because of limited resources and faculty time, expecting individual faculty members to undertake the development of valid, reliable instruments for assessment of simulation-based learning is unrealistic. Faculty members should enlist guidance from others who are skilled in tool evaluation and they should expand the application and evaluation of currently available tools. Support from pharmacy professional organizations and accreditation bodies will be critical to further the development of these instruments. The American Association of Colleges of Pharmacy (AACP) is addressing this need through the formation of the Assessment Special Interest Group and the AACP Assessment Institute, which provide assessment support to members of the academy. Continued development and expansion of a central and easily accessible database of current tools, including validity and reliability data, is an important step to disseminate this information to interested faculty members and programs. To support the scholarly effort, doctoral-level research programs focusing on the development and evaluation of simulation tools are emerging in nursing. This approach may be a viable option for pharmacy education as well.
CONCLUSIONS
The examples provided in this paper offer resources for programs using simulation within their curricula and highlight the progress that pharmacy colleges and schools have made in the area of assessment of simulation-based learning. Like other health disciplines, pharmacy education needs to focus efforts on expanding assessment beyond satisfaction/self-efficacy surveys and knowledge testing to encompass assessment of clinical performance and critical thinking. Of high importance is the need to develop reliable, valid assessment tools to accurately evaluate the effectiveness of learning with simulation. Although moving to these next levels of assessment will be difficult, enhanced collaboration among pharmacy educators can help ensure the successful advancement of valid assessment tools.
ACKNOWLEDGEMENTS
We would like to thank Gary Pollack, Linda Garrelts Maclean, and Abdur Rehman for their support and contributions to this manuscript.
- Received February 4, 2011.
- Accepted July 29, 2011.
- © 2011 American Association of Colleges of Pharmacy