Abstract
An integrated curriculum that does not incorporate equally integrated assessment strategies is likely to prove ineffective in achieving the desired educational outcomes. We suggest it is time for colleges and schools of pharmacy to re-engineer their approach to assessment. To build the case, we first discuss the challenges leading to the need for curricular developments in pharmacy education. We then turn to the literature that informs how assessment can influence learning, introduce an approach to learning assessment that is being used by several medical education programs, and provide some examples of this approach in operation. Finally, we identify some of the challenges faced in adopting such an integrated approach to assessment and suggest that this is an area ripe with research opportunities for pharmacy educators.
INTRODUCTION
Curriculum renewal has become a constant in health professions education. One round of curriculum review is barely finished before the urge for another begins. In the past few decades, curriculum renewal efforts have seen a number of innovative models emerge and be widely embraced (eg, problem-based learning, case-based learning, modular structure). Yet although the energy focused on curriculum design and pedagogical practice seems enormous, the efforts expended have often produced disappointing results with regard to student learning.
One potential reason for the failure of curricula to fundamentally alter the culture of learning may well be our failure to subject our assessments of learning to the same deliberate planning, scrutiny, and purposeful design that we have employed with our curricula. While the science of measurement has advanced greatly in its models of reliability and validity, some have suggested that we continue to treat testing exclusively as a measurement problem and fail to appreciate that it is also an instructional design problem.1 It is well documented, both in the literature and anecdotally, that assessments have a powerful influence on student learning.2 As Swanson and Case have eloquently stated in regard to student learning, “Grab students by the tests and their hearts and minds will follow.”3
The American Association of Colleges of Pharmacy’s (AACP) curriculum and assessment special interest groups (SIGs) recently recognized this influence by emphasizing the interdependency of curriculum and the assessment of learning.4,5 Yet, for the most part, we have failed to appreciate, much less take advantage of, this fact in the creation and implementation of new curricula. We develop highly novel and creative approaches to curriculum delivery but continue to use the same model in the assessment of learning. Then we are surprised and disappointed by the repeated discovery that our students’ approach to learning is shaped more by the structure of testing than by the structure of instruction. Thus, if innovative curricular models are to have a meaningful impact on the culture of learning, it is critical that the strategy for assessing learning be recognized as an integral component of the curricular process and be conceptually aligned with the planned instruction and learning activities.1,4,6-8
One recurring effort in curriculum renewal for which the consideration of assessment practices will be especially important is the desire to create more integrated curricula. Approaches emphasizing horizontal, vertical, and spiral integration and/or a better balance between foundational sciences and practice have received extensive attention across health professions9 and in pharmacy in particular.5,10 Yet, again, throughout these efforts assessment strategies have generally maintained a segregated structure, testing each content area at the end of the course or block in which the content was taught, with the assessment of the content being the exclusive responsibility (and right) of the content expert and/or course director. Such an approach is problematic. An integrated curriculum that does not incorporate equally integrated assessments of learning is likely to prove ineffective in achieving the desired educational outcomes, and once again we will be faced with a paradox of “reform without change.”11
Pharmacy educators have adopted a number of innovative assessment strategies that move beyond traditional stand-alone, end-of-course assessments, such as MileMarker examinations, progress examinations, and annual skills assessments.12-16 Such innovations offer valuable information about students’ learning progress across time. Yet, for the most part, these assessments of learning are conducted in addition to traditional course-based examinations and therefore might best be thought of as longitudinal assessments that sit within “non-integrated” assessment programs. As described by the AACP Assessment SIG, truly integrated programs of learning assessment are ones in which the results from both quantitative and qualitative methods from across the entire curriculum are triangulated to determine whether students are achieving the desired education outcomes.4 Such programs combine frequent formative assessments, which guide and foster learning at all stages of the program, with the selective use of summative assessments at critical junctures when progress decisions are required.
In line with the AACP Assessment SIG, we propose that it may be time to consider a re-engineered approach to assessment. To support the movement toward such an approach, in this paper we first discuss some of the challenges underlying the need for integrative curricular developments in pharmacy education. We then turn to the literature that informs how assessment can influence learning, introduce a comprehensively integrated approach to learning assessment that is used by several medical education programs, and provide some examples of this approach in operation. Finally, we identify some of the challenges faced in adopting such an integrated approach to assessment and suggest that this is an area ripe with research opportunities for pharmacy educators.
In medical education, these comprehensively integrated programs of learning assessment are often referred to as “programmatic assessment.” As used in this context, the term does not refer to program-level evaluation, as is often the case in pharmacy education. Rather, it refers to a comprehensive, program-wide assessment of learning. To avoid confusion, throughout this article we will use the term program of assessment to refer to assessment of student learning (not assessment of the program or curriculum) and will use the term “programmatic assessment” in reference to student assessment only in direct quotations from the medical education literature.
Pressures Leading To the Need for Change in Pharmacy Education
A number of recent demands on pharmacy education suggest that a re-examination of both our curricula and our assessment and feedback practices is warranted. First, there is an increasing focus on the need to ensure that our students are “practice ready” upon graduation. If our curricula fail in this effort, the consequences could be significant (eg, compromised patient safety, damage to the faculty’s/clinical site’s reputation, and loss of future student placements). As we work to ensure that our graduating students are “fit for purpose,” it is critically important that our curricula focus on practice-relevant material and that our assessments promote and “certify” the practice readiness of our students.
In response to this challenge, the learning outcomes set for today’s pharmacy programs are increasing in complexity. For example, in 2010 the Association of Faculties of Pharmacy of Canada (AFPC) revised the outcomes expected of entry-to-practice degree programs to reflect a societal need for “medication therapy experts.”17 Using the CanMEDS Physician Competency Framework as their template,18 Canadian pharmacy education programs must now impart the necessary knowledge, skills, and attitudes to fulfill the identified pharmacist roles of care provider, communicator, collaborator, manager, advocate, scholar, and professional. Similarly, the 2012 Accreditation Council for Pharmacy Education (ACPE) Conference on Advancing Quality in Pharmacy Education encouraged revisions in the ACPE Accreditation Standards that would require US pharmacy programs to expand their emphasis on direct patient care, lifelong learning, inter-professional teamwork, behavioral competencies, and screening students’ readiness for advanced clinical placements.19
At the same time, societal and professional demands are leading to increased accountability for educational programs. This accountability is often tied to proof of outcomes (evidence that our graduating students have the required skills), which, in turn, has placed pressure on our assessments of learning. To be accredited, pharmacy programs in many countries20-23 must meet a set of standards that includes ones specific to the quality and comprehensiveness of assessment practices. These accreditation standards clearly set expectations for the nature of the assessments to be applied and the adjustments to be made in assessment practices when warranted. One implication of such changes is the need to ensure our assessments “capture” all the learning (cognitive, psychomotor, and affective) targeted in our educational goals.
Several innovations in curriculum and content are being developed to address these challenges to pharmacy education, not the least of which is the effort to structure curricula in ways intended to integrate the required knowledge, skills, and attitudes of current practice in our students. At the same time, there have been innovations in assessment aimed at ensuring that each of the required aspects of practice is being effectively evaluated (such as case-based performance assessments).24 Yet (earlier examples of longitudinal assessment notwithstanding12-16) what is missing, we suggest, is the effort to mirror the process of integrating curriculum by integrating assessment. Rather, assessment practices remain largely fragmented and isolated. Thus, in order to meet the challenges and opportunities presented by these developments, we recommend a comprehensive, program-wide approach to assessment that is embedded in the curriculum; that is, an integrated program of assessment.
The Influences of Assessment on Learning
In order to understand how an integrated approach to assessment of learning might positively impact the integration of learning, it is first helpful to understand how assessment can influence learning. Educational researchers have long been interested in questions related to how instruction and assessment practices influence the quality and effectiveness of student learning.25 There are two broad mechanisms by which assessments can shape and guide learning. The first mechanism involves shaping learning through the student’s anticipation of and preparation for the assessment. The second mechanism is through the feedback that learners receive subsequent to the assessment.
Influences on Learning Evoked by Anticipation of the Test
It is widely accepted that assessment drives learning2 even if no feedback about performance is provided. This shaping force of the assessment itself involves not only the content (what students study) but also the format and the practices surrounding the assessment. Interestingly, assessment practices and format may either promote or hinder learning.26 Assessments – when purposely designed to measure the stated outcomes and the delivered instructional content – can positively impact learning.8,27 Furthermore, the inherent power of assessment can be exploited to foster development of higher order metacognitive and self-directed skills.27-29 However, the format and content of our assessments often hinder learning by encouraging a “bulimic” approach to learning in which the students prepare for examinations by cramming or “binging” and then quickly forgetting or “purging” the content after the examinations are over.29-31 Assessments constructed to reward these behaviors impede achievement of broader learning goals and foster poor preparation and underperformance.32
Marton and Säljö have suggested that individuals can adopt “surface” or “deep” approaches to learning depending on their perceptions of what learning means.33 Those who adopt a surface approach conceptualize learning as the capacity to reproduce the details conveyed by the instructor or a text. Those who adopt a deep approach search for the meaning underlying the content and try to answer the question, “What is this all about?” It should be noted that surface and deep approaches to learning are not stable traits and can be influenced by several factors, such as the learning context as well as instructional quality and assessment practices.6,34 A student’s studying orientation can be influenced by the predictability, interpretation, and nature of the “demand structure” (the learning task and the assessment that follows), the student’s perceptions of the learning’s relevance and workload, and the student’s levels of anxiety and of intrinsic and extrinsic motivation.29,33 A deep approach, which is often associated with high levels of academic achievement, is fostered by assessment strategies that emphasize and reward personal understanding.25,27
In short, what we assess, how we assess, and where we assess will have a significant (some26 argue the most significant) impact on our success at producing self-directed, lifelong learners. It also influences the extent to which students perceive value in adopting the integrated learning approach that our integrated curricula intend to promote. If the content tested on our assessments is not constructed such that it requires integration of the material, then the tests will drive our students to segment the material in their learning. If the material assessed is blocked by the specific content taught in each course (and assessed only at the end of the course), then students may be well rewarded for adopting a binge-and-purge model of test preparation. Thus, it is important to reconsider our assessment practices to ensure that our tests do not “disintegrate” the material that our curricula are designed to integrate.
Influences on Learning Evoked Through Feedback After Performance
By far the dominant purpose of assessment has been to determine whether a student has achieved the required learning outcomes for a particular module, course, or program. This is commonly referred to as summative assessment or assessment of learning (AoL).35 In addition, however, educators are increasingly being encouraged to capitalize on assessment’s power to foster learning by using formative assessment or assessment for learning (AfL)35-38 through the feedback we can provide as a result of those assessments. Black and Wiliam39 define AfL as follows: “Practice in a classroom is formative to the extent that evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers, to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of the evidence that was elicited.”
To be successful, AfL needs a learning environment that is deliberately engineered to involve students in the learning tasks.38 The design features of an AfL environment include clear learning intentions and shared criteria for success; classroom discussions and activities that provide evidence of learning; feedback delivered in a manner that assists learning progression; a setting in which students are deliberately developed as learning resources for one another through activities such as peer assessment; and students being encouraged and motivated to accept responsibility for their own learning.40
There is some question as to whether the same assessment tools (or assessment moments) can be used for both AoL and AfL. However, what is clear is that assessments performed at the end of an educational block or course are unlikely to satisfy the requirements of AfL articulated above. Rather, such end-of-course assessments are likely to be seen by learners exclusively as “hoops” to be jumped through and, once cleared, moved on from. Feedback from such assessments, therefore, is unlikely to have much influence on future learning, as learners will have already received the message that they have sufficiently mastered this content domain. This, in turn, leads students to what Postman and Weingartner referred to as the “vaccination theory of education,” often epitomized by the phrase, “we learned that already,” a model that implicitly segregates the material that the curriculum is trying to integrate.41 Thus, again, it is important to think about how our assessment strategies might support or undermine our efforts to promote, among our students, continuous learning and integration of material across the entire program or curriculum.
An Integrated Program of Assessment
The foregoing sections provide a rationale for a system-wide re-engineering of our approach to assessment. One approach for consideration is emerging in the medical education literature. A review of that literature for the period 1988 to 2010 indicated the importance of this area, with 26% of the papers retrieved being devoted to assessment.42 In an analysis of that review, van der Vleuten and Dannefer43 noted four trends: an abundance of assessment methods proposed and investigated; a well-developed methodology for conducting assessments; a deliberate shift away from AoL toward a greater emphasis on AfL; and an awareness of the need to “move beyond the individual assessment method” as well as for “urgent progression in the development of the systems approach.”
A comprehensive, program-wide approach to assessment in higher education is not a new concept.44 Neither is the perspective that a clear distinction is needed between assessment methods and assessment purposes. For high-stakes decisions, it is acknowledged that the reliability and validity of the assessment methods are critically important. For lower-stakes decisions (eg, AfL), the reliability of the assessment method is less essential; the purpose here is “to promote learning dialogues that inform future work” in order to foster student development over the longer term.44 Taking a programmatic view of the purpose of each assessment makes it “easier to see how to invest in reliability and to identify where it really matters.”44 With such a perspective, assessments can be “managed” not in an ad hoc manner but systematically, with resources transferred and assigned to the critical assessment moments.45
Such an approach is not unfamiliar to pharmacy educators. Zlatic advocated for the adoption of an ability-based curricular design with assessment built in across the curriculum to facilitate learning rather than exclusively to measure learning.46 Likewise, Maddux suggested that “institutionalizing an assessment-as-learning” model within an ability-based curriculum would be “a powerful tool that can effectively promote, measure, and improve student learning.”47 Winslade provided a comprehensive list of evidence-based recommendations for a system to assess the achievement of program outcomes by doctor of pharmacy students in the United States and encouraged that the results be used for program improvements as well as for summative and formative assessments of student learning.48 DiVall and colleagues have offered a toolkit of formative assessment strategies for improving student learning as well as the instructional process.49 Fulford, Souza, and associates provided an extensive assessment blueprint for learning experiences in response to the 2013 Center for the Advancement of Pharmacy Education (CAPE) educational outcomes.4,50
These pharmacy educators recognized that the assessment of practice competence of health professionals is not simply a measurement problem but also an instructional design challenge.1 Adopting this perspective requires a significant shift from our focus on individual assessment methods to a concentration on comprehensive assessment programs.43 In that process, assessment needs to be repositioned such that it is no longer the last item on the curriculum renewal agenda.1 The what, how, and when of assessment should be integral parts of the curricular design discussion and purposely structured to gather and “combine information across content, across time and across different assessment sources.”51 Such integrated learning assessment programs have been described as: “… a design process that starts with a clear definition of the goals of the programme. Based upon this, well-informed, literature-based, and rational decisions are made about the different assessment areas to be included, the specific assessment methods, the way results from the various sources are combined, and the trade-offs that have to be made between the strengths and weaknesses of the programme’s components. In this way we see not just any set of assessment methods in a programme as the result of a programmatic approach to assessment but reserve the term programmes of assessment for the result of the design approach as described above.”52
A framework for designing such an integrated program of assessment was developed by an international group of medical educators experienced with the challenges of educational assessment.52 This framework consists of five interrelated assessment layers – program in action, support, documenting, improving, and accounting – bounded by program purpose (the starting point), infrastructure, and stakeholders (the context). In addition, researchers have developed a set of 72 context-independent guidelines (ie, applicable to AfL as well as AoL) that could be used in the design of an integrated program of assessment.53
Based on the framework and guidelines developed by Dijkstra and colleagues,52,53 another group of researchers developed a generic model for such a program of assessment that maximizes its “fitness for purpose” for the first layer (ie, the program in action).54 This model was designed to fulfill three assessment purposes: to facilitate learning (ie, assessment for learning); to maximize the robustness of high-stakes decisions (ie, on selection/promotion of learners); and to provide information for improving instruction and the curriculum.54 To guide construction of this model, six theoretical principles were formulated from the assessment research literature: (1) any single assessment data point is flawed; (2) we can have reasonable confidence in the validity of standardized assessment instruments through detailed attention to content construction, structured scoring and administration procedures, and use of the test on appropriate populations of learners; (3) the validity of non-standardized assessment resides more in those making the assessments (the individual assessors, who are often judging situated student performances); (4) the stakes of an assessment should be seen as a continuum, with a proportional relationship between increases in stakes and the number of data points involved; (5) assessment drives learning; and (6) expert judgment is imperative.
A graphical representation of the resultant model is reproduced in Figure 1.54 The salient parts of this model are summarized in the following brief description; for additional details, the reader is encouraged to consult the original article. Any assessment program should maximize learning and provide robust evidence of an individual’s progress toward attainment of the educational outcomes. Within a specified period of training – for example, a course, module, or academic semester – the educational program is a logical and sequential arrangement of learning tasks (eg, situated in lectures, laboratories, case discussions, self-study assignments, and clinical placements) designed to achieve specific outcomes. Some of those learning tasks will produce artifacts of learning, such as a dispensed prescription in a pharmacy practice laboratory, a therapeutic plan developed as part of a PBL session, or a written reflection on professionalism. Within the specified period of the training program, assessments are conducted to guide the progression of individuals. These assessments could be a multiple-choice test as part of a module, an observation of a patient counseling session in a pharmacy clinic, or the elicitation of a drug history during a simulated patient encounter. Some of these assessments will include evaluations of the artifacts produced as a result of the learning tasks (eg, a patient care plan). To fully support learning, the assessment tasks should be aligned with the learning tasks so as to provide the learner with feedback that is meaningful and, where necessary, actionable.8
Figure 1. Programmatic Assessment Model. From: van der Vleuten CP, Schuwirth LW, Driessen EW, et al. A model for programmatic assessment fit for purpose. Med Teach. 2012;34:209. Reprinted by permission of Taylor & Francis Ltd (http://www.tandfonline.com).
Assessment results should be documented (traceable), viewed as single data points, and not used on their own to make pass or fail decisions. Each assessment is thus viewed as low stakes. However, an accumulation of such single points may be used to inform subsequent progress decisions. The one exception to this low-stakes policy might be mastery skills (eg, immunization certification) that must be certified via a single high-stakes assessment in a simulated environment (eg, a clinical skills laboratory) before student pharmacists are permitted to immunize patients.
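To make the data-point principle concrete, the following minimal sketch (in Python, purely our own illustration; the class names, the normalized 0-1 scoring, and the threshold of eight points are hypothetical and do not come from the published model) shows one way a program might record each assessment as a traceable, low-stakes data point and require an accumulation of evidence before a higher-stakes progress decision is even considered.

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentPoint:
    """One traceable, low-stakes assessment result (hypothetical schema)."""
    source: str    # eg, "module MCQ", "counseling observation", "mini-CEX"
    outcome: str   # the program outcome this point informs
    score: float   # normalized 0-1; any single point is assumed to be flawed
    feedback: str  # narrative feedback attached to the data point

@dataclass
class LearnerRecord:
    points: list[AssessmentPoint] = field(default_factory=list)

    def add(self, point: AssessmentPoint) -> None:
        # Document every result, but never let one point decide pass/fail.
        self.points.append(point)

    def evidence_for(self, outcome: str) -> list[AssessmentPoint]:
        return [p for p in self.points if p.outcome == outcome]

    def ready_for_decision(self, outcome: str, min_points: int = 8) -> bool:
        # Stakes proportionality: a higher-stakes decision requires many
        # accumulated data points, never a single result.
        return len(self.evidence_for(outcome)) >= min_points
```

The mastery-skill exception noted above would simply bypass this accumulation rule for the designated single high-stakes certification.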
In addition to learning and assessment activities, within each period of the training program, the model suggests inclusion of two types of activities to promote and reinforce learning. First, students are encouraged to reflect upon the information obtained from the learning and assessment activities and use the results of that reflection and any other feedback received to develop and implement self-directed learning plans. Van der Vleuten and colleagues acknowledged the difficulties in getting individuals to reflect and engage in self-directed learning.54 Consequently, they suggested a second type of supportive activity that involves the scaffolding of self-directed learning with social interaction. They encouraged the use of coaches or mentors (including senior students or peers) and structured reflective activity instruments to support reflection and self-direction. They suggested this social interaction component is critical to avoid trivialization and bureaucratization of the reflective activities. For a more detailed discussion of the supportive activities and related references, the reader is referred to the article by van der Vleuten and colleagues.54
The model includes intervals (eg, the end of a module, semester, or term) at which an intermediate evaluation of student progress is carried out against performance standards. It is recommended that a committee of examiners/assessors be responsible for this evaluation, using an aggregation (when appropriate and meaningful) of all assessment results, learning artifacts, and select information from the supportive activities garnered to that point and relevant to the decision to be made. When the results are consistent, the committee’s decision should be straightforward. When consistency is missing, the committee will need to spend more time considering, and perhaps augmenting, the available data. The focus of these intermediate evaluations is developmental: to ensure the students are on track. Subsequently, students are expected to follow recommended remediation activities and use the information provided to develop future self-directed learning plans.
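Again as a hypothetical illustration only (the performance standard, the tolerance value, and the use of score dispersion as a proxy for consistency are our own assumptions, not elements of the published model), the triage logic of such an intermediate evaluation might be sketched as follows: consistent evidence yields a straightforward developmental judgment, while inconsistent evidence is flagged for fuller committee deliberation.

```python
from statistics import mean, pstdev

def intermediate_evaluation(scores: list[float],
                            standard: float = 0.7,
                            tolerance: float = 0.15) -> str:
    """scores: normalized (0-1) results accumulated over the interval."""
    if not scores:
        return "no evidence yet: gather more data points"
    if pstdev(scores) > tolerance:
        # Inconsistent results: the committee spends more time here and
        # may augment the available data before judging progress.
        return "inconsistent: refer for committee deliberation"
    if mean(scores) >= standard:
        return "on track"
    return "remediation recommended"
```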
At certain points in the educational program (eg, the end of the academic year), student progress decisions will have to be made. In their “programmatic assessment” model, van der Vleuten and colleagues recommended that the same committee of examiners/assessors responsible for the intermediate evaluations make these decisions, informed by all the assessment data gathered to that point and relevant to the decision at hand.54 They acknowledged the high-stakes nature of these decisions and therefore suggested that a number of stringent procedural safeguards (eg, clear student appeal procedures, assessor training, and benchmarking) may be necessary to assist the committee. The possible decisions could include promotion (with or without distinction), remediation needed, or non-promotion. The model’s proponents suggest that, in most cases, if the system is working, the outcome should come as no surprise to any student.1
Examples of Integrated Assessment
While, to our knowledge, no health professional training program has transitioned to a fully integrated program of learner assessment, several medical training programs have implemented aspects of van der Vleuten’s proposed model and provide examples for others to follow and learn from.
Driessen and colleagues described the application of this model in the context of a final-year medical clerkship in the Netherlands.55 Individual assessments included at least five mini-clinical evaluations (mini-CEXs), two multisource feedback procedures, two critical appraisals of a topic, two progress tests, and one objective structured clinical examination (OSCE). For each of these assessments, feedback was provided to the student, who met with a mentor every four weeks. Each student also generated a portfolio of his or her experiences, successes, and challenges. An intermediate evaluation of the student’s progress was conducted by the mentor approximately one-quarter of the way through the clerkship, and a final evaluation (pass or fail) was conducted by a review committee that examined all of the data collected about a student over the entire clerkship experience (examination scores, portfolio data, mentor opinions, and the student’s own self-assessment based on the portfolio).
Based on their evaluation of the program, the authors concluded that it had high learning value, that the assessments were sufficiently robust, and that the model was well accepted by the students. They also identified a list of success factors for programmatic assessment in the clinical workplace, which included the need to make each individual assessment simple and “lean” and the importance of incorporating qualitative data in the final decision process.
Schuwirth and colleagues described the use of a comprehensive program of assessment within a new graduate-entry MD program aimed at producing physician-clinical researchers at the University of Maastricht.56 A feature of this program was the integration of assessment as part of the learning process. Critical to this assessment approach was the use of portfolios and faculty mentors. The portfolios were used to record the results of all assessments (whether formative, summative, self, peer, or critical reflections). Students and mentors met regularly (eg, six times per year for first-year students) to discuss progress in achieving the program’s learning goals and to establish future learning plans. Periodically, a summary of the portfolio’s contents was prepared by an independent mentor and reviewed by a committee of independent mentors to reach progress decisions. In reviewing the assessment program, the authors found that elements related to feedback, portfolios, assessments, and assignments generally supported learning. Interestingly, however, some students were less appreciative of the portfolio’s reflective activities, seeing aspects of this exercise as inhibiting the learning response.
Perhaps the most comprehensive version of an integrated program of student assessment was described by Ricketts and Bligh in the context of the Peninsula College of Medicine and Dentistry.57 Their “frequent look and rapid remediation” outcomes-oriented assessment system was developed for a new, five-year medical and dental school in the United Kingdom (first-year enrolment > 600). As they describe the program: “…the system uses continuous assessment … and remedial action is possible at any of the many assessment points. Global assessments on progression are made at the end of every academic year.”
A wide variety of assessment strategies were used (including progress testing, patient-based clinical assessments, multi-source judgments of professionalism, and portfolio reviews) as a mixture of continuous, cumulative, and end-point assessments, with few individually “high stakes” examinations. While they described several “growing pains” in the development and implementation of the assessment structure, the authors felt that the emphasis on frequent formative assessments, and on “doing” rather than “knowing,” led to the early identification of students experiencing difficulty so that remedial steps could be implemented sooner.
Drawing upon their own developmental work and a focused literature review, van der Vleuten and colleagues have provided 12 tips for the implementation of programmatic assessment.58 Their guidelines, arranged under the general headings provided in Table 1, present a succinct summary for anyone interested in exploring this approach to assessment.
Table 1. Twelve Tips for Programmatic Assessment
Challenges in Shifting to an Integrated Program of Assessment
Designing, implementing, and maintaining a comprehensive and integrated program of assessment is not without challenges, many of which are significant. The more critical are related to assessment ownership and oversight. An integrated program of assessment is a collective endeavor and needs to be centrally managed and developed from a master plan.1,58 Furthermore, an integrated program of assessment is not an assortment of single methods used in isolation to measure exclusively one competency at a time. Rather, it is “an educational design problem that encompasses the entire curriculum” in which assessments are strategically selected, sequenced, and combined for their contribution to competence development and decision-making.1 In that process, an integrated program of assessment combines quantitative and qualitative data for student feedback and progress decisions. The combining of information from multiple sources requires professional judgment and strict procedural measures to ensure trustworthy decision-making.55,58 Development of expertise in qualitative assessment by faculty members and clinical supervisors will be critical. Again, recent developments in medical education provide considerable guidance; in particular, we would recommend the work on developing quality in-training evaluation reports by Dudek and colleagues at the University of Ottawa.59 While developed for the clinical supervision process, their tips for providing quality feedback are relevant for all involved in performance assessment, regardless of the stage of the educational program at which it occurs.
“Programmatic assessment” of the nature described is built upon constructivist learning theories and longitudinal competency development.58 Successful implementation of such an integrated program of assessment is challenging because it often requires a culture change in the operating philosophy and practices at the macro (university/accreditation body), meso (curriculum) and micro (faculty and students) levels.58 The challenge of implementation is nowhere more evident than when assessment for learning data are combined to make high stakes assessment of learning decisions.60
Further challenges are associated with what is perceived to be an expensive and overly bureaucratic assessment system. To address this criticism, van der Vleuten and his colleagues have suggested that a deliberately slow start could contain costs. Accordingly, they recommend choosing a few things, doing them well, and then building upon those successes.54 They also suggest there is considerable overlap between assessment and teaching in such a system, where assessment activities are embedded in the learning activities. Peers can perform many of the low-stakes assessments, and assessment instruments and strategies (eg, progress tests, e-portfolios, and workplace-based assessments) can be developed cooperatively by consortiums of pharmacy schools to share costs1 (eg, the Pharmacy Curriculum Outcomes Assessment developed by the National Association of Boards of Pharmacy in the United States). Investments in assessment also can be investments in learning and thus affect the overall quality of an entire educational program.61
Many of the issues articulated above were described by Bok and colleagues, who enumerated a number of challenges encountered and lessons learned when implementing an integrated program of student assessment in the final three clinical years of a six-year veterinary medicine degree program at Utrecht University in the Netherlands.60 Their findings suggest there was confusion and apprehension when formative assessment data were recorded in portfolios and later included as part of the evidence used to make summative decisions. Also highlighted was the importance of training for clinical supervisors and portfolio review committee members with respect to feedback quantity and quality. Bok and colleagues felt that the assessment program failed to fully harness the power of assessment to promote learning. Here again, feedback seemed to be the main challenge. Students did not know whom to ask for feedback or were reluctant to ask for it. Supervisors felt they had insufficient time to provide quality feedback. Bok and his coworkers concluded that, to promote reflection and self-directed learning, “It appears to be important to scaffold self-directing learning by offering students social interaction and external direction from a personal mentor,” thus supporting the findings of previous research.
Future Directions and Research Opportunities
McLaughlin and her colleagues suggest that pharmacy educators are well positioned to “re-engineer learning and curricula” by conducting research that not only informs course redesign but also transforms learning.62 The purpose of such educational research is to contribute to theory development and to generalize about relationships among various phenomena.62 We encourage pharmacy educators to focus on learning assessment as one part of this discourse.
In a 1996 publication, educational achievement testing in the health professions was described as an area of turmoil and one warranting scholarly attention.61 Since that observation was made, there has been progress, most notably in medicine but also, increasingly, in the pharmacy education literature. For example, Mészáros and colleagues have developed a triple-jump progress test, which they administer at the end of four academic semesters. It consists of a written, case-based, closed-book examination; a written, case-based, open-book examination; and an OSCE.14 Medina and her colleagues described the development and implementation of integrated progress examinations as part of the final assessments in six courses (two per year in the first three years of the professional program) to assess whether students had acquired and retained foundational pharmacy practice knowledge and skills.63 Wensel, Broeseker, and Kendrach described the implementation of an electronic portfolio and required students to record “a self-assessment of how well they are able to communicate information learned and integrate information across all courses, an artifact demonstrating an ability they achieved that semester, a reflective questionnaire, and an updated curriculum vitae.”64 As early as the 1990s, Purdue University introduced a holistic approach to assessment through the implementation of an assessment center.65,66 The authors emphasized that assessment centers are “not necessarily a ‘where,’ but more of a ‘who’ and a ‘what.’ Assessment centers function to identify and serve as a repository for processes and procedures to conduct assessments, and also as a medium to collect data.”
However, there is still considerable heavy lifting to be done to translate these innovations into a more globally integrated, curriculum-wide program of learning assessment. Thus, there are opportunities for pharmacy educators to contribute to research related to comprehensive, integrated assessments of the type described in this paper.67 Some direction is provided by an Association for Medical Education in Europe guide for medical educators, which suggests that, while there are no theories unique to assessment, educators should look to related fields such as expertise development, cognitive psychology, and psychometrics for theoretical guidance.36 Health professions educators are encouraged to consider the emerging phenomenon of assessment for learning as an area for future research and are provided with a number of suggestions for possible theoretical approaches. Further guidance can be found in the recommendations of a consensus report on research in assessment prepared by an international panel of medical education researchers.68 The panel’s 26 recommendations were grouped to provide direction pertaining to the broad headings of types of research, theoretical frameworks/context, study design, choice of methods, instrument characteristics such as validity and generalizability, cost/acceptability, ethical issues, and infrastructure and support.
CONCLUSION
There are increasing pressures for pharmacy education programs to produce more socially responsive, “fit for purpose” graduates, as well as increasing pressures for accountability in this process. In response, pharmacy educators have been developing a variety of innovative educational models and an increasing number of assessment tools designed to address these needs. While these advances in both curriculum and assessment have been valuable, the model implicit in our assessment practices might unintentionally be undermining our curricular efforts. In particular, our efforts to create integrated curricula, which in turn are intended to help our students integrate the various aspects of practice, may well have been undermined by a model of assessment that tends to focus on isolated aspects of knowledge, skill, and attitude, and that promotes a “pass it and move on” vaccination theory of learning in our students. As we move forward with our efforts to produce “fit for purpose” graduates and ensure accountability in this process, therefore, we must recognize that assessment is not merely a measurement problem but also an instructional design problem. In this regard, if we wish to promote an integration of concepts and competencies in our students, we must think not only about how to create integrated educational practices but also about how to create integrated assessment practices. Programmatic assessment is one potential way forward. Moving in this direction will require a fundamental rethinking of our assessment practices: from a primary focus on assessment of learning to a focus on assessment for learning, and from a model in which assessment is the exclusive responsibility (and right) of individual course directors and teachers to one in which assessment is the collective responsibility of the program. Such a shift undoubtedly has its challenges, but it opens a new opportunity to move beyond the carousel of reform without change in our integration efforts.
ACKNOWLEDGMENTS
The authors would like to thank Simon Albon, PhD, Helen Fielding, BA, BEd, George Pachev, PhD, and Marion Pearson, PhD, for their feedback on drafts of this manuscript.
DWF also would like to thank Dr. Lambert Schuwirth and his colleagues at the Flinders Innovations in Clinical Education, Faculty of Medicine, Nursing and Health Sciences, Flinders University, Adelaide, Australia; Dr. Cees van der Vleuten and his colleagues at the School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands; and Dr. Joanna Bates and her colleagues at the Centre for Health Education Scholarship, Faculty of Medicine, the University of British Columbia, Vancouver, Canada, for their support and guidance during study leaves at their institutions.
Received October 29, 2015. Accepted March 31, 2016.
© 2017 American Association of Colleges of Pharmacy