Research Article: INSTRUCTIONAL DESIGN AND ASSESSMENT

A Methodology for Assessing Skill-Based Educational Outcomes in a Pharmacy Course

Gregory L. Alston and Carrie L. Griffiths
Wingate University School of Pharmacy, Wingate, North Carolina
American Journal of Pharmaceutical Education September 2015, 79 (7) 105; DOI: https://doi.org/10.5688/ajpe797105

Abstract

Objective. To develop a methodology for assessing skill development in a course while providing objective evidence of success and actionable data to improve instructional effectiveness.

Design. Course objectives were recast as skills to be demonstrated. Confidence in these skills was surveyed before and after the course. Student skills were demonstrated using 4 work products and a multiple-choice examination.

Assessment. The change from precourse survey to postcourse survey was analyzed with a paired t test. Quality of the student work product was assessed using scoring guides. All students demonstrated skill mastery by scoring 70% or above on the work product, and 87/88 demonstrated individual progress on the surveyed skills during the 15-week course.

Conclusion. This assessment strategy is based on sound design principles and provides robust multi-modal evidence of student achievement in skill development, which is not currently available using traditional student course evaluation surveys.

Keywords
  • educational outcomes
  • assessment
  • CAPE 2013
  • skill development
  • course evaluation

INTRODUCTION

Accreditation Council for Pharmacy Education (ACPE) Standards1 call for the development of knowledge, skills, abilities, and attitudes to become a competent pharmacist.1 Knowledge usually refers to declarative knowledge representing course content that is memorized and understood. Skills are complex acts that, while requiring knowledge, also involve performance.2 Abilities refer to behaviors that resemble skills but are more complex and require a much longer time to develop.2 Attitudes are primarily elements of the affective domain such as interest, motivation, self-confidence, personality, and temperament.2 The prime directive of the ACPE Standards1 is to create competent professional pharmacists who are lifelong learners.

Learning that lasts involves integrating learning with the process of individual development to produce competent performance.3 Four longstanding aims of higher education can be paraphrased to reflect the goals of pharmacy education, and each element is required for significant learning to occur.3 These aims are: (1) reasoning: develop thinking skills within the pharmacy discipline; (2) development: stimulate students’ capacity to make meaning within their professional lives; (3) self-reflection: develop the ability to self-reflect and thrive within the culture of the pharmacy profession; and (4) performance: develop the capacity to perform well within work, family, and community.

These aims should inform the curricular strategy of an institution. However, for students to achieve personal growth, they must experience a transformative learning cycle, which involves an internal focus on meaning, an external focus on competence, personal characteristics, and a contextual framework for education. The 4 domains of growth for a student are: abstract, insightful, and sound reasoning; integrative and ethical development; perceptive, insightful, and adaptive self-reflection; and effective performance.3 Students achieve this transformation by using metacognitive strategies to learn “what they should know and how they can do something,” using self-assessment skills to learn “what they can do and how they can improve,” and engaging in a diverse set of activities to learn “who they are and who they should become.”3

Typical didactic instructional assessment activities in pharmacy programs center on the demonstration of knowledge acquisition through quizzes, examinations, and cases. However, the landscape for pharmacy education is shifting from a knowledge-centric basis to an outcomes-centered schema with the introduction of the 2013 Center for the Advancement of Pharmacy Education (CAPE) Outcomes.4 The ACPE Standards 2016 incorporate the CAPE Outcomes.5 In the past, colleges and schools of pharmacy relied heavily on foundational science examination scores as key performance indicators of quality. Given that foundational knowledge is now only 1 of the 15 subdomains outlined in the CAPE 2013 document, programs will require significant changes in student, curricular, and institutional assessment to meet the mandates of the new standards.

The Consortium for the Improvement of Teaching, Learning, and Assessment (CTLA) developed a list of shared educational assumptions to inform assessment practices for higher education.6 These include: student learning is a primary purpose of the educational institution; education goes beyond knowing to being able to do what one knows; learning is active and collaborative; assessment is integral to learning; assessment must be developed in multiple modes and contexts; and performance assessment with explicit criteria, feedback, and self-assessment is an effective strategy for ability-based, student-centered education.

Assessment and learning design are not standalone disciplines. Assessment is required for learning to take place. Traditional models of pharmacy instructional design typically begin with course goals and a course description and progress to a list of course objectives. Activities and lectures are planned to meet those objectives, and students’ knowledge acquisition is assessed by testing from the objectives. This leads to courses designed to cover material rather than to develop functional skills. Additionally, it leads to assessments designed to compute grades rather than enhance learning. Wiggins and McTighe suggested a backward course design strategy that identifies desired results, determines acceptable evidence, and plans learning activities and instruction.7 Backward course design is a strategy to improve learning and help pharmacy students become lifelong learners by focusing on skill development rather than content delivery.8 A substantial hurdle to the implementation of this strategy is the lack of a simple and effective method of assessing skill development.

This article describes a methodology for using precourse and postcourse surveys, as well as carefully selected work performance products, to provide a robust analysis of a course’s ability to facilitate achievement of skill-based educational outcomes. This assessment strategy was devised to measure the development of skills rather than the possession of foundational knowledge. The strategy allows students to self-reflect and create individualized meaningful outcomes and provides multiple modes of data in the context of explicit criteria for performance. While the strategy provides new levels of data regarding student performance, it also directly informs instructional effectiveness by analyzing whether students developed the skills. Therefore, this strategy is designed to be appropriate for any course tasked with developing skills.

DESIGN

The intervention took place in a required 2-credit hour course in pharmacy management delivered in the third year of the PharmD program at Wingate University School of Pharmacy (WUSOP).

The first step in the process of developing this assessment strategy was to recast the traditional topic-driven course objectives into a set of skills-based objectives. The literature was reviewed and practicing pharmacists were consulted to determine a list of skills that could reasonably be taught to third-year pharmacy students. The list of managerial sciences includes accounting, finance, economics, human resource management, marketing, operations management, and value creation.9 Each content area was further processed using the backward course design strategy to identify the desired results, determine the acceptable evidence, and plan the learning experiences.

Assessments were then designed using a series of steps based on methods developed at Alverno College.10 Skills to be developed in this domain were identified, with the depth of instruction kept appropriate to the course context. Given that the nature of this course was to provide introductory instruction to entry-level PharmD candidates, the following skills were selected for the educational objectives: basic accounting, basic finance, economics, human resource management, marketing, professional compensation strategies, business planning, and value creation. Using the Principles of Good Assessment,11 the authors decided that students would demonstrate skill mastery by participating in active-learning exercises in class, by completing a survey before and after the course, by taking a case-based, problem-solving, multiple-choice examination, and by completing work product assignments. Feedback was given to the students through a combination of active-learning exercises, class assignments with written feedback, class discussion, peer review, private discussion, and detailed grading comments. Four to 6 performance expectations were then selected for each skill to serve as the basis for class assignments. Some were purely formative practice in skill development rather than graded summative exercises (Table 1). Summative work product assessments were designed to evaluate student performance on the skill in question. Such assessments should require the use of multiple complementary skills and simulate authentic performance as closely as possible. Thus, work product assignments were designed to mirror authentic situations that would require the application of management skills.

Table 1. Skill Objectives for the Third-year Management Course

Work Product Assignments and Assessments

Math-based skills for accounting, finance, economics, and value creation were assessed using a 70-item, multiple-choice examination. Students were provided a practice scenario, a profit and loss statement, and a balance sheet. They were then asked to compute results, predict outcomes, and determine the likelihood of success of a particular strategy. Students received a score report that reflected their performance within each skill domain. This examination was worth 30% of the final course grade.
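
The per-domain score report described above can be produced by grouping exam items by skill domain. The following is a minimal Python sketch of that bookkeeping; the item-to-domain map, function name, and data shapes are illustrative assumptions, not the actual examination key:

```python
from collections import defaultdict

# Hypothetical mapping of exam item numbers to skill domains
# (the real 70-item key is not published in the article)
ITEM_DOMAIN = {1: "accounting", 2: "finance", 3: "economics", 4: "value_creation"}

def domain_score_report(answers: dict, key: dict) -> dict:
    """Percent correct within each skill domain, given a student's
    answers and the exam key, both indexed by item number."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for item, domain in ITEM_DOMAIN.items():
        total[domain] += 1
        if answers.get(item) == key[item]:
            correct[domain] += 1
    return {d: 100 * correct[d] / total[d] for d in total}
```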

The human resource management assessment involved the development of legal and ethical questions that could be used in a hiring interview. Students worked in groups to discuss acceptable and unacceptable behaviors in a candidate. They individually rated their top 5 behaviors in each category and crafted 5 questions designed to elicit the information needed to identify those behaviors in a candidate without violating federal employment laws. This assessment was worth 10% of the final course grade. The scoring guides for all the work product assignments are found in Table 2.

Table 2. Scoring Guides for Individual Work Product Assignments and In-class Work Performance Assessments

Business planning skills were assessed using a group business plan project. Students developed a continuing education presentation that would appeal to pharmacists. They crafted a headline, a tagline, an outline of the talk, and a marketing message. Feedback was delivered in 2 ways: a 10-point scale was used to evaluate the accuracy and clarity of the presentation design, and a mock “sales” website was created advertising the students’ seminars. Practicing pharmacists were then invited to vote on which seminar they would attend if given the choice. The votes were tallied, and each group was able to see a nonacademic market response to their work. This project was worth 10% of the final course grade.

To assess skills in financial planning, accounting, budgeting, and professional compensation strategies, each student prepared a detailed budget for their first 5 years after graduation. In addition, they computed a net worth projection for each year-end. In class, students studied income, taxation, debt, and decision-making to prepare a personalized budget and net worth statement. Students worked in groups and consulted the instructor outside of class. This project was worth 10% of the final course grade.

Each student was required to develop a vision and mission statement for their career and a value creation strategy describing what they would do to be worth the salary they planned to request. Furthermore, they defined the steps required to go from where they were as students to where they wanted to be as professionals. This project was worth 15% of the final course grade.

Work product assignments were all graded by a single instructor using a detailed scoring rubric within one week of submission, and written feedback was given to students. One key element of the design process was to focus the skill performance outcomes on tasks that would be perceived by students as personally beneficial. By focusing the assignments on how they were going to earn income, pay down student debt, and locate their ideal job, we hoped students would be more inclined to give honest effort to the assignments.

The assessments were designed to directly comply with the CTLA shared educational assumptions.6 Student work scores were compiled and analyzed to determine the number of students who successfully completed course requirements in total and for each skill assignment.

Lastly, because the course was a flipped classroom with the students responsible for engaging with the instructional material prior to class, the in-class time was heavily focused on active-learning exercises. Therefore, 25% of a student’s course grade reflected their engagement in the class activities. The 3 points available for each class session were awarded based on the scoring guide in Table 2. The intent of the in-course activities was to let the students gain experience and confidence in skills they would be required to perform on their summative assessments. While the summative assessments directly assessed students’ ability to perform the skills in question, the final element of this assessment strategy was to develop an indirect assessment of student skills that could be used to survey student attitudes as well as quantify individual development over time.

Survey Design

An online survey was constructed using Survey Monkey (Survey Monkey, Palo Alto, CA) to assess multiple elements of students’ perception of their management skills. Students were asked to rate each skill within 3 dimensions: (1) Understanding: how well the knowledge base for the skill was understood; (2) Decision-making: confidence in the ability to make decisions within the skill; and (3) Usefulness: the likelihood that they would use these skills in practice. Students rated each of these dimensions from 0% to 100% along an 11-point scale, with an answer of zero scored as 1, an answer between 1% and 10% scored as 2, and so on, up to an answer of 91%-100% scored as 11.
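
For readers implementing the same response scale, the percentage-to-score mapping just described reduces to a one-line rule. Below is a minimal Python sketch; the encode_response helper is our illustration, not part of the study:

```python
import math

def encode_response(pct: float) -> int:
    """Map a 0-100% self-rating onto the 11-point scale described above:
    0% -> 1, 1-10% -> 2, 11-20% -> 3, ..., 91-100% -> 11."""
    if not 0 <= pct <= 100:
        raise ValueError("percentage must be between 0 and 100")
    return 1 if pct == 0 else math.ceil(pct / 10) + 1

# Spot checks against the rule in the text
assert encode_response(0) == 1
assert encode_response(10) == 2
assert encode_response(11) == 3
assert encode_response(95) == 11
```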

While these skill dimension questions composed the bulk of the survey, demographic and contextual information not germane to this article were also included in the full survey. Five questions directly related to student development and decision-making are worth mentioning: (1) Do you plan to pursue a pharmacy residency after graduation? (2) Do you have a written personal mission statement? (3) Do you have a written financial plan for the next 5 years? (4) If you were forced by circumstances to run your own business, either a traditional retail pharmacy or a new professional clinical practice, in order to earn a living, how confident are you that you could build a successful practice? (5) The concepts and strategies taught in this course will help me to be a better pharmacist during my career after my academic training is completed (answer choices ranged from strongly disagree to strongly agree; this question was included on the postcourse survey only). Reports showed the number of students who responded at each of the 11 levels on each survey item, in addition to the average score for each question. Individual student responses on the presurvey and postsurvey were entered into a master data table and analyzed using SYSTAT 13 (Systat Software, San Jose, CA).

The survey results for each skill were recorded by student identifier to allow data pairing, and 4 compiled scores were computed to combine the same results in different ways. The 24 scores (8 skills x 3 dimensions) of the precourse and postcourse surveys were compiled into an “all skills” score for each survey. The 8 scores for the Understanding, Decision-making, and Usefulness dimensions were compiled to create a precourse and postcourse score for each dimension. Paired data were analyzed to determine the number of students who scored higher on the postcourse survey. The precourse score for each element was then compared to its corresponding postcourse survey score using the paired sample t test with 95% confidence interval and graphed using the 2 sample t test graph in the software.
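
As a sketch of this compilation and pairing step (the variable names and placeholder scores below are illustrative assumptions, not study data), the analysis amounts to summing each student's 24 item scores and running a paired-sample t test on the pre/post pairs, for example with scipy:

```python
import numpy as np
from scipy import stats

SKILLS = ["accounting", "finance", "economics", "human_resources",
          "marketing", "compensation", "business_planning", "value_creation"]
DIMENSIONS = ["understanding", "decision_making", "usefulness"]

def all_skills_score(responses: dict) -> int:
    """Sum the 24 item scores (8 skills x 3 dimensions, each 1-11)
    into one 'all skills' score with a maximum of 264."""
    return sum(responses[(skill, dim)] for skill in SKILLS for dim in DIMENSIONS)

# Placeholder compiled scores, paired by student identifier
pre = np.array([148, 155, 162, 139])
post = np.array([210, 221, 215, 230])

t_stat, p_value = stats.ttest_rel(post, pre)  # paired-sample t test
n_improved = int(np.sum(post > pre))          # students scoring higher postcourse
print(f"t = {t_stat:.2f}, p = {p_value:.4f}; improved: {n_improved}/{len(pre)}")
```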

The survey responses for additional questions 1, 2, 3, and 5 were reported and p values were calculated for the comparison between precourse and postcourse results. These scores reflected the percentage of students who began the course with major milestone assignments already in progress and those who had a completed work product by the end of the course. This study was approved by the university’s Institutional Review Board.

Overall Assessment Strategy

One key element of the CTLA shared educational assumptions is that assessment must use multiple methods and contexts.6 This assessment strategy captured student achievement through data points from 4 perspectives: (1) student perception of competence in the skill domain prior to instruction; (2) student perception of competence in the skill domain after instruction occurred; (3) work product evidence of actual skill performance in an academic setting; and (4) student determination of the value of the instruction.

The authors propose that course leaders utilizing this assessment strategy use the following guide to determine whether their course has succeeded in developing the skills it targets. (1) Did the students achieve mastery on the direct assessments? [In this example, the accounting skills midterm, business marketing plan, personal action plan, and 5-year net worth projection.] (2) Did their self-confidence in their understanding of the subjects taught improve? (3) Did their self-confidence in their ability to make good decisions improve? (4) Did students perceive the material as relevant and usable in their postacademic careers? (5) Did students’ confidence in their ability to perform the master ability improve? [In this example, did they feel confident in their ability to manage a business if forced to by circumstance?] (6) Did the students perceive the course to enhance their ability to perform as competent practitioners? (7) Did students actually make an important decision using the skills taught? [In this example, did they confirm or change their view toward pursuing a residency?]

The time investment required to convert the original topic-driven design to a backward course design was significant. Converting thirty 50-minute lectures to fifteen 100-minute class sessions filled with active-learning exercises required more than 100 hours of effort. Building the pre/postsurvey strategy, however, required almost 20 hours. Ongoing use of the strategy takes about an hour per semester, as the surveys can be copied and duplicated easily in the survey software. Data analysis and evaluation consume an additional 4 hours per semester, with most of the time devoted to building the data tables for analysis. By following the methodology outlined here, users of this strategy can significantly reduce their development time. The surveys take 10 minutes each for students to complete. Because most course leaders routinely review their courses, this strategy may not require any more time investment than what is spent on existing evaluation tools, and the quality of the data it yields may improve the efficiency and accuracy of course improvement efforts in the long run.

EVALUATION AND ASSESSMENT

A carefully constructed student perception survey can be a reliable and valid source of student views regarding the quality of a course and the instruction provided. Aleamoni reviewed student survey rating myths and suggested that students are capable of making consistent judgments about the instructor and the instruction despite their immaturity and lack of experience, and that student ratings are not simply a popularity contest.12 Interestingly, he stated, “The disadvantages of gathering student ratings primarily result from how they are misinterpreted and misused.”12 In many cases, the only “objective data” instructors have about the effectiveness of their teaching are these traditional student opinion surveys, which are essentially compilations of subjective data points. Unless student satisfaction surveys accurately reflect the actual attainment of course objectives, they may introduce significant error into the assessment process.

Novel instructional approaches are not polished efforts when first attempted. Poor student course ratings potentially provide a disincentive to innovate in the classroom unless that faculty member has evidence that the innovation is adding value to the instruction. Aleamoni stated the most common misuse of perception survey results is that, without normative data, a faculty member may be tempted to place inappropriate emphasis on selected student responses.12 This tendency to overweight negative responses may lead to fundamentally unsound instructional design decisions. Abandoning innovation based on negative student comments could be harmful if the innovation actually produced better outcomes. In addition, if poor student perception ratings are used to deny promotion and tenure, the college may be unintentionally punishing innovation. The survey strategy described in this paper aimed to minimize potential problems with student surveys by providing a structure and methodology that limits these deficiencies while capturing useful outcomes data.

The traditional end-of-course student satisfaction survey is subject to several biases: (1) The typical survey focuses on the process of educating rather than the outcomes of instruction. If the survey is designed with a specific process in mind and the teacher does not follow that process, the quantification of the student satisfaction results will likely be invalid; (2) Without a precourse data point specific to the material being covered in the course, one cannot control for the entry-level knowledge, competence, or opinions of the student group. Thus, students cannot use typical course evaluation survey data to gauge their own progress or seek evidence-based advice from faculty advisors; (3) Without motivation to be truthful, student responses are potentially biased; (4) Many students fail to complete the survey instrument, causing significant nonresponse bias, so limiting evaluation of instruction to an opinion survey completed by a potentially biased sample is not a sound assessment strategy.

The precourse and postcourse survey design used in this assessment strategy minimized these biases by assessing precourse skills along 3 dimensions: Understanding, Confidence in Decision-making, and Usefulness of the skills. While perception of these dimensions is not the same as performance of skills, if students’ confidence in their understanding of the content, in their ability to make good decisions using the content, and in their recognition of the usefulness of that content improved, they would be more likely than students who reported otherwise to develop these skills. Also, by embedding the survey into a course assignment, response rates were 100% for the precourse survey and 97% for the postcourse survey, virtually eliminating nonresponse bias. Additional work will be required to confirm this hypothesis, but the additional 5 questions were included to provide evidence of complementary markers that could inform this question. In this use of the strategy, all of the direct and indirect elements aligned to indicate that learning took place. If those elements were to point in opposing directions, interpretation would pose new challenges. Further work is required to see what patterns emerge that can improve interpretation.

Ninety-one students enrolled in the course. The student body in general is 38% male, and 57% earned a college degree prior to enrollment in the doctoral program. Survey participants reflected the demographic makeup of the student body. Data for all individual skills assessed on the surveys are reported in Appendix 1. Each of the skill areas showed significant improvement. With 8 skills rated along 3 dimensions each and a maximum score of 11 on the scale, each student could achieve a maximum score of 264 on the precourse or postcourse survey. The average student all skills score was 150.1 on the precourse survey and 217.9 on the postcourse survey, a gain of 45.3% (Table 3). This all skills average was depressed by the Usefulness score, which was much higher on the precourse survey (67.9) than either the Understanding or Decision-making score (40.7 and 41, respectively). The average student score improved 76.7% on confidence in Understanding, 74.8% on confidence in Decision-making, and 9.5% on perception of Usefulness. Two of 88 students showed slight decreases in their all skills scores because their precourse Usefulness scores were high; those same students progressed +19% and -5%, respectively, on their combined Understanding and Decision-making scores. One individual who regressed had a precourse survey score of 240/264 and a postcourse survey score of 216/264; this decrease may have been a result of exaggerated confidence before the course began. In all, 87 of 88 students who completed both surveys progressed.

Table 3. Pre/Postsurvey for the Indirect Assessment of Management Skill Development

Fifty-four students confirmed their precourse decision to pursue a residency, and 11 converted from not sure to no, which was confirmed by reviewing the actual paired data table. The process of making this decision using the value creation strategy was a significant element of the course. Although it is impossible to determine whether this course was the sole cause of that decision, the results support the conclusion that decisions were made during this 15-week experience.

Regarding personal mission, 79 of 91 students went from having no written mission statement to having a peer-reviewed written document by the end of the course. This evidence supports the notion that students actually crafted a personal mission statement. Additionally, these mission statements were graded according to a scoring guide that provided evidence of their quality.

Eighty-two of 91 students went from having no written personal financial plan to having a detailed income, expense, and 5-year net worth projection based on their circumstances. This evidence supports the notion that students actually crafted a budget, income and expense statement, cash flow plan, and net worth statement using the skills taught in the course. Additionally, these work products were graded according to a predefined scoring guide that provided evidence of their quality.

To determine students’ confidence in their overall ability to manage if they had to, the relevant question used the 11-point rating scale for the skill dimensions. The mean score for the precourse survey was 6.55 (SD=2.7) for 91 students, and the mean score for the postcourse survey was 8.2 (SD=1.9) for 86 respondents (Table 3). Two students completed the postsurvey but failed to answer this question. This result provides evidence that students’ confidence in their ability to manage improved over the course, suggesting that the instruction provided value for the students.

The statement that the concepts and strategies taught in the course would help students be better pharmacists after academic training was rated using a 4-point Likert scale (strongly disagree, disagree, agree, strongly agree). Thirty-five students answered strongly agree and 45 students answered agree so that 82 of 88 (93.2%) rated the course as useful to their career (Table 4).

Table 4. Assessment of Performance Markers

Table 5 reports the descriptive statistics for the direct work product assessments of skill performance. All students in the course attained scores above a “C” performance level (70%) on every course assignment, with the exception of one student who scored 60% on the personal action plan and 5-year budget exercise and one student who earned 67.5% on the midterm. The midterm consisted entirely of case-based, problem-solving items that required the ability to evaluate the case dynamics and apply the appropriate strategy and its concomitant calculations to solve the problem. As such, it was a higher-order measure of the students’ grasp of the skills being assessed and was scored mechanically.

Table 5. Descriptive Statistics for the Direct Course Assessments of Student Work Product Assessing Management Skills

Class mean scores on the postcourse survey improved significantly from the precourse survey, the standard deviation of the postcourse survey cohort was significantly smaller than that of the precourse survey cohort, and the low score in the class scoring range improved significantly. These 3 indicators signal not only that the class as a whole improved but also that the low-scoring outliers moved closer to the mean. The results of the pre/postcourse survey analysis suggest that all 3 key performance indicators were achieved. Figure 1 shows a significant difference between the surveys of Understanding (p<0.01), indicating that a significant change took place: the mean score improved (76.7%), the minimum score improved, and the standard deviation shrank (43%). Figure 2 shows a significant difference between the surveys of Decision-making (p<0.01), indicating a significant change: the mean score improved (75%) and the standard deviation decreased (43%). Figure 3 shows a significant difference between the surveys of perception of skill Usefulness (p<0.01), indicating a significant change: the mean score improved (9.4%) and the standard deviation decreased (21.8%). Given the much higher starting point for the class in this dimension, the growth in student performance was modest; in addition, the dimension itself was not a direct focus of course instruction but was considered a collateral effect. These facts may explain why improvement was weaker here than in the other dimensions. Figure 4 shows a significant difference in the surveys of confidence in business skills (p<0.01), indicating a significant change: the mean score improved 25.2% and the standard deviation decreased (31.1%).
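
A minimal sketch of how these 3 cohort-level indicators can be computed from paired score arrays follows; the function name and data shapes are our assumptions, not the study's analysis code:

```python
import numpy as np

def cohort_indicators(pre: np.ndarray, post: np.ndarray) -> dict:
    """Compute the 3 indicators discussed above: percent change in the
    cohort mean, percent shrinkage of the standard deviation, and the
    change in the cohort minimum score (arrays paired by student)."""
    return {
        "mean_gain_pct": 100 * (post.mean() - pre.mean()) / pre.mean(),
        "sd_shrink_pct": 100 * (pre.std(ddof=1) - post.std(ddof=1)) / pre.std(ddof=1),
        "min_change": float(post.min() - pre.min()),
    }
```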

Figure 1. Comparison of Pre- and Post-Course Survey of Understanding.

Figure 2. Comparison of Pre- and Post-Course Survey of Decision Making.

Figure 3. Comparison of Pre- and Post-Course Survey of Perception of Skill Usability.

Figure 4. Comparison of Pre- and Post-Course Survey of Confidence in Business Skills.

DISCUSSION

ACPE Standards require that pharmacy programs develop skills, abilities, and attitudes in addition to knowledge.1 The seminal work Learning That Lasts describes the need for individual development to produce competent performance and identifies 4 aims of education: developing thinking skills, making meaning, self-reflecting, and developing the capacity to perform.3 The proposed 2016 ACPE Standards refocus the academy’s attention on the assessment of educational outcomes rather than on foundational knowledge.5 The CTLA advises that assessment is integral to learning and that it must be developed in multiple modes and contexts, with explicit criteria, feedback, and self-assessment as components.6 Backward course design is also a reasonable approach to redesigning a course.6 The final element needed is a method of assessing skill development rather than knowledge acquisition.

The CAPE 2013 Outcomes focus heavily on skill development in this domain.4 This methodology demonstrates a new integrated approach to assessing student achievement in performing skills and may have use as a measure of teacher and institutional effectiveness. Validation of the instrument for such purposes will require additional research linking internal and external performance measures to these outcomes. The strategy provides a robust data set that demonstrates skill development better than traditional student course evaluations do and can potentially guide skill-based instruction. Effective quality control processes require that the assessments be matched to the outcomes being evaluated.

A key distinction for educators to consider when comparing this assessment strategy to others is that the course was designed for skill development; therefore, the relatively high level of performance in the course should not be a concern. Students were allowed to practice skills in class until they felt confident demonstrating their mastery. The summative assessment instruments were not administered until every student had been given 3 opportunities to practice: through presession homework assignments, in-class formative assignments, and group activities. The goal for this type of course is the demonstration of skill mastery, not the ranking of students by their test-taking ability.

Student work product data provided direct evidence of the students’ ability to perform the skills in question. The precourse and postcourse survey results indirectly demonstrated that students progressed along the continuum of success in mastering skills. Student work product performance on average was well above minimal mastery, defined as a score of 70%. Eighty-seven of 88 students who completed both surveys improved their understanding by an average of 74%. Furthermore, 87 of 88 students who completed both surveys improved their decision-making by an average of 75%, and 67 of 88 improved their perception that they would use these business skills postgraduation by an average of 10%. Lastly, 67 of 85 students who completed both surveys improved their confidence in their ability to manage a business, if forced to, by an average of 25%.

Typical student satisfaction surveys given after a course provide no data about individual student progress, which minimizes their utility for targeted remediation of students at risk of failure. While student satisfaction surveys remain a valuable tool for assessing the process of instruction, they fail to provide key performance indicators of success in developing and mastering skills. Our assessment strategy provides actionable data points that can be used to target individual student development, to assess instructional effectiveness for each skill domain, to focus course activity on the mastery of skills, and to align work product assessment deliverables with course and curriculum objectives and goals. The ability to triangulate the indirect assessment of skill development via the precourse and postcourse surveys with the direct evidence of actual work product and the increase in student confidence in their ability to perform the skills provides a multimodal assessment tool that explores the development of educational outcomes more effectively than any tool relying on a single perspective.

Caution needs to be exercised when interpreting the results of these surveys. If a student’s confidence precourse was unreasonably exaggerated, the beneficial outcome of instruction may have been having them comprehend how much they didn’t know. This could result in a lower postcourse score even though significant learning did take place. Outliers should be reviewed in detail separately to correctly interpret these results.

Inter-rater reliability issues were minimized because all grades were assigned by a single evaluator. However, because the lead author of this paper performed the evaluations, there was potential for bias toward a positive outcome. This was minimized by the use of tightly focused scoring guides and frequent formative activities that allowed students to learn from their failures. Nevertheless, this remains a weakness of the study that will require further analysis to eliminate. Posting the business plan assignment for public rating by working pharmacists did, however, provide evidence that the quality of the work product was well received by practicing pharmacists.

Further research needs to be done to validate the instrument as a measure of students’ skill development in the actual practice of these skills. This will require triangulation with actual work performance in advanced pharmacy practice experiences and workforce settings and the involvement of external stakeholders in the design, evaluation, and validation of the in-course assessments. The potential bias created with the lead author being the sole rater of student performance could be eliminated with external review of the grading. In addition, the scoring guides need to be validated as true rubrics that could be used by multiple raters effectively. A team of clinical faculty members at WUSOP has begun identifying the core clinical process skills to be developed in all of WUSOP’s pharmacotherapy modules using this strategy. The team’s goal is to investigate how the strategy can be used to provide longitudinal performance data on individual students’ clinical skills development as they progress through the pharmacotherapy sequence in the PharmD program.

SUMMARY

The particular example used was a third-year pharmacy management course, but the methodology would apply to any course seeking to develop complex skills. Presurvey and postsurvey analysis, skills-based instructional activities, and skill-targeted work product deliverables provided a set of triangulated assessment data showing that the educational outcomes of this outcomes-based course were achieved. This assessment strategy is based on sound assessment design principles and provides robust, multimodal evidence of student achievement in skill development not currently available using traditional student surveys.

Appendix

Appendix 1. Detailed Pre/Postsurvey for the Indirect Assessment of Management Skill Development

  • Received August 15, 2014.
  • Accepted December 15, 2014.
  • © 2015 American Association of Colleges of Pharmacy

REFERENCES

1. Accreditation Council for Pharmacy Education. Accreditation standards and key elements for the professional program in pharmacy leading to the doctor of pharmacy degree. Chicago, IL; 2011. www.acpe-accredit.org/pdf/FinalS2007Guidelines2.0.pdf. Accessed August 5, 2014.
2. Haladyna TM. Writing Test Items to Evaluate Higher Order Thinking. Needham Heights, MA: Allyn & Bacon; 1997.
3. Mentkowski M, et al. Learning That Lasts: Integrating Learning, Development, and Performance in College and Beyond. San Francisco, CA: Jossey-Bass; 2000.
4. Medina MS, Plaza CM, Stowe CD, et al. Center for the Advancement of Pharmacy Education (CAPE) Educational Outcomes 2013. Am J Pharm Educ. 2013;77(8):Article 162.
5. Accreditation Council for Pharmacy Education. Accreditation standards and key elements for the professional program in pharmacy leading to the doctor of pharmacy degree: draft standards 2016. Chicago, IL; 2014. www.acpe-accredit.org/pdf/Standards2016DRAFTv60FIRSTRELEASEVERSION.pdf. Accessed August 3, 2014.
6. Consortium for the Improvement of Teaching, Learning and Assessment. High School to College to Professional School: Achieving Educational Coherence Through Outcome-Oriented, Performance-Based Curricula. Final Report to the W. K. Kellogg Foundation. Milwaukee, WI: Alverno College Productions; 1992.
7. Wiggins GP, McTighe J. Understanding by Design. 2nd ed. London: Pearson; 2005.
8. Daugherty KK. Backward course design: making the end the beginning. Am J Pharm Educ. 2006;70(6):Article 135.
9. Desselle SP. The “management” in medication therapy management. In: Pharmacy Management: Essentials for All Practice Settings. 3rd ed. New York, NY: McGraw-Hill; 2012.
10. Alverno College. Approaches to Course Planning: Student-Focused Approach. Milwaukee, WI: Alverno College Productions; 2007.
11. National Institute for Learning Outcomes Assessment. Principles of assessment. www.learningoutcomeassessment.org/PrinciplesofAssessment.html. Accessed July 30, 2014.
12. Aleamoni LM. Student rating myths versus research facts from 1924 to 1998. J Pers Eval Educ. 1999;13(2):153-166.