Research Article | Instructional Design and Assessment

Developing an Assessment Process for a Master’s of Science Degree in a Pharmaceutical Sciences Program

Timothy J. Bloom, Julie M. Hall, Qinfeng Liu, William C. Stagner and Michael L. Adams
American Journal of Pharmaceutical Education September 2016, 80 (7) 125; DOI: https://doi.org/10.5688/ajpe807125
Campbell University College of Pharmacy & Health Sciences, Buies Creek, North Carolina

Abstract

Objective. To develop a program-level assessment process for a master’s of science degree in a pharmaceutical sciences (MSPS) program.

Design. Program-level goals were created and mapped to course learning objectives. Embedded assessment tools were created by each course director and used to gather information related to program-level goals. Initial assessment iterations involved a subset of offered courses, and course directors met with the department assessment committee to review the quality of the assessment tools as well as the data collected with them. Insights from these discussions were used to improve the process. When all courses were used for collecting program-level assessment data, a modified system of guided reflection was used to reduce demands on committee members.

Assessment. The first two iterations of collecting program-level assessment data revealed problems with both the assessment tools and the program goals themselves. Course directors were inconsistent in the Bloom’s Taxonomy level at which they assessed student achievement of program goals. Moreover, inappropriate mappings of program goals to course learning objectives were identified. These issues led to unreliable measures of how well students were achieving program-level goals. Peer discussions between course directors and the assessment committee led to modification of program goals as well as improved assessment data collection tools.

Conclusion. By starting with a subset of courses and using course-embedded assessment tools, a program-level assessment process was created with little difficulty. Involving all faculty members and avoiding comparisons between courses made obtaining faculty buy-in easier. Peer discussion often resulted in consensus on how to improve assessment tools.

Keywords
  • program assessment
  • graduate education
  • pharmaceutical sciences

INTRODUCTION

Program assessment has received increasing attention in higher education, and in pharmaceutical education in particular, for many years. A search of American Journal of Pharmaceutical Education archives for publications between 1990 and 2003 using the search terms “abilities-based assessing,” “abilities-based assessment,” “assessment outcomes,” “assess outcomes,” and “programmatic assessment” found 48 articles or notes and 116 abstracts.1 The report of the 2003-2004 American Association of Colleges of Pharmacy (AACP) Academic Affairs Committee described steps in the continuing call for and development of resources for schools and colleges of pharmacy to use in demonstrating program effectiveness. The report indicated that a useful assessment program incorporates results of institutional research to identify: “(1) evidence of effectiveness, and (2) indicators of needed change in a continuous quality improvement environment.”2

Program-level assessment has become increasingly common at US schools of pharmacy. A survey by Bouldin and Wilkin in 2000 found that only 29% of respondents had a formally adopted assessment plan in place.3 Many institutions relied on tools such as North American Pharmacist Licensure Examination (NAPLEX) results, although these do not allow for assessment of relevant clinical skills and do not provide information related to institution-specific program outcomes. In 2006, Kirschenbaum et al reported that 60% of responding schools had a formally adopted assessment program, although a large percentage still relied at least in part on NAPLEX or state licensure examination results (85% and 44%, respectively).4 The Accreditation Council for Pharmacy Education (ACPE) Standards for Doctor of Pharmacy (PharmD) Programs, adopted in 2007, place a higher emphasis on outcomes assessment, suggesting that a survey done today would show that most or all US schools of pharmacy have some sort of formally adopted assessment program.5

Anderson et al reviewed changes in the definition of assessment as it became more widespread, looking for similarities and themes. They concluded that although emphasis varied, authors consistently agreed that assessment consisted of a systematic and continuous process, emphasized student learning with the cornerstone being what students can do, and focused on improvement of educational programs.1 The last component is often the most difficult, with many programs failing to “close the loop” and use the information gathered through assessment to actually make improvements to the educational program.

Assessment of academic program effectiveness is becoming increasingly common at all levels, not just in professional programs. Legislatures demand evidence of effectiveness from public institutions as part of the justification for continued funding, while accrediting bodies are increasing the degree to which program assessment is part of accreditation standards.6,7 Parents and students are also interested in the return on their investment in an education. Therefore, there is increasing pressure to demonstrate the benefit of academic programs.

Faculty members are generally familiar with course-level assessment. They are comfortable with the tools used, such as tests, quizzes, homework, presentations, and papers, and they understand the role these types of assessment play in their courses. Faculty members interested in developing as teachers or improving student outcomes in their courses will explore ways to provide formative assessment to improve student learning, or summative assessment to assign student grades. The results of these assessments are collected with student learning in mind; however, this information is also analyzed to document faculty members’ effectiveness as teachers and for promotion and tenure dossiers.

There can be resistance among faculty members to participate in program-level assessment when such programs are first proposed. Those asking for the information, usually administrators and/or accreditors, may be viewed by faculty members as antagonistic to the academic program. Requests for such assessments may be perceived as a way to identify poorly performing faculty members or courses that may be eliminated. Additionally, program-level assessment seems further removed from faculty interests, and so can be perceived as extra work with no benefit to individual faculty members. It is not uncommon for faculty members to complain that they could be more effective with their courses if others would leave them alone to do their job. Despite the efforts of those in the assessment and curriculum design communities, there can be a disconnect in the minds of faculty members between their course and the program to which it belongs. The academic freedom that allows teaching a course as one sees fit does not mean there is no faculty responsibility to the program of which the course is a part. Failing to see that a course is one of many components of an academic program can make it hard for faculty members to understand how the success of the program and the success of their own course affect each other, and thus collecting program assessment data can seem an unwarranted burden.

The Campbell University College of Pharmacy & Health Sciences (CPHS) offers a master’s of science degree in pharmaceutical sciences (MSPS). The program was created before the collection of program-level assessment was routine at the institution. Consequently, for many years there was no mechanism for determining how well the curriculum was preparing students for careers in the pharmaceutical industry other than low-resolution indicators such as hiring rates after graduation. Although ACPE does not provide oversight for this program, educational standards such as program assessment and actionable information on program effectiveness are crucial to student recruitment and for critical program review and improvement. To have an evidence-based approach for curricular review, the Department of Pharmaceutical Sciences decided to create a system for program-level assessment. This report describes the process used in the development of a course-embedded assessment program, the initial stage of moving toward a culture of assessment.

DESIGN

The MSPS program was initiated in 1999 and the first class graduated in 2001. The 2-year degree typically enrolls 40-50 students at a time, with 20-25 students in the first and second years. Key programmatic components include core coursework taken by all MSPS students; laboratory-based coursework in a focus area or “track” chosen by students during the application process (industrial pharmacy, pharmaceutical analysis, pharmacology, or bioprocessing/biotechnology); and a capstone research project. In the past two years, the department began offering an alternative multidisciplinary course of study that substitutes additional laboratory-intensive courses for the research project. The typical class size ranges from 2 to 25 students, depending on whether a course is core and taken by all students or track-specific and taken by a subset. These low numbers ensure each student receives the academic support and mentoring that one would expect at a teaching-focused university.

The department consists of 20 faculty members who teach in a variety of biomedical programs at Campbell University, including pharmacy, physician assistant, and osteopathic medicine. All faculty members are involved with the MSPS program, albeit in different ways, including teaching courses and/or laboratories, participating in student research committees, mentoring students in the laboratory, and advising. The program is overseen by departmental curriculum and assessment committees, which meet regularly to evaluate and revise the curriculum, and to monitor student achievement. Collective work of these committees revealed that a formal process for evaluating student progress toward program goals was lacking and that a system was needed to assess whether students were advancing in their knowledge and skills as they progress from matriculation to graduation. The department decided to develop a program-level assessment process for the MSPS degree to institute a formal method for collecting quantitative information that would inform the assessment committee on student progress toward the overall program goals.

To achieve and sustain faculty commitment, the program assessment process needs to be strongly supported by upper administration. In this case, the department chair provided the vision and support to sustain this project, including the necessary internal and external resources. The first step was the development of program-level goals appropriate for the MSPS program. An ad hoc committee made up of faculty members actively teaching in each track met to review learning outcomes of all core courses and identify overarching themes that would apply to all students regardless of focus area. These themes were used to define program goals (Appendix 1), which describe general areas of development in students’ knowledge and skills as they move through the academic program. Although the goals were initially based on a review of core courses, track-specific courses were included as they contributed to student mastery of program goals and were a key component of the MSPS program. A day-long departmental meeting introduced the importance of and need for program assessment. A major portion of the day was set aside for small group discussion to reach consensus and finalize the program goals.

The next step was for course directors to map each of their course objectives to the program goals and to identify the highest level of Bloom’s Taxonomy at which the objectives were taught and assessed. The map allowed the committees to determine areas of redundancy and gaps in coverage related to program goals. Identifying the Bloom’s Taxonomy level allowed the committees to confirm that teaching and assessment expectations were aligned and provided a means to determine growth in students’ mastery as they moved through the program.
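To illustrate how such a map supports this kind of review, the sketch below represents a minimal curriculum map in Python and flags program goals with no coverage or with possible redundancy, along with the highest Bloom’s level reached for each goal. The course numbers, goal labels, and Bloom’s levels shown are hypothetical and do not reproduce the department’s actual map or template.

```python
# Minimal sketch of a curriculum map: each course objective is mapped to one or
# more program goals and the highest Bloom's level at which it is taught and
# assessed. Course numbers, goals, and levels below are hypothetical.

BLOOM_ORDER = ["knowledge/comprehension", "application/analysis", "synthesis/evaluation"]

curriculum_map = {
    "PHSC 501": [("Goal 1", "application/analysis"),
                 ("Goal 3", "knowledge/comprehension")],
    "PHSC 512": [("Goal 1", "synthesis/evaluation"),
                 ("Goal 4", "application/analysis")],
    "PHSC 530": [("Goal 1", "knowledge/comprehension")],
}

program_goals = ["Goal 1", "Goal 2", "Goal 3", "Goal 4", "Goal 5", "Goal 6"]

# Collect, for each program goal, the courses addressing it and the level used.
coverage = {goal: [] for goal in program_goals}
for course, mappings in curriculum_map.items():
    for goal, level in mappings:
        coverage[goal].append((course, level))

# Report gaps (no coverage), possible redundancy, and the highest level reached.
for goal in program_goals:
    courses = coverage[goal]
    if not courses:
        print(f"{goal}: GAP - no course addresses this goal")
        continue
    highest = max(courses, key=lambda c: BLOOM_ORDER.index(c[1]))[1]
    note = " (possible redundancy)" if len(courses) > 3 else ""
    print(f"{goal}: {len(courses)} course(s), highest level = {highest}{note}")
```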

Course directors were next asked to identify potential embedded assessment tools for the program goals mapped by their respective course(s). Potential tools included examination questions, problem-solving activities, laboratory reports, oral presentations, laboratory practicals, written evaluations of scientific articles, and regulatory documents. In each iteration of the assessment process, course directors were asked to identify tools from existing summative assessment methods that could be used to assess program as well as course goals. The goal was to minimize additional workload and thereby increase faculty participation and compliance. To further facilitate the process, course directors were provided with a template in which to record information requested by the assessment committee.

At the end of the semester, the assessment committee met with participating course directors for a debriefing of the semester’s results, which involved a discussion of assessment tools used, data collected, and usefulness of both. Based on the results of the initial review iteration, the committee amended the process. Thereafter, course directors were to provide assessment tools with clear quantitative and reflective components. The use of rubrics where appropriate was highly encouraged. The committee also provided course directors a list of Bloom’s action verbs to help with identification of taxonomy levels appropriate to their assessment tools. Taxonomy levels were paired as knowledge/comprehension, application/analysis, and synthesis/evaluation. Out of the iterative process, a template was created to simplify submission of assessment tools and results.

EVALUATION AND ASSESSMENT

The purpose of program-level assessment is to measure cumulative effectiveness in reaching MSPS program goals, not to evaluate faculty teaching effectiveness or to compare one course with another. Prior to the first iteration of the process, all MSPS course directors were asked to map each course objective to the relevant program goal(s). They were then prompted to propose course-embedded tools for assessing progress toward program goals at the end of the semester by answering the following: (1) which program goal(s) the course objective addressed; (2) at which Bloom’s Taxonomy level the objective was taught and assessed for grading; and (3) which primary assessment tools were used for grading each course learning objective and whether they could be adapted for assessing student mastery of program goals.

The proposed tools were evaluated by members of the assessment committee. Course directors whose tools were deemed suitable by at least three of the six committee members were asked to collect assessment data for the first iteration in the fall 2012 semester. Thirteen MSPS courses were offered that semester, and seven were selected after proposed program assessment tools were reviewed. At the end of the semester, directors of these seven courses provided the committee with the tools used along with instructions given to the students prior to the use of the tools. They also provided any answer keys, scoring guidelines or rubrics used in the scoring, and anonymized scores earned by students for each tool. Class averages were calculated and summarized without course identifiers for reporting to the department.

Minimum academic performance for retention and graduation is a cumulative GPA of 3.0, the equivalent of 80% on a grading scale. Since the purpose of the assessment was to determine achievement of program goals, the committee decided that program goal achievement should require a minimum class average of 80%. However, it was difficult to conclude whether each program goal was achieved using the data collected from multiple courses. Program goals were not evenly covered (eg, goal 5c was assessed in five courses while goal 6a was assessed in only one course). Another complication was that some course directors did not collect data for all of the program goals to which they had mapped their course objectives.
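The sketch below illustrates the kind of pooling described above: per-goal class averages are computed across the courses assessing each goal and compared against the 80% threshold, while also showing how unevenly goals may be covered. The course identifiers and scores are hypothetical and are not the department’s data.

```python
# Pool course-embedded assessment results by program goal and flag goals whose
# pooled class average falls below the 80% threshold. Courses and scores are
# hypothetical; in practice, results were reported without course identifiers.

from statistics import mean

# scores_by_goal[goal] -> list of (course, class average on the tool, percent)
scores_by_goal = {
    "Goal 5c": [("Course A", 88.0), ("Course B", 76.5), ("Course C", 91.2),
                ("Course D", 84.0), ("Course E", 79.8)],
    "Goal 6a": [("Course F", 82.5)],
}

THRESHOLD = 80.0  # minimum class average, corresponding to a 3.0 cumulative GPA

for goal, results in scores_by_goal.items():
    pooled = mean(score for _, score in results)
    status = "met" if pooled >= THRESHOLD else "NOT met"
    print(f"{goal}: assessed in {len(results)} course(s), "
          f"pooled average = {pooled:.1f}% ({status})")
```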

The assessment committee met with each course director to discuss each assessment tool to verify the fit between the tool and the program goal being assessed and the targeted Bloom’s Taxonomy level. Based on these discussions, the committee found that descriptions of program goals and the Bloom’s Taxonomy levels were not clear to course directors. Therefore, a clearer description of program goals and examples of suitable assessment tools were created and given to course directors to improve development of future assessment tools by better aligning the teaching and assessment expectations to a level appropriate for MSPS students. The committee also strongly encouraged creation and use of grading rubrics as tools to assess writing, oral presentation, or laboratory proficiency in subsequent semesters. Furthermore, the committee agreed to wait to provide quantitative feedback to the curriculum committee until program-level assessment data became more reliable. Finally, a new data collection template that included more detailed instructions on how to collect program-level assessment data was generated by the committee and provided to each MSPS course director.

The second iteration of assessment was performed using spring 2013 semester courses. Assessment data were reviewed from seven of 11 offered courses, using the same process as in the first iteration. Most course objectives were still reported as having low Bloom’s Taxonomy levels: 55% were at the knowledge/comprehension level, 33% at application/analysis, and 12% at the synthesis/evaluation level. This led to a revision of the mapping template and a request for course directors to verify they had correctly identified the highest Bloom’s Taxonomy level at which the course was routinely taught. The taxonomy data were shared with the curriculum committee to allow that group to review the level at which MSPS courses were being taught.
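For illustration only, the short sketch below shows how such a distribution can be tallied from the reported objective-to-level mappings; the objective names and levels are hypothetical and do not reproduce the spring 2013 data.

```python
# Tally course objectives by paired Bloom's Taxonomy level and report the
# percentage falling in each pair. The (objective, level) records are hypothetical.

from collections import Counter

PAIRED_LEVELS = ["knowledge/comprehension", "application/analysis", "synthesis/evaluation"]

reported_mappings = [
    ("Objective 1.1", "knowledge/comprehension"),
    ("Objective 1.2", "application/analysis"),
    ("Objective 2.1", "knowledge/comprehension"),
    ("Objective 2.2", "synthesis/evaluation"),
    ("Objective 3.1", "knowledge/comprehension"),
]

counts = Counter(level for _, level in reported_mappings)
total = len(reported_mappings)
for level in PAIRED_LEVELS:
    pct = 100 * counts[level] / total
    print(f"{level}: {counts[level]} objective(s), {pct:.0f}%")
```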

The third iteration of assessment, with alterations based on the feedback described above, was conducted in fall 2013 and included 12 courses. This semester’s committee debriefings with course directors revealed that some course objectives were mapped to inappropriate goals. In addition, program goals had been missed during course objective mapping. For example, some courses mapping to mastery of chemical or biological concepts (goal 2) were found to use those concepts but not teach them; these mappings were therefore deleted. The committee realized that laboratory notebook requirements to conform to a particular format seemed to fit the regulatory component of our MSPS program goal 5 (understanding of compliance), but mapping to this goal was missing in all laboratory courses. As a result, course directors using strict laboratory report guidelines were advised to add program goal 5 to their map and thus contribute to assessment of that program goal. Comments from multiple course directors indicated some degree of frustration with the program goals as written, particularly with subgoals. These subgoals had been included to help clarify intent of program goals, but course directors found them cumbersome and sometimes difficult to distinguish from one another. As a consequence, the assessment committee recommended, and the department approved, changes to simplify program goals by turning the subgoals into illustrations of the meaning of each goal rather than independent items themselves.

Another observation was that Bloom’s Taxonomy levels were often over- or underestimated when program goals were being assessed. For example, examination questions requiring only recall of information from course lectures were sometimes used to assess a goal mapped at the synthesis/evaluation level. The opposite problem was also occasionally found, where a question requiring evidence-based judgment or synthesis was used to assess a goal mapped at a lower level. In addition, course directors often did not separate out one specific goal for an assessment tool. For example, one course director assessed goal 1 using several questions, one of which was a component of a multipart question. Even though only that one component actually related to goal 1, the score for the entire question was provided as assessment data for the targeted program goal. Regardless of the type of problem, peer discussion between the committee and course directors often resulted in agreement on how to improve the alignment of assessment tools at the appropriate taxonomy level with course-based program expectations.

Based on experiences learned from three iterations of assessment, the assessment committee created a “unified assessment documentation” template. This document dramatically reduced the number of files created by each course director, and thus the number sent to the assessment committee for review, by providing space for a course director to provide all information related to the planned program assessment. This unified document included space for the identification of targeted program goals, highest Bloom’s Taxonomy level at which assessment would be done, the actual assessment tool(s) and instructions for their use, and any scoring instruments. All of the supporting documentation was merged into a Word file by making use of that program’s ability to create pull-down menus, useful for choice-delimited components such as targeted program goals and Bloom’s Taxonomy levels, and expanding text boxes, which were useful for containing open-ended components such as assessment tools and instructions for their use. Navigation within the document was simplified by using internal hyperlinks to connect each section back to the table of contents. Copies of this template are available from the authors upon request.

Because of the near doubling of courses being used for program assessment in the third iteration, the committee decided to reduce the use of individual meetings with course directors after the semester to discuss assessment results. As a result, only course directors involved for the first time in program-level assessment and those self-identified as having difficulty with the assessment met with the committee following the semester’s end. To continue the beneficial reflection that these discussions had generated, a major addition to the assessment documentation was a request to justify the use of the tool for the program goal and taxonomy level. In addition, postassessment reflection on the process was prompted by guiding questions in the last section of the unified assessment documentation template (Appendix 2).

DISCUSSION

This report describes the development of a program-level assessment process for an MSPS program using an iterative process of data collection, peer evaluation, and discussion. This process led to a mechanism for collecting course-embedded quantitative summative data to evaluate students’ ability to meet academic program goals aimed at effectively preparing them for careers in the pharmaceutical industry and related enterprises. To implement a successful program-level assessment system that embraces continual improvement, faculty members must appreciate the need for and usefulness of the assessment. Winning faculty acceptance and self-motivated participation must be viewed as a long-term process requiring multiple points of engagement and debate over several years. As designed, the assessment process is reminiscent of the Shewhart-Deming Plan-Do-Study-Act continuous improvement cycle, a long-term, ongoing process.8

The most significant barrier to implementation of a program-level assessment process was, and remains, faculty buy-in. Faculty members are busy, and program assessment is yet another demand on their time. Using course-embedded evaluation tools made faculty buy-in easier because little extra work was required on their part. Boyce noted that integrating and embedding assessment tools should lead to improved cost-effectiveness of assessment activities.9,10 This approach does not eliminate the difficulties of collecting program assessment data using course-embedded tools. However, it does seem to be a reasonable first step in developing a culture of assessment and establishing the habit of thinking beyond one’s own courses.

The first assessment iteration was performed as a trial run and involved only half of the classes offered during the semester. The trial run provided both faculty members and the assessment committee significant learning opportunities. The postassessment review process led to changes in goal mapping, with both additions and deletions being made to multiple courses. Course directors often overstated the Bloom’s Taxonomy level at which they taught and/or assessed. This was particularly common with assessment tools aimed at interpretation (goal 1), where many tools claiming to assess at analysis or above actually assessed at recall or comprehension. Course directors also had a difficult time designing tools that assessed only one goal at a time. For example, tools chosen to assess mastery of advanced theoretical principles (goal 2) often also involved critical interpretation of scientific data (goal 1). While assessing both is necessary, a combined tool made it difficult to determine how well each individual goal was being met. Peer discussions resulted in assessment tools more narrowly directed toward an individual goal and better aligned to suitable Bloom’s Taxonomy levels.

The unified assessment documentation template was a major improvement based on feedback from multiple assessment iterations. Instead of having separate documents created for identification of program goals met by a course, the actual assessment tools, instructions for use of the tools, and faculty reflection on successes and opportunities for improvement, the template integrated all the documentation required to undertake and report the assessment. The electronic unified assessment documentation template has made planning and reporting program assessment easier.

As we finished our third iteration, we began seeing more faculty buy-in. Consensus on how to improve goal targeting and taxonomy level assignment was reached in a spirit of improvement and collaboration. The same approach used in peer review of manuscripts and grant applications can be used effectively in improving course and/or program assessment. More faculty members provided self-reflections and improvement plans that demonstrated attention to the assessment data. The committee is no longer involved with face-to-face peer reviews of every course director each semester because there is a much clearer understanding of what information needs to be collected, how measurements are to be made, and how to work toward improvement. We anticipate that despite less frequent meetings between course directors and the committee, this self-learning exercise will help course directors improve their future teaching and collect useful program assessment data that allow coordination with the curriculum committee to close the assessment loop.

The current plan is to extend the program assessment system to our bachelor’s of science in pharmaceutical science (BSPS) program. We anticipate that implementation of program assessment for the BSPS will be much faster and easier because of lessons learned with the MSPS program. Because this approach focuses on course-embedded assessment tools, it is difficult to generalize these tools to programs outside CPHS. However, the iterative process used illustrates an approach that can be adapted and applied at any institution or academic program.

CONCLUSION

Developing an assessment program provides not only valuable information on program effectiveness and areas of improvement, but also a rich environment for faculty development. An iterative process that started small and grew over time helped create consensus on areas for improvement of the assessment program, with reflection and discussion among peers proving to be potent drivers. Peer discussion often resulted in consensus on how to improve goal targeting and taxonomy level of assessment tools. The validity and reliability of data provided from multiple courses and course directors is improved when all faculty members have a consistent understanding of, for example, Bloom’s Taxonomy and what distinguishes one level from another.

ACKNOWLEDGMENTS

The authors would like to acknowledge the valuable contributions of Dr. Thomas Holmes as a member of the original assessment committee and Dr. Emanuel Diliberto Jr. for his leadership and support of this initiative. An abbreviated version of this manuscript was presented in poster format at the 2014 AACP Annual Meeting in Grapevine, Texas.

Appendix 1. Initial Master’s of Science Degree in Pharmaceutical Sciences (MSPS) Program Goals

Upon successful completion of the MSPS program, a student shall be able to:

  • 1. Critically interpret scientific data

    • a) Read and evaluate the scientific merits of research papers

    • b) Document and analyze results of his/her scientific investigation

  • 2. Gather and use information from the primary literature to prepare a scientific investigation addressing a specific problem or hypothesis

  • 3. Demonstrate mastery of advanced chemical and/or biological theoretical principles

  • 4. Effectively communicate scientific ideas in written and oral formats

    • a) Organize information for presentation to various target audiences

    • b) Ask and answer relevant questions by paying attention to details

  • 5. Demonstrate technical proficiency with relevant instrumentation and methodologies

    • a) Choose appropriate instrumentation to execute a scientific investigation

    • b) Operate scientific instrumentation correctly

    • c) Interpret the results generated by scientific instrumentation and draw reasonable conclusions

  • 6. Demonstrate understanding of the pharmaceutical compliance process

    • a) Explain the effect a regulatory environment has on day-to-day laboratory activities

    • b) Generate a functional document describing a validation procedure

These competencies might be considered basic for scientific education at any level in the area of pharmaceutical sciences. Therefore, the level at which these competencies are assessed should be directly related to the current coursework offered at the MS level.

Appendix 2. Guiding Questions for Postassessment Reflection

  • 1) What do your assessment results say about student achievement of program (NOT course) goals?

  • 2) What changes (if any) do you plan to make in the course the next time it is offered?

  • 3) Have you changed your ideas about which program goal(s) this course addresses? If so, why?

  • 4) Remember that to avoid teaching to the test, when test questions are part of your assessment tool you should not use the exact same assessment questions every time a course is offered. Bearing that in mind, do you plan on making significant changes in the assessment tools you use the next time? If so, what kind of change?

  • Received June 23, 2015.
  • Accepted September 23, 2015.
  • © 2016 American Association of Colleges of Pharmacy

REFERENCES

  1. Anderson HM, Anaya G, Bird E, Moore DL. A review of educational assessment. Am J Pharm Educ. 2005;69(1):Article 12.
  2. Boyce EG, Maldonado WT, Murphy NL, et al. Building a process for program quality enhancement in pharmacy education: report of the 2003-04 Academic Affairs Committee. Am J Pharm Educ. 2004;68(3):Article S7.
  3. Bouldin AS, Wilkin NE. Programmatic assessment in US schools and colleges of pharmacy: a snapshot. Am J Pharm Educ. 2000;64(4):380-387.
  4. Kirschenbaum HL, Brown ME, Kalis MM. Programmatic curricular outcomes assessment at colleges and schools of pharmacy in the United States and Puerto Rico. Am J Pharm Educ. 2006;70(1):Article 8.
  5. Accreditation Council for Pharmacy Education. Accreditation standards and guidelines for the professional program in pharmacy leading to the doctor of pharmacy degree, 2007. https://www.acpe-accredit.org/standards/. Accessed June 2015.
  6. Kelderman E. Public colleges endorse Obama plans on affordability and accountability. Chron Higher Educ. August 22, 2013.
  7. Accreditation Council for Pharmacy Education. 2016 accreditation standards and key elements for the professional program in pharmacy leading to the doctor of pharmacy degree. https://www.acpe-accredit.org/standards/. Accessed June 2015.
  8. Deming WE. Out of the Crisis. Cambridge, MA: MIT Press; 1986.
  9. Boyce EG. Program assessment: enough or too much? Am J Pharm Educ. 2013;77(9):Article 185.
  10. Boyce EG. Finding and using readily available sources of assessment data. Am J Pharm Educ. 2008;72(5):Article 102.