COMMENTARY

Identifying High-Impact and Managing Low-Impact Assessment Practices

Kristin K. Janke, Katherine A. Kelley, Beth A. Martin, Mary E. Ray and Burgunda V. Sweet
American Journal of Pharmaceutical Education September 2019, 83 (7) 7496; DOI: https://doi.org/10.5688/ajpe7496
Kristin K. Janke, University of Minnesota College of Pharmacy, Minneapolis, Minnesota
Katherine A. Kelley, The Ohio State University, Columbus, Ohio
Beth A. Martin, University of Wisconsin School of Pharmacy, Madison, Wisconsin
Mary E. Ray, University of Iowa College of Pharmacy, Iowa City, Iowa
Burgunda V. Sweet, University of Michigan College of Pharmacy, Ann Arbor, Michigan

Abstract

Those in pharmacy education who are tasked with assessment may be overwhelmed by deadlines, data collection, and reporting, leaving little time to pause and examine the effectiveness of their efforts. However, assessment practices must be evaluated for their impact, including their ability to answer important questions, use resources effectively, and contribute to meaningful educational change. Often an assessment is implemented, but attention is diverted to the next assessment before the data from the first can be fully interpreted or used. To maximize the impact of assessment practices, tough and uncomfortable decisions may need to be made. In this paper, we suggest an approach for examining and making decisions about assessment activities and provide guidance on building high-impact assessment practices, evolving or “sunsetting” low-impact assessment practices, and managing mandated assessment.

Keywords
  • assessment
  • programmatic assessment
  • curricular assessment

Since the 2009 American Association of Colleges of Pharmacy Curricular Summit, curricular transformation has been openly encouraged and pursued among colleges of pharmacy in the United States.1 The release of the Center for the Advancement of Pharmacy Education 2013 Educational Outcomes2 and the Accreditation Council for Pharmacy Education’s Standards 20163 added to the fervor for curricular redesign and evolution. With this aim in mind, decisions for change should be based on evidence, and transformation in education should be guided by assessment.

This work is not easy. While resources have been developed to support formative assessment strategies4 and guide the development of assessment leads,5 colleges and schools may nonetheless struggle to develop their assessment operations. Ever-increasing mandates for evidence of program effectiveness have driven demands for more metrics and benchmarks. Beyond pharmacy accreditation requirements, universities impose reporting obligations of their own. Collectively, there is a seemingly endless list of required data that sometimes makes the work of assessment feel like little more than checking off boxes.

Pharmacy faculty members, committees, and administrators should integrate assessments responsibly, use continuous quality improvement processes, and work to establish a culture of assessment within pharmacy schools.6 Yet, this is easier said than done. Intellectually, we know that gathering and managing reams of data does not create the culture we seek. And though technology may seem like an answer to managing the workload, it may simply make it easier to ask faculty members to supply ever-increasing levels of minutiae. As assessment enterprises grow and mature, there may be the appearance of progress because people are working, analyses are conducted, and reports are produced. However, despite the flurry of activity, is it possible we are still not answering important questions that help us drive evidence-based improvement and change? Has assessment practice become, as Gilbert suggests, “similar to surgeons patting themselves on the back for taking out tumors without checking to see if their interventions are affecting mortality rates”?7

As organizations, pharmacy schools are susceptible to getting caught up in the assessment routine, churning through endless cycles of data collection and reporting. Thoughtful decisions must be made to foster sustainable and impactful assessment and to aid the movement towards a culture of assessment. The academy needs well-intentioned and well-designed assessments with attention given to implementation, fidelity, and quality improvement. We should be asking how we can refine and improve assessment processes. To aid in asking important questions to ultimately drive positive educational change, we outline an approach here for examining and making decisions about assessment activities. Given a recent commentary in higher education assessment that calls for clarity in the language of assessment,8 we also provide working definitions for the concepts introduced.

Ask Meaningful Questions

Colleges and schools need to strive toward assessment that is meaningful, valuable, and pragmatic.9 Simply gathering data is not sufficient. Good assessment begins with asking pivotal questions. As with good research questions, assessment that makes a difference is based on questions that hold meaning for the people involved in the process. These criteria apply to assessments conducted at the individual (eg, student) or program level, as well as those that are formative or summative. If a task is not serving a purpose, we should reconsider whether it needs to be done.

Identify Assessment Practices

Many routines are used to help accomplish the work of assessment. However, these routines do not all lead to meaningful change or improvement. A mechanism is needed for evaluating assessment activities and making decisions that allow an assessment program to be both effective and sustainable. We believe this involves isolating and naming our assessment practices. We posit that an “assessment practice” is a process, procedural sequence, or system that accomplishes a specific goal related to curricular or programmatic improvement; uses specific strategies, tools, and techniques; and is implemented and repeated with intention and discipline.

Having both a strategic approach and effective tools is key. We can have a good strategy but lack effective tools for delineating actionable data. We can also have good tools but lack a strong strategy for using them. For instance, an annual “data day” might employ the strategy of convening stakeholders, including external parties, for an onsite event to review results from curricular assessments. Tools or techniques, such as data placemats, may then be used to encourage interaction with the data and their interpretation.10 When used intentionally, routinely, and effectively, a “data day,” with all of its associated strategies and tools, is an assessment practice. In the frenzy of everyday operations, assessment practices may not be regularly reviewed; however, these practices are critical to the impact of assessment. So what makes an assessment practice impactful?

Build High-Impact Assessment Practices

Identifying, listing, and describing assessment practices is not enough. We need a lens through which to evaluate our investment in this work. The concept of high-impact practices (HIPs) for student learning originated in undergraduate education. Specifically, HIPs have been defined as “an investment of time and energy over an extended period that has unusually positive effects on student engagement in educationally purposive behaviors.”11 Examples of HIPs include capstone experiences, service learning, undergraduate research experiences, and internships. By their nature, HIPs involve interacting with faculty members and peers about substantive matters, experiencing diversity, reflecting on and integrating learning, responding to frequent feedback, and discovering the relevance of learning through real-world applications.12,13

Just as there are high-impact educational practices associated with student learning, we suggest that assessment within our colleges and schools requires examination with an eye to identifying high-impact assessment practices (HIAPs). The notion of HIAPs would allow us to sort through assessment routines, looking for excellence. Building from the undergraduate literature,11 a HIAP would involve an investment of time and energy in the repeated implementation of an intentional set of assessment strategies and tools, which results in substantive, positive effects on stakeholder engagement and educationally meaningful, relevant improvements in curricula, the student experience, or the program.

In short, HIAPs answer critical questions and motivate change. A HIAP encourages the use of data for dialogue and debate and ultimately affects stakeholders (eg, students, faculty members) and/or the curriculum or program. Its implementation results in discernible, transformative, and memorable action being taken. While one-time or initial use of an assessment practice provides good input on its potential and promise, experience with the practice (ie, repeated use) is needed to determine its impact. In addition, an assessment practice’s utility must be judged in context: a collection of strategies and tools that succeeds in one environment may or may not work in another.

As an example, a key question may be whether a new curriculum requires a reasonable and appropriate student workload. A HIAP might focus on student workload monitoring. Several data sources (tools) might be utilized, such as faculty estimates of study time (assessed via survey tool) and student documentation of actual out-of-class time (logged by a sample of students). The assessment practice would also include the strategies by which data were reviewed (eg, an ad hoc “workload committee” evaluates the information). Decisions can then be made to “right size” workload to minimize peaks and troughs within a single semester. As needed, repeated cycles of data collection and use would be conducted, allowing for improvement and progress on the variable under consideration (ie, student workload).
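As a purely illustrative sketch of how such monitoring data might be compared (not a description of any program’s actual tooling), the comparison of faculty estimates against student logs could look like the following; the course names, hours, and target band are all hypothetical.

```python
# Hypothetical sketch of the workload-monitoring comparison described above.
# All course names, hours, and thresholds are invented for illustration.

faculty_estimates = {  # week -> estimated out-of-class hours per course
    1: {"Pharmacotherapy": 4.0, "Calculations": 3.0},
    2: {"Pharmacotherapy": 6.0, "Calculations": 5.0},
    3: {"Pharmacotherapy": 9.0, "Calculations": 8.0},  # exam-week pile-up
}
student_logged_hours = {1: 8.5, 2: 12.0, 3: 21.5}  # mean hours from student sample

TARGET_LOW, TARGET_HIGH = 8.0, 16.0  # hypothetical "reasonable workload" band

for week in sorted(student_logged_hours):
    estimated = sum(faculty_estimates.get(week, {}).values())
    logged = student_logged_hours[week]
    if logged > TARGET_HIGH:
        flag = "PEAK: consider redistributing deliverables"
    elif logged < TARGET_LOW:
        flag = "TROUGH"
    else:
        flag = "within target band"
    print(f"Week {week}: estimated {estimated:.1f} h, logged {logged:.1f} h -> {flag}")
```

The same tabulation scales to a full semester; what makes it a practice, rather than a one-off analysis, is that a body such as the workload committee reviews the flagged weeks on a regular cycle and acts on them.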

As any HIAP is used, it may need to evolve to increase its utility (eg, refining measures, strengthening participation, employing triangulation, addressing reliability issues, experimenting with reporting methods, scaling up stakeholder involvement). For example, it might be determined that measures of student stress are needed, along with hours spent, in order to more fully appreciate and interpret the effects of student workload.

A common HIAP is identifying required learning assessment activities at strategic places in the curriculum (eg, pre-APPE). This collection of assessments can do “double duty” by providing program-level data on the success of the curriculum while also identifying struggling students so that assistance and remediation can be provided. As an example, these assessments could be conducted at the end of each didactic year and might include a 50-item written examination and performance-based assessments that measure program-level learning outcomes. When faculty members and students are actively involved in interpreting the data and meaningful educational changes result, this practice may be considered high impact.
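To make the dual use concrete, here is a minimal, hypothetical sketch (invented scores and cutoff, not any program’s actual data handling) in which a single set of examination results feeds both a program-level summary and an individual watch list:

```python
# Hypothetical sketch of the "double duty" idea: one end-of-year dataset
# yields both a program-level summary and a student watch list.
# Scores and the referral cutoff are invented for illustration.
from statistics import mean

scores = {"Student A": 82, "Student B": 47, "Student C": 74, "Student D": 58}
REFERRAL_CUTOFF = 60  # hypothetical threshold for offering assistance

# Program-level view: how is the cohort doing on this outcome?
print(f"Cohort mean: {mean(scores.values()):.1f}")
at_or_above = sum(s >= REFERRAL_CUTOFF for s in scores.values()) / len(scores)
print(f"Proportion at or above cutoff: {at_or_above:.0%}")

# Student-level view: who may need assistance or remediation?
watch_list = sorted(name for name, s in scores.items() if s < REFERRAL_CUTOFF)
print("Refer for follow-up:", ", ".join(watch_list))
```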

Evolve or Sunset Low-Impact Assessment

When assessment practices fail to hit the mark, we should routinely determine what went wrong. Best practice would dictate that assessing our assessments for their impact be part of the process. It may be possible to evolve assessment practices to be more effective, perhaps even making them high impact. Tables 1-3 build from the available literature to assist committees or administrators in self-evaluating impact and identifying methods to elevate impact further.

Table 1.

Design- and Development-Related Questions for Committees and Administrators to Use in Evaluating the Impact of an Assessment Practice

Table 2.

Implementation-Related Questions for Committees and Administrators to Use in Evaluating the Impact of an Assessment Practice

Table 3.

Evaluative Questions for Committees and Administrators to Ask When Examining the Impact of an Assessment Practice

A lack of impact may be attributable to a variety of causes. Perhaps data are collected but not shared publicly with relevant stakeholders. Maybe there is insufficient dialogue to allow solid and meaningful data interpretation (eg, how much weight do we give this finding? what additional evidence is needed to better understand this issue?). Maybe there is not enough discussion and debate about the data’s implications and the options for change (eg, what alterations could inch us toward a stronger outcome?). Perhaps the measure is not as sensitive or specific as we need it to be.

Raising the “impact” of an assessment practice may be possible. For instance, a change in the instrument or the method of administration may be needed. A proper analysis may also be lacking; performing a more targeted sub-analysis may provide us with the information we need. A technique (eg, student focus groups) may not be meeting the desired intent (eg, quality input on the curriculum) and may need redirection, replacement, or supplementation with a new technique or tool. Perhaps the critical part that is missing is consistent closure of the assessment loop; procedures can be put in place to reevaluate and ensure that data are shared, discussed, interpreted, and acted upon.

Salvaging a low-impact practice is not the only option. Assessment practices can also be retired. In particular, we must be careful about assessing that which we have neither the will nor the ability to change.14 If the initial question prompting the assessment has been answered, the practice may no longer be needed and can be retired or “sunset.” For example, if student workload in the curriculum has been optimized after several years of monitoring and adjustments, the workload assessment practice can be sunset. In Good to Great, Jim Collins argues that strong organizations have a “stop doing” list.15 It takes courage to channel resources differently, and it takes disciplined people to make an honest and diligent effort to understand the current reality and confront it through dialogue and debate (not blame or coercion).15

Sunsetting an assessment practice can be an uncomfortable decision. Many educators are understandably interested in preserving the possibility of examining historical data; they anticipate that at some point they may want to look for a trend, demonstrate improvement, or document growth. However, the willingness to retire or revise low-impact assessments is essential to allow for investment in other key assessment areas. In addition, we must be sensitive to the burden that assessment practices place on students, faculty members, and staff members, ensuring we are intentional and strategic in what we do and how we use the resulting data. The utility and cost-benefit of an assessment must be considered. To help faculty members let go of a low-impact assessment, asking them how the data will be used going forward can be helpful. If the answer is “We have always collected it” or “We may need it,” then administrators or other faculty members must convince them that it is okay to let those measures go. Creating a mechanism for warehousing the item (or technique) in case it needs to be instituted again in the future may help some faculty and staff members accept sunsetting assessment practices.

Manage Mandated Assessments

There are fundamental differences between practices carried out for accountability (eg, annual monitoring of licensure examination scores) and those carried out for improvement. Pharmacy educators have a responsibility to the Department of Education and to the public to show accountability for what we say we do. However, balancing assessment for accountability with assessment for improvement is necessary. Assessment for improvement helps us do what we are accountable for doing even better and allows us to reach a higher bar. Fundamentally, we feel more invested in the outcome, and are more motivated to make changes, when we answer questions with local meaning that can lead to real impact. Keeping the purposes of these two types of assessment front and center can help prevent us from becoming so caught up in required assessments for accountability that no time or energy is left for assessing what matters most to pharmacy programs.

Several strategies can be used to manage the time and effort spent on mandated assessment. One approach is to spend just enough time on assessment for accountability to meet accreditation requirements (eg, reporting the pass rate or score on a standardized measure such as the licensure examination) and then move on. Another approach is to elongate timelines, conducting certain assessments every 2 to 3 years rather than every term or annually (eg, curricular quality surveys). Furthermore, it may not be necessary to conduct some assessments at every curricular touchpoint. For instance, one of our programs has deliberately not implemented assessments at every interprofessional didactic or experiential encounter, placing them more strategically instead. The result is purposeful measurement at points in the curriculum when the student cohort is expected to have reached a given level of achievement. A “slow it down” list may help to release resources and limit the rate of data acquisition to more usable amounts.

Finally, creative efforts may help to elevate the impact of some mandated data collection by linking a new assessment strategy into the process to enable the needed improvement. For example, several programs in the academy recently reported finding the reports provided for the Pharmacy Curriculum Outcomes Assessment (PCOA) to be of limited use, making it difficult to use the data to make informed decisions about curriculum revision.16 Adding a simple step may prove valuable, such as interviewing students who have just completed the examination to capture their experience. Their responses to a question such as, “Is there content on the exam that is not covered or reinforced in the curriculum?” can be quite informative. This approach, which was used in one of our programs, revealed that the calculations content on the PCOA was covered in the curriculum, but some items were addressed only in the first year of the program. This information allowed for a targeted curriculum revision that built additional calculations touch points into the second and third years.

Facing the Mountain: A Call to Action

In order to prevent faculty and staff burnout, it may be helpful to periodically ask the assessment committee and/or assessment staff to examine the impact of each practice in the assessment portfolio, evaluating its ability to aid in meaningful educational change. Suggestions for balancing assessment for accountability and assessment for improvement can be made. A list of low-impact practices can be generated, along with recommendations for either increasing their impact or sunsetting them altogether. Approaches for fostering HIAPs and managing mandated assessments can also be included in this evaluation. While this approach does not eliminate the workload and staffing challenges related to assessment, it can help to focus energy and attention.

Identifying and evolving assessment practices requires diligence on several fronts. This work cannot be realized without leadership. The assessment leads must have a vision for the assessment program as a whole and work to involve others so that this becomes a shared vision.5 Developing a culture of assessment within our institutions requires that leaders build trust among those involved in assessment, a shared language around assessment, and research-based guidelines that can orient our assessment activities and serve as criteria for evaluating assessment plans and efforts.14 In addition, HIAPs happen within a context and culture, which requires the design, implementation, and evaluation of assessments to be uniquely tailored to meet local needs.

Now is the time to challenge ourselves to define the tangible benefits of our activities and the need for, and usefulness of, each data point that we collect. How do we judge what truly matters, when enough is enough, or, perhaps more importantly, when some practices should be abandoned? Colleges and schools cannot continue to do it all. We should not be on an endless journey of climbing mountain after mountain, day after day. Instead, we should set our sights on the highest peak and make each day of climbing a meaningful step toward our goal. We have reached a point of maturity where examination of our assessment practices is needed so that our efforts lead us toward targeted, high-quality, and sustainable assessment. We encourage faculty members to identify high-impact practices and to share their findings, assisting not only their own institutions but also the academy at large.

ACKNOWLEDGMENTS

The authors wish to acknowledge the insight and encouragement of 2016-2018 Big Ten Academic Alliance Pharmacy Assessment Collaborative (BTAA-PAC) members in developing this manuscript.

  • Received December 20, 2018.
  • Accepted April 28, 2019.
  • © 2019 American Association of Colleges of Pharmacy

REFERENCES

1. Farris KB, Demb A, Janke KK, Kelley K, Scott SA. Assessment to transform competency-based curricula. Am J Pharm Educ. 2009;73(8):158.
2. Medina MS, Plaza CM, Stowe CD, et al. Center for the Advancement of Pharmacy Education Educational Outcomes 2013. Am J Pharm Educ. 2013;77(8):Article 162.
3. Accreditation Council for Pharmacy Education. Accreditation Standards and Key Elements for the Professional Program in Pharmacy Leading to the Doctor of Pharmacy Degree. 2015. https://www.acpe-accredit.org/pdf/Standards2016FINAL.pdf. Accessed December 11, 2018.
4. DiVall MV, Alston GL, Bird E, et al. A faculty toolkit for formative assessment in pharmacy education. Am J Pharm Educ. 2014;78(9):Article 160.
5. Janke KK, Kelley KA, Sweet BV, Kuba SE. A modified Delphi process to define competencies for assessment leads supporting a doctor of pharmacy program. Am J Pharm Educ. 2016;80(10):Article 167.
6. Janke KK, Kelley KA, Kuba SE, et al. Reenvisioning assessment for the academy and the Accreditation Council for Pharmacy Education’s standards revision process. Am J Pharm Educ. 2013;77(7):Article 141.
7. Gilbert E. Does assessment make colleges better? Who knows? The Chronicle of Higher Education. August 14, 2015.
8. Jankowski NA. “Pardon me, your catch phrase is showing”: the importance of the language we use. Assessment Update. 2017;29(2):9-13.
9. Maki PL. Assessing for Learning. 2nd ed. Sterling, VA: Stylus Publishing; 2010.
10. Pankaj V, Emery AK. Data placemats: a facilitative technique designed to enhance stakeholder understanding of data. New Dir Eval. 2016;2016(149):81-93.
11. Brownell J, Swaner L. Five High-Impact Practices. Washington, DC: Association of American Colleges and Universities; 2010.
12. Kuh GD, Kinzie J, Buckley JA, Bridges BK, Hayek JC. What Matters to Student Success: A Review of the Literature. Commissioned report for the National Symposium on Postsecondary Student Success: Spearheading a Dialog on Student Success. 2006. https://www.ue.ucsc.edu/sites/default/files/WhatMattersStudentSuccess(Kuh,July2006).pdf. Accessed December 18, 2018.
13. Kuh GD. High-Impact Educational Practices: What They Are, Who Has Access to Them, and Why They Matter. Washington, DC: Association of American Colleges and Universities; 2008.
14. Angelo TA. Doing assessment as if learning matters most. AAHE Bull. 1999;51(9):3-6.
15. Collins J. Good to Great: Why Some Companies Make the Leap and Others Don’t. New York, NY: HarperCollins Publishers; 2001.
16. Sweet BV, Assemi M, Boyce E, et al. Characterization of PCOA use across accredited colleges of pharmacy. Am J Pharm Educ. 2019;83(7):Article 7091.
17. Hutchings P, Ewell PT, Banta TW. AAHE Principles of Good Practice: Aging Nicely. http://www.learningoutcomesassessment.org/PrinciplesofAssessment.html#AAHE. Accessed December 11, 2018.
18. Palomba CA, Banta TW. Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education. San Francisco, CA: Jossey-Bass Publishers; 1999.
19. Hutchings P. Opening Doors to Faculty Involvement in Assessment. National Institute for Learning Outcomes Assessment Occasional Paper #4. April 2010. http://www.learningoutcomeassessment.org/documents/PatHutchings_000.pdf. Accessed December 11, 2018.
20. Banta TW, Blaich C. Closing the assessment loop. Change: The Magazine of Higher Learning. 2010;43(1):22-27.
21. Baker GR, Jankowski NA, Provezis S, Kinzie J. Using Assessment Results: Promising Practices of Institutions That Do It Well. National Institute for Learning Outcomes Assessment. July 2012. http://learningoutcomesassessment.org/documents/CrossCase_FINAL.pdf. Accessed December 11, 2018.
22. Casiro O, Regehr G. Enacting pedagogy in curricula: on the vital role of governance in medical education. Acad Med. 2017;93(2):179-184.
23. Tagg J. Double-loop learning in higher education. Change: The Magazine of Higher Learning. 2007;39(4):36-41.
24. Haji F, Morin M-P, Parker K. Rethinking programme evaluation in health professions education: beyond ‘did it work?’ Med Educ. 2013;47(4):342-351.
25. Cook DA, Bordage G, Schmidt HG. Description, justification and clarification: a framework for classifying the purposes of research in medical education. Med Educ. 2008;42(2):128-133.