Research Article | Special Articles

Benchmarking in Academic Pharmacy Departments

John A. Bosso, Marie Chisholm-Burns, Jean Nappi, Paul O. Gubbins and Leigh Ann Ross
American Journal of Pharmaceutical Education October 2010, 74 (8) 140; DOI: https://doi.org/10.5688/aj7408140
John A. Bosso, South Carolina College of Pharmacy – Charleston
Marie Chisholm-Burns, University of Arizona College of Pharmacy
Jean Nappi, South Carolina College of Pharmacy – Charleston
Paul O. Gubbins, College of Pharmacy, University of Arkansas for Medical Sciences
Leigh Ann Ross, University of Mississippi School of Pharmacy

Abstract

This paper discusses benchmarking in academic pharmacy and offers recommendations for its potential uses in academic pharmacy departments. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately to plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is also used internally to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather these data have had limited success. We believe this information is potentially important, urge that efforts to gather it continue, and offer suggestions to achieve full participation.

Keywords:
  • benchmarking
  • pharmacy
  • academia

INTRODUCTION

The term “benchmarking” probably originated from the work performed by cobblers who measured feet for shoes by placing the person's foot on a “bench” and “marking” it out to develop a pattern.1 Today, benchmarking is defined as the process of comparing practices, procedures, and performance metrics to an established standard or best practice. For more than 20 years, benchmarking has been an accepted practice to improve industry processes.2 Benchmarking has numerous applications, most commonly serving as a guideline, standard, and/or comparison, thus allowing a unit, person, or organization to know where they stand in relation to the established guideline or standard. In addition to describing the industry, benchmarking often is used as a catalyst for change within organizations or industries. Table 1 describes benchmarking typologies.2–13

Table 1. Benchmarking Typologies3–13

Benchmarking was first used in the Western manufacturing sector by Rank Xerox in 1983,14 and is now used in both the private and public sectors. Although slower to take root in academia, benchmarking now is used formally and informally on a routine basis. Deans of colleges and schools of pharmacy often informally use benchmarking when adjusting admitting class or faculty size, tuition, faculty members' salaries, and building space. Because these activities often are performed informally, they rarely are labeled as “benchmarking,” which may interfere with the realization or appreciation of benchmarking contributions.

The notion of benchmarking generally is met with mixed emotions. It is received positively when individuals recognize the value of performance standards. Positive feelings, however, may be quelled by fears and concerns about the potential consequences of underperformance. Therefore, while much of the literature describes the benefits of benchmarking, caution has been expressed about using this tool inappropriately. For example, Cox and Thompson warned against implementing another organization's best practices, as they may not be "fitting" or even adaptable for a different environment.15 This and similar criticisms do not discount the benefits of benchmarking, and it is in that spirit that we present this information. Benchmarking should be viewed not only as a competitive endeavor but, more importantly, as a tool that can serve multiple purposes and create opportunities for improvement.

Benchmarking in Academia

Universities and colleges are becoming more interested in benchmarking practices, as they are being asked more frequently to demonstrate the quality of their educational and research programs to the public and government stakeholders.15 The culture of collaboration, as well as the widespread use of analytical research methods, both fostered in academia, bode well for the acceptance of benchmarking in the academic environment.9,16 A rigorous comparison among institutions, “rankings,” and professional colleagues is not a new concept in higher education. The novelty is in the formalization of these comparisons and the use of the term “benchmarking.” The ubiquity of information technology in academia can greatly aid the dissemination of benchmarking, as a wealth of metrics is readily available for use in decision analysis, including human resource management.

The goal of benchmarking in academia is to provide institutional leaders with reputable standards by which they can measure the quality and cost of administrative processes, instructional models, and research efforts, and to identify where opportunities for improvement reside. Leadership committed to improving the quality of offerings and activities can move forward by identifying a benchmark institution that shares a similar mission or structure. The choice of benchmark is often decided by reviewing data compiled by national education groups, including accreditation bodies.17 This external reference point can provide a standard by which to assess current programs, and it can also provide useful insights into problematic areas. The benchmarking of successes may help identify solutions to address noted weaknesses or to rectify identified deficits at the home institution.18 Thus, benchmarking can be prescriptive as well as diagnostic. Benchmarking also can be used to better inform extramural stakeholders as to the state of the institution and the need to expedite corrective actions.

A number of national professional organizations and private consulting firms provide benchmarking services for universities and colleges.19 One example of a focused academic benchmarking effort is the National Study of Instructional Costs and Productivity, (also known as “The Delaware Study”).20 With decades of experience in academic benchmarking, this study provides comparative analyses of student credit hour production (credit value of a course multiplied by the student enrollment in that course), faculty members' teaching loads, and instructional, research, and service expenditures (direct expenditure data incurred for personnel compensation, supplies, and services used in the conduct of each of these functional areas) broken down by academic discipline at a departmental level.21,22 At the time of this writing, over 500 universities and colleges participate in this longitudinal study, which allows not only point-in-time analysis but also permits data trending over time.
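As a purely illustrative sketch, the student credit hour measure described above can be computed exactly as the parenthetical definition states: course credit value multiplied by student enrollment. The course names and figures below are hypothetical, not Delaware Study data.

```python
# Illustrative only: student credit hour (SCH) production per course,
# defined as course credit value multiplied by student enrollment.
# Course names and numbers are hypothetical, not Delaware Study data.
courses = [
    {"course": "Pharmacotherapy I", "credits": 4, "enrollment": 120},
    {"course": "Pharmacokinetics",  "credits": 3, "enrollment": 110},
    {"course": "Seminar",           "credits": 1, "enrollment": 25},
]

# SCH per course, then the departmental total that a benchmarking
# report would aggregate at the discipline level.
for c in courses:
    c["sch"] = c["credits"] * c["enrollment"]

department_sch = sum(c["sch"] for c in courses)
print(department_sch)  # 480 + 330 + 25 = 835
```

Totals like this, tracked yearly, are what permit the point-in-time comparisons and longitudinal trending the study provides.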

Benchmarking can play a significant part in the well-known 4-step approach to continuous quality improvement, the Plan-Do-Check-Act Cycle (Figure 1).23 In academia, the first step would be to identify whom or what to benchmark by selecting the administrative, teaching, or research process to be studied. The second step would be to compile data from and about a benchmark institution, department, or program. The third step would be to analyze the compiled data and conduct comparative assessments to identify quality differences and to yield actionable recommendations for improvement. The final step, at least in the first iteration of the cycle, would be to implement facility, program, or personnel-specific changes. Then, the success of the intervention can be assessed against its ability to narrow the differences between the target and the benchmark. Thus, benchmarking formalizes the planning process to permit sound action to be taken to improve quality, and affords a standard by which the success of an intervention can be assessed.
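The 4 steps above can be sketched as a simple loop. The metric (publications per faculty FTE), the starting values, and the improvement function are hypothetical illustrations of how a gap against a benchmark narrows over repeated cycles, not values prescribed by the Plan-Do-Check-Act literature.

```python
# A minimal, hypothetical sketch of the Plan-Do-Check-Act cycle applied
# to benchmarking. All numbers are invented for illustration.

def pdca_step(own_value, benchmark_value, improve):
    """One iteration: Check the gap against the benchmark, then Act if behind."""
    gap = benchmark_value - own_value   # Check: quantify the quality difference
    if gap > 0:                         # Act: intervene only while behind
        own_value = improve(own_value)
    return own_value, gap

# Plan: choose the process and metric to benchmark,
# e.g. publications per faculty FTE (hypothetical values).
own, benchmark = 1.2, 2.0
for _ in range(3):                      # Do: repeat the cycle
    own, gap = pdca_step(own, benchmark, improve=lambda v: v + 0.3)

print(round(own, 1), round(gap, 1))     # prints: 2.1 0.2
```

Each pass through the loop mirrors one full cycle: the intervention's success is assessed by whether the gap measured at the Check step shrinks.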

Figure 1. The Plan-Do-Check-Act Cycle

Benchmarking is most successful when a well-accepted standard is available. When an acknowledged best practice or leading institution cannot be identified clearly, process or industry composites may serve as a benchmark.3 Compiled reports can lead to rank ordering to identify a benchmark process or organization, but this data can also be aggregated to yield composite measures for a practice or discipline, providing a benchmark when a singular standard is not apparent. Academic institutions, especially professional schools, are accustomed to sharing aggregate student and faculty information, as they share an interest in producing competent graduates and advancing scientific and practical knowledge. Thus, the open nature of educational innovation, and the willingness of educators to work together to achieve coordinated goals, can provide readily available data that can be compiled to yield best practice and composite professional standards.

In 2006, the Secretary of Education's Commission on the Future of Higher Education report called for “a new consumer-oriented database and more and better information on the quality and cost of higher education.”24 This data repository would be open for review by researchers, policymakers, and the general public. To adapt to an era of constrained budgets, higher education is being encouraged to shift from reputation-based organizations to performance-based institutions that cultivate a culture of transparency and responsibility. Universities and colleges will be asked to demonstrate the quality of their educational offerings and the productivity of their research enterprises to many constituencies, including students and faculty members, but also to professional and governmental bodies. As demand for accountability in academia increases, schools and departments will require data to demonstrate their contributions to an institution's mission, as they compete for increasingly limited resources. Benchmarking likely will play a vital role in this reporting and strategic planning.

Benchmarking in Academic Pharmacy Departments

The continuous use of benchmarking data to evaluate and support pharmacy programs is commonplace in health systems. Health systems use a variety of standards (eg, the American Society of Health-System Pharmacists' Best Practices, the Joint Commission's Comprehensive Accreditation Manual for Hospitals: The Official Handbook, and the Centers for Medicare & Medicaid Services' Conditions of Participation Interpretive Guidelines and Conditions for Coverage) to continuously benchmark their pharmacy processes, services, or personnel needs to identify and reduce variations in practice and any resulting outcomes (improved patient care, medication safety, etc). Benchmarking efforts in academic pharmacy may not be employed continuously, but nationally generated data could be used by academic pharmacy departments in a number of useful ways. Obviously, access to national norms would allow comparisons in areas of workload, resources (eg, number of faculty members), and productivity. Such comparisons may form the basis for resource justification, departmental planning, and goal setting, provided the data are appropriately normalized or stratified to allow valid comparisons. For example, institutions or departments with a given emphasis (research, education, practice, etc) should be compared to other similarly focused institutions or departments.

The specific benchmark data to collect is an important consideration. Standards and guidelines can facilitate benchmarking efforts by establishing best practices or setting accepted practice norms. For academic pharmacy, the Accreditation Council for Pharmacy Education (ACPE) seeks to assure and advance quality in pharmacy education through its accreditation standards and guidelines for professional programs in pharmacy. The ACPE standards (Standards 2007) establish minimum standards in academic pharmacy. Specifically, standards 24 through 26 address quantitative and qualitative factors related to faculty members and seek to ensure that a given institution has "fair and equitable policies and procedures and capabilities to attract, develop, and retain an adequate and appropriate number of qualified faculty to contribute to and achieve its mission and goals."25 Descriptions of national benchmarking efforts in the academic pharmacy literature are generally lacking. However, gathering and analyzing benchmarking data can help translate today's best practices into tomorrow's standards.26 A criticism of the current ACPE standards is that they lack a guiding philosophy for pharmacy education and practice, which translates into ambiguity, circular arguments, and non-nullifiable hypotheses.27 Benchmarking departmental characteristics, responsibilities, and outputs within academic pharmacy could help address some of these concerns by establishing best practices or setting accepted norms for faculty members' efforts in the 3 academic missions (teaching, scholarship, and service), and perhaps clinical practice.

Benchmarking departmental characteristics, responsibilities, and outputs within academic pharmacy would require the development of an information framework to standardize data collection and submission. Such a framework would enable processes like the annual survey the American Association of Colleges of Pharmacy (AACP) conducts to benchmark pharmacy faculty compensation. AACP also collects and shares other descriptive data relevant to benchmarking for each college/school of pharmacy, such as the number of pharmacy faculty members in each discipline. While developing a system to benchmark best practices or accepted norms for the 3 traditional academic missions would be more challenging, we believe it is possible nonetheless. Data characterizing the academic missions can be tabulated readily and reported in aggregate, or combined with departmental demographic data and stratified by academic rank, years of service, type of appointment, institution type, or full-time equivalents (FTE) allocated to the given mission. Of the 3 missions, scholarship might be the easiest to benchmark. Table 2 lists a variety of parameters that can be used to measure research and scholarly productivity.28 Indeed, quantification of scholarly productivity, particularly authorship, is the area of academic pharmacy benchmarking most evident in the literature and the one for which the quality of measures has been considered. Thompson and colleagues reviewed a variety of recently developed indices that measure depth, breadth, and creativity in journal article publishing.29 Each index has advantages and disadvantages, several have been validated, and normative values in academic pharmacy have been determined, particularly among pharmacy practice faculty members and department chairs, and college/school of pharmacy deans.29–35
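For illustration, stratifying a scholarship measure by academic rank could look like the following sketch. The faculty records and publication counts are invented, not survey results; any real framework would also stratify by the other variables named above (years of service, appointment type, institution type, FTE).

```python
# Hypothetical illustration: stratify publications per faculty member
# by academic rank to produce rank-specific normative values.
from collections import defaultdict

faculty = [
    {"rank": "assistant", "pubs": 2},
    {"rank": "assistant", "pubs": 4},
    {"rank": "associate", "pubs": 6},
    {"rank": "professor", "pubs": 9},
    {"rank": "professor", "pubs": 11},
]

# Group publication counts by rank, then average within each stratum.
by_rank = defaultdict(list)
for f in faculty:
    by_rank[f["rank"]].append(f["pubs"])

norms = {rank: sum(p) / len(p) for rank, p in by_rank.items()}
print(norms)  # {'assistant': 3.0, 'associate': 6.0, 'professor': 10.0}
```

Reporting the mean per stratum rather than one pooled average is what keeps comparisons of dissimilar departments valid.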

Table 2. Representative Measures to Benchmark Academic Pharmacy

Benchmarking the efforts departments direct towards their teaching mission is more difficult due to a paucity of objective and validated tools for documentation and evaluation.36 Examples of indices that could be measured are summarized in Table 2. The data could be useful to determine normative values for the percentage of the curriculum that departments usually support, experiential student-to-faculty ratios, typical teaching loads based upon academic rank, or FTE allocated to teaching. To ensure valid comparisons, the data should be normalized using common parameters. For example, separating the effort spent in providing clinical service and delivering experiential education is difficult. Faculty members may or may not provide clinical service 12 months a year, or there may be an inconsistent relationship between providing service to patients and precepting students (relative attention to each and/or degree of overlap of effort for each). Therefore, even though clinical practice and experiential education cannot be separated completely, presenting the number of clerkship students precepted per year by individual faculty member in the context of months of clinical service may provide a better estimate of the effort devoted to providing clinical education. Other more subjective or ambiguous measurements, such as student or peer evaluation scores, and institutional rewards for teaching excellence, are probably of limited value in benchmarking due to diverse methods/policies at different institutions.36
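The normalization suggested above can be shown with hypothetical numbers (the faculty members and values are invented): dividing students precepted per year by months of clinical service puts a 12-month clinician and a 6-month clinician on the same scale.

```python
# Hypothetical illustration of normalizing experiential teaching effort:
# students precepted per month of clinical service, so faculty members
# with different clinical calendars can be compared fairly.
preceptors = [
    {"name": "Faculty A", "students_per_year": 12, "clinical_months": 12},
    {"name": "Faculty B", "students_per_year": 9,  "clinical_months": 6},
]

for p in preceptors:
    p["students_per_clinical_month"] = (
        p["students_per_year"] / p["clinical_months"]
    )

print([p["students_per_clinical_month"] for p in preceptors])  # [1.0, 1.5]
```

On raw annual counts Faculty A looks busier; per clinical month, Faculty B carries the heavier precepting load, which is the distortion the normalization is meant to remove.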

Department effort in the academic service mission is perhaps the most challenging to benchmark because it is diverse in scope and therefore difficult to measure objectively.36 Examples of common measures of this mission are summarized in Table 2. The data could be useful to determine normative values for committee loads, leadership roles, and contributions to the profession and society. As with benchmarking the teaching mission, separating the provision of clinical service from clinical education is difficult. Perhaps pharmacoeconomic metrics or patient-outcome measures (therapeutic endpoint targets, quality of life measures, etc) could be used to provide a better estimate of the effort devoted to providing clinical service. Deriving a national benchmark for this activity is particularly challenging because faculty members' practices (percent of effort or hours/day, scope of practice, etc) likely vary profoundly from 1 faculty member to another and from 1 practice site to another.

An Experience With National Benchmarking

To our knowledge, a national benchmarking study examining academic pharmacy departments has not been conducted previously. Because we believed that national data would be a powerful tool for department chairs, in late 2008 we developed and distributed an extensive survey instrument to all chairs of departments of pharmacy practice. A national group of chairs participated in the development of the survey instrument, and an online questionnaire was developed to capture data reflecting both departmental composition and performance. Sixty-one data/response categories were included.37 The survey instrument was approved by the Institutional Review Board of the Medical University of South Carolina. We sought to measure department demographics (number of faculty members, ranks) and faculty performance in a number of areas (scholarship, teaching, and practice) but were unsuccessful in gaining broad input. Potentially useful results were evident although there was considerable variation in the responses in most categories.

Lessons Learned and Recommendations

Benchmarking is a process of assessment and innovation, but universities and colleges can be resistant to change. Therefore, it is advantageous for the advocates of benchmarking to employ reliable research techniques, such as validated survey instruments, independent interviews, and confirmatory measures, to ensure the reliability of their data extraction and to bolster the credibility of their judgments and recommendations. While the data we gathered are of interest and probably have some use, the response rate to the survey instrument was disappointingly low. Although the "data" were in some cases not usable, our experience leads us to several recommendations for future attempts.

First, the survey instrument and the process to administer it should be simple. Perhaps because our survey instrument sought a great amount of detailed information, a number of respondents provided only partial information. This might be due to the chairs being deluged with requests to respond to/complete survey instruments that emanate from a variety of sources, both internal and external to a given institution. Thus, the phenomenon of “survey fatigue” is real. We now recognize that a 61-question survey instrument, requesting highly detailed information, was bound to have limited success.

Second, it is vital that the data collected allows comparison of “apples to apples.” As academic pharmacy departments vary significantly in size and composition, allowing for stratification is desirable to allow valid comparisons. Beyond demographic differences, there are variations in mission. For example, the ACPE expectation that all pharmacy faculty members be involved in scholarship is likely broadly interpreted and applied to varying extents at different institutions. If some departments have little or no expectations for scholarship, their data will skew the pooled results, and attempts to accurately determine national norms or benchmarks will fail. Additionally, a department that has little or no expectation for scholarship may have high expectations for teaching effort and/or professional practice, and again, their data could skew the resultant national average. Thus, we also must be able to stratify the results based upon school mission.

Third, the process must be respectful of participants' time. If there is a real need for national benchmarking data and most chairs would like to use such information, motivation to participate should be high. However, the effort and time required to provide the information should be reasonable. Therefore, in addition to limiting the size of the survey instrument, presenting it in manageable, focused (ie, single mission) segments would be wise.

Fourth, an accurate survey population to sample is necessary. The Chairs of Departments of Pharmacy Practice list purchased from AACP contained over 130 listings, which exceeds the number of pharmacy schools in the country. This may reflect that some schools have geographically split campuses and/or departments and have more than 1 chair of the department of pharmacy practice on the AACP list. Deciding whether such data should be combined for a “school” response or whether the entities are large enough to justify separate consideration is another issue to resolve.

In conclusion, reliable national benchmarking data would be a powerful tool for academic pharmacy department chairs and deans, and further initiatives to gather, assess, and share such information are recommended. Whether such data should be collected by a group of motivated faculty members or chairs (as in our case), or by a pharmacy organization such as AACP, using its internal mechanisms to collect and disseminate these data on a regular basis remains to be determined. In any case, marshalling resources to perform benchmarking of academic pharmacy departments should be a priority for the academy.

  • Received March 31, 2010.
  • Accepted June 2, 2010.
  • © 2010 American Journal of Pharmaceutical Education

REFERENCES

1. Santovec ML. Benchmark to assess the value of policies, programs. Women Higher Educ. 2009;18(8):18-19.
2. Francis G, Holloway J. What have we learned? Themes from the literature on best-practice benchmarking. Int J Manage Rev. 2007;9(3):171-189.
3. Camp RC. Business Process Benchmarking: Finding and Implementing Best Practices. Milwaukee, WI: ASQC Quality Press; 1995.
4. Gilson C, Pratt M, Roberts K, Weymes C. Peak Performance: Business Lessons from the World's Top Sports Organizations. London, England: Harper Collins; 2001.
5. Peters T, Waterman RH. In Search of Excellence: Lessons From America's Best Run Companies. London, England: Harper Collins; 1982.
6. Trosa S, Williams S. Benchmarking in Public Sector Performance Management. Performance Management in Government. OECD Occasional Papers No. 9; 1996.
7. Bowerman M, Francis GAJ, Ball A, Fry J. The evolution of benchmarking in UK local authorities. Benchmarking. 2002;9(5):429-449.
8. Elnathan D, Lin TW, Young SM. Benchmarking and management accounting: a framework for research. J Manag Account Res. 1996;8(2):37-54.
9. Schofield A. Benchmarking in Higher Education: An International Review. London, England: CHEMS; Paris, France: UNESCO; 1998.
10. Jackson N, Lund H. Benchmarking for Higher Education. Buckingham, England: Open University Press; 2000.
11. Murdoch A. Lateral benchmarking, or what Formula One taught an airline. Manage Today. 1997;75(10):64-67.
12. CIPFA. Benchmarking to Improve Performance. London, England: Chartered Institute of Public Finance and Accountancy; 1996.
13. Watson GH. Strategic Benchmarking. New York, NY: Wiley; 1993.
14. Jacobson G, Hillkirk J. Xerox, American Samurai. New York, NY: Macmillan; 1986.
15. Alstete WJ. Benchmarking in higher education: adapting best practices to improve quality. ERIC Digest. Washington, DC: ERIC Clearinghouse on Higher Education; 1995. HE029 18:1-4.
16. Arnone M. New commission on college accountability debates standards, rewards, and punishments. Chron Higher Educ. May 11, 2004.
17. Achtemeier SD, Simpson RD. Practical considerations when using benchmarking for accountability in higher education. Innovative Higher Educ. 2005;30(2):117-128.
18. Epper RM. Applying benchmarking to higher education: some lessons from experience. Change. 1999;31(6):24-31.
19. Higher Education Associations. http://www.ntlf.com/html/lib/assoc/. Accessed August 25, 2010.
20. Middaugh MF. Using Quantitative Benchmarking Data. In: Understanding Faculty Productivity: Standards and Benchmarks for Colleges and Universities. 1st ed. San Francisco, CA: Jossey-Bass; 2001.
21. Middaugh MF. Establishing Qualitative Benchmarks in Individual Departments. In: Understanding Faculty Productivity: Standards and Benchmarks for Colleges and Universities. 1st ed. San Francisco, CA: Jossey-Bass; 2001.
22. Middaugh MF. A consortial approach to assessing instructional expenditures. University of Delaware Office of Institutional Research, The National Study of Instructional Costs and Productivity. http://www.udel.edu/IR/cost/consortial.html. Accessed August 25, 2010.
23. Langley GL. Using the Model for Improvement. In: The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2nd ed. Hoboken, NJ: Wiley; 2009.
24. US Department of Education. A Test of Leadership: Charting the Future of US Higher Education. Washington, DC: US Dept of Education; 2006.
25. Accreditation Council for Pharmacy Education (ACPE). Accreditation standards and guidelines for the professional program in pharmacy leading to the doctor of pharmacy degree, effective July 1, 2007. http://www.acpe-accredit.org/standards/default.asp. Accessed August 25, 2010.
26. Murphy JE. Using benchmarking data to evaluate and support pharmacy programs in health systems. Am J Health-Syst Pharm. 2000;57(Suppl 2):S28-31.
27. Campbell WH. Accreditation Standards 2007: much is implied, little is required. Ann Pharmacother. 2006;40(9):1665-1671.
28. Leslie SW, Corcoran GB, MacKichan JJ, Undie AS, Vanderveen RP, Miller KW. Pharmacy scholarship reconsidered: the report of the 2003-2004 research and graduate affairs committee. Am J Pharm Educ. 2004;68(1):Article S6.
29. Thompson DF, Callen EC, Nahata MC. New indices in scholarship assessment. Am J Pharm Educ. 2009;73(6):Article 111.
30. Jungnickel PW. Scholarly performance and related variables: a comparison of pharmacy practice faculty and departmental chairpersons. Am J Pharm Educ. 1997;61(1):34-44.
31. Thompson DF, Callen EC, Nahata MC. Publication metrics and record of pharmacy practice chairs. Ann Pharmacother. 2009;43(2):268-275.
32. Thompson DF, Callen EC. Publication records among college of pharmacy deans. Ann Pharmacother. 2008;42(1):142-143.
33. Thompson DF, Harrison KE. Basic science pharmacy faculty publication patterns from research-intensive US colleges, 1999-2003. Pharm Educ. 2005;5(2):83-86.
34. Thompson DF, Segars LW. Publication rates in US schools and colleges of pharmacy, 1976-1992. Pharmacotherapy. 1995;15(4):487-494.
35. Coleman CI, Schlesselman LS, Lao E, White CM. Numbers and impact of published scholarly works by pharmacy practice faculty members at accredited US colleges and schools of pharmacy (2001-2003). Am J Pharm Educ. 2007;71(3):Article 44.
36. Kennedy RH, Gubbins PO, Luer M, Reddy IK, Light KE. Developing and sustaining a culture of scholarship. Am J Pharm Educ. 2003;67(3):Article 92.
37. Bosso JA, Nappi J, Gubbins PO, Chisholm-Burns M, Ross LA. National benchmarking of departments of pharmacy practice [abstract]. Meeting Abstracts: 110th Annual Meeting of the American Association of Colleges of Pharmacy, Boston, Massachusetts, July 18-22, 2009. Am J Pharm Educ. 2009;73(4):Article 57.