Viewpoints

Response Rates and Responsiveness for Surveys, Standards, and the Journal

Jack E. Fincham, Associate Editor, School of Pharmacy, The University of Missouri-Kansas City
American Journal of Pharmaceutical Education September 2008, 72 (2) 43; DOI: https://doi.org/10.5688/aj720243

The Journal has regularly published the results of survey research. As an academy we seem to be very interested in learning what our faculty members and students think, how they perform, and what is going on at other schools and colleges of pharmacy. A survey is often the best approach to acquiring that knowledge. However, the Editors believe that survey research published in the Journal has varied in quality and that standards for survey research can be used to improve the quality of research in the academy and the quality of papers published in the Journal. With that in mind, a decision was made in early 2008 to clarify expectations for survey research manuscripts submitted to the Journal. In Volume 72, Issue 1 of the Journal, Draugalis and colleagues1 presented an excellent paper detailing “best practices” for survey research manuscripts. These standards are now recommended to authors and reviewers, and will be used by the Editors in making decisions regarding acceptance of manuscripts.

One item addressed in the paper1 was the importance of response rates to questionnaire research, while another issue dealt with sample representativeness. The Draugalis et al1 paper and an examination of previously published survey research manuscripts in the Journal have led to the application of more stringent expectations for manuscripts published in the Journal.

Expectations for Survey Research Response Rates

There are now higher expectations for survey response rates. Response rates approximating 60% for most research should be the goal of researchers and certainly are the expectation of the Editor and Associate Editors of the Journal. For survey research intended to represent all schools and colleges of pharmacy, a response rate of ≥ 80% is expected.

The following sentences will be included by the Journal Editors in letters sent to authors of manuscripts that do not meet generally accepted standards for survey research:

“We are now applying stricter standards for survey research. For a discussion of the rationale behind the new standards, please refer to the paper by Draugalis et al, Article 11 in Volume 72, Issue 1 of AJPE (http://www.ajpe.org/view.asp?art=aj720111&pdf=yes). In brief, survey reports that are intended to be generalized to all colleges/schools of pharmacy should (1) have a response from at least 80% and (2) demonstrate that the sample includes representation of colleges based on the following factors that are similar to the overall profile of US institutions: public vs. private, geographic location, and university affiliation (stand-alone, part of a comprehensive university, or part of an academic health center).”

Why Are Representativeness and Response Rates Important Issues?

Representativeness. Representativeness refers to how well the sample drawn for the questionnaire research compares with (ie, is representative of) the population of interest. Can the reader evaluate the study findings with assurance that the sample of respondents reflects elements of the population with breadth and depth? Lack of response to the questionnaire by potential respondents in a sample or population is referred to as nonresponse bias. Nonresponse bias is a deadly blow to both the reliability and validity of survey study findings. If a survey achieves only a 30% response rate, the study suffers from a nonresponse bias of 70%. If the response rate to a survey is 20%, the nonresponse bias is 80%. Brick and Kalton2 suggest that one way of dealing with lack of representativeness is to weight the study sample segments to reflect the greater population attributes. However, the universe of pharmacy faculty members is too diverse and segmented for this to be a viable option for pharmacy education research.
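
To make the weighting idea concrete, the following is a minimal sketch in Python of post-stratification weighting along the lines Brick and Kalton2 describe; the strata, proportions, and counts are hypothetical and chosen only for illustration.

```python
# Minimal sketch of post-stratification weighting (hypothetical numbers).
# Respondents in each stratum are weighted so the responding sample mirrors
# the known population distribution on that attribute.

# Known population composition (eg, public vs. private institutions)
population_share = {"public": 0.60, "private": 0.40}

# Composition of the respondents actually obtained
respondent_counts = {"public": 45, "private": 15}
total_respondents = sum(respondent_counts.values())

# Weight for each stratum = population share / respondent share
weights = {
    stratum: population_share[stratum] / (count / total_respondents)
    for stratum, count in respondent_counts.items()
}

print(weights)  # {'public': 0.8, 'private': 1.6} -- over-represented strata weighted down
```

As noted above, however, the academy is too diverse and segmented for simple weighting of this kind to be a viable fix in pharmacy education research.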

Draugalis et al1 listed 10 criteria for survey research reports in the Appendix of their paper. Two of these criteria, one specific to representativeness (criterion 3) and one to response rates (criterion 7), are reprinted below in this section and the next, respectively:

Criterion 3. Did the authors select samples that well represent the population to be studied?

  a. What sampling approaches were used?

  b. Did the authors provide a description of how coverage and sampling error were minimized?

  c. Did the authors describe the process to estimate the necessary sample size?

After conducting a meta-analysis of web- or Internet-based surveys, Cook et al point out that: “Response representativeness is more important than response rate in survey research. However, response rate is important if it bears on representativeness.”3(p821) When total nonresponse occurs in sample elements drawn from small populations, the effect of nonresponse bias is even more profound. Because the academy is relatively small and made up of disparate entities (small vs. larger schools; public vs. private; research intensive vs. teaching; religiously affiliated vs. unaffiliated; standalone vs. medical center- or liberal arts-based; and combinations and permutations of the above), samples must be appropriately representative of the greater academy in scope so as to further diminish the negative effects of nonresponse bias. A representation of 80% has been chosen as the standard for evaluation in the Journal.
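
One way a reviewer or reader might probe representativeness on any one of these factors is a simple goodness-of-fit comparison between the responding sample and the known profile of US institutions. The sketch below is illustrative only; it is not the Journal's review procedure, and the proportions and counts are invented.

```python
# Hypothetical check of whether responding schools resemble the overall
# profile of US institutions on a single factor (public vs. private).
from scipy.stats import chisquare

population_props = {"public": 0.55, "private": 0.45}   # assumed population profile
sample_counts = {"public": 40, "private": 20}           # assumed respondent counts

n = sum(sample_counts.values())
observed = [sample_counts[k] for k in population_props]
expected = [population_props[k] * n for k in population_props]

# A small p value suggests the respondents do not mirror the population
# profile on this factor, ie, possible nonresponse bias.
stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
```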

Response rates. Draugalis et al1 list the following criterion when considering response rates:

Criterion 7. Was the response rate sufficient to enable generalizing the results to the target population?

  a. What was the response rate?

  b. How was response rate calculated?

  c. Were follow-ups planned for and used?

  d. Do authors address potential nonresponse bias?

Response rates are calculated by dividing the number of usable responses returned by the total number eligible in the sample chosen. Mitchell4 argues, with documentation from others, that the survey response rate should be calculated as the number of returned questionnaires divided by the total sample who were sent the survey initially. Others subtract the number of undeliverable questionnaires from the initial sample to obtain the denominator. Mitchell4 argues that this latter calculation determines only the questionnaire's success in inducing respondents to return the survey, and masks a potentially large sample selection bias for the instrument.
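
The following short sketch simply contrasts the two denominator choices described above, using invented counts.

```python
# Hypothetical counts illustrating the two ways of calculating a response rate.
surveys_sent = 200       # total sample initially sent the survey
undeliverable = 10       # questionnaires returned as undeliverable
usable_returned = 120    # usable completed questionnaires returned

# Mitchell's preferred calculation: denominator is the full initial sample
rate_full_sample = usable_returned / surveys_sent                        # 0.60

# Alternative: undeliverables subtracted from the denominator, which
# Mitchell argues can mask a sample selection bias
rate_delivered_only = usable_returned / (surveys_sent - undeliverable)   # ~0.63

print(f"{rate_full_sample:.1%} vs. {rate_delivered_only:.1%}")
```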

Questionnaires can be administered by telephone, in person, by mail only, by e-mail only, or via the Internet only, or by a combination of these modes. Response rates to e-mail surveys have decreased since the late 1980s.5 E-mail response rates may approximate only 25% to 30% without follow-up e-mails and reinforcements.6 E-mail surveys incorporating multimode approaches may yield response rates as high as 70%.6 Allowing for differing methods of returning surveys (e-mail and/or mailed options; ie, multimode) will aid those respondents who prefer to print out a survey instrument and respond via US mail. In a study carried out by Yun and Trumbo,6 a response rate of 72% was obtained with a multimode approach. By the late 1990s, a methodology for e-mail surveys comparable to what had enhanced response rates to mailed survey instruments had not yet been developed.7 This has since changed, with Schaefer and Dillman7 asserting that a multimode approach to e-mail survey administration will enhance response rates. In a study comparing differing methods of administration, response rates close to 60% were achieved with multimode contacts.7 This mixed-mode approach, combining mailed and e-mailed survey instruments with an Internet-based response mechanism, also helps reduce the problem of coverage error in the administration of surveys.

Reviews of electronic survey research point to response rates similar to those obtained via mailed survey methodologies.8 In a comparative study, mailed surveys alone or combined with e-mail/web follow-up resulted in higher response rates than an e-mail/web survey followed up by a mailed contact to nonrespondents.8 Response rates to both web and mailed survey instruments increased when preceded by a mailed contact to potential respondents.9 Multiple contacts, appearance, incentives, personalization, and sponsorship have significant impacts on survey response rates.10 High response rates are achievable and have been achieved across many studies. Sitzia and Wood11 examined a large, global sample of survey response rates in patient satisfaction studies and found an average response rate of 76.7% for the studies analyzed. Even so, they concluded that patient satisfaction studies show a poor awareness of important methodological considerations in design and administration.

Summary and Points About Previously Published Research in the Journal

Perusing past issues of the Journal will make apparent that some previously published papers contain survey research that does not meet these new criteria for representativeness and response rates. This is understood, and the points related to it are not lost on the Journal Editors. Nevertheless, these new standards will be seen as positive by those in the academy and beyond who look to the Journal for quality in all aspects of educational research in pharmacy and in the manuscript submissions that emanate from such studies.

© 2008 American Journal of Pharmaceutical Education

REFERENCES

  1. Draugalis JR, Coons SJ, Plaza CM. Best practices for survey research reports: a synopsis for authors and reviewers. Am J Pharm Educ. 2008;72(1):Article 11.
  2. Brick JM, Kalton G. Handling missing data in survey research. Stat Methods Med Res. 1996;5:215-38.
  3. Cook C, Heath F, Thompson RL. A meta-analysis of response rates in web- or internet-based surveys. Educ Psychol Meas. 2000;60(6):821-36.
  4. Mitchell RC. Using Surveys to Value Public Goods: The Contingent Valuation Method. Washington, DC: Resources for the Future; 1989.
  5. Sheehan K. E-mail survey response rates: a review. J Comput-Mediated Comm. 2001;6(2). Available at: http://jcmc.indiana.edu/vol6/issue2/sheehan.html. Accessed April 1, 2008.
  6. Yun GW, Trumbo CW. Comparative response to a survey executed by post, e-mail, & web form. J Comput-Mediated Comm. 2000;6(1). Available at: http://jcmc.indiana.edu/vol6/issue1/yun.html. Accessed April 1, 2008.
  7. Schaefer DR, Dillman DA. Development of a standard e-mail methodology. Public Opinion Q. 1998;62:378-97.
  8. Converse PD, Wolfe EW, Oswald FL. Response rates for mixed-mode surveys using mail and e-mail/web. Am J Eval. 2008;29(1):99-107.
  9. Kaplowitz MD, Hadlock TD, Levine R. A comparison of web and mail survey response rates. Public Opinion Q. 2004;68(1):94-101.
  10. Dillman DA. Survey implementation. In: Mail and Internet Surveys: The Tailored Design Method. 2nd ed. New York: John Wiley & Sons, Inc; 149.
  11. Sitzia J, Wood N. Response rate in patient satisfaction research: an analysis of 210 published studies. Int J Qual Health Care. 1998;10(4):311-7.