To the Editor. This letter is in response to the Special Article by Fincham and Draugalis published in February 2013.1 The authors note that the intent of their paper is to clarify Journal guidelines regarding survey research standards. Even though the paper is steeped in scientific rigor, it reads more as an apology for the guidelines than as a clarification. Furthermore, it implicitly suggests that the authors assume all survey research is done in an environment in which the researchers have complete control over their sample and just need to make scientific decisions about sample size and response rates relative to the population and its characteristics. Would that this were so. Would that we could do our surveys in laboratory environments in which we could control not only sample size but also all confounding factors. Our jobs would then be relatively easy and almost technician-like in their execution. The reality is that survey research is conducted in a social world, and we often have to satisfice rather than optimize.2 I found the related article by Mészáros and colleagues3 (also built on a scientific foundation) to be considerably more grounded in reality and to make a far more compelling argument for being cautious about declared standards. My intent here is to use an example from my own research to reinforce the argument of Mészáros and colleagues, but with a different, more practical tack.
I have an ongoing interest in how medications are managed in primary and secondary schools. I have always been eager to understand the practices that take place in schools, especially when the medications are handled by unlicensed assistive personnel rather than school nurses. Unlicensed assistive personnel include secretaries, teachers, principals, bus drivers, etc. Several years ago, I was able to obtain the names and addresses of every district superintendent and every school principal in a large, affluent county west of Chicago. These included 43 public school districts comprising 241 schools as well as 56 private schools. Every grade level was included from kindergarten to grade 12. Each school district and private school was contacted by telephone to ascertain general interest in the study. Then, following recommended protocols, letters were sent to each district superintendent asking them to sign off on a form that would give the school principal of each school in their district permission to participate in the research. Superintendents were asked to return the signed permission forms to the researchers, and then we included a copy of the signed form in the envelope with the survey form sent to each principal. We asked the principals to distribute the survey instruments to the persons primarily responsible for medications at their schools. Private schools were contacted directly. This was all done at some expense. In effect, we tried to survey the entire population of schools in the county, understanding that the results could not be generalized to other counties without using a bit of common sense.
The bottom line is that the superintendents gave permission for only 57 of 241 public schools to participate. These 57 public schools and all 56 private schools were sent survey instruments. Only 45 survey instruments were returned by public schools and only 18 by private ones. So, out of 297 schools in the county, we received data from only 63. To further complicate the problem, these 63 schools were probably different in some way from the remaining 234. Perhaps the principals were more open-minded and willing to let researchers take a look at their practices. Perhaps they felt their medication practices were good and so were not afraid to have them examined by researchers.
What to do now? If I am reading Fincham and Draugalis correctly, they would say that we had a nice go at it but failed and that we should just forget the money, time, and effort spent on the study – any results found would be meaningless from a scientific point of view. But, if we look at some of the significant findings a little more closely, they might not be so meaningless. For example, licensed subjects (ie, school nurses) were more likely to know that state and professional associations have medication management guidelines and were more likely to adhere to these guidelines than unlicensed subjects. Also, it was common for teachers, secretaries, health aides, and administrators to handle medications even in schools with nurses. Licensed subjects rated the medication management process as being more difficult than did unlicensed assistive personnel. Finally, subjects thought that books, the Internet, physicians, and continuing education programs were more helpful than pharmacists when seeking information about medications.4
The paper ends with a litany of limitations. Alternative sampling methods are discussed. For example, we could have obtained a list of school nurses from their professional society and contacted them directly, which would have allowed us to bypass the school access barrier. However, this would not have provided access to unlicensed assistive personnel, who, as our results suggest, are the group we really need to know about if we want to improve the dangerous melee that is known as medication management in schools. Also, is it not reasonable to consider the possibility that the distressing situation we found in the most open schools in this relatively affluent suburban county might be far worse in more cautious, resource-depleted schools? These results, I think, shed important light on a neglected drug therapy problem in spite of the myriad intractable sampling-related issues that the research encountered. At a minimum, they are important because they cry out for further exploration of this problem.
This research project was ultimately published in a non-refereed journal (showing that others agree with Fincham and Draugalis), but it can be contrasted with another study on the same topic that was published as a peer-reviewed article in the Journal.5 In that research, a survey instrument was sent to the entire population of faculty members at US colleges and schools of pharmacy (N = 4,569). Four hundred ninety-nine usable responses were returned. Even this study does not meet the Fincham and Draugalis standards, but the numbers are large, and it was published fairly easily. There is not sufficient space here to describe the specifics of this second study, but the point is that some reasonable researchers in the field might consider the former study to be more important than the latter one.
The school study is just one example of the access problems that exist in the real world of survey research. Whether the researcher is working in schools, pharmacies, clinics, or, for present purposes, colleges and schools of pharmacy, the reality is that you often have to take what you can get. Reasonable-minded researchers understand this and will evaluate studies with these realities in mind. Again, if we assume that the researcher has control over all sampling-related issues, then we are inadvertently defining survey research as a technical function. So, in conclusion, I think the Journal’s guidelines and the defense of them by Fincham and Draugalis, while looking good on paper, are not instrumental for survey researchers or helpful for the development of a comprehensive body of knowledge over time. This brings up a related point to discuss on another day: should the Journal perhaps be reaching out more aggressively for relevant and insightful empirical studies that do not use pharmacy schools, courses, faculty members, or students as the unit of analysis?
- © 2013 American Association of Colleges of Pharmacy
Editor's Response to “A Different Perspective on Survey Research Standards”
Editor’s Response: Survey research standards have been and will continue to be an important topic for the Journal. Reutzel makes an argument against the application of the guidelines described by Fincham and Draugalis in several papers.1-4 Before the Journal implemented survey research guidelines in 2008, the quality of survey research appearing in the Journal was variable, with many published papers not meeting appropriate standards.2 All research, including survey research, is challenging. However, researchers who wish to have their work published have an obligation to ensure that their conclusions are scientifically valid. The Journal guidelines are used to ensure that the survey research we publish meets that standard. The guidelines have raised the quality of the work appearing in the Journal and will be applied to future submissions.