SPECIAL ARTICLE

Insights, Pearls, and Guidance on Successfully Producing and Publishing Educational Research

Adam M. Persky and Frank Romanelli
American Journal of Pharmaceutical Education June 2016, 80 (5) 75; DOI: https://doi.org/10.5688/ajpe80575
Adam M. Persky, Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina; Associate Editor
Frank Romanelli, University of Kentucky College of Pharmacy, Lexington, Kentucky; Associate Editor

Abstract

It is the collaborative responsibility of authors, reviewers, and editors to produce high-quality manuscripts that advance knowledge and educational practice. Experience with manuscript submissions to the American Journal of Pharmaceutical Education reveals several improvements authors can make to increase their success rate during the review process. These include justifying the research question, clarifying and detailing the methodology, presenting data and results concisely, and framing research findings in the context of what is already known. This paper summarizes common flaws we see in submitted manuscripts and suggests how to address them to improve publication success.

Keywords
  • manuscripts
  • research
  • publication bias

INTRODUCTION

Pharmacy faculty members may be trained as clinicians and/or as scientists. As such, most faculty members have conducted clinical research, wet lab research, or social/behavioral and administrative research. Faculty members often have less formal training or experience in conducting educational or pedagogic research. As Journal editors, we often receive and review manuscripts lacking the methodological rigor and critical analysis needed to ensure results are valid and conclusions generalizable to a broader audience of learners, faculty members, or disciplines. We also encounter unpolished manuscripts that are verbose and difficult to comprehend. A central goal of the editorial team at the Journal is to improve the quality of publications, the critical acumen of reviewers, and the overall writing skills of the academy. We strive to ensure that the knowledge generated by pharmacy education research contributes to the health sciences and to higher education in general. Scholars are always encouraged to innovate but must recognize that even innovation must meet quality standards.

In the past, the Journal published guidelines regarding survey research,1-4 guidelines for conducting scholarship of teaching and learning,5 and general stylistic considerations/instructions for authors.6 These standards are provided to authors so that readers can have reasonable confidence that published works are of sufficient quality, with appropriate internal validity (ie, how well the study was conducted and how confidently we can conclude that the change in the outcome variable was produced solely by the independent variable and not extraneous ones) and external validity (ie, the extent to which a study’s results can be generalized to other settings). The Journal’s acceptance rates for manuscripts submitted in the IDEAS and Research Article formats over the past five years are approximately 47% and 34%, respectively (2009: IDEAS=59%, Research=52%; 2014: IDEAS=∼30%, Research=∼20%); these rates have declined as the number of papers submitted has rapidly increased. In comparison, acceptance rates among most peer journals fall substantially lower [eg, Advances in Health Science Education (13%), Academic Medicine (20%), Medical Education (20%)]. With an estimated 600 submissions anticipated in 2016, the selection of publishable manuscripts continues to be resource intensive.

The manuscript review process involves several concurrent resource issues including, but not limited to, financial, physical, personal, and intellectual ones. Editors must work judiciously to manage these resources while ensuring the dissemination of timely and high quality scholarly work. Much like grantsmanship, successful publishing is increasingly challenging while requiring a significant time commitment. The academy, and in particular pharmacy education, must be vigilant in its pursuit of excellence within health science education and higher education at large. As such, we provide suggestions to potential authors and reviewers including some “do’s and don’ts” (see references7-13) to enhance the quality of future publications in the Journal.

First, keep in mind that reviewers, editorial board members, and editors have an essential responsibility to the Journal and, more broadly, to the academy as the “gatekeepers” of scholarship. Reviewers must critically evaluate papers at both the micro and macro levels, ensuring the internal and external validity of submissions. Research reports should have clear objectives, methods, and results accompanied by a balanced discussion that addresses results or conclusions with an appropriate perspective. Peer reviewers and editors are entrusted to critically appraise and, as necessary, reject papers that do not meet scholarly standards; this responsibility is balanced with being thoughtful enough to know that no research study is perfect. Constructive analysis and feedback are as important for rejected papers as they are for those with recommended revisions and even for those that are accepted. It is incumbent upon authors to perform the stated research and to subsequently draft manuscripts as efficiently and thoughtfully as possible so that each submission delineates a complete story. Reviewers’ responsibility is to aid in strengthening manuscripts and to help make them significant contributions to the literature.14 This entire process can be a love-hate relationship.

In terms of publication success, weak research design and poor writing often are cited as top reasons for rejection.15-19 This paper discusses how to maximize publication success; specifically, it focuses on methodological issues associated with educational research and on issues related to writing style and manuscript drafting.

METHODOLOGICAL ISSUES

A project must include methods that can answer the question at hand. In teaching, faculty members aim for instructional alignment (ie, alignment of objectives, assessments, and instructional methods). The same alignment is expected in educational research. More often than not, errant methods used to research a question or hypothesis lead to reduced success in the peer-review process. Provided below are some considerations regarding the design of educational research studies.

Study rationale. Redundant manuscripts that simply reiterate research conducted elsewhere offer little contribution to the existing body of work in a given area. As such, they are less successful in the peer review process because they are not necessarily advancing the field. While confirmatory studies are helpful, each project should contribute a new piece of information. To prevent redundancy, a comprehensive literature review should include the pharmacy literature, the health sciences beyond those disciplines commonly found in our schools of pharmacy, and the humanities. There is significant work being done in the STEM (science, technology, engineering, and math) disciplines, which also may serve as a basis for educational research in pharmacy. Databases such as ERIC (Education Resources Information Center) and PsycINFO are helpful resources for mining publications related to research on education, learning, and memory. There also are discipline-specific databases and more general search engines, such as Articles+ or Google Scholar, that may be helpful.

Control groups. It is fairly common for the Journal to receive interventional manuscript submissions that lack a control group. In research, we learn through comparisons. Comparisons require a frame of reference (ie, what you are comparing against; the control). In the health sciences, investigators compare new drugs or new interventions to current standards of care; rarely is that standard a placebo or, alternatively, nothing. Similarly, educational research should strive for control interventions, which can include, but are not limited to, different sections of courses, historical controls, or a randomized control group. For the latter, it is possible to randomize students to educational interventions. This would require informed consent and may require that students have the opportunity to receive both interventions,20 that the study be counterbalanced, or that a block design be used. On a practical note, these types of studies may require more discussion with the institutional review board, depending on the specific considerations. Studies without some comparator are akin to comparing something to nothing. Readers want to know whether the intervention is better than, no worse than, or worse than current standard practice (eg, lecture, reading from a standardized textbook).
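As a minimal sketch of the counterbalancing idea (the student identifiers and group labels below are invented for illustration, not a Journal-endorsed protocol), a crossover study might randomize students to one of two intervention orders:

```python
import random

def assign_counterbalanced(students, seed=2016):
    """Randomly split students into two crossover sequences.

    Sequence "AB" receives intervention A first, then B; sequence "BA"
    is the reverse, so every student experiences both interventions
    while order effects are balanced across the class.
    """
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible/auditable
    shuffled = list(students)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"AB": shuffled[:half], "BA": shuffled[half:]}

groups = assign_counterbalanced([f"student{i:02d}" for i in range(1, 21)])
print(len(groups["AB"]), len(groups["BA"]))  # 10 10
```

A block design would follow the same pattern but shuffle within strata (eg, prior GPA quartiles) rather than across the whole class.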

The pre/post format. The pre-assessment in a pre/post design should not be considered a control. Rather, the pre/post format is meant to take into account baseline knowledge or skills. In the absence of a control arm (ie, some other intervention), this format basically demonstrates that students can learn or that confidence can change with an educational intervention. Imagine a study in which students listened to audio lectures while sleeping, and the study finds no change in knowledge after six weeks of this intervention. Without a control group, one might conclude the audio-during-sleep intervention does not work. However, imagine the same study with a control group whose performance declines over the same six-week period; one might instead conclude that audio during sleep prevents loss of information. When there is no change from preintervention to postintervention, it is still not feasible to draw a conclusion about impact without knowing whether the control group’s performance stayed the same, increased, or decreased. Pre/post designs do not inherently show that students learn more, or gain more confidence, than they would through an alternative intervention. Additionally, repeated questioning may produce the Hawthorne or other observer effects, in which subjects’ responses or activity are influenced simply by study participation.21
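The logic of the control arm in the sleep-lecture thought experiment can be made explicit with a simple difference-in-differences calculation (all scores here are hypothetical):

```python
def diff_in_diff(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Intervention effect = change in the treatment arm minus change in the control arm."""
    return (post_treat - pre_treat) - (post_ctrl - pre_ctrl)

# Hypothetical knowledge scores (0-100) for the audio-during-sleep example:
# the treatment arm shows "no change" (70 -> 70), but the control arm
# declines (70 -> 62), so the intervention may have prevented an 8-point loss.
effect = diff_in_diff(pre_treat=70, post_treat=70, pre_ctrl=70, post_ctrl=62)
print(effect)  # 8
```

Without the control arm's pre/post change, the second subtraction is impossible, which is exactly why a pre/post design alone cannot establish impact.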

One is good, two is better – maybe. A combined intervention should, in principle, be at least as effective as either of its components alone.18 For example, think-pair-share helps with learning, and adding clickers to a class that uses think-pair-share will likely further increase learning.16 We may not need to prove this, unless the researcher is exploring additional elements of the intervention such as cost or time effectiveness. The question to be addressed is whether the observed increase in learning associated with an intervention is worth the time, effort, or resources required for it to be effective.

Student feelings about a course or intervention. Students’ perceptions of a course or other intervention, such as that they “loved the course,” “loved the activity,” or even “felt it was useful,” often do not address the educational question being investigated, unless perhaps they speak to cost-effectiveness. For example, suppose intervention A helps with learning but students dislike it, while intervention B helps with learning to an equal degree, takes as much time and effort as A, but is liked by students. In this example, B could be deemed better because of the enjoyment aspect. Whether students like or enjoy the course or an intervention can be a secondary outcome, but rarely, if ever, should it be the primary endpoint. The essential question is and must be student learning (eg, retention, transfer, skill development) or the assessment of enhanced student learning.

DRAFTING THE MANUSCRIPT

Regardless of the scale of the research intervention, a manuscript must clearly articulate to the reader the nature of the hypothesis, the outcomes, and the implications for education or the academy at large. Common stylistic problems recur across submissions and heighten reviewer and editorial concerns.

Introduction

What are the knowledge gaps? The first part of a manuscript (ie, research or IDEAs format) should briefly and concisely introduce and reinforce the research question by drawing on existing gaps in knowledge. This section should make an argument (eg, we argue that the primary factor in students missing class is a lack of student engagement, not the availability of lecture recordings) or state a hypothesis (eg, we hypothesize that students who do not engage in regular physical activity perform worse in pharmacy school than students who engage in regular physical activity). By the end of the introduction, the purpose of the study and its objectives should be clear to the reader.

Accreditation standards are not enough. As in any scholarly endeavor, the author should state the educational inquiry, issue, or challenge the research will address and then provide sufficient evidence, theory, or support to justify the research. Accreditation standards alone should not be used to justify scholarly inquiry. Simply defaulting to pharmacy accreditation standards as an impetus for inquiry limits both the reach and readership of the Journal. A manuscript is strengthened when it is clear that authors have holistically examined the literature from the academy as well as from other health professions education. The reader is looking for a clearly stated research question that is relatable to the current literature and explains how an author’s data will advance the research question.

Can the study be generalized? Many manuscripts focus on a problem or problems at a single school. The real question is whether the information garnered can be generalized or transferred to other schools across the academy. Moreover, can the findings generalize to schools that differ in size, location, age of the program, funding (public vs private), and student demographics?18 The findings should be as generalizable as possible. If a study is characterizing “something” (eg, the number of schools with international advanced pharmacy practice experiences, the number of schools using multiple mini-interviews), the author(s) should be able to draw some conclusions or make some recommendations based on the data collected. Simply finding that 50% of schools use “something” may be insufficient to advance educational practice. In addition to characterizing or “describing,” it may be prudent to collect data on best practices so the readership can grow as a result of the research.

Methods

Methods should be straightforward and explained well enough that they can be replicated. If you are conducting a novel analysis that might not be commonly encountered in educational and/or pharmacy education research (eg, latent curve model, ethnography), background is essential to enable readers to evaluate the methodology. Several commonly encountered limitations associated with methodology related to educational manuscripts are outlined below.

“I feel I learned more.” Asking students whether they feel they learned something is not the same as ascertaining whether they actually learned something. When possible, learning itself, rather than the perception of learning, should be the outcome of interest. Researchers ask students questions that yield responses along the lines of “I learned a lot,” “I learned more in this format,” or “I feel more confident.” Learners (and in particular students) are often poor judges of what they know and what they do not know.22 How often do faculty members rely on students’ self-perceptions of learning as a valid assessment of having achieved or not achieved an outcome? Even in the face of confidence, learning must be calibrated against some defined measure.23 Confidence judgments alone, in the absence of calibration against student learning, may not be helpful, and they become complicated when considering time effects24 or when relating confidence to various levels of students.25 The outcome should be learning.

There are, however, a couple of exceptions. The first is when the investigation specifically examines metacognitive judgments; that is, how to make students better assessors of their own knowledge. Here, confidence judgments are appropriate, but so is calibration of those judgments. The second area is motivation. Examining aspects of motivation such as self-efficacy can include confidence judgments. However, self-efficacy is built through performance accomplishments, experiences, persuasion, and physiologic feedback. An individual’s performance is the most reliable driver of motivation; thus, again, a measurement of learning performance is usually required.26 What is important is that we use the best measurements of learning, which is often the assessment of knowledge and skills. Perceptions of learning alone, without performance data, may be appropriate measures for metacognition or motivation.

Use thematic coding when appropriate. If student perception is a desired outcome, we suggest using qualitative methods such as thematic coding to identify potential elements of causation or explanatory variables. Thematic coding identifies passages of text (or images) linked by a common theme or idea, allowing the researcher to index the text into categories.27 These themes in turn can help others with their educational practices or future research. Because student perceptions can be used as a potential outcome measure, it is important that the researcher attempt to remain neutral regarding the results or impact of the intervention while conducting the study. While we understand the natural urge to “sell” active learning to students, outcomes based solely on student perception or attitude may introduce bias.
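As an illustration only (the comments and theme labels below are invented), once a human coder has tagged free-text comments with themes, tallying the resulting index is straightforward:

```python
from collections import Counter

# Hypothetical student comments, each already tagged with one or more
# theme labels by a human coder during thematic analysis.
coded_comments = [
    {"text": "The clicker questions kept me engaged.", "themes": ["engagement"]},
    {"text": "Too much prework before class.", "themes": ["workload"]},
    {"text": "Peer discussion helped me understand.", "themes": ["engagement", "peer learning"]},
    {"text": "Prework videos were long but useful.", "themes": ["workload", "engagement"]},
]

# Count how often each theme appears across all coded comments.
theme_counts = Counter(theme for c in coded_comments for theme in c["themes"])
print(theme_counts.most_common())
# [('engagement', 3), ('workload', 2), ('peer learning', 1)]
```

The analytic work in thematic analysis is, of course, the coding itself; the tally simply makes the indexed categories reportable.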

Results

The results section provides the reader with a clear presentation of the findings. This is accomplished by aligning the data with the objectives of the study, using tables or graphs to summarize information efficiently, and indicating important changes that occurred in the outcome variables. When research findings are reported, several issues recur in reviewer and editor comments; we discuss these below.

Statistics and p values. The most impactful published papers almost always include statistical analysis; even qualitative studies report numbers.28 Manuscripts that succeed in the review process clearly indicate the relevant statistical methods and resulting statistics. Generally, researchers set a significance threshold a priori, most often 0.05. When the rejection region for the hypothesis is set in advance, it is sufficient to report a p value as p<0.05 without outlining an absolute value that falls below this threshold (eg, p<0.0001).29,30 However, if the goal is to convey the strength of evidence, the exact p value should be reported.29,30 In some instances, p values alone may not be enough to demonstrate importance or significance to readers, and other elements may need to be defined or explained.

Means, medians, and effect size, oh my! Allow readers to see the data. They should be privy to medians, means, standard deviations, confidence intervals, correlation coefficients, regression parameters, figures, and/or graphs. Make efforts to use and report effect sizes (eg, a treatment effect such as Cohen’s d, a regression coefficient, an odds ratio). The use of effect sizes is highly encouraged by both the American Psychological Association (APA)31 and the American Educational Research Association (AERA)32 because they provide scale-free measures of practical meaningfulness.33,34 Effect sizes allow comparisons across studies by reporting the strength of the treatment or intervention regardless of sample size.33,35 For example, suppose a study compares a “flipped” course with a lecture-based course and reports a significant change in mean examination scores from 80% to 82% (with a standard deviation of ±10). While this may be a statistically significant change, the resultant effect size is 0.2 (ie, the average performance of a student in the “flipped” course is 0.2 standard deviations above the average performance in the “lecture” course). The question then becomes: given the small effect size, could a simpler intervention have been used that might have led to a greater effect (eg, providing examination feedback or review)?
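The flipped-versus-lecture example works out as follows (a minimal sketch; the helper functions and the group sizes in the last line are ours, not from the original study):

```python
import math

def pooled_sd(sd1, n1, sd2, n2):
    """Pooled standard deviation of two independent groups."""
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def cohens_d(mean_treat, mean_ctrl, sd):
    """Standardized mean difference (Cohen's d): difference in means in SD units."""
    return (mean_treat - mean_ctrl) / sd

# The example from the text: means of 82% and 80% with a standard deviation of 10.
d = cohens_d(82, 80, 10)
print(round(d, 2))  # 0.2

# When the two groups have unequal SDs, pool them first (hypothetical n and SD values).
d_unequal = cohens_d(82, 80, pooled_sd(9, 50, 11, 50))
```

Because d is in standard-deviation units, it is comparable across studies with different examinations and sample sizes, which is the property the APA and AERA guidance emphasizes.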

The dreaded significant digit and precision discussion. In terms of numbers, authors should be sure to consider significant digits, precision, and accuracy. A 10-question quiz cannot have a class average score of 85.5142%. In this case, there is “too much” precision. Because of the rules of significant digits, a class-average score on a 10-question quiz cannot be 85% either (unless there is partial credit). Authors should carefully select the appropriate number of significant digits to attach to figures based on the data set and methods being employed.

It’s dramatically larger. Results should concisely convey any quantitative or qualitative data associated with a study and lead the reader toward an understanding of the implications and importance of the outcomes. Authors should avoid general terms such as “better,” “higher,” “worse,” “lower,” or “dramatic.” Instead, they should provide specific units, such as the percentage increase or decrease.

Figures, graphs, or text. If results can be discussed easily in the text, then there is little need for tables or diagrams. Figures should allow the reader to easily distinguish between the factors being investigated, with clearly distinguishable symbols and lines and indications of any statistical differences.

The Discussion

The discussion should summarize findings, compare results to what we currently know, explain strengths and limitations of the research, and draw conclusions. Some readers may begin by reviewing the end of the discussion or conclusion sections to ascertain what they want to focus on as they subsequently analyze the manuscript.

More research in any given topic area is almost always needed. The statement “more research is needed” adds little to a manuscript when it lacks direction grounded in the key findings of the current study. Providing concrete directions for future research, however, can help advance the area.

There are limitations. All editors and reviewers will acknowledge that no study is free of limitations, and these should be outlined clearly for the reader. Doing so also helps readers and other investigators avoid repeating mistakes.

FINAL THOUGHTS

In addition to the previously stated areas for improvement, there are a few general items that potential authors may need to address:

The least publishable unit. There is significant and ever-increasing pressure to publish, and in pharmacy education our choice of venues may be small relative to the biomedical, pharmaceutical, social-administrative, and clinical sciences. One often disingenuous method of increasing publications is to focus on the least publishable unit from a study, commonly termed “salami slicing.”36 By fragmenting data into pieces rather than combining them into one cohesive manuscript, authors may garner more than one publication from a single data set. More often than not, this is unwarranted and superfluous. The Journal does not publish serial papers, and researchers should be cautious of “me too” papers. It is possible that previously published works and topical areas might be reexamined using different or alternative methods or considered in light of a new innovation or discovery. Papers should focus on one key topic and tell a good story; if multiple key topics result from a research study, this may warrant the submission of multiple papers.

Does it read well? Before submitting a manuscript, read it for clarity, grammar, and flow, and ensure it adheres to the Journal’s published guidelines. Manuscripts should be submitted double-spaced to assist reviewers and editors in reading the paper. Manuscripts need not be submitted in the final AJPE publication format, but authors should follow the guidelines for references, section headings, and style. While perfect grammar is not a prerequisite for publication, poor grammar will only invite greater scrutiny. Smith reported that the number one reason editors reject manuscripts is poor writing.10 Manuscripts should tell a logical, concise, and complete story.

Get outside readers. Authors might consider asking at least two colleagues to proofread their manuscript—one who is an expert in the given topical area and, ideally, one who is not. Experts should focus on the validity of the research question as well as the methods employed to test any hypothesis. Nonexperts should approach the manuscript like a typical reader assessing for clarity, conciseness, and readability.

References, in and out of pharmacy. References should be timely and include primary literature. References should also, whenever possible, engender outside views including but not limited to other health science disciplines. In addition, studies that are interprofessional have added value as they increase generalizability and help promote outside views and readership.

Across higher education, more faculty members are engaging in the scholarship of teaching and learning, and more schools of pharmacy are recognizing the scholarship of teaching and learning as a means toward promotion and tenure. Successful publication will continue to become increasingly competitive. Developing a study and executing research requires a great deal of time and other resources. Drafting a manuscript is equally difficult and requires commitment as well as persistence and, in some cases, practice. There are no shortcuts to effective writing: the most prolific writers write daily and revise their work constantly.11-13,37 It is the responsibility of researchers to design thoughtful studies and write coherent manuscripts. Reviewers and editors must in turn ensure that research standards are met and that writing improves and excels across the academy. It is incumbent upon the Journal to address rising submission rates while acknowledging the contributions of “good ideas without much data”; good ideas are needed to drive innovation, solve problems, and generate novel hypotheses. All these efforts in tandem will improve the quality of writing across the academy, the impact of educational research in pharmacy, and the profession’s impact on higher education.

  • Received March 23, 2015.
  • Accepted June 15, 2015.
  • © 2016 American Association of Colleges of Pharmacy

REFERENCES

  1. Draugalis JR, Coons SJ, Plaza CM. Best practices for survey research reports: a synopsis for authors and reviewers. Am J Pharm Educ. 2008;72(1):Article 11.
  2. Draugalis JR, Plaza CM. Best practices for survey research reports revisited: implications of target population, probability sampling, and response rate. Am J Pharm Educ. 2009;73(8):Article 142.
  3. Fincham JE, Draugalis JR. The importance of survey research standards. Am J Pharm Educ. 2013;77(1):Article 4.
  4. Meszaros K, Barnett MJ, Lenth RV, Knapp KK. Pharmacy school survey standards revisited. Am J Pharm Educ. 2013;77(1):Article 3.
  5. McLaughlin JE, Dean MJ, Mumper RJ, Blouin RA, Roth MT. A roadmap for educational research in pharmacy. Am J Pharm Educ. 2013;77(10):Article 218.
  6. Poirier T, Crouch M, MacKinnon G, Mehvar R, Monk-Tutor M. Updated guidelines for manuscripts describing instructional design and assessment: the IDEAS format. Am J Pharm Educ. 2009;73(3):Article 55.
  7. Bartsch RA. Designing SoTL studies: part II: practicality. New Dir Teach Learn. 2013;136:35-48.
  8. Bartsch RA. Designing SoTL studies: part I: validity. New Dir Teach Learn. 2013;136:17-33.
  9. Christopher AN. Navigating the minefields of publishing. New Dir Teach Learn. 2013;136:85-99.
  10. Smith RA. Tell a good story well: writing tips. New Dir Teach Learn. 2013;136:73-83.
  11. Gray T. Publish and Flourish: Become a Prolific Scholar. Las Cruces, NM: Teaching Academy, New Mexico State University; 2005.
  12. Silvia PJ. How to Write a Lot: A Practical Guide to Productive Academic Writing. Washington, DC: American Psychological Association; 2007.
  13. Zinsser WK. On Writing Well: The Classic Guide to Writing Nonfiction. 30th Anniversary ed. New York: HarperCollins; 2006.
  14. Brazeau GA, Dipiro JT, Fincham JE, Boucher BA, Tracy TS. Your role and responsibilities in the manuscript peer review process. Am J Pharm Educ. 2008;72(3):Article 69.
  15. Bonjean CM, Hullum J. Reasons for journal rejection: an analysis of 600 manuscripts. PS. 1978;11(4):480-483.
  16. Celik E, Gedik N, Karaman G, Demirel T, Goktas Y. Mistakes encountered in manuscripts on education and their effects on journal rejections. Scientometrics. 2014;98(3):1837-1853.
  17. Clarke SP. Advice to authors: the “big 4” reasons behind manuscript rejection. CJNR. 2005;37(3):5-9.
  18. Norman G. Data dredging, salami-slicing, and other successful strategies to ensure rejection: twelve tips on how to not get your paper published. Adv Health Sci Educ. 2015;19(1):1-5.
  19. Smith MU, Wandersee JH, Cummins CL. What’s wrong with this manuscript?: an analysis of the reasons for rejection given by Journal of Research in Science Teaching reviewers. J Res Sci Teach. 1993;30(2):209-211.
  20. Bell M, Schraff L. The creation and refinement of a sustainable multimedia process in a higher education environment. J Res Center Educ Tech. 2008;4(2):83-95.
  21. McCarney R, Warner J, Iliffe S, Van Haselen R, Griffin M, Fisher P. The Hawthorne Effect: a randomised, controlled trial. BMC Med Res Method. 2007;7(1):30.
  22. Zell E, Krizan Z. Do people have insight into their abilities? A metasynthesis. Persp Psychol Sci. 2014;9(2):111-125.
  23. Dunlosky J, Thiede KW. Four cornerstones of calibration research: why understanding students’ judgments can improve their achievement. Learn Instruct. 2013;24(1):58-61.
  24. Koriat A, Sheffer L, Ma’ayan H. Comparing objective and subjective learning curves: judgments of learning exhibit increased underconfidence with practice. J Exp Psychol General. 2002;131(2):147-162.
  25. Hacker DJ, Bol L, Horgan DD, Rakow EA. Test prediction and performance in a classroom context. J Educ Psychol. 2000;92(1):160-170.
  26. Schunk DH. Self-efficacy and academic motivation. Educ Psychol Measure. 1991;26(3-4):207-231.
  27. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77-101.
  28. O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89(9):1245-1251.
  29. Goodman S. A dirty dozen: twelve p-value misconceptions. Sem Hematol. 2008;45(3):135-140.
  30. Grunkemeier GL, Wu Y, Furnary AP. What is the value of a p value? Ann Thorac Surg. 2009;87(5):1337-1343.
  31. DeCleene KE, Fogo J. Publication Manual of the American Psychological Association. Occup Ther Health Care. 2012;26(1):90-92.
  32. American Educational Research Association. Standards for reporting on empirical social science research in AERA publications. Educ Res. 2006;35(6):33-40.
  33. Kelley K, Preacher KJ. On effect size. Psychol Method. 2012;17(2):137-152.
  34. Keselman JC, Keselman HJ. Detecting treatment effects in educational research. Educ Psychol Measure. 1987;47(4):903-910.
  35. Maher JM, Markey JC, Ebert-May D. The other half of the story: effect size analysis in quantitative research. CBE Life Sci Educ. 2013;12(3):345-351.
  36. Dupps WJ, Randleman JB. The perils of the least publishable unit. J Refract Surg. 2012;28(9):601-602.
  37. King S. On Writing: A Memoir of the Craft. New York, NY: Scribner; 2000.
