Research Article

Factors Affecting Student Time to Examination Completion

Adam M. Persky and Hannah Mierzwa
American Journal of Pharmaceutical Education September 2018, 82 (7) 6321; DOI: https://doi.org/10.5688/ajpe6321
Adam M. Persky: Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina; Associate Editor, Arlington, Virginia
Hannah Mierzwa: Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina

Abstract

Objective. To investigate factors (prior or current knowledge, metacognitive accuracy, and personality) that might impact the time it takes students to complete an examination.

Methods. On the final examination, the time to complete the examination was recorded. Prior to the course, students completed the five-factor personality assessment. During the semester, students completed four cumulative assessments that included prospective judgments of performance to improve their metacognitive accuracy. Measures of metacognitive accuracy were calculated from the difference between the students’ prospective judgments of performance and their actual assessment performance for the final examination. Two weeks prior to the final examination, students completed a cumulative assessment, which served as prior knowledge; this was similar in content to the final examination.

Results. Time to complete the final examination was significantly negatively correlated with examination score and positively correlated with agreeableness and degree of metacognitive bias. However, only current knowledge (β=-.35) and agreeableness (β=.12) predicted the time to complete the final examination. These two factors explained about 14% of the variability in completion times. When the completion-time scale was divided into tertiles, there were some regional differences among the slowest, intermediate, and fastest completers.

Conclusion. Current knowledge and to a lesser extent, pro-social behavior (agreeableness) influenced examination completion time. Metacognitive accuracy had limited predictability in time to complete the examination.

Keywords
  • pharmacokinetics
  • personality
  • metacognition
  • time on task

INTRODUCTION

Self-regulation is the ability to act in one’s long-term best interest.1 Self-regulation is a critical influencer of time-to-complete a given task (eg, an examination) because self-regulation directs motivation and attention (eg, what material needs attention).2,3 One’s ability to self-regulate is an interaction of prior experiences, personality, environment, and ability to monitor one’s own thinking (ie, metacognition). There is very little research on factors that determine the self-regulation of students during an examination. This exploratory analysis examines potential factors that might influence how long it takes students to complete a high stakes, cumulative examination with time to completion being a measure of self-regulation.

Learners perform best when they approach a learning situation with prior knowledge related to the situation.4,5 With more prior knowledge, a learner can detect and recognize features, generate a solution and perform more quickly and accurately; this is the nature of expertise.4,5 As such, students entering an examination with high prior knowledge should complete examinations more quickly. In this study, prior knowledge was assessed with a cumulative, low stakes assessment that occurred two weeks prior to the final examination.

As learners progress toward expertise, they gain more knowledge and develop better metacognitive monitoring, allowing them to judge how well they know a given fact or can complete a given skill. Metacognition is a central practice in self-regulated learning. The accuracy of metacognitive monitoring can be affected by various factors, including problem or test difficulty, background or prior knowledge, and desired performance level.6-8 More metacognitively accurate individuals may have extended testing times due to increased processing demands; that is, they engage in more thinking when faced with a problem.9,10 For example, students may use how quickly they could answer a question as an anchor in judging how well they did on an examination. However, the speed with which answers come to mind does not relate to performance, and some evidence suggests that slower answers were more often correct.11,12 Even if students engaged in some metacognitive monitoring during the learning process, they may not apply it in a testing environment.7,13 These students could be faster or slower in completing an examination depending on their metacognitive ability. In this study, metacognitive accuracy was assessed by bias and absolute bias. Bias is the difference between the predicted score and actual performance and includes both directionality (ie, under- or overconfidence) and the magnitude of that directionality; absolute bias captures only the magnitude, or accuracy. It is expected that learners with higher degrees of overconfidence (predicted score > actual score) may proceed more quickly through an examination compared with students with stronger degrees of underconfidence (predicted score < actual score).

Time to complete an examination requires time management skills, and these skills may drive the link between personality traits and academic achievement.14 Personality research has established a five-factor structure with the dimensions of openness to experience, conscientiousness, extraversion, agreeableness, and emotional stability (or neuroticism).15 Numerous studies have examined the impact of the five-factor personality traits on academic achievement, and among these, conscientiousness correlates most strongly with achievement.16-20 It has been hypothesized that this relationship reflects the use of better time management strategies, and there is a positive relationship between higher levels of time management skill and higher levels of conscientiousness.21,22 Openness to experience appears to be positively correlated with academic achievement.20 Emotional stability (or neuroticism) and achievement have shown a negative correlation, suggesting that elevated emotional instability places learners at risk of reduced academic achievement. Agreeableness and extraversion have shown mixed and inconclusive results in relation to academic achievement.20,23 As such, there may be a relationship between the five-factor personality traits and time management during examination completion. In this study, the five-factor model was used because of its reliability, validity, and the prior research investigating its relationship to academic achievement.24

Taken together, the authors examined the impact of prior knowledge, current knowledge, metacognitive accuracy and personality on time to complete an examination. The findings from this study can be used to help develop better study or test taking strategies for students or allow for better time allocation for examination taking.

METHODS

Participants were student pharmacists enrolled in the Doctor of Pharmacy (PharmD) program at a large, public university in the southeastern United States. In that program, pharmacokinetics was a required 3-credit course in which students met once a week for three hours and course format has been described previously.25 During a two-year period, 295 students were enrolled in the course, spread across two campuses that were synchronously video-conferenced. Upon admission to the program, the average age of students was 23 years (range 19-50), and approximately 65% were female. The mean grade point average upon admission was 3.5 (out of 4.0), the mean Pharmacy College Admission Test (PCAT) score was approximately 87%, and approximately 80% of students had a prior degree.

The course consisted of two parts. Part one used team-based learning (TBL) and occurred over the first 10 weeks of the semester. There were five main sections (pharmacodynamics, single dose kinetics, approach to steady-state, violations of the one-compartment model, and physiologic influences on clearance) that accounted for nine different topics. Each section started with the readiness assurance process (ie, individual assessment followed by team-assessment) and the remaining time dedicated to cases. Part two of the course occurred during the last five weeks of the semester and involved students completing one out-of-class and one in-class case both of which integrated the course material.

Prior Knowledge. Four 36-question cumulative assessments were administered during the course. These examinations were used to improve learning through the “testing effect” and to allow students to improve their metacognitive monitoring. Each examination sampled from the nine major topics covered in the course. The first three assessments were unique, based on a random drawing of four questions from each of nine question pools (corresponding to the nine topics), for a total of 36 questions on each test. Each pool contained 12 to 20 questions that had been used in previous years of the course. Over 90% of the questions were at the level of application or higher according to Bloom’s Cognitive Taxonomy.26 These first three examinations were administered online and were low stakes. The first examination (week 1) provided a baseline of performance; the second examination (week 9) occurred at the completion of content; the third examination (week 14) occurred midway through the review and integration activities in the second part of the course. This third assessment was used as the measure of prior knowledge.

Current Knowledge Assessment. The fourth examination was the final examination (week 16), which consisted of newly created questions not drawn from the existing question pools. This examination was administered in class and employed an answer-until-correct format (IF-AT, Epstein Educational Enterprise) in which students received immediate feedback on the accuracy of each response.27 Students used the same answer-until-correct format for the readiness assurance process during the TBL section of the course (weeks 1 through 10). The final examination was used as the measure of current knowledge. Scores were examined with and without partial credit for two reasons: first, the first three examinations awarded no partial credit, making scores easier to compare across examinations; second, learners would have extended testing times when correcting errors.
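The with- and without-partial-credit scoring described above can be made concrete with a short sketch. The paper does not report its partial-credit weights for later attempts, so the values below are purely hypothetical and illustrative:

```python
# Hypothetical partial-credit scheme for an answer-until-correct item.
# The study does not report its exact weights; these are illustrative only.
CREDIT_BY_ATTEMPT = {1: 1.0, 2: 0.5, 3: 0.25}  # 4th or later attempt earns 0

def item_score(attempts_to_correct: int, partial_credit: bool = True) -> float:
    """Score for one question given the attempt on which it was answered correctly."""
    if attempts_to_correct == 1:
        return 1.0
    if not partial_credit:
        return 0.0  # mirrors the no-partial-credit scoring of examinations 1-3
    return CREDIT_BY_ATTEMPT.get(attempts_to_correct, 0.0)

# One student's attempts per question: 1 means correct on the first try.
attempts = [1, 1, 2, 3, 1, 4]
with_partial = sum(item_score(a) for a in attempts) / len(attempts)
without = sum(item_score(a, partial_credit=False) for a in attempts) / len(attempts)
print(with_partial, without)  # 0.625 0.5
```

Under this sketch the same response pattern yields two comparable scores, which is the point of examining both: the partial-credit score reflects error correction (and the extra time it takes), while the strict score is commensurable with the earlier assessments.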

Self-regulation Time to Completion. During the final examination, either the students or the instructor recorded the clock time the examination was submitted. Duration to complete the examination was based on this recorded clock time.

Metacognitive Monitoring. At the start of each of the four examinations, students made a global prediction of their performance: “On this assessment, I expect to receive a __%.” A bias score was calculated (predicted score minus actual score) to determine the degree of overconfidence (positive values) or underconfidence (negative values).28 An absolute bias score (the absolute value of the difference between predicted and actual performance) was calculated to determine the students’ metacognitive accuracy.28 Students were asked to make these judgments on each examination to help develop their metacognitive monitoring skills. For this study, only the measures of bias and absolute bias on the final examination were used.
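The two calibration measures reduce to a one-line formula each. A minimal sketch, using hypothetical scores rather than study data:

```python
# Bias and absolute bias from a global performance prediction.
# The example scores are hypothetical illustrations, not data from the study.
def bias(predicted: float, actual: float) -> float:
    """Positive = overconfident, negative = underconfident."""
    return predicted - actual

def absolute_bias(predicted: float, actual: float) -> float:
    """Magnitude of miscalibration, ignoring direction."""
    return abs(predicted - actual)

# A student predicting 90% but scoring 82% is overconfident by 8 points;
# a student predicting 75% but scoring 83% is equally miscalibrated,
# but in the underconfident direction.
print(bias(90, 82), absolute_bias(90, 82))  # 8 8
print(bias(75, 83), absolute_bias(75, 83))  # -8 8
```

The pairing shows why both measures are needed: the two students have identical absolute bias (accuracy) but opposite bias (direction), and the study's hypotheses treat those directions differently.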

Personality Assessment. Prior to the semester, students completed the five-factor personality (eg, Big Five, NEO-I) test that was freely available (http://www.outofservice.com/bigfive/) to assist in team assignments. Scores were recorded on a 0% to 100% scale with higher numbers indicating a stronger trait.

All variables are summarized in Table 1. Data were graphed to examine the relationship between time to complete the examination and each factor of interest. Based on visual inspection, data were analyzed both as an entire cohort and by completion-time tertiles to discern regional effects. Spearman correlations were performed on all variables using SPSS software (IBM Corporation, Armonk, NY). A one-way ANOVA with Tukey post-hoc tests was used to compare tertiles. Finally, multiple linear regression was performed, using entry criteria, on the entire cohort and on each tertile. This study was deemed exempt from review by the Institutional Review Board. Statistical significance was set at p<.05.
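The tertile split and the rank-based (Spearman) correlation at the core of this analysis can be sketched in plain Python. The study itself used SPSS; the data below are hypothetical, not the study's:

```python
# Sketch of the analysis pipeline: a completion-time tertile split and a
# Spearman correlation, in pure Python. The data are hypothetical.
def ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a block of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def tertiles(times):
    """Indices split into fastest, middle, and slowest thirds."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    n = len(order)
    return order[: n // 3], order[n // 3 : 2 * n // 3], order[2 * n // 3 :]

times = [62, 75, 88, 95, 110, 120, 131, 140, 150]   # minutes, hypothetical
scores = [94, 92, 90, 88, 85, 84, 80, 79, 75]       # percent, hypothetical
fast, mid, slow = tertiles(times)
print(spearman(times, scores))  # approximately -1: longer times, lower scores
```

Once the cohort is split this way, the same correlation can be computed within each tertile, which is how the "regional" effects reported in the Results would surface.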

Table 1.

Summary of Study Variables

RESULTS

There were 266 participants with complete data sets. The full correlation matrix can be found in Appendix 1. Overall, time to complete the final examination was negatively correlated with final examination performance both with and without partial credit (Table 2). Time to complete the final examination also was positively correlated with agreeableness and metacognitive bias. These findings suggest that students who completed the examination more quickly performed better, had lower levels of the agreeableness trait, and had higher degrees of underconfidence (or lower degrees of overconfidence). However, there were some regional differences. In the fastest tertile, there was a significant negative relationship between time to complete the final examination and final examination score with and without partial credit; there was no relationship with personality or metacognitive monitoring. In the middle tertile, agreeableness was positively associated with time to complete the final examination. For the slowest tertile, there was a significant negative association between final examination completion time and performance with and without partial credit.

Table 2.

Correlations of Various Factors with Time to Complete the Final Examination

Each time tertile was compared for significant differences on each variable of interest (Table 3). The fastest tertile completed the examination on average an hour earlier than the slowest third and approximately a half-hour earlier than the middle third. The fastest students performed less than half a grade better than the slowest students. There were no differences in personality or metacognitive monitoring between any of the tertiles.

Table 3.

Summary of ANOVA Results

Finally, a regression analysis was performed on each time tertile and the entire cohort (Table 4). For the entire cohort, agreeableness and current knowledge significantly predicted time to complete the final examination, explaining 14% of the variability; these results parallel the correlation analysis. Metacognitive accuracy, as measured by absolute bias, predicted time to complete the examination for the early completers only. However, agreeableness predicted time to complete the examination for the middle tertile. Finally, current knowledge and metacognitive bias predicted time to complete the examination for the slowest tertile.

Table 4.

Summary of Multiple Linear Regression Results. Data shown as standardized beta (p value)

DISCUSSION

This is one of the first studies that examined self-regulation as a function of the time to complete an authentic, in-class assessment. Factors including current and prior knowledge, metacognitive accuracy and personality traits were examined. All three factors seem to play a role in the self-regulation of completing an examination.

The first hypothesis was that students with a higher level of prior knowledge, based on in-semester assessments, would complete the task more quickly. Prior knowledge, however, did not affect time to complete the examination; current knowledge was more influential. Overall, the authors found that current knowledge predicted time to complete the examination. The discrepancy between the effects of prior and current knowledge could be caused by students studying in the two weeks leading up to the final examination. This acute study and practice may lead to additional learning or a higher level of retrieval strength and, thus, higher overall performance. These learners may move through examination questions with more confidence because answers come to mind more quickly, reducing time per question and overall review time.

The authors hypothesized that metacognitive monitoring would play a role in examination time, but its effects were limited. The limited results suggest that students who tend toward overconfidence (or less underconfidence) completed the task more slowly. This may relate to their use of study strategies prior to the examination. Typically, the lowest-performing students tend to show the greatest overconfidence (ie, the “unskilled-and-unaware” effect), although this was not true in this study; there was no statistical difference in metacognitive bias or accuracy.29 However, overconfidence can reduce the efficacy of students’ short-term study behaviors, leading them to underprepare for examinations or to mass their practice instead of spacing it appropriately.29,30 If students massed their practice, they would have high retrieval strength for the knowledge and skills but low storage strength and, thus, lower long-term retention.31 Their ability to cram can still lead to high performance. However, the metacognitive measures had limited predictive utility. This may be due to the answer-until-correct format of the final assessment. Students were aware of how they were scoring as they went through each question, eliminating time that would have been spent second-guessing or re-reading questions after providing an initial answer. Judgment of their understanding was provided externally as feedback immediately after answering a question and thus was not dependent on internal metacognitive monitoring. In addition, students practiced their metacognitive judgments and became more accurate despite inter-student variability (data not shown). Metacognitive monitoring may have been a more important factor if students had not practiced their judgments over the course of the semester and had not received external feedback from the examination format.

The final factor investigated was personality. Within this study, only agreeableness seemed to play a role. Individuals high in agreeableness are characterized as kind, appreciative, generous, cooperative, and friendly; those who score low are characterized as fault-finding, quarrelsome, and thankless.24 This trait may explain time to complete the examination based on its relationship to social desirability, self-regulation, and boredom.32-37 On one hand, the “social do” may dictate when students turn in their examination rather than when they have actually finished it: “I should turn it in after a certain time, but I should not be the first to turn it in.” Students may feel social pressure about the appropriate time to submit an examination. This might be modulated by the inverse relationship between agreeableness and boredom, with highly agreeable students showing less boredom and persisting longer through a task.33 In addition, agreeableness is related to self-control and effortful control; individuals higher in agreeableness can modulate their emotional responses and persist through tasks like completing an examination.36 Thus, in this study, students with higher levels of agreeableness may have completed their examinations more slowly because of social desirability and their ability to modulate emotions.

The factors explored explained only a moderate amount of the variability, a reminder that human behavior is often quite complex. There are three potential confounders within the study: problem difficulty, working memory capacity, and metacognitive practice. The first is problem difficulty. Hoffman and colleagues found that differences in metacognitive prediction depend on problem difficulty, with metacognitive prompting and self-efficacy interacting more strongly for more complex math problems.38 This may or may not be a factor here, given students’ high levels of practice prior to the final examination, leading to more automaticity and high overall performance. Another potential factor is working memory capacity. In one study, higher working memory capacity was correlated with faster improvements in response time.39 Working memory capacity was not investigated within this study but may explain another portion of the variance. Finally, students practiced their metacognitive judgments, so all students became more accurate over time. It is unclear whether the results would change if students did not practice these judgments, which could lead to larger differences in accuracy and larger degrees of underconfidence and overconfidence.40,41

CONCLUSION

Current knowledge, and to a lesser extent, pro-social behavior (ie, agreeableness) influenced examination completion time. Metacognitive monitoring had limited predictability in time to complete the examination. Regions of the performance versus time relationship (eg, slow vs fast) may be governed by different drivers of self-regulation and these relationships may be complex. In general, instructors may have to help students develop behaviors to overcome natural tendencies in their personality to enhance performance. More importantly, time-limited examinations may negatively affect students with certain personality traits.

ACKNOWLEDGMENTS

The authors would like to thank Anita Scottie, PharmD candidate, and Kathryn Fuller, PharmD, for their assistance in preparing this manuscript.

Appendix 1. Full Correlation Matrix for All Included Variables

Table

  • Received February 6, 2017.
  • Accepted August 15, 2017.
  • © 2018 American Association of Colleges of Pharmacy

REFERENCES

1. Bjork RA, Dunlosky J, Kornell N. Self-regulated learning: beliefs, techniques, and illusions. Ann Rev Psychol. 2013;64(1):417-444.
2. Butler DL, Winne PH. Feedback and self-regulated learning: a theoretical synthesis. Rev Educ Res. 1995;65(3):245-281.
3. Stone NJ. Exploring the relationship between calibration and self-regulated learning. Educ Psych Rev. 2000;12(4):437-475.
4. Ericsson KA, Charness N, Hoffman RR, Feltovich PJ, eds. The Cambridge Handbook of Expertise and Expert Performance. Cambridge, UK: Cambridge University Press; 2006.
5. Persky AM, Robinson J. Expert novice difference. Am J Pharm Educ. In press.
6. Pressley M, Ghatala ES. Delusions about performance on multiple-choice comprehension tests. Int Read Assoc. 1988;23(4):454-464.
7. Schraw G, Roedel TD. Test difficulty and judgment bias. Mem Cognition. 1994;22(1):63-69.
8. Nietfeld JL, Schraw G. The effect of knowledge and strategy training on monitoring accuracy. J Educ Res. 2002;95(3):131-142.
9. Weaver CA. Constraining factors in calibration of comprehension. J Exp Psychol Learn Mem Cognit. 1990;16(2):214-222.
10. Maki RH, Foley JM, Kajer WK, Thompson RC, Willert MG. Increased processing enhances calibration of comprehension. J Exp Psychol Learn Mem Cognit. 1990;16(4):609-616.
11. Serra MJ, Dunlosky J. Does retrieval fluency contribute to the underconfidence-with-practice effect? J Exp Psychol Learn Mem Cognit. 2005;31(6):1258-1266.
12. Benjamin AS, Bjork RA, Schwartz BL. The mismeasure of memory: when retrieval fluency is misleading as a metamnemonic index. J Exp Psychol Gen. 1998;127(1):55-68.
13. Winne PH, Jamieson-Noel D. Exploring students’ calibration of self reports about study tactics and achievement. Contemp Educ Psych. 2002;27(4):551-572.
14. Credé M, Kuncel NR. Study habits, skills, and attitudes: the third pillar supporting collegiate academic performance. Perspect Psychol Sci. 2008;3(6):425-453.
15. McCrae RR, Costa PT. Validation of the five-factor model of personality across instruments and observers. J Pers Soc Psychol. 1987;52(1):81-90.
16. Chamorro-Premuzic T, Furnham A. Personality traits and academic examination performance. Eur J Pers. 2003;17(3):237-250.
17. Wolfe RN, Johnson SD. Personality as a predictor of college performance. Educ Psychol Meas. 1995;55(2):177-185.
18. O’Connor MC, Paunonen SV. Big five personality predictors of post-secondary academic performance. Pers Individ Diff. 2007;43(5):971-990.
19. Poropat AE. A meta-analysis of the five-factor model of personality and academic performance. Psychol Bull. 2009;135(2):322-338.
20. Duff A, Boyle E, Dunleavy K, Ferguson J. The relationship between personality, approach to learning and academic performance. Pers Individ Diff. 2004;36(8):1907-1920.
21. Liu OL, Rijmen F, MacCann C, Roberts R. The assessment of time management in middle-school students. Pers Individ Diff. 2009;47(3):174-179.
22. Kelly WE, Johnson JL. Time use efficiency and the five-factor model of personality. Educ. 2005;125(3):511-515.
23. Eysenck HJ, Cookson D. Personality in primary school children. 1. Ability and achievement. Brit J Educ Psychol. 1969;39(2):109-122.
24. John OP, Naumann LP, Soto CJ. Paradigm shift to the integrative big-five trait taxonomy: history, measurement, and conceptual issues. In: John OP, Robins RW, Pervin LA, eds. Handbook of Personality: Theory and Research. New York, NY: Guilford Press; 2008:114-158.
25. Persky AM, Henry T, Campbell A. An exploratory analysis of personality, attitudes, and study skills on the learning curve within a team-based learning environment. Am J Pharm Educ. 2015;79(2):Article 1.
26. Anderson LW, Krathwohl DR. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. Complete Edition. New York, NY: Longman; 2001.
27. Persky AM, Pollack GM. Using answer-until-correct examinations to provide immediate feedback to students in a pharmacokinetics course. Am J Pharm Educ. 2008;72(4):Article 83.
28. Dunlosky J, Thiede KW. Four cornerstones of calibration research: why understanding students' judgments can improve their achievement. Learn Instruct. 2013;24(1):58-61.
29. Serra MJ, DeMarree KG. Unskilled and unaware in the classroom: college students’ desired grades predict their biased grade predictions. Mem Cognit. 2016;44(7):1127-1137.
30. Son LK, Simon DA. Distributed learning: data, metacognition, and educational implications. Educ Psychol Rev. 2012;24(3):379-399.
31. Bjork RA, Bjork EL. A new theory of disuse and an old theory of stimulus fluctuation. In: Estes WK, Healy AF, Kosslyn SM, Shiffrin RM, eds. From Learning Processes to Cognitive Processes: Essays in Honor of William K. Estes. Vol 2. Hillsdale, NJ: L. Erlbaum Associates; 1992.
32. Buratti S, Allwood CM, Kleitman S, et al. First- and second-order metacognitive judgments of semantic memory reports: the influence of personality traits and cognitive styles. Metacognit Learn. 2013;8(1):79-102.
33. Sulea C, van Beek I, Sarbescu P, Virga D, Schaufeli WB. Engagement, boredom, and burnout among students: basic need satisfaction matters more than personality traits. Learn Individ Diff. 2015;42:132-138.
34. Washburn DA, Smith JD, Taglialatela LA. Individual differences in metacognitive responsiveness: cognitive and personality correlates. J Gen Psychol. 2005;132(4):446-461.
35. Cortes K, Kammrath LK, Scholer AA, Peetz J. Self-regulating the effortful "social dos". J Pers Soc Psychol. 2014;106(3):380-397.
36. Jensen-Campbell LA, Rosselli M, Workman KA, Santisi M, Rios JD, Bojan D. Agreeableness, conscientiousness, and effortful control processes. J Res Personal. 2002;36(5):476-489.
37. Peterson CH, Casillas A, Robbins SB. The student readiness inventory and the big five: examining social desirability and college academic performance. Pers Individ Diff. 2006;41(4):663-673.
38. Hoffman B, Spatariu A. The influence of self-efficacy and metacognitive prompting on math problem-solving efficiency. Contemp Educ Psychol. 2008;33(4):875-893.
39. Bo J, Jennett S, Seidler RD. Working memory capacity correlates with implicit serial reaction time task performance. Exp Brain Res. 2011;214(1):73-81.
40. Hacker DJ, Bol L, Horgan DD, Rakow EA. Test prediction and performance in a classroom context. J Educ Psychol. 2000;92(1):160-170.
41. Hartwig M, Persky AM. Repeated cumulative testing in a classroom: underconfidence-with-practice and diminishing absolute bias for authentic course materials. In development.