Brief Report

Student Performance on Graded Versus Ungraded Readiness Assurance Tests in a Team-Based Learning Elective

Sarah T. Eudaley, Michelle Z. Farland, Tyler Melton, Shelby P. Brooks, R. Eric Heidel and Andrea S. Franks
American Journal of Pharmaceutical Education November 2022, 86 (9) ajpe8851; DOI: https://doi.org/10.5688/ajpe8851
Sarah T. Eudaley, PharmD, University of Tennessee Health Science Center, College of Pharmacy, Knoxville, Tennessee
Michelle Z. Farland, PharmD, University of Florida, College of Pharmacy, Gainesville, Florida
Tyler Melton, PharmD, MPH, University of Tennessee Health Science Center, College of Pharmacy, Knoxville, Tennessee
Shelby P. Brooks, PharmD, University of Louisiana at Monroe, College of Pharmacy – Shreveport Campus, Shreveport, Louisiana
R. Eric Heidel, PhD, University of Tennessee, Graduate School of Medicine, Knoxville, Tennessee
Andrea S. Franks, PharmD, University of Tennessee Health Science Center, College of Pharmacy, Knoxville, Tennessee

Abstract

Objective. Team-based learning is widely used in pharmacy education. Within this framework, graded readiness assurance tests (RATs) are commonly used to incentivize preclass preparation and to ensure students arrive ready to engage in team-based learning. The purpose of this study was to determine the effect of graded versus ungraded RATs on examination performance in an ambulatory care elective course for third-year student pharmacists.

Methods. The course, offered in spring 2020 and spring 2021, employed a standard team-based learning framework. In 2020, RATs were graded and contributed to the overall course grade (graded RAT cohort); in 2021, RAT grades did not contribute to the course grade (ungraded RAT cohort). At the end of the course, students in the ungraded RAT cohort completed an anonymous online survey regarding class preparation and perceived team accountability.

Results. No significant difference was found between the graded RAT (n=47) and ungraded RAT (n=36) cohorts in the overall mean percentage score on individual RATs (76% vs 74%) or on individual examinations (82% vs 80%). Most students (69%-91%) in the ungraded RAT cohort reported completing preclass preparation assignments. In the postcourse survey, 94% of students agreed or strongly agreed that RATs contributed to team members’ learning, and 86% agreed or strongly agreed that they were proud of their ability to assist in the team’s learning.

Conclusion. Ungraded RATs did not significantly impact students’ examination performance in an elective course. Removing the grade incentive, which promotes a performance approach to learning, may have shifted students’ motivation toward a mastery approach to preclass preparation. This challenges the widely held belief that grades are a necessary incentive for preclass preparation within team-based learning.

Keywords:
  • team-based learning
  • pharmacy education
  • readiness assurance test
  • assessment

INTRODUCTION

Team-based learning (TBL), a student-centered, collaborative, active learning strategy, is widely used in pharmacy education.1 Literature characterizing TBL in pharmacy education describes student performance, critical thinking, problem-solving, and engagement as well as faculty and student perceptions.2-9 The standard framework for TBL includes individual preclass preparation, readiness assurance tests (RATs), team application exercises, content expert-facilitated interteam discussion, and peer evaluation. The RATs consist of multiple-choice items developed to ensure students have the foundational knowledge and preparation needed to engage in high-level discussions during team application exercises. The individual RAT (iRAT) assures individual student preparation, while the team RAT (tRAT) allows peer-to-peer teaching as students clarify concepts from the preparation materials while discussing the test. Within the standard TBL framework, RATs contribute to the overall course grade and thereby incentivize students to prepare for class.10-12

In the 2020-2021 academic year, the synchronous online course delivery required at that time created a new learning environment. While continued use of TBL maintained a community atmosphere with active engagement, it was not feasible to proctor formative assessments. Because RATs are delivered in a multiple-choice format and focus on foundational knowledge, unproctored delivery creates the potential for academic dishonesty, which negates the learning value of the readiness assurance process.13 One approach to mitigating academic dishonesty is to remove the motivation to engage in these behaviors by not having scores contribute to the course grade.14,15

Limited evidence exists on how ungraded readiness assurance tests influence TBL. Student accountability for preclass preparation is paramount for successful student learning and is assessed using iRATs.10 Some educators posit that an incentive structure (eg, a graded activity) must be in place as an extrinsic motivator to encourage students to complete preclass preparation.11,12 Others contend that graded formative assessments, like RATs, cause undue student stress and encourage superficial learning aimed only at performing on the assessment.14 Graded assessments as incentives may motivate learners to adopt performance-approach learning, with the sole goal of performing well on the assessment.16 They may also promote performance-avoidance learning, through which students engage with material to seek approval from others and to appear competent.16 Both performance-approach and performance-avoidance learning lead learners to focus on the assessment rather than on mastery of material, which is crucial for practicing pharmacists.17 In the absence of graded formative assessments, the stress and anxiety caused by the pressure to perform are removed, and students can engage in deeper learning and mastery of material.16 Students with a mastery approach to learning are intrinsically motivated, with the goal of improving their own knowledge and competencies rather than performing on an assessment.16 Even without graded formative assessments, additional extrinsic motivators for individual class preparedness may still exist through examination performance and team accountability.12 This study was designed to contribute additional information to the debate regarding incentive structures for learning in TBL classrooms. The purpose was to determine the effect of graded versus ungraded RATs (formative assessments) on examination performance (summative assessments) in an elective course.

METHODS

The ambulatory care elective is a two-credit-hour course for Doctor of Pharmacy (PharmD) students in their third professional year at the University of Tennessee Health Science Center College of Pharmacy. The 2020 course offering included three two-hour class sessions weekly over six weeks, with each session covering one of 11 topics. The standard TBL framework was employed, except for peer evaluation. Team-based learning was used to deliver course material synchronously across three campuses using videoconferencing technology. For each class session, one content expert facilitated from one of the three campuses. Teams consisted of students from the same campus who met in person for class and were formed using a random number generator. There were 11 RATs (10 multiple-choice items per RAT) and two examinations (25 multiple-choice items each). The iRATs were completed using Examplify (ExamSoft Worldwide LLC). The tRATs were completed using Immediate Feedback Assessment Technique (IF-AT) cards (Epstein Educational Enterprises Inc), which use an answer-until-correct method with partial credit.18 Individual and team examinations were completed using Examplify. Proctors for RATs and examinations were present in the classroom on each campus.
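For readers unfamiliar with the answer-until-correct format, the sketch below illustrates how partial credit for a tRAT item can be assigned based on the number of options a team scratches before uncovering the correct answer. The point values are assumptions chosen for illustration, not the credit scheme used in this course.

```python
# Illustrative sketch of answer-until-correct partial-credit scoring for a tRAT item.
# The point values below are assumptions for illustration only, not the credit
# scheme used in the course described here.
ATTEMPT_POINTS = {1: 4, 2: 2, 3: 1, 4: 0}  # points awarded by number of scratches needed

def score_item(attempts_needed: int) -> int:
    """Return partial credit for one item, given how many options were scratched
    before the correct answer was revealed."""
    return ATTEMPT_POINTS.get(attempts_needed, 0)

def score_trat(attempts_per_item: list[int]) -> float:
    """Score a tRAT as a percentage of the maximum possible points."""
    earned = sum(score_item(a) for a in attempts_per_item)
    maximum = len(attempts_per_item) * max(ATTEMPT_POINTS.values())
    return 100 * earned / maximum

# Example: a team answers 7 items on the first scratch, 2 on the second, and 1 on the third.
print(score_trat([1] * 7 + [2] * 2 + [3]))  # 82.5
```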

The 2021 course offering was delivered using TBL within the same time frame but through synchronous videoconferencing using Zoom (Zoom Video Communications Inc). The same 11 topics were covered in the same sequence as in the 2020 offering. Teams consisted of students from different campuses. All students interacted synchronously online with faculty content experts, team members, and classmates. The iRATs were completed using the online learning management system (Blackboard Inc), and tRATs were completed using Microsoft Forms, without the answer-until-correct method. Individual and team examinations were completed using Examplify and Microsoft Forms, respectively. Topics on each examination were the same as in the 2020 course offering. Examinations were proctored by faculty and staff through Zoom using methods developed at our college.19 Students were divided into three groups for the individual examination and received a Zoom link from the assigned proctor. Student identification was verified upon entry into the Zoom meeting, and the proctor viewed each student's surroundings. Once all individual examinations were complete, students were assigned to team breakout rooms to complete the team examination. The assigned proctor entered and exited these rooms while teams completed the team examination.

For the 2021 ungraded RAT cohort, intensive proctoring of RATs for individual students was not feasible. Because of the value of RATs within TBL pedagogy, RATs remained part of the course, but the grading incentive structure differed. For the ungraded RAT cohort, the iRAT and tRAT accounted for 2.5% of the final course grade and were scored for participation only, in contrast to the graded RAT cohort, for which iRAT and tRAT grades accounted for 20% and 30% of the final course grade, respectively (Table 1).

Table 1.

Grading Scheme Used to Assess Student Performance on Graded vs Ungraded Readiness Assurance Tests in a Team-based Learning Elective

The institutional review board granted approval for this study. For data analysis, cohort demographics were compared using the t test and chi-square test, as appropriate. Examination grades, individual mean percentage scores for iRATs, and the pharmacotherapy integrated modules grade point average (PIM-GPA) were compared using the t test. The PIM-GPA was calculated from grades earned in the pharmacotherapy integrated modules completed prior to enrollment in the elective. The individual mean percentage score for iRATs was calculated by summing a student's iRAT scores and dividing by the number of TBL sessions for which the student completed an iRAT; this accounted for absences that led to differences in the number of completed TBL sessions.
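To make the calculation concrete, the sketch below computes each student's mean iRAT percentage over only the sessions that student completed (so absences do not lower the average) and then compares cohorts with an independent-samples t test using SciPy. The student score lists and cohort sizes are hypothetical and are not data from this study.

```python
# Minimal sketch of the cohort comparison described above, using hypothetical data.
# Each student's mean iRAT percentage is averaged over only the iRATs that student
# completed, so absences do not lower the average; cohorts are then compared with
# an independent-samples t test.
import numpy as np
from scipy import stats

def mean_irat_percent(scores):
    """Average a student's iRAT percentages, ignoring sessions they missed (None)."""
    completed = [s for s in scores if s is not None]
    return float(np.mean(completed))

# Hypothetical per-student iRAT percentages (None = absent for that session)
graded_cohort = [mean_irat_percent(s) for s in [
    [80, 70, None, 90, 60],
    [70, 80, 75, 65, 85],
    [90, 85, 80, 70, None],
]]
ungraded_cohort = [mean_irat_percent(s) for s in [
    [75, 65, 70, None, 80],
    [60, 70, 85, 90, 70],
    [80, 75, None, 65, 70],
]]

# Independent-samples t test comparing overall mean iRAT percentages between cohorts
t_stat, p_value = stats.ttest_ind(graded_cohort, ungraded_cohort)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```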

At course conclusion, the ungraded RAT cohort completed an anonymous survey (QuestionPro Inc). The survey included three demographic items, three items adapted from a previously published survey to characterize class preparation, and the accountability subscale of the Team-Based Learning Student Assessment Instrument (TBL-SAI).20,21 The accountability subscale included eight items for which students rated their level of agreement on a five-point Likert scale ranging from 1=strongly disagree to 5=strongly agree. Survey response frequencies were summarized using descriptive statistics.
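As an illustration of the descriptive summary, a minimal sketch tallying agreement frequencies for a single five-point Likert item is shown below; the responses are made up for illustration and do not reflect the survey data reported here.

```python
# Sketch of a descriptive summary for one five-point Likert item, using made-up responses.
from collections import Counter

LIKERT = {1: "Strongly disagree", 2: "Disagree", 3: "Neutral", 4: "Agree", 5: "Strongly agree"}

# Hypothetical ratings (1-5) from 35 respondents to a single accountability item
responses = [5, 4, 4, 5, 3, 4, 5, 5, 4, 4, 3, 5, 4, 4, 5, 4, 3, 4, 5, 4,
             4, 5, 4, 3, 4, 5, 4, 4, 5, 4, 4, 3, 5, 4, 4]

counts = Counter(responses)
for rating in sorted(LIKERT):
    n = counts.get(rating, 0)
    print(f"{LIKERT[rating]:>17}: {n:2d} ({100 * n / len(responses):.0f}%)")

# Percentage of respondents who agreed or strongly agreed with the item
agree = sum(counts.get(r, 0) for r in (4, 5))
print(f"Agree/strongly agree: {100 * agree / len(responses):.0f}%")
```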

RESULTS

Forty-seven students enrolled in the graded RAT (2020) cohort, and 36 enrolled in the ungraded RAT (2021) cohort. No significant differences existed in student characteristics between cohorts with regard to mean age in years (26 [SD=3] in the graded RAT cohort vs 26 [SD=2] in the ungraded RAT cohort, p=.68), percentage of female students (62% vs 57%, χ2=.17, p=.57), student-reported prior TBL participation (70% vs 74%, χ2=.04, p=.84), or mean PIM-GPA (2.9 [SD=0.6] vs 3.0 [SD=0.7], p=.31). There was no significant difference in overall mean iRAT or individual examination scores (Table 2).

Table 2.

Comparison of Pharmacy Students’ Scores on Graded vs Ungraded Examinations Administered in a Team-Based Learning Elective

Of the 36 students in the ungraded RAT cohort, 35 (97%) completed the survey. Student responses to the accountability subscale of the TBL-SAI are presented in Table 3. Most students (51%, n=18) spent an average of one to two hours preparing for each session, whereas 31% (n=11) reported spending an average of 30 to 60 minutes. For each session, 69%-91% of students reported preparing for class, although one student reported not preparing for any of the sessions. Reasons for not preparing included too much material (20.8%), feeling prepared for class without completing the required readings (15%), class preparation taking too much time (10%), and having no incentive to prepare because iRATs were not graded (0%).

Table 3.

Survey Responses in the Ungraded Readiness Assurance Test Cohort for the Team-Based Learning Student Assessment Instrument Subscale18 (n=35)

DISCUSSION

Team-based learning incorporates multiple levels of student accountability. First, students are individually accountable for performing well on the iRAT. Second, students are accountable to their team members for performing well on the tRAT and team application exercises. Our investigation of adjusting the incentive structure in a TBL classroom, using graded or ungraded RATs, demonstrated no significant difference in overall individual performance on formative and summative assessments. This finding is contrary to the widely held belief that a graded incentive structure is needed to extrinsically motivate students to complete preclass preparation. These results are consistent with previously published data from Koh and colleagues demonstrating that ungraded formative assessments led to a slight decline in formative assessment performance but sustained performance on summative assessments.22 Our results, along with previously published data, challenge one of the core beliefs in TBL: that graded iRATs incentivize and extrinsically motivate students’ individual class preparation.11

Foundational knowledge established through preclass preparation is crucial within TBL for enabling peer-to-peer learning, supporting team answers, and achieving consensus during team application exercises. While some degree of extrinsic motivation still exists in this structure (eg, accountability to the team and graded examinations), the process does not appear to require a graded formative assessment to remain effective. Participating in an iRAT (graded or ungraded) engages retrieval practice that solidifies knowledge and brings awareness to knowledge gaps. Perhaps removing the graded formative assessments allowed students to shift their learning motivation from the performance-approach and/or performance-avoidance types to a mastery approach, allowing them to focus on mastery of material rather than performance on assessments. The other graded assessments within the course may also have indirectly motivated mastery of material. Accountability to a team of peers may also motivate students to prepare for class, as the TBL structure emphasizes peer-to-peer learning during the tRAT and team application exercises.

Results from the end-of-course survey of the ungraded RAT cohort confirmed that students felt a responsibility to team members (Table 3). Most reported feeling a need to contribute to the team, and they perceived their contributions as important, indicating a perceived benefit from team interactions that enhanced learning. Regarding class preparation, most students agreed or strongly agreed that class preparation was necessary to do well in the course (Table 3). Most students completed preparation assignments, though this varied by topic and reasons for not preparing. Notably, none of the students reported that they failed to engage in preclass preparation because the RATs did not contribute to their course grade. This may demonstrate that although ungraded RATs did not serve as an incentive for class preparation, students viewed team accountability as an extrinsic motivator, enticing them to maintain engagement with course material despite the lack of a peer evaluation process. Within this incentive structure, individual examination performance may have also been perceived as an extrinsic motivator.

Although there was no difference in overall scores for formative and summative assessments, it is important to note the significant difference in scores between cohorts for the first iRAT (iRAT examination 1) and the first individual examination (individual examination 1), with the ungraded RAT cohort scoring 5% and 7% lower, respectively (Table 2). Although approximately 70% of students from both cohorts reported prior participation in TBL, delivery can vary significantly between courses. Further, the ungraded RAT cohort was adjusting to TBL delivered through live synchronous videoconferencing rather than in-person TBL delivered synchronously across three campuses. This adjustment may have contributed to the differences in scores on iRAT examination 1 and individual examination 1. By examination 2, students in the ungraded RAT cohort may have adjusted to this environment, leading to better assessment performance. This adjustment may not have been observed for the graded RAT cohort because the iRATs were graded and IF-AT cards were used for tRATs, facilitating greater engagement with material earlier in the course.

Some limitations of this study must be considered. Because this was an elective course, students may have been more engaged with the subject matter and therefore more intrinsically motivated to prepare for and engage in class than they would be in a required course. Interest in the course content may have increased the likelihood that students would adopt the mastery approach. Extrinsic motivators (eg, graded formative assessments, peer evaluations) may play a more crucial role in encouraging engagement with course content in a required course. We did not collect download rates for preclass materials as in previously published work; instead, we attempted to characterize class preparation and perceptions through student survey responses for the ungraded RAT cohort, and we did not collect the same data for the graded RAT cohort. Additionally, some content experts/facilitators differed between cohorts, although new facilitators received the same training for both cohorts. Individual facilitators identified preclass preparation materials, wrote preclass preparation objectives, created RAT questions, developed team application exercises, facilitated class discussion, and created examination questions.

CONCLUSION

Ungraded RATs did not significantly impact student performance on summative course assessments in an elective course. Ungraded formative assessments may have removed the pressure to perform on an assessment and may have promoted learning to master the material, allowing students to adopt the mastery approach in this course. Students in the ungraded RAT cohort felt responsible to their team to prepare for class despite the RATs not contributing to their course grades. We observed high rates of self-reported preclass preparation and responsibility to contribute to team learning. Although the extrinsic motivator of graded RATs was removed, students may have viewed team accountability and examinations as motivators to learn before and during class. This is supported by similar scores on overall formative and summative assessments between cohorts. Ungraded iRATs in TBL sessions for elective courses can be considered an alternative approach that mitigates the risk of academic dishonesty without diminishing learning or student performance.

  • Received August 16, 2021.
  • Accepted January 4, 2022.
  • © 2022 American Association of Colleges of Pharmacy

REFERENCES

1. Allen RE, Copeland J, Franks AS, et al. Team-based learning (TBL) in US colleges and schools of pharmacy. Am J Pharm Educ. 2013;77(6):Article 115.
2. Silberman D, Carpenter R, Takemoto, et al. The impact of team-based learning on the critical thinking skills of pharmacy students. Curr Pharm Teach Learn. 2021;13:116-121.
3. Wheeler S, Valentino AS, Liston BW, et al. A team-based learning approach to interprofessional education of medical and pharmacy students. Curr Pharm Teach Learn. 2019;11(11):1190-1195.
4. Johnson JF, Bell E, Bottenberg, et al. A multiyear analysis of team-based learning in a pharmacotherapeutics course. Am J Pharm Educ. 2014;78(7):Article 142.
5. Nelson M, Allison SD, McCollum M, et al. The Regis model for pharmacy education: a highly integrated curriculum delivered by team-based learning (TBL). Curr Pharm Teach Learn. 2013;5(6):555-563.
6. Medina MS, Conway SE, Davis-Maxwell TS, Webb R. The impact of problem-solving feedback on team-based learning case responses. Am J Pharm Educ. 2013;77(9):Article 189.
7. Kebodeaux CD, Peters GL, Stranges PM, et al. Faculty perceptions of team-based learning over multiple semesters. Curr Pharm Teach Learn. 2017;9(6):1010-1015.
8. Zingone MM, Franks AS, Guirguis AB, et al. Comparing team-based learning and mixed active-learning methods in an ambulatory care elective. Am J Pharm Educ. 2010;74(9):Article 160.
9. Thompson BA, Schneider VF, Haidet P, Perkowski LC, Richards BF. Factors influencing implementation of team-based learning in health sciences education. Acad Med. 2007;82(10 Suppl):S53-S56.
10. Michaelsen LK, Sweet M. Fundamental principles and practices of team-based learning. In: Michaelsen LK, Parmelee DX, McMahon KK, Levin RE, eds. Team-Based Learning for Health Professions Education. Sterling, VA: Stylus; 2008:9-31.
11. Parmelee D, Michaelsen LK, Cook S, Hudes PD. Team-based learning: a practical guide: AMEE Guide No. 65. Med Teach. 2012;34(5):e275-e287.
12. Haidet P, Levine R, Parmelee DX, et al. Guidelines for reporting team-based learning activities in the medical and health sciences education literature. Acad Med. 2012;87(3):292-299.
13. Ng HWW, Davies G, Bates I, Avellone A. Academic dishonesty among pharmacy students: investigating academic dishonesty behaviors in undergraduates. Pharm Educ. 2003;3(4):261-269.
14. McLachlan J. The relationship between assessment and learning. Med Educ. 2006;40:716-717.
15. Childs-Kean LM, Farland MZ. Stop tempting your students to cheat. Curr Pharm Teach Learn. 2021;13:588-590.
16. Meece JL, Anderman EM, Anderman LH. Classroom goal structure, student motivation, and academic achievement. Annu Rev Psychol. 2006;57:487-503.
17. Medina MS, Plaza CM, Stowe CD, et al. Center for the Advancement of Pharmacy Education (CAPE) Educational Outcomes 2013. Am J Pharm Educ. 2013;77(8):Article 162.
18. Epstein Educational Enterprises. What is the IF-AT? Accessed July 26, 2021. http://www.epsteineducation.com/home/about/.
19. Hall EA, Spivey C, Kendrex H, Havrda. Effects of remote proctoring on composite examination performance among doctor of pharmacy students. Am J Pharm Educ. 2021;85(8):Article 8410.
20. DeJongh B, Lemoine N, Buckley E, Traynor L. Student preparation time for traditional lecture vs TBL in PTx course. Curr Pharm Teach Learn. 2018;10:360-366.
21. Mennenga HA. Development and psychometric testing of the team-based learning student assessment instrument. Nurse Educ. 2012;37(4):168-172.
22. Koh JY, Rotgans JI, Rajalingam P, et al. Effects of graded versus ungraded individual readiness assurance scores in team-based learning: a quasi-experimental study. Adv Health Sci Educ. 2019;24:477-488.