Abstract
Objective. To determine whether the amount of exposure to patient encounters and clinical skills correlates with student clinical competency on ambulatory care advanced pharmacy practice experiences (APPEs).
Design. Students in ambulatory care APPEs tracked the number of patients encountered by medical condition and the number of patient care skills performed. At the end of the APPE, preceptors evaluated students’ competency for each medical condition and skill, referencing the Dreyfus model for skill acquisition.
Assessment. Data were collected from September 2012 through August 2014. Forty-six responses from a student tracking tool were matched to preceptor ratings. Students rated as competent saw more patients and performed more skills overall. Preceptors reported minimal impact on workload.
Conclusions. Increased exposure to patient encounters and skills performed had a positive association with higher Dreyfus stage, which may represent a starting point in the conversation for more thoughtful design of ambulatory care APPEs.
INTRODUCTION
The most recent Standards and Guidelines for the Professional Program in Pharmacy from the Accreditation Council for Pharmacy Education (ACPE) state that students should engage in direct patient care during pharmacy practice experiences “most of the time,” but do not specify how many patient care experiences are required during each advanced pharmacy practice experience (APPE), or how many encounters with patients with specific medical conditions are required for competency in a given area.1 These guidelines also require that APPEs integrate, apply, reinforce, and advance knowledge, skills, attitudes, and values, and that student performance and attainment of desired outcomes be assessed and documented.1
A White Paper on Quality Experiential Education from the American College of Clinical Pharmacy (ACCP) outlines the quality and quantity of experiences during APPEs. However, the paper contains no guidance on the number of patient encounters or skills performed.2 The White Paper and the related ACCP Position Statement on Ensuring Quality Experiential Education recommend using student portfolios that include checklists of required elements, a record of skills and activities performed during the APPE, logs of topics discussed and types of patients seen, and examples of drug information questions answered to help determine deficiencies that can be remedied during future experiences.2,3 Competency in skills is determined through observation of students or through targeted skill assessments.2,3 However, it is unknown how many patient encounters in a given therapeutic area or patient care skills performed would ensure competency, and tracking of patient encounters in specific therapeutic areas is not currently a requirement in most APPEs.
Yet, APPEs are held accountable by ACPE for other aspects of student professional development such as: “retrieving, evaluating, managing, and using clinical and scientific publications in the decision-making process;” “accessing, evaluating, and applying information to promote optimal health care;” and “ensuring continuity of pharmaceutical care among health care settings.”1 Furthermore, the ACCP Position Statement states that students should have documented proficiency in interprofessional communication with other health care providers and in use of evidence-based medicine.3 In the 2013 Educational Outcomes from the Center for the Advancement of Pharmacy Education (CAPE), Domain 2, Essentials for Practice and Care, includes interpreting evidence and formulating evidence-based care plans.4 During APPEs, students apply evidence-based care plans to patients and are then evaluated on their abilities. However, it is uncommon that students on APPEs track these activities or the number of these skills they perform. These documents provide no guidelines on how many times students must perform a skill or conduct patient encounters for a medical condition to be considered competent.
In medicine, students track required experiences that prepare them for a career as a generalist practitioner. This requirement is included in the accreditation standards of the Liaison Committee on Medical Education (the authority for accreditation of medical education programs leading to the doctor of medicine degree).5 Standard 8, Element 8.6, requires medical schools to establish a system that specifies the clinical conditions and types of patients that medical students must encounter, to monitor and verify these encounters and experiences through faculty members, and to remedy any identified gaps through a clerkship or simulated experience. This standard is necessary for accreditation of medical schools, and whatever system is implemented must monitor and ensure that all medical students complete the required clinical experiences during clerkships.5 The Core Medicine Clerkship Curriculum Guide published by the Clerkship Directors in Internal Medicine/Society of General Internal Medicine (CDIM/SGIM) outlines general clinical core competencies and includes a list of required core conditions for which competency should be demonstrated; however, it provides no guidance on how many encounters with each medical condition or skill are required for competence in a given area.6
To implement the required system of documenting and tracking patient encounters, medical schools use patient encounter logs (either paper or electronic), and this practice has been described, evaluated, and validated in several studies.7-10 In a system with a mandate for specific disease states and numbers of encounters (2-5 per disease state), 98% of students met the requirements, and 94% of students found it “easy” or “very easy” to do so. The accuracy of submitted logs was 77%, but most were validated by preceptors.7 Another study assessed the number of patient encounters, specific diagnoses, and procedures documented on patient encounter cards and found a 77% concordance of student report with faculty confirmation.8 However, in these studies, the correlation between the number of encounters documented and competence was not assessed.
The Dreyfus model of skill acquisition describes stages of learning (novice, advanced beginner, competent, proficient, expert), and was originally developed by Hubert and Stuart Dreyfus in response to the rise of artificial intelligence in the 1980s.11 It is not without controversy, and some authors have been critical of placing the complexity of clinical learning into a linear model that may not capture daily learning and the balance of intuition with analytical thinking, particularly in the “expert” category.12 Since its initial publication, the Dreyfus model has been adapted for professional education and applied in medicine, nursing, and public health.12-18 It was used by the Accreditation Council for Graduate Medical Education (ACGME) in defining competencies and milestones reached for graduate medical education (eg, residency programs).19 Several articles from medicine apply the Dreyfus model to medical education by placing medical students in the “novice” to “advanced beginner” stage, and a senior graduating resident in the “competent” stage. The competent stage is arrived at after extensive experience in which the learner recognizes common problems and feels more responsibility for outcomes. Higher stages of “proficient” and “expert” are not met until later in an individual’s career.14,16,17 Based on the significant work in medical training, it is likely the Dreyfus model can be applied to pharmacy. Because the model has not been widely applied in pharmacy education, it is not yet clear what the expected stage of learning should be for APPE students.
Faculty preceptors of required ambulatory care APPEs at the University of Minnesota College of Pharmacy recognized the current insufficiencies in tracking student patient encounters and skills performed and formed the Minnesota Ambulatory Care APPE Cooperative to consolidate faculty resources, implement patient encounter and skill tracking, and ensure a consistent experience and more objective assessment across ambulatory care APPEs. The cooperative began within five ambulatory care APPE sites in the Minneapolis/St. Paul metropolitan area. All sites were interprofessional primary care practices: three family medicine residency teaching clinics, one women’s health clinic, and a home health care site. Three additional APPE sites in Duluth, Minnesota were added in 2012. Although the educational environment varied from site to site, each site provided the student with a considerable amount of direct patient contact.
The primary objective of this project was to determine whether the amount of exposure to patient encounters and clinical skills correlates with student clinical competency. Specifically, the project sought to determine how many patient encounters with each medical condition are needed for competency and how many times each clinical skill must be performed for competency. Secondary outcomes assessed were the stage of student learning for medical conditions and skills performed with all patients seen, the impact of enrollment time (early, mid, or late rotation block) on preceptor assessments, inter-rater reliability of the preceptor assessments, and preceptor perception of the experience.
DESIGN
Students on 5-week ambulatory care APPEs were instructed to track the number of patients and experiences they encountered on rotations. A self-reported student tracking tool was developed to quantify two different experiences: (1) counting and categorizing patients encountered and (2) tracking patient-care skills performed during the ambulatory care APPEs. Students were trained to use the tracking tool during the APPE orientation. The tool was designed to reflect data describing the most common drug therapy problems and their associated medical conditions.20 Ten medical conditions (anticoagulation, asthma, infectious disease, chronic obstructive pulmonary disease [COPD], diabetes, lipids, hypertension, mental health, smoking, and women’s health) were listed in the tool with an additional write-in section for other medical conditions that may have been otherwise overlooked. Likewise, the skill section listed four skills related to the common drug therapy problems and medical conditions (asthma action plan, asthma education, diabetes education, and pain assessment), and five skills related to clinical tasks, including primary literature search, drug consult/information, motivational interviewing, presentation to another healthcare provider, and care coordination. These skills were agreed upon as common activities by faculty preceptors from the cooperative through group consensus.
To count each patient only once, students tracked each patient encounter based on the primary medical condition addressed during the encounter. This allowed a more accurate estimate of the number of patients seen over the course of the APPE without over-representing medically complicated patients. In contrast, students tracked each time they performed one of the listed patient care skills, regardless of whether it was performed with the same patient or with multiple patients, because the ability to repeat a patient care skill and improve upon it was considered valuable information.
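As a rough illustration of how such a log could be captured for later analysis, the sketch below represents one student's entries as a long-format data set in R (the software used for the statistical analysis in this project). The column names and example entries are hypothetical and were not part of the actual tracking tool.

```r
# Hypothetical long-format representation of one student's tracking log.
# The category lists mirror the tool described above; the entries are invented.
conditions <- c("anticoagulation", "asthma", "infectious disease", "COPD",
                "diabetes", "lipids", "hypertension", "mental health",
                "smoking", "women's health")
skills <- c("asthma action plan", "asthma education", "diabetes education",
            "pain assessment", "primary literature search",
            "drug consult/information", "motivational interviewing",
            "presentation to another healthcare provider", "care coordination")

# One row per patient encounter (counted once, by primary condition)
# or per skill performed (counted every time it is repeated)
log <- data.frame(
  student = "S01",
  type    = c("encounter", "encounter", "skill", "skill"),
  item    = c("diabetes", "hypertension", "motivational interviewing",
              "motivational interviewing"),
  stringsAsFactors = FALSE
)
```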
To associate the number of patient encounters and patient care skills performed with student competency, each preceptor evaluated the student’s stage of competency for each medical condition and patient care skill at the end of the rotation, referencing the Dreyfus model for skill acquisition.21 Preceptors were educated on the Dreyfus model and given a preceptor’s guide to assessing skill stage. The guide describes characteristics of each skill stage (novice, advanced beginner, competent, proficient, expert) according to Dreyfus, and provides examples of activity descriptions at the competent stage for each of the medical conditions and patient care skills.
Students were provided the tracking tool in paper form at the beginning of their rotations. Students tracked their encounters and skills either by writing them on the paper form or by recording them electronically. At the end of their 5-week rotations, students were provided a link on the cooperative’s course management site to enter their counts of patient encounters and skills performed into a password-protected Google survey. Student participation was expected, but there were no consequences if they chose not to participate. Preceptors were also given access to a protected Google survey link on the course management site to enter their competency assessments. To increase timely participation, preceptors were subsequently e-mailed a reminder and link to the competency assessment survey on the last day of each rotation. Preceptors did not review their competency assessments with students upon completion of the APPEs.
Preceptor ratings of students were collapsed into two groups to provide a more robust analysis. Students rated as novice or advanced beginner were grouped into the “not competent” category, while students rated as competent, proficient, or expert were considered “competent.” The mean number of patient encounters in the two groups for each medical condition was compared using independent t tests. A p value of <0.05 was considered significant. The t tests were performed using R, v3.0.1 (R Foundation for Statistical Computing, Vienna, Austria). Only student responses matched with the corresponding preceptor ratings (n=46) were included in the analysis for the primary objective.
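A minimal sketch of this comparison in R is shown below, assuming a hypothetical data frame with one row per student and medical condition; the file name and variable names are illustrative and do not come from the study's data files.

```r
# Illustrative sketch only; 'encounters.csv' and its columns (dreyfus, encounters)
# are assumptions, not the study's actual data set.
df <- read.csv("encounters.csv")  # one row per student x medical condition

# Collapse the five Dreyfus stages into two groups
df$group <- ifelse(df$dreyfus %in% c("novice", "advanced beginner"),
                   "not competent", "competent")

# Independent t test comparing mean encounters between the two groups
t.test(encounters ~ group, data = df)  # p < 0.05 considered significant
```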
To determine whether there were effects of rotation block number or preceptor rater, a repeated-measures analysis of variance (ANOVA) was performed. Not all students saw patients in each medical condition category, nor did they all perform the same skills; therefore, only the medical conditions and skills with complete data were included for these secondary objectives. Data from 32 students with complete responses in six medical conditions (anticoagulation, asthma, COPD, diabetes, lipids, and hypertension) were used in the analysis. In addition, data from 31 students with complete responses in five skills (diabetes education, presentation to another healthcare provider, motivational interviewing, drug consult, and primary literature search) were used in the analysis. The nine rotation blocks were placed into three groups: early (blocks 1-3), mid (blocks 4-6), and late (blocks 7-9). The repeated-measures ANOVA was performed using SPSS, v21 (SPSS Inc., Chicago, IL). The University of Minnesota Institutional Review Board Human Subjects Committee determined this project did not require review.
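Although the study ran this analysis in SPSS, a comparable mixed-design repeated-measures ANOVA can be expressed in base R roughly as follows; the data layout and variable names are assumptions for illustration only.

```r
# Illustrative sketch: long-format data with hypothetical columns
#   student   - student identifier (the repeated-measures subject)
#   condition - medical condition (within-subject factor)
#   block     - rotation block group: early / mid / late (between-subject factor)
#   rating    - Dreyfus stage coded numerically (1 = novice ... 5 = expert)
ratings <- read.csv("ratings.csv")
ratings$student   <- factor(ratings$student)
ratings$condition <- factor(ratings$condition)
ratings$block     <- factor(ratings$block)

# Repeated-measures ANOVA with medical conditions nested within student
fit <- aov(rating ~ block * condition + Error(student/condition), data = ratings)
summary(fit)
```

The same formula structure could be reused with a preceptor-rater factor in place of the block grouping to examine rater effects, as was done for the second part of this secondary analysis.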
EVALUATION AND ASSESSMENT
Data were collected from September 2012 through August 2014. During this time, 63 students were on rotation at the participating APPE sites. Of those students, 55 chose to complete the tracking tool for patient encounters and skills performed, and preceptors evaluated and assigned ratings to 54 students. Forty-six student tracking tool responses were matched to the corresponding preceptor evaluation ratings; eight preceptor ratings did not have matching completed student tracking tools.
From the 46 matched student responses and preceptor ratings, students had 1890 patient encounters across the 10 medical conditions. Of the total encounters, students rated as competent had 1020 patient encounters and the students rated as not competent had 870 patient encounters. The mean number of patient encounters by students rated as competent was significantly higher than the mean number of patient encounters by students rated as not competent (p<0.05). However, significant differences were not found for the individual medical conditions when comparing students rated as competent or not competent (Table 1).
Table 1. Mean Number of Encounters for Students Rated as Competent vs Not Competent
Students performed the nine skills evaluated a total of 1163 times. Students rated as competent performed these skills 699 times, while students rated as not competent performed them 464 times. The mean number of skills performed by students rated as competent was significantly higher than the mean number performed by students rated as not competent (3.8 vs 2.7; p<0.001). When comparing these students on individual skills, only asthma education showed a significant difference in the mean number of times the skill was performed (Table 2).
Table 2. Mean Number of Times Skill Performed by Students Rated as Competent vs Not Competent
Preceptor ratings of students (novice to expert) showed no difference between rotation blocks (early, mid, late) for the selected medical conditions (p=0.41) or for skills performed (p=0.54). There was no significant difference between preceptor raters for either medical condition (p=0.068) or skill ratings (p=0.34). The observed power was 0.66.
All preceptors were surveyed to determine the preceptor workload of the tracking tool, the workload of completing the final competency assessment survey, and the impact of this competency assessment on future precepting. Overall, preceptors stated that monitoring students’ tracking of patient care activities and completing the final competency assessment required minimal work. Preceptors indicated the provided guidance document on assessing skill stage was helpful for the first few assessments, but became less necessary after that. Preceptors stated the competency assessment process did not have a major impact on how they approached precepting, although one preceptor commented: “I used the term ‘competent’ more often. I explained to students where I wanted them to be. I was more direct in instructing students on expectations. I think this rubric helped me explain my goals as a preceptor and show[ed] students the objectives of the rotation.”
DISCUSSION
Evaluating students’ competency when they complete an APPE is at the core of experiential teaching. Assessment of competency is a subjective measurement that preceptors naturally and intuitively make, commonly in relation to the practitioner’s own experiences or in comparison to previous learners. Student tracking of patient encounters and skills on ambulatory care APPEs provides a quantitative measure of patient care experiences and presents an opportunity for professional growth. This project showed a significant difference in the aggregate mean number of patient encounters between students rated competent and those rated not competent. Similarly, when total exposures were aggregated, students rated competent performed the skills significantly more times than their peers rated not competent. However, the results did not show significant differences between these groups in the number of patient encounters for any individual medical condition or in the number of times each individual skill was performed, apart from asthma education.
The trend of increased exposure and evaluated competency was a positive sign, albeit predictable. This association was strongest in skills performed, which may be related to the increased uniformity in task application. Using asthma education as an example, a student demonstrating appropriate inhaler technique will likely have a similar experience when repeating this skill with the next patient. In this way, the skill can be refined and experiences built on one another as repetition increases. In direct patient care and comprehensive medication management, the qualitative aspect of each patient encounter may be unique. During a short experiential rotation, a student may have the opportunity to participate in the care of one or more COPD patients, but each encounter may be different with regard to disease severity and/or social circumstance.
Asthma education was the only skill for which students rated as competent performed the skill a significantly different number of times. Students rated as competent or better recorded practicing this skill an average of 4.5 times, compared with 2.6 times for lower-rated students. This suggests that the minimum number of times an ambulatory care APPE student should perform this clinical skill lies somewhere between these values, if the educational goal is competence as defined by the Dreyfus model. This is analogous to medicine, where thresholds are set to determine competency in appropriately performing procedures such as surgical technique and labor and deliveries.22 However, the specific number of patients seen and skills performed is only one factor to consider when determining competency in ambulatory care practice, which raises the question of whether competence should be the educational goal for pharmacy APPE students. If the application of the Dreyfus model in medicine regards a graduating medical resident as competent,14,16,17 then applying this standard to a pharmacy student may be unreasonable.
Maturity and the ability to work in a clinical setting are gained through experiential education, but they did not factor into this evaluation. Whether the APPEs occurred early or late during the academic year did not affect the competency ratings for either patient encounters or skills performed. This finding may reflect an overemphasis on previous clinical experience, or it may highlight the uniqueness of the ambulatory care setting. This study did not assess the type or number of patient encounters experienced before the ambulatory care experience or measure students at baseline; the impact of previous work experience was also unknown.
No differences among preceptors and their ratings were found; however, preceptor bias is important to consider when implementing this type of evaluation process. Some preceptors may adjust ratings based on their perceptions of the difficulty of the patient encounter or skill performed. There may also be outlier students who need to see only a few patients or perform a skill only a few times to be considered competent, while others may see numerous patients, perform a skill numerous times, and still struggle to reach competency. In this study, steps were taken to minimize preceptor bias. All preceptors were instructed on applying the Dreyfus rating scale, and the preceptor guide helped control for these variances by giving specific definitions and examples of competency for each condition and skill.
Another method of controlling bias was the separation of the assigned APPE grade from the competency evaluation. Student performance expectations can influence how an experiential preceptor rates or grades a student, a phenomenon known as grade inflation.23 Because students were blinded to this assessment of skills, preceptors were free from that potential influence.
In some cases, preceptors were unable to determine competency stages for all medical conditions and clinical skills because the patient care opportunity did not present itself during the APPE. Individual APPE sites have varying opportunities to expose students to medical conditions and skills, which may explain why the numbers for some conditions and skills were smaller than others (Table 3). While variability exists between experiential sites because of their patient populations, patient care experiences can also vary within the same APPE practice site between rotation blocks for a multitude of reasons or circumstances. Adjusting the duration of APPE experiences may be one way to decrease variability in patient exposures within the same site, but differences between practice sites will likely always be present in experiential education.
Table 3. Student Dreyfus Ratings for Each Medical Condition and Skill Performed
This study suggests that tracking student experiences and exposure can be linked to competency stages and students’ perceived abilities. It also highlights the challenge of developing definitions. The term competency implies a threshold of ability or knowledge and does not account for the gradation of ability among students. There may be a need to reframe the expectation for students on APPEs as “advanced beginner,” as this term appears to be a better-defined educational goal. With this adjustment, the tracking of student experiences may better correlate with preceptor evaluations of ability and competence.
As the number of APPE sites classified as ambulatory care continues to grow, further defining expectations and assessments for students is needed. This study found that the number of patients seen during APPEs matters: students exposed to more patients had higher Dreyfus stage ratings. Thus, these data represent a starting point in the conversation for more thoughtful design of ambulatory care APPEs.
The number of sites and the level of student participation were limitations of this study. While the aggregate totals for both patient encounters and clinical skills performed were significant, only one individual item showed significance. With a larger sample of students, more measured items might have demonstrated significant associations between exposure and student competency. While no difference was observed among preceptor raters for medical conditions, the sample size was small; a difference between preceptor raters may exist that the observed power was not large enough to detect. Other limitations involved data collection. Although more students were enrolled in the participating APPEs, not all students and preceptors submitted their information, despite reminders sent to improve data collection. The study also relied heavily on student self-reporting of experiences. Variability in the accuracy of reporting between sites and students was possible, most likely underreporting of exposures, but this was not measured. Students’ stages of competency for medical conditions and skills were not evaluated at baseline, limiting the ability to assess potential improvement. In addition, the specific competency ratings were not reviewed with students. Although the study was designed to avoid evaluation bias, this represented a potential missed learning opportunity for students. However, evaluation of students on APPEs does include assessments of clinical abilities and areas for improvement, and these assessments are shared with students upon completion of the APPEs as part of the standard evaluation process. Finally, while preceptors were asked about the impact on their workload, students were not asked whether tracking encounters and skills was a burden or increased their workload while on APPEs.
SUMMARY
This study on student competency and measuring APPE outcomes furthers the profession’s ability to produce highly productive clinicians. In accordance with ACPE Standards and ACCP’s White Paper and Position Statement,1-3 tracking these activities highlights how many and what types of patients and skills students are exposed to while on APPEs. Moreover, increased exposure to patient encounters and skills performed had a positive association with higher Dreyfus stages. This information can be used by the academy to further define standards in education or to work toward developing new forms of student tracking and assessment.
ACKNOWLEDGMENTS
The authors wish to recognize the contributions of Cooperative faculty members Keri Hager, PharmD, Ann Philbrick, PharmD, Shannon Reidt, PharmD, MPH, and Megan Undeberg, PharmD, to this project and Ronald Hadsall, PhD, MS for statistical analysis guidance and support.
Received December 5, 2014.
Accepted April 24, 2015.
© 2016 American Association of Colleges of Pharmacy