Abstract
Objective. To estimate whether first-time pass rates on the North American Pharmacist Licensure Examination (NAPLEX) have been influenced by the number of pharmacy programs founded since 2000, the programs’ accreditation era, and changes to the blueprint, testing conditions, and passing standards implemented by the National Association of Boards of Pharmacy (NABP) beginning in 2015.
Methods. This was a retrospective, observational cohort study using publicly published data. The number of programs and pass rates were collected from 2008 to 2020. Programs reporting pass rates from 2016 to 2020 were eligible. Accreditation era was defined as programs accredited before or after 2000. Pass rates were categorized into NAPLEX tests administered before or after 2015. Correlation, t test, effect size, and regression analyses were used for comparisons.
Results. Pass rates were initially found to decline as the number of programs rose. First-time pass rates of programs accredited before 2000 were higher than pass rates of programs accredited after 2000 every year after 2011. Only 40% of the programs accredited after 2000 exceeded the national average between 2016-2020. Blueprint changes implemented in 2015 and the changes to testing conditions plus passing standards implemented in 2016 had a greater effect on pass rates than the number of programs or applicants.
Conclusion. Programs accredited after 2000 generally had lower first-time NAPLEX pass rates. Even so, blueprint changes and changes to the testing conditions plus passing standards instituted by the NABP were more important predictors of the decline of first-time NAPLEX pass rates. Stakeholders should collaborate and embrace best practices for assessing practice-ready competency for licensure.
- North American Pharmacist Licensure Examination (NAPLEX)
- educational assessment
- pharmacy education
- pass rate
- licensure
- retrospective studies
- interrupted time series
- segmented regression
INTRODUCTION
Between 1970 and 1999, accredited pharmacy programs increased from 74 to 76 programs. Between 2000 and 2015, the number expanded to 125.1 This increase in the number of programs raised concerns about the quality of students’ education and North American Pharmacist Licensure Examination (NAPLEX) pass rates.2 Maine and Vlasses first reported results of programs’ first-time pass rates on the NAPLEX.3 They concluded that the change in the average first-time NAPLEX pass rate between 2004 and 2010 was not meaningful.3 However, they examined data from more than a decade ago. Moreover, Popovich and colleagues questioned this conclusion of a null effect because the pass rates were not compared statistically.2
Three other studies have examined NAPLEX pass rate predictors at the programmatic rather than the individual student level. Two of these studies overlooked the pass rate’s association with the number of programs, accreditation era (ie, before or after 2000), changes to the blueprint, or changes to testing conditions plus passing standards.4,5 One study found that accreditation era was associated with pass rates.6 Therefore, the original question remains as to whether pass rates were affected by growth in the number of programs2,3 or by something else.
Two meaningful changes occurred in sequence and overlapped the decline in pass rates in 2015 and 2016. First, the NAPLEX blueprint was modified in 2015 to focus more on clinical assessment and treatment recommendations.7 Second, the testing conditions and passing standards changed in 2016.8-10 Three testing conditions changed: computerized adaptive testing was replaced with a computerized linear examination format; the number of questions increased from 185 to 250; and the time allotted to take the NAPLEX increased from 4.5 to 6 hours.
As a first objective, we examined the association between first-time NAPLEX pass rates and the number of programs. Upon uncovering an inverse relationship, our next objective was to examine whether programs accredited before 2000 had different pass rates compared with programs accredited since 2000 (ie, the accreditation era).6 Our final objective was to compare first-time pass rates before 2016 versus 2016 and later for associations with changes to the blueprint as well as changes to testing conditions plus passing standards.
METHODS
This was a retrospective observational cohort study using data published annually by the National Association of Boards of Pharmacy (NABP). Accredited Doctor of Pharmacy (PharmD) programs with published NAPLEX scores from 2015-2020 were eligible, with one exception. The South Carolina College of Pharmacy was established in 2004 when the Medical University of South Carolina and the University of South Carolina programs merged. The Medical University of South Carolina earned accreditation in 2015/2016. Because each program had separate NAPLEX pass rates after 2016, both were excluded. Using the timeframe of 2015-2020 maximized the number of programs included in the study (N=124).
Programs’ first-time NAPLEX pass rates were the primary outcome variable.11 The NAPLEX is scored on a scale of 0 to 150. Licensure candidates must earn a scaled score of 75 or higher to pass. The scaled score is calculated using a confidential algorithm based on a rolling average of results from other candidates during the same administration cycle.12,13 Here, pass rate was defined as the percentage of each program’s first-time candidates who passed the NAPLEX with the minimum scaled score of 75. Publicly published pass rates were obtained from NABP for individual programs from 2008-2020.6,11 Pass rates before 2008 are not publicly available. Programs’ pass rates were averaged for the eight years before the testing conditions and passing standards changed (ie, 2008-2015) and for the five years afterward (ie, 2016-2020).
Next, individual programs’ pass rates were compared with the national average. The national average for the year was subtracted from each program’s pass rate. A negative result indicated the program’s pass rate was lower than the national average (coded 0), and a positive result indicated it was higher than the national average (coded 1) for that year. The number of years a program’s pass rate was higher than the national average for 2016-2020 was counted. This count ranged from zero (indicating that a program never had a pass rate higher than the national average) to five (indicating that a program’s pass rate was higher than the national average in all five years).
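To make this coding concrete, a minimal sketch follows. It is illustrative only (not the authors’ code) and assumes a hypothetical pandas DataFrame `rates` with one row per program and pass-rate columns '2016' through '2020', plus a Series `national_avg` of the national averages indexed by the same years.

```python
import pandas as pd

YEARS = ["2016", "2017", "2018", "2019", "2020"]

def years_above_average(rates: pd.DataFrame, national_avg: pd.Series) -> pd.Series:
    # Subtract the national average for each year; a positive difference is
    # coded 1 (above average) and otherwise 0 (ties treated as not above,
    # an assumption not specified in the text).
    above = (rates[YEARS].sub(national_avg[YEARS], axis=1) > 0).astype(int)
    # Count of years above the national average per program, ranging 0 to 5.
    return above.sum(axis=1)
```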
Variability in pass rates was used to determine whether accreditation eras and/or the blueprint and testing condition plus passing standard changes were associated with pass rate changes. Mean differences in the standard deviations were assessed by independent t tests.
The number of programs and the year a program was accredited were obtained from the Accreditation Council for Pharmacy Education (ACPE).1 By definition, newer programs all started after 2000. Simply entering the number of programs as a variable into the statistical model does not fully capture critics’ concerns regarding potential deficiencies in essential educational resources associated with the rise in number of programs.2 Potential deficiencies might include, but are not limited to, too few faculty, inexperienced faculty and administrators, fewer qualified students, and too few experiential sites. A proxy variable was created to represent those concerns. Programs were categorized into two accreditation eras, specifically those accredited after 2000 (coded 0) or accredited before 2000 (coded 1). This cut point aligns with previous work4 and represents the accreditation eras before and after the unprecedented increase in the number of programs after 2000. The number of pharmacy applicants for each year from 2004-2016 was obtained from the American Association of Colleges of Pharmacy. These applicant years were selected because they are aligned with the years that students took the NAPLEX using the assumption that most students took four years to complete their program.
The NABP routinely updates the NAPLEX. The NAPLEX blueprint was updated in 2015, and the testing conditions and passing standards were updated in 2016.10 Two test administration variables were created to reflect these updates. The 2015 cut point variable represents only the blueprint changes. The 2016 cut point variable incorporates the blueprint changes as well as changes to the testing conditions plus passing standards.
Means, standard deviations, and minimum and maximum pass rates were used to describe continuous data. The Pearson r correlation was used to estimate the association between the average pass rate and the number of programs and applicants. Two sets of slopes described the pass rates for programs accredited before and after 2000. Estimates were calculated using the SLOPE function in Excel 2016 for examination years 2008-2015 and for 2016-2020. Independent sample t tests determined differences between the slopes for programs accredited before or after 2000, as well as between the timeframes before (2008-2015) and after (2016-2020) the testing condition plus passing standard changes. Paired t tests estimated the mean difference in pass rate for each accredited program before and after 2016.
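The slope and t-test computations could be reproduced outside Excel along the following lines. This is a rough sketch under stated assumptions (a hypothetical DataFrame `rates` with yearly pass-rate columns and a 0/1 `pre2000` era flag), not the analysis code used in the study.

```python
import numpy as np
import pandas as pd
from scipy import stats

PRE_YEARS = [str(y) for y in range(2008, 2016)]   # 2008-2015, before the 2016 changes
POST_YEARS = [str(y) for y in range(2016, 2021)]  # 2016-2020

def period_slope(row: pd.Series, years: list[str]) -> float:
    # Least-squares slope of pass rate on examination year (Excel SLOPE equivalent).
    x = np.array([int(y) for y in years], dtype=float)
    return stats.linregress(x, row[years].astype(float).to_numpy()).slope

def compare_eras(rates: pd.DataFrame) -> dict:
    # Per-program slopes for the pre-change period.
    slopes_pre = rates.apply(period_slope, axis=1, years=PRE_YEARS)
    old = slopes_pre[rates["pre2000"] == 1]
    new = slopes_pre[rates["pre2000"] == 0]
    # Independent t test: do 2008-2015 slopes differ between accreditation eras?
    slope_t, slope_p = stats.ttest_ind(old, new)
    # Paired t test: within-program mean pass rate before versus after 2016.
    pair_t, pair_p = stats.ttest_rel(rates[PRE_YEARS].mean(axis=1),
                                     rates[POST_YEARS].mean(axis=1))
    return {"slope_t": slope_t, "slope_p": slope_p, "pair_t": pair_t, "pair_p": pair_p}
```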
A test of the difference between the means of two groups indicates whether differences are significant but does not indicate the relevance of the differences. Cohen d estimates the standardized effect size,14 and its meaningfulness is interpreted using the z distribution. A Cohen d of zero indicates no difference in the mean pass rate between the two groups. A Cohen d of 0.2 indicates a small effect size and denotes that 58% of the average pass rates of programs accredited before 2000 were above the average pass rates of programs accredited after 2000. A medium effect size (d=0.5) and a large effect size (d=0.8) denote that 69% and 79%, respectively, of the average pass rates of programs accredited before 2000 were higher than the average pass rates of programs accredited after 2000.14
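The percentages quoted above follow from the normal distribution: under the usual normality assumptions, the proportion of one group lying above the other group’s mean is Φ(d), sometimes called Cohen’s U3. A quick illustrative check:

```python
from scipy.stats import norm

# Phi(d): proportion of one distribution lying above the other group's mean,
# assuming normality and equal variances.
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: {norm.cdf(d):.0%}")
# Prints approximately 58%, 69%, and 79%, matching the values cited above.
```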
Multiple linear regression was used to model the relationships between the predictors and the outcome variables. The blueprint and testing condition plus passing standard change variable was entered as Step 1, and the number of programs and number of applicants variables were entered as Step 2 into the model.15 The goal of entering the before and after 2016 administration cycle variable first was to evaluate the magnitude of its independent effect on pass rates. The variables representing the number of programs and applicants were entered second to ascertain whether these variables were meaningful predictors of the pass rate, independent of the blueprint and testing condition plus passing standard changes. Segmented regression analyses were conducted to evaluate the before and after 2016 slopes.16 The R2 value assessed the statistical significance of the overall model. The R2 change after Step 1 assessed the effect of the number of programs and applicants on pass rates. An a priori alpha error of p≤.10 was used to test statistical significance because of the small sample (n=13).
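As an illustration of the two-step (hierarchical) model and the segmented regression described above, the following sketch uses statsmodels. The variable names, the DataFrame `annual` (13 rows, one per examination year, with columns such as `pass_rate`, `post2016`, `n_programs`, `n_applicants`, and `year`), and the segmented-regression parameterization are assumptions for the example, not the authors’ specification.

```python
import statsmodels.formula.api as smf

def hierarchical_fit(annual):
    # Step 1: the 2016 cut point variable only.
    step1 = smf.ols("pass_rate ~ post2016", data=annual).fit()
    # Step 2: add the number of programs and the number of applicants.
    step2 = smf.ols("pass_rate ~ post2016 + n_programs + n_applicants", data=annual).fit()
    # F test of the R-squared change from Step 1 to Step 2.
    f_change, p_change, _ = step2.compare_f_test(step1)
    return step1, step2, f_change, p_change

def segmented_fit(annual):
    # One common interrupted time series parameterization: baseline trend,
    # level change at the cut point, and slope change after the cut point.
    annual = annual.assign(time=annual["year"] - annual["year"].min(),
                           time_after=(annual["year"] - 2016).clip(lower=0))
    return smf.ols("pass_rate ~ time + post2016 + time_after", data=annual).fit()
```

Given the a priori alpha of .10 noted above, the p values from the R2-change test and the coefficient tests would be judged against .10 rather than the conventional .05.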
Data were analyzed using SPSS Statistics version 27.0 (IBM Corp). The protocol was classified as a nonhuman subject study by the institutional review board at the University of New Mexico.
RESULTS
In 2002, ACPE reported 80 accredited programs (Table 1). In 2008, NABP first published NAPLEX pass rate data for 87 programs, and the average pass rate was 97%. The number of eligible programs continued to increase, with 137 programs accredited by 2020, a 71% increase over the 80 programs reported in 2002. As the number of programs increased, the pass rates trended downward (r=-.86, p<.001), with the lowest average pass rate of 86% in 2016 (Figure 1a, Table 2).
Average National First-Time NAPLEX Pass Rate by Number of Programs and Pharmacy Applicants 2004-2020
First-time North American Pharmacist Licensure Examination (NAPLEX) pass rate and number of pharmacy programs 2008 (‘08) to 2020 (‘20) (Top). +Pearson’s r correlation was used to determine significance between national first-time NAPLEX pass rate and number of pharmacy programs (2008-2020), defined as p<.05. First-time NAPLEX pass rate for programs accredited before 2000 versus programs accredited after 2000 from 2008 (‘08) to 2020 (‘20) (Bottom). +T-test was used to determine significance between pre-2000 and post-2000 programs in average first-time pass rate for each year, defined as p<.05
Descriptive Statistics, Average First-Time Pass Rate, and Percentage Above and Below the First-Time Pass Rate for NAPLEX 2008 to 2020
Between 2008-2020, the number of applicants followed a U-shaped pattern. The highest number of applicants was in 2010 and the lowest in 2015 (Table 1). This attenuated the association between the number of applicants and the NAPLEX pass rate (r=.40, p=.17). The association between the number of programs and the number of applicants was not significant (r=-.16, p=.60).
Among the 124 programs included in the study, programs accredited before 2000 usually achieved higher pass rates than programs accredited after 2000 (Table 2, Figure 1b). The first meaningful difference based on programs’ accreditation era (ie, before or after 2000) occurred in 2012 (mean [SD] of 97.5 [2.70] vs 94.5 [4.8], p<.001) and favored programs accredited before 2000 in every year thereafter. During the years 2016-2020, nearly 17% of the 124 programs’ pass rates never exceeded the national average. Of those 21 programs, 76% were accredited after 2000. Conversely, over 32% (n=40) of the programs’ pass rates were higher than the national average for all five years, and nearly 88% of them were accredited before 2000. Nearly half (n=60) of the programs included in the study scored higher than the national average in four of the five testing years, and 61% (n=76) were higher than the national average three or more times (p<.001, Cohen d=.90). Beginning in 2012, the average pass rate of programs accredited after 2000 was 2.3% lower than the national average. In contrast, the average pass rate for programs accredited before 2000 was 0.7% higher than the national average (Table 2). Every year afterward, the difference favored those accredited before 2000. The most extreme difference occurred in 2020. The average pass rate of programs accredited after 2000 was 5.1% lower than the national average, and the average pass rate for programs accredited before 2000 was 2.4% higher than the national average. For 2011-2020, the pass rate ranged from 1.9% to 5.1% lower than the national average for programs accredited after 2000. For programs accredited before 2000, the range was 0.6% to 2.4% higher than the national average (Cohen d range .58 to .94).
For all programs, the maximum pass rate was 100% from 2008-2020, except for 2016 when it was 99% (Table 2). Conversely, the lowest pass rate for a single program was 67% before and 51% after 2016. When the periods before and after 2016 were compared, the average minimum pass rate was 75% before 2016 and 56% from 2016 onward (p<.001, Cohen d=3.73). The average standard deviation for programs accredited before 2000 was 5.1 and for programs accredited after 2000 was 6.3 (p>.10). The average standard deviations before the cut points were 4.0 for 2008-2014 and 4.1 for 2008-2015 (p>.10). In contrast, the average standard deviations for the accreditation eras were significantly different after both cut points. Post-2000 programs’ average standard deviation was 9.0 and pre-2000 programs’ average standard deviation was 6.3 for 2015-2020 (p<.01), and their average standard deviations were 9.2 versus 6.7, respectively, for 2016-2020 (p<.001).
The average pass rate was significantly higher for candidates graduating from programs accredited before 2000 (mean [SD] of 93.7 [3.2] vs 87.9 [6.0]) for the whole observation period from 2008-2020 (p<.001, Cohen d=1.21). The pattern was similar for 2008-2015 (mean [SD] of 95.9 [3.4] vs 91.2 [6.3], p<.001, Cohen d=1.02) and for 2016-2020 (mean [SD] of 90.0 [5.8] vs 84.7 [7.5], p<.001, Cohen d=.83).
The effects of the blueprint and testing condition plus passing standard changes were both represented by the 2016 cut point. The average pass rate before the blueprint and testing condition plus passing standard changes in 2008-2015 was higher than in 2016-2020 (mean [SD] 95.4 [5.1] vs 88.2 [6.9], p<.001, Cohen d=1.15). The average pass rates for the two periods were highly positively correlated (r=.65, p<.001).
Table 3 shows that the variable representing the testing condition plus passing standard changes was significant (p<.001), as was the overall model (p<.001). After Step 2, the testing condition plus passing standard changes remained significant, but neither the number of programs nor the number of applicants was statistically significant (p>.10). The R2 change after addition of the number of programs and applicants was not significant, indicating that these variables had no effect on the pass rate when controlling for the testing condition plus passing standard changes. The influence of the testing condition plus passing standard change variable was five to nine times greater than that of the number of programs and applicants variables. The overall model remained significant (p<.001).
Predictors of Annual First-Time NAPLEX Pass Rate for the Period Before and After Changes to the Blueprint, Testing Conditions, and Pass Rate Standards and the Number of Pharmacy Programs and Pharmacy Applicants
The downward slope from 2008-2015 was more severe for programs accredited after 2000 compared with programs accredited before 2000 (Figure 2a; unstandardized coefficient B(>2000)=-0.92 vs B(<2000)=-0.23, p=.02). The average pass rate for pre-2000 programs was 97% compared with 91% for post-2000 programs from 2008-2015. In contrast, from 2016-2020, the average pass rate for the pre-2000 programs was 90%, and it was 85% for the post-2000 accreditation era (p<.001). Additionally, the positive upward trend from 2016-2020 was not statistically different for programs accredited before or after 2000 (B(<2000)=0.59 vs B(>2000)=0.14, p=.55).
In the first graph (top), linear slope changes are shown for the North American Pharmacist Licensure Examination (NAPLEX) administrations before versus after 2016 for programs accredited before 2000 versus programs accredited after 2000. +T-test was used to determine significance between pre-2000 and post-2000 programs in average first-time pass rate for each year, defined as p<.05. ‡T-test was used to determine significance between pre-2000 and post-2000 programs in average first-time pass rate for each time period (2008-2015 and 2016-2020), defined as p<.05. *T-test was used to determine significance between the slopes of pre-2000 and post-2000 programs for each time period (2008-2015 and 2016-2020), defined as p<.05. In the second graph (bottom), linear slope changes are shown for NAPLEX administrations before versus after 2015 for programs accredited before 2000 versus programs accredited after 2000. +T-test was used to determine significance between pre-2000 and post-2000 programs in average first-time pass rate for each year, defined as p<.05. ‡T-test was used to determine significance between pre-2000 and post-2000 programs in average first-time pass rate for each time period (2008-2014 and 2015-2020), defined as p<.05. *T-test was used to determine significance between the slopes of pre-2000 and post-2000 programs for each time period (2008-2014 and 2015-2020), defined as p<.05.
In 2015, NABP changed only the NAPLEX blueprint. The raw data hinted at a larger-than-normal decline in the 2015 pass rate compared with 2014 (a mean difference [SD] of 2.7 [3.8], p<.001, Cohen d=.58). Therefore, the published NAPLEX pass rates were dichotomized into two categories representing 2008-2014 and 2015-2020 as an alternate cut point. Table 3 shows that the variable representing the blueprint change also was statistically significant (p<.001). After Step 2, the blueprint change remained significant (p=.08), but neither the number of programs nor the number of applicants was statistically significant (p>.10). The R2 change after addition of the number of programs and applicants was not significant, indicating that these variables had no effect on the pass rate when controlling for the blueprint change. The overall model remained significant (p<.001).
From 2008-2014, the average pass rate for pre-2000 programs was 96% compared with 93% for post-2000 programs (p<.001). Figure 2b shows that the slope for programs accredited before 2000 was relatively flat before 2015, and the downward trend was significantly more severe for the programs accredited after 2000 (B(<2000) = -.03 vs B(>2000) = -.72, p=.03).
From 2015-2020, the average pass rate for pre-2000 programs was 91% compared with 86% for post-2000 programs (p<.001). After 2014, both accreditation eras trended downward, but the slopes were not significantly different (B(<2000) = -.17 vs B(>2000) = -.59, p=.59).
In summary, programs accredited before 2000 had higher first-time NAPLEX pass rates than programs accredited after 2000 every year after 2011. When the 2015 and 2016 cut points were separately examined as potential confounders, the slopes representing average pass rates for the two accreditation eras were significantly different before each cut point but not after. Thus, while the average pass rates remained significantly different after the NAPLEX modifications, both eras’ pass rates declined at the same rate once the variables representing the NAPLEX changes were taken into consideration, indicating that the NAPLEX changes impacted both accreditation eras similarly. Finally, the blueprint and testing condition plus passing standard changes represented by the 2016 cut point impacted the average pass rate more than the 2015 blueprint changes alone, with nearly double the magnitude.
DISCUSSION
This study has five meaningful findings: First, the average NAPLEX pass rate declined as the number of programs increased; second, the annual NAPLEX pass rates of programs accredited before 2000 were higher than the average pass rate of programs accredited after 2000; third, the number of programs’ association with pass rate was confounded by other factors; fourth, pass rates were significantly associated with blueprint changes; and, fifth, pass rates were impacted even more by testing condition plus passing standard changes. Each of these findings will be discussed in turn.
Our first finding tentatively supported critics’ concerns that the number of programs had a detrimental influence on pass rates; however, the number of applicants did not. From 2004-2010, the number of programs increased, as did the number of applicants. However, after 2010, the number of programs continued to rise, but the number of applicants steadily declined. One purported reason for the general pass rate decline is a lower-quality applicant pool because fewer students were choosing pharmacy as a career path. However, the period of the highest number of student applicants (2009-2011) coincided with the largest pass rate declines in 2015 and 2016. Yet, because data used for student admission decisions are not widely available, the assertion about student applicant quality can be neither refuted nor supported.
Our second meaningful result was that average pass rates of programs accredited before 2000 were significantly higher than pass rates of programs accredited after 2000 (Table 2, Figure 1b). Downward trajectories of pass rates for programs accredited before and after 2000 overlapped from 2008-2011 and began to diverge beginning in 2012. Most programs with pass rates above the national average for all five years between 2016-2020 were accredited before 2000. Conversely, most of the programs that never exceeded the national average were accredited after 2000. The differences in these two accreditation eras likely represent unmeasured characteristics of mature versus immature programs. The inability to adjust quickly to the unprecedented increase in the number of programs may have resulted in supply and demand imbalances across the Academy. At the beginning of this era, programs established before 2000 likely had optimal levels of resources required to operate an accredited program, as evidenced by the minimal decline in pass rates until 2015. However, programs accredited after 2000 needed to acquire sufficient resources to meet accreditation standards. Sometimes those resources were acquired at the expense of established programs. New and established programs competed for quality experiential programs, qualified faculty and administrators, and qualified students during this time.
At the core of this argument is the untested assumption that established schools would retain the same proportion of high-quality students, while newer programs would admit lower-quality students from the shrinking applicant pool. The statistical findings do not support this view for two reasons. If the statistical variability among program eras’ pass rates was due to the decline in the quality of students among the newer programs, it should have been visible beginning with the 2008 data. Rather, the statistical variation in NAPLEX pass rates for both accreditation eras was similar for the entire 2008 to 2020 timeframe. This finding is more in line with a hypothesis of the shrinkage of the whole applicant pool. The second reason is that when the 2015 and 2016 changes were examined as confounders, the pre- and post-2000 accreditation groups showed similar variability before both cut points, but not afterward. Notably, the applicant pool was at or near its highest point and coincided with declining pass rates represented by the 2015 and 2016 cut point variables. Thus, critics’ speculation about an influx of lower-caliber students being associated with lower pass rates during this temporal period was confounded by the blueprint and testing condition plus passing standard changes.3 Unfortunately, important information about proxy indicators for student caliber at admission is unavailable for this timeframe. Therefore, hypotheses of confounding between student quality, number of applicants, and NAPLEX changes can be neither refuted nor supported by this analysis. Questions about modifiable student and faculty academic elements associated with NAPLEX outcomes should be the subject of future research.
The remaining findings were unexpected, and our subsequent investigations were prompted by the finding of confounding factors. The NABP modifies the NAPLEX blueprint approximately every five years to reflect contemporary changes in pharmacy practice.17 The two accreditation eras’ pass rates overlapped until 2011, when NABP changed the blueprint.18 After 2011, the next meaningful drop in the NAPLEX pass rate coincided with the next blueprint change and occurred between the 2014 and 2015 testing cycles. Notably, the trend line for the pre-2000 programs was flat until 2014 and started its downward trajectory in 2015. The pre-2000 programs’ pass rates were unaffected until the blueprint changed, even though the post-2000 programs were impacted earlier. This finding is important because declines due to content changes are consistent with a validly constructed test,19 and it provides additional support for the hypothesis that it was not the number of programs but rather when programs were accredited that influenced the pass rates.
Conversely, after the testing condition plus passing standard changes in 2016, slope lines for programs from both eras reversed direction and trended upward. Even so, average pass rates declined for both accreditation eras after the testing condition plus passing standard changes were implemented in 2016. Although the blueprint was the same in 2015 and 2016, the pass rates declined even further after the testing condition plus passing standard changes. Two explanations are plausible for the change in direction in 2016. First, 2016 was the nadir for the pass rates for both groups during the 13-year observation period. So, the slopes trended upward after the 2016 low point. This reversal of the trend somewhat mitigates critics’ concerns about the pass rate decline being due solely to the number of programs. Second, from a psychometric perspective, the downward trends attributable to the 2015 blueprint content changes are consistent with a reliable and valid test.19 In contrast, testing condition plus passing standard changes are not associated with content, and they likely introduced bias and error.19 Questions about the test’s validity after the testing condition plus passing standard changes should have introduced doubt as to whether the test accurately assesses licensure candidates.
Our final finding was that the number of programs and applicants were nonsignificant and had less than half of the impact of the 2015 and 2016 NAPLEX modifications. The modifications hypothesis provides a richer explanation for the precipitous decline and offers opportunities for constructive improvements rather than opining it was due solely to the increase in programs and decline in applicants. It is clear that the rapid rise in the demand and potential shortage of experienced faculty and administrators, qualified students, and capable experiential sites could have played a role in the decline. However, a modicum of culpability in the post-2015 and post-2016 trends must be attributed to the blueprint and testing condition plus passing standard changes implemented by NABP.
In their defense, NABP maintains that candidates’ scores on the NAPLEX are neither intended nor validated for assessing practice competency or judging the quality of pharmacy curricula.20,21 Yet, despite these caveats, NAPLEX is still used in unintended ways. State boards of pharmacy make decisions about candidates’ readiness for licensure in part based on NAPLEX performance. Likewise, pass rates are a de facto measure of the quality of programs’ curricula because ACPE uses two standard deviations below the national average as a red flag to guide accreditation decisions.22 So, verifying the construct validity and controlling systematic biases are relevant.
Similar licensure testing experiences among other health disciplines have contemporary implications for pharmacy. For example, a blueprint change did not change examination performance for osteopathic licensure candidates,23 testing condition changes irrelevant to content updates influenced United States Medical Licensing Examination results downward,24,25 and results reported as pass-fail coincided with more failures by first-time dental licensure test takers.26 NAPLEX blueprint changes foreshadowed significant and meaningful pass rate declines in 2011 and 2015. The NAPLEX blueprint changes were again implemented in 2021,27,28 and consequences of those changes are imminent. Future NAPLEX results will be reported as pass-fail.29 Collaborative research on the implications of these changes and other future modifications is warranted.
Our dependent variables were average pass rates. Using a single numeric average to represent multiple years before and after the relevant cut points limits interpretation because it disregards fundamental time-based information. We were unable to use better methods for time-dependent analyses because of the small number of observations (n=13). Future research should adjust for time dependency.
The second limitation is the relatively small number of NAPLEX testing years. The magnitude of relationships is associated with sample size. The small sample size could have attenuated or resulted in variability of the magnitude of the coefficients or decreased the study’s power. The difference between newer and mature programs was judged to be nonsignificant for 2004 to 2010.3 We found that the accreditation eras’ pass rates overlapped from 2008 until the 2011 blueprint change. These observations provide some credence to our assertion that the test modifications contributed in part to the steady decline in NAPLEX pass rates over the past two decades. While the statistical coefficients may have varied less with a larger sample size, the slopes’ patterns seem persuasive.
Despite these limitations, this study directly responds to critics’ speculation about the cause of the decline in NAPLEX pass rates since 2008. Contrary to that speculation, once the effects of the accreditation era, blueprint changes, and testing condition plus passing standard changes were acknowledged, too many programs and too few applicants were not at the root of the largest single-year decline in pass rates since 2008.30 There are two implications from this work. One, additional research needs to be conducted examining factors measured at the individual program level that are amenable to modification. This work would provide guidance regarding programmatic factors that may reverse the downward trends. Two, given recent history, results of this research emphasize the need for internal and external stakeholders to collaborate on reducing the bias introduced by extraneous factors and increasing reliability and validity, with the ultimate goal of optimizing students’ performance on the NAPLEX.
Programs accredited after 2000 generally had lower first-time NAPLEX pass rates. Even so, changes to the blueprint, testing conditions, and passing standards instituted by the NABP were more important predictors of the decline of first-time NAPLEX pass rates. Stakeholders should collaborate and embrace best practices for assessing practice-ready competency for licensure.
- Received December 14, 2021.
- Accepted May 28, 2022.
- © 2023 American Association of Colleges of Pharmacy