ABSTRACT
Rapid influenza diagnostic tests (RIDTs) may be useful during institutional respiratory disease outbreaks to identify influenza virus and enable antivirals to be administered rapidly, both to treat patients and to provide prophylaxis to those exposed to the virus but not yet symptomatic. The performance of RIDTs at the outbreak level is not well documented in the literature. This study aimed to evaluate the performance of RIDTs in comparison with that of real-time reverse transcription (rRT)-PCR in the context of institutional respiratory disease outbreaks. It included outbreak-related respiratory specimens tested for influenza virus at Public Health Ontario Laboratories by both RIDT and rRT-PCR from 1 September 2010 to 30 April 2013. At the outbreak level, RIDTs demonstrated an overall sensitivity of 76.5%, a specificity of 99.7%, a positive predictive value (PPV) of 99.5%, and a negative predictive value of 85.3% for the detection of any influenza virus type, with rRT-PCR as the reference method. Because of their high specificity and PPV, even outside of the influenza season, RIDTs can play a role in screening for influenza virus in outbreaks and in instituting antiviral therapy in a timely manner when results are positive. RIDTs can also be useful in remote settings where molecular virology testing is not easily accessible. Their suboptimal sensitivity can be addressed by the use of molecular testing.
INTRODUCTION
Prior to the widespread use of PCR for the detection of influenza virus, rapid influenza diagnostic tests (RIDTs), such as immunochromatographic tests or enzyme-linked immunosorbent assays, were valuable tools to inform treatment decisions for patient management in the event of institutional influenza outbreaks. Currently, RIDTs enable the detection of influenza virus in a timely fashion, are less expensive than molecular methods, and require only minimal training of laboratory staff.
The sensitivity and specificity of RIDTs vary with the individual tests, the influenza virus type and subtype, the body site from which the specimen was collected (e.g., nasopharyngeal versus throat swab), the time to specimen collection, and patient age (1–4). In addition, their positive predictive values (PPVs) and negative predictive values (NPVs) are influenced by the prevalence of influenza in the population tested (5). The limitations of RIDTs have been well described in the literature and include suboptimal sensitivity resulting in false-negative results, particularly during periods of high influenza virus activity. Conversely, false-positive results are more common during low influenza virus activity (6). In the published literature, their sensitivity varies from 10 to 80% (most commonly from 40 to 70%); their specificity is better and ranges from 85 to 100% (5, 6). Despite these limitations, RIDTs remain a useful tool during institutional respiratory disease outbreaks to identify influenza virus and enable antivirals to be rapidly administered in order to treat patients and prophylactically treat those exposed to the virus but not yet symptomatic. Testing more than one symptomatic individual with influenza-like illness per outbreak increases the sensitivity of RIDTs for detecting influenza virus at the outbreak level, i.e., the probability that at least one of the specimens tested from the same outbreak is positive for influenza virus (6).
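As a rough illustration of this effect (a simplified calculation that treats specimens as independent trials with a fixed per-specimen sensitivity; the values below are hypothetical and not drawn from the cited studies), the outbreak-level sensitivity can be approximated as 1 - (1 - sensitivity)^n:

```python
# Simplified sketch: each truly positive specimen is treated as an independent trial
# with a fixed per-specimen sensitivity (hypothetical values, for illustration only).
def outbreak_level_sensitivity(per_specimen_sensitivity: float, n_specimens: int) -> float:
    """Probability that at least one of n tested positive specimens yields a positive RIDT."""
    return 1.0 - (1.0 - per_specimen_sensitivity) ** n_specimens

# Example: a 60% per-specimen sensitivity rises to ~94% when 3 specimens are tested.
print(round(outbreak_level_sensitivity(0.60, 3), 3))  # 0.936
```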
RIDT performance at the outbreak level was reported previously, but those studies either focused on small outbreaks or used viral culture as the gold standard for comparison (7, 8). The purpose of our study was to evaluate the performance of RIDTs in comparison to that of PCR in the context of institutional respiratory disease outbreak testing conducted over three influenza seasons.
MATERIALS AND METHODS
Study period. The data used for this study included three consecutive influenza surveillance seasons, 1 September 2010 to 31 August 2011 (2010-2011 season), 1 September 2011 to 31 August 2012 (2011-2012 season), and 1 September 2012 to 30 April 2013 (2012-2013 abridged season). Per the Ontario Ministry of Health and Long-Term Care, the yearly influenza surveillance season is defined as the period between 1 September and 31 August of the following year.
Testing location. Testing for this study was done at Public Health Ontario Laboratories (PHOL) as part of routine laboratory testing and respiratory virus surveillance. PHOL provide service to the entire province of Ontario, Canada (population, 13.4 million) at 11 locations (Hamilton, Kingston, London, Orillia, Ottawa, Peterborough, Sault Ste. Marie, Sudbury, Thunder Bay, Timmins, and Toronto [the central laboratory]). PHOL provide testing for most of the respiratory specimens submitted from community (e.g., daycare), hospital, long-term care facilities (LTCFs), and other institutional outbreak settings. At PHOL, up to four specimens are routinely tested by real-time reverse transcription (rRT)-PCR and RIDTs per outbreak, with additional specimens tested by special request from the local health care unit overseeing the outbreak. However, on rare occasions, more than one specimen may have been submitted from each resident or patient. Influenza RIDTs are not routinely used at PHOL for testing of nonoutbreak specimens.
In Ontario, PHOL relies on the local medical officer of health or designate to determine if an outbreak meets the provincial case definition (9); PHOL do not receive sufficient information to verify that declared institutional outbreaks have met the definition.
Specimen collection and laboratory testing. Respiratory specimens were collected from patients with respiratory symptoms by using Universal Transport Medium (UTM) kits (Copan Italia, Brescia, Italy) and transported at 2 to 8°C or on wet ice to the PHOL for processing within 48 h of collection.
We included respiratory specimens that had both rRT-PCRs and RIDTs performed at PHOL. In-house rRT-PCRs were used at PHOL to test for influenza viruses A and B in accordance with CDC protocols. Results of rRT-PCRs were based on CT (cycle threshold) values as follows: a CT value of ≤38 was considered a positive reaction, 38.1 to 39.9 was considered indeterminate, and ≥40 was considered negative.
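For illustration, the CT cutoffs above can be expressed as a simple decision rule (a minimal sketch, not the actual PHOL laboratory information system logic):

```python
def interpret_rrt_pcr(ct_value: float) -> str:
    """Classify an rRT-PCR result using the CT cutoffs described above (illustrative sketch)."""
    if ct_value <= 38.0:
        return "positive"
    elif ct_value < 40.0:  # CT of 38.1 to 39.9
        return "indeterminate"
    else:                  # CT of >=40
        return "negative"
```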
For the rapid test, Directigen EZ Flu A&B (BD, Mississauga, Ontario, Canada) was performed until 8 January 2013. For the subsequent period, this RIDT was replaced with Remel Xpect Flu A&B (Thermo Fisher Scientific, Nepean, Ontario, Canada), which was phased in over several months across all PHOL.
Hierarchical rules. For the purpose of this study, some hierarchical rules were applied at different levels of analysis. Specifically, at the specimen level, indeterminate results obtained by RIDTs (n = 5) and by rRT-PCR for influenza virus A (n = 48) and for influenza virus B (n = 10) were excluded from analysis. At the outbreak level, for each testing method, outbreaks were considered positive for any influenza virus if either influenza virus A or B or both were detected by that method in at least one outbreak-related specimen, with positive results overriding indeterminate or negative results. Outbreaks were considered negative for any influenza virus if neither influenza virus A nor B was detected by that method in any outbreak-related specimen. Outbreaks with the only available result being indeterminate (n = 1 for influenza virus A, n = 1 for influenza virus B) were removed from analysis. When comparing the two methods at both the outbreak and specimen levels, specimens that were negative for influenza virus by rRT-PCR but positive by RIDTs were considered false positive, while specimens that were positive by rRT-PCR and negative by RIDTs were considered false negative.
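A minimal sketch of the outbreak-level roll-up described above, for one testing method and one influenza virus type (the result labels are assumptions used for illustration, not PHOL data fields):

```python
from typing import List, Optional

def classify_outbreak(specimen_results: List[str]) -> Optional[str]:
    """Roll specimen-level results ("positive"/"negative"/"indeterminate") up to the outbreak level."""
    if "positive" in specimen_results:   # a positive result overrides indeterminate or negative results
        return "positive"
    if "negative" in specimen_results:   # no positives but at least one valid negative result
        return "negative"
    return None                          # only indeterminate results: outbreak excluded from analysis
```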
Statistical analysis and study design. The combined sensitivity, specificity, PPV, and NPV of both RIDTs were calculated for each influenza virus type at the specimen and outbreak levels, along with 95% confidence intervals (CIs), using rRT-PCR as the reference method. These performance indicators were calculated and compared for influenza virus A and B (including A versus B), by influenza season, and for each rapid testing method at both the specimen and outbreak levels. Additional analyses were performed by the age of the resident or patient from whom the specimen was obtained and by the influenza virus activity level at the specimen level, and by the outbreak setting type and the number of specimens tested by both methods at the outbreak level. For the purpose of this study, the influenza virus activity level was defined as either high or low on the basis of a cutoff of a provincial influenza virus percent positivity of ≥10% for the predominant circulating influenza virus type (5). At the outbreak level, we included only those outbreaks that had specimens tested by both methods. Chi-square and Fisher exact tests were used to compare performance indicators. P values of <0.05 were considered significant.
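For reference, the standard two-by-two definitions behind these indicators are sketched below; the exact confidence interval method is not specified beyond "95% confidence intervals," so a normal-approximation interval is assumed here for illustration:

```python
import math
from typing import Dict, Tuple

def proportion_ci(successes: int, total: int, z: float = 1.96) -> Tuple[float, float, float]:
    """Point estimate and normal-approximation 95% CI for a proportion (assumed CI method)."""
    p = successes / total
    half_width = z * math.sqrt(p * (1.0 - p) / total)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

def performance_indicators(tp: int, fp: int, fn: int, tn: int) -> Dict[str, Tuple[float, float, float]]:
    """Sensitivity, specificity, PPV, and NPV with CIs, using rRT-PCR as the reference standard."""
    return {
        "sensitivity": proportion_ci(tp, tp + fn),
        "specificity": proportion_ci(tn, tn + fp),
        "PPV": proportion_ci(tp, tp + fp),
        "NPV": proportion_ci(tn, tn + fn),
    }
```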
Some regional laboratories may have switched to Remel Xpect Flu A&B slightly later than 8 January 2013. To address this, sensitivity analyses were performed by comparing the combined estimates with the estimates for Directigen EZ Flu A&B alone, which was used for the longer period. This was a retrospective study.
RESULTS
The predominant seasonal influenza viruses. For the 2010-2011 season, the dominant influenza virus type was influenza virus A, accounting for 3,095/3,543 (87.5%) influenza virus-positive specimens detected at PHOL, with the most common subtype being A/H3N2, representing 2,694/2,921 (92.2%) subtyped specimens. During the 2011-2012 season, influenza virus B accounted for 2,004/2,597 (77.2%) influenza virus-positive specimens. In the 2012-2013 season, influenza virus A was the dominant type, accounting for 6,454/7,173 (89.9%) influenza virus-positive specimens, with the most common subtype being A/H3N2, representing 3,737/4,124 (90.6%) subtyped specimens.
Characteristics of patients. Most (6,979; 96.6%) of the specimens included in this study were outbreak related. The mean and median ages of the patients were 82.5 and 86 years, respectively (range, 3 months to 108 years), reflecting the fact that most specimens were collected from elderly persons who were part of respiratory disease outbreaks in LTCFs. Nasopharyngeal swabs were the most common specimen type, representing 7,146 (98.9%) of the specimens collected; the remainder of the specimens were bronchoalveolar lavage fluid samples, throat swabs, nasal swabs, or swabs submitted without documentation of the collection site. The average time delay from specimen collection to testing was 1.6 days, with a 90th percentile of 3.6 days.
Characteristics of outbreak-related specimens. According to the PHOL outbreak testing algorithm, not all outbreak-related specimens are tested by RIDT (10). Of the 3,074 outbreaks from which specimens were submitted for testing during the study period, 2,345 (76.2%) and 2,338 (76.1%) had both RIDTs and rRT-PCRs performed for influenza virus A and/or B, respectively (Fig. 1). Between 1 and 12 specimens per outbreak were tested by both RIDTs and rRT-PCR, with a mean of 2.9 and a median of 3 specimens per outbreak. The settings were LTCFs for 1,759 (75.1%) of the outbreaks, retirement homes for 395 (16.9%) of the outbreaks, and hospitals for 66 (2.8%) of the outbreaks. Other settings from which specimens were submitted included daycare centers, camps, and schools, for a total of 61 (2.6%) outbreaks. For 66 (2.8%) outbreaks, no setting type was reported.
Study inclusion criteria used at PHOL from September 2010 to April 2013. *, Not all respiratory specimens were outbreak related. †, Most of the specimens were tested for both influenza viruses A and B; a few specimens (n = 19) were tested only for influenza virus A by rRT-PCR. ††, Most of the outbreak samples were tested for both influenza viruses A and B; a few outbreak samples (n = 8) were tested only for influenza virus A. β, Samples indeterminate by any test method were excluded from this study.
Performance testing of RIDTs at the specimen level. After removing all of the indeterminate results obtained by any testing method, a total of 7,228 specimens tested for influenza virus A and 7,237 specimens tested for influenza virus B (with most specimens tested for both targets) were included in this study (Fig. 1). The distribution of respiratory specimens tested by the two methods in each influenza season is illustrated in Table 1.
Numbers of specimens tested by RT-PCR and RIDTs by season at PHOL from 1 September 2010 to 30 April 2013
Overall, detection of influenza virus A by RIDTs had a sensitivity of 60.3%, a specificity of 99.9%, a PPV of 99.8%, and an NPV of 86.4% (Table 2). Only the specificity and NPV varied by influenza season (P values of <0.05 and <0.001, respectively). The specificity was lowest during the 2010-2011 season, and the NPV was highest during the 2011-2012 season (P < 0.001).
Performance testing of RIDTs for influenza viruses A and B at the specimen level by PHOL from 1 September 2010 to 30 April 2013
For influenza virus B, RIDTs had a sensitivity of 37.6%, a specificity of 99.9%, a PPV of 99.1%, and an NPV of 97.3% (Table 2). Only the NPV varied by the influenza season, and it was highest during the 2010-2011 season (P < 0.001).
Comparing the performance of RIDTs for influenza viruses A and B, we found that their sensitivity and NPV varied with the influenza virus type (P < 0.001). The sensitivity was the highest and the NPV was the lowest for influenza virus A (Table 2).
Of the specimens included in this study, 80% were tested by Directigen EZ Flu A&B and the remainder were tested by Remel Xpect Flu A&B, which was implemented across all of the regional laboratories during the peak of the 2012-2013 season (Fig. 2). The performance results of the two rapid testing methods are compared in Table 3. At the specimen level, the sensitivities, specificities, and PPVs of the two rapid testing methods were similar whether testing for influenza virus A or B (P > 0.05) (Table 4). The NPV of Remel Xpect Flu A&B was higher than that of Directigen EZ Flu A&B for both influenza viruses A and B (P < 0.001). The combined performance indicators for both testing methods were similar to those for Directigen EZ Flu A&B alone (P > 0.05).
Rapid testing methods used to test for influenza virus at PHOL from September 2010 to April 2013. *, Not all regional laboratories switched to the Remel test exactly on 8 January 2013; some regional laboratories may have taken slightly longer to switch to Remel Xpect Flu A&B. †, More than 80% of the specimens from the 2010-2011 season, the 2011-2012 season, and part of the 2012-2013 season were tested by Directigen EZ Flu A&B. Remel Xpect Flu A&B was used only during the peak of the 2012-2013 season and was used to test approximately 20% of the specimens in this study.
Numbers of specimens tested by Directigen EZ Flu A&B and Remel Xpect Flu A&B at PHOL from 1 September 2010 to 30 April 2013
Performance testing of Directigen EZ Flu A&B and Remel Xpect Flu A&B at PHOL from 1 September 2010 to 30 April 2013
RIDTs demonstrated similar sensitivities for both influenza virus A subtypes (P > 0.05). Of the influenza virus A-positive specimens subtyped as A/H3N2 by rRT-PCR, 1,163/1,881 (61.8%) were also positive for influenza virus A by RIDTs, while 8/15 (61.5%) of the (H1N1)pdm09 subtype were positive for influenza virus A by RIDTs.
For influenza virus A, only the sensitivity and NPV varied by patient age (P < 0.01) (Tables 5 and 6). The sensitivity for influenza virus A was higher in individuals <19 years old and those >65 years old and was highest for children ≤4 years old; the NPV was also highest among children ≤4 years old. For influenza virus B, there were no positive specimens from children ≤4 years of age; hence, the sensitivity and PPV could not be determined for this age group (Tables 5 and 6). The sensitivity was highest in those 5 to 19 and >65 years of age, but the differences did not reach statistical significance. Only the NPV differed significantly by patient age (P < 0.05) and was highest among individuals >65 years of age.
Age distribution of patients and residents whose specimens were tested by RT-PCR and RIDTs at PHOL from 1 September 2010 to 30 April 2013
Performance testing of RIDTs by patient age at PHOL from 1 September 2010 to 30 April 2013
The sensitivity, specificity, and PPV for both influenza viruses A and B did not differ by influenza virus activity (i.e., high versus low). Only the NPV varied by influenza virus activity for both influenza viruses A and B (P < 0.001); it was consistently lower during periods of high influenza virus activity (70.1 versus 93.8% during the 2010-2011 season, 82.5 versus 97.7% during the 2011-2012 season, and 76.4 versus 97.6% during the 2012-2013 season for high and low influenza virus activity, respectively).
Performance testing of RIDTs at the institutional outbreak level. After excluding all indeterminate results by any testing method, a total of 2,345 outbreaks tested for influenza virus A and 2,338 outbreaks tested for influenza virus B (with most outbreaks tested for both targets) were included in this study (Fig. 1 and 2). A comparison of the results of rapid tests and rRT-PCRs is illustrated in Table 7.
Numbers of outbreak specimens tested by RT-PCR and RIDTs by season at PHOL from 1 September 2010 to 30 April 2013
RIDTs had an overall sensitivity of 79.1%, a specificity of 99.8%, a PPV of 99.6%, and an NPV of 89.2% for the detection of influenza virus A (Table 8). Only their specificity and NPV varied by the influenza season (P values of <0.05 and <0.001, respectively). Their specificity was lowest during the 2010-2011 season, while their NPV was highest during the 2011-2012 season.
Performance of RIDTs in detecting influenza viruses A and B in outbreak specimens at PHOL from 1 September 2010 to 30 April 2013
For influenza virus B, RIDTs had a sensitivity of 57.8%, a specificity of 100%, a PPV of 98.8%, and an NPV of 97.3% (Table 8). Only their NPV varied by season and was lowest during the 2011-2012 season (P < 0.001).
When the performance of RIDTs for the detection of influenza virus A versus B was compared at the outbreak level, the sensitivity and NPV varied by the influenza virus type (P < 0.001) (Table 8). The sensitivity was higher for influenza virus A than for influenza virus B (79.1 versus 57.8%, respectively), while the NPV was lower for influenza virus A than for influenza virus B (89.2 versus 97.3%, respectively). The specificity and PPV of RIDTs did not vary by the influenza virus type.
At the outbreak level, the two rapid test methods had similar sensitivities, specificities, PPVs, and NPVs (P > 0.05) (Table 4).
At the outbreak level, performance testing of RIDTs compared to rRT-PCR for the detection of any influenza virus type demonstrated an overall sensitivity of 76.5%, a specificity of 99.7%, a PPV of 99.5%, and an NPV of 85.3%.
The sensitivity, specificity, and PPV for both influenza virus A and B outbreaks did not vary by the setting. Only the NPV for influenza virus A outbreaks varied by the setting; it was highest in LTCFs (91.4%), compared with 83% in retirement homes and 75.6% in hospitals (P < 0.001). Because no influenza virus B-positive outbreaks were identified in hospitals by RIDTs, the sensitivity and PPV for influenza virus B in this setting could not be determined.
DISCUSSION
RIDTs can be a useful tool for identifying the presence of influenza virus in institutional respiratory disease outbreaks, as well as in community settings, particularly in remote areas with no timely access to molecular testing. Their rapid turnaround time, relative simplicity of use, and low cost make RIDTs an appealing diagnostic tool in outbreak settings, despite the existence of more advanced testing techniques. However, it is essential that clinicians understand the drawbacks of RIDTs in order to accurately interpret their results.
Our study is unique in that we report on RIDT performance at both the specimen and outbreak levels; the latter has not been previously reported in the literature. We evaluated RIDT performance in comparison to that of rRT-PCR in three consecutive influenza seasons. Influenza virus A/H3N2 was the dominant virus during two seasons (2010-2011 and 2012-2013), and influenza virus B was the dominant virus during one season (2011-2012). Our study population was composed primarily of elderly residents associated with LTCF outbreaks.
Overall, RIDTs showed low sensitivity for the detection of influenza virus A (60.3%) and very low sensitivity for the detection of influenza virus B (37.6%). However, RIDTs showed very high specificity and PPV for both influenza viruses A and B (>99%). Low sensitivity and very high specificity have been reported previously, with higher sensitivity for influenza virus A than for influenza virus B (11). Conversely, a recent study reported moderate to high sensitivity and specificity for both influenza viruses A and B; however, that study evaluated RIDTs mostly among pediatric patients and tested specimens directly, without the use of a transport medium (12). The NPVs were moderate (86.4%) and very high (97.3%) for influenza viruses A and B, respectively; the NPV was significantly lower for A than for B, reflecting higher circulation of influenza virus A than of influenza virus B during the time frame of this study. Similar PPVs and NPVs were reported in another study, although that study looked only at children with influenza virus A/(H1N1)pdm09 (13).
As reported previously, RIDTs failed to detect almost 40% of influenza virus A-positive specimens and 60% of influenza virus B-positive specimens (6, 11). However, the likelihood of false-positive results was very low for both influenza viruses A and B, even outside the influenza season. Hence, clinicians can be confident in a positive influenza virus result obtained by RIDT. Conversely, a negative RIDT result in a patient with symptoms of acute respiratory infection does not rule out influenza (6).
Apart from the NPV, none of the performance indicators varied by influenza season for either influenza virus A or B; the NPV was previously reported to be lower when influenza virus A or B activity was high (14). The overall lower prevalence of influenza virus B than of influenza virus A explains the consistently higher NPV for influenza virus B. This is also a manifestation of the low sensitivity of RIDTs, which results in a higher number of false-negative specimens during periods of high influenza virus activity.
Two different rapid tests were used in this study, Directigen EZ Flu A&B and Remel Xpect Flu A&B. The former was used for most of the study period (28 of 32 months); PHOL switched to the new testing kit as a result of the routine laboratory testing kit selection process. The results of the two rapid tests were combined, as there was no difference in their performance at either the specimen or the outbreak level. The NPV of Remel Xpect Flu A&B was slightly higher; however, this was likely related to the lower prevalence of influenza during the shorter period for which this method was used, rather than to an actual difference in test performance.
Similar specificities and PPVs were reported for both Directigen EZ Flu A&B and Remel Xpect Flu A&B previously (7, 15). While lower sensitivity was reported in both studies (46.2 and 47%, respectively), a very low NPV (32%) was reported in the study of the Remel test. However, both studies were conducted during the 2009 influenza virus A/(H1N1)pdm09 pandemic, and the Remel study had a small sample size.
Makkoch et al. reported a lower sensitivity of RIDTs for the detection of influenza virus A/(H1N1)pdm09 than for the detection of influenza virus A/H3N2 (5); we did not find any difference in sensitivity by subtype. However, the number of specimens positive for influenza virus A/(H1N1)pdm09 in our study was very small; therefore, these results should be interpreted with caution.
Sensitivity for the detection of influenza virus A was higher among those <19 or >65 years old (elderly) than among other adults, with the highest sensitivity occurring in children ≤4 years of age. A similar pattern was observed for influenza virus B, but the differences were not statistically significant, likely because of the small number of influenza virus B-positive specimens submitted from younger age groups. Higher sensitivity among younger children is likely due to higher virus shedding in this age group (6, 11, 16). Most studies have either failed to report the performance of RIDTs exclusively among the elderly or have grouped them together with other adults (4, 12). We found a higher sensitivity of RIDTs for influenza virus A in the elderly than among adults 20 to 64 years of age, which may be related to more severe disease, prolonged shedding, and higher viral loads in the elderly, leading to better sensitivity (11, 15, 17).
When influenza virus activity is high, the proportion of patients with positive RIDT results who have influenza (PPV) is highest, while the proportion of patients with negative RIDT results who do not have influenza (NPV) is lowest (6). Consistent with other studies, we found the NPV to be higher during periods of low influenza virus activity (influenza virus percent positivity, <10%), but we did not find a relationship between the PPV and influenza virus activity; this may be related to the low number of false positives (three specimens false positive for influenza virus A and one false positive for influenza virus B), all of which occurred during the 2010-2011 influenza season. We also examined the performance of RIDTs during the "traditional" influenza season (November to April) versus out of season (May to October). We did not find any difference in any of the indicators of RIDT performance, likely because the timing of influenza activity varied from year to year. Thus, limiting the use of RIDTs to the traditional influenza season was not supported by the results of this study.
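The dependence of the PPV and NPV on prevalence can be illustrated with a simple Bayes-rule calculation (the sensitivity, specificity, and prevalence values below are hypothetical round numbers, not study data):

```python
def ppv_npv(sensitivity: float, specificity: float, prevalence: float):
    """Compute PPV and NPV from sensitivity, specificity, and prevalence via Bayes' rule."""
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1.0 - specificity) * (1.0 - prevalence))
    npv = (specificity * (1.0 - prevalence)) / (
        (1.0 - sensitivity) * prevalence + specificity * (1.0 - prevalence))
    return ppv, npv

# With 60% sensitivity and 99.9% specificity, the NPV falls as prevalence rises,
# while the PPV stays high because false positives remain rare.
for prevalence in (0.05, 0.25):
    ppv, npv = ppv_npv(0.60, 0.999, prevalence)
    print(f"prevalence {prevalence:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
# prevalence 5%:  PPV ~96.9%, NPV ~97.9%
# prevalence 25%: PPV ~99.5%, NPV ~88.2%
```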
RIDT performance at the outbreak level for both influenza viruses A and B showed higher sensitivity than at the specimen level, with significantly higher sensitivity for influenza virus A outbreaks than for influenza virus B outbreaks. Sensitivity was moderate (79%) for influenza virus A outbreaks and low (58%) for influenza virus B outbreaks. Specificity and PPV were very high for both influenza virus A and B outbreaks (>99 and >98%, respectively). The specificity for influenza virus A outbreaks and the NPV for both influenza virus A and B outbreaks varied by influenza season. The specificity variation for influenza virus A just met the threshold for significance; thus, this result may have been due to random variation. The NPV was moderate (89.2%) for influenza virus A and high (97.3%) for influenza virus B. As expected, the NPV was lowest when the influenza virus prevalence was highest.
The performance of RIDTs for the detection of any influenza virus at the outbreak level showed a moderate sensitivity (76.5%), which was slightly lower than the sensitivity for influenza virus A when examined separately. Other performance parameters were similar to individual results for each influenza virus type (all >85%).
RIDTs performed better at the outbreak level than at the specimen level: they missed 21% of influenza virus A outbreaks, compared with 40% of influenza virus A-positive specimens, and 42% of influenza virus B outbreaks, compared with 60% of influenza virus B-positive specimens. However, a positive result has a less than 1% chance of being a false positive at both the outbreak and specimen levels. Therefore, a single positive RIDT result should trigger the initiation of antiviral treatment and prophylaxis for outbreak management. The suboptimal sensitivity of RIDTs can be addressed by the use of molecular testing for a limited number of specimens.
RIDTs performed similarly in all of the outbreak settings, with the exception of their NPV, which was higher in LTCFs. However, these results should be interpreted with caution as the number of outbreak specimens tested from hospitals and retirement homes was small in comparison to those from LTCFs. Thus, the percent positivity of influenza virus specimens in these settings may not represent the true disease prevalence there.
This study has a number of limitations. First, at PHOL, RIDTs were used primarily for outbreak specimens, most of which were submitted from LTCFs; hence, fewer specimens came from younger age groups or from other settings (e.g., hospitals or day care centers). In addition, circulation of influenza virus B was low overall during the study period. This prevented us from fully exploring rapid test performance among different age groups. Second, there were few influenza virus A/(H1N1)pdm09- and influenza virus B-positive specimens from outbreaks; hence, there was insufficient statistical power to compare the performance of RIDTs across all influenza virus types and subtypes. Third, we had to rely on submitters to adhere to PHOL's specimen collection, handling, and transportation recommendations, as noncompliance may affect test results. In addition, the date of symptom onset is not consistently reported on the laboratory requisition form, which prevented us from calculating the mean time from disease onset to specimen collection (11). Fourth, the period of use of the Remel Xpect Flu A&B method could not be defined precisely, since there was no uniformity across regional laboratories in the date of switching to the new testing method. However, the sensitivity analysis addressed this by confirming that the combined results for both RIDTs did not differ from those for the earlier period, when only one RIDT was in use.
In conclusion, our study is unique in that it evaluated RIDT performance indicators at both the specimen and outbreak levels. Because of their high specificity and PPV, even outside of the influenza season, RIDTs can play a role in screening for influenza virus in outbreaks and in the institution of antiviral therapy in a timely manner when the results are positive. RIDT results can also be useful in remote settings where molecular virology testing is not easily accessible. Suboptimal sensitivity of RIDTs can be addressed by the use of more sensitive molecular methods.
ACKNOWLEDGMENTS
Jonathan B. Gubbay received funding from GlaxoSmithKline and Hoffman La Roche to study resistance in influenza viruses. The other authors do not have a commercial or other association to declare.
FOOTNOTES
- Received 16 July 2014.
- Returned for modification 12 August 2014.
- Accepted 2 October 2014.
- Accepted manuscript posted online 15 October 2014.
- Copyright © 2014, American Society for Microbiology. All Rights Reserved.