ABSTRACT
Respiratory syncytial virus (RSV) rapid antigen detection tests (RADT) are extensively used in clinical laboratories. We performed a systematic review and meta-analysis to evaluate the accuracy of RADTs for diagnosis of RSV infection and to determine factors associated with accuracy estimates. We searched EMBASE and PubMed for diagnostic-accuracy studies of commercialized RSV RADTs. Studies reporting sensitivity and specificity data compared to a reference standard (reverse transcriptase PCR [RT-PCR], immunofluorescence, or viral culture) were considered. Two reviewers independently extracted data on study characteristics, diagnostic-accuracy estimates, and study quality. Accuracy estimates were pooled using bivariate random-effects regression models. Heterogeneity was investigated with prespecified subgroup analyses. Seventy-one articles met inclusion criteria. Overall, RSV RADT pooled sensitivity and specificity were 80% (95% confidence interval [CI], 76% to 83%) and 97% (95% CI, 96% to 98%), respectively. Positive and negative likelihood ratios were 25.5 (95% CI, 18.3 to 35.5) and 0.21 (95% CI, 0.18 to 0.24), respectively. Sensitivity was higher in children (81% [95% CI, 78% to 84%]) than in adults (29% [95% CI, 11% to 48%]). Because of this disparity, further subgroup analyses were restricted to pediatric data (63 studies). Test sensitivity was poorest using RT-PCR as a reference standard and highest using immunofluorescence (74% versus 88%; P < 0.001). Industry-sponsored studies reported significantly higher sensitivity (87% versus 78%; P = 0.01). Our results suggest that the poor sensitivity of RSV RADTs in adults may preclude their use in this population. Furthermore, industry-sponsored studies and those that did not use RT-PCR as a reference standard likely overestimated test sensitivity.
INTRODUCTION
Acute respiratory infection (ARI) due to respiratory syncytial virus (RSV) is a leading cause of emergency department (ED) visits and hospitalizations in infants and children (1–3). RSV also produces substantial morbidity and mortality among the elderly and adults with underlying medical conditions (4, 5).
Accurate and prompt diagnosis of RSV ARI can have important benefits for patient care. Because concurrent serious bacterial infection with RSV is uncommon, especially in children (6), a timely diagnosis of RSV ARI should diminish unnecessary antibiotic use (7–9). It may also minimize ancillary testing (10), decrease hospital stay durations (11), and permit prompt cohorting of patients to limit nosocomial transmission within hospitals and long-term-care facilities (13–16, 57). Laboratory testing of respiratory secretions is required for confirmation of RSV infection because its seasonality and nonspecific clinical manifestations may overlap with those of other viral and bacterial causes of ARI (17, 18).
There are currently four RSV diagnostic modalities in clinical use. Viral culture was long considered the gold standard for RSV diagnosis, but it has a turnaround time of 3 to 7 days (1 to 2 days for shell vial culture) (19). Reverse transcriptase PCR (RT-PCR) has a much shorter turnaround time (hours) than culture, as well as superior analytic and clinical sensitivities; it is now the reference diagnostic method for respiratory virus detection (17, 20). However, only ∼15% of clinical laboratories participating in the United States National Respiratory and Enteric Virus Surveillance System (NREVSS) presently identify RSV by RT-PCR because of its associated costs and because of the specialized equipment and expertise required (21). Immunofluorescence (IF) testing for RSV detection is potentially faster than RT-PCR but is less sensitive and requires considerable technical skill (19). Finally, a number of commercially developed rapid antigen detection tests (RADTs) are currently available for the diagnosis of RSV ARI. These assays are easy to perform and provide results in less than 30 min, and several of them have the potential for point-of-care use (22). Although they are less sensitive than culture, their speed and ease of use have made them an integral part of the diagnostic algorithm of many clinical laboratories (21, 22). It is thus crucial for clinicians and for public health surveillance systems that rely on such tests for decision-making to understand their performance characteristics and the factors that might influence them.
To date, the literature evaluating the performance characteristics of RSV RADTs has not yet been systematically reviewed. Therefore, the primary objective of this systematic review and meta-analysis was to summarize the available evidence on the diagnostic accuracy of commercialized RADTs for detecting RSV infection in patients with ARI. We also aimed to determine if patient, test, and methodological factors (e.g., patient age, type of specimen, commercial brand, clinical presentation, duration of symptoms, point-of-care testing, industry study sponsorship, and genotype of infecting RSV strain) might influence RADT accuracy estimates.
MATERIALS AND METHODS
Prior to conducting this study, a protocol was prepared according to standard guidelines for the systematic review of diagnostic studies (23, 24). The PRISMA statement was used for preparing this report (25).
Information sources and searches.PubMed and EMBASE were searched from their inception through November 2013. An update of the search, performed through April 2015, was conducted in PubMed. Studies published in either English or French were considered. The search strategy was designed with the help of an experienced librarian and contained search terms for RSV infection and search terms for rapid diagnostic immunoassays, including the most common brand names (see Table S1 in the supplemental material). Additionally, the reference lists of all included studies and of relevant recent narrative reviews were manually searched for additional studies.
Eligibility criteria and study selection.Studies were considered for inclusion if they assessed the diagnostic accuracy of a commercial rapid immunoassay for RSV in patients with suspected ARI. RADT was defined as any commercialized immunoassay that identifies RSV antigen in respiratory specimens in 30 min or less. In-house tests and precommercial versions were excluded since they are not widely available and may not be standardized. Acceptable reference standards included viral culture, RT-PCR, and IF. Studies were excluded if the rapid test was itself part of a composite reference standard (incorporation bias) or if only rapid-test-negative samples were tested with the reference standard (partial verification bias). Studies were also excluded if they pertained to patients without respiratory illness.
Only original studies that described their methods and reported enough data for the construction of the standard two-by-two table were included. Editorials, letters to the editor, and conference abstracts were excluded since they usually contained insufficient information on many important data items relevant to the investigation of sources of heterogeneity (such as patient characteristics, type of specimen, point-of-care use, etc.) and the ascertainment of methodological quality (blind procedures, patient selection, etc.). Attempts were made to contact the authors if there was insufficient information to construct the two-by-two table. Of the 3 authors contacted, 2 provided additional data.
Following the electronic database search, the title and abstract were screened by one reviewer (C. Chartrand). Full-text articles of relevant citations were obtained and independently assessed for eligibility by two reviewers (C. Chartrand and N. Tremblay). Disagreements were solved by consensus or by involvement of a third reviewer (J. Papenburg).
Data extraction and assessment of the risk of bias.A data extraction form was created and initially used for pilot purposes with a subset of 5 studies by two reviewers (C. Chartrand and J. Papenburg) before being finalized. Two reviewers (C. Chartrand and N. Tremblay) independently extracted data from all included studies and assessed risk of bias. Disagreements were solved by consensus or by involvement of a third reviewer (J. Papenburg).
In the data collection process, the following assumptions or simplifications were applied. In determining the reference standard, traditional viral culture and shell vial culture were considered together, regardless of the cell line used. Similarly, RT-PCR and immunofluorescence were each considered as a whole, independently of the kit or protocol used. If separate information was available for two or more reference standards, RT-PCR was chosen in priority, because of its superior sensitivity and specificity, followed by immunofluorescence and then viral culture. The study population was considered to be pediatric if most (>85%) of the study subjects were younger than 21 years of age or if the investigation was carried out in a pediatric hospital. Point-of-care testing was defined as a test done outside a formal laboratory setting by personnel other than trained laboratory personnel. Specimens were considered to have been collected during the epidemic season for RSV if they were collected during winter or early spring. A study was considered to have been industry sponsored if the industry funded the study or provided index tests to be used in the study.
The risk of bias of included studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-2 tool (26). The risk of bias assessment was used to present an overall picture of the quality of the included studies.
Data synthesis and analysis.Data were extracted to construct two-by-two tables, which were used to calculate the sensitivity and specificity of the rapid tests in each study. The sensitivity and specificity estimates were pooled across studies using a bivariate random-effects regression model (27). The bivariate model takes into consideration the potential tradeoff between sensitivity and specificity by incorporating this negative correlation into the analysis. Since heterogeneity is usually expected in meta-analyses of diagnostic-accuracy studies, a random-effects model is generally preferred (27). The model was also used to draw a summary receiver operating characteristic (SROC) curve plot to graphically depict each study's sensitivity and specificity, along with the summary point. Analyses were conducted using the user-written command midas in Stata (Stata Corp., TX, USA).
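As an illustration of the per-study quantities that enter such a meta-analysis, the minimal Python sketch below computes sensitivity, specificity, and likelihood ratios from a single two-by-two table. The counts are hypothetical, and the sketch does not reproduce the bivariate random-effects pooling itself, which was performed with midas in Stata.

```python
# Minimal sketch (hypothetical counts): per-study accuracy estimates from a
# two-by-two table. The actual pooling across studies used a bivariate
# random-effects regression model (midas in Stata), not this code.

def accuracy_from_2x2(tp, fp, fn, tn):
    """Return sensitivity, specificity, and likelihood ratios for one study."""
    sensitivity = tp / (tp + fn)                 # test-positive among reference-positive
    specificity = tn / (tn + fp)                 # test-negative among reference-negative
    lr_positive = sensitivity / (1 - specificity)
    lr_negative = (1 - sensitivity) / specificity
    return sensitivity, specificity, lr_positive, lr_negative

# Hypothetical study: 80 true positives, 3 false positives, 20 false negatives, 97 true negatives
sens, spec, lr_pos, lr_neg = accuracy_from_2x2(80, 3, 20, 97)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, LR+={lr_pos:.1f}, LR-={lr_neg:.2f}")
```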
Some articles (16 of 71) compared two to four rapid tests using the same specimens. Since including all of these comparisons in our meta-analysis would have resulted in counting results from certain studies (at least) twice, one rapid test comparison was selected per study. After we carried out a sensitivity analysis to assess the impact on the overall accuracy of systematically selecting the most (and then the least) accurate test, the most commonly evaluated test was selected, favoring those still commercially available.
Substantial heterogeneity in levels of test accuracy was expected, and subgroup analyses were planned to attempt to explain the heterogeneity. The following variables were selected a priori as potential sources of heterogeneity: population age (children versus adults), genotype of circulating RSV strain (type A versus type B), brand of rapid test, type of respiratory specimen, duration of symptoms before testing, reference standard used, point-of-care testing, setting and season during which the test was carried out, blind procedures, and industry sponsoring. These variables were added as covariates to the bivariate model, providing enough studies were available in each subgroup. Summary sensitivity and specificity estimates were calculated for each level of a particular covariate, along with their 95% confidence intervals. A P value below 0.05 was used to decide whether there were statistically significant differences in accuracy (joint sensitivity and specificity) across the levels of a particular covariate.
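For readers less familiar with random-effects pooling, the following much-simplified Python sketch pools logit-transformed sensitivities within two subgroups using a univariate DerSimonian-Laird model. It is illustrative only: the study counts are invented, and the actual analysis modeled sensitivity and specificity jointly with the bivariate random-effects model described above rather than this univariate stand-in.

```python
import math

def logit_sensitivity(tp, fn):
    """Logit-transformed sensitivity and its approximate variance for one study.
    A 0.5 continuity correction is applied if a cell is zero."""
    if tp == 0 or fn == 0:
        tp, fn = tp + 0.5, fn + 0.5
    p = tp / (tp + fn)
    return math.log(p / (1 - p)), 1.0 / tp + 1.0 / fn

def pooled_sensitivity_dl(studies):
    """Pool (logit, variance) pairs with a DerSimonian-Laird random-effects model
    and return the pooled estimate back-transformed to a proportion."""
    y = [est for est, _ in studies]
    v = [var for _, var in studies]
    w = [1.0 / vi for vi in v]
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(studies) - 1)) / c)              # between-study variance
    w_star = [1.0 / (vi + tau2) for vi in v]
    pooled_logit = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    return 1.0 / (1.0 + math.exp(-pooled_logit))                # back to a proportion

# Hypothetical subgroups: studies using RT-PCR vs. culture as the reference standard.
rt_pcr = [logit_sensitivity(70, 30), logit_sensitivity(150, 50), logit_sensitivity(90, 35)]
culture = [logit_sensitivity(85, 15), logit_sensitivity(160, 30), logit_sensitivity(95, 20)]
print(pooled_sensitivity_dl(rt_pcr), pooled_sensitivity_dl(culture))
```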
RESULTS
Study selection.After we screened titles and abstracts, 192 articles were eligible for full text review. Of these, 62 articles (28–89) were included in the study (Fig. 1). The update of the search in April 2015 yielded 9 new articles (90–98). The full list of excluded studies, with reasons for exclusion, is available from us upon request.
Study selection. A flow chart summarizing evidence search and study selection is shown. (Flow diagram template from reference 25; for more information, see http://www.prisma-statement.org/.)
Study characteristics.Table 1 presents the main characteristics and results of the 71 included studies, while Table 2 summarizes the distributions of the main variables of interest. Most (83%) studies were conducted in children, and very few (3%) looked specifically at the adult population. Less than half (44%) of the studies gave any information about the clinical presentation of the included patients, and very few (8%) provided information on the duration of symptoms before testing. Fifteen different rapid tests were evaluated by the included studies. The most frequently studied were the Abbott TestPack RSV (Abbott Laboratories, North Chicago, IL) (23 studies), the Directigen tests (Directigen RSV [20 studies] and the newer Directigen EZ RSV [8 studies] [Becton Dickinson, Franklin Lakes, NJ]), and the Binax NOW RSV (Binax, Inverness Medical, Portland ME) (16 studies). RT-PCR was the reference standard in 41% of the studies, while immunofluorescence and culture were each used in half of the remaining studies.
Characteristics of 71 individual studies included in the review
Characteristics of 71 included studies
Risk of bias of included studies.Figure 2 presents an overview of the risk of bias of included studies, using the QUADAS-2 criteria. Because of our inclusion criteria, all included studies used an appropriate reference standard. Since culture, immunofluorescence, and RT-PCR were considered to be objective tests, whether or not they were interpreted without knowledge of the results of the index test was deemed not to affect the risk of bias assessment. However, only 61% of the included studies reported that index test results were interpreted without knowledge of the results of the reference standard, an important potential source of bias with the use of nonautomated colorimetric tests (99). Whether patient selection was consecutive or random, and whether the selected patients were applicable to the research question, were difficult to ascertain for many studies, but very few (3 studies) used a clear case-control design, which, by creating an extreme contrast between diseased and nondiseased groups, can overestimate a test's accuracy.
Risk of bias of included studies. Data represent the risk of bias of included studies as assessed by reviewers using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) 2 tool.
Synthesis of results.Overall, rapid immunoassays for RSV demonstrated pooled sensitivity of 80% (95% CI, 76 to 83) and pooled specificity of 97% (95% CI, 96 to 98). This corresponds to a positive likelihood ratio of 25.5 (95% CI, 18.3 to 35.5) and a negative likelihood ratio of 0.21 (95% CI, 0.18 to 0.24). Systematically choosing the most accurate test or the least accurate test in cases in which a study evaluated two or more rapid tests did not significantly change the overall accuracy (for the best tests, pooled sensitivity was 80% [95% CI, 77 to 83] and pooled specificity was 97% [95% CI, 95 to 98]; for the worst tests, pooled sensitivity was 79% [95% CI, 75 to 82] and pooled specificity was 96% [95% CI, 95 to 98]). As shown in the summary receiver operating characteristic (SROC) plot in Fig. 3, there was greater variation in sensitivity (from 12.2% to 98.3%) than in specificity (from 67.1% to 100%) across studies, with only 13% of the reports indicating specificity estimates below 85%. Forest plots of individual studies and pooled sensitivity and specificity estimates are presented in Fig. S1 in the supplemental material.
Hierarchical summary receiver operating characteristic curve plot of RSV rapid antigen detection test diagnostic-accuracy studies. Individual studies (n = 71) are shown as open circles. The summary operating point is shown as a closed diamond (with surrounding 95% confidence and prediction contours), representing sensitivity (SENS) and specificity (SPEC) estimates pooled by using a bivariate random-effects regression model. The hierarchical summary receiver operating characteristic curve (SROC) is shown as a solid line. AUC, area under the curve.
Investigation of heterogeneity.In an attempt to explain the observed heterogeneity in test accuracy (mainly in terms of sensitivity), subgroup analyses were conducted. Table 3 presents the accuracy estimates for the different subgroups.
RSV RADT accuracy estimates from subgroup analyses
Rapid tests for RSV were significantly more sensitive in children than in adults, with pooled sensitivity of 81% (95% CI, 78 to 84) in children and pooled sensitivity of only 29% (95% CI, 11 to 48) in adults. Because of this important disparity in terms of sensitivity and the relatively small number of studies in adults, the rest of the subgroup analyses were conducted exclusively in children (pediatric studies and pediatric subgroup data from mixed-population studies) to alleviate the confounding that would result from an unbalanced distribution of adults and children between levels of another variable. Eight studies were thus excluded from the other subgroup analyses.
As expected, rapid tests for RSV performed worst in assessments against RT-PCR (pooled sensitivity, 74% [95% CI, 71 to 78]) and better in assessments against immunofluorescence (pooled sensitivity, 88% [95% CI, 86 to 91]) or culture (pooled sensitivity, 83% [95% CI, 79 to 88]) owing to the higher accuracy of RT-PCR.
Test accuracy results were fairly similar between the different rapid tests and the different types of specimens used (Table 3). Two immunoassays recently cleared by the FDA that employ an instrument-based digital scan of the test strip to improve accuracy, the BD Veritor RSV (Becton, Dickinson and Company, Franklin Lakes, NJ) and the Quidel Sofia RSV (Quidel Corporation, San Diego, CA), were evaluated in 4 and 5 studies, respectively. However, when two or more index tests were evaluated in a study, our prespecified selection criterion for determining which results should be included in our pooled analyses was the use of the most commonly evaluated method. Consequently, too few studies of these two automated immunoassays were available in our pooled subgroup analyses for bivariate random-effects models to converge. Nevertheless, analyzed separately, the pooled accuracy estimates from the 4 BD Veritor RSV studies (sensitivity, 76% [95% CI, 72 to 80]; specificity, 99% [95% CI, 98 to 99]) and the 5 Quidel Sofia RSV studies (sensitivity, 77% [95% CI, 71 to 82]; specificity, 97% [95% CI, 93 to 98]) were similar to our overall results for RADTs compared with RT-PCR as the reference standard.
Neither the clinical setting in which the test was performed nor whether it was done at the point of care had a significant impact on RADT accuracy. Likewise, methodological factors, such as whether the samples were collected during the epidemic season for RSV or whether the rapid tests were interpreted without knowledge of the result of the reference standard, did not have a statistically significant effect on the pooled accuracy estimates, although studies that reported blinded interpretation of the rapid test tended to have lower pooled sensitivity (79% versus 84%; P = 0.11). Industry-sponsored studies reported significantly higher sensitivity for rapid tests for RSV (pooled sensitivity of 87% [95% CI, 83 to 90] compared to 78% [95% CI, 75 to 82] for studies not sponsored by industry; P = 0.01). Since the year 2000, the proportions of studies that used RT-PCR as the reference standard were not significantly different between those sponsored by industry and those not sponsored by industry (50% and 72%, respectively; P = 0.25). We did not analyze studies published before 2000 for this last comparison because RT-PCR was not widely available prior to that time and because only one pre-2000 publication reported RT-PCR results (88).
Too few studies gave information on symptom duration before testing to allow us to do pooled analyses. Similarly, only 5 studies compared the sensitivities of the rapid test for detecting RSV type A and RSV type B (specificity could not be calculated since the rapid tests do not discriminate between the different genotypes). Results of analyses of the effect of RSV genotype on RADT accuracy are presented in Table S2 in the supplemental material.
DISCUSSION
This systematic review and meta-analysis is the first to synthesize the available evidence on the diagnostic accuracy of RSV RADTs. Overall, we observed that these simple and rapid assays displayed consistently high specificity (97%) and positive likelihood ratio (25.5) results. Therefore, physicians can diagnose RSV ARI with confidence on the basis of a positive RSV RADT result. Timely and accurate diagnosis of RSV has the potential to improve patient care and decrease health care costs by permitting prompt hospital infection control measures (13–15, 57) and by decreasing unnecessary antibiotic use (7–9) and unneeded ancillary investigations (e.g., chest radiography, blood cultures, and urine cultures) (10, 11).
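To illustrate what a positive likelihood ratio of 25.5 means at the bedside, the short Python sketch below converts an assumed pretest probability of RSV ARI into a post-test probability. The 30% pretest value is hypothetical, chosen purely for illustration, and is not derived from the included studies.

```python
# Hypothetical worked example: how a positive or negative RADT result updates
# the probability of RSV ARI, using the pooled likelihood ratios from this review.
# The 30% pretest probability is assumed purely for illustration.

def post_test_probability(pretest_probability, likelihood_ratio):
    pretest_odds = pretest_probability / (1 - pretest_probability)
    post_test_odds = pretest_odds * likelihood_ratio
    return post_test_odds / (1 + post_test_odds)

print(post_test_probability(0.30, 25.5))   # ~0.92 after a positive result
print(post_test_probability(0.30, 0.21))   # ~0.08 after a negative result (does not exclude RSV)
```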
A key finding of our study is that RSV RADTs demonstrated a sensitivity of only 29% in adults. It should be noted that this pooled estimate of sensitivity is based on relatively limited data: 4 studies that evaluated RSV RADTs in 738 adults, including elderly and immunocompromised subjects (33, 34, 38, 60). However, poorer sensitivity with advancing age is expected, because prior immunity, although insufficient to protect against reinfection, diminishes viral titers in respiratory secretions as well as the duration of viral shedding (22, 100, 101). Given the observed lack of sensitivity of RSV RADTs in adults, their utility in this population, especially among the elderly and the immunocompromised, is probably very limited. In children, we observed an overall pooled sensitivity of 81%. This level of accuracy is likely to be considered acceptable by many users. However, among pediatric studies, in comparisons of RADTs to RT-PCR, pooled sensitivity decreased to 74%. Therefore, clinicians need to be aware of the possibility of false-negative RADT results in children and should consider retesting a negative sample by a more sensitive method, e.g., RT-PCR, if the result could influence patient management. From a public health perspective, because clinical laboratories that provide data to RSV surveillance systems frequently use RADTs (21, 102), test sensitivity must be taken into account to avoid underestimating the burden of RSV-associated disease.
We found that the choice of reference standard by the investigators significantly affected RADT sensitivity estimates. Studies that employed only viral culture or immunofluorescence as a comparator exhibited pooled sensitivities that were 9 or 14 percentage points higher, respectively, than those that used RT-PCR. RSV diagnostic research that does not use RT-PCR as a reference standard is therefore likely to overestimate RADT accuracy. However, it has been observed that, in the clinical setting, RT-PCR may detect asymptomatic infection or very low levels of viral shedding, which can sometimes be of questionable clinical significance (103).
Among the 63 pediatric studies included in our subgroup analyses, approximately one-third declared industry sponsorship in the form of funding or in-kind provision of study materials. These industry-sponsored studies produced significantly higher sensitivity estimates (87% versus 78%; P = 0.01). There is considerable evidence that industry-sponsored biomedical research tends to produce proindustry results (104). Our finding might be partially explained by industry preferentially supporting study designs that favor the performance of their product (105), e.g., favoring the index test through the systematic use of a less accurate comparator. While the proportion of industry-sponsored studies published since the year 2000 that used RT-PCR was smaller than that of nonsponsored studies, this difference was not statistically significant (50% versus 72%; P = 0.25). Publication bias, the phenomenon of favorable results being published more frequently than negative results, has also been hypothesized to contribute to associations between industry sponsorship and study outcomes (104). We could not formally assess publication bias because the methods typically employed for its detection are not reliable when used with diagnostic-accuracy data (27).
Two novel RADT platforms use automated instruments, the BD Veritor System and the Quidel Sofia Analyser, to read the signal produced by the test strip. These newer-generation RADTs eliminate the potential subjectivity of an operator visualizing and interpreting test results, which can lead to improved assay performance (106). However, the pooled estimates of the sensitivities of these novel RSV RADTs (76% and 77% compared to RT-PCR) were not substantially different from those of other assays in our review. This is in keeping with our overall finding that the levels of accuracy did not differ significantly across commercial brands.
The evidence base for this review has potential limitations. We highlight a nearly uniform lack of important contextual information among included studies. Data that are not readily available to laboratory practitioners, such as clinical manifestations, presence of comorbidities, and duration of symptoms, are relevant because they could affect RADT accuracy. Also, patient age clearly influences test performance. Although we were able to broadly stratify data into pediatric and adult age groups, we could not perform finer analyses on the effect of age, as further age subcategories were not uniformly reported, if at all. Finally, we were unable to draw any conclusions about the effect of RSV genotype on RADT sensitivity because too few studies used a comparator that could distinguish RSV-A from RSV-B.
Because of their simplicity and speed, RSV RADTs are considered by many clinical laboratories to be valuable diagnostic tools, despite their modest sensitivity compared to more-complex diagnostic methods such as RT-PCR. Novel, highly accurate rapid molecular assays for RSV that may be just as fast and easy to operate as RADTs are currently in development (20, 107). Nevertheless, the relatively low cost of commercial RSV RADTs and the advent of newer assays with automated readers are likely to ensure their continued widespread use in the near future, particularly in children. Therefore, understanding their performance characteristics is important to inform diagnostic laboratory researchers who must decide upon their implementation, clinicians who rely on RSV RADTs to guide patient management, and public health authorities who must interpret RSV surveillance data that utilize RADT results. Our systematic review and meta-analysis suggest that the very poor pooled sensitivity of RSV RADTs in adults may preclude their use in this population. We also found that studies published to date that were sponsored by industry produced higher index test sensitivity estimates. Finally, diagnostic-accuracy studies that did not use RT-PCR as a reference standard likely produced overestimates of RSV RADT sensitivity.
ACKNOWLEDGMENTS
We thank Nandini Dendukuri for helpful discussions regarding data analysis and Patricia S. Fontela for her thoughtful review of the manuscript.
FOOTNOTES
- Received 15 July 2015.
- Returned for modification 10 August 2015.
- Accepted 1 September 2015.
- Accepted manuscript posted online 9 September 2015.
Supplemental material for this article may be found at http://dx.doi.org/10.1128/JCM.01816-15.
- Copyright © 2015, American Society for Microbiology. All Rights Reserved.
REFERENCES