ABSTRACT
The sensitivity and specificity of the human immunodeficiency virus type 1 (HIV-1)-specific immunoglobulin G capture enzyme-linked immunosorbent assay (BED-CEIA) for identifying recent HIV infection were compared with those of the avidity index method using a panel of 148 samples (81 patients) representing durations of infection ranging from 0 to 222 weeks. The results of the two tests were similar (sensitivity of 80% versus 74% [P = 0.53]; specificity of 86% versus 82% [P = 0.67]).
In Germany, surveillance of the human immunodeficiency virus (HIV) epidemic is based on newly diagnosed HIV infections that are reported to the Robert Koch Institute. The duration of HIV infection in newly diagnosed individuals is usually unknown and can be highly variable. The ability to differentiate recent from chronic HIV type 1 (HIV-1) infections among newly diagnosed patients would greatly improve estimates of incidence.
Janssen et al. (6) first described a serological testing algorithm for recent HIV seroconversion that exploits the increase of HIV-specific immunoglobulin G (IgG) antibodies early after infection by comparing the enzyme-linked immunosorbent assay (ELISA) reactivity of undiluted serum samples with that of diluted samples (“detuned” ELISA). Recently, a commercially available test, the BED-CEIA (Calypte), was approved by the FDA for epidemiological studies. In the BED-CEIA, the increase in antibodies specific for HIV-1 gp41 (a branched gp41 peptide derived from subtypes B, CRF01_AE, and D) is determined in relation to IgG levels (2, 14). Barin et al. (1) previously developed a test algorithm that uses a mixture of defined HIV antigens (IDE-V3 enzyme immunoassay); however, this test is not commercially available.
The increase of antibody avidity early in infection, resulting from the somatic hypermutation of IgG genes in B cells, was examined previously by Suligoi et al. (20, 21). An avidity index (AI) was established that scores the stability of antibody binding in the presence or absence of a chaotropic reagent (guanidine hydrochloride [G]) using an automated ELISA (AxSYM HIV1/2gO; Abbott, Delkenheim, Germany). All of these methods have been used in epidemiological studies (1, 4, 5, 7, 9-13, 15, 16, 18, 19).
The aim of our study was to compare the sensitivity and specificity of BED-CEIA and the AI method for the differentiation of recent (incident) and chronic (prevalent) HIV-1 infections. Both are commercially available or are based on a commercially available test format but rely on different test parameters. The final goal was to identify the test most appropriate for studying HIV incidence in Germany.
Eighty-one adult patients with a defined date of infection and zero to four follow-up samples per patient were selected from the German HIV seroconverter study (3, 8), yielding 148 plasma samples that were used to define optimal sensitivity and specificity and to compare these values with previously reported data. HIV seroconverters are defined either by a documented last negative and a first immunoblot-confirmed positive antibody test or by a first reactive test obtained before completion of seroconversion (ELISA negative/indeterminate with a positive viral load, or ELISA positive with a negative/indeterminate immunoblot).
Samples for the reference panel were selected from the seroconverter cohort according to the following criteria: (i) a maximum interval of 3 months between the last negative and the first positive antibody test dates (n = 9) or an immunoblot-positive follow-up sample from a seroconverter defined by a first reactive test (n = 72), (ii) an antiretroviral treatment-naïve patient, (iii) no AIDS-defining disease, and (iv) HIV-1 subtype B infection (pol subtype). The first reactive test date or the midpoint (arithmetic mean) between the last negative and the first positive test dates was considered the best proxy for the date of infection. Samples were collected 0 to 56 months after infection (73 samples at ≤6 months and 75 samples at >6 months). Sample aliquots were stored at −70°C until use.
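For illustration only, a minimal sketch of the infection-date proxy described above (the first reactive test date when available, otherwise the midpoint between the last negative and first positive test dates); the function name and signature are hypothetical.

```python
from datetime import date
from typing import Optional

def estimate_infection_date(first_positive: date,
                            last_negative: Optional[date] = None,
                            first_reactive: Optional[date] = None) -> date:
    """Proxy for the date of infection, as described in the text:
    prefer the first reactive test date; otherwise use the midpoint
    (arithmetic mean) between the last negative and first positive dates."""
    if first_reactive is not None:
        return first_reactive
    if last_negative is None:
        raise ValueError("need a first reactive date or a last negative test date")
    return last_negative + (first_positive - last_negative) / 2

# Example: last negative on 10 Jan 2005, first positive on 1 Mar 2005
# -> estimated infection date 4 Feb 2005 (midpoint).
```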
All samples were tested in duplicate by both methods according to the manufacturers' specifications. For the BED-CEIA, samples were diluted 1:100, and a normalized optical density (ODn) was determined using an internal calibrator. An ODn of ≤0.8 indicated a duration of infection of ≤6 months (155 days according to the package insert).
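As a minimal sketch (not the vendor's software), the normalization and recency call described above can be written as follows; the variable names and the calibrator handling are illustrative assumptions.

```python
def bed_ceia_recent(od_sample: float, od_calibrator: float,
                    cutoff: float = 0.8) -> bool:
    """Classify a sample as 'recent' (incident) if the normalized
    optical density, ODn = OD(sample) / OD(calibrator), is <= cutoff
    (0.8 in this study, corresponding to <=6 months of infection)."""
    odn = od_sample / od_calibrator
    return odn <= cutoff
```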
To determine the AI, samples were prediluted 1:10 in 1 M G (Sigma-Aldrich, Wiesbaden, Germany) or in phosphate-buffered saline (PBS) (20) before testing with the automated immunoassay (AxSYM HIV1/2gO). Sample/cutoff (S/CO) ratios were determined for both measurements, and the AI was calculated as their ratio [AI = (S/CO)G/(S/CO)PBS]. An AI of ≤0.8 indicated a duration of infection of less than 6 months (20).
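Analogously, a short sketch of the AI calculation from the two S/CO measurements (guanidine-treated versus PBS-treated aliquot); the parameter names are illustrative.

```python
def avidity_index(sco_guanidine: float, sco_pbs: float) -> float:
    """AI = (S/CO) measured in 1 M guanidine / (S/CO) measured in PBS."""
    if sco_pbs == 0:
        raise ValueError("S/CO in PBS is zero; AI is undefined")
    return sco_guanidine / sco_pbs

def ai_recent(sco_guanidine: float, sco_pbs: float, cutoff: float = 0.8) -> bool:
    """Classify a sample as 'recent' if its avidity index is <= cutoff."""
    return avidity_index(sco_guanidine, sco_pbs) <= cutoff
```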
To determine the optimal sensitivity and specificity for the seroconverter sample panel, the cutoff values for AI and ODn, as well as the duration of infection (window period) defining “incident” versus “prevalent” samples, were varied.
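The kind of grid search implied here could look like the following sketch, assuming each sample is represented by its test value (ODn or AI) and its known duration of infection in months; the data structure and the candidate cutoffs are illustrative, not those of the original analysis.

```python
from itertools import product

def sensitivity_specificity(samples, value_cutoff, window_months):
    """samples: iterable of (test_value, months_since_infection) pairs.
    A sample is truly incident if it was collected <= window_months after
    infection and is called incident if its test value is <= value_cutoff."""
    tp = fp = tn = fn = 0
    for value, months in samples:
        truly_incident = months <= window_months
        called_incident = value <= value_cutoff
        if truly_incident and called_incident:
            tp += 1
        elif truly_incident:
            fn += 1
        elif called_incident:
            fp += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

def sweep_cutoffs(samples, value_cutoffs=(0.6, 0.7, 0.8, 0.9, 1.0),
                  windows=(4, 5, 6)):
    """Sensitivity/specificity for every combination of value cutoff
    and window period (in months)."""
    return {(c, w): sensitivity_specificity(samples, c, w)
            for c, w in product(value_cutoffs, windows)}
```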
Optimal sensitivity (80% [95% confidence interval {CI}, 68 to 89]) and specificity (86% [95% CI, 76 to 92]) were obtained with the BED-CEIA when an ODn of ≤0.8 and a duration of infection of 5 months (20 weeks) were used as the cutoffs separating incident and prevalent samples of the reference panel (Fig. 1 and 2). The positive and negative predictive values of the BED-CEIA were 81% and 85%, respectively. For the avidity method, an optimal sensitivity of 74% (95% CI, 61 to 84) and specificity of 82% (95% CI, 72 to 89) were obtained when an AI of ≤0.8 and a duration of infection of less than 5 months were used to define recent infections (Fig. 1 and 2). Under these conditions, the positive and negative predictive values were 76% and 80%, respectively. Ten incident and six prevalent samples were misclassified by both methods, whereas 9 samples were misclassified by the BED-CEIA alone and 16 by the avidity method alone.
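For completeness, a sketch of how predictive values and binomial confidence intervals of this kind can be computed; the Wilson score interval used here is an assumption, since the text does not state which CI method was applied.

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion (assumed
    method; the original CI calculation is not specified in the text)."""
    if n == 0:
        return float("nan"), float("nan")
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def predictive_values(tp: int, fp: int, tn: int, fn: int):
    """Positive and negative predictive values from a 2x2 table of
    incident/prevalent classifications versus the true status."""
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return ppv, npv
```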
The sensitivity and specificity of the two methods for detecting incident infections did not differ significantly (P = 0.53 for sensitivity; P = 0.67 for specificity) (Table 1), and combining the results of both tests improved neither sensitivity nor specificity. None of the parameters examined (viral load, CD4 cell count, and titer of HIV-1-neutralizing antibodies) correlated with misclassification (data not shown). We compared specificity and sensitivity with the values reported previously by Parekh et al. (14) and Suligoi et al. (20) because they used similarly well-defined study panels of HIV seroconverters. The sensitivities and specificities of both tests were lower when applied to the sample panel of the German seroconverter cohort, but the differences were not significant (Table 1). The slightly lower sensitivity and specificity could be due to differences in the sample panels: the seroconverter panel used here included more samples collected between 6 and 12 months after infection than those used in other studies (14, 20), and most misclassifications occurred with samples collected close to the cutoff between the incident and prevalent categories. Similar results were described previously by Sakarovitch et al. (17), who compared four different assays for detecting recent HIV-1 infections in a West African population and found that all tests, including the BED-CEIA, failed to give reliable estimates of HIV incidence. Together with our data, this underscores the need to evaluate serological assays for the detection of recent HIV seroconversion using reference panels that match the study populations with respect to geographical origin and subtype distribution if reliable cutoff values are to be defined. Because antiretroviral treatment and AIDS at later stages of infection might contribute to reduced avidity and low IgG levels (6, 19), thereby increasing the false-incident proportion, such samples were excluded from our panel.
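The text does not state which statistical test produced the quoted P values; purely as an illustration, a two-proportion comparison using Fisher's exact test (via SciPy) could be set up as follows. A paired test such as McNemar's would be a reasonable alternative, since both assays were run on the same samples.

```python
from scipy.stats import fisher_exact

def compare_proportions(correct_a: int, n_a: int,
                        correct_b: int, n_b: int) -> float:
    """Compare two sensitivities (or specificities), each given as the
    number of correctly classified samples out of n, with Fisher's exact
    test. Illustrative only; the original analysis may have used a
    different (e.g., paired) test."""
    table = [[correct_a, n_a - correct_a],
             [correct_b, n_b - correct_b]]
    _, p_value = fisher_exact(table)
    return p_value
```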
In contrast to reports claiming that the BED-CEIA overestimates incidence rates (7, 22), fewer samples from the seroconverter panel were classified as false incident by the BED-CEIA than by the AI method (20% versus 26%; the difference was not significant). Based on the seroconverter sample panel, the optimal window period for incident samples was 5 months, compared to the 6 months indicated by other sample panels (14, 20). Since 89% of the study samples were derived from patients with a first reactive test date, we believe that the sample panel used in this study reflects the true duration of infection very closely.
In conclusion, comparable results were obtained using two methods to differentiate recent from long-standing HIV infection in a sample panel derived from HIV-1 seroconverters with a well-defined date of infection. The major drawback of the AI method is the need to use the automated test version, which, although a standard in commercial diagnostic laboratories, is more expensive than the BED-CEIA in a research setting.
HIV incidence estimates derived from observed seroconversions in a longitudinal cohort study conducted in North America and The Netherlands correlated well with cross-sectional, serology-based BED-CEIA measurements in the same study population from a multicenter vaccine trial (10). However, these results were obtained in regions where HIV subtype B infections are prevalent and might not be fully applicable to regions where the HIV epidemic is characterized by other subtypes. Furthermore, analytical adjustments to improve incidence estimates (such as changing the window period or using more restrictive cutoff values with respect to the predictive values) have been described previously (10, 16, 17). Quality assessment of the different assays using well-characterized serum panels would be very useful for evaluating the feasibility of national HIV incidence surveillance programs involving multicenter studies. Monitoring trends in HIV incidence is important for identifying populations at increased risk of HIV infection and for targeting specific preventive measures accordingly.
FIG. 1. Comparison of the BED-CEIA and the AI method using the HIV-1 seroconverter reference sample panel. (a) BED-CEIA, with ODn as a function of duration of infection. (b) AI method, with AI as a function of duration of infection. Dashed lines indicate the cutoff values resulting in optimal sensitivity and specificity (ODn of 0.8 [a] and AI of 0.8 [b], with a time window of 5 months [20 weeks] [identical in a and b]). The AI could not be calculated for five incident samples because the extinction (absorbance) in the presence of G was zero.
FIG. 2. Differentiation of incident and prevalent samples using the BED-CEIA and AI methods. Boxes extend from the 25th to the 75th percentile; the lines inside the boxes represent the median values. Samples of the reference panel are grouped according to a duration of infection of ≤5 months or >5 months. Using this cutoff, the best sensitivity and specificity were obtained in combination with the optimal cutoff values of an ODn of 0.8 for the BED-CEIA (a) and an AI of 0.8 for the AI method (b) (dashed lines). The AI could not be calculated for five incident samples because the extinction (absorbance) in the presence of G was zero.
TABLE 1. Comparison of sensitivities and specificities
ACKNOWLEDGMENTS
We thank all private practitioners and clinic physicians who contributed to this study by providing samples and data from HIV patients.
The study was funded by a grant from the German Federal Ministry of Health.
FOOTNOTES
- Received 23 May 2007.
- Returned for modification 7 September 2007.
- Accepted 24 October 2007.
- Copyright © 2008 American Society for Microbiology