If you’re involved with complex diagnostic tests — in particular, next generation sequencing (NGS)-based laboratory-developed tests (LDTs) — producing the right results consistently can be a big concern. Your test equation has many different variables, each of which carries a chance of something going wrong:
- The multiple manual steps of the wet lab work.
- The vagaries and many parameters of the dry-lab (bioinformatics) analyses.
- The challenge of interpretation (depending on the nature of the test).
When an LDT Goes Wrong
The recent high-profile fiasco at fingerstick microfluidics diagnostics company Theranos Incorporated is a case study in the genuine harm testing errors can inflict on patients.
The Wall Street Journal reports that the undue anxiety and other harm patients experienced from incorrect test results sparked at least 10 lawsuits against the company in California and Arizona.
“While inaccurate test results can occur at any laboratory, Theranos failed to maintain basic safeguards to ensure consistent results, according to regulators, independent lab directors and quality-control experts.”
Theranos may be one extreme example, but in late 2015, the U.S. Food and Drug Administration (FDA) published a report outlining 20 instances of harm from LDTs.
The acute dangers of false-positive and false-negative results from laboratory-developed tests are real:
- When patients are told they have conditions they do not actually have, it can cause unneeded distress and unnecessary treatment.
- When life-threatening diseases go undetected, patients can suffer and die.
In our experience, NGS-based LDTs are error-prone for two main reasons:
- The inadequacy of “known positive” samples.
- The lack of peer review for comparing one lab’s results with another’s.
Here, we’ll take a closer look at each problem and suggest a solution.
1. The Problem: ‘Known Positives’ Based on a Single Mutation
As a lab director in a routine, clinical-testing “production laboratory” environment, you may receive an assay that was developed and validated by an independent group (or groups). Your task now is to take that test and place it into routine use, often with residual patient samples with already-known genetic characteristics.
These remnant samples will have been tested as “known positives,” and in monogenic disease or cancer tests, a single genetic mutation “drives” the condition.
With commonly used multiplexed NGS-based assays, the genetic “footprint” of the assay can range from a few genes to the entire exome (over 20,000 genes). Yet the residual sample you’re using as a known positive will have only a single mutation (and many times, a single nucleotide substitution) out of potentially hundreds of alternative mutations.
Does it make sense to assume a single mutation in a single gene can stand in for the dozens of mutations of varying types (single nucleotide variants, insertion-deletion, structural variants), much less for dozens or hundreds of genes?
A single remnant sample of limited abundance and a single mutation cannot adequately account for the thousands of potential deleterious mutations you can detect with a single assay.
And, that’s assuming that single mutation in a single gene from that leftover patient sample is an “easy-to-detect” single-nucleotide variant (SNV). What about a harder-to-detect 15 base-pair deletion? What about a 12 base-pair insertion?
One way to increase the number of mutations in a sample is to grow mutated cell lines, purify their genomic DNA, and blend it together.
This approach, however, has its own limitations, not the least of which is the number of cell lines you can mix together before the relative allele frequencies become unusable. Plus, blending multiple genomic backgrounds can confuse the analysis and prevent determination of accurate specificity metrics for the different variants within the assay.
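To see why the number of blendable cell lines is limited, consider the arithmetic of allele dilution. The sketch below is illustrative only (the function name and blend sizes are our own, not a SeraCare method): a heterozygous variant sits at roughly 50% allele frequency in its source cell line, and an equal-parts blend of N lines divides that frequency by N, where each variant is unique to one line.

```python
def blended_vaf(per_line_vaf: float, n_lines: int) -> float:
    """Expected variant allele frequency (VAF) after mixing n_lines
    genomic DNA preparations in equal proportions, assuming each
    variant is present in only one of the blended lines."""
    return per_line_vaf / n_lines

# A heterozygous variant is at ~50% VAF in its source cell line.
HET_VAF = 0.5

for n in (2, 5, 10, 20):
    print(f"{n:>2} lines blended -> {blended_vaf(HET_VAF, n):.1%} per-variant VAF")
```

Under these assumptions, blending ten heterozygous lines already pushes each variant down to about 5% VAF, and twenty lines to 2.5%, approaching the limit of detection of many NGS assays. That is why simply mixing more and more cell lines cannot scale to the thousands of variants a large panel can report.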
In the routine clinical NGS production environment, you don’t want the inherent limitations of remnant patient samples, nor the difficulties of producing and qualifying laboratory-made materials. Multiplex controls are the solution here, because:
- They allow you to interrogate important variants by clinical relevance, even those that are hard to sequence.
- You can test for these variants in a single run, at a lower cost than running 10 or more individual samples.
- The product is stable and reproducible lot to lot.
SeraCare has highly multiplexed reference materials in both purified nucleic acid and patient-like formats, whether as synthetic plasma for ctDNA or NIPT tests or as FFPE sections for fusion RNA NGS-based assays, in addition to biosynthetic targets as purified DNA blends of mutations.
2. The Problem: No Well-Developed System for Comparing Labs
As the director of a laboratory that has developed an NGS-based LDT, one question you need to address is how your laboratory’s performance compares with results from other laboratories.
Historically, in the context of clinical chemistry and molecular diagnostics, organizations such as the College of American Pathologists in the United States or NEQAS (National External Quality Assessment Service) in the UK have performed this “peer review” data comparison. For NGS-based LDTs, however, the peer-review assessment process is still in its infancy.
How will you determine your laboratory’s relative proficiency, especially if multiple instruments are involved, perhaps across multiple laboratories? How will you be able to track that performance over time?
One option is to use SeraCare’s iQ NGS QC Management Software along with multiplex reference materials; the software flexibly manages clinical NGS data from DNA extraction through DNA quality, quantitation, and sequencing performance. Our iQ NGS software has a peer-review feature that allows you to easily perform comparisons on demand with anonymized data.