If you’re involved with complex diagnostic tests — in particular, next generation sequencing (NGS)-based laboratory-developed tests (LDTs) — producing the right results consistently can be a big concern. Your test equation has many different variables, each of which carries a chance of something going wrong:
- The multiple manual steps of the wet lab work.
- The vagaries of, and many parameters in, the dry lab (bioinformatics) analyses.
- The challenge of interpretation (depending on the nature of the test).
When an LDT Goes Wrong
The recent high-profile fiasco at fingerstick microfluidics diagnostics company Theranos Incorporated is a case study in the genuine harm testing errors can inflict on patients.
The Wall Street Journal reports that the undue anxiety and other harm patients experienced from incorrect test results sparked at least 10 lawsuits against the company in California and Arizona.
“While inaccurate test results can occur at any laboratory, Theranos failed to maintain basic safeguards to ensure consistent results, according to regulators, independent lab directors and quality-control experts.”
Theranos may be one extreme example, but in late 2015, the U.S. Food and Drug Administration (FDA) published a report outlining 20 instances of harm from LDTs.
The acute dangers of false-positive and false-negative results from laboratory-developed tests are real:
- When patients are told they have conditions they do not actually have, it can cause unneeded distress and unnecessary treatment.
- When life-threatening diseases go undetected, patients can suffer and die.
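The harm from false positives is amplified when a test screens for a rare condition: even a highly accurate test can produce more false positives than true positives. A short sketch illustrates this with Bayes' rule (the sensitivity, specificity, and prevalence figures below are assumed for illustration, not taken from any test discussed here):

```python
# Illustrative sketch: how test accuracy interacts with disease prevalence.
# All numeric values are hypothetical, chosen only to make the point.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a binary diagnostic test via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)   # P(disease | positive result)
    npv = true_neg / (true_neg + false_neg)   # P(healthy | negative result)
    return ppv, npv

# A seemingly excellent test (99% sensitive, 99% specific) applied to a
# rare condition (0.1% prevalence):
ppv, npv = predictive_values(0.99, 0.99, 0.001)
print(f"PPV: {ppv:.1%}")  # about 9% -- most positive results are false alarms
print(f"NPV: {npv:.4%}")
```

In this hypothetical scenario, roughly nine out of ten positive results are false positives, which is exactly the kind of outcome that drives unneeded distress and unnecessary treatment.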
And naturally, the FDA wants to exert its regulatory authority consistent with its mandate. (For additional background on the FDA’s oversight push, read our point/counterpoint articles.)
In our experience, NGS-based LDTs are error-prone for two main reasons:
- The inadequacy of “known positive” samples.
- The lack of peer review for comparing one lab’s results with another’s.
Here, we’ll take a closer look at each problem and suggest a solution.