It is very likely that on your last flight the turbofan engines were controlled by full authority digital engine controls – FADECs for short. FADECs have played a significant role in keeping airline ticket prices low (except during holidays) by continually adjusting engine parameters so that the engine operates with maximum fuel efficiency and within operational limits, allowing pilots to focus on other tasks.
If you’ve attended the AMP Annual Meeting over the years or seen any of the headlines it generates, you know how next-generation sequencing-based assays are becoming indispensable diagnostic, prognostic, and predictive tools for a growing number of disease states. Just as important as the newest biomarker or latest chemistry, though seemingly less headline-worthy, are NGS quality control and standardization.
For many years, next-generation sequencing (NGS) made headlines, with researchers promising unprecedented breakthroughs in medical diagnostics. But the clinical impact was always described as being a few years and several more large-scale studies away from reality. In 2010, forward-thinking academics forecast whole-genome sequencing in a matter of hours for only $30 (right around that same time, a Stanford researcher sequenced his own genome for less than $50,000, a record low at that point).
Laboratory-developed tests (LDTs) have proliferated in the absence of clear guidelines and regulations. So how can laboratorians, physicians, and patients be assured of the quality of the diagnostic result? A panel of clinical genomics experts (Girish Putcha, MD, PhD, Director of Laboratory Science, Palmetto GBA; Roger Klein, MD, Principal, JD Consulting; Elaine Lyon, PhD, Medical Director, Molecular Genetics and Genomics, ARUP Laboratories; and Russell Garlick, PhD, CSO, SeraCare) delved into this topic during the audience Q&A session of a recent webinar hosted by GenomeWeb (you can download a full report on the entire series here).
The Chair of Molecular Diagnostics, Department of Pathology at Virginia Commonwealth University shares her success story
As a 25-year veteran of clinical molecular diagnostics, Dr. Andrea Ferreira-Gonzalez has seen many changes in the genetic technologies used in the testing laboratory. With the advent of personalized medicine and the growing use of multi-gene NGS panels as laboratory-developed tests, Dr. Ferreira-Gonzalez and other experts have agreed to lend their expertise to the design of SeraCare’s reference materials.
Her laboratory and other groups have participated in an interlaboratory study of standardized reference materials for detecting somatic cancer mutations, with results to be published in the coming months.
How do I know my NGS-based LDT is producing the right results? And how do my results compare with others running similar tests?
If you’re involved with complex diagnostic tests — in particular, next generation sequencing (NGS)-based laboratory-developed tests (LDTs) — producing the right results consistently can be a big concern. Your test equation has many different variables, each of which carries a chance of something going wrong:
- The multiple manual steps of the wet lab work.
- The vagaries and many parameters of the dry-lab (bioinformatics) analyses.
- The challenge of interpretation (depending on the nature of the test).
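These variables compound: even when each step is individually reliable, the chance that at least one goes wrong grows quickly. A back-of-the-envelope sketch makes the point; the per-step error rates below are purely illustrative assumptions, not measured figures for any real assay.

```python
# Illustrative only: hypothetical per-step error probabilities for an
# NGS-based LDT workflow. Real rates vary widely by lab and assay.
step_error_rates = {
    "sample accessioning": 0.002,
    "library preparation": 0.01,
    "sequencing run": 0.005,
    "bioinformatics pipeline": 0.01,
    "interpretation": 0.005,
}

# Assuming independent steps, the probability the whole test is
# error-free is the product of each step's success probability.
p_ok = 1.0
for step, p_err in step_error_rates.items():
    p_ok *= 1.0 - p_err

p_any_error = 1.0 - p_ok
print(f"Probability at least one step fails: {p_any_error:.1%}")
```

Even with every step at 99%+ reliability, roughly one test in thirty would be touched by some error under these assumed rates, which is why systematic QC matters more than any single careful operator.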
When an LDT Goes Wrong
The recent high-profile fiasco at fingerstick microfluidics diagnostics company Theranos Incorporated is a case study in the genuine harm testing errors can inflict on patients.
The Wall Street Journal reports that the undue anxiety and other harm patients experienced from incorrect test results sparked at least 10 lawsuits against the company in California and Arizona.
“While inaccurate test results can occur at any laboratory, Theranos failed to maintain basic safeguards to ensure consistent results, according to regulators, independent lab directors and quality-control experts.”
Theranos may be one extreme example, but in late 2015, the U.S. Food and Drug Administration (FDA) published a report outlining 20 instances of harm from LDTs.
The acute dangers of false-positive and false-negative results from laboratory developed tests are real:
- When patients are told they have conditions they do not actually have, it can cause unneeded distress and unnecessary treatment.
- When life-threatening diseases go undetected, patients can suffer and die.
In our experience, NGS-based LDTs are error-prone for two main reasons:
- The inadequacy of “known positive” samples.
- The lack of peer review for comparing one lab’s results with another’s.
Here, we’ll take a closer look at each problem and suggest a solution.
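One way to make both problems concrete is to score a lab's variant calls against a reference material with a well-characterized truth set, and to use the same metrics when comparing one lab's results with another's. A minimal sketch, where the variant lists are hypothetical placeholders rather than any real assay's content:

```python
# Hypothetical truth set for a characterized reference material, and the
# variants one lab actually reported. Real comparisons key on chromosome,
# position, ref, and alt alleles rather than protein-level names.
truth_set = {"EGFR L858R", "KRAS G12D", "BRAF V600E", "TP53 R273H"}
lab_calls = {"EGFR L858R", "KRAS G12D", "BRAF V600E", "PIK3CA E545K"}

true_positives = truth_set & lab_calls
false_negatives = truth_set - lab_calls   # missed by the lab
false_positives = lab_calls - truth_set   # called but not in the truth set

# Positive percent agreement (sensitivity) and positive predictive value.
ppa = len(true_positives) / len(truth_set)
ppv = len(true_positives) / len(lab_calls)

print(f"PPA: {ppa:.0%}  PPV: {ppv:.0%}")
print("Missed:", sorted(false_negatives))
print("Extra:", sorted(false_positives))
```

Because the truth set is fixed and shared, two labs running the same reference material can compare PPA and PPV directly, which is exactly the peer comparison that residual patient samples cannot support.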
When you’re in charge of quality control (QC) for a clinical genomics testing laboratory, you know that one word, quality, casts a wide net. Among the many things quality means for you is the timely return of reliable results.
Patients Can't Afford to Wait for Their Test Results
Downtime can be devastating to a clinical testing laboratory. The timely return of test results is critical for effective patient care. Any delay can hurt your lab’s reputation and prompt your customers to seek testing services elsewhere.
Unfortunately, any clinical testing laboratory using sophisticated next-generation sequencing or other genetic analysis technology will suffer downtime sooner or later.
The question is, how fast can you recover?
What Causes Downtime?
There are myriad factors that could cause your laboratory to cease operations:
- Simple operator error, anything from a sample mix-up to PCR contamination, can cause a downstream problem that takes days or weeks to identify and correct.
- Poorly performing vendor-supplied reagents, kits, instruments, or software.
Operators Are Human; Mistakes Can and Do Happen
As it makes its way through your lab, a patient sample interacts with a wide range of operators, materials, and instrumentation. Mistakes can creep in at any point along this process, including:
- Labeling and accessioning.
- The use of particular reagents.
- The setting of instrument parameters.
- The informatics pipeline.
When an error does occur, it can take significant time and effort to determine the root cause and fix it. Without an appropriate reference standard, the problem becomes harder still, because the number of potential causes multiplies.
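A reference standard of known composition, run alongside patient samples, lets you catch many of these failure modes automatically before results go out. The sketch below flags a run when observed variant allele frequencies drift too far from expected values; the variants, frequencies, and tolerance are illustrative assumptions, not recommended acceptance criteria.

```python
# Hypothetical expected variant allele frequencies (VAFs) for a
# multiplexed reference standard, versus what this run observed.
expected_vaf = {"EGFR L858R": 0.10, "KRAS G12D": 0.05, "BRAF V600E": 0.10}
observed_vaf = {"EGFR L858R": 0.09, "KRAS G12D": 0.011, "BRAF V600E": 0.11}

TOLERANCE = 0.5  # assumed: flag if observed deviates >50% from expected

failures = []
for variant, exp in expected_vaf.items():
    obs = observed_vaf.get(variant, 0.0)  # a dropped call counts as 0
    if abs(obs - exp) / exp > TOLERANCE:
        failures.append((variant, exp, obs))

if failures:
    print("QC FAIL - investigate before releasing patient results:")
    for variant, exp, obs in failures:
        print(f"  {variant}: expected {exp:.0%}, observed {obs:.1%}")
else:
    print("QC pass: reference standard within tolerance")
```

A check like this narrows the search space: if the reference standard passes, upstream wet-lab steps are less likely culprits, and troubleshooting can focus on the individual sample.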
Status of FDA regulation of laboratory developed tests, the promise of precision medicine, and a workshop about achieving accurate NGS laboratory test results
On January 23-25, 2017, the Precision Medicine World Conference (PMWC) was held in Mountain View, California. The conference kicked off with Dr. Keith Yamamoto, Vice Chancellor for Science Policy and Strategy at UCSF, joining Dr. Robert Califf, FDA Commissioner, in a “fireside chat.” Dr. Califf had been with the FDA for two years and served as Commissioner for 11 months before resigning as of January 20, 2017. One of his important parting thoughts was how the FDA has been re-energized by the 21st Century Cures Act to hire new scientific talent to implement the President’s Precision Medicine and Cancer Moonshot plans.