SeraCare Customer Poster Talk Video with Data Presented by Asuragen
Next-generation sequencing (NGS) of liquid biopsies offers a minimally invasive alternative to solid tissue biopsies and a more holistic profile of intra- and inter-tumoral heterogeneity for therapy selection and disease monitoring.
There comes a point during the development of every NGS assay at which you want to make sure it will reliably detect everything you say it can. You want its real-world performance to match your claims.
That means fine-tuning your development protocol — from specimen handling, to nucleic acid extraction, to library prep — against a range of variants and allele frequencies to make sure your early-stage assay picks them all up.
Only after doing so can you move the assay on to the costly validation phase.
Naturally, to replicate real-world scenarios, many developers instinctively turn to real patient samples.
But the truth is, if you’re only subjecting your assay to the patient samples your lab has on-hand — or even ones you procure from colleagues or biobanks — you’re not exposing your assay to a wide enough range of variants and conditions to ensure its performance. Plus, you may be costing your lab money and time:
- Development delays can allow another lab to launch their assay before yours.
- Using insufficiently robust reference materials can leave your assay short of its performance claims in production.
While there is clearly a need to evaluate assays with real specimens tested on orthogonal methods, multiplexed truth sets such as biosynthetic NGS reference materials are superior to patient specimens in the development phase.
Here are three reasons not to trust remnant patient samples alone for assessing your clinical NGS assay’s workflow.
For many years, next-generation sequencing (NGS) made headlines with researchers promising unprecedented breakthroughs in medical diagnostics. But the clinical impact was always said to be a few years, and several more large-scale studies, away from reality. In 2010, forward-thinking academics forecast whole-genome sequencing in a matter of hours for only $30 (right around that same time, a Stanford researcher sequenced his own genome for less than $50,000 – a record low at that point).
Critical missteps during the assay development phase can cause expensive delays and risk the quality of an assay. How can you be sure your bioinformatics pipeline is correctly calling variants?
If you’re relying on remnant patient samples to tell you how well your lab's bioinformatics pipeline can call clinically important variants, you might be missing more than you realize.
In our experience, the bioinformatics pipeline can be the weakest link in assay development for many labs. Just because a variant is sequenced correctly doesn't mean it will be called. And false positives are just as damaging.
- Sometimes it's an issue of allele frequency. For example, we've seen cases where a lab could detect a mutation at 10% allele frequency, but as soon as the frequency dropped to 7%, detection failed.
- Other cases are caused by the complexity of the variant. For example, even at low allele frequencies, a lab may pick up relatively easy-to-detect single-nucleotide variants (SNVs) but struggle with insertion/deletion (indel) calls.
In both examples, the mutations aren’t missed because of sequencing or library preparation problems. As we’ve witnessed time and time again, when labs optimize their bioinformatics pipelines, they start picking up the low-frequency and difficult-to-detect variants again.
The catch is, you first have to know you’re missing something. In assay development, what you don’t know can seriously weaken your test.
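One way to know what you're missing is to score your pipeline's output against a multiplexed truth set with known variants at known allele frequencies. The following is a minimal, hypothetical sketch of that comparison; the variant records and coordinates are simplified, illustrative stand-ins, not a real VCF parser or a specific reference product.

```python
# Hypothetical sketch: score a pipeline's variant calls against a
# multiplexed truth set (e.g., a biosynthetic reference material with
# known variants at known allele frequencies).

def score_calls(truth, called):
    """Compare called variants to the truth set.

    truth:  dict mapping (chrom, pos, ref, alt) -> expected allele frequency
    called: dict mapping (chrom, pos, ref, alt) -> observed allele frequency
    """
    detected, missed, false_positives = [], [], []
    for variant, expected_af in truth.items():
        if variant in called:
            detected.append((variant, expected_af, called[variant]))
        else:
            missed.append((variant, expected_af))
    for variant in called:
        if variant not in truth:
            false_positives.append(variant)
    sensitivity = len(detected) / len(truth) if truth else 0.0
    return sensitivity, missed, false_positives

# Illustrative coordinates only: an SNV at 10% AF is called, but the
# same pipeline drops a 7% AF indel -- the failure mode described above.
truth = {
    ("chr7", 55249071, "C", "T"): 0.10,                # SNV
    ("chr7", 55242465, "GGAATTAAGAGAAGC", "G"): 0.07,  # indel
}
called = {
    ("chr7", 55249071, "C", "T"): 0.095,
}
sensitivity, missed, fps = score_calls(truth, called)
print(f"sensitivity: {sensitivity:.2f}, missed: {len(missed)}, false positives: {len(fps)}")
```

Run against a truth set like this, a missed low-frequency indel shows up immediately as a drop in sensitivity rather than going unnoticed.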
From extraction, to library prep, to sequencing, to the bioinformatics pipeline, there are countless points where something could go wrong.
Despite the absence of clear guidelines or firmly established best practices, next-generation sequencing (NGS) assays are becoming the method of choice for gene fusion detection.
This is significant because, although some of the cancers that contain fusion RNAs are rare, they’re now treatable thanks to new targeted therapies. If your assay can detect fusion RNAs, it can help profile tumors for important diagnostic, prognostic, and therapeutic targets, which can lead to improved patient outcomes.
The old fluorescence in situ hybridization (FISH) method limited you to one type of fusion variant at a time; it was effective, but slow and cumbersome. With the latest NGS techniques, detecting fusion RNAs is more efficient than ever: a single assay is more sensitive and can detect multiple fusions at once.
Nevertheless, it’s still challenging because of the complex workflows and the need to rigorously ensure performance across all fusion variants.
Daily variations in your test performance can cause assay failure and may lead to false positives. Do you have the tools to detect them?
Clinical NGS tests may be powerful diagnostic tools for your molecular pathology laboratory, but they remain complex amalgamations of different hardware, reagents, and software systems — often from several different vendors and with different levels of quality. Only one of these critical reagents or systems has to fail or underperform in an assay to cause performance drift.
If you don’t catch assay drift quickly enough, it can lead to assay failures, from false positives to unexpected shifts in performance, such as changes in the limit of detection (LoD).
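The drift described above can be caught early with simple run-control charting. Below is a minimal, hypothetical sketch, assuming each sequencing run reports the measured allele frequency (AF) of a known control variant; the function name, baseline values, and 3-SD limits are illustrative, not a prescribed QC scheme.

```python
# Hypothetical sketch: Levey-Jennings-style drift check for a run control.
# Assumes each run reports the measured AF of a known control variant.
from statistics import mean, stdev

def check_drift(baseline_afs, new_af, n_sd=3.0):
    """Flag a run whose control-variant AF falls outside
    mean +/- n_sd * SD of the baseline runs."""
    m = mean(baseline_afs)
    sd = stdev(baseline_afs)
    lower, upper = m - n_sd * sd, m + n_sd * sd
    in_control = lower <= new_af <= upper
    return in_control, (lower, upper)

# Illustrative baseline: twenty runs of a 5% AF control near target.
baseline = [0.050, 0.048, 0.052, 0.049, 0.051, 0.047, 0.053,
            0.050, 0.049, 0.051, 0.048, 0.052, 0.050, 0.049,
            0.051, 0.050, 0.048, 0.052, 0.049, 0.051]

# A run measuring the control at 3.8% AF falls outside the limits
# and should be investigated before results are reported.
ok, (lo, hi) = check_drift(baseline, 0.038)
print(f"in control: {ok}, limits: [{lo:.3f}, {hi:.3f}]")
```

Charting a well-characterized control this way turns "unexpected changes to assay performance" into a flagged run instead of a silent failure.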
How can your lab protect itself better? Avoid these two common mistakes:
Laboratory-developed tests are in the spotlight of the US Food and Drug Administration
Recently, the FDA upped the ante in the ongoing debate over its desire to regulate laboratory-developed tests (LDTs) with the release of a report detailing the ‘real and potential harms to patients and to public health’ arising from LDTs. This debate has been heating up for several years now, not coincidentally alongside the emergence of precision medicine, the rapid adoption of data-intensive tools such as next-generation sequencing (NGS), and the growing pipelines of targeted therapeutics. One might argue that the horse has already left the barn and the FDA is trying to corral it back in.