In our experience, the bioinformatics pipeline can be the weakest link in assay development for many labs. Just because a variant is sequenced correctly doesn’t mean it will be called. And false positives are just as bad.
In both examples, the mutations aren’t missed because of sequencing or library preparation problems. Time and again, we’ve seen that when labs optimize their bioinformatics pipelines, they start picking up the low-frequency and difficult-to-detect variants again.
The catch is, you first have to know you’re missing something. In assay development, what you don’t know can seriously weaken your test.
Too often, labs move assays to validation — the most expensive and comprehensive phase of the assay development process — without realizing their bioinformatics pipelines are failing to detect important variants. This results in labs putting flawed tests into operation.
Very rare variants can have diagnostic or prognostic implications. As cancer profiling and inherited disease panels get larger, your lab is in a race to develop increasingly complex clinical NGS assays with improved limits of detection, lower input requirements, and the ability to detect additional mutations or mutation types, all at lower cost.
Critical missteps during the assay development phase can cause expensive delays and risk the quality of an assay. How can you be sure your bioinformatics pipeline is correctly calling variants?
If uncertainty about your lab’s bioinformatics pipeline is keeping you up at night, the most effective way to regain confidence in it is to put it through its paces against a known quantity. If you have material you know contains specific variants, it can function as a truth set with which to fine-tune your pipeline.
“Tune” is the right word here. Just as a musician tries to find the sweet spot on a violin string between too high and too low, an assay developer tries to reach a balance between specificity and sensitivity. And just as all the members of an orchestra tune to the same note from the oboe, an assay developer needs a rock-solid standard on which they can rely.
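To make the idea of tuning against a truth set concrete, here is a minimal Python sketch of the comparison step. The variant representation (simple chromosome/position/ref/alt tuples) and all coordinates are hypothetical, chosen only for illustration; a real pipeline would compare VCF records with proper normalization.

```python
def score_calls(truth_set, called_variants):
    """Score a pipeline's variant calls against a known truth set.

    Variants are represented as (chrom, pos, ref, alt) tuples -- a
    simplified, assumed representation for illustration only.
    Returns sensitivity, precision, and the missed/spurious calls.
    """
    truth = set(truth_set)
    calls = set(called_variants)
    true_positives = truth & calls
    false_negatives = truth - calls   # real variants the pipeline missed
    false_positives = calls - truth   # calls not present in the truth set
    sensitivity = len(true_positives) / len(truth) if truth else 0.0
    precision = len(true_positives) / len(calls) if calls else 0.0
    return sensitivity, precision, sorted(false_negatives), sorted(false_positives)

# Hypothetical truth set and pipeline output.
truth = [("chr7", 55_249_071, "C", "T"),
         ("chr12", 25_398_284, "C", "A")]
calls = [("chr7", 55_249_071, "C", "T"),     # correctly called
         ("chr1", 115_256_530, "G", "T")]    # spurious call

sens, prec, missed, spurious = score_calls(truth, calls)
print(f"sensitivity={sens:.2f} precision={prec:.2f}")
print("missed:", missed)
print("spurious:", spurious)
```

Tightening filter thresholds pushes the spurious-call list toward empty at the risk of growing the missed list, and vice versa; the tuning exercise is finding the balance point for your assay, which is only possible when the truth set itself is reliable.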
But where to get such material?
Your inclination might be to use whatever you have on hand: patient remnant samples or cancer cell lines.
In our recent white paper, “How to Develop a Clinical NGS Assay Without Losing Your Mind or Your Shirt,” we show how these options can cost your lab time and money and jeopardize the quality of your assays.
As you’ll learn in the paper, any home-brewed solution will suffer from a limited shelf life and scarcity. Most importantly, it will lack the hard-to-find variants and the low allele frequencies you need to properly challenge your bioinformatics pipeline.
Highly multiplexed biosynthetic NGS reference materials, on the other hand, are available with a wide range of pathogenic variants at a variety of allele frequencies, and can also be customized to fit individual needs. Because of their high lot-to-lot consistency and precise quantitation, they serve as an unchanging truth set to help identify sources of variability within the assay, workflow, and bioinformatics pipeline.
Compared to hunting for useful remnant specimens, biosynthetic NGS reference materials can dramatically speed up assay development and improve your confidence in your bioinformatics pipeline.
The idea of using biosynthetic NGS reference materials may be new to many assay developers. But as you’ll read in our white paper, they can overcome many traditional challenges. Specifically, they can help you keep costs down while improving your assays’ performance.
To learn how, click below for your free copy of our white paper.