They say good things come to those who wait, but when it comes to laboratory testing, faster is almost always better (assuming, of course, that accuracy is never compromised).
The more rapidly reportable results are generated, the sooner clinicians and patients can make decisions and embark on an effective treatment program. Furthermore, the more efficiently labs can run tests and generate results, the more they can accomplish. Faster turnaround times (TAT) can free up staff and resources for other activities, like growing the overall test menu.
And let’s not forget the reputation factor. Labs want to be the reliable go-to for the clinicians they serve. They want to be trusted for their accuracy, professionalism, and speed.
While the individual incidents that can lead to poor TAT are numerous and varied, they generally fall into two main categories:

1. Inefficient processes that slow down day-to-day testing workflows
2. Test failures that force cycles of retesting, troubleshooting, and downtime

Thus, the most direct path to improved TAT runs through two complementary efforts: implementing more efficient processes and building an effective quality control program that reduces the frequency of test failure. Below are some strategies to achieve both and take control of TAT.
It’s difficult to know where your lab stands on TAT, and whether your improvement efforts are working, without first establishing some baseline metrics. There are several ways to segment TAT: by test, by patient population (e.g., inpatient vs. outpatient), or by priority (STAT vs. routine). The important thing is to select a categorization that makes the most sense for the environment your lab operates in and then establish a baseline. You also have to decide how TAT will be measured. This can be tricky, as clinical labs may not have direct control over every activity that affects total TAT (such as sample collection), despite clinicians’ expectations. Does the clock start ticking when the clinician places the order, or when the lab receives the sample? (Not surprisingly, most clinicians will choose the former.)
Once you’ve decided on the categorization and measurement interval, collect data over a defined period to calculate the baseline. For a routine test, this may simply involve calculating the mean TAT for a specified sample size (e.g., 100 tests) using data pulled from your laboratory information system (LIS). Once the baseline TAT is established, it’s relatively straightforward to track trends and measure the impact of any improvement efforts.
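As a rough illustration, here is a minimal Python sketch of a baseline calculation run against an LIS export. The file name and column names (order_time, received_time, reported_time, category) are hypothetical; substitute whatever fields your LIS actually provides.

```python
# Minimal sketch: baseline TAT from a hypothetical LIS export.
# Column names below are illustrative, not a standard LIS schema.
import pandas as pd

df = pd.read_csv(
    "lis_export.csv",
    parse_dates=["order_time", "received_time", "reported_time"],
)

# Two candidate measurement intervals: order-to-report (the clinician's
# view) and receipt-to-report (the interval the lab directly controls).
df["tat_order"] = (df["reported_time"] - df["order_time"]).dt.total_seconds() / 60
df["tat_lab"] = (df["reported_time"] - df["received_time"]).dt.total_seconds() / 60

def p90(series):
    """90th percentile -- less distorted by outliers than the mean alone."""
    return series.quantile(0.9)

# Baseline per category (e.g., STAT vs. routine), in minutes.
baseline = df.groupby("category")[["tat_order", "tat_lab"]].agg(
    ["mean", "median", p90]
)
print(baseline)
```

Reporting a percentile alongside the mean is a deliberate choice: a handful of badly delayed samples can drag the mean upward while the typical experience stays unchanged, and the two numbers together tell you which problem you have.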
Your lab may run the most advanced high-throughput chemistry analyzer on the market, but all that horsepower goes to waste if patient samples must pass through manual sample prep before being loaded on the instrument. To truly understand where the impediments to stellar TAT metrics lie, break testing workflows down into their core processes and evaluate each phase for slowdowns and bottlenecks. Often, it’s the most mundane tasks (labeling, sample prep, sample transfer and storage) that are the culprits behind slow TAT. When a bottleneck is identified, evaluate the options for automating the process. An investment in an automated sample prep module that delivers the patient sample to a primary tube ready for analysis may be well worth it, given the resulting improvements in TAT, not to mention gains in resource allocation and lab morale.
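To make that phase-by-phase evaluation concrete, here is a minimal sketch, assuming you can capture a timestamp at each workflow step, whether from the LIS or a manual time study. The step names are illustrative, not a standard schema.

```python
# Minimal sketch: find the slowest phase in a multi-step workflow.
# Assumes one timestamp column per step; step names are hypothetical.
import pandas as pd

steps = ["received", "labeled", "prepped", "loaded", "resulted"]
df = pd.read_csv("workflow_times.csv", parse_dates=steps)

# Duration of each phase = gap between consecutive timestamps, in minutes.
for start, end in zip(steps, steps[1:]):
    df[f"{start}->{end}"] = (df[end] - df[start]).dt.total_seconds() / 60

phase_cols = [f"{s}->{e}" for s, e in zip(steps, steps[1:])]
summary = df[phase_cols].mean().sort_values(ascending=False)
print(summary)  # the phase at the top is the likeliest bottleneck
```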
As discussed in a previous blog, the importance of a high-quality training program for lab personnel can’t be overstated. Not only does training build staff capability and confidence, it has a real impact on overall TAT. Reducing the frequency of operator error for any protocol naturally improves the average time to result for that test. Furthermore, training and practice will often surface areas where speed and efficiency can be gained through workflow and protocol improvements.
The greatest threat to TAT is test failure, which leads to costly and time-consuming cycles of retests, troubleshooting, and downtime. No matter how efficiently your lab workflows are designed, a single test failure leading to extended downtime will wreak havoc on your TAT metrics. You can mitigate the risk of assay failure and downtime by incorporating daily independent controls into your testing protocol. While many labs rely solely on the internal controls provided by assay manufacturers, only high-quality external controls offer a truly independent evaluation of an assay’s performance over time. Although not required from a regulatory perspective, independently manufactured external controls are considered a “best practice.” These controls serve as an early warning system for test performance, giving the lab advance notice to take corrective action before an event that would hurt results reporting and TAT.
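To illustrate the early-warning idea, below is a minimal sketch of the two single-value Levey-Jennings checks from the standard Westgard scheme: the 1-2s warning rule and the 1-3s rejection rule. A production QC program would apply the full multi-rule set plus trend analysis, and the baseline values here are invented for the example.

```python
# Minimal sketch: screen a daily external control value against
# Levey-Jennings limits derived from an established baseline.
# Baseline values are illustrative only.
from statistics import mean, stdev

baseline = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.9, 5.1, 5.3, 4.7]
target, sd = mean(baseline), stdev(baseline)

def check_control(value: float) -> str:
    """Classify a control result against mean +/- 2SD (warn) and 3SD (reject)."""
    deviation = abs(value - target)
    if deviation > 3 * sd:
        return "REJECT: 1-3s violation -- hold patient results and troubleshoot"
    if deviation > 2 * sd:
        return "WARNING: 1-2s -- review recent control trend before releasing"
    return "in control"

print(check_control(5.6))  # -> REJECT for this illustrative baseline
```

The value of running this check daily is exactly the advance notice described above: a run of warnings signals drift days before an outright failure forces downtime and retesting.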
Independent controls should mimic the composition of true patient samples so they function as full-process controls, evaluating the entire testing workflow from extraction to detection. They should also be designed as weak positives to ensure sensitive detection of assay variability and overall performance.
In addition, to keep your laboratory humming along at maximum efficiency, your external controls should be readily available in large, single-lot quantities (no sourcing issues or delays) and ready to use out of the box.
Learn more about increasing efficiency, decreasing turnaround time, and avoiding errors in your clinical lab in our free resource, “Best Practices for Clinical Labs: Strategies for Implementing a Best-in-Class Quality Control System.”