The most common answer to the title question in laboratory medicine circles is, "laboratory tests are the basis of 70% of medical decisions." Although this is a nice soundbite, it is an unsubstantiated claim with little scientific basis. The 70% figure has nonetheless been propagated because it is both compelling and easy to remember. Ngo et al. (1), in their article in this issue, "Frequency that laboratory tests influence medical decisions," show that the truth is far more complicated and depends on the clinical setting. They found that, on average, 35% of encounters had at least 1 associated laboratory test, ranging from 29% of outpatient encounters to 98% of inpatient encounters; other settings, such as the emergency room, fell somewhere in the middle. The study also compared the use of laboratory testing with that of other diagnostic modalities: radiology and vital sign assessment. Laboratory testing per encounter outstripped radiology in every setting, and vital sign assessment in both the emergency and outpatient settings. As the authors acknowledge, this study addresses how many encounters have a laboratory test ordered, not the more difficult question of how many decisions are influenced by laboratory testing. Although a useful surrogate, the number of encounters with laboratory testing has limitations; for one, many studies have reported a striking lack of follow-up on noncritical laboratory results (2, 3). Nevertheless, this work is important because it begins to address the value of the clinical laboratory.
Appreciation of this value is critical for planning and investment in staff, space, supplies, equipment, and quality improvements. A focus on value is important because value is related to, but not equivalent to, cost: vaccines, for example, cost far less than smartphones yet deliver far greater health benefit, making them more valuable. Similarly, laboratory testing represents a small fraction (2–5%) of the total cost of providing healthcare in both the US and low-resource settings (4, 5), yet it confers an outsized benefit in resource-rich and resource-poor settings alike. In 2012, approximately 20–30 clinical laboratory tests were performed per person in the US (5, 6). The number of tests per person is much lower in resource-poor settings; a comprehensive survey of laboratories in Kampala, Uganda, for example, documented far fewer laboratory procedures per person than in the US (7, 8). Nevertheless, the pattern is similar: testing accounts for a small fraction of healthcare spending yet provides outsized benefit. Furthermore, studies have documented shocking consequences where diagnostics are unavailable. The WHO has produced clinical algorithms to improve empirical treatment where laboratory testing is not available; even so, 40% of children at a tertiary referral center in Ghana who had been given a WHO-defined clinical diagnosis of malaria were found instead to have bacterial sepsis (9). Another study found that 50% of patients classified as having severe malaria by WHO clinical algorithms tested negative for malaria on blood smear (10). Moreover, those testing negative had poorer outcomes, suggesting missed conditions due, at least in part, to the lack of diagnostics.
However, laboratory tests alone are not enough; we also need diagnosticians. Deploying a test in a health system without preparing that system to use the test optimally will fail to bring the hoped-for results. For example, tuberculosis (TB)3 is a condition in which patients visit on average 2.7 providers before diagnosis, are often tried on multiple antibiotic regimens, and experience a median delay of 55 days before appropriate treatment is initiated (11). For reasons such as these, it was thought that the Cepheid GeneXpert MTB/RIF test, a highly accurate, point-of-care nucleic acid test for TB that is far more sensitive than smear microscopy, would be highly impactful and cost-effective in sub-Saharan African countries. This thinking led to a massive rollout of the device, with nearly 22,000 instruments and over 16 million cartridges procured under concessionary pricing between 2010 and 2015 (12). Although the detection rate of multidrug-resistant TB increased 3- to 8-fold, improvements in morbidity and mortality have not been documented from this deployment. The reasons are multifactorial, but the experience underscores that diagnostics in isolation are not sufficient to improve population-level clinical outcomes. The same lesson applies to richer, more developed countries. For example, rapid molecular testing for Clostridium difficile, intended to reduce presumptive patient isolation, also drives unnecessary treatment for C. difficile infection in patients who are merely colonized (13).
As healthcare dollars become increasingly restricted and payment becomes increasingly value-based, laboratory testing will face ever more scrutiny. The 70% simplification appears deceptively helpful but can lead to unwise conclusions and unintended consequences. The figure was freely used by Elizabeth Holmes of Theranos to support her vision of increased laboratory monitoring in search of early detection. Use of tests in the wrong situation not only fails to help but can lead to the "Ulysses syndrome," in which an incidental finding from an unnecessary test triggers a cascade of expensive and harmful downstream interventions, placing undue stress on both the healthcare system and the patient. The 70% claim will not suffice to justify the value of laboratory medicine in the coming decades, and the study by Ngo et al. (1) helps to clarify this point. Value must instead be established through careful, thoughtful studies designed to measure impact on patient outcomes (benefit) balanced against the resources expended (costs), because in the end this is what represents the true value of tests.
3 Nonstandard abbreviation: TB, tuberculosis.
Authors' Disclosures or Potential Conflicts of Interest: No authors declared any potential conflicts of interest.
© 2016 American Association for Clinical Chemistry