Is most research a waste?
At 80,000 Hours we have been looking into which research questions are most important or most prone to neglect. As part of that, I was recently lucky enough to have dinner with Iain Chalmers, one of the founders of the Cochrane Collaboration. He pointed me to this helpful summary of reasons to think most clinical research is predictably wasteful:
“Worldwide, over US$100 billion is invested every year in supporting biomedical research, which results in an estimated 1 million research publications per year
…
A recently updated systematic review of 79 follow-up studies of research reported in abstracts estimated the rate of publication of full reports after 9 years to be only 53%.
…
An efficient system of research should address health problems of importance to populations and the interventions and outcomes considered important by patients and clinicians. However, public funding of research is correlated only modestly with disease burden, if at all. Within specific health problems there is little research on the extent to which questions addressed by researchers match questions of relevance to patients and clinicians. In an analysis of 334 studies, only nine compared researchers’ priorities with those of patients or clinicians. The findings of these studies have revealed some dramatic mismatches. For example, the research priorities of patients with osteoarthritis of the knee and the clinicians looking after them favoured more rigorous evaluation of physiotherapy and surgery, and assessment of educational and coping strategies. Only 9% of patients wanted more research on drugs, yet over 80% of randomised controlled trials in patients with osteoarthritis of the knee were drug evaluations. This interest in non-drug interventions in users of research results is reflected in the fact that the vast majority of the most frequently consulted Cochrane reviews are about non-drug forms of treatment.
New research should not be done unless, at the time it is initiated, the questions it proposes to address cannot be answered satisfactorily with existing evidence. Many researchers do not do this—for example, Cooper and colleagues found that only 11 of 24 responding authors of trial reports that had been added to existing systematic reviews were even aware of the relevant reviews when they designed their new studies.
New research is also too often wasteful because of inadequate attention to other important elements of study design or conduct. For example, in a sample of 234 clinical trials reported in the major general medical journals, concealment of treatment allocation was often inadequate (18%) or unclear (26%). In an assessment of 487 primary studies of diagnostic accuracy, 20% used different reference standards for positive and negative tests, thus overestimating accuracy, and only 17% used double-blind reading of tests.
More generally, studies with results that are disappointing are less likely to be published promptly, more likely to be published in grey literature, and less likely to proceed from abstracts to full reports. The problem of biased under-reporting of research results mainly from decisions taken by research sponsors and researchers, not from journal editors rejecting submitted reports. Over the past decade, biased under-reporting and over-reporting of research have been increasingly acknowledged as unacceptable, both on scientific and on ethical grounds.
Although their quality has improved, reports of research remain much less useful than they should be. Sometimes this is because of frankly biased reporting—eg, adverse effects of treatments are suppressed, the choice of primary outcomes is changed between trial protocol and trial reports, and the way data are presented does not allow comparisons with other, related studies. But even when trial reports are free of such biases, there are many respects in which reports could be made more useful to clinicians, patients, and researchers. We select here just two of these. First, if clinicians are to be expected to implement treatments that have been shown in research to be useful, they need adequate descriptions of the interventions assessed, especially when these are non-drug interventions, such as setting up a stroke unit, offering a low fat diet, or giving smoking cessation advice. Adequate information on interventions is available in around 60% of reports of clinical trials; yet, by checking references, contacting authors, and doing additional searches, it is possible to increase to 90% the proportion of trials for which adequate information could be made available.
Although some waste in the production and reporting of research evidence is inevitable and bearable, we were surprised by the levels of waste suggested in the evidence we have pieced together. Since research must pass through all four stages shown in the figure, the waste is cumulative. If the losses estimated in the figure apply more generally, then the roughly 50% loss at stages 2, 3, and 4 would lead to a greater than 85% loss, which implies that the dividends from tens of billions of dollars of investment in research are lost every year because of correctable problems.”
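To see where that “greater than 85%” figure comes from: if roughly half of the remaining value is lost at each of three successive stages, only about an eighth survives to the end. Here is a minimal sketch of that arithmetic, taking the paper’s rough 50%-per-stage estimate at face value:

```python
# Cumulative-loss sketch: assumes, as the quoted figure does, that roughly
# 50% of the remaining value is lost at each of stages 2, 3, and 4.
per_stage_loss = 0.5
surviving = (1 - per_stage_loss) ** 3  # fraction surviving all three lossy stages
print(f"surviving: {surviving:.1%}, lost: {1 - surviving:.1%}")
# -> surviving: 12.5%, lost: 87.5% (i.e. "greater than 85%")
```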
His assessment was that the research profession could not be expected to fix these problems internally: it has not done so already despite widespread knowledge of them, and it has no additional incentive to do so now. External intervention is needed, and the paper proposes some options.
There is a precedent for this. The US recently joined a growing list of countries that have helped their researchers coordinate to weaken the academic publishing racket, by insisting that publicly-funded research be freely and openly available within a year. So long as academics are permitted to publish publicly-funded research in pay-for-access journals, established and prestigious journals can earn large rents: they sell researchers the prestige that advances their careers, in exchange for copyright on publicly-funded research. Now that researchers aren’t permitted to sell that copyright, an individual who would have refused to do so on principle can no longer be outcompeted by less scrupulous colleagues.
Likewise, rules requiring everyone who receives public money to do the public-spirited thing, for instance by checking for existing systematic reviews, publishing null results, pre-registering their approach to data analysis, opening their data to scrutiny by colleagues, and so on, would make it harder for unscrupulous researchers to get ahead through corner-cutting or worse chicanery.