
Analysis of meta-analyses identifies where science's real problems lie

But the pressure to publish might not be such a problem after all.


Science is in a phase of pretty intense soul-searching. Over the past few years, systemic problems that lead to unreliable scientific results have become more and more obvious. There's a litany of woes for good science: publication bias leads to buried data; single studies don't stand well on their own, yet not enough people are replicating them; and flaws in the peer-review process are showing. And that's before we even get to the (hopefully occasional) research fraud.

John Ioannidis, one of the heroes of the science-scrutinizing movement, has some news in PNAS this week that is simultaneously uncomfortable and comforting. Ioannidis, along with colleagues Daniele Fanelli and Rodrigo Costas, scoured thousands of scientific papers to uncover some of the most common causes of bias. Their findings suggest that, for the most part, people are worrying about the right things, including small studies that spark a lot of scientific conversation. But they also pinpoint other causes for concern that haven't attracted much attention so far: early-career researchers and isolated scientists.

Data about data about data

Fanelli is a meta-researcher: a scientist whose research is itself about scientific research. In order to get a broad view of the biases at play across all of science, he went hunting for meta-analyses. These are scientific studies that combine the data from a range of separate studies in the same area. Meta-analyses often give a more comprehensive picture of the current evidence than any individual study.
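To make that concrete, here is a minimal sketch of the textbook fixed-effect approach to pooling: each study's effect size is weighted by the inverse of its variance, so more precise studies count for more. The numbers are invented for illustration, and this is not the specific model used in the PNAS paper.

```python
# Fixed-effect meta-analysis via inverse-variance weighting.
# Effect sizes and standard errors below are hypothetical.
effects = [0.80, 0.35, 0.42, 0.28]   # effect size reported by each study
ses     = [0.30, 0.10, 0.12, 0.08]   # standard error of each estimate

weights = [1 / se**2 for se in ses]  # precision = 1 / variance
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.3f} +/- {pooled_se:.3f}")
# -> pooled effect: 0.347 +/- 0.055
```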

Fanelli narrowed the field down to 1,910 papers that reported the kinds of data he could use to study bias. These papers represented research across the social sciences, biological sciences, and natural sciences. By comparing each of the initial studies to the more general conclusions of the meta-analysis, he could look for studies that reported inflated effects and see whether these biased papers had any characteristics in common.
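As a rough picture of what that comparison involves, the toy sketch below scores each study by how far its reported effect sits above the pooled summary, measured in units of the study's own standard error. The data and the cutoff are made up, and the paper's actual analysis is a set of regression models across fields, but the intuition is similar: a small study whose effect towers over the consensus is a candidate for inflation.

```python
# Toy check for inflated effects: standardized deviation of each study
# from the meta-analytic summary. Values are hypothetical; the cutoff
# of 1.5 is arbitrary and purely illustrative.
effects = [0.80, 0.35, 0.42, 0.28]
ses     = [0.30, 0.10, 0.12, 0.08]
pooled  = 0.347   # summary effect from the pooling sketch above

for i, (e, se) in enumerate(zip(effects, ses), start=1):
    z = (e - pooled) / se
    flag = "  <- possibly inflated" if z > 1.5 else ""
    print(f"study {i}: effect {e:.2f}, z = {z:+.2f}{flag}")
```

In this made-up example, the first study (big effect, big standard error) is exactly the small, exploratory kind of result the authors flag as a common source of bias.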

“The magnitude of these biases varied widely across fields and was on average very small,” the authors write. But they found important patterns. Often a field of inquiry starts out with a small, exploratory paper that finds an exciting effect and goes on to be cited by many more researchers, all eager to explore whether something is really going on. Those small early papers, however, are a big source of bias across science as a whole.

This doesn’t necessarily mean they’re the result of questionable research practices, write Fanelli and colleagues. “Choices in study design aimed at maximizing the detection of an effect might be justified in some discovery-oriented research contexts.” The popularity of small, exploratory papers does mean, though, that early results should be considered with interest but never taken as the final word on a question—and it means that replication of these studies is vital.

Old hands versus newbies

One of the big topics in meta-research is pressure to publish: without papers, scientists don’t get tenure or funding. It’s widely assumed that this drives scientists to try to find the most exciting results. So, you’d expect that researchers with more papers might have a greater tendency to publish exceptional results. Surprisingly, the analysis by Ioannidis, Fanelli, and Costas doesn’t find any support for that assumption. Instead, it suggests that researchers with more publications have less bias.

New researchers, on the other hand, were a source of bias—a small one, but present nonetheless. Isolation was also a problem: bigger teams were associated with less bias (presumably because scientists on a team hold each other to account), while long-distance collaborations were associated with more bias. These problems currently aren’t a big part of the conversation about improving science, yet they “might actually be growing in importance,” write the researchers in PNAS.

Then there are the dirty players. Authors who had had a paper retracted were also associated with greater bias across the rest of their work. This points to the likelihood that behaviors like negligence, questionable practices, and straight-up misconduct all cluster together, write the authors. But these are also behaviors that could be reduced with common interventions.

The scientists are alright, kinda

None of this is great news. The flaws in contemporary science run deep, and they need to be looked squarely in the face. Social science, in particular, has more than its fair share of bias—which makes sense given the variability and complexity of its human research subjects.

To be clear, Fanelli, Costas, and Ioannidis are not trying to undermine science or people’s faith in it. “Most of these bias patterns may induce highly significant distortions within specific fields and meta-analyses,” they write, “but do not invalidate the scientific enterprise as a whole.”

Science is still by far the best system we have for understanding our world and making it better—which makes it all the more important to understand its problems and try to fix them. Unfortunately, that’s not an easy thing to do, because the problems are complex and are different for different fields. A common solution for all science isn’t likely, the authors write. But identifying where the real problems are is an important step in the process.

PNAS, 2017. DOI: 10.1073/pnas.1618569114
