Last week we discussed the big problems in science, and what can be done to solve them. One of those problems is that too many studies are poorly designed: biases in the study design or in the way data are analysed lead to false positives. The proposed solutions were rethinking the rewards system and making the research process more transparent.
Persistent problems with scientific conduct have more to do with incentives than with pure misunderstandings. So fixing them has more to do with removing incentives that reward poor research methods than with issuing more guidelines. … This paper argues that some of the most powerful incentives in contemporary science actively encourage, reward and propagate poor research methods and abuse of statistical procedures. We term this process the natural selection of bad science to indicate that it requires no conscious strategizing nor cheating on the part of researchers. Instead, it arises from the positive selection of methods and habits that lead to publication.
By “natural selection”, the researchers mean that some common methods of analysis have almost certainly been selected because they further publication rather than discovery, and that these methods spread because publication contributes to career success. This spread can occur through graduate students who go on to start their own labs, or through adoption by researchers in other labs.
The researchers support their argument both empirically and analytically.
Firstly, they investigated whether statistical power has increased over time. “Statistical power refers to the probability that a statistical test will correctly reject the null hypothesis when it is false, given information about sample size, effect size and likely rates of false positives.” Because the effects typically measured in the biomedical, behavioural and social sciences are small, studies need high power to detect them reliably. Low-powered studies, however, are cheaper and easier to perform, and their data can be trawled for significant results.
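The relationship between sample size and power quoted above can be made concrete with a small simulation. This is an illustrative sketch, not anything from the paper: it assumes a modest true effect of 0.4 standard deviations, known unit variance, and a simple two-sided z-test on the difference of group means.

```python
import math
import random

def simulated_power(effect_size, n, trials=2000, seed=1):
    # Estimate power by simulation: repeatedly draw two groups of
    # size n, one shifted by effect_size (in standard-deviation
    # units), and count how often a two-sided z-test on the
    # difference of means rejects the null at the 5% level.
    rng = random.Random(seed)
    z_crit = 1.96                 # two-sided critical value, alpha = 0.05
    se = math.sqrt(2.0 / n)       # standard error of the mean difference (sigma = 1)
    hits = 0
    for _ in range(trials):
        control = [rng.gauss(0.0, 1.0) for _ in range(n)]
        treated = [rng.gauss(effect_size, 1.0) for _ in range(n)]
        diff = sum(treated) / n - sum(control) / n
        if abs(diff) / se > z_crit:
            hits += 1
    return hits / trials

# A modest true effect (d = 0.4) with 30 subjects per group is badly
# underpowered (roughly one chance in three of detecting the effect),
# while 100 per group approaches the conventional 0.8 target.
low = simulated_power(0.4, n=30)
high = simulated_power(0.4, n=100)
```

The underpowered design is exactly the one that is cheaper to run, which is the incentive problem the paper describes.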
The researchers examined 19 studies from 16 papers published between 1992 and 2014, each reviewing the statistical power of published work in the social, behavioural and biological sciences. Across the roughly six decades these reviews cover, statistical power has not improved, despite the repeated publication of corrective guidelines.
Secondly, the researchers developed and analysed a model that validates the logic of the natural selection of bad science.
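The flavour of such a model can be conveyed with a toy replicator dynamic. This is a loose sketch inspired by the paper's argument, not the authors' actual model or equations: each lab has a "rigor" level in [0, 1], less rigorous labs produce publishable positive results at a higher rate, and successful labs are preferentially copied.

```python
import random

def evolve_rigor(n_labs=50, steps=3000, seed=2):
    # Toy Moran-style dynamic (an assumption, not the paper's model):
    # each step, one lab is replaced by a copy, with small mutation,
    # of a lab chosen in proportion to its publication output.
    rng = random.Random(seed)
    labs = [rng.random() for _ in range(n_labs)]
    start_mean = sum(labs) / n_labs
    for _ in range(steps):
        # Publication output falls as rigor rises: rigorous methods
        # yield fewer (false) positive, publishable results.
        pubs = [1.0 - 0.8 * r for r in labs]
        total = sum(pubs)
        # Roulette-wheel choice of a "parent" lab by publication count.
        pick = rng.uniform(0.0, total)
        parent = n_labs - 1
        acc = 0.0
        for i, p in enumerate(pubs):
            acc += p
            if pick <= acc:
                parent = i
                break
        # The child inherits the parent's rigor, plus small noise.
        child = min(1.0, max(0.0, labs[parent] + rng.gauss(0.0, 0.02)))
        labs[rng.randrange(n_labs)] = child
    return start_mean, sum(labs) / n_labs
```

Running this, mean rigor drifts steadily downward even though no lab "cheats": selection on publication counts alone is enough to erode methodological quality, which is the paper's central point.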
The researchers also investigated whether replication can impede the evolution of bad science. Replication is essential because repeated investigation is the only way to effectively separate true hypotheses from false ones; the lack of it is another of the big problems discussed in last week’s article. However, the researchers found that replication slows but does not stop the process of methodological deterioration. This is because some labs will avoid getting caught unless all published studies are replicated several times, an ideal that is near impossible to achieve.
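Why replication only slows the process can be seen with a bit of arithmetic. The numbers below are illustrative assumptions, not the paper's: suppose a lab's raw output of positive, publishable results falls with its rigor, that a fraction (1 − rigor) of those positives are false, and that a failed replication removes a false positive from the lab's effective record with probability equal to the replication rate.

```python
def effective_pub_rate(rigor, replication_rate):
    # Toy model (assumed numbers): raw positives fall with rigor,
    # a share (1 - rigor) of positives are false, and replication
    # removes caught false positives from the effective count.
    raw = 1.0 - 0.8 * rigor
    false_share = 1.0 - rigor
    return raw * (1.0 - replication_rate * false_share)

# With no replication, a sloppy lab (rigor 0.1) out-publishes a
# rigorous one (rigor 0.9) by more than 3 to 1.
sloppy_wins_big = effective_pub_rate(0.1, 0.0) / effective_pub_rate(0.9, 0.0)

# Replicating half of all studies narrows the gap, but the sloppy
# lab still comes out ahead.
sloppy_still_wins = effective_pub_rate(0.1, 0.5) / effective_pub_rate(0.9, 0.5)

# Only when essentially every study is replicated does rigor pay off.
rigor_wins = effective_pub_rate(0.1, 1.0) < effective_pub_rate(0.9, 1.0)
```

This mirrors the paper's finding: partial replication shrinks the advantage of low-rigor labs without reversing it, and near-universal replication is practically unattainable.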
In conclusion, the researchers recommend that improving the quality of research requires change at the institutional level. “If we want to improve how our scientific culture functions, we must consider not only the individual behaviours we wish to change, but also the social forces that provide affordances and incentives for those behaviours.” Ideally, success would be rewarded on the basis of the conduct of high-quality research rather than publication rates and impact factors.
- Smaldino, Paul E. & McElreath, Richard (2016). The natural selection of bad science. Royal Society Open Science, 3:160384.
Also published on Medium.