More on the problems in science: the five diseases and other perspectives
Awareness of the key problems in science and the replication crisis continues to grow.
In this further article in the RealKM Magazine special series on the quality of science and science communication, we look at a description of the key problems in science publishing as five diseases: significosis, neophilia, theorrhea, arigorium, and disjunctivitis. This approach has the potential to draw more attention to the problems and help people remember them.
We also look at how poor measurement perpetuates the replication crisis, and the admissions from a Nobel Prize-winning researcher that he relied on weak studies in a chapter of his bestselling book.
The five diseases in academic publishing
John Antonakis, psychologist and newly appointed editor of the academic journal The Leadership Quarterly, has described the key problems in science publishing as diseases. In an interview with Retraction Watch, Antonakis said that he has used this description because “A disease is usually thought of as some sort of disorder having some symptoms and causing some debilitating outcomes on a body—in this case, the body of knowledge.” He identifies five diseases, which are strongly interlinked and probably have overlapping causes:
- Significosis is the incessant focus on producing statistically significant results, a well-known problem but one that still plagues us. Because the players in the publication game consider only statistically significant results as interesting and worthwhile to publish, the distribution of published effect sizes is highly skewed (a simple simulation of this selection effect follows this list). The potentially wrong estimates feed into meta-analyses and then inform policy. A result could be significant for many reasons, including chance or investigator bias, and not because the effect is true.
- Neophilia is about an excessive appreciation for novelty and for snazzy results. There is nothing per se wrong with novel findings, but these are not the only findings that are useful; and, of course, sometimes novel findings turn out to be false. Replications of a previous effect, for instance, may not seem very interesting at the outset; but they are critical to helping understand if an effect is present or not. Many journals simply do not consider publishing replications, which I find disturbing. In my field, I am rather certain that many published findings and theories are flawed; however, they will never be challenged if replications—and null results studies too—are never published.
- Theorrhea refers to a mania for new theory, something that afflicts many branches of social science. That is, there is usually a requirement in top journals to make a new contribution to theory, and that research should be theory driven and not exploratory. How is it possible that we can have so many contributions to theory? Imagine, just in the field of management research, having say five elite journals, each publishing 80 papers a year. How is it possible to produce several hundred new contributions to theory every year, compared, say, to physics, which has very strong theoretical foundations but operates more slowly in terms of theory development and also appreciates basic research?
- Arigorium concerns a deficiency of rigor in theoretical and empirical work. The theories in my field, and in most of the social sciences too, save economics and some branches of political science and sociology, are very imprecise. They need to be formalized and not produced on the “cheap” and in large quantities. They must make more precise, realistic, and testable predictions. As regards empirical work, there is a real problem of failing to clearly identify causal empirical relations. A lot of the work that is done in many social science disciplines is observational and cross-sectional, and there is a dearth of well-done randomized controlled experiments, either in the field or the laboratory, or work that uses robust quasi-experimental procedures.
- Disjunctivitis is a disease that is about a collective proclivity to produce large quantities of redundant, trivial, and incoherent works. This happens for several reasons, but primarily because quantity of publications is usually rewarded. In addition, researchers have to make a name for themselves; given that novelty, significant results, and new theory are also favored, a lot of research is produced that is disjointed from an established body of knowledge. Instead of advancing in a paradigmatic fashion, researchers each take little steps in different directions. Worse, they go backwards or just run on the spot and do not achieve much. The point is that the research that is done is fragmented and is not helping science advance in a cohesive fashion. Findings must be synthesized and bridges must be built to other disciplines (e.g., evolutionary biology) so that we can better understand how the world works.
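The significance filter that Antonakis describes under significosis is easy to demonstrate. The short Python sketch below simulates many small two-group studies with a modest true effect and keeps only the results that reach p < 0.05; the sample size, true effect, and threshold are illustrative assumptions, not values from the interview.

```python
# A minimal sketch of the "significance filter": when only statistically
# significant results are published, the published effect sizes overestimate
# the true effect. All numbers here are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2        # assumed true standardized mean difference
n_per_group = 20         # small-sample studies
n_studies = 10_000

all_estimates = []       # every observed effect
published = []           # only the effects that clear p < .05

for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    t, p = stats.ttest_ind(treated, control)
    d = treated.mean() - control.mean()   # observed effect (sd = 1 by construction)
    all_estimates.append(d)
    if p < 0.05 and d > 0:
        published.append(d)

print(f"true effect:                {true_effect:.2f}")
print(f"mean of all estimates:      {np.mean(all_estimates):.2f}")
print(f"mean of 'published' subset: {np.mean(published):.2f}")  # noticeably inflated
```

Running the sketch shows the unfiltered estimates averaging close to the true effect, while the “published” subset averages several times larger, which is the skew that then feeds into meta-analyses.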
To address these problems in his own journal, Antonakis said that he “will be accepting a broader range of articles and making clear that contributions do not only come from statistically significant and novel findings. I will also be desk-rejecting more manuscripts.”
Measurement error and the replication crisis
In an article in the journal Science1, statisticians Eric Loken and Andrew Gelman warn that “In noisy research settings, poor measurement can contribute to exaggerated estimates of effect size.”
Loken and Gelman state that while “It seems intuitive that producing a result under challenging circumstances makes it all the more impressive,” it is a fallacy to assume “that that which does not kill statistical significance makes it stronger.”
They warn that:
The consequences for scientific replication are obvious. Many published effects are overstated and future studies, powered by the expectation that the effects can be replicated, might be destined to fail before they even begin … when it comes to surprising research findings from small studies, measurement error (or other uncontrolled variation) should not be invoked automatically to suggest that effects are even larger.
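Loken and Gelman’s argument can also be illustrated with a small simulation. The sketch below (our assumptions, not the authors’ code) correlates a noisily measured predictor with an outcome in small samples and then looks only at the estimates that reach statistical significance; those surviving estimates come out exaggerated relative to the true effect, which is exactly the fallacy the authors warn against.

```python
# A rough sketch of the point that, in small and noisy studies, estimates that
# survive the significance filter are exaggerated. True correlation, sample
# size, and measurement-error level are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_r = 0.15            # assumed true correlation between x and y
n = 30                   # small study
noise_sd = 1.0           # measurement error added to the observed predictor
n_sims = 20_000

significant_estimates = []
for _ in range(n_sims):
    x = rng.normal(size=n)
    y = true_r * x + np.sqrt(1 - true_r**2) * rng.normal(size=n)
    x_obs = x + noise_sd * rng.normal(size=n)    # poorly measured predictor
    r, p = stats.pearsonr(x_obs, y)
    if p < 0.05 and r > 0:
        significant_estimates.append(r)

print(f"true correlation:                     {true_r:.2f}")
print(f"mean 'significant' observed estimate: {np.mean(significant_estimates):.2f}")
```

Even though measurement error attenuates the average estimate, the estimates large enough to clear the significance threshold in a sample of 30 are far larger than the true correlation, so a significant result obtained despite noisy measurement is not evidence of a stronger effect.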
Nobel Prize-winning researcher admits that he relied on weak studies
Retraction Watch also reports on a blog post by Nobel Prize-winning researcher Daniel Kahneman in which he admits that he relied on weak studies in a chapter of his bestselling book Thinking, Fast and Slow.
This follows a post on the Replicability-Index blog which advised that:
…readers of his [Kahneman’s] book “Thinking Fast and Slow” should not consider the presented studies as scientific evidence that subtle cues in their environment can have strong effects on their behavior outside their awareness.
The Replicability-Index blog researchers rated the studies Kahneman cites according to their power and a measure known as the “R-Index”.2 The R-Index is a method of quantifying statistical research integrity.
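For readers who want a feel for what the R-Index captures, the sketch below follows the publicly described recipe as we understand it: take the median observed (post-hoc) power of a set of reported tests, compare it with the proportion of tests reported as significant, and subtract the resulting inflation. The z-values in the example are made up for illustration and are not the values from the Replicability-Index analysis of Kahneman’s chapter.

```python
# A minimal sketch of an R-Index-style calculation:
# R-Index = median observed power - inflation,
# where inflation = success rate - median observed power.
# The z-values are hypothetical, purely for illustration.
import numpy as np
from scipy.stats import norm

def observed_power(z, alpha=0.05):
    """Post-hoc power of a two-sided z test, treating the observed z as the true effect."""
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.sf(z_crit - z) + norm.cdf(-z_crit - z)

z_values = np.array([2.1, 2.3, 1.9, 2.6, 2.0])    # hypothetical test statistics
powers = observed_power(z_values)

median_power = np.median(powers)
success_rate = np.mean(np.abs(z_values) >= norm.ppf(0.975))  # share reported as significant
inflation = success_rate - median_power
r_index = median_power - inflation                # equivalently 2 * median_power - success_rate

print(f"median observed power: {median_power:.2f}")
print(f"success rate:          {success_rate:.2f}")
print(f"R-Index:               {r_index:.2f}")
```

A set of studies whose reported success rate far exceeds their estimated power yields a low R-Index, which is the signal the blog used to question the chapter’s evidence.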
In Kahneman’s response to the Replicability-Index post, he states that “What the blog gets absolutely right is that I placed too much faith in underpowered studies.”
While indicating continued support for the studies he cites, Kahneman warns that:
The lesson I have learned, however, is that authors who review a field should be wary of using memorable results of underpowered studies as evidence for their claims.
References: