Journalists More Likely To Downplay Than Exaggerate Research Findings

As a variety of alternative metrics have grown in importance in the academic world, there has been a sense that researchers might be motivated to exaggerate their findings in an attempt to garner more citations, references, and coverage in the media.

Research from the University of Michigan suggests these fears are unwarranted, and that findings are in fact more likely to be downplayed than exaggerated when they reach the news.

Scientific uncertainty

The researchers examined how scientific uncertainty is communicated in news articles and whether the claims made tend to be exaggerated. They also wanted to discover whether claims made in the news differ depending on the reputability of the outlet, and whether coverage of peer-reviewed journals differs from coverage of their less rigorous peers.

Generally, the researchers found that journalists are quite careful when reporting on scientific research, with claims more often downplayed than exaggerated.

“Journalists have a hard job,” the researchers say. “It’s nice to see that journalists really are trying to contextualize and temper scientific conclusions within the broader space.”

Expressing certainty

Certainty can be expressed in a variety of often subtle ways. For instance, using precise numbers signals a high degree of certainty, whereas hedging terms like “might” and “suggest” convey more uncertainty.

The researchers harvested around 130,000 news stories that mentioned scientific papers and analyzed the text of the articles for cue words, such as “conclude” and “find,” to understand how journalists were reporting the claims made in the research. Human annotators then assessed the claims made in the papers themselves to gauge the apparent certainty with which they were made.
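To make the idea concrete, a cue-word analysis of this kind can be sketched in a few lines of Python. This is a minimal illustration rather than the researchers' actual pipeline, and the word lists are assumptions chosen purely for demonstration:

```python
# Minimal sketch of cue-word-based certainty scoring.
# The word lists are illustrative assumptions, not the actual
# lexicon used in the Michigan study.
import re

CERTAIN_CUES = {"conclude", "find", "show", "demonstrate", "prove"}
HEDGE_CUES = {"might", "may", "suggest", "could", "appear", "possibly"}

def certainty_score(text: str) -> float:
    """Return a rough score in [-1, 1]: positive means more
    certain-sounding language, negative means more hedged."""
    tokens = re.findall(r"[a-z]+", text.lower())
    certain = sum(t in CERTAIN_CUES for t in tokens)
    hedged = sum(t in HEDGE_CUES for t in tokens)
    total = certain + hedged
    return 0.0 if total == 0 else (certain - hedged) / total

print(certainty_score("The effect might be real."))                     # hedged, < 0
print(certainty_score("The researchers conclude the effect is real."))  # certain, > 0
```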

“We took claims in the abstract and tried to match them with claims found in the news,” the researchers explain. “So we said, ‘OK, here’s two different people—scientists and journalists—trying to describe the same thing, but to two different audiences. What do we see in terms of certainty?’”
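The matching in the study was done by human annotators, but as a rough illustration of how such pairing could be automated, one plausible approach is a sentence-embedding similarity search. The model name and example sentences below are assumptions, not details from the paper:

```python
# Illustrative automatic claim matching via sentence embeddings.
# The study used human annotators; this sketch shows one plausible
# automated alternative. Model choice is an assumption.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

abstract_claims = ["We find that drug X reduces symptoms by 40%."]
news_sentences = [
    "Scientists say drug X might help ease symptoms.",
    "The trial was funded by a federal grant.",
]

abs_emb = model.encode(abstract_claims, convert_to_tensor=True)
news_emb = model.encode(news_sentences, convert_to_tensor=True)
scores = util.cos_sim(abs_emb, news_emb)  # shape: (1, 2)

# Pick the news sentence most similar to the abstract claim.
best = scores[0].argmax().item()
print(news_sentences[best], scores[0][best].item())
```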

A model was then built to replicate the certainty levels found in the human assessments. It achieved “good enough” performance over a large dataset, though the researchers accept that the diversity in how people perceive certainty makes any such model difficult to build.
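The general approach, supervised learning against human certainty labels, can be sketched with a simple bag-of-words regression. The toy data, rating scale, and model choice below are all assumptions; the study's actual model was considerably more sophisticated:

```python
# Sketch: learn to predict human-annotated certainty ratings from text.
# The toy sentences, the arbitrary rating scale, and the TF-IDF + ridge
# pipeline are assumptions, not the study's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Sentences paired with hypothetical mean annotator certainty ratings.
sentences = [
    "The results prove the treatment works.",
    "The data suggest a possible link.",
    "We conclude the effect is robust.",
    "The association might be coincidental.",
]
ratings = [5.8, 2.4, 5.1, 1.9]  # arbitrary 1-6 scale for illustration

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(sentences, ratings)
print(model.predict(["The findings may indicate a trend."]))
```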

Research translation

Things get a bit harder when looking at the quality of the journal, as the authors highlight that journalists often report a similar level of certainty regardless of the venue the study appeared in.

“This can be problematic given the journal impact factor is an important indicator of research quality,” the researchers explain. “If journalists are reporting research that appeared in Nature or Science and some unknown journals with the same degrees of certainty, it might not be clear to the audience which finding is more trustworthy.”

The researchers hope that their work will give us a better understanding of how science is reported in the media, and they have developed a tool that both scientists and journalists can use to gauge the uncertainty in reporting and research.

“It’s easy to get frustrated with uncertainty,” they conclude. “I think providing a tool like this could have a calming effect to some degree. This work isn’t the magic bullet, but I think this tool could play into a holistic understanding for readers.”
