Scientists increasingly put politics over uncertainty in their research papers

The modern scientific method

The death of uncertainty in science: According to a paper published this week in the peer-reviewed journal Science, scientists in recent years have increasingly abandoned uncertainty in their research papers, and are instead more willing to make claims of absolute certainty without hesitation or even proof.

If this trend holds across the scientific literature, it suggests a worrisome rise of unreliable, exaggerated claims, some observers say. Hedging and avoiding overconfidence “are vital to communicating what one’s data can actually say and what it merely implies,” says Melissa Wheeler, a social psychologist at the Swinburne University of Technology who was not involved in the study. “If academic writing becomes more about the rhetoric … it will become more difficult for readers to decipher what is groundbreaking and truly novel.”

The new analysis, one of the largest of its kind, examined more than 2600 research articles published from 1997 to 2021 in Science, which the team chose because it publishes articles from multiple disciplines. (Science’s news team is independent from the editorial side.) The team searched the papers for about 50 terms such as “could,” “appear to,” “approximately,” and “seem.” The frequency of these hedging words dropped from 115.8 instances per 10,000 words in 1997 to 67.42 per 10,000 words in 2021.
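To make those numbers concrete, here is a minimal sketch, in Python, of how a hedging-word frequency per 10,000 words (and the resulting decline) can be computed. The short term list, function name, and matching rules are illustrative assumptions of mine, not the study’s actual procedure, which searched roughly 50 terms.

```python
import re

# Illustrative subset of the hedging terms mentioned above; the study's full
# list (~50 terms) and its exact matching rules are not reproduced here.
HEDGE_TERMS = ["could", "appear to", "approximately", "seem"]

def hedges_per_10k_words(text: str) -> float:
    """Count hedging-term occurrences, normalized per 10,000 words of text."""
    words = re.findall(r"[A-Za-z']+", text)
    lowered = text.lower()
    hits = sum(len(re.findall(r"\b" + re.escape(term) + r"\b", lowered))
               for term in HEDGE_TERMS)
    return 10_000 * hits / max(len(words), 1)

# The paper's reported drop, expressed as a percentage decline:
decline = (115.8 - 67.42) / 115.8 * 100
print(f"Drop in hedging frequency, 1997 to 2021: {decline:.0f}%")  # about 42%
```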

Those numbers represent a decline of roughly 42%, a trend that has been clear for decades, first becoming obvious in the climate field. » Read more

Less than 1% of all science papers follow the scientific method

The uncertainty of science: A survey of the research done for papers published in peer-reviewed scientific journals has found that less than 1% properly follow the scientific method.

Armstrong, one of the survey’s authors, defined eight criteria for compliance with the scientific method, including full disclosure of methods, data, and other reliable information; conclusions that are consistent with the evidence; valid and simple methods; and valid and reliable data.

According to Armstrong, very little of the forecasting in climate change debate adheres to these criteria. “For example, for disclosure, we were working on polar bear [population] forecasts, and we were asked to review the government’s polar bear forecast. We asked, ‘could you send us the data’ and they said ‘No’… So we had to do it without knowing what the data were.”

According to Armstrong, forecasts from the Intergovernmental Panel on Climate Change (IPCC) violate all eight criteria.

“Why is this all happening? Nobody asks them!” said Armstrong, who says that people who submit papers to journals are not required to follow the scientific method. “You send something to a journal and they don’t tell you what you have to do. They don’t say ‘here’s what science is, here’s how to do it.’”

Worse, the research found that many results were derived not from the data, but were instead crafted to confirm something that was politically advantageous or helpful in winning grants.

Digging deeper into their motivations, Armstrong pointed to the wealth of incentives for publishing papers with politically convenient rather than scientific conclusions. “They’re rewarded for doing non-scientific research. One of my favourite examples is testing statistical significance – that’s invalid. It’s been over 100 years we’ve been fighting the fight against that. Even its inventor thought it wasn’t going to amount to anything. You can be rewarded then, for following an invalid [method].”

“They cheat. If you don’t get statistically significant results, then you throw out variables, add variables, [and] eventually you get what you want.” [emphasis mine]
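To see why Armstrong calls this cheating, here is a small, hypothetical simulation of my own (a generic demonstration of significance-hunting, not a reconstruction of any study he reviewed). When the outcome is pure noise, testing enough candidate variables against the usual p < 0.05 threshold will routinely turn up at least one “statistically significant” correlation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def smallest_p(n_samples: int = 50, n_candidates: int = 20) -> float:
    """Smallest p-value found after testing many unrelated predictors."""
    outcome = rng.normal(size=n_samples)
    p_values = []
    for _ in range(n_candidates):
        predictor = rng.normal(size=n_samples)  # pure noise, no real effect
        _, p = stats.pearsonr(predictor, outcome)
        p_values.append(p)
    return min(p_values)

runs = [smallest_p() for _ in range(1000)]
spurious_rate = np.mean([p < 0.05 for p in runs])
print(f"Runs yielding a spurious 'significant' result: {spurious_rate:.0%}")
# Expect roughly 1 - 0.95**20, i.e. about 64%, even though no real effect exists.
```

With 20 unrelated predictors, chance alone gives about a 64% probability of at least one “significant” hit, which is exactly the trap of adding and dropping variables until something works.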

The scientific community, and especially the climate field, has got to get a handle on this and demand better. Otherwise, we lose the greatest gift science has given to civilization: an unwavering dedication to the truth.

Requiring scientists to document their methods caused positive results in medical trials to plunge

The uncertainty of science: The requirement that medical researchers register in detail the methods they intend to use in their clinical trials, both how they will record their data and how they will measure their outcomes, has caused a significant drop in the share of trials producing positive results.

A 1997 US law mandated the registry’s creation, requiring researchers, beginning in 2000, to record their trial methods and outcome measures before collecting data. The study found that in a sample of 55 large trials testing heart-disease treatments, 57% of those published before 2000 reported positive effects from the treatments. But that figure plunged to just 8% in studies that were conducted after 2000. Study author Veronica Irvin, a health scientist at Oregon State University in Corvallis, says this suggests that registering clinical studies is leading to more rigorous research. Writing on his NeuroLogica Blog, neurologist Steven Novella of Yale University in New Haven, Connecticut, called the study “encouraging” but also “a bit frightening” because it casts doubt on previous positive results.

In other words, before they were required to document their methods, research into new drugs or treatments would show those drugs or treatments to be successful more than half the time. Once researchers had to document their methods in advance, however, the drugs or treatments being tested almost never worked.

The article also reveals a failure of the medical research community to confirm their earlier positive results:

Following up on these positive-result studies would be interesting, says Brian Nosek, a psychologist at the University of Virginia in Charlottesville and the executive director of the Center for Open Science, who shared the study results on Twitter in a post that has been retweeted nearly 600 times. He said in an interview: “Have they all held up in subsequent research, or are they showing signs of low reproducibility?”

Well duh! It appears the medical research field has forgotten this basic tenet of science: A result has to be proven by a second independent study before you can take it seriously. Instead, they would do one study, get the results they wanted, and then declare success.

The lack of success once others could see their methods suggests strongly that much of the earlier research was simply junk, not to be taken seriously.