The uncertainty of science: An attempt to replicate the results of 98 different psychological research papers has found that significantly fewer than half held up.
In the biggest project of its kind, Brian Nosek, a social psychologist and head of the Center for Open Science in Charlottesville, Virginia, and 269 co-authors repeated work reported in 98 original papers from three psychology journals, to see if they independently came up with the same results. The studies they took on ranged from whether expressing insecurities perpetuates them, to differences in how children and adults respond to fear stimuli, to effective ways to teach arithmetic.
According to the replicators’ qualitative assessments, as previously reported by Nature, only 39 of the 100 replication attempts were successful. … There is no way of knowing whether any individual paper is true or false from this work, says Nosek. Either the original or the replication work could be flawed, or crucial differences between the two might be unappreciated. Overall, however, the project points to widespread publication of work that does not stand up to scrutiny. [emphasis mine]
None of this surprises me. The focus of much scientific research, especially in soft sciences like psychology, is statistical in nature and easily manipulated. In fact, most of it isn't science at all, but an attempt to use mere statistics to prove a point. Real science would instead try to find out why something happens, not merely demonstrate through statistics that it does.
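One way to see how easily statistics alone can mislead, quite apart from any deliberate manipulation: even with no real effect at all, a standard significance test will flag a "finding" about 5% of the time, and a lab that quietly tests many hypotheses and reports only the significant one will generate spurious results more often than not. The sketch below is my own illustration of that arithmetic, not anything from the replication project itself; the sample sizes and the 20-hypothesis figure are arbitrary assumptions chosen for the example.

```python
import math
import random

random.seed(0)

def naive_significant(n=50):
    """Draw two groups from the SAME normal distribution (so there is no
    real effect) and return True if a simple two-sided z-test at p < 0.05
    nonetheless calls the difference 'significant'."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    # Population variance is 1 by construction, so the standard error
    # of the difference in means is sqrt(2/n).
    z = (mean_a - mean_b) / math.sqrt(2 / n)
    return abs(z) > 1.96  # the two-sided 5% threshold

trials = 2000
false_positives = sum(naive_significant() for _ in range(trials))
print(f"False-positive rate per test: {false_positives / trials:.3f}")  # ~0.05

# If a researcher tests 20 independent hypotheses and publishes only the
# one that comes up 'significant', the chance of at least one spurious
# finding is 1 - 0.95^20, i.e. about 64%.
print(f"Chance of >=1 spurious finding in 20 tests: {1 - 0.95**20:.2f}")
```

Nothing here explains *why* anything happens; it only shows how readily a purely statistical criterion produces publishable-looking results from pure noise, which is consistent with a literature where fewer than half of findings survive replication.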