Research methods in the social sciences have come under fire in the last decade for being unreliable. In a new study, Gideon Nave, a Wharton assistant professor who specializes in decision science [“Gazetteer,” Jan|Feb 2018], finds the criticism is justified—and that online betting markets could be a handy way to distinguish solid findings from spurious ones.
Nave and his collaborators set out to reproduce the results of 21 high-profile social science studies published in Science or Nature. The researchers were unable to find evidence supporting the original findings for eight of the studies. For the other 13, they found evidence consistent with the original findings, though weaker. They published their results in August in Nature Human Behaviour.
“The incapacity to replicate and generalize published scientific findings undermines the very core of the process in which science accumulates knowledge,” said Nave.
Studies that failed to replicate tended to have traits in common: they involved small sample sizes, relatively high p-values (the probability of obtaining results at least as extreme purely by chance, if there were no real effect), and hypotheses that just sounded too good to be true. In a twist, the researchers also set up a betting market in which social scientists could wager on which studies would replicate and which wouldn’t. Bettors correctly predicted the replication outcomes for 18 of the 21 studies, suggesting experts know a flimsy result when they see one.
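The link between small samples, borderline p-values, and failed replications can be seen in a quick simulation. The sketch below is illustrative only (the effect size, sample size, and trial counts are assumptions, not figures from Nave's study): when a modest true effect is tested with a small sample, even originals that cleared the p < 0.05 bar replicate at a low rate, because the study was underpowered to begin with.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def run_study(effect, n):
    """Simulate one two-group study and return its t-test p-value."""
    treatment = rng.normal(effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    return stats.ttest_ind(treatment, control).pvalue

# Hypothetical parameters: a modest true effect (d = 0.3) studied
# with a small sample (n = 30 per group), run many times.
effect, n, trials = 0.3, 30, 5000
originals = [run_study(effect, n) for _ in range(trials)]

# Keep only the originals that "worked" (p < 0.05), then attempt an
# exact replication of each with the same small sample size.
significant = [p for p in originals if p < 0.05]
replications = [run_study(effect, n) for _ in significant]
rep_rate = np.mean([p < 0.05 for p in replications])

print(f"{len(significant)} of {trials} originals were significant; "
      f"replication rate ~ {rep_rate:.2f}")
```

With these assumed numbers the replication rate lands well below half, mirroring the pattern Nave's team observed: significance in an underpowered original is weak evidence that the effect will show up again.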
Yet the new study wasn’t entirely gloomy about the future of human knowledge. Of the 21 studies, those conducted in the last five years replicated at higher rates than those conducted five to eight years ago. This suggests, said Nave, that social scientists have taken the criticism to heart and improved their methods. “I’d take it as another reason for cautious optimism,” he wrote on his blog. —Kevin Hartnett