Publication bias

Replication crises, P-values, bitching about journals, other debilities of contemporary science at large

August 30, 2016 — July 22, 2021

collective knowledge
how do science
information provenance
statistics
Figure 1: Multiple testing across a whole scientific field, with a side helping of biased data release and terrible incentives.

On one hand, we hope that journals will help us find things that are relevant. On the other hand, we hope the things they help us find are actually true. It’s not at all obvious how to solve these kinds of classification problems economically, but we kind of hope that peer review does it.
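To make the classification framing concrete, here is a back-of-envelope calculation (a toy model, not drawn from the references below; the prior, power, and threshold are made-up numbers):

```python
def literature_ppv(prior, power, alpha):
    """Fraction of published 'significant' findings that are true.

    prior: fraction of tested hypotheses that are actually true
    power: chance a true effect clears the significance threshold
    alpha: chance a null effect clears it anyway (false-positive rate)
    """
    true_hits = power * prior
    false_hits = alpha * (1.0 - prior)
    return true_hits / (true_hits + false_hits)

# A field chasing long-shot hypotheses with under-powered studies:
print(literature_ppv(prior=0.1, power=0.5, alpha=0.05))  # ~0.53
```

In that toy field, roughly half of what passes the filter is false, before anyone even tries to game it.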

To read: “My likelihood depends on your frequency properties.”

Keywords: the “file-drawer process” and the “publication sieve” are the large-scale models of how this plays out across a scientific community; “researcher degrees of freedom” is the same story at the scale of the individual researcher.
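A minimal simulation of the sieve at field scale (my own sketch; the effect size, sample sizes, and the one-sided \(P< 0.05\) filter are all assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# A field running many small studies of one tiny true effect.
n_studies, n_per_arm, true_d = 10_000, 20, 0.1
published_effects = []
for _ in range(n_studies):
    treatment = rng.normal(true_d, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    result = stats.ttest_ind(treatment, control)
    # The publication sieve: only positive, 'significant' results
    # escape the file drawer.
    if result.pvalue < 0.05 and result.statistic > 0:
        published_effects.append(treatment.mean() - control.mean())

print(f"true effect:           {true_d}")
print(f"mean published effect: {np.mean(published_effects):.2f}")
# The sieve keeps only the lucky draws, so the published literature
# overstates the effect several-fold.
```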

This is particularly pertinent in social psychology, where it turns out there is too much bullshit with \(P\leq 0.05\).
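At the individual scale, a single researcher degree of freedom is enough to do damage. Here is optional stopping in the spirit of Simmons, Nelson, and Simonsohn (2011), simulated with numbers I made up: peek at the p-value every ten subjects per arm and stop as soon as it looks “significant”:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def flexible_study(max_n=100, step=10, alpha=0.05):
    """One study of a true null effect, analysed with optional stopping."""
    a, b = [], []
    for _ in range(max_n // step):
        a.extend(rng.normal(0.0, 1.0, step))  # the effect is exactly zero
        b.extend(rng.normal(0.0, 1.0, step))
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True  # stop and write it up
    return False

n_sims = 2_000
hits = sum(flexible_study() for _ in range(n_sims))
print(f"false-positive rate with peeking: {hits / n_sims:.2f}")
# Well above the nominal 0.05, from this one degree of freedom alone.
```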

Figure 2: We’re out here every day, doing the dirty work finding noise and then polishing it into the hypotheses everyone loves. It’s not easy. —John Schmidt, The noise miners

Sanjay Srivastava, “Everything is fucked: the syllabus.”

2 On the easier problem of local theories

On the other hand, we can all agree that finding small-effect universal laws in messy domains like human society is a hard problem. In machine learning we frequently give up on that and just try to solve a local problem: does this method work, in this domain, with enough certainty to help with this task? We then still face a domain-adaptation problem when we try to work out whether we are still solving that problem, or at least one similar enough to it; one cheap diagnostic for that is sketched below. But the local version feels like it might be easier by virtue of being less ambitious.
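The diagnostic is a classifier two-sample test (a standard trick, not something from the references here; the data below are synthetic): train a classifier to distinguish the data you built the model on from the data you are about to apply it to, and worry in proportion to how well it succeeds.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic features from the original domain and a slightly shifted new one.
X_old = rng.normal(0.0, 1.0, size=(500, 5))
X_new = rng.normal(0.3, 1.0, size=(500, 5))

# Label each row by domain and see whether a classifier can tell them apart.
X = np.vstack([X_old, X_new])
y = np.repeat([0, 1], 500)
auc = cross_val_score(
    LogisticRegression(max_iter=1000), X, y, cv=5, scoring="roc_auc"
).mean()

print(f"domain-classifier AUC: {auc:.2f}")
# AUC near 0.5: plausibly still the same problem.
# AUC well above 0.5: the domain has moved; re-ask whether the method helps here.
```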

3 Incoming

4 References

Gabry, Simpson, Vehtari, et al. 2019. “Visualization in Bayesian Workflow.” Journal of the Royal Statistical Society: Series A (Statistics in Society).
Gelman, and Shalizi. 2013. “Philosophy and the Practice of Bayesian Statistics.” British Journal of Mathematical and Statistical Psychology.
McShane, Gal, Gelman, et al. 2019. “Abandon Statistical Significance.” The American Statistician.
Nissen, Magidson, Gross, et al. 2016. “Publication Bias and the Canonization of False Facts.” arXiv:1609.00494 [physics, stat].
Ritchie. 2020. Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth.
Simmons, Nelson, and Simonsohn. 2011. “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant.” Psychological Science.