Highlighted Selections from:

It's Too Hard to Publish Criticisms and Obtain Data for Replication


Gelman, Andrew. “It’s Too Hard to Publish Criticisms and Obtain Data for Replication.” Ethics and Statistics column, Chance 26.3 (2013): 49-52. Print.

p.50: Exploratory research, inconclusive research, and research that confirms existing beliefs: all these can be difficult to get published in a top journal. Instead there is a premium on the surprising and counterintuitive, especially if such claims can be demonstrated in a seemingly conclusive way via statistical significance. I worry that the pressure to produce innovation leads to the sort of mistakes that led to this manuscript getting through the refereeing process. -- Highlighted apr 20, 2014
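
The selection dynamic described here is sometimes called the "statistical significance filter," and a small simulation makes it concrete. The sketch below is not from the article; the true effect size and noise level are assumed purely for illustration. It shows that when only statistically significant estimates get attention, the estimates that survive greatly overstate the true effect.

    # A minimal sketch (assumed numbers, not from the article) of the
    # "statistical significance filter": many noisy studies of a small
    # true effect, with attention paid only to the significant ones.
    import random
    import statistics

    random.seed(1)
    TRUE_EFFECT = 0.1  # small real effect, assumed for illustration
    SE = 0.2           # standard error of each study's estimate

    estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(10_000)]
    # Keep only estimates that would clear a two-sided 5% significance test.
    significant = [e for e in estimates if abs(e) > 1.96 * SE]

    print(f"true effect:              {TRUE_EFFECT}")
    print(f"mean of all estimates:    {statistics.mean(estimates):.3f}")
    print(f"mean of significant ones: {statistics.mean(significant):.3f}")
    # The significant subset averages around 0.4, several times the true
    # 0.1: surprise-seeking publication systematically exaggerates effects.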

p.50: The letter was sent to three reviewers, none of whom disagreed on the merits of my criticisms, but the editors declined to publish the letter because they judged it to not be in the top 10% of submissions to the journal. I’m sure my letter was indeed not in the top 10% of submissions, but the journal’s attitude presents a serious problem, if the bar to publication of a correction is so high. That’s a disincentive for the journal to publish corrections, a disincentive for outsiders such as me to write corrections, and a disincentive for researchers to be careful in the first place. -- Highlighted apr 20, 2014

p.50: The editors concluded their letter to me as follows: “We do genuinely share with you an enthusiasm for your topic and its importance to sociology. We hope that the reviewer comments on this particular manuscript provide useful guidance for that future work.” I appreciate their encouragement, but again the difficulty here is that selection bias is not a major topic of my research. The journal does not seem to have a good system for handling short critical comments by outsiders who would like to contribute without it being part of a major research project. -- Highlighted apr 20, 2014

p.50: Publishing drive-by criticism such as mine would, I believe, have the following three benefits:

* Correct the record, so that future researchers who encounter this research paper are not misled by the biases in its methods
* Provide an incentive for journal editors to be careful about accepting papers with serious mistakes, and take some of the pressure off the review process, since there is a chance to catch problems at the post-publication stage
* Spur an expert (such as Hamilton or someone else working in that field) to revisit the problem

-- Highlighted apr 20, 2014

p.51: The asymmetry is as follows: Hamilton’s paper represents a major research effort, whereas my criticism took very little effort (given my existing understanding of selection bias and causal inference). The journal would have had no problem handling my criticisms, had they appeared in the pre-publication review process. Indeed, I am pretty sure the original paper would have needed serious revision and would have been required to fix the problem. But once the paper has been published, it is placed on a pedestal and criticism is held to a much higher standard. -- Highlighted apr 20, 2014

p.51: One of the anonymous referees of my letter wrote, “We need voices such as the voice of the present comment-writer to caution those who may be over-interpreting or mis-specifying models (or failing to acknowledge potential bias-inducing aspects of samples and models). But we also need room in our social science endeavor to reveal intriguing empirical patterns and invoke sociological imagination (that is, theorizing) around what might be driving them.” This seems completely reasonable to me, and I find it unfortunate that the current system does not allow the exploration and criticism to appear in the same place. -- Highlighted apr 20, 2014

p.52: Science is said to be self-correcting, with the mechanism being a dynamic process in which theoretical developments are criticized; experiments are designed to refute theories, distinguish among hypotheses, and (not least) to suggest new ideas; empirical data and analyses are published and can then be replicated (or fail to be replicated); and theories and hypothesized relationships are altered in light of new information. -- Highlighted apr 20, 2014

p.52: A recent example arose with Carmen Reinhart and Kenneth Rogoff, who made serious mistakes handling their time-series cross-sectional data in an influential paper on economic growth and government debt. It was over two years before those economists shared the data that allowed people to find the problems in their study. I’m not suggesting anything unethical on their part; rather, I see an ethical flaw in the system, so that it is considered acceptable practice not to share the data and analysis underlying a published report. -- Highlighted apr 20, 2014
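
The error in that case was an Excel formula whose range omitted several rows of countries. A toy example (invented numbers, not the actual Reinhart and Rogoff data) shows how easily such a slip changes a headline average, and how trivially it is caught once the raw data are available:

    # Toy sketch with invented numbers (not the actual Reinhart-Rogoff
    # dataset): an Excel-style range error that silently drops rows
    # from an average of growth rates.
    growth_by_country = {"A": 2.2, "B": -0.3, "C": 1.9, "D": 2.6, "E": 1.4}

    rows = list(growth_by_country.values())
    full_mean = sum(rows) / len(rows)      # all five countries
    truncated_mean = sum(rows[:3]) / 3     # last two rows never entered

    print(f"mean over all rows:        {full_mean:.2f}")       # 1.56
    print(f"mean over truncated range: {truncated_mean:.2f}")  # 1.27
    # With the spreadsheet in hand, the discrepancy is a one-line check;
    # with the data withheld, it can go unnoticed for years.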

p.52: Psychology researcher Gary Marcus writes:

There is something positive that has come out of the crisis of replicability, something vitally important for all experimental sciences. For years, it was extremely difficult to publish a direct replication, or a failure to replicate an experiment, in a good journal. ... Now, happily, the scientific culture has changed.

-- Highlighted apr 20, 2014

p.52: And sociologist Fabio Rojas writes:

People may sneer at the social sciences, but they hold up as well. Recently, a well-known study in economics was found to be in error. People may laugh because it was an Excel error, but there’s a deeper point. There was data, it could be obtained, and it could be replicated. Fixing errors and looking for mistakes is the hallmark of science. ...

-- Highlighted apr 20, 2014

p.52: But I worry about a sense of complacency. The common thread in all these stories is the disproportionate effort required to get criticisms and replications into the standard channels of scientific publication. In case after case, outside researchers who notice problems with published articles find many obstacles in the way of post-publication replication and review. -- Highlighted apr 20, 2014

p.52: Economist Steven Levitt recognizes the problem:

Is it surprising that scientists would try to keep work that disagrees with their findings out of journals? ... Within the field of economics, academics work behind the scenes constantly trying to undermine each other. I’ve seen economists do far worse things than pulling tricks in figures. When economists get mixed up in public policy, things get messier.

-- Highlighted apr 20, 2014

p.52: If post-publication criticism and replication were encouraged rather than discouraged, this could motivate a new wave of research evaluating existing published claims and also motivate published researchers to be more accepting of problems with their work. The larger goal is to move science away from a collection of isolated studies that are brittle (in the terminology of financial critic Nassim Taleb) to a system of communication that better matches the continually revising process of science itself. -- Highlighted apr 20, 2014