Andrew Gelman with an excellent suggestion on how we could improve the quality of scientific journals (emphasis added):
– Lots of iffy studies are published every year in psychology, medicine, biology etc. For reasons explained by Uri Simonsohn and others, it’s possible to get tons of publishable (i.e., “statistically significant”) results out of noise. Even some well-respected work turns out to be quite possibly wrong.
– It would be great if studies were routinely replicated. In medicine there are ethical concerns, but in biolab or psychology experiments, why not? What if first- and second-year grad students in these fields were routinely required to conduct replications of well-known findings? There are lots of grad students out there, and we’d soon get a big N on all these questionable claims—at least those that can be evaluated by collecting new data in the lab.
– As many have noted, it’s hard to publish a replication. But now we can have online journals. Why not a Journal of Replication? (We also need to ditch the system where people are expected to anonymously review papers for free, but that’s not such a big deal. We could pay reviewers (using the money that otherwise would go to the executives at Springer etc) and also move to an open post-publication review system – that is, the journal looks something like a blog, in that there’s a space to comment on any article.) Paying reviewers might sound expensive, but peer review is part of the scientific process. It’s worth paying for.
Getting paid to do peer review would be nice, and more people would probably participate. I like it. That noted, I think his suggestion of instituting a post-publication review system would have a far greater effect on raising the quality of science writing and research. The last paper I submitted to a peer-reviewed journal was pretty long. Only one reviewer commented.* (IIRC, the journal editor sent the manuscript out to three people.) That’s it. I got more feedback from colleagues at work prior to sending the manuscript to the journal than I did from the journal’s review process. Last I checked, on the order of a dozen people had downloaded the manuscript. (I’m actually rather pleased with myself that it’s been that many.) I’d like to think some of those people would have offered comments if Applied Optics had a post-publication review system in place.
* Here’s another thing: I found that reviewer’s comments a bit pedantic. Okay, more than a bit pedantic – really pedantic. At Applied Optics – and all other journals I’m familiar with – the editor gives the author(s) an opportunity to respond to the reviewers’ criticisms. So I did. In no uncertain terms. My reply was in turn passed along to the reviewer when I submitted the revised manuscript. I hadn’t realized that would be the case, but I should have. I suppose it’s appropriate that it was, but the asymmetry seemed a bit unfair to me at the time. Why should the reviewer get to take anonymous shots at me? If he’s got a beef, then stand up and say so. Did I reduce the odds of my manuscript getting published by defending myself? Dunno. Whether or not it did, it shouldn’t have. It’s hard for me to believe one-way anonymity helped in that regard. Anyhow, did the anonymous review process make my manuscript better? Eh, maybe a little. Would I appreciate feedback now? Absolutely, particularly non-anonymous feedback. I look at the paper now and note things I would have done differently – not just editing it so that it would read more easily, but changes in the way I did my analysis. (The paper came out 3.5 years ago. I’ve learned things since then.) It would be instructive to see if anyone else identified the same issues and, if so, how they would recommend dealing with them.