NOTE: the following post is a direct reply to Micah Allen's article “Birth of a New School: How Self-Publication can Improve Research”. I am replicating my reply here to maximise exposure, but any discussion should take place at the point of origin. If you wish to engage and comment, please do so there. You should also read the original post in order to understand what I'm talking about.
My other point of view.
For a little while, I thought about writing a similar post, similar only in its intentions.
My starting point is different. I think that the research environment is profoundly ill (personally, that's why I'm happy to be sitting on its edge) and that it needs reform. From this position, new trends like Post-Publication Peer Review (PPPR) and Self-Publishing (SP) may come to the rescue: they are to some extent inevitable (technology enables them, hence they will happen, though we don't know in what form), and therefore they offer a chance to steer the upcoming change in a direction that could help cure the profound illness. The people who gravitate within Micah's circle and will read this mostly share two features: they are not part of the problem, and they are likely to welcome both PPPR and SP. This is why I'm writing: I would like to help raise awareness of the challenges that lie ahead.
The main difficulty I face is that many of you will probably disagree with my initial diagnosis: you will probably agree that there are plenty of problems in the scientific community, but stop short of declaring that research is profoundly broken. Let's see if I can change your mind (assuming I'm not preaching to the converted).
Most of you will know this, but the Economist recently published two articles, “How science goes wrong” and “Trouble at the lab”. I'll quote the first, because it's short and to the point:
“A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic.” (Hint: it IS optimistic, emphasis is mine)
“‘Negative results’ now account for only 14% of published papers, down from 30% in 1990.” (I think the source of this claim is here – pay-wall alert! Emphasis is mine)
This isn't “far from ideal”; it's disastrous. Consider this: how does research get funded? Through research grants that need to be backed by published evidence (which is likely to be flawed), and by the PIs' reputations, again based on peer-reviewed publications, equally likely to be bogus.
P-hacking (I like to think of it also as P-fishing) is a problem, as is the general misapplication of statistics. If you are not convinced, read this: “Most researchers don't understand error bars”. That could be enough, but the ironic reality is that even the explanation given there is wrong (and not marginally!), and the mistake was not uncovered in the comments; I've checked. 10 brownie points to the first one to spot it.
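To make the P-fishing point concrete, here is a small, self-contained simulation (illustrative only: the 20-outcome design and sample sizes are my assumptions, not drawn from any of the articles cited here). If a study measures many independent outcomes where no real effect exists and reports “success” whenever any of them reaches p < 0.05, it will come up “positive” far more than 5% of the time.

```python
import random
import math

def z_pvalue(sample, sigma=1.0):
    """Two-sided p-value for H0: mean == 0, known sigma (simple z-test)."""
    n = len(sample)
    z = (sum(sample) / n) / (sigma / math.sqrt(n))
    # Normal CDF expressed via the error function: Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def fished_study(n_outcomes=20, n_subjects=30, rng=random):
    """Simulate one null study that measures many outcomes and
    declares 'success' if ANY outcome reaches p < 0.05."""
    for _ in range(n_outcomes):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n_subjects)]
        if z_pvalue(sample) < 0.05:
            return True
    return False

random.seed(42)
n_studies = 2000
false_positive_rate = sum(fished_study() for _ in range(n_studies)) / n_studies
print(f"Family-wise false-positive rate: {false_positive_rate:.2f}")
print(f"Theoretical expectation: {1 - 0.95**20:.2f}")
```

With 20 independent looks at pure noise, the expected rate of “finding something” is 1 − 0.95²⁰ ≈ 0.64, which the simulation reproduces: roughly two null studies out of three can report a significant result if nobody pre-commits to a single outcome.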
Why does this happen? Mostly because the current system provides the wrong incentives: one needs to publish or die, no one gains anything from pointless replication, PPPR doesn't produce any career advantage, and statistics is hard.
So, let’s summarise the symptoms (based on the Economist):
1) 50% or more of what is published is wrong.
2) A vast majority of negative or null results is never published.
3) Researchers won't spot clear statistical mistakes, and the vast majority of the life sciences rely heavily on stats.
4) No one knows about negative results, so researchers may try to test hypotheses that should already be recognisable as wrong.
5) Grants are assigned by people who don’t spot evident errors, based on ‘evidence’ that is mostly wrong.
Because of error propagation, one could estimate that 80-90% of research funds are assigned for the wrong reasons, and that’s being generous.
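To show where a figure like 80–90% could come from, here's a back-of-the-envelope sketch (the number of supporting findings per grant is my assumption, chosen only for illustration): if a grant case rests on a few independent published findings, each flawed with probability 0.5 (the venture-capitalists' rule of thumb quoted above), the chance that at least one supporting input is flawed grows quickly.

```python
# Back-of-the-envelope: if a grant case rests on k independent published
# findings, each flawed with probability p, the chance that at least one
# supporting input is flawed is 1 - (1 - p)**k.
def p_flawed_case(p_flawed_finding, k_findings):
    return 1 - (1 - p_flawed_finding) ** k_findings

for k in (2, 3, 4):
    print(f"{k} supporting findings: "
          f"{p_flawed_case(0.5, k):.0%} chance the case rests on a flawed input")
```

With only 2–4 supporting findings, the probability of a flawed basis is already 75–94%, which is where the 80–90% estimate in the text comes from; it says nothing about any individual grant, only about the expected reliability of the evidence base as a whole.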
If this is not a terminally ill patient, I’ve never seen one. Convinced? If not, do your homework and make your (evidence backed) case. Otherwise follow me.
Now, the Economist suggests the following solutions:
a) Getting to grips with statistics
b) Research protocols should be registered in advance and monitored in virtual notebooks
c) Work out how best to encourage replication
d) Journals should allocate space for “uninteresting” work
e) Grant-givers should set aside money to pay for it
f) Peer review should be tightened – or perhaps dispensed with altogether, in favour of post-publication evaluation in the form of appended comments
g) Policymakers should ensure that institutions using public money also respect the rules.
And I’ll add:
h) For goodness sake, make sure that negative results are published, in one way or the other.
These are all good suggestions; there may be more, but I guess we can work with these. The game is: can we design new procedures and foster a cultural change that will facilitate the solutions above?
PPPR can help with a) and f), and it's already happening, so that's good. What about Self-Publishing? Well, one could lead by example and SP one's own unsuccessful experiments.
I would also argue that one should always SP an outline of ongoing experiments, containing the predicted outcome measure. You won't need to explain every detail, just the working hypothesis, to make clear what you expect to find. When the results are published, you can link back to the SP declaration and show that you haven't fished for a significant P (or publish the unsuccessful report).
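As a sketch of what such a self-published declaration might contain (the field names and values below are purely illustrative, not any existing standard or my actual proposal for a schema):

```python
# A hypothetical, minimal pre-registration record one could self-publish
# before running an experiment. Every field name here is illustrative.
import json
from datetime import date

preregistration = {
    "working_hypothesis": "Treatment X reduces reaction times vs. placebo",
    "primary_outcome": "mean reaction time (ms)",
    "predicted_direction": "treatment < placebo",
    "planned_test": "two-sample t-test, alpha = 0.05, two-sided",
    "planned_sample_size": 40,
    "registered_on": date.today().isoformat(),
}

# Publishing this as plain text (or JSON) with a stable URL is enough:
# the later paper can link to it as proof the outcome wasn't fished for.
print(json.dumps(preregistration, indent=2))
```

The point is not the format but the timestamped commitment: a single primary outcome and analysis plan declared before the data exist, which the final publication can cite.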
Furthermore, to facilitate all of the points, even those that seem hopelessly out of reach (such as g), everyone can promote the necessary cultural changes, in a gazillion different ways. A few examples follow:
I. Give a seminar about the problem and the possible solutions in your institution. If possible, make sure these things are taught at BA and postgraduate levels.
II. Talk about it. Link to this discussion at regular intervals(!), ask your peers and PIs for their opinion, and foster the right kind of peer pressure.
III. Reward the right attitude whenever you see it. If you're evaluating an application (it will happen!), keep this discussion in mind, and share your concerns with the panel.
IV. If you make an application, make sure you show off the things you're doing towards this aim (allow other people to reward the right attitude).
V. Once in your life, choose a topic and do a Systematic Review on it (even a narrow/small one). You'll learn a lot and will maximise the impact of existing research; encourage your junior partners/PhD students to do the same (note: I work in this field, so there's a clear COI for me). You can always publish a systematic review; it won't be wasted time.
VI. Allocate one hour per week to some form of PPPR, and make sure your colleagues know you are doing so.
VII. You fill this one in: there must be other things that can be done.
As you can see, this post is not about self-publication in itself; I'm trying to look at the bigger picture and talk about the culture shift that needs to precede and guide the more technical side of SP. My previous point on the need for a unified, not-for-profit platform remains valid, but it is secondary to what I've written here. In this sense, I'm with Micah: cultural change takes precedence. That is why I'm making the bigger cultural problem explicit.