Sunday, 25 March 2018

Separation of feedback, publishing and assessment of scientific studies



I once asked a friend and colleague about an incorrect sentence in one of his scientific articles. He is a smart cookie and should have known better. His answer was that he knew it was wrong, but the peer reviewer had requested that claim. The error was small and completely inconsequential for the results; no real harm was done. Still, I wondered what I would have done.

Peer review has two roles: it provides detailed feedback on your work, and it advises the editor on whether the article is good enough for the journal. The feedback normally makes the article better, but it is somewhat uncomfortable to argue with reviewers who have a lot of power because of their second role.


Cartoon "Your Manuscript On Peer Review" by redpen/blackpen.
My experience is that you can normally argue your case with a reviewer. Still, reaching a common understanding can take an additional round of review, which means the paper is published a few months later. In the worst case, not agreeing with a reviewer can mean that the paper is rejected and you have to submit it to another journal.

It is quite common for reviewers to abuse their power by requesting that their own work be cited (more). Mostly this is somewhat subtle and the citation more or less relevant. However, an anonymous reviewer once requested that I cite four articles by one author, one of which was somewhat relevant. That does not hurt the article, but it is disgusting power abuse and it rewards bad behavior. My impression is that not all of these requests are head fakes; when I write a critical review, I make sure not to ask for citations of my own work, but recommend some articles by colleagues instead. Multiple colleagues, so as not to get any of them into trouble.

Grassroots journals

I have started a grassroots journal on the homogenization of climate data and only recently realized that it also produces a valuable separation of feedback, publishing, and assessment of scientific studies. That separation by itself can lead to a much healthier and more productive quality control system.

A grassroots journal assesses published articles and manuscripts in a field of study. One could also see it as a continually up-to-date review article. At least two reviewers write a review of the strengths and weaknesses of an article, everyone can comment on parts of the article, and the editors write a synthesis of the reviews. A grassroots journal does not publish the articles themselves; it collects articles published anywhere.

Every article also gets a quantitative assessment. This is similar to the current practice of estimating how important an article is from the journal it was able to get into. However, it does not reward people for submitting their articles to too prestigious a journal in the hope of getting lucky, which creates unnecessary duplicate reviewing. For example, the publisher Frontiers reviews 2.4 million manuscripts and has to bounce about 1 million valid papers.

With traditional journals, your manuscript only has to pass the threshold at the time of publication. With the up-to-date rolling reviews of grassroots journals, articles of lasting value are rewarded.

I would not have minded building a system without a quantitative assessment, but there are real differences between articles, readers need to prioritize their reading, and funding agencies would likely not accept grassroots journals as a replacement for the current system without it.

That is the final aim: getting rid of the current publishing system, which holds science back. That grassroots journals provide value immediately will hopefully make the transition easier.

The more widely the assessments of grassroots journals are accepted, the less it matters where you publish. Currently there is typically one journal, sometimes two, with the right topic and prestige to publish in. The situation for the reader is even worse: you often need one specific paper, not just some paper on the topic, and for that specific paper there is only one (legal) supplier. This near-monopolistic market lets Elsevier make profit margins of 30 to 50 percent, and it suppresses innovation.



Another symbol of the monopolistic market is the manuscript submission systems, which combine the worst of pre-internet paper submission (every figure in a separate file, captions in a separate file) with the internet-age adage "save labor costs by letting your customers do the work" (adding the captions a second time when uploading a figure, with a neat pop-up for special characters).

Separation of powers

Publishing is easy nowadays; arXiv does it for about one dollar per manuscript. Once scientists can freely choose where to publish, publishers will have to provide good services at reasonable cost. The most important service would be to provide a broad readership by publishing Open Access.

Maybe it will even go one step further, and scientists will simply publish their manuscripts on a pre-print server and tell the relevant grassroots journals where to find them. Such scientists would likely still want feedback from their colleagues on the manuscript. Several initiatives are currently springing up to review manuscripts before they are submitted to journals, for example Peer Community In (PCI). Currently, PCI runs several rounds of review until the reviewers "endorse" a manuscript, so that in principle a journal could publish it without further peer review.

With a separate, independent assessment of the published article, there would no longer be any need for the "feedback peer reviewers" to give their endorsement. (It doesn't hurt, though.) The authors would have much more freedom to decide whether the changes peer reviewers suggest are actually improvements. The authors, and not the reviewers, would decide when the manuscript is finished and can be published. If they make the wrong decisions, that would naturally be reflected in the assessment. And not adding four citations to please a peer reviewer would no longer be a problem.

There is a similar initiative in the life sciences called APPRAISE, but it will only review manuscripts published on pre-print servers. Once the journals are gone, that amounts to the same thing, but I feel that grassroots journals add more immediate value by reviewing all articles on one topic, just as a review article should review the entire literature and not a random part of it.

A vigorously debated topic is whether peer reviews should be open or closed. Recently ASAPbio had this discussion and comprehensively summarized the advantages and disadvantages (well worth reading). Both systems have their strengths and I do not see one of them winning.

This discussion may change when we separate feedback and assessment. Giving feedback is mostly doing the authors a favor and could more easily be done in the open. Rather than cumbersome month-long rounds of review, it would be possible to simply write an email or pick up the phone to clarify contentious points. On the other hand, anonymity makes it easier to give an honest assessment, and I expect that part to be performed mostly anonymously. The editors of a grassroots journal determine what is published and can thus ensure that no one abuses their anonymity.

The future

To conclude: a decade from now, a researcher writes an article and asks their colleagues for feedback. Once the manuscript no longer changes much, it is sent to an independent proofreading service. Another firm or person takes care of the layout and ensures that the article can still be read in a century by producing versions in open standards.

The authors decide when their manuscript is ready to be published and upload it to the article repository. They send a notice to the journals that cover the topic. Journal A makes an assessment. Journals B and C copy this assessment, while journal D also uses it but requests an additional review for a part that is important to them and writes another synthesis.
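To make that hand-off concrete, here is a minimal sketch of what such a copyable assessment record could look like. Everything in it is hypothetical: no such standard exists yet, and the class names, fields and scores are made up for illustration.

    from __future__ import annotations
    from dataclasses import dataclass, field

    # Hypothetical data model for a copyable assessment record; the class
    # and field names are illustrative, no such standard exists.
    @dataclass
    class Review:
        reviewer: str   # possibly a pseudonym, to preserve anonymity
        text: str       # strengths and weaknesses of the article

    @dataclass
    class Assessment:
        article_doi: str                  # the article in the repository
        journal: str                      # the journal issuing this assessment
        reviews: list[Review] = field(default_factory=list)
        synthesis: str = ""               # the editors' summary of the reviews
        score: float | None = None        # optional quantitative assessment
        derived_from: str | None = None   # assessment this one copies, if any

    # Journal A assesses the article (the DOI is a placeholder); journal B
    # copies the assessment; journal D reuses it, but adds one more review
    # and writes its own synthesis.
    a = Assessment(
        article_doi="10.1234/placeholder",
        journal="Journal A",
        reviews=[Review("reviewer-1", "Sound method, small sample."),
                 Review("reviewer-2", "Clear writing, robust results.")],
        synthesis="Solid study; the small sample limits the conclusions.",
        score=7.5,
    )
    b = Assessment(a.article_doi, "Journal B", reviews=a.reviews,
                   synthesis=a.synthesis, score=a.score, derived_from="Journal A")
    d = Assessment(a.article_doi, "Journal D",
                   reviews=a.reviews + [Review("reviewer-3", "Data handling checks out.")],
                   synthesis="Agrees with Journal A; the data handling was verified.",
                   score=8.0, derived_from="Journal A")

Keeping a reference to the assessment that was copied would make the reuse between journals transparent to readers.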

Readers add comments to the article using web annotations, and the authors reply to them with clarifications. Authors can also add comments to share new insights into what was good and bad about the article.
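Web annotations are already an open standard, the W3C Web Annotation Data Model, so such comments would not need proprietary infrastructure. Below is a minimal sketch of a reader comment in that format; the URLs and the quoted passage are placeholders, not real services.

    import json

    # A reader comment in the W3C Web Annotation Data Model
    # (https://www.w3.org/TR/annotation-model/); the URLs and the
    # quoted passage are placeholders.
    annotation = {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "body": {
            "type": "TextualBody",
            "value": "This step assumes the reference series itself is homogeneous.",
            "purpose": "commenting",
        },
        "target": {
            # The article in the repository the comment is attached to.
            "source": "https://repository.example.org/articles/1234",
            # Anchor the comment to an exact passage of the text.
            "selector": {
                "type": "TextQuoteSelector",
                "exact": "the homogenized series",
            },
        },
    }

    print(json.dumps(annotation, indent=2))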

Two years later, a new study shows that one of the choices in the article was not optimal. This part was important for journals C and D, and they update their assessments. The authors decide that it is relatively easy to redo the article with a better choice and that the article is sufficiently important to be worth the work. They upload the updated study to the repository and the journals update their assessments.



Related reading

APPRAISE (A Post-Publication Review and Assessment In Science Experiment). A similar idea to grassroots journals, but they only want to review pre-prints and will thus only review part of the literature. See also NPR on this initiative.

A related proposal by Gavin Schmidt: Someone C.A.R.E.S. Commentary And Replication in Earth Science (C.A.R.E.S.). Do we need a new venue for post-publication comments and replications?

Psychologist Henry L. Roediger, III on Anonymity in Scientific Publishing. A well-written article that lays out all the arguments, which differ depending on whether we talk about authors, reviewers, or editors. The author likes signed reviews. I feel that editors should prevent reviewers from taking advantage of their anonymity.


* Photo of scientific journals by Tobias von der Haar, used under an Attribution 2.0 Generic (https://creativecommons.org/licenses/by/2.0/) license.
* Graph of publishing costs by Dave Gray, used under an Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0) license.


