
Friday, 27 July 2018

German investigative reporter team uncovers large peer review scandal



The International Consortium of Investigative Journalists, which also uncovered the Paradise Papers and Panama Papers, investigated the world of predatory scientific journals and conferences. Most of the investigation was done by German journalists, and in Germany it has become a major news story that made the evening news.

The problem is much larger than I thought. In Germany it involves about five thousand scientists; roughly one percent of scientists have at some point used these predatory services. That is embarrassing and a waste of money.

While the investigators seem to understand scientific publishing well, it seems they do not understand science and the role of peer review in it. One piece of evidence is the preview picture of the documentary at the top of this article. It shows how the two reporters dressed up to present some nonsense at a fake conference, presumably dressed up the way they think scientists look. I have never seen such weird people at a real conference.

They may naturally claim they dressed like that to make the presentation even more obviously fake. But they also claim that no one in the audience noticed their presentation was fake; I guess they think so because they got polite applause. That is not a good argument: everyone gets applause, no matter how bad the presentation was.


A figure from the article that was published in the proceedings of the above fake conference with the title "Highly-Available, Collaborative, Trainable Communication - A Policy-Neutral approach". Clearly no one had looked at it before publishing. It got a "Best Presentation Award".

The stronger evidence that the reporters do not understand the way science works is the highly exaggerated conclusions they draw, which may lead to bad solutions. At the end of the above documentary (in German) the reporter asks: "Was wenn man keinen mehr glauben kann?", "What if you can no longer believe anyone?". Maybe the journalists forgot to ask the interviewed scientists to assess the bigger picture, which is what I will do in this post. There is no reason to doubt our scientific understanding of the world because of this.

As an aside, the journalists of the International Consortium of Investigative Journalists are the good guys, but (Anglo-American) journalism that rejects objectivity is a bigger problem for the question of what we can believe than science is.

Fortunately most of the reporting makes clear that the main driving force behind the problem is the publish-or-perish system that politicians and the scientific establishment have set up to micro-manage scientists. If you reward scientists for writing papers rather than doing science, they will write more papers, in the worst case in predatory journals.

Those in power will likely prefer to make the micromanagement more invasive and prescribe where scientists are allowed to publish. The near monopolistic legacy publishers, who are the only ones really benefiting from this dysfunctional system, will likely lobby in this direction.

Peer review

Many outside of science have unrealistic expectations of peer review and of peer reviewed studies. Peer review is just a filter that improves the average quality of the articles. Science does not deal in certainty (that is religion) and peer reviewed studies certainly do not offer certainty. The claims of single studies (and single scientists) may be better than a random blog post, but reliable scientific understanding should be based on all studies, preferably a few years old, and interpreted by many scientists.

This goes very much against the mores of the news business, which focuses on single studies that have just come out and may add human interest by portraying single heroic scientists. The news likes spectacular studies that challenge our current understanding and are thus the studies most likely to be wrong. If the mess the public sees were science, we would not have much scientific progress.

As a consumer of science reporting I would much prefer to read overviews of the current understanding in a scientific community and possibly how it has changed over recent years. It does not have to be recent to be new to me. There is so much I do not know.


the biggest threat to the proper public understanding of science is ... the lie we tell the public (and ourselves) that journal peer review works to separate valid and invalid science
Michael Eisen


Peer review is nothing more than (typically) two independent scientists reading the study, giving their feedback on things that could be improved and advising the editor on whether the study is sufficiently interesting for the journal in question. Review is not there to detect fraud. Reviewers do not redo all experiments, do not repeat all calculations, do not know every related study that could shine a different light on the results. They just judge whether other scientists would be interested in reading the article. The main part of the checking, the processing of the new information and weaving it into the scientific literature, is performed after publication, when scientists (try to) build on the work.

The documentary starts with someone with cancer who is conned into a treatment by an unscrupulous producer pointing to their peer-reviewed studies, partly published in predatory journals. The reporters also criticize that articles from predatory journals were available in a database of a regulatory agency.

However, the treatment in question was not approved and the agency pointed out that they had not used these articles in their assessments. For these assessments scientists come together and discuss the entire literature and how convincing the evidence is in various respects. These scientists know which journals are reliable; they read the studies and try to understand the situation. One of the interviewed scientists looked at one of the studies on this cancer treatment in a predatory journal and found several reasons why the journal should not have accepted it in its present form.

Politicians, too, would often like every scientific study to be so perfect that you do not need any expertise to interpret it and can directly use it for regulation. That is not how science works and also not what science was designed for. Science is not an enormous heap of solid facts. Science is a process in which scientists gradually understand reality a little better.

Trying to reach the "ideal" of flawless and final studies would make doing science much harder. Every scientist would have to be as smart and knowledgeable as the entire community working on a topic for years. Writing and reviewing a scientific article would become so hard that scientific progress would come to a screeching halt. New ideas in particular would no longer stand a chance.

Conned scientists

Like most scientists I get multiple spam mails a day for fake scientific journals and conferences. Most of them have the quality of Nigerian prince spam. People say this kind of spam still exists because the spammers only want really stupid people to respond, as they are the easiest to con.

Thus I had expected that people who take up such offers know what they are doing. Part of becoming a scientist is learning the publishing landscape of your field. But Open Access publishing reporter Richard Poynder mentioned several cases of scientists who were honestly deceived and tried to reverse their error when they noticed what had happened.
The first researcher who contacted me realised something had gone wrong when the manuscript that he and his co-authors had submitted was returned to them with no peer review reports attached and no suggested changes. There was, however, a note to say that it had been accepted, and could they please pay the attached invoice. They later learned that the paper had already been published.

Quickly realising what had happened, and desperate to recover the situation, the authors agreed to pay the publisher the journal’s full [Author Processing Charges] (over $2,000) – not for publishing their paper, but for taking it down.
Apparently there are also predatory journals with names very similar to those of legitimate ones, and a one percent error rate then easily happens. I guess assessing the quality of journals can be harder in large fields and in interdisciplinary fields. If the first author selects a predatory journal, the co-authors may not have the overview of the journals in the other field to notice the problem.

We need to find a way to help scientists who were honestly fooled and make it possible for the authors to retract their articles themselves. Otherwise they can be held hostage by the predatory publishers, and the ransom also funds the organised deception.

If the title of the real article is the same as that of the predatory article, it would be hard to put both on your CV or article list. Real publishers could be a bit more lenient there. A retraction notice for the predatory version in the acknowledgements of the real version should be "shameful" enough that people do not game the system by first publishing in a predatory journal and then looking for a real publisher.



Predators

If there is evidence that scientists purposefully publish in predatory journals or visit fake conferences, that should naturally have consequences. That is wasting public money. One institute had 29 such publications over a time span of ten years. There is likely a problem there.

It should also have consequences when scientists are on the editorial boards of such predatory journals. It may look nice on their CV to be an editor, but editors should notice that they are not involved in the peer review or that it is done badly. It is hard to avoid the conclusion that they are aware they are helping these shady companies. Sometimes these companies put scientists on their editorial boards without asking them. In that case you can expect a scientist to at least state on their homepage that they did not consent.

It is good to see that prosecutors are trying to take down some of these fake publishers. I wish them luck, although I expect this to be hard because it will be difficult to define how good peer review should work. Someone managed to get a paper published with the title "Get me off Your Fucking Mailing List". That would be a clear example of a failure and probably a case of one strike and you are out, at least scientifically; no idea about legally. With more subtle cases you probably need to demonstrate that this happens more often. Climate "sceptics" occasionally manage to publish enormously bad articles in real scientific journals. That does not immediately make those journals predatory.

Changing publishing

In the past scientific articles were mostly published in paper journals to which academic libraries had subscriptions. This made it hard for the public and many scientists to read scientific articles, especially scientists from the global South, but even I cannot read articles in one of the journals I regularly publish in.

Nowadays this system is no longer necessary, as journals can be published online. Furthermore, the legacy system is made for monopolies: a reader needs a specific article and an author needs a journal that most scientists subscribe to. As society replaces morality with money and as the publishing industry concentrates and clearly prioritizes profits over being a good member of the scientific community, subscription prices have gone up and service has gone down. As an example of the former, Elsevier has a profit margin of 30 to 50 percent. As an example of the latter, in one journal I unfortunately publish in, the manuscript submission system is so complicated that you have to reserve almost a full working day to submit a manuscript.

The hope of the last decade was that a new publishing model would break open the monopoly: open access publishing. In this model articles are free to read and in most cases the authors fund the journals. This reduces the monopoly power of the journals: readers can read the articles they need and authors can be sure their colleagues can read the article. However, scientists want to publish in journals with a good reputation, which takes years if not decades to build up and still produces a quite strong monopoly situation.

This has resulted in publishing fees of several thousand euros for the most prestigious open access journals. In this way these journals are open to read, but no longer open to publish in for many researchers. These journals drain a lot of resources that could have been used for research; likely more than the predatory publishers ever will. My guess would be that the current publishing system is 50 to 90 percent too expensive; the predatory journals have less than 1 percent of the market.

The legacy publishers defend their profits and bad service with horror stories about predatory open access journals. They prefer to ignore all the high quality open access journals. This investigative story unfortunately feeds this narrative.

Bad solutions

The Austrian national science foundation (FWF) has found a way to make the situation worse. They want to make sure that the scientists they fund will only publish in a list of known good-quality open access journals, for example those in the Directory of Open Access Journals. That sounds good, but if all science foundations adopted this policy it would become nearly impossible to start new scientific journals and the monopolies would get stronger again.

[UPDATE The German Alliance of Scientific Organizations fortunately states that journal selection is part of the freedom of science. They furthermore state that the quality of a study does not depend on where it is published and want to help scientists with training and contact persons. They see a key role for the Directory of Open Access Journals (DOAJ).]

I just sent a nice manuscript to a new journal, which has no real reputation yet. Its topic fits my work very well, so I am happy my colleagues started this journal. I did my due diligence: I know several people on the editorial board to be excellent researchers and I even looked through a few published articles. The same publisher has many good journals and the journal is by now also listed in the Directory of Open Access Journals (DOAJ). The DOAJ was actually very quick and already listed this journal after it had published only 11 articles. But getting those first 11 would be hard if the FWF policy wins out.

The opposite model is to create a blacklist. This has fewer problems, but it is quite hard to determine which journals are predatory. There used to be a list of predatory journals by Jeffrey Beall, but he had to stop because of legal threats to his university by the predatory publishers. There were complaints that this list discriminated against journals from developing countries. True or not, this illustrates how hard it is to maintain such a list. There is now, oh irony, a pay-walled version of such a list of predatory journals. The subscriptions probably have to pay for the legal risks.

Changing publishing

A good solution would be to review articles after publication. This would allow researchers to update their assessments when evidence from newer studies comes in and we understand the older studies better. PubPeer is a system for such post-publication peer review, but it mostly has reviews of flawed papers and thus does not give a good overview of the scientific literature.

F1000Prime is a post-publication review service. I know of two more complete post-publication review systems: The Self-Journals of Science and recently Peeriodicals. Here every scientist can start a journal, collect the articles that are worthwhile and write something about them. The more scientists endorse an article, the more influential it is. What I miss in these systems are reviews of articles that are not that important but are valid; they may still be informative for some readers. Furthermore, I would expect that the reviewing would need to be organized more formally to be seen as a worthy successor of the current quality control system.

That is what I am trying to build up at the moment, and I have started a first such "grassroots journal" for my own field to show how the system would work. I expect that the system will be superior because these "grassroots journals" do not publish the articles themselves, only review them, and thus can assess all articles in one field in one place, while traditionally articles are spread over many journals. The quality of the reviews will be better because a post-publication review model is used. The reviews are more helpful to the readers because they are published themselves and quantify in more detail what is good about an article. As such, a grassroots journal performs the role of a supervisor in helping readers find their way in the scientific literature.

You get a similar effect from the always up-to-date review paper on sea surface temperature proposed and executed by my colleague John Kennedy. Hosting it on GitHub makes it easy for others to contribute, while providing versioning and attribution. There is naturally less detail per reviewed article.



Changing the system

But even a better reviewing system cannot undo the damage of the fake competitive system currently used to fund scientific research.

Volker Epping, president of the University of Hannover, stated: "The pressure to publish is enormous. Problems are inherent to the system." I would even argue: Given the way the system is designed, it is a testament of the dedication of the scientists that it still works so well.

It is called "competitive", but researchers are competing to get their colleagues to approve the funding of their research. There is no real competition because there is no real market. If you did a good job, there are no customers to reward you for it. In the best case the rewards come as new funding decided by people who have no skin in the game, people who have no incentive to make good funding decisions. Given that situation, it is amazing that scientists still spend time writing good peer reviews of research proposals and show dedication in comparing them with each other to decide what to fund.

My proposal would be to return to the good old days. Give the funding to the universities, which give it to the professors, who allocate it to what they, as the most informed experts, think is interesting research, which furthers their reputation. Professors have skin in the game, their reputation is on the line, and they will invest the limited funds where they expect to get the most benefit. In the current system there is no incentive to set priorities; submitting more research proposals has no downsides beyond the time it takes to write them. One of the downsides of this model for science is that the best researchers are not doing research, but are writing research proposals.

A compromise could be to limit the number of projects a science foundation funds per laboratory. The Swiss National Science Foundation uses this model.

The old and hopefully future system also allows for awarding permanent positions to good researchers. Now most researchers are on short-term contracts because the project funding does not provide stable funding. With these better labour conditions one could attract much better researchers for the same salary.

Because project-based science requires so many peer reviews (of research proposals and of a bloated number of articles), a lot of time is wasted. (This waste is again much bigger than that of the predatory publishers.) This invites reviewers to take short-cuts and, instead of assessing how good a scientist is, assess how many articles they write and how prestigious the journals are in which the articles appear (bibliometric measures). Officially this is illegal in Germany; the ethics rules of the German Science Foundation forbid judging researchers and small groups on their bibliometric measures, but it still happens.

My expectation is that without the publish-or-perish system scientific progress would go much faster and we certainly would not have the German public being shocked to learn about predatory publishers.

I hope the affair will inspire journalists to inform the public better on how science works and what peer review is and is not.

Related reading

Investigation of predatory publishing

English summary by the International Consortium of Investigative Journalists (ICIJ): New international investigation tackles ‘fake science’ and its poisonous effects.

A critical comment in English on the affair: Beyond #FakeScience: How to Overcome Shallow Certainty in Scholarly Communication. With many (mostly German) links to news sources.

The newspaper Indian Express (the Indian partner of the ICIJ): Inside India’s fake research paper shops: pay, publish, profit. Despite UGC blacklist, hundreds of ‘predatory journals’ thrive, cast shadow on quality of faculty and research nationwide.

Comment on the investigation: Predatory Open Access Journals: Is Open Peer Review Any Help? I think it would help, but there are also commercial firms where you can buy peer reviews (alongside copy editing, statistical analysis and the writing of complete articles and theses).

The Bern university library feels criteria for black lists are not transparent and making them puts much power into the hands of a commercial company: On the topic of #predatoryjournals - are there black lists and how reliable are they? In German: Zum Thema #predatoryjournals – gibt es schwarze Listen und wie verlässlich sind sie?

Investigation of scientists of the City University of New York publishing in predatory journals.

Overview of the investigative project by the ICIJ, in German.

Q&A of ICIJ in German: #FakeScience - Fragen und Antworten.

Why so many researchers use dubious ways to publish (in German). Warum so viele Forscher auf unseriösem Weg publizieren. Volker Epping, president of the University of Hannover: "The pressure to publish is enormous. Problems are inherent to the system." I would even argue: Given the way the system is designed, it is a testament of the dedication of the scientists that it still works so well.

Video of a comment on the affair by Svea Eckert, in German. Like the other journalists, she is in my view wrong about the implications.

Peer review

The first grassroots scientific journal, which I hope will inspire the post-publication review system of the future.

An always up-to-date review paper by John Kennedy, Elizabeth Kent on GitHub: A review of uncertainty in in situ measurements and data sets of sea-surface temperature. You can use bug reports and pull requests to add to the text.

Separation of feedback, publishing and assessment of scientific studies.

Separation of review powers into feedback and importance assessment could radically improve peer review. Grassroots scientific journals.

Publish or perish is illegal in Germany, for good reason. German science has another tradition, trusting scientists more and focusing on quality. This is expressed in the safeguards for good scientific practice of the German Science Foundation (DFG). It explicitly forbids the use of quantitative assessments of articles.

The value of peer review for science and the press. Is it okay to seek publicity for a work that is not peer reviewed? Should a journalist only write about peer-reviewed studies? Is peer review gate keeping? Is peer review necessary?

Peer review helps fringe ideas gain credibility.

Three cheers for gatekeeping.

Did Isaac Newton Need Peer Review? Scholarly Journals Swear By This Practice of Expert Evaluation. But It’s a New Phenomenon That Isn’t the Only Way To Establish the Facts.

Think. Check. Submit. How to recognise predatory publishers before you submit your work.

Tuesday, 12 June 2018

Separation of review powers into feedback and importance assessment could radically improve peer review

After some blog posts about grassroots journals, it looks as if no one else will pick up the idea, so I have started creating a first grassroots journal myself.

(It is interesting how often fear of being scooped is mentioned as a reason against Open Science. Typically good ideas are not recognised before they are presented in detail, and even then it takes time. At least that is my impression with the small paradigm changes I was responsible for: surrogate clouds and adaptive parameterisations.)


"It was the pits," said [economist Brian Arthur]. "Nobody there believes in increasing returns."

Susan Arthur had seen her husband returning from the academic wars before. "Well," she said, trying to find something comforting to say, "I guess it wouldn't be a revolution, would it, if everybody believed in it at the start?"


The first grassroots journal is naturally on homogenisation of climate observations. That was a good way for me to check whether the idea would work in practice. I think it does. And a concrete example is a good way for everyone to see how such a journal would work.

Using a WordPress blog works, but I am learning how to code a WordPress site to make it more user friendly and to make it easy for everyone to start such a grassroots journal, just as easy as starting a blog. (Which is really easy. Hint.)

With that it also became time to spread the idea and I have written a short guest post for the blog of the OpenUp project, an EU project on Open Publishing. After all these years of blogging that was my first guest post.

I was supposed to write about how wonderful openness is. So I wrote about how wonderful the right mix of privacy and openness is. Scientists are natural contrarians.

The main message, however, was already presented in my last blog post: if we separate the two roles of peer review, (1) feedback for the authors and (2) advising the journal whether the article is important enough for them, we will get a much healthier quality assurance system.

In the feedback round I see no real reason to publish the content, all the (little) mistakes that were corrected. However, as it is just feedback, a friendly helping role, it would be easy to publish the name of the reviewer.

In the assessment round the review itself is very interesting for others as well and is best published. Because this is judging a colleague, anonymity makes it easier to be honest. I would leave the choice whether to be named to the reviewer.

Enjoy: Separation of review powers into feedback and importance assessment could radically improve peer review.

Sunday, 25 March 2018

Separation of feedback, publishing and assessment of scientific studies



I once asked a friend and colleague about a wrong sentence in one of his scientific articles. He is a smart cookie and should have known better than that. His answer was that he knew it was wrong, but the peer reviewer requested that claim. The error was small and completely inconsequential for the results; no real harm was done. I wondered what I would have done.

Peer review has two roles: it provides detailed feedback on your work and it advises the editor on whether the article is good enough for the journal. This feedback normally makes the article better, but it is somewhat uncomfortable to discuss with reviewers who have a lot of power because of their second role.


Your Manuscript On Peer Review by redpen/blackpen.
My experience is that normally you can argue your case with a reviewer. Still, reaching a common understanding can take an additional round of review, which means that the paper is published a few months later. In the worst case, not agreeing with a reviewer can mean that the paper is rejected and you have to submit to another journal.

It is quite common for reviewers to abuse their power by requesting that their work be cited (more). Mostly this is somewhat subtle and the citation more or less relevant. However, an anonymous reviewer once requested that I cite four articles by one author, only one of which was somewhat relevant. That does not hurt the article, but it is disgusting power abuse and rewards bad behavior. My impression is that these requests are not all head fakes to disguise the reviewer's identity; when I write a critical review I make sure not to ask for citations to my own work, but recommend some articles by colleagues instead. Multiple colleagues, so as not to get any one of them into trouble.

Grassroots journals

I have started a grassroots journal on homogenization of climate data and only recently started to realize that this also produces a valuable separation of feedback, publishing and assessment of scientific studies. That by itself can lead to a much healthier and more productive quality control system.

A grassroots journal assesses published articles and manuscripts in a field of study. One could also see it as a continually up-to-date review article. At least two reviewers write a review of the strengths and weaknesses of an article, everyone can comment on parts of the article and the editors write a synthesis of the reviews. A grassroots journal does not publish the articles themselves, but collects articles published anywhere.

Every article also gets a quantitative assessment. This is similar to the current estimate of how important an article is based on the journal it was able to get into. However, it does not reward people for submitting their articles to too big a journal, hoping to get lucky and creating unnecessary work through double reviews. For example, the publisher Frontiers reviews 2.4 million manuscripts and has to bounce about 1 million valid papers.
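To make the structure concrete, here is a minimal sketch of how an entry in such a grassroots journal could be represented as a data structure; the field names and the scoring scale are my own illustration, not an existing implementation. The entry only points to an article published elsewhere and collects the reviews, the editorial synthesis and a rolling quantitative assessment.

    # Sketch of a grassroots journal entry; all names and fields are illustrative.
    from dataclasses import dataclass, field
    from statistics import mean
    from typing import List

    @dataclass
    class Review:
        reviewer: str   # may be "anonymous" for the assessment role
        text: str       # strengths and weaknesses of the article
        score: float    # quantitative assessment, e.g. on a 0 to 10 scale

    @dataclass
    class JournalEntry:
        doi: str        # the article itself is published elsewhere
        title: str
        reviews: List[Review] = field(default_factory=list)
        synthesis: str = ""   # written by the editors once reviews are in

        def assessment(self) -> float:
            # Rolling assessment that can be updated when new reviews come in
            return mean(r.score for r in self.reviews) if self.reviews else float("nan")

    entry = JournalEntry(doi="10.1234/example", title="An example study")
    entry.reviews.append(Review("anonymous", "Sound method, limited data.", 7.0))
    entry.reviews.append(Review("anonymous", "Useful benchmark for the field.", 8.0))
    entry.synthesis = "Two reviewers find the study solid; data coverage is the main limitation."
    print(entry.assessment())  # 7.5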

In the case of traditional journals your manuscript only has to pass the threshold at the time of publishing. With the up-to-date rolling review of grassroots journals, articles of lasting value are rewarded.

I would not have minded making a system without a quantitative assessment, but there are real differences between articles, readers need to prioritize their reading, and funding agencies would likely not accept grassroots journals as a replacement for the current system without it.

That is the final aim: getting rid of the current publishing system that holds science back. That grassroots journals immediately provide value is hopefully what makes the transition easier.

The more the assessments made by grassroots journals are accepted, the less it matters where you publish. Currently there is typically one journal, sometimes two, with the right topic and prestige to publish in. The situation for the reader is even worse: you often need a specific paper and not just some paper on the topic. For this one specific paper there is one (legal) supplier. This near-monopolistic market leads to Elsevier making profits of 30 to 50% and it suppresses innovation.



Another symbol of the monopolistic market is the manuscript submission systems, which combine the worst of pre-internet paper submissions (every figure a separate file, captions in a separate file) with the internet-age adage "save labor costs by letting your customers do the work" (adding the captions a second time when uploading a figure, with a neat pop-up for special characters).

Separation of powers

Publishing is easy nowadays. ArXiv does this for about one dollar per manuscript. Once scientists can freely choose where to publish, the publishers will have to provide good services at reasonable costs. The most important service would be to provide a broad readership by publishing Open Access.

Maybe it will even go one step further and scientists will simply publish their manuscript on a pre-print server and tell the relevant grassroots journals where to find it. Such scientists would likely still like to get some feedback from their colleagues on the manuscript. Several initiatives are currently springing up to review manuscripts before they are submitted to journals, for example Peer Community In (PCI). Currently PCI makes several rounds of review until the reviewers "endorse" a manuscript, so that in principle a journal could publish such a manuscript without further peer review.

With a separate independent assessment of the published article there would no longer be any need for the "feedback peer reviewers" to give their endorsement. (It doesn't hurt.) The authors would have much more freedom to decide whether the changes peer reviewers suggest are actually improvements. The authors, and not the reviewers, would decide when the manuscript is finished and can be published. If they make the wrong decisions, that would naturally be reflected in the assessment. If they do not add four citations for a peer reviewer, that would not be any problem.

There is a similar initiative in the life sciences called APPRAISE, but it will only review manuscripts published on pre-print servers. Once the journals are gone, this amounts to the same thing, but I feel that grassroots journals add more immediate value by reviewing all articles on one topic, just like a review article should review the entire literature and not a random part of it.

A vigorously debated topic is whether peer reviews should be open or closed. Recently ASAPbio had this discussion and comprehensively summarized the advantages and disadvantages (well worth reading). Both systems have their strengths and I do not see one of them winning.

This discussion may change when we separate feedback and assessment. Giving feedback is mostly doing the authors a favor and could more easily be done in the open. Rather than cumbersome month-long rounds of review, it would be possible to simply write an email or pick up the phone to clarify contentious points. On the other hand, anonymity makes it easier to give an honest assessment and I expect this part to be mostly performed anonymously. The editors of a grassroots journal determine what is published and can thus ensure that no one abuses their anonymity.

The future

Concluding, in a decade a researcher writes an article and asks their colleagues for feedback. Once the manuscript no longer changes much, it is sent to an independent proofreading service. Another firm or person takes care of the layout and ensures that the article can still be read in a century by making versions using open standards.

The authors decide when their manuscript is ready to be published and upload it to the article repository. They send a notice to the journals that cover the topic. Journal A makes an assessment. Journals B and C copy this assessment, while journal D also uses it, but requests an additional review for a part that is important to them and writes another synthesis.

Readers add comments to the article using web annotations and the authors reply to them with clarifications. The authors can also add comments to share new insights on what was good and bad about the article.

Two years later a new study shows that one of the choices made in the article was not optimal. This part was important for journals C and D and they update their assessment. The authors decide that it is relatively easy to redo their article with a better choice and that the article is sufficiently important to put in some work; they upload the updated study to the repository and the journals update their assessments.



Related reading

APPRAISE (A Post-Publication Review and Assessment In Science Experiment). A similar idea to grassroots journals, but they only want to review pre-prints and will thus only review part of the literature. See also NPR on this initiative.

A related proposal by Gavin Schmidt: Someone C.A.R.E.S. Commentary And Replication in Earth Science (C.A.R.E.S.). Do we need a new venue for post-publication comments and replications?

Psychologist Henry L. Roediger, III on Anonymity in Scientific Publishing. A well-written article that lays out all the arguments, which differ depending on whether we talk about the authors, reviewers or editors. The author likes signed reviews. I feel that editors should prevent reviewers from taking advantage of their anonymity.


* Photo of scientific journals by Tobias von der Haar used under an Attribution 2.0 Generic (https://creativecommons.org/licenses/by/2.0/) license.
* Graph of publishing costs by Dave Gray used under an Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0) license.



Sunday, 1 October 2017

The Earth sciences no longer need the publishers for publishing



Manuscript servers are buzzing around our ears, as the Dutch say.

In physics it is common to put manuscripts on the ArXiv server (pronounced: archive server). A large part of these manuscripts are later sent to a scientific journal for peer review, following the traditional scientific quality control system and assessment of the importance of studies.

This speeds up the dissemination of scientific studies and can promote informal peer review before the formal peer review. The copyright of a manuscript has not yet been transferred to a publisher, so this also makes the research available to all without pay-walls. Expecting the manuscripts to be published in a paper journal later, ArXiv is called a pre-print server. In these modern times I prefer the term manuscript server.

The manuscript gets a time stamp, so a pre-print server can be used to claim precedence, although the date of journal publication is traditionally used for this and there are no rules on which date is most important. Pre-print servers can also give the manuscript a Digital Object Identifier (DOI) that can be used to cite it. A problem could be that some journals see a pre-print as prior publication, but I am not aware of any such journals in the atmospheric sciences; if you know of one, please leave a comment below.

ArXiv has a section for atmospheric physics, where I also uploaded some manuscripts as a young clouds researcher. However, because most meteorologists did not participate, it could not perform the same function as it does in physics; I never got any feedback based on these manuscripts. When ArXiv made uploading manuscripts harder to get rid of submissions by retired engineers, I stopped and just put the manuscripts on my homepage.

Three manuscript archives

Maybe the culture will now change and more scientists will participate, with three new initiatives for manuscript servers in the Earth sciences. All three follow a different concept.

This August a digital archive started for paleontology (paleorXiv, twitter). If I see it correctly they already have 33 manuscripts. (Only a part of them are climate related.) This archive builds on the open source preprint server of the Open Science Framework (OSF) of the non-profit Center for Open Science. Other groups are also welcome to make a pre-print archive using their servers and software.

[UPDATE. Just announced that in November a new ArXiv will start: MarXiv, not for Marxists, but for the marine-conservation and marine-climate sciences.]

Two initiatives have just started for all of the Earth sciences: a grassroots initiative (EarthArXiv) and one by AGU/Wiley (ESSOAr).

EarthArXiv will also be based on the open source solution of the Open Science Framework. It is not up yet, but I presume it will look a lot like paleorXiv. It seems to be catching on, with about 600 Twitter followers and about 100 volunteers in just a few days. They are working on a logo (requirements, competition). Most logos show the globe; I would include the study of other planets in the Earth sciences.

The American Geophysical Union (AGU) has announced plans for an Earth and Space Science Open Archive (ESSOAr), which should be up and running early next year. They plan to be able to show a demo at the AGU's fall meeting in December.

The topic would thus be somewhat different due to the inclusion of space science, and they will also permanently archive posters presented at conferences. That sounds really useful; now every conference designs its own solution and the posters and presentations are often lost after some time when the homepage goes down. EarthArXiv unfortunately seems to be against hosting posters. ESSOAr would also make it easy to transfer the manuscripts to (AGU?) journals.

A range of other academic societies are on the "advisory board" of ESSOAr, including EGU. ESSOAr will be based on proprietary software of the scientific publisher Wiley. Proprietary software is a problem for something that should function for as close to an eternity as possible. Not only Wiley, but also the AGU itself is a major scientific publisher. They are not Elsevier, but this quickly leads to conflicts of interest. It would be better to have an independent initiative.

There need not be any conflict between the two "duelling" (according to Nature) servers. The manuscripts are open access and I presume they will have an API that makes it possible to mirror the manuscripts of one server on the other. The editors could then remove the ones they do not see as fitting their standards (or not waste their time on that). Beyond esoteric (WUWT & Co.) nonsense, I would prefer not to have many standards; that is the idea of a manuscript server.
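As a sketch of how such mirroring could work: many repositories already expose their metadata through the standard OAI-PMH harvesting protocol, so a mirror could periodically pull in the records of the other server. The endpoint URL below is hypothetical, and whether EarthArXiv or ESSOAr will actually offer OAI-PMH is my assumption, not something they have announced.

    # Sketch: harvest manuscript metadata from a repository's OAI-PMH endpoint.
    # The endpoint URL is hypothetical; a real server would document its own.
    import urllib.request
    import xml.etree.ElementTree as ET

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"
    ENDPOINT = "https://example-manuscript-server.org/oai"  # hypothetical

    def harvest(endpoint=ENDPOINT):
        """Return (identifier, title) pairs for one page of harvested records."""
        url = endpoint + "?verb=ListRecords&metadataPrefix=oai_dc"
        with urllib.request.urlopen(url) as response:
            tree = ET.parse(response)
        return [(record.findtext(".//" + DC + "identifier"),
                 record.findtext(".//" + DC + "title"))
                for record in tree.iter(OAI + "record")]

    for identifier, title in harvest():
        print(identifier, title)

The editors of the receiving server could then filter the harvested list before showing it to their readers.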



Paul Voosen of Nature magazine wonders whether "researchers working in more sensitive areas of the geosciences, such as climate science, will embrace posting their work prior to peer review." I see no problem there. There is nothing climate scientists can do to pacify the American culture war; we should thus do our job as well as possible, and my impression is that climatology is easily in the better half of the Open Science movement.

I love to complain about it, but my impression is that sharing data is more common in the atmospheric sciences than on average. This could well be because it matters more: data is needed from all over the world, and the World Meteorological Organization was one of the first global organizations set up to coordinate this. The European Geosciences Union (EGU) has had open review journals for more than 15 years. The initial publication in a "discussion" journal is similar to putting your manuscript on a pre-print server. Many of the contributions to the upcoming FORCE2017 conference on Research Communication and e-Scholarship that mention a specific field are about climate science.

The road to Open Access

A manuscript server is one step on the way to an Open Access publishing future. This would make articles more accessible to researchers and to the public, who paid for them.

Open Access would break the monopoly given to scientific publishers by copyright laws. An author looking for a journal to publish their work can compare price and service. But a reader typically needs to read one specific article and then has to deal with a publisher with monopoly power. This has led to monopolistic profits and commercial publishers that have lost touch with their customers, the scientific community. That Elsevier has a profit margin of "only" 36 percent thus seems to be mismanagement; it should be close to 100 percent.



ArXiv shows that publishing a manuscript costs less than a dollar per article. Software to support the peer review can be rented for 10 dollars per article (see also: Episciences.org and Open Journal Systems). Writing the article and reviewing it is done for free by the scientific community. Most editors are also scientists working for free; sometimes the editor in chief gets some secretarial support or some money for a student assistant. Typesetting by journals is highly annoying, as they often add errors doing so. Typesetting is easily done by a scientist, especially using LaTeX, but also with a Word template. That scientists pay thousands of dollars per article is not related to the incurred costs, but due to monopoly brand power.

Publishers that serve the community, articles that everyone can read and less funding wasted on publishing are a desirable goal, but it is hard to get there because the barriers to entry are large. Scientists want to publish in journals with a good reputation and, if the journals are not Open Access, with a broad circulation. This makes starting a new journal hard: even if a new journal does a much better job at a much lower price, it starts with no reputation, and without a reputation it will not get the manuscripts to prove its worth.

To make it easier to get from the current situation to an Open Access future, I propose the concept of Grassroots Scientific Publishing. Starting a new journal should be as easy as starting a blog: make an account, give the journal a name and select a layout. Finished, start reviewing.

To overcome the problem that initially no one will submit manuscripts, a grassroots journal can start with reviewing already published articles. This is not wasted time, because we can do a much better job communicating the strengths and weaknesses as well as the importance of an article than we do now, where the only information we have on the importance is the journal in which it is published. We can categorise and rank articles. We can have all articles of one field in the same journal, no longer scattered around many different journals.

Even without replacing traditional journals, such a grassroots journal would provide a valuable service to its scientific community.

To explain the idea and get feedback on how to make it better I have started a new grassroots publishing blog. Once this kind of journal is established and has shown that it provides superior quality assurance and information, there is no longer any need for pay-wall journals and we can just review the articles on manuscript servers.

Related reading

Paul Voosen in Nature: Dueling preprint servers coming for the geosciences

AGU: ESSOAr Frequently Asked Questions

The Guardian, long read: Is the staggeringly profitable business of scientific publishing bad for science?

If you are on twitter, do show support and join EarthArXiv

Three cheers for gatekeeping

Peer review helps fringe ideas gain credibility

Grassroots scientific publishing


* Photo Clare Night 2 by Paolo Antonio Gonella is used under a Creative Commons Attribution 2.0 Generic (CC BY 2.0) license.

Tuesday, 13 September 2016

Publish or perish is illegal in Germany, for good reason


Had Albert Einstein died just after his wonder year 1905, he would only have had a few publications, on special relativity, the equivalence of mass and energy, Brownian motion and the photoelectric effect, to his name, and would nowadays be seen as a mediocre researcher. He got the Nobel prize in 1921 "for his services to Theoretical Physics, and especially for his discovery of the law of the photoelectric effect", not for relativity, not for Brownian motion. This illustrates how hard it is to judge scientific work, even more than a decade afterwards, much less in advance.
Managing scientists is hard. It is nearly impossible to determine who will do a good job, who is doing a good job and even whether someone did a good job in the past. In the last decades science managers in most of the world have largely given up trying to assess how good a scientist is and instead assess how many articles they write and how prestigious the journals are in which the articles appear.

Unsurprisingly, this has succeeded in increasing the number of articles scientists write. Especially in America scientists are acutely aware that they have to publish or perish.

Did this hurt scientific progress? It is unfortunately impossible to say how fast science is progressing and how fast it could progress; the work is about the stuff we do not understand yet, after all. The big steps (evolution, electromagnetism, quantum mechanics) have become rare in the last decades. Maybe the low-hanging fruit is simply gone. Maybe it is also modern publish-or-perish management.

There are good reasons to expect publish-or-perish management to be detrimental.
1. The most basic reason: the time spent writing and reading the ever increasing number of articles is not spent on doing research. (I hope no one is so naive as to think that the average scientist actually became several times more productive.)
2. Topics that quickly and predictably lead to publications are not the same topics that will bring science forward. I personally try to work on a mix, because only working on riskier science that you expect to be important is unfortunately too dangerous.
3. Stick-and-carrot management works for manual labor, but for creative open-ended work it is often found to be detrimental. For creative work, mastery and purpose are the incentives.

German science has another tradition, trusting scientists more and focusing on quality. This is expressed in the safeguards for good scientific practice of the German Science Foundation (DFG). It explicitly forbids the use of quantitative assessments of articles.
Universities and research institutes shall always give originality and quality precedence before quantity in their criteria for performance evaluation. This applies to academic degrees, to career advancement, appointments and the allocation of resources. …

criteria that primarily measure quantity create incentives for mass production and are therefore likely to be inimical [harmful] to high quality science and scholarship. …

Quantitative criteria today are common in judging academic achievement at all levels. … This practice needs revision with the aim of returning to qualitative criteria. … For applications for academic appointments, a maximum number of publications should regularly be requested for the evaluation of scientific merit.
For a project proposal to the German Science Foundation this "maximum number" means that you are not allowed to list all your publications, but only the 6 best ones (for a typical project; even fewer for smaller projects).

[UPDATE. This limit has unfortunately now been increased to 10. They say the biologists are to blame.]

While reading the next paragraphs, please hear me screaming YES, YES, YES in your ear at an unbearable volume.
An adequate evaluation of the achievements of an individual or a small group, however, always requires qualitative criteria in the narrow sense: their publications must be read and critically compared to the relevant state of the art and to the contributions of other individuals and working groups.

This confrontation with the content of the science, which demands time and care, is the essential core of peer review for which there is no alternative. The superficial use of quantitative indicators will only serve to devalue or to obfuscate the peer review process.
I fully realize that actually reading someone’s publications is much more work than counting them and that top scientists spend a large part of their time reviewing. In my view that is a reason to reduce the number of reviews and trust scientists more. Hire people who have a burning desire to understand the world, so that you can trust them.

Sometimes this desire goes away when people get older. For the outside world this is most visible in some older participants of the climate “debate” who hardly produce new work trying to understand climate change, but use their technical skills and time to deceive the public. The most extreme example I know is a professor who was painting all day long, while his students gave his lectures. We should be able to get rid of such people, but there is no need for frequent assessments of people doing their job well.

You also see this German tradition in the research institutes of the Max Planck Society. The directors of these institutes are among the best scientists in the world and they can do whatever they think will bring their science forward. Max Planck Director Bjorn Stevens describes this system in the fourth and best episode of the podcast Forecast. The part on his freedom and the importance of trust starts at minute 27, but it is best to listen to the whole inspiring podcast, about which I could easily write several blog posts.

Stevens started his scientific career in the USA, but talks about the German science tradition when he says:
I can think of no bigger waste of time than reviewing Chris Bretherton’s proposals. I mean, why would you want to do that? The guy has shown himself to have good idea, after good idea, after good idea. At some point you say: go doc, go! Here is your budget and let him go. This whole industry that develops to keep someone like Chris Bretherton on a leash makes no sense to me.
Compare scientists who set priorities within their own budgets with scientists who submit research proposals judged by others. If you have your own budget you will only support what you think is really important; if you do A, you cannot do B. Many project proposals are written to fit into a research program or because a colleague wants to collaborate, and apart from the time wasted on writing them, there are no downsides to asking for more funding. If you have your own budget, the person with the most expertise and the most skin in the game decides. Yet it is the project funding, where the deciders have no skin in the game, that is called competitive. It is Soviet-style planning; that it works at all shows the dedication and altruism of the scientists involved. Those are scientists you could simply trust.

I hope this post will inspire the scientific community to move towards more trust in scientists, increase the fraction of unleashed researchers and reduce the misdirected quantitative micro-management. Please find below the full text of the safeguards of the German Science Foundation on performance evaluation; above I had to skip many worthwhile parts.



Recommendation 6: Performance Evaluation

Universities and research institutes shall always give originality and quality precedence before quantity in their criteria for performance evaluation. This applies to academic degrees, to career advancement, appointments and the allocation of resources.

Commentary
For the individual scientist and scholar, the conditions of his or her work and its evaluation may facilitate or hinder observing good scientific practice. Conditions that favour dishonest conduct should be changed. For example, criteria that primarily measure quantity create incentives for mass production and are therefore likely to be inimical to high quality science and scholarship.

Quantitative criteria today are common in judging academic achievement at all levels. They usually serve as an informal or implicit standard, although cases of formal requirements of this type have also been reported. They apply in many different contexts: length of Bachelor, Master or PhD thesis, number of publications for the Habilitation (formal qualification for university professorships in German speaking countries), as criteria for career advancements, appointments, peer review of grant proposals, etc. This practice needs revision with the aim of returning to qualitative criteria. The revision should begin at the first degree level and include all stages of academic qualification. For applications for academic appointments, a maximum number of publications should regularly be requested for the evaluation of scientific merit.

Since publications are the most important “product” of research, it may have seemed logical, when comparing achievement, to measure productivity as the number of products, i.e. publications, per length of time. But this has led to abuses like the so-called salami publications, repeated publication of the same findings, and observance of the principle of the LPU (least publishable unit).

Moreover, since productivity measures yield little useful information unless refined by quality measures, the length of publication lists was soon complemented by additional criteria like the reputation of the journals in which publications appeared, quantified as their “impact factor” (see section 2.5).

However, clearly neither counting publications nor computing their cumulative impact factors are by themselves adequate forms of performance evaluation. On the contrary, they are far removed from the features that constitute the quality element of scientific achievement: its originality, its “level of innovation”, its contribution to the advancement of knowledge. Through the growing frequency of their use, they rather run the danger of becoming surrogates for quality judgements instead of helpful indicators.

Quantitative performance indicators have their use in comparing collective activity and output at a high level of aggregation (faculties, institutes, entire countries) in an overview, or for giving a salient impression of developments over time. For such purposes, bibliometry today supplies a variety of instruments. However, they require specific expertise in their application.

An adequate evaluation of the achievements of an individual or a small group, however, always requires qualitative criteria in the narrow sense: their publications must be read and critically compared to the relevant state of the art and to the contributions of other individuals and working groups.

This confrontation with the content of the science, which demands time and care, is the essential core of peer review for which there is no alternative. The superficial use of quantitative indicators will only serve to devalue or to obfuscate the peer review process.

The rules that follow from this for the practice of scientific work and for the supervision of young scientists and scholars are clear. They apply conversely to peer review and performance evaluation:
  • Even in fields where intensive competition requires rapid publication of findings, quality of work and of publications must be the primary consideration. Findings, wherever factually possible, must be controlled and replicated before being submitted for publication.
  • Wherever achievement has to be evaluated — in reviewing grant proposals, in personnel management, in comparing applications for appointments — the evaluators and reviewers must be encouraged to make explicit judgements of quality before all else. They should therefore receive the smallest reasonable number of publications — selected by their authors as the best examples of their work according to the criteria by which they are to be evaluated.

Related information

Nature on new evaluation systems in The Netherlands and Ireland: Fewer numbers, better science

Episode 4 of Forecast with Max Planck Director Bjorn Stevens on clouds, aerosols, science and science management. Highly recommended.

Memorandum of the German Science Foundation: Safeguarding Good Scientific Practice. English part starts at page 61.

One of my first posts explaining why stick-and-carrot management makes productivity worse for cognitive tasks: Good ideas, motivation and economics

* Photo of Albert Einstein at the top is in the public domain.

Sunday, 8 May 2016

Grassroots scientific publishing

These were the weeks of peer review. Sophie Lewis wrote her farewell to peer reviewing. Climate Feedback is making it easy for scientists to review journalistic articles with nifty new annotation technology. And Carbon Brief showed that while there is a grey area, it is pretty easy to distinguish between science and nonsense in the climate "debate", which is one of the functions of peer review. And John Christy and Richard McNider managed to get an article published which, as a reviewer, I would have advised rejecting. A little longer ago we had the open review of the Hansen sea level rise paper, where the publicity circus resulted in a-scientific elements spraying their graffiti on the journal wall.

Sophie Lewis writes about two recent reviews she was asked to make: one where the reviewers were negative, but the article was published anyway by a volunteer editor, and one where the reviewers were quite positive, but the manuscript was rejected by a salaried editor.

I have had similar experiences. As a reviewer you invest your time and heart in a manuscript and root for the ones you like to make it into print. Making the final decision naturally is the task of the editor, but it is very annoying as a reviewer to feel that your review was ignored. There are many interesting things you could have done in that time. At least nowadays you get to see the other reviews and hear the final decision more often, which is motivating.

The European Geosciences Union has a range of journals with open review, where you can see the first round of reviews and anyone can contribute reviews. This kind of open review could benefit from the annotation system used by Climate Feedback to review journalistic articles; it makes reviewing easier and the reader can immediately see the text the review refers to. The open annotation system allows you to add comments to any webpage or PDF article or manuscript. You can see it as an extra layer on top of the web.

The reviewer can select a part of the text and add comments, including figures and links to references. Here is an annotated article in the New York Times that Climate Feedback found to be scientifically very credible, where you can see the annotation system in action. You can click on the text with a yellow background to see the corresponding comment or click on the small symbol at the top right to see all comments. (Examples of articles with low scientific credibility are somehow mostly pay-walled; one would think that the dark money behind these articles would want them to be read widely.)

I got to know annotation via Climate Feedback. We use the annotation system of Hypothes.is; it was actually not developed to annotate journalistic articles, but to review scientific articles.

The annotation system makes writing a review easier for the reviewer and makes it easier to read reviews. The difference between writing some notes on an article for yourself and a peer review becomes gradual this way. It cannot take away having to read the manuscript and trying to understand it. That takes the most time, but it is also the fun part; reducing the time for the tedious part makes reviewing more attractive.
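To give a feel for how such an annotation layer can be used by machines as well as by readers, below is a minimal sketch that lists the public annotations on one article via the Hypothes.is search API. It assumes the public endpoint at api.hypothes.is/api/search and its "rows"/"user"/"text" response fields behave as publicly documented; the article URL is a hypothetical placeholder.

```python
# Minimal sketch: list the public Hypothes.is annotations on one article.
# Assumes the public search endpoint and its response fields ("rows",
# "user", "text") behave as documented; the article URL is hypothetical.
import requests

ARTICLE_URL = "https://www.example.com/some-reviewed-article"  # placeholder

response = requests.get(
    "https://api.hypothes.is/api/search",
    params={"uri": ARTICLE_URL, "limit": 50},
    timeout=10,
)
response.raise_for_status()

for row in response.json().get("rows", []):
    user = row.get("user", "unknown")
    text = (row.get("text") or "").strip()
    print(f"{user}: {text[:120]}")
```

A script like this could, for example, let an editor team collect all reviews written on the manuscripts in their collection.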

Publishing and peer review

Is there a better way to review and publish? The difficult part is no longer the publishing. The central part that remains is the trust of a reader in a source.

It is becoming ironic that the owners of scientific journals are called "scientific publishers", because the main task of a publisher is no longer the publishing. Everyone can do that nowadays with a (free) word processor and a (free) web page. The publishers and their journals are mostly brands; the scientific publisher, the journal, is a trusted name. Trust is slow to build up (and easy to lose), producing huge barriers to entry and leading to near-monopoly profit margins of 30 to 40% for scientific publishing houses. That is tax-payer money that is not spent on science, and it props up organizations that prefer to keep science unused behind pay-walls.

Peer review performs various functions. It helps to give a manuscript the initial credibility that makes people trust it, that makes people willing to invest time in it to study its ideas. If the scientific literature were as abominable as the mitigation-skeptical blog Watts Up With That (WUWT), scientific progress would slow down enormously. At WUWT the unqualified readers are supposed to find out for themselves whether they are being conned or not. Even if they could: having every reader do a thorough review is wasteful; it is much more efficient to ask a few experts to vet manuscripts first.

Without peer review it would be harder for new people to get others to read their work, especially if they made a spectacular claim or used unfamiliar methods. My colleagues will likely be happy to read my homogenization papers without peer review. Gavin Schmidt's colleagues will be happy to read his climate modelling papers and Michael Mann's colleagues his papers on climate reconstructions. But for new people it would be harder to be heard, for me it would be harder to be heard if I published something on another topic, and for outsiders it would be harder to judge who is credible. The latter becomes increasingly important the more interdisciplinary science becomes.

Improving peer review

When I was dreaming of a future review system where scientific articles were all in one global database, I used to think of a system without journals or editors. The readers would simply judge the articles and comments, like on Ars Technica or Slashdot. The very active open science movement in Spain has implemented such a peer review system for institutional repositories, where the manuscripts and reviews are judged and reputation metrics are estimated. Let me try to explain why I changed my mind and how important editors and journals are for science.

One of my main worries for a flat database would be that there would be many manuscripts that never got any review. In the current system the editor makes sure that every reasonable manuscript gets a review. Without an editor explicitly asking a scientist to write a review, I would expect that many articles would never get a review. Personal relations are important.

Science is not a democracy, but a meritocracy. Just voting an article up or down does not do the job; it is important that the assessment is made carefully. You could try to statistically determine which readers are good at predicting the quality of an article, where quality could be measured by later votes or citations. This would be difficult, however, because it is important that the assessment is made by people with the right expertise, often by people from multiple backgrounds; we have seen how much even something as basic as the scientific consensus on climate change depends on expertise. Try determining expertise algorithmically. The editor knows the reviewers.
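Just to make that idea, and its limits, concrete: below is a minimal sketch of vote weighting, where each reader's weight is the correlation between their past votes and the citations those articles later received. All readers, votes and citation counts are invented for illustration; as argued above, such an algorithm says nothing about whether a voter has the right expertise.

```python
# Minimal sketch of weighting reader votes by past predictive skill.
# All names and numbers below are hypothetical illustrations.
import numpy as np

def reviewer_weight(past_votes, later_citations):
    """Correlation between a reader's past votes and later citations,
    clipped at zero so consistently uninformative voters get no weight."""
    if len(past_votes) < 3:
        return 0.0  # too little history to judge
    corr = np.corrcoef(past_votes, later_citations)[0, 1]
    return max(corr, 0.0) if np.isfinite(corr) else 0.0

def weighted_score(votes, weights):
    """Weighted mean vote for a new manuscript; plain mean as fallback."""
    votes = np.asarray(votes, dtype=float)
    weights = np.asarray(weights, dtype=float)
    if weights.sum() > 0:
        return float(np.average(votes, weights=weights))
    return float(votes.mean())

# Hypothetical example: three readers vote on a new manuscript (scale 1-5).
history = {
    "reader_a": ([4, 2, 5, 3], [40, 5, 60, 20]),  # votes track citations well
    "reader_b": ([5, 5, 4, 5], [3, 50, 8, 12]),   # votes barely informative
    "reader_c": ([1, 4], [2, 30]),                # too little history
}
weights = [reviewer_weight(votes, cites) for votes, cites in history.values()]
print(weighted_score([4, 5, 2], weights))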

While it is not a democracy, the scientific enterprise should naturally be open. Everyone is welcome to submit manuscripts. But editors and reviewers need to be trusted and level-headed individuals.

More openness in publishing could in future come from everyone being able to start a "journal" by becoming an editor (or, better, by organizing a group of editors) and trying to convince their colleagues that they do a good job. The fun thing about the annotation system is that you can demonstrate that you do a good job using existing articles and manuscripts.

This could provide real value for the reader. Not only would the reviews be visible, but it would also be possible to explain why an article was accepted: was it speculative, but really interesting if true (something for experts), or was it simply solid (something for outsiders)? Which parts do the experts still debate? The debate would also continue after acceptance.

The code and the data of every "journal" should be open so that everyone can start a new "journal" with reviewed articles. So that when Heartland offers me a nice amount of dark money to start accepting WUWT-quality articles, a group of colleagues can start a new journal and fix my dark-money "mistakes", but otherwise have a complete portfolio from the beginning. If they had to start from scratch, that would be a large barrier to entry, which, like in the traditional system, encourages sloppy work, corruption and power abuse.

Peer review is not just for selecting articles, but also helps to make them better. Theoretically the author can also ask colleagues to do so, but in practice reviewers are better at finding errors. Maybe because the colleagues who will put in the most effort are your friends, who have the same blind spots? These improvements of the manuscript would also be missing in a pure voting system of "finished" articles. Having a manuscript phase is helpful.

Finally, an editor makes anonymous reviews a lot less problematic, because the editor can delete comments where the anonymity seduced people into inappropriate behavior. Anonymity could otherwise be abused to make false attacks with impunity. On the other hand, anonymity can also provide protection when there are large power differences and real problems need to be raised.

The advantage of internet publishing is that there is no need for an editor to reject technically correct manuscripts. If the contribution to science is small or if the result is very speculative and quite likely to be found to be wrong in future, the manuscript can still be accepted but simply be given a corresponding grade.

This also points to a main disadvantage of the current dead-tree-inspired system: you get either a yes or a no. There is a bit more information in the journal the author chooses, but that is about it. A digital system can communicate much more subtly with a prospective reader. A speculative article is interesting for experts, but may be best avoided by outsiders until the issues are better understood. Some articles mainly review the state-of-the-art, others provide original research. Some articles have a specific audience: for example the users of a specific dataset or model. Some articles are expected to be more important for scientific progress than others or discuss issues that are more urgent than others. And so on. This information can be communicated to the reader.
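As a sketch of what such a richer verdict could look like in machine-readable form, here is one possible assessment record. The field names, grades and categories are hypothetical illustrations, not an existing standard.

```python
# A minimal sketch of a richer editorial verdict, instead of a bare
# accept/reject. Field names and categories are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class EditorialAssessment:
    article_doi: str
    verdict: str            # e.g. "accepted", rather than silently filtered out
    soundness: int          # technical quality grade, e.g. 1 (weak) to 5 (solid)
    speculativeness: int    # 1 (settled methods) to 5 (interesting if true)
    article_type: str       # "original research", "review", "dataset description", ...
    audience: list = field(default_factory=list)  # who should read it
    editor_note: str = ""   # why it was accepted, what experts still debate

example = EditorialAssessment(
    article_doi="10.0000/example.doi",  # hypothetical
    verdict="accepted",
    soundness=4,
    speculativeness=5,
    article_type="original research",
    audience=["homogenization experts"],
    editor_note="Speculative but interesting if true; outsiders should wait "
                "for replication.",
)
print(example)
```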

The nice thing about the open annotation system is that we can begin reviewing articles before authors start submitting them. We can simply review existing articles as well as manuscripts, such as the ones uploaded to ArXiv. The editors could reject articles that should not have been published in the traditional journals and accept manuscripts from archives. I would value this assessment by a knowledgeable editor (team) more than acceptance by a traditional journal.

In this way we can produce collections of existing articles. If the new system provides a better reviewing service to science, the authors at some moment can stop submitting their manuscripts to traditional journals and submit them directly to the editors of a collection. Then we have real grassroots scientific journals that serve science.

For colleagues in the communities it would be clear which of these collections have credibility. However, for outsiders we would also need some system that communicates this, which traditionally is the role of publishing houses and the high barriers to entry. Credibility could be assessed where collections overlap, preferably again by humans and not by algorithms. For some articles there may be legitimate reasons for differences (hard to assess, different topic of the collection); for other articles, an editor not having noticed problems may be a sign of bad editorship. This problem is likely not too hard: in a recent analysis of Twitter discussions on climate change there was a very clear distinction between science and nonsense.

There is still a lot to do, but with the ease of modern publishing and the open annotation system a lot of software is already there. Larger improvements would be tools for editors to moderate review comments (or at least to collapse less valuable ones); Hypothes.is is working on this. A grassroots journal would need a grading system, standardized where possible. More practical tools would include some help in tracking manuscripts under review and sending reminders, and the editors of one collection should be able to communicate with each other. The grassroots journal should remain visible even if the editor team stops; that will need collaboration with libraries or scientific societies.

If we get this working
  • we can say goodbye to frustrated reviewers (well, mostly),
  • goodbye to pay-walled journals in which publicly financed research is hidden from the public and many scientists alike, and
  • goodbye to wasting limited research money on monopolistic profits of publishing houses, while
  • we can welcome better review and selection, and
  • we are building a system that inherently allows for post-publication peer review.

What do you think?



Related reading

There is now an "arXiv overlay journal", Discrete Analysis. Articles are published/hosted by ArXiv, otherwise traditional peer review. The announcement mentions three software initiatives that make starting a digital journal easy: Scholastica, Episciences.org and Open Journal Systems.

Annotating the scholarly web

A coalition for Annotating All Knowledge: a new open layer is being created over all knowledge

Brian A. Nosek and Yoav Bar-Anan describe a scientific utopia: Scientific Utopia: I. Opening scientific communication. I hope the ideas in the above post make this transition possible.

Climate Feedback has started a crowdfunding campaign to be able to review more media articles on climate science

Farewell peer reviewing

7 Crazy Realities of Scientific Publishing (The Director's Cut!)

Mapped: The climate change conversation on Twitter

I would trust most scientists to use annotation responsibly, but it can also be used to harass vulnerable voices on the web: Genius Web Annotator vs. One Young Woman With a Blog. Hypothes.is is discussing how to handle such situations.

Nature Chemistry blog: Post-publication peer review is a reality, so what should the rules be?

Report from the Knowledge Exchange event: Pathways to open scholarship gives an overview of the different initiatives to make science more open.

Magnificent BBC Reith lecture: A question of trust