
Thursday, May 6, 2021

We launched a new group to promote the translation of the scientific literature

Tell your story, tell your journey, they say. Climate Outreach advised: tell how you came to accept that climate change is a problem. Maybe I am too young, but even though I am not yet 50, I already accepted as a kid that climate change was a risk we should care about.

Otherwise, too, I do not remember often changing my mind suddenly in a way I could tell as a journey. The word "remember" may be doing a lot of the work there. Is it useful to forget such moments, to make it easier on yourself to change your mind? Or do many people work with really narrow uncertainty intervals even when they do not have a clue yet?

But when it comes to translations of scientific articles, I changed a lot. When I was doing cloud research I used to think that knowing English was just one of the skills a scientist needs. Just like logic, statistics, coding, knowing the literature, public speaking, and so on.

Working on historical climate data changed this. I regularly have to communicate with people at weather services all over the world, and many of them do not speak English (well), while they do work that is crucial for science. Given how hard we make it for them to participate, they do an amazing job; I guess it helps that the World Meteorological Organization translates all its reports into many languages.

The most "journey" moment was at the Data Management Workshop in Peru, where I was the only one not speaking Spanish. A colleague told me that she translated important scientific articles into Spanish and send them by email to her colleagues. Just like Albert Einstein translated scientific articles into English for those who did not master the language of science at the time.

This got me thinking about a database where such translations could be made available: where you could search for an article and see which translations exist, or search for translated articles on a specific topic. Such a resource would make producing translations more worthwhile and would thus hopefully stimulate their production.
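To make the idea concrete, a record in such a database could roughly look like the sketch below. This is only my illustration; the field names and structure are hypothetical and not an actual Translate Science data model.

    # Hypothetical sketch of a translation-database record (illustrative only).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Translation:
        language: str    # e.g. "es" for Spanish
        url: str         # where the translated text can be read
        translator: str  # person or group who made the translation

    @dataclass
    class ArticleRecord:
        doi: str                       # identifier of the original article
        title: str
        original_language: str
        topics: List[str] = field(default_factory=list)
        translations: List[Translation] = field(default_factory=list)

    def find_translations(records, doi, language=None):
        """Return the translations of one article, optionally for one language."""
        for record in records:
            if record.doi == doi:
                return [t for t in record.translations
                        if language is None or t.language == language]
        return []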

After gathering literature and bookmarks on this topic, and noticing who else was interested, I invited a group of people to see if we could collaborate. After a series of pandemic video calls, we decided to launch as a group, somewhat unimaginatively called "Translate Science". Please find below the part of our launch blog post about why translations are important.

(To be fair to me, and I like being fair to me, for a fundamental science that needs expensive instruments, such as cloud studies, it makes more sense to simply work in English. For sciences that directly impact people, such as climate, health and agriculture, two-way communication within science, with the orbit around science and with society is much more important.

But even in the cloud sciences I should probably have paid more attention to studies in other languages. One of our group members works on turbulence and droplets and found many worthwhile papers in Russian. I had never considered that and might have found some turbulent gems there as well.)


The importance of translated articles

English as a common language has made global communication within science easier. However, it has made communication with non-English communities harder. For English speakers it is easy to overestimate how many people speak English, because we mostly deal with foreigners who do speak English. It is thought that about one billion people speak English; that means that seven billion people do not. For example, at many weather services in the Global South only a few people master English, but they use the translated guidance reports of the World Meteorological Organization (WMO) a lot. For the WMO, as a membership organization of the weather services in which every weather service has one vote, translating all its guidance reports into many languages is a priority.

Non-English and multilingual speakers, on the African continent and elsewhere, could participate in science on an equal footing if there were a reliable system in which scientific work written in a non-English language is accepted and translated into English (or any other language) and vice versa. Language barriers should not waste scientific talent.

Translated scientific articles open science to regular people, science enthusiasts, activists, advisors, trainers, consultants, architects, doctors, journalists, planners, administrators, technicians and scientists. Such a lower barrier to participating in science is especially important for topics such as climate change, environment, agriculture and health. The easier knowledge transfer goes both ways: people benefit from scientific knowledge and people have knowledge scientists should know about. Translations thus help both science and society. They aid innovation and help tackle the big global challenges in the fields of climate change, agriculture and health.

Translated scientific articles speed up scientific progress by tapping into more knowledge and avoiding duplicated work. They thus improve the quality and efficiency of science. Translations can improve public disclosure, scientific engagement and science literacy. The production of translated scientific articles also creates training data for automatic translation, which for most languages is still lacking.

The full post at the Translate Science blog explains more about who we are, what we would like to do to promote translations and how you can join.


Sunday, March 25, 2018

Separation of feedback, publishing and assessment of scientific studies



I once asked a friend and colleague about a wrong sentence in one of his scientific articles. He is a smart cookie and should have known better than that. His answer was that he knew it was wrong, but the peer reviewer requested that claim. The error was small and completely inconsequential for the results; no real harm was done. I wondered what I would have done.

Peer review has two roles: it provides detailed feedback on your work and it advises the editor on whether the article is good enough for the journal. This feedback normally makes the article better, but it is somewhat uncomfortable to discuss with reviewers who have a lot of power because of their second role.


Your Manuscript On Peer Review by redpen/blackpen.
My experience is that you can normally argue your case with a reviewer. Still, reaching a common understanding can take an additional round of review, which means that the paper is published a few months later. In the worst case, not agreeing with a reviewer can mean that the paper is rejected and you have to submit it to another journal.

It is quite common for reviewers to abuse their power by requesting that their work be cited (more). Mostly this is somewhat subtle and the citation more or less relevant. However, an anonymous reviewer once requested that I cite four articles by one author, only one of which was somewhat relevant. That does not hurt the article, but it is disgusting power abuse and it rewards bad behavior. My impression is that these requests are not all deliberate head fakes; still, when I write a critical review I make sure not to ask for citations to my own work, but recommend some articles by colleagues instead. Multiple colleagues, so as not to get any of them into trouble.

Grassroots journals

I have started a grassroots journal on the homogenization of climate data and only recently began to realize that this also produces a valuable separation of feedback, publishing and assessment of scientific studies. That by itself can lead to a much healthier and more productive quality control system.

A grassroots journal assesses published articles and manuscripts in a field of study. One could also see it as a continually up-to-date review article. At least two reviewers write a review of the strengths and weaknesses of an article, everyone can comment on parts of the article, and the editors write a synthesis of the reviews. A grassroots journal does not publish the articles themselves, but collects articles published anywhere.

Every article also gets a quantitative assessment. This is similar to the current estimate of how important an article is from the journal it was able to get into. However, it does not reward people who submit their article to too big a journal, hoping to get lucky, which creates unnecessary double review work. For example, the publisher Frontiers reviews 2.4 million manuscripts and has to bounce about 1 million valid papers.

With traditional journals your manuscript only has to pass the threshold at the time of publication. With the up-to-date rolling review of grassroots journals, articles of lasting value are rewarded.

I would not have minded making a system without a quantitative assessment, but there are real differences between articles, readers need to prioritize their reading, and funding agencies would likely not accept grassroots journals as a replacement for the current system without it.

That is the final aim: getting rid of the current publishing system that holds science back. That grassroots journals immediately provide value is hopefully what makes the transition easier.

The more the assessments made by grassroots journals are accepted, the less it matters where you publish. Currently there is typically one journal, sometimes two, with the right topic and prestige to publish in. The situation for the reader is even worse: you often need one specific paper and not just some paper on the topic, and for that one specific paper there is only one (legal) supplier. This near-monopolistic market leads to Elsevier making profits of 30 to 50% and it suppresses innovation.



Another symptom of the monopolistic market is the manuscript submission systems, which combine the worst of pre-internet paper submissions (every figure a separate file, captions in a separate file) with the internet-age adage "save labor costs by letting your customers do the work" (adding the captions a second time when uploading a figure, with a neat pop-up for special characters).

Separation of powers

Publishing is easy nowadays. ArXiv does it for about one dollar per manuscript. Once scientists can freely choose where to publish, the publishers will have to provide good services at reasonable costs. The most important service would be to provide a broad readership by publishing Open Access.

Maybe it will even go one step further and scientists will simply publish their manuscript on a pre-print server and tell the relevant grassroots journals where to find it. Such scientists would likely still want some feedback from their colleagues on the manuscript. Several initiatives are currently springing up to review manuscripts before they are submitted to journals, for example Peer Community In (PCI). Currently PCI makes several rounds until the reviewers "endorse" a manuscript, so that in principle a journal could publish such a manuscript without further peer review.

With a separate, independent assessment of the published article there would no longer be any need for the "feedback peer reviewers" to give their endorsement. (It doesn't hurt either.) The authors would have much more freedom to decide whether the changes peer reviewers suggest are actually improvements. The authors, and not the reviewers, would decide when the manuscript is finished and can be published. If they make the wrong decisions, that would naturally be reflected in the assessment. If they do not add four citations for a peer reviewer, that would not be a problem.

There is a similar initiative in the life sciences called APPRAISE, but this will only review manuscripts published on pre-print servers. Once the journals are gone, this will be the same, but I feel that grassroots journals add more immediate value by reviewing all articles on one topic. Just like a review article should review the entire literature and not a random part.

A vigorously debated topic is whether peer reviews should be open or closed. Recently ASAPbio had this discussion and comprehensively summarized the advantages and disadvantages (well worth reading). Both systems have their strengths and I do not see one of them winning.

This discussion may change when we separate feedback and assessment. Giving feedback is mostly doing the authors a favor and could more easily be done in the open. Rather than cumbersome month-long rounds of review, it would be possible to simply write an email or pick up the phone to clarify contentious points. On the other hand, anonymity makes it easier to give an honest assessment, and I expect this part to be performed mostly anonymously. The editors of a grassroots journal determine what is published and can thus ensure that no one abuses their anonymity.

The future

To conclude: in a decade, a researcher writes an article and asks their colleagues for feedback. Once the manuscript no longer changes much, it is sent to an independent proofreading service. Another firm or person takes care of the layout and ensures that the article can still be read in a century by making versions using open standards.

The authors decide when their manuscript is ready to be published and upload it to an article repository. They send a notice to the journals that cover the topic. Journal A makes an assessment. Journals B and C copy this assessment, while journal D also uses it, but requests an additional review for a part that is important to them and writes another synthesis.

Readers add comments to the article using web annotations and the authors reply to them with clarifications. Also authors can add comments to share new insights on what was good and bad about the article.

Two years later a new study shows that one of the choices in the article was not optimal. This part was important for journals C and D and they update their assessment. The authors decide that it is relatively easy to redo their article with a better choice and that the article is sufficiently important to put in some work, so they upload the updated study to the repository and the journals update their assessments.



Related reading

APPRAISE (A Post-Publication Review and Assessment In Science Experiment). A similar idea to grassroots journals, but they only want to review pre-prints and will thus only review part of the literature. See also NPR on this initiative.

A related proposal by Gavin Schmidt: Someone C.A.R.E.S. Commentary And Replication in Earth Science (C.A.R.E.S.). Do we need a new venue for post-publication comments and replications?

Psychologist Henry L. Roediger, III on Anonymity in Scientific Publishing. A well-written article that lays out all the arguments, which differ depending on whether we talk about the authors, reviewers or editors. The author likes signed reviews. I feel that editors should prevent reviewers from taking advantage of their anonymity.


* Photo of scientific journals by Tobias von der Haar used under an Attribution 2.0 Generic (https://creativecommons.org/licenses/by/2.0/) license.
* Graph of publishing costs by Dave Gray used under an Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0) license.



Tuesday, November 21, 2017

The fight for the future of science in Berlin



A group of scientists, scholars, data scientists, publishers and librarians gathered in Berlin to talk about the future of research communication. With the scientific literature being so central to science, one could also say the conference was about the future of science.

This future will be more open, transparent, findable, accessible, interoperable and reusable.


The open world of research from Mark Hooper on Vimeo.

Open and transparent sound nice, and most seem to assume that more is better. But openness can also be oppressive and help the powerful, who have the resources to mine the information efficiently.

This is best known when it comes to government surveillance, which can be dangerous; states are powerful and responsible for the biggest atrocities in history. The right to vote in secret, to privacy, to organize and protections against unreasonable searches are fundamental protections against power abuse.

Powerful lobbies and political activists abuse transparency laws to harass inconvenient science.

ResearchGate, Google Scholar profiles and your ORCID page contribute to squeezing scientists like lemons by prominently displaying the number of publications and citations. This continual pressure can lead to burnout, less creativity and less risk taking. It encourages scientists to pick low-hanging fruit rather than do the studies they think would bring science forward the most. Next to this bad influence on publications, many other activities, which are just as important for science, suffer from this pressure. Many well-meaning people try to solve this by also quantifying those activities, but in doing so they only add more lemon presses.


That technology brings more surveillance and detrimental micro-management is not unique to science. The destruction of autonomy is a social trend that, for example, also affects truckers.

Science is a creative profession (even if many scientists do not seem to realise this). You have good ideas when you relax under the shower, lie in bed with a fever or go on a hike. The modern publish-or-perish system is detrimental to cognitive work. Work that requires cognitive skills is performed worse when you pressure people; it needs autonomy, mastery and purpose.

Scientists work on the edge of what is known and invariably make mistakes. If you are not making mistakes you are not pushing your limits. This needs some privacy, because unfortunately making mistakes is not socially acceptable for adults.



Chinese calligraphy with water on a stone floor. More ephemeral communication can lead to more openness, improve the exchange of views and produce more quality feedback.
Later on in the process, the ephemeral nature of a scientific talk requires deep concentration from the listener and is a loss for people not present, but early in a study it is also a feature. Without the freedom to make mistakes there will be less exciting research and slower progress. Scientists are also human, and once an idea is fixed on "paper" it becomes harder to change, while the flexibility to update your ideas to the evidence is important and especially needed in the early stages.

These technologies also have real benefits, for example making it easier to find related articles by the same author. A unique researcher identifier like ORCID especially helps when someone changes their name, or in countries like China where a billion people seem to share about 1000 unique names. But there is no need for ResearchGate to put the number of publications and citations in huge numbers on the main profile page. (The prominent number of followers on Twitter profile pages also makes it less sympathetic in my view and needlessly promotes competition and inequality. Twitter is not my work, so artificial competition is even more out of place there.)

Open Review is a great option if you are confident about your work but fear that reviewers will be biased. Sometimes, however, it is hard to judge how good your work is, and it is nice to have someone discreetly point out problems with your manuscript. Especially in interdisciplinary work it is easy to miss something a peer reviewer would notice, while your network may not include someone from the other discipline whom you can ask to read the manuscript.

Once an article, code or dataset is published, it is fair game. That is the point where I support Open Science. For example, publishing Open Access is better than behind a pay-wall. If there is a reasonable chance of re-use, publishing data and code helps science progress and should be rewarded.

Still, I would not make a fetish out of it; I made the data available for my article on benchmarking homogenisation algorithms. This is an ISI highly-cited article, but I only know of one person who has used the data. For less important papers, publishing data can quickly become additional work without any benefit. I prefer nudging people towards Open Science over making it obligatory.

The main beneficiary of publishing data and code is your future self; no one is more likely to continue your work. This should be an important incentive. Another incentive are Open Science "badges": icons presented next to the article title indicating whether the study was preregistered and provides open data and open materials (code). The introduction of these badges in the journal Psychological Science quickly increased the percentage of articles with available data to almost 40%.

The conference was organised by FORCE11, a community interested in future research communication and e-scholarship. There are already a lot of tools for the open, findable and well-connected world of the future, but their adoption could go faster. So the theme of this year's conference was "changing the culture".

Open Access


Christopher Jackson; on the right. (I hope I am allowed to repeat his joke.)
One of the main addresses was by Christopher Jackson. He has published over 150 scientific articles, but only became aware of how weird the scientific publishing system is when he joined ResearchGate, a social network for scientists, and was not allowed to post many of his articles there because the publishers hold the copyright and do not allow it.

The frequent requests for copies of his articles on ResearchGate also made him aware of how many scientists have trouble accessing the scientific literature due to pay-walls.

Another keynote speaker, Diego Gómez, was threatened with up to eight years in jail for making scientific articles accessible. His university, the Universidad del Quindío in Colombia, spends more on licenses for scientific journals ($375,000) than on producing scientific knowledge itself ($253,000).



The lack of access to the scientific literature makes research in poorer countries a lot harder, but even I am regularly unable to download important articles and have to ask the authors for a copy or ask our library to order a photocopy elsewhere, although the University of Bonn is not a particularly poor university.

Non-scientists may also benefit from being able to read scientific articles, although when it is important I would prefer to consult an expert over mistakenly thinking I got the gist of an article in another field. Sometimes a copy of the original manuscript can be found on one of the authors' homepages or in a repository. Google (Scholar) and the really handy browser add-on Unpaywall can help find those using the Open Access DOI database.

Sharing passwords and Sci-Hub are also solutions, but illegal ones. The real solutions to making research more accessible are Open Access publishing and repositories for manuscripts. By now about half of recently published articles are Open Access; at this pace all articles would be Open Access by 2040. Interestingly, the largest fraction of the publicly available articles does not have an Open Access license, which is also called bronze Open Access. This means that the download possibility could be revoked again.

The US National Institutes of Health and the European Union mandate that the research they support be published Open Access.

A problem with Open Access journals can be that some are only interested in the publication fees and do not care about the quality. These predatory journals are bad for the reputation of real Open Access journals, especially in the eyes of the public.

I have a hard time believing that the authors do not know that these journals are predatory. Next to the sting operations to reveal that certain journals will publish anything, it would be nice to also have sting predatory journals that openly email the authors that they will accept any trash and see if that scares away the authors.

Jeffrey Beall used to keep a list of predatory journals, but had to stop after legal pressure from these frauds. The publishing firm Cabell has now launched its own proprietary (pay-walled) blacklist, which already lists 6,000 journals and is growing fast.

Preprint repositories

Before a manuscript is submitted to a journal, the authors naturally still hold the copyright. They can thus upload the manuscript to a database, a so-called preprint or institutional repository. Unfortunately some publishers say this constitutes publication and refuse to publish the manuscript because it is no longer new. However, most publishers accept the publication of the manuscript as it was before submission. A smaller number are also okay with the final version being published on the authors' homepages or in repositories.

Where a good option for an Open Access journal exists we should really try to use it. Where it is allowed, we should upload our manuscripts to repositories.

Good news for the readers of this blog is that a repository for the Earth sciences was opened last week: EarthArXiv. The AGU will also demonstrate its preprint repository at the AGU Fall Meeting this December. For details see my previous post. EarthArXiv already has 15 climate-related preprints.

This November a new OSF archive also started: MarXiv, not for Marxists, but for the marine-conservation and marine-climate sciences.
    When we combine the repositories with peer review organised by the scientific community itself, we will no longer need pay-walling scientific publishers. This can be done in a much more informative way than currently, where the reader only knows that the paper was apparently good enough for the journal, but not why it is a good article or how it fits into the (later published) literature. With Grassroots scientific publishing we can do a much better job.

    One way the reviews at a Grassroots journal can be better is by openly assessing the quality of the work. Now all we know is that the study was sufficiently interesting for some journal at that time for whatever reason. What I did not realise before Berlin is that the current system also wastes a lot of reviewing time. Traditional journals waste resources on manuscripts that are valid but are rejected because they are seen as not important enough for the journal. For example, the publisher Frontiers reviews 2.4 million manuscripts and has to bounce about 1 million valid papers.

    On average scientists pay $5,000 per published article. This while scientists do most of the work for free (writing, reviewing, editing) and while the actual costs are a few hundred dollars. The money we save can be used for research. In light of these numbers it is actually amazing that Elsevier only makes a profit of 35 to 50%. I guess their CEO's salary eats into the profits.

    Preprints would also have the advantage of making studies available faster. Open Access makes text and data mining easier, which helps in finding all articles on molecule M or receptor R. The first publishers are using text mining and artificial intelligence to suggest suitable peer reviewers to their editors. (I would prefer editors who know their field.) It would also help in detecting plagiarism and even statistical errors.

    (Before our machine overlords find out, let me admit that I did not always write the model description of the weather prediction model I used from scratch.)



    Impact factors

    Another issue Christopher Jackson highlighted is the madness of Journal Impact Factors (JIF or IF). They measure how often an average article in a journal is cited in the first two or five years after publication. They are quite useful for librarians to get an overview of which journals to subscribe to. The problem begins when the impact factor is used to determine the quality of a journal or of the articles in it.
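    For concreteness: the two-year impact factor of a journal for a given year is the number of citations received in that year by the items the journal published in the two preceding years, divided by the number of citable items published in those two years. A toy calculation with made-up numbers:

        # Toy two-year Journal Impact Factor calculation; all numbers are invented.
        citations_in_2017_to_2015_2016_items = 450  # citations received in 2017
        citable_items_2015_2016 = 180               # articles/reviews published 2015-2016

        jif_2017 = citations_in_2017_to_2015_2016_items / citable_items_2015_2016
        print(round(jif_2017, 3))  # 2.5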

    How common this is, is actually something I do not know. For my own field I think I have a reasonable feeling for the quality of the journals, which is independent of the impact factor. More focussed journals tend to have smaller impact factors, but that does not mean they are worse. Boundary-Layer Meteorology is certainly not worse than the Journal of Geophysical Research; the former has an Impact Factor of 2.573, the latter of 3.454. If you made a boundary layer study it would be madness to publish it in a more general geophysical journal where the chance is smaller that relevant colleagues will read it. Climate journals will have higher impact factors than meteorological journals because meteorologists mainly cite each other, while many sciences build on climatology. When the German meteorological journal MetZet was still a pay-walled journal it had a low impact factor because not many people outside of Germany had a subscription, but the quality of the peer review and the articles was excellent.

    I would hope that reviewers making funding and hiring decisions know the journals in their field, take these kinds of effects into account and read the articles themselves. The [[San Francisco Declaration on Research Assessment]] (DORA) rejects the use of the impact factor. In Germany it is officially forbidden to judge individual scientists and small groups based on bibliographic measures such as the number of articles times the impact factor of the journals, although I am not sure everybody knows this. Imperial College recently adopted similar rules:
    “the College should be leading by example by signalling that it assesses research on the basis of inherent quality rather than by where it is published”
    “eliminate undue reliance on the use of journal-based metrics, such as JIFs, in funding, appointment, and promotion considerations”
    The relationship between the number of citations an article can expect and the impact factor is weak because there is enormous spread. Jackson showed this figure.



    This could well be a feature and not a bug. We would like to measure quality, not estimate the (future) number of citations of an article. For my own articles, I do not see much correlation between my subjective quality assessment and the number of citations. Which journal you can get into may well be a better quality measure than individual citations. (The best assessment is reading articles.)

    The biggest problem is when the journals, often commercial entities, start optimising for the number of citations rather than for quality. There are many ways to get more citations, and thus a higher impact factor, other than providing the best possible quality control. An article that reviews the state of a scientific field typically gets a lot of citations, especially if written by the main people in the field; nearly every article will mention it in the introduction. Review papers are useful, but we do not need a new one every year. Articles with many authors typically get more citations. Articles on topics many scientists work on get more citations. For Science and Nature it is important to get coverage in the mainstream press, which is also read by scientists and leads to more citations.

    Reading articles is naturally work. I would suggest reducing the number of reviews.

    Attribution, credit

    Traditionally one gets credit for scientific work by being author of a scientific paper. However, with increased collaboration and interdisciplinary work author lists have become longer and longer. Also the publish or perish system likely contributed: outsourcing part of the work is often more efficient than doing it yourself, while the person doing a small part of the analysis is happy to have another paper on their publish or perish list.

    What is missing from such a system is getting credit for a multitude of other important tasks. How does one value non-traditional outputs supplied by researchers: code, software, data, design, standards, models, MOOC lectures, newspaper articles, blog posts, community-engaged research and citizen science? Someone even mentioned musicals.

    A related question is who should be credited: technicians, proposal writers, data providers? As far as I know it would be against the rules to put people in such roles on the author list, but they do work that is important, needs to be done and thus needs to be credited somehow. A work-around is to invite them to help in editing the manuscript, but it would be good to have systems in which various roles are credited. Designing such a system is hard.

    One is tempted to make such a credit system very precise, but ambiguity also has its advantages in dealing with the messiness of reality. I once started a study with one colleague. Most of this study did not work out and the final article was only about a part of it. A second colleague helped with that part. For the total work the first colleague had done more, for the part that was published the second one had. Both justifiably felt that they should be second author. Do you get credit for the work or for the article?

    Later the colleague who had become third author of this paper wrote another study in which I helped. It was clear that I should have been the second author, but in retaliation he made me the third author. The second author wrote several emails saying this was insane, not knowing what was going on, but to no avail. A too precise credit system would leave no room for such retaliation tactics to clear the air for future collaborations.

    In one session various systems of credit "badges" were shown and tried out. What seemed to work best was a short description of the work done by every author, similar to a detailed credit role at the end of a movie.

    This year a colleague wrote on a blog that he did not agree with a sentence in an article he was an author of. I did not know that was possible; in my view authors are responsible for the entire article. Maybe we should split the author list into authors who guarantee with their name and reputation the quality of the full article and honorary authors who only contributed a small part. This colleague could then be an honorary author.

    LinkedIn endorsements were criticised because they are not transparent and because they make it harder to change your focus, as the old endorsements and contacts stick.

    Pre-registration

    Some fields of study have trouble replicating published results. These are mostly empirical fields where single studies — to a large part — stand on their own and are not woven together by a net of theories.

    One of the problems is that only interesting findings are published and if no effect is found the study is aborted. In a field with strong theoretical expectations also finding no effect when one is expected is interesting, but if no one expected a relationship between A and B, finding no relationship between A and B is not interesting.

    This becomes a problem when there is no relationship between A and B, but multiple experiments or trials are made and some find a fluke relationship by chance. If only those get published, that gives a wrong impression. This problem can be tackled by registering trials before they are made, which is becoming more common in medicine.

    A related problem is p-hacking and hypothesis generation after results are known (HARKing). A relationship that would be statistically significant if only one outlier were not there makes it tempting to find a reason why the outlier is a measurement error and should be removed.

    Similarly, the data can be analysed in many different ways to study the same question, one of which may be statistically significant by chance. This is also called "researcher degrees of freedom" or "the garden of forking paths". The Center for Open Science has made a tool where you can pre-register your analysis before the data is gathered or analysed, to reduce the freedom to falsely obtain significant results this way.
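    A small simulation can illustrate the statistical point: even when there is no real relationship, trying a handful of defensible analysis variants on the same data makes it quite likely that at least one of them comes out "significant". This is only an illustration, not a re-analysis of any real study; it assumes NumPy and SciPy are available.

        # Illustration of "researcher degrees of freedom": pure noise, but several
        # analysis variants, so one of them often reaches p < 0.05 by chance.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        n_datasets, n = 1000, 40
        false_positives = 0

        for _ in range(n_datasets):
            x = rng.normal(size=n)
            y = rng.normal(size=n)  # no true relationship between x and y
            p_values = []
            r, p = stats.pearsonr(x, y)               # variant 1: plain correlation
            p_values.append(p)
            keep = np.abs(y) < np.abs(y).max()        # variant 2: drop the largest |y| "outlier"
            r, p = stats.pearsonr(x[keep], y[keep])
            p_values.append(p)
            r, p = stats.spearmanr(x, y)              # variant 3: rank correlation instead
            p_values.append(p)
            sub = x > 0                               # variant 4: only the subsample with x > 0
            r, p = stats.pearsonr(x[sub], y[sub])
            p_values.append(p)
            if min(p_values) < 0.05:
                false_positives += 1

        print(false_positives / n_datasets)  # noticeably above the nominal 0.05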



    A beautiful example of the different answers one can get analysing the same data for the same question. I found this graph via a FiveThirtyEight article, which is also otherwise highly recommended: "Science Isn’t Broken. It’s just a hell of a lot harder than we give it credit for."

    These kinds of problems may be less severe in the natural sciences, but avoiding them can still make the science more solid. Before Berlin I was hesitant about pre-registering analyses because in my work every analysis is different, which makes it harder to know in detail in advance how the analysis should go; there are also valid outliers that need to be removed, selecting the best study region needs a look at the data, and so on.

    However, what I did not realise, although it is quite trivial, is that you can do the pre-registered analysis but also additional analyses and simply mark them as such. So if you can do a better analysis after looking at the data, you can still do so. One of the problems of pre-registration is that quite often people do not do the analysis in the pre-registered way and reviewers mostly do not check this.

    In the homogenisation benchmarking study of the ISTI we will describe the assessment measures in advance. This is mostly because the benchmarking participants have a right to know how their homogenisation algorithms will be judged, but it can also be seen as pre-registration of the analysis.

    To stimulate the adoption of pre-registration, the Center for Open Science has designed Open Science badges, which can be displayed with articles meeting the criteria. The pre-registration has to be done at an external site where the text cannot be changed afterwards. The pre-registration can be kept undisclosed for up to two years. To get things started they even award 1,000 prizes of $1,000 for pre-registered studies.

    The next step would be journals that review "registered reports", which are peer reviewed before the results are in. This should stimulate the publication of negative (no effect found) results. (There is still a final review when the results are in.)

    Quick hits

    Those were the main things I learned, now some quick hits.

    With the [[annotation system]] you can add comments to all web pages and PDF files. People may know annotation from Hypothes.is, which is used by ClimateFeedback to add comments to press articles on climate change. A similar initiative is PaperHive. PaperHive sells its system as collaborative reading and showed an example of students jointly reading a paper for class, annotating difficult terms/passages. It additionally provides channels for private collaboration, literature management and search. It has also already been used for the peer review (proof reading) of academic books. They now both have groups/channels to allow groups to make or read annotations, as well as private annotations, which can be used for your own paper archive. Web annotations aimed at the humanities are made by Pund.it.

    Since February this year, web annotation is a World Wide Web Consortium (W3C) standard. This will hopefully mean that web browsers will start including annotation in their default configuration and it will become possible to comment on every homepage. This will likely lead to public annotation streams going down to the level of YouTube comments, so some moderation will be needed for the public channel as well, for example to combat doxing. PaperHive is a German organisation and thus removes hate speech.
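    To give an idea of what a standardised annotation looks like: in the W3C Web Annotation data model, an annotation is a small JSON-LD document that links a body (the comment) to a target (the annotated page and the selected text). Roughly like this; the example is hand-written for illustration and is not the output of Hypothes.is, PaperHive or any other specific tool.

        # Rough, hand-written illustration of a W3C Web Annotation (JSON-LD).
        import json

        annotation = {
            "@context": "http://www.w3.org/ns/anno.jsonld",
            "type": "Annotation",
            "body": {
                "type": "TextualBody",
                "value": "This claim needs a citation.",
                "format": "text/plain",
            },
            "target": {
                "source": "https://example.org/some-article.html",
                "selector": {
                    "type": "TextQuoteSelector",
                    "exact": "the sentence being commented on",
                },
            },
        }

        print(json.dumps(annotation, indent=2))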

    Peer Community In (PCI) is a system to collaboratively peer review manuscripts that can later be sent to an official journal.

    The project OpenUp studied a large number of Open Peer Review systems and their pros and cons.

    Do It Yourself Science. Not sure it is science, but great when people are having fun with science. When the quality level is right, you could say it is citizen science led by the citizens themselves. (What happened to the gentlemen scientists?)

    Philica: Instant academic publishing with transparent peer-review.



    Unlocking references from the literature: The Initiative for Open Citations. See also their conference abstract.

    I never realised there was an organisation behind the Digital Object Identifiers for scientific articles: CrossRef. It is a collaboration of about eight thousand scientific publishers. For other digital sources there are other organisations, while the main system is run by the International DOI Foundation. DOIs for data are handled, amongst others, by DataCite. CrossRef is working on a system where you can also see the webpages that cite scientific articles, what they call "event data". For example, this blog has cited 142 articles with a DOI. CrossRef will also take web annotations into account.
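    CrossRef also runs a public REST API, so the metadata behind a DOI can be retrieved programmatically. A minimal sketch, assuming the Python requests library and with error handling kept to a minimum:

        # Minimal sketch: look up CrossRef metadata for an article by its DOI
        # via the public CrossRef REST API (api.crossref.org).
        import requests

        def crossref_metadata(doi):
            response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
            response.raise_for_status()
            return response.json()["message"]

        meta = crossref_metadata("10.xxxx/your-doi-here")  # replace with a real DOI
        print(meta["title"][0])
        print(meta.get("is-referenced-by-count"))  # citations CrossRef knows about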

    Climate science was well represented at this conference. There were posters on open data for the Southern Ocean and on the data citation of the CMIP6 climate model ensemble. Shelley Stall of AGU talked about making FAIR and Open data the default for Earth and space science. (Et moi.)



    In the Life Sciences they are trying to establish "micro publications", the publication of a small result or dataset, several of which can then later be combined with a narrative into a full article.

    A new Open Science Journal: Research Ideas and Outcomes (RIO), which publishes all outputs along the research cycle, from research ideas, proposals, to data, software and articles. They are interested in all areas of science, technology, humanities and the social sciences.

    Collaborative writing tools are coming of age, for example Overleaf for people using LaTeX. Google Docs and Microsoft Word Live also do the trick.

    Ironically, Elsevier was one of the sponsors. Their brochure suggests they are one of the nice guys, serving humanity with cutting-edge technology.

    The Web of Knowledge/Science (a more selective version of Google Scholar) moved from Thomson Reuters to Clarivate Analytics, together with the Journal Citation Reports that computes the Journal Impact Factors.

    Publons has set up a system where researchers can get public credit for their (anonymous) peer reviews. It is hoped that this stimulates scientists to do more reviews.

    As part of Wikimedia, best known for Wikipedia, people are building up a multilingual database of facts: Wikidata. As in Wikipedia, volunteers build up the database and sources need to be cited to make sure the facts are right. People are still working on software to make contributing easier for people who are not data scientists and do not dream of the semantic web every night.

    Final thoughts

    For a conference about science, there was relatively little science. One could have run a randomized controlled trial to see the influence of publishing your manuscript on a preprint server. Instead, the estimate that articles also submitted to ArXiv receive more citations (18%) was based on observational data, and the difference could simply be that scientists put more work into spreading their best articles.

    The data manager at CERN argued that close collaboration with the scientists can help in designing interfaces that promote the use of Open Science tools. Sometimes small changes produce large increases in adoption of tools. More research into the needs of scientists could also help in creating the tools in a way that they are useful.

    Related reading, resources

    The easiest access to the talks of the FORCE2017 conference is via the "collaborative note taking" Google Doc

    Videos of last year's FORCE conference

    Peer review

    The Times Literary Supplement: Peer review: The end of an error? by ArXiving mathematician Timothy Gowers

    Peer review at the crossroads: overview over the various open review options, advantages and acceptance

    Jon Tennant and many colleagues: A multi-disciplinary perspective on emergent and future innovations in peer review

    My new idea: Grassroots scientific publishing

    Pre-prints

    The Earth sciences no longer need the publishers for publishing
     
    ArXivist. A machine learning app that suggests the most relevant new ArXiv manuscripts in a daily email

    The Stars Are Aligning for Preprints. 2017 may be considered the ‘year of the preprint’

    Open Science


    The State of OA: A large-scale analysis of the prevalence and impact of Open Access articles

    Open Science MOOC (under development) already has an extensive resources page

    Metadata2020: Help us improve the quality of metadata for research. They are interested in metadata important for discoverability and reuse of data

    ‘Kudos’ promises to help scientists promote their papers to new audiences. For example with plain-language summaries and tools measure which dissemination actions were effective

    John P. A. Ioannidis and colleagues: Bibliometrics: Is your most cited work your best? Survey finds that highly cited authors feel their best work is among their most cited articles. It is the same for me, though looking at all my articles the correlation is not strong

    Lorraine Hwang and colleagues in Earth and Space Science: Software and the scientist: Coding and citation practices in geodynamics, 2017

    Neuroskeptic: Is Reproducibility Really Central to Science?


    Sunday, October 1, 2017

    The Earth sciences no longer need the publishers for publishing



    Manuscript servers are buzzing around our ears, as the Dutch say.

    In physics it is common to put manuscripts on the ArXiv server (pronounced: archive server). A large part of these manuscripts are later sent to a scientific journal for peer review, following the traditional scientific quality control system and assessment of the importance of studies.

    This speeds up the dissemination of scientific studies and can promote informal peer review before the formal peer review. The copyright of the manuscripts has not yet been transferred to a publisher, so this also makes the research available to all without pay-walls. Because the manuscripts were expected to be published on paper in a journal later, ArXiv is called a pre-print server. In these modern times I prefer the term manuscript server.

    The manuscript gets a time stamp, so a pre-print server can be used to claim precedence, although the date of publication is traditionally used for this and there are no rules about which date is most important. Pre-print servers can also give the manuscript a Digital Object Identifier (DOI) that can be used to cite it. A problem could be that some journals see a pre-print as prior publication and refuse the manuscript, but I am not aware of any such journals in the atmospheric sciences; if you know of one, please leave a comment below.

    ArXiv has a section for atmospheric physics, where I also uploaded some manuscripts as a young cloud researcher. However, because most meteorologists did not participate it could not perform the same function as it does in physics; I never got any feedback based on these manuscripts. When ArXiv made uploading manuscripts harder to get rid of submissions by retired engineers, I stopped and just put the manuscripts on my homepage.

    Three manuscript archives

    Maybe the culture will now change and more scientists participate with three new initiatives for manuscript servers for the Earth sciences. All three follow a different concept.

    This August a digital archive started for paleontology (paleorXiv, twitter). If I see it correctly they already have 33 manuscripts. (Only a part of them are climate related.) This archive builds on the open-source preprint server of the Open Science Framework (OSF) of the non-profit Center for Open Science. The OSF is a platform for the entire scientific workflow, from idea, to coding and collaboration, to publishing. Other groups are also welcome to build a pre-print archive using their servers and software.

    [UPDATE. Just announced that in November a new ArXiv will start: MarXiv, not for Marxists, but for the marine-conservation and marine-climate sciences.]

    Two initiatives have just started for all of the Earth sciences. One grassroots initiative (EarthArXiv) and one by AGU/Wiley (ESSOAr).

    EarthArXiv will also be based on the open-source solution of the Open Science Framework. It is not up yet, but I presume it will look a lot like paleorXiv. It seems to be catching on, with about 600 Twitter followers and about 100 volunteers in just a few days. They are working on a logo (requirements, competition). Most logos show the globe; I would include the study of other planets in the Earth sciences.

    The American Geophysical Union (AGU) has announced plans for an Earth and Space Science Open Archive (ESSOAr), which should be up and running early next year. They plan to be able to show a demo at the AGU's fall meeting in December.

    The topic would thus be somewhat different due to the inclusion of space science and they will also permanently archive posters presented at conferences. That sounds really useful; now every conference designs their own solution and the posters and presentations are often lost after some time when the homepage goes down. EarthArXiv unfortunately seems to be against hosting posters. ESSOAr would also make it easy to transfer the manuscripts to (AGU?) journals.

    A range of other academic societies are on the "advisory board" of ESSOAr, including the EGU. ESSOAr will be based on proprietary software of the scientific publisher Wiley. Proprietary software is a problem for something that should function for as close to an eternity as possible. Not only Wiley, but also the AGU itself is a major scientific publisher. They are not Elsevier, but this quickly leads to conflicts of interest. It would be better to have an independent initiative.

    There need not be any conflict between the two "duelling" (according to Science magazine) servers. The manuscripts are open access and I presume there will be an API that makes it possible to mirror manuscripts from one server on the other. The editors could then remove the ones they do not see as fitting their standards (or not waste their time). Beyond esoteric (WUWT & Co.) nonsense, I would prefer not to have many standards; that is the idea of a manuscript server.



    Paul Voosen of Science magazine wonders whether "researchers working in more sensitive areas of the geosciences, such as climate science, will embrace posting their work prior to peer review." I see no problem there. There is nothing climate scientists can do to pacify the American culture war; we should thus do our job as well as possible, and my impression is that climatology is easily in the better half of the Open Science movement.

    I love to complain about it, but my impression is that sharing data is more common in the atmospheric sciences than on average. This could well be because it matters more: data is needed from all over the world, and the World Meteorological Organization was one of the first global organizations set up to coordinate this. The European Geosciences Union (EGU) has had open review journals for more than 15 years; the initial publication in a "discussion" journal is similar to putting your manuscript on a pre-print server. Many of the contributions to the upcoming FORCE2017 conference on Research Communication and e-Scholarship that mention a topic are about climate science.

    The road to Open Access

    A manuscript server is one step on the way to an Open Access publishing future. This would make articles more accessible to the researchers and the public who paid for them.

    Open Access would break the monopoly given to scientific publishers by copyright laws. An author looking for a journal to publish their work can compare price and service. But a reader typically needs to read one specific article and then has to deal with a publisher with monopoly power. This has led to monopolistic profits and commercial publishers that have lost touch with their customers, the scientific community. That Elsevier has a profit margin of "only" 36 percent thus seems to be mismanagement; it should be close to 100 percent.



    ArXiv shows that publishing a manuscript costs less than a dollar per article. Software to support the peer review can be rented for 10 dollars per article (see also: Episciences.org and Open Journal Systems). Writing the article and reviewing it is done for free by the scientific community. Most editors are also scientists working for free; sometimes the editor in chief gets some secretarial support or some money for a student helper. Typesetting by journals is highly annoying, as they often add errors in doing so. Typesetting is easily done by a scientist, especially using LaTeX, but also with a Word template. That scientists pay thousands of dollars per article is not related to the incurred costs, but due to monopoly brand power.

    Publishers that serve the community, articles that everyone can read and less funding wasted on publishing are a desirable goal, but it is hard to get there because the barriers to entry are large. Scientists want to publish in journals with a good reputation and, if the journals are not Open Access, with a broad circulation. This makes starting a new journal hard: even if a new journal does a much better job at a much lower price, it starts with no reputation, and without a reputation it will not get the manuscripts to prove its worth.

    To make it easier to get from the current situation to an Open Access future, I propose the concept of Grassroots Scientific Publishing. Starting a new journal should be as easy as starting a blog: make an account, give the journal a name and select a layout. Finished, start reviewing.

    To overcome the problem that initially no one will submit manuscripts, a grassroots journal can start by reviewing already published articles. This is not wasted time, because we can do a much better job communicating the strengths and weaknesses as well as the importance of an article than we do now, where the only information we have on the importance is the journal in which it was published. We can categorise and rank articles. We can have all articles of one field in the same journal, no longer scattered over many different journals.

    Even without replacing traditional journals, such a grassroots journal would provide a valuable service to its scientific community.

    To explain the idea and get feedback on how to make it better I have started a new grassroots publishing blog.
    Once this kind of journal is established and has shown that it provides superior quality assurance and information, there is no longer any need for pay-wall journals and we can just review the articles on manuscript servers.

    Related reading

    Paul Voosen in Science: Dueling preprint servers coming for the geosciences

    AGU: ESSOAr Frequently Asked Questions

    The Guardian, long read: Is the staggeringly profitable business of scientific publishing bad for science?

    If you are on twitter, do show support and join EarthArXiv

    Three cheers for gatekeeping

    Peer review helps fringe ideas gain credibility

    Grassroots scientific publishing


    * Photo Clare Night 2 by Paolo Antonio Gonella is used under a Creative Commons Attribution 2.0 Generic (CC BY 2.0) license.

    Sunday, July 2, 2017

    The Trump administration proposes a new scientific method just for climate studies



    What could possibly go wrong?

    [[Scott Pruitt]] is the former Oklahoma Attorney General who copied and pasted letters from pro-pollution lobbyists onto his letterhead. Much of his previous work was devoted to suing the EPA. Now he works for the big-money donors as head of the EPA. This Scott Pruitt is allegedly working on formulating a new scientific method to be used for studying climate change alone. E&E News just reported that this special scientific method will use "red team, blue team" exercises to conduct an "at-length evaluation of U.S. climate science."

    Let's ignore that it makes no sense to speak of US climate science when it comes to the results. Climate science is the same in every country. There tends to be only one reality.

    Previously [[Rick Perry]], head of the Department of Energy (DOE), who campaigned on closing the DOE before he knew what it does, had joined the group calling for replacing the scientific method with a Red Team Blue Team exercise.



    A Red Team is supposed to challenge the claims of the Blue Team. It is an idea from hierarchical organisations, like the military and multinationals, where challenging the orthodoxy is normally not appreciated and thus needs to be specially encouraged when management welcomes it.

    Poking holes is our daily bread

    It could naturally be that the climate "sceptics" do not know that challenging other studies is built into everything scientists do; they do not give the impression of knowing science that well. In their think tanks and multinational corporations they are probably happy to bend the truth to get ahead. They may think that that is how science works, and they may not be able to accept that a typical scientist is intrinsically motivated to figure out how reality works.
    At every step of a study a scientist is aware that at the end it has to be written up very clearly, to be criticised by peer reviewers before publication and by any expert in the field after publication, and that people will build on the study and in doing so may find flaws. Scientific claims should be falsifiable: one should be able to show them wrong. The main benefit of this is that it forces scientists to describe the work very clearly and make it vulnerable to attack.

    The first time new results are presented is normally in a working group seminar, where the members of the Red Team are sitting around the table, asking specific questions during the talk and criticising the main ideas after the talk. These are scientists working with similar methods, but also ones who work on very different problems. Everyone, and especially the group leaders, has an interest in defending the reputation of the group and making sure no nonsense spoils it.

    The results are normally also presented at workshops, conferences and invited talks at other groups. At workshops leading experts will be there working on similar problems, but with a range of different methods and backgrounds. At conferences and invited talks there are in addition also many scientists from adjacent fields in the audience or scientists working with similar methods on other problems. A senior scientist will get blunt questions after the talk if anything is wrong with it. Younger scientists will get nicer questions in public and the blunt ones in private.

    An important Red Team consists of your co-authors. Modern science is mostly done in teams. That is more efficient, reduces the chances of rookie errors and is very easy thanks to the internet. The co-authors vouch with their reputation for the quality of the study, especially for the parts where they have expertise.

    None of these steps are perfect and journalists should get away from their single-study fetish. But together these steps ensure that the quality of the scientific literature as a whole is high.

    (It is actually good that none of these steps are perfect. Science works on the boundary of what is known; scientists who do not make errors are not pushing themselves enough. If peer review only passed perfect articles, that would be highly inefficient and not much would be published; it normally takes several people and studies until something is understood. It is helpful that the scientific literature is of high quality, it does not need to be perfect.)

    Andrew Revkin should know not to judge the quality of science by single papers or single scientists, that peer review does not need to be perfect and that it did not exist for most of the scientific era. But being a false-balance kind of guy, he regrettably uses "Peer review is often not as adversarial as intended" as an argument to see merit in a Red Team exercise, while simultaneously acknowledging that "All signs point to political theater."

    Red Team science

    An optimistic person may think that the Red Team proposal of the Trump administration will follow the scientific method. We already had the BEST project of the conservative physics professor Richard Muller. BEST was a team of outsiders having a look at the warming over land estimated from weather station observations. This project was funded in part by the Charles G. Koch Foundation, which also funds the Heartland Institute, hard-core deniers.

    The BEST project found that the previous scientific assessments of the warming were right.



    The BEST project is also a reason not to be too optimistic about Pruitt's proposal. Before BEST published its results, mitigation sceptics were very enthusiastic about the work, and one of their main bloggers, Anthony Watts, claimed that the methods were so good that he would accept the outcome no matter the result. That changed when the result was in.

    Judith Curry was part of BEST, but left before she would have had to connect her name with the results. Joseph Majkut of the Niskanen Center, who wrote an optimistic Red Team article, claims there were people who changed their minds due to BEST, but has not given any examples yet.

    It also looks as if BEST was punished for a result that was inconvenient for the funders. The funders are apparently no longer interested in studying the quality of climate observations; Berkeley Earth now mainly works on air pollution, while BEST has not even looked at the largest part of the Earth yet: the oceans. The nice thing about being funded by national science foundations is that they care about the quality of the work, but not about the outcomes.

    If coal or oil corporations thought there was a minute possibility that climate science was wrong, they would fund their own research. Feel free to call that Red Team research. That they invest in PR instead shows how confident they are that the science is right. Initially Exxon did fund research; when it became clear climate change was a serious risk, they switched to PR.

    Joseph Majkut thinks that a well-executed Red Team exercise could convince people. In the light of the BEST project, the corporate funding priorities and the behaviour of mitigation sceptics in the climate "debate", I am sceptical. People who did not arrive at their position because of science will not change their position because of science.

    Washington Republicans will change their mind when the bribes, aka campaign contributions, of the renewable energy sector are larger than those of the fossil fuel sector. Or when the influence of money is smaller than that of the people, like in the good old days.


    Science lives on clarity

    As a scientist, I would suggest we just wait and see at this time. Let the Trump administration make a clear plan for this new scientific method. I am curious.

    Let them tell us how they will select the members of the Red Team. Given that scientists are always critiquing each other's work, I am curious how they plan to keep serious scientists out of their Red Team. I would be happy to join; there is still a lot of work to do on the quality of station data. Scientific articles typically end with suggestions for future research. That is the part I like writing the most.

    Because the Trump administration is also trying to cut funding for (climate) science, I get the impression that scientists doing science is not what they want. I would love to see how they excuse keeping scientists like me out of the Red Team.

    It would also be interesting to see how they will keep the alarmists out. Surely Peter Wadhams would like to defend his position that the Arctic will be ice free this year or the next. Surely Guy McPherson would like to explain why we are doomed and mainstream science, aka science, understates the problem in every imaginable way. I am sure Reddit Collapse of Civilization can suggest many more people with just as much scientific credibility as the people Scott Pruitt would like to invite. I hope they will apply to the Red Team.

    That is just one question. Steven Koonin proposes in the Opinion section of the Wall Street Journal that:
    A commission would coordinate and moderate the process and then hold hearings to highlight points of agreement and disagreement, as well as steps that might resolve the latter
    Does this commission select the topics? Who are these organisers? Who selects them? What are the criteria? After decades of an unproductive blog climate "debate" we already know that there is no evidence that will convince the unreasonable. Will the commission simply write that the Red Team and the Blue Team disagreed about everything? Or will they make an assessment of whether it is reasonable for the Red Team to disagree with the evidence?

    Clearly Scott Pruitt himself would be the worst possible choice to select the commission. Then the outcome would trivially be: the two teams disagree and Commission Coal Industry declares the Red Team the winner. We already have an NIPCC report with a collection of blog "science". There is no need for a second one.

    The then right-wing government of The Netherlands ran a similar exercise: Climate Dialogue. They had a somewhat balanced commission and a few interesting debates on, for instance, climate sensitivity, the tropical hotspot, long-term persistence and Arctic sea ice. It was discontinued when it failed to find incriminating evidence, just like the funding for BEST stopped, confirming the general theme of the USA climate "debate": scientists judge studies based on their quality, mitigation "sceptics" based on the outcome.

    A somewhat similar initiative in the US was the Climate Change National Forum, where a journalist determined the debating topics by selecting newspaper articles. The homepage is still there, but no longer current. Maybe Pruitt has a few bucks.


    "This is yet another example of politicians engaging in unhelpful meddling in things they know nothing about."
    Ken Caldeira


    How will Pruitt justify not asking the National Academy of Sciences (NAS), whose job this kind of assessment is, to organise the exercise? Surely the donors of Pruitt will not find the NAS acceptable; they already did an assessment and naturally found the answer that does not fit their economic interests. (Just like the findings on climate change of every other scientific organisation from all over the world do not fit their corruption-fuelled profits.)

    I guess they will also not ask the Science Division of the White House.

    Climate scientist Ken Caldeira called on Scott Pruitt to clarify the hypothesis he wants to test. Given the Trumpian overconfidence, the continual Trumpian own-goals, the Trumpian China-hoax extremism, the Trumpian incompetence and Trump's irrational donors wanting to go after the endangerment finding, I would not be surprised if they go after the question whether the greenhouse effect exists, whether CO2 is a greenhouse gas or whether the world is warming. Pruitt said he wanted a "discussion about CO2 [carbon dioxide]."

    That would be a party. There are many real and difficult questions and sources of uncertainties in climate science (regional changes, changes in extremes, the role of clouds, impacts etc.), but these stupid greenhouse-CO2-warming questions that dominate the low-rated US public "debate" are not among them.

    The mitigation sceptical groups are not even able to agree among themselves which of these three stupid questions is the actual problem. I would thus suggest that the climate "sceptics" use their new "scientific method" themselves first, to turn their chaotic mess of incompatible claims into something coherent.


    Red Team PR exercise

    Donald Trump has already helped climate action in America enormously by cancelling the voluntary Paris climate agreement. Climate change is slow and global. Everyone hopes someone else will solve it some time and attends to more urgent personal problems. When the climate hoaxer president cancelled the Paris agreement, the situation became more dangerous and Americans started paying attention. This surge is seen above in the Google searches for climate change in the USA. The surge was also noticeable on Reddit, where there was a huge demand for reliable information on climate science and climate action.

    The Red Team exercise would give undue weight to a small group of fringe scientists. This is a general problem in America, where many Americans have the impression that extremist positions are still under debate because the fossil fuel industries bought many politicians who in turn say stupid things on cable TV and in opinion sections. These industries also place many ads and in return corporate media are happy to put "experts" on TV who represent their positions. The reality is that 97% of scientists and scientific studies agree that climate change is real and caused by us.

    On the optimistic side, just like cancelling Paris made Americans discover that Washington is completely isolated on the world stage in its denial that climate change is a risk, the Red Team exercise could also lead to more Americans learning how broad the support for climate science is within the scientific community and how strong the evidence is.



    If the rules of the exercise are clearly unfair, scientists will easily be able to explain why they do not join and ask Pruitt why he thinks he needs such unfair rules. While scientists are generally trusted, the opposite is true for Washington and the big corporations behind Pruitt.

    The political donors have set up a deception industry with politicians willing to lie for them, media dedicated to spreading misinformation or at least willing to let their politicians deceive the public, "think tanks", their own fake version of the IPCC report and a stable of terrible blogs. These usual suspects writing another piece of misinformation for the EPA will hardly add to the load.

    The trickiest thing could be to make clear to the public that science is not resolved in debates. The EPA official E&E News talked to was thinking of a "back-and-forth critique" by government-recruited experts. In science, that back and forth is done on paper, to make sure it is clearly formulated, with time to check the claims, read the cited articles and crunch the data. If it is just talk, it is easy to make false claims, which cannot be fact-checked on the spot. Unfortunately, history has shown that the Red Team will likely be willing to make false claims in public.

    If the rules of the exercise are somewhat fair, science will win big time; we have the evidence on our side. At this time, when America is paying attention to climate change, that could be a really good advertisement for science and the strength of the evidence that climate change is a huge risk that cannot be ignored.

    Concluding, I am optimistic. Either they make the rules unfair, which seems likely, as they will probably try to turn this exercise into political theatre. Then we can ask them in public why they need such unfair rules. Don't they have confidence in their position that climate change is a hoax?

    If they make the rules somewhat fair, science will win big time. Science will win so much, you will be tired of all the winning, you will be begging, please mister scientist no more winning, I cannot take it any more.

    Let me close with John Oliver on coal. Oliver was sued over this informative and funny piece by coal baron Robert Murray, who also stands behind Scott Pruitt and Trump.



    Related reading

    Red/Blue & Peer Review by the presidents of the American Geophysical Union (AGU) & the National Academy of Sciences: "Is this a one-off proposal targeting only climate science, or will it be applied to the scientific community’s research on vaccine safety, nuclear waste storage, or any of a number of important policies that should be informed by science?"

    Are debatable scientific questions debatable?

    Why doesn't Big Oil fund alternative climate research?

    My previous post on the Red Cheeks Team.

    Great piece by climate scientist Ken Caldeira: Red team, blue team.

    Josh Voorhees in Slate: EPA Chief Scott Pruitt Wants to Enlist a “Red Team” to Sow Doubts About Climate Change.

    Andrew Freedman in Mashable: EPA to actually hold 'red-team' climate debates, and scientists are livid.

    Ars Technica: Playing fossil’s advocate — EPA intends to form “red team” to debate climate science. Agency head reported to desire “back-and-forth critique” of published research by Scott K. Johnson.

    The pro-climate libertarian Niskanen Center: Can a Red Team Exercise Exorcise the Climate Debate? May I summarise this optimistic post as: if this new "Red Team" scientific method turns out to be the normal scientific method it would be useful.

    Talking Points Memo: Pruitt Is Reportedly Starting An EPA Initiative To Challenge Climate Science.

    Audubon's letter to Scott Pruitt: "The oil and gas industry manufactures a debate to avoid legal responsibility for their pollution and to eke out a few more years of profit and power."

    Rebecca Leber in Mother Jones (May 2017): Leading Global Warming Deniers Just Told Us What They Want Trump to Do.

    Scott Pruitt will likely not ask a court of law. Then they would lose again.

    The Red Team method would still be a better scientific method than the authoritarian Soviet method proposed by a comment on a large mitigation sceptical blog, WUWT: Does anyone know if the [American Meteorological Society] gets any federal funding like the National Academy of Science does? ... People sometimes can change their tune when their health of their pocketbook is at stake. Do you really want to get your science from authoritarians abusing the power of the state to determine the truth?

    Our wise and climate-cynical bunny thinks the Red Team exercise is a Team B exercise, which is the kind of exercise a Red Team should prevent.

    Brad Plumer and Coral Davenport in the New York Times: E.P.A. to Give Dissenters a Voice on Climate, No Matter the Consensus.

    Steven Koonin in the Opinion section of Rupert Murdoch's Wall Street Journal (April 2017): A ‘Red Team’ Exercise Would Strengthen Climate Science. (pay-walled)

    Kelly Levin of the World Resources Institute: Pruitt’s “Red Team-Blue Team” Exercise a Bad Fit for EPA Climate Science.

    Statement by Ken Kimmell, President, Union of Concerned Scientists: EPA to Launch Program Critiquing Climate Science


    * Photo at the top of Scott Pruitt at CPAC 2017 by Gage Skidmore under a Creative Commons Attribution-ShareAlike 2.0 Generic (CC BY-SA 2.0) license.