
Tuesday, 21 November 2017

The fight for the future of science in Berlin



A group of scientists, scholars, data scientists, publishers and librarians gathered in Berlin to talk about the future of research communication. With the scientific literature being so central to science, one could also say the conference was about the future of science.

This future will be more open, transparent, findable, accessible, interoperable and reusable.


The open world of research from Mark Hooper on Vimeo.

Open and transparent sound nice, and most seem to assume that more is better. But openness can also be oppressive: it helps the powerful, who have the resources to mine the information efficiently.

This is best known when it comes to government surveillance, which can be dangerous; states are powerful and responsible for the biggest atrocities in history. The right to vote in secret, the right to privacy, the right to organize, and protections against unreasonable searches are fundamental safeguards against the abuse of power.

Powerful lobbies and political activists abuse transparency laws to harass inconvenient science.

ResearchGate, Google Scholar profiles and your ORCID page contribute to squeezing scientists like lemons by prominently displaying the number of publications and citations. This continual pressure can lead to burnout, less creativity and less risk taking. It encourages scientists to pick low-hanging fruit rather than do the studies they think would bring science forward the most. Besides this bad influence on publications, many other activities that are just as important for science suffer from this pressure. Many well-meaning people try to solve this by also quantifying those activities, but in doing so they add more lemon presses.


That technology brings more surveillance and detrimental micro-management is not unique to science. The destruction of autonomy is a social trend that, for example, also affects truckers.

Science is a creative profession (even if many scientists do not seem to realise this). You have good ideas when you relax in the shower, lie in bed with a fever or go on a hike. The modern publish-or-perish system is detrimental to this kind of cognitive work: work that requires cognitive skills is performed worse when people are pressured; it needs autonomy, mastery and purpose.

Scientists work on the edge of what is known and invariably make mistakes; if you are not making mistakes, you are not pushing your limits. This needs some privacy because, unfortunately, making mistakes is not socially acceptable for adults.



Chinese calligraphy with water on a stone floor. More ephemeral communication can lead to more openness, improve the exchange of views and produce more quality feedback.
Later in the process, the ephemeral nature of a scientific talk requires deep concentration from the listener and is a loss for people who are not present, but early in a study that same ephemerality is a feature. Without the freedom to make mistakes there will be less exciting research and slower progress. Scientists are also human, and once an idea is fixed on "paper" it becomes harder to change, while the flexibility to update your ideas to the evidence is important and especially needed in the early stages.

These technologies also have real benefits; for example, they make it easier to find related articles by the same author. A unique researcher identifier like ORCID especially helps when someone changes their name, or in countries like China, where a billion people seem to share about a thousand unique names. But there is no need for ResearchGate to put the number of publications and citations in huge numbers on the main profile page. (The prominent number of followers on Twitter profile pages also makes them less sympathetic in my view and needlessly promotes competition and inequality. Twitter is not my work; artificial competition is even more out of place there.)

Open review is a great option if you are confident about your work but fear that reviewers will be biased. Sometimes, however, it is hard to judge how good your work is, and it is nice to have someone discreetly point to problems with your manuscript. Especially in interdisciplinary work it is easy to miss something a peer reviewer would notice, while your network may not include someone from the other discipline whom you can ask to read the manuscript.

Once an article, code or a dataset is published, it is fair game. That is the point where I support Open Science. For example, publishing Open Access is better than publishing behind a pay-wall. If there is a reasonable chance of re-use, publishing data and code helps science progress and should be rewarded.

Still, I would not make a fetish out of it. I made the data available for my article on benchmarking homogenisation algorithms. This is an ISI highly-cited article, but I only know of one person having used the data. For less important papers, publishing data can quickly become additional work without any benefit. I prefer nudging people towards Open Science over making it obligatory.

The main beneficiary of publishing data and code is your future self: no one is more likely to continue your work. This should be an important incentive. Another incentive are Open Science "badges": icons presented next to the article title indicating whether the study was preregistered and provides open data and open materials (code). The introduction of these badges in the journal Psychological Science quickly increased the percentage of articles with available data to almost 40%.

The conference was organised by FORCE11, a community interested in future research communication and e-scholarship. There are already a lot of tools for the open, findable and well-connected world of the future, but their adoption could go faster. So the theme of this year's conference was "changing the culture".

Open Access


Christopher Jackson; on the right. (I hope I am allowed to repeat his joke.)
A main address was by Christopher Jackson. He has published over 150 scientific articles, but only became aware of how weird the scientific publishing system is when he joined ResearchGate, a social network for scientists, and was not allowed to put many of his articles on it because the publishers hold the copyright and do not allow this.

The frequent requests for copies of his articles on ResearchGate also created an awareness of how many scientists have trouble accessing the scientific literature due to pay-walls.

Another keynote speaker, Diego Gómez, was threatened with up to eight years in jail for making scientific articles accessible. His university, the Universidad del Quindío in Colombia, spends more on licenses for scientific journals ($375,000) than on producing scientific knowledge itself ($253,000).



The lack of access to the scientific literature makes research in poorer countries a lot harder. But even I am regularly unable to download important articles and have to ask the authors for a copy or ask our library to order a photocopy elsewhere, although the University of Bonn is not a particularly poor university.

Non-scientists may also benefit from being able to read scientific articles, although when it is important I would prefer to consult an expert over mistakenly thinking I got the gist of an article in another field. Sometimes a copy of the original manuscript can be found on one of the authors' homepages or in a repository. Google (Scholar) and the really handy browser add-on Unpaywall can help find those using the open-access DOI database.
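For the technically inclined, the database behind Unpaywall can also be queried directly. A minimal sketch, assuming the Unpaywall REST API at api.unpaywall.org with its email parameter and "best_oa_location" field as documented at the time of writing (check the current documentation before relying on this; the DOI below is a placeholder):

```python
import requests

def find_open_copy(doi, email="you@example.org"):
    """Ask the Unpaywall API whether a free, legal copy of a DOI is known."""
    url = f"https://api.unpaywall.org/v2/{doi}"
    response = requests.get(url, params={"email": email}, timeout=10)
    response.raise_for_status()
    best = response.json().get("best_oa_location")  # None if no open copy is known
    return best["url"] if best else None

# Placeholder DOI for illustration only; substitute a real one.
print(find_open_copy("10.1234/example.doi"))
```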

Sharing passwords and Sci-Hub are also solutions, but illegal ones. The real solutions to making research more accessible are Open Access publishing and repositories for manuscripts. By now about half of recently published articles are Open Access, and at this pace all articles would be Open Access by 2040. Interestingly, the largest fraction of the publicly available articles does not have an Open Access license; this is also called bronze Open Access. It means that the download possibility could be revoked again.
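A back-of-the-envelope illustration of that pace, assuming the open-access share simply keeps growing linearly (the real growth curve is of course more complicated):

\[
\frac{100\% - 50\%}{2040 - 2017} \approx 2.2\ \text{percentage points per year.}
\]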

The US National Institutes of Health and the European Union mandate that the research they support be published Open Access.

A problem with Open Access journals can be that some are only interested in the publication fees and do not care about the quality. These predatory journals are bad for the reputation of real Open Access journals, especially in the eyes of the public.

I have a hard time believing that the authors do not know that these journals are predatory. In addition to the sting operations that reveal that certain journals will publish anything, it would be nice to also have sting predatory journals that openly email authors that they will accept any trash, and see whether that scares the authors away.

Jeffrey Beall used to keep a list of predatory journals, but had to stop after legal pressure from these frauds. The publishing firm Cabell has now launched its own proprietary (pay-walled) blacklist, which already lists 6,000 journals and is growing fast.

Preprint repositories

Before a manuscript is submitted to a journal, the authors naturally still hold the copyright. They can thus upload the manuscript to a database, a so-called preprint or institutional repository. Unfortunately, some publishers say this constitutes publishing the manuscript and refuse to consider it because it is no longer new. However, most publishers accept the publication of the manuscript as it was before submission. A smaller number are also okay with the final version being published on the authors' homepages or in repositories.

Where a good option for an Open Access journal exists we should really try to use it. Where it is allowed, we should upload our manuscripts to repositories.

Good news for the readers of this blog: a repository for the Earth sciences, EarthArXiv, opened last week. The AGU will also demonstrate its preprint repository at this year's AGU Fall Meeting. For details see my previous post. EarthArXiv already has 15 climate-related preprints.

This November a new OSF-based archive also started: MarXiv, not for Marxists, but for the marine-conservation and marine-climate sciences.

When we combine the repositories with peer review organised by the scientific community itself, we will no longer need pay-walling scientific publishers. This can be done in a much more informative way than at present, where the reader only knows that the paper was apparently good enough for the journal, but not why it is a good article, nor how it fits into the (later published) literature. With Grassroots scientific publishing we can do a much better job.

One way the reviews at a Grassroots journal could be better is by openly assessing the quality of the work. Now all we know is that the study was sufficiently interesting for some journal at that time, for whatever reason. What I did not realise before Berlin is that the current system also wastes a lot of reviewing time: traditional journals spend resources on manuscripts that are valid but are rejected because they are seen as not important enough for the journal. For example, Frontiers reviews 2.4 million manuscripts and has to bounce about 1 million valid papers.

On average, scientists pay $5,000 per published article, even though scientists do most of the work for free (writing, reviewing, editing) and the actual costs are a few hundred dollars. The money saved could be used for research. In the light of these numbers it is actually amazing that Elsevier only makes a profit of 35 to 50%. I guess their CEO's salary eats into the profits.

Preprints also have the advantage of making studies available faster. Open Access makes text and data mining easier, which helps in finding all articles on molecule M or receptor R. The first publishers are using text mining and artificial intelligence to suggest suitable peer reviewers to their editors. (I would prefer editors who know their field.) It would also help in detecting plagiarism and even statistical errors.

    (Before our machine overlords find out, let me admit that I did not always write the model description of the weather prediction model I used from scratch.)



    Impact factors

Another issue Christopher Jackson highlighted is the madness of the Journal Impact Factor (JIF or IF). It measures how often an average article in a journal is cited in the first two or five years after publication. Impact factors are quite useful for librarians to get an overview of which journals to subscribe to. The problem begins when the impact factor is used to determine the quality of a journal or of the articles in it.
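For the two-year variant the definition boils down to a simple ratio; as an illustration for the 2017 edition (the exact counting rules for "citable items" are set by the database that computes it):

\[
\mathrm{JIF}_{2017} = \frac{\text{citations received in 2017 by items published in 2015 and 2016}}{\text{number of citable items published in 2015 and 2016}}
\]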

How common this is, is actually something I do not know. For my own field I think I have a reasonable feeling for the quality of the journals, which is independent of the impact factor. More focussed journals tend to have smaller impact factors, but that does not mean they are less good. Boundary-Layer Meteorology is certainly not worse than the Journal of Geophysical Research; the former has an Impact Factor of 2.573, the latter of 3.454. If you made a boundary-layer study, it would be madness to publish it in a more general geophysical journal, where the chance is smaller that relevant colleagues will read it. Climate journals will have higher impact factors than meteorological journals because meteorologists mainly cite each other, while many sciences build on climatology. When the German meteorological journal MetZet was still a pay-walled journal it had a low impact factor, because not many people outside Germany had a subscription, but the quality of the peer review and of the articles was excellent.

I would hope that reviewers making funding and hiring decisions know the journals in their field, take these kinds of effects into account and read the articles themselves. The San Francisco Declaration on Research Assessment (DORA) rejects the use of the impact factor. In Germany it is officially forbidden to judge individual scientists and small groups based on bibliographic measures such as the number of articles times the impact factor of the journals, although I am not sure everybody knows this. Imperial College recently adopted similar rules:
    “the College should be leading by example by signalling that it assesses research on the basis of inherent quality rather than by where it is published”
    “eliminate undue reliance on the use of journal-based metrics, such as JIFs, in funding, appointment, and promotion considerations”
    The relationship between the number of citations an article can expect and the impact factor is weak because there is enormous spread. Jackson showed this figure.



This could well be a feature and not a bug. We would like to measure quality, not estimate the (future) number of citations of an article. For my own articles, I do not see much correlation between my subjective quality assessment and the number of citations. Which journal you can get into may well be a better quality measure than individual citation counts. (The best assessment is reading the articles.)

The biggest problem arises when journals, often commercial entities, start optimising for the number of citations rather than for quality. There are many ways to get more citations, and thus a higher impact factor, other than providing the best possible quality control. An article that reviews the state of a scientific field typically gets a lot of citations, especially if it is written by the main people in the field; nearly every article will mention it in the introduction. Review papers are useful, but we do not need a new one every year. Articles with many authors typically get more citations. Articles on topics many scientists work on get more citations. For Science and Nature it is important to get coverage in the mainstream press, which is also read by scientists and leads to more citations.

Reading articles is naturally work; I would suggest reducing the number of such reviews instead.

    Attribution, credit

Traditionally one gets credit for scientific work by being an author of a scientific paper. However, with increased collaboration and interdisciplinary work, author lists have become longer and longer. The publish-or-perish system likely contributed as well: outsourcing part of the work is often more efficient than doing it yourself, while the person doing a small part of the analysis is happy to have another paper on their publish-or-perish list.

What is missing from such a system is credit for a multitude of other important tasks. How does one value non-traditional outputs supplied by researchers: code, software, data, design, standards, models, MOOC lectures, newspaper articles, blog posts, community-engaged research and citizen science? Someone even mentioned musicals.

A related question is who should be credited: technicians, proposal writers, data providers? As far as I know it would be illegal to put people in such roles on the author list, but they do work that is important, needs to be done and thus needs to be credited somehow. A work-around is to invite them to help edit the manuscript, but it would be good to have systems in which various roles are credited. Designing such a system is hard.

One is tempted to make such a credit system very precise, but ambiguity also has its advantages in dealing with the messiness of reality. I once started a study with one colleague. Most of this study did not work out and the final article covered only a part of it. A second colleague helped with that part. For the total work the first colleague had done more; for the part that was published, the second one had. Both justifiably felt they should be second author. Do you get credit for the work or for the article?

Later, the colleague who had become third author of this paper wrote another study in which I helped. It was clear that I should have been the second author, but in retaliation he made me the third author. The second author wrote several emails that this was insane, not knowing what was going on, but to no avail. A too precise credit system would leave no room for such retaliation tactics, which clear the air for future collaborations.

In one session various systems of credit "badges" were shown and tried out. What seemed to work best was a short description of the work done by every author, similar to the detailed credits at the end of a movie.

This year a colleague wrote on a blog that he did not agree with a sentence of an article he was an author of. I did not know that was possible; in my view authors are responsible for the entire article. Maybe we should split the author list into authors who vouch with their name and reputation for the quality of the full article and honorary authors who only contributed a small part. This colleague could then be an honorary author.

LinkedIn endorsements were criticised because they are not transparent and because they make it harder to change your focus: the old endorsements and contacts stick.

    Pre-registration

    Some fields of study have trouble replicating published results. These are mostly empirical fields where single studies — to a large part — stand on their own and are not woven together by a net of theories.

One of the problems is that only interesting findings are published, and if no effect is found the study is aborted. In a field with strong theoretical expectations, finding no effect when one is expected is also interesting; but if no one expected a relationship between A and B, finding no relationship between A and B is not interesting.

This becomes a problem when there is no relationship between A and B, but multiple experiments or trials are made and some find a fluke relationship by chance. If only those get published, it gives a wrong impression. This problem can be tackled by registering trials before they are carried out, which is becoming more common in medicine.
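To illustrate the size of the problem with a standard textbook calculation (assuming independent trials and a 5% significance threshold): if there is no real effect, the chance that at least one of \(k\) trials comes out "significant" is

\[
P(\text{at least one false positive}) = 1 - (1 - 0.05)^k ,
\]

which is already about 40% for \(k = 10\) trials.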

A related problem is p-hacking and hypothesis generation after the results are known (HARKing). If a relationship would be statistically significant were it not for one outlier, it becomes tempting to find a reason why that outlier is a measurement error and should be removed.

Similarly, the data can be analysed in many different ways to study the same question, one of which may be statistically significant by chance. This is also called "researcher degrees of freedom" or "the garden of forking paths". The Center for Open Science has made a tool with which you can pre-register your analysis before the data is gathered or analysed, to reduce the freedom to falsely obtain significant results this way.
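A small simulation sketch of this effect (my own toy example, not something shown at the conference): data with no real effect is analysed along several "paths", here simply different outlier-removal thresholds, and the best-looking p-value is kept. The nominal 5% false-positive rate is then inflated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def min_p_over_paths(n=30, thresholds=(np.inf, 3.0, 2.5, 2.0)):
    """Two groups with no true difference; try several outlier cut-offs
    (in standard deviations) and keep the smallest p-value."""
    a, b = rng.normal(size=n), rng.normal(size=n)
    p_values = []
    for t in thresholds:
        a_t = a[np.abs(a - a.mean()) <= t * a.std()]
        b_t = b[np.abs(b - b.mean()) <= t * b.std()]
        p_values.append(stats.ttest_ind(a_t, b_t, equal_var=False).pvalue)
    return min(p_values)

experiments = 10_000
false_positives = sum(min_p_over_paths() < 0.05 for _ in range(experiments))
print(f"Nominal rate: 5%, observed rate: {100 * false_positives / experiments:.1f}%")
```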



A beautiful example of the different answers one can get when analysing the same data to answer the same question. I found this graph via a FiveThirtyEight article, which is also otherwise highly recommended: "Science Isn’t Broken. It’s just a hell of a lot harder than we give it credit for."

These kinds of problems may be less severe in the natural sciences, but avoiding them can still make the science more solid. Before Berlin I was hesitant about pre-registering analyses, because in my work every analysis is different, which makes it harder to know in detail in advance how the analysis should go; there are also valid outliers that need to be removed, selecting the best study region requires a look at the data, and so on.

However, what I did not realise, although it is quite trivial, is that you can do the pre-registered analysis and also additional analyses, and simply mark them as such. So if you can do a better analysis after looking at the data, you can still do so. One of the problems of pre-registration is that quite often people do not do the analysis in the way they registered it, and reviewers mostly do not check this.

    In the homogenisation benchmarking study of the ISTI we will describe the assessment measures in advance. This is mostly because the benchmarking participants have a right to know how their homogenisation algorithms will be judged, but it can also be seen as pre-registration of the analysis.

To stimulate the adoption of pre-registration, the Center for Open Science has designed Open Science badges, which can be displayed with articles meeting the criteria. The pre-registration has to be done at an external site where the text cannot be changed afterwards. The pre-registration can be kept undisclosed for up to two years. To get things started, they even award 1,000 prizes of $1,000 each for pre-registered studies.

    The next step would be journals that review "registered reports", which are peer reviewed before the results are in. This should stimulate the publication of negative (no effect found) results. (There is still a final review when the results are in.)

    Quick hits

    Those were the main things I learned, now some quick hits.

With web annotation systems you can add comments to any web page or PDF file. People may know annotation from Hypothes.is, which is used by Climate Feedback to add comments to press articles on climate change. A similar initiative is PaperHive. PaperHive sells its system as collaborative reading and showed an example of students jointly reading a paper for a class, annotating difficult terms and passages. It additionally provides channels for private collaboration, literature management and search, and has already been used for the peer review (proof reading) of academic books. Both now have groups/channels that allow groups to make or read annotations, as well as private annotations, which can be used for your own paper archive. Web annotations aimed at the humanities are made by Pund.it.
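As a sketch of how such annotations can be read programmatically, here is a query against the public Hypothes.is search API (endpoint and field names as I understand them from their documentation; treat this as an assumption and check the current API reference, and note the example URL is a placeholder):

```python
import requests

def annotations_for(url):
    """Yield public Hypothes.is annotations that target a given web page."""
    api = "https://api.hypothes.is/api/search"
    response = requests.get(api, params={"uri": url, "limit": 20}, timeout=10)
    response.raise_for_status()
    for row in response.json().get("rows", []):
        yield row["user"], row.get("text", "")

# Placeholder page; any public URL with annotations would do.
for user, text in annotations_for("https://example.org/some-article"):
    print(user, ":", text[:80])
```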

Since February this year, web annotation is a World Wide Web Consortium (W3C) standard. This will hopefully mean that web browsers will start including annotation in their default configuration and it will become possible to comment on every web page. That will likely lead to public annotation streams going down to the level of YouTube comments, so for the public channel some moderation will be needed, for example to combat doxing. PaperHive is a German organisation and thus removes hate speech.

Peer Community In (PCI) is a system to collaboratively peer review manuscripts, which can later be sent to an official journal.

    The project OpenUp studied a large number of Open Peer Review systems and their pros and cons.

    Do It Yourself Science. Not sure it is science, but great when people are having fun with science. When the quality level is right, you could say it is citizen science led by the citizens themselves. (What happened to the gentlemen scientists?)

    Philica: Instant academic publishing with transparent peer-review.



    Unlocking references from the literature: The Initiative for Open Citations. See also their conference abstract.

I never realised there was an organisation behind the Digital Object Identifiers (DOIs) for scientific articles: CrossRef. It is a collaboration of about eight thousand scientific publishers. For other digital sources there are other organisations, while the overall system is run by the International DOI Foundation. DOIs for data are handled, amongst others, by DataCite. CrossRef is working on a system where you can also see the web pages that cite scientific articles, what they call "event data". For example, this blog has cited 142 articles with a DOI. CrossRef will also take web annotations into account.
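The metadata CrossRef holds for a DOI can be fetched from their public REST API. A minimal sketch (the api.crossref.org endpoint and field names are as I remember the documentation, so verify before use; the DOI is a placeholder):

```python
import requests

def crossref_metadata(doi):
    """Fetch basic CrossRef metadata for a DOI."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    response.raise_for_status()
    message = response.json()["message"]
    return {
        "title": message.get("title", ["?"])[0],
        "journal": message.get("container-title", ["?"])[0],
        "cited_by": message.get("is-referenced-by-count", 0),
    }

# Placeholder DOI for illustration only; substitute a real one.
print(crossref_metadata("10.1234/example.doi"))
```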

    Climate science was well represented at this conference. There were posters on open data for the Southern Ocean and on the data citation of the CMIP6 climate model ensemble. Shelley Stall of AGU talked about making FAIR and Open data the default for Earth and space science. (Et moi.)



    In the Life Sciences they are trying to establish "micro publications", the publication of a small result or dataset, several of which can then later be combined with a narrative into a full article.

    A new Open Science Journal: Research Ideas and Outcomes (RIO), which publishes all outputs along the research cycle, from research ideas, proposals, to data, software and articles. They are interested in all areas of science, technology, humanities and the social sciences.

Collaborative writing tools are coming of age, for example Overleaf for people using LaTeX. Google Docs and Microsoft Word Live also do the trick.

Ironically, Elsevier was one of the sponsors. Their brochure suggests they are one of the nice guys, serving humanity with cutting-edge technology.

    The Web of Knowledge/Science (a more selective version of Google Scholar) moved from Thomson Reuters to Clarivate Analytics, together with the Journal Citation Reports that computes the Journal Impact Factors.

    Publons has set up a system where researchers can get public credit for their (anonymous) peer reviews. It is hoped that this stimulates scientists to do more reviews.

As part of Wikimedia, best known for Wikipedia, people are building up a multilingual database of facts: Wikidata. As in Wikipedia, volunteers build up the database and sources need to be cited to make sure the facts are right. People are still working on software to make contributing easier for those who are not data scientists and do not dream of the semantic web every night.
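As an illustration of how such a fact database can be read by a program, a sketch using the public Wikidata API (the wbgetentities action and the response layout are my assumptions based on the MediaWiki documentation; Q2 is the Wikidata item for Earth):

```python
import requests

def wikidata_label(item_id, lang="en"):
    """Fetch the label of a Wikidata item, e.g. Q2 ('Earth')."""
    api = "https://www.wikidata.org/w/api.php"
    params = {"action": "wbgetentities", "ids": item_id,
              "props": "labels", "languages": lang, "format": "json"}
    response = requests.get(api, params=params, timeout=10)
    response.raise_for_status()
    entity = response.json()["entities"][item_id]
    return entity["labels"][lang]["value"]

print(wikidata_label("Q2"))  # expected output: Earth
```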

    Final thoughts

For a conference about science, there was relatively little science. One could have run a randomized controlled trial to see the influence of publishing your manuscript on a preprint server. Instead, the estimate that articles also submitted to ArXiv gather more citations (18%) was based on observational data, and the difference could simply be that scientists put more work into spreading their best articles.

The data manager at CERN argued that close collaboration with the scientists can help in designing interfaces that promote the use of Open Science tools. Sometimes small changes produce large increases in adoption. More research into the needs of scientists could also help make the tools more useful.

    Related reading, resources

    The easiest access to the talks of the FORCE2017 conference is via the "collaborative note taking" Google Doc

    Videos of last year's FORCE conference

    Peer review

    The Times Literary Supplement: Peer review: The end of an error? by ArXiving mathematician Timothy Gowers

    Peer review at the crossroads: overview over the various open review options, advantages and acceptance

    Jon Tennant and many colleagues: A multi-disciplinary perspective on emergent and future innovations in peer review

    My new idea: Grassroots scientific publishing

    Pre-prints

    The Earth sciences no longer need the publishers for publishing
     
ArXivist. A machine-learning app that suggests the most relevant new ArXiv manuscripts in a daily email

The Stars Are Aligning for Preprints. 2017 may be considered the 'year of the preprint'

    Open Science


    The State of OA: A large-scale analysis of the prevalence and impact of Open Access articles

    Open Science MOOC (under development) already has an extensive resources page

    Metadata2020: Help us improve the quality of metadata for research. They are interested in metadata important for discoverability and reuse of data

‘Kudos’ promises to help scientists promote their papers to new audiences. For example with plain-language summaries and tools to measure which dissemination actions were effective

John P. A. Ioannidis and colleagues: Bibliometrics: Is your most cited work your best? Survey finds that highly cited authors feel their best work is among their most cited articles. It is the same for me, although looking at all my articles the correlation is not strong

    Lorraine Hwang and colleagues in Earth and Space Science: Software and the scientist: Coding and citation practices in geodynamics, 2017

    Neuroskeptic: Is Reproducibility Really Central to Science?

