My apologies. One last thing about these damn emails now that Julian Assange has been indicted and may be extradited to the United States after being locked up for seven years in Ecuador's London embassy for the crime of informing the public, as leader of WikiLeaks, of abuses of power by the elite.
Assange is indicted for trying (and failing) to help his source Chelsea Manning crack a password. Helping a source is what a journalist does, and I feel that the relationship between a journalist and a source should be protected as part of the freedom of the press; this is unfortunately not the case in America, while it is, for instance, codified into law in Sweden. Journalists need sources to do their work. Just as your democratic right to vote should not be reduced to a theoretical right by making it hard to vote.
It is possible to have an honest discussion on whether WikiLeaks is part of the press or whether it is a source. It has aspects of both. Nowadays major media organizations provide anonymous contact facilities similar to those of WikiLeaks. It should not matter that WikiLeaks is independent and works with multiple sources and multiple media organizations.
There is one aspect where WikiLeaks is different from the press, however. Those damn emails. WikiLeaks published every single one of them in a big data dump. A journalist would only have cited those emails which are of public interest. Some emails definitely were newsworthy, such as the Democratic Party helping Hillary Clinton in the 2016 primaries when it was supposed to be neutral. But there was no need to publish all of them.
[UPDATE, The Intercept after a data leak in Brazil:
"When making these judgements, we employ the standard used by journalists in democracies around the world: namely, that material revealing wrongdoing or deceit by powerful actors should be reported, but information that is purely private in nature and whose disclosure may infringe upon legitimate privacy interests or other social values should be withheld." The Intercept]
Publishing all emails is an unacceptable violation of the right to privacy and the right to organize. In this case, they contained personal information of donors, including home addresses and Social Security numbers. Even if this information had been professionally edited out, the emails would still have done enormous collateral damage to Hillary Clinton's network.
"Whistleblowers face prosecution under the Espionage Act if they leak information of public interest to the press, while there is still no federal “shield law” guaranteeing reporters’ right to protect their sources. Journalists and their devices continue to be searched at the US border" Reporters Without Borders
Blinded by their hatred for Clinton, some seem not to see that damage. We only have the rule of law when the rules are universal, so one should try to remove any biases due to a specific case. A trick that helps me think more objectively is to switch the subjects. When the USA is killing people all over the world with drones, it helps to consider how one would feel about that if Russia, Iran or North Korea did it. (Pick your favorite enemy country.) When the media tells you it is terrible what country or person X did, see how you feel if your country or your mom did the same. This may be the best trick I have ever learned. Tribalists may not like it.
For some, climate scientists may be more sympathetic than Hillary Clinton. The first big political email dump was a collection of [[emails stolen from the servers of the Climatic Research Unit (CRU)]] in the United Kingdom, shortly before the international climate negotiations in Copenhagen.
Cherry-picking from these emails, bloggers and conservative media did great damage to the public view of climate science, for example by quoting an email using the term "trick", which people interpreted as something nefarious, while in science it simply means a clever way to compute something. All scientists involved were cleared by numerous investigations by several organizations, but by then the damage had been done. Had these emails been given to a journalistic organization, it would have investigated the situation and asked experts for advice before publishing and, in this case, likely would not have published anything.
There was also much damage to the professional networks of the scientists involved. Even when a message is not private, in an email you do not express yourself as you would in a public statement. You write for a specific audience. If everyone were forced to write everything for public consumption, that would slow everything down enormously. It should be possible to get feedback on ideas in private before saying something ill-considered in public.
Also, in case of conflict it is best to first try to resolve it in private rather than in public. It should be possible for me to write to a colleague that I think they behaved wrongly, or vice versa; that keeps our community a friendly place for all. It should also be possible to organize. For example, when there are problems with (sexual) harassment, women should be able to warn each other. It is harmful when such emails are published, especially when the accusations are not true. And it is harmful when such emails are not possible.
Such communications are important everywhere, whether in science, in politics or any other profession. Approving of the publication of email dumps is similar to requiring everyone to communicate via a public bulletin board. I do not want to live in such a post-privacy society. It is good that newspapers have set up their own confidential and anonymous contact mechanisms, whether they be dropboxes, encrypted emails, messengers or secure FTP. I hope that whistle-blowers will use those as their first option to uncover abuses by the powerful. Writing a good article is a service to the reader. Dumping private data is a danger to the public.
Reporters Without Borders ranks the USA 45th of 180 countries in press freedom. Just putting freedom of the press in the First Amendment of the constitution is not enough.
A group of scientists, scholars, data scientists, publishers and librarians gathered in Berlin to talk about the future of research communication. With the scientific literature being so central to science, one could also say the conference was about the future of science.
This future will be more open, transparent, findable, accessible, interoperable and reusable.
Open and transparent sound nice, and most seem to assume that more is better. But openness can also be oppressive and help the powerful, who have the resources to mine the information efficiently.
This is best known when it comes to government surveillance, which can be dangerous; states are powerful and responsible for the biggest atrocities in history. The right to vote in secret, to privacy, to organize and protections against unreasonable searches are fundamental protections against power abuse.
ResearchGate, Google Scholar profiles and your ORCID page contribute to squeezing scientists like lemons by prominently displaying the number of publications and citations. This continual pressure can lead to burnout, less creativity and less risk taking. It encourages scientists to pick low-hanging fruit rather than do the studies they think would bring science forward the most. Next to this bad influence on publications, many other activities, which are just as important for science, suffer from this pressure. Many well-meaning people try to solve this by also quantifying those activities, but in doing so they only add more lemon presses.
That technology brings more surveillance and detrimental micro-management is not unique to science. The destruction of autonomy is a social trend that, for example, also affects truckers.
Science is a creative profession (even if many scientists do not seem to realise this). You have good ideas when you relax in the shower, in bed with a fever or on a hike. The modern publish-or-perish system is detrimental to cognitive work. Work that requires cognitive skills is performed worse under pressure; it needs autonomy, mastery and purpose.
Scientists work on the edge of what is known and invariably make mistakes. If you are not making mistakes you are not pushing your limits. This needs some privacy because, unfortunately, making mistakes is not socially acceptable for adults.
Chinese calligraphy with water on a stone floor. More ephemeral communication can lead to more openness, improve the exchange of views and produce more quality feedback.
The ephemeral nature of a scientific talk requires deep concentration from the listener and is a loss for people not present, but early in a study that ephemerality is a feature. Without the freedom to make mistakes there will be less exciting research and slower progress. Scientists are also human, and once an idea is fixed on "paper" it becomes harder to change, while the flexibility to update your ideas to the evidence is important, especially in the early stages.
These technologies also have real benefits; for example, they make it easier to find related articles by the same author. A unique researcher identifier like ORCID especially helps when someone changes their name, or in countries like China where a billion people seem to share about a thousand unique names. But there is no need for ResearchGate to put the number of publications and citations in huge numbers on the main profile page. (The prominent number of followers on Twitter profile pages also makes it less sympathetic in my view and needlessly promotes competition and inequality. Twitter is not my work; artificial competition is even more out of place there.)
Open review is a great option if you are confident about your work but fear that reviewers will be biased. Sometimes, however, it is hard to judge how good your work is, and it is nice to have someone discreetly point out problems with your manuscript. Especially in interdisciplinary work it is easy to miss something a peer reviewer would notice, while your network may not include someone from the other discipline you could ask to read the manuscript.
Once an article, code or dataset is published, it is fair game. That is the point where I support Open Science. For example, publishing Open Access is better than pay-walled. If there is a reasonable chance of re-use, publishing data and code helps science progress and should be rewarded.
Still, I would not make a fetish of it. I made the data available for my article on benchmarking homogenisation algorithms. This is an ISI highly-cited article, but I know of only one person having used the data. For less important papers, publishing data can quickly become additional work without any benefit. I prefer nudging people towards Open Science over making it obligatory.
The main beneficiary of publishing data and code is your future self; no one is more likely to continue your work. This should be an important incentive. Another incentive is Open Science "badges": icons presented next to the article title indicating whether the study was preregistered and provides open data and open materials (code). The introduction of these badges in the journal "Psychological Science" quickly increased the percentage of articles with available data to almost 40%.
The conference was organised by FORCE11, a community interested in future research communication and e-scholarship. There are already a lot of tools for the open, findable and well-connected world of the future, but their adoption could go faster. So the theme of this year's conference was "changing the culture".
Open Access
Christopher Jackson; on the right. (I hope I am allowed to repeat his joke.)
A main address was by Christopher Jackson. He has published over 150 scientific articles, but only became aware of how weird the scientific publishing system is when he joined ResearchGate, a social network for scientists, and was not allowed to post many of his articles on it because the publishers hold the copyright and do not allow this.
The frequent requests for copies of his articles on ResearchGate also made him aware of how many scientists have trouble accessing the scientific literature due to pay-walls.
Another keynote speaker, Diego Gómez, was threatened with up to eight years in jail for making scientific articles accessible. His university, the Universidad del Quindío in Colombia, spends more on licenses for scientific journals ($375,000) than on producing scientific knowledge itself ($253,000).
The lack of access to the scientific literature makes research in poorer countries a lot harder, but even I am regularly unable to download important articles and have to ask the authors for a copy or ask our library to order a photocopy from elsewhere, although the University of Bonn is not a particularly poor university.
Non-scientists, too, may benefit from being able to read scientific articles, although when it is important I would prefer to consult an expert over mistakenly thinking I got the gist of an article in another field. Sometimes a copy of the original manuscript can be found on one of the authors' homepages or in a repository. Google (Scholar) and the really handy browser add-on Unpaywall can help find those using the Open Access DOI database.
Sharing passwords and Sci-Hub are also solutions, but illegal ones. The real solutions to making research more accessible are Open Access publishing and repositories for manuscripts. By now about half of recently published articles are Open Access, and at this pace all articles would be Open Access by 2040. Interestingly, the largest fraction of the publicly available articles does not have an Open Access license; this is called bronze Open Access. It means that the download option could also be revoked again.
The US National Institutes of Health and the European Union mandate that the research they support be published Open Access.
A problem with Open Access journals is that some are only interested in the publication fees and do not care about quality. These predatory journals are bad for the reputation of real Open Access journals, especially in the eyes of the public.
I have a hard time believing that the authors do not know these journals are predatory. Besides the sting operations revealing that certain journals will publish anything, it would be nice to also have sting predatory journals that openly email authors that they will accept any trash, and see whether that scares the authors away.
Jeffrey Beall used to keep a list of predatory journals, but had to stop after legal pressure from these frauds. The publishing firm Cabell has now launched its own proprietary (pay-walled) blacklist, which already lists 6,000 journals and is growing fast.
Preprint repositories
Before a manuscript is submitted to a journal, the authors naturally still hold the copyright. They can thus upload the manuscript to a database, a so-called preprint or institutional repository. Unfortunately some publishers claim this constitutes publishing the manuscript and refuse to publish it because it is no longer new. However, most publishers accept the publication of the manuscript as it was before submission. A smaller number are also okay with the final version being published on the authors' homepages or in repositories.
Where a good option for an Open Access journal exists we should really try to use it. Where it is allowed, we should upload our manuscripts to repositories.
Good news for the readers of this blog: a repository for the Earth sciences opened last week: EarthArXiv. The AGU will also demonstrate its preprint repository at the AGU Fall Meeting this fall. For details see my previous post. EarthArXiv already has 15 climate-related preprints.
This November another OSF-based archive also started: MarXiv, not for Marxists, but for the marine-conservation and marine-climate sciences.
When we combine these repositories with peer review organised by the scientific community itself, we will no longer need pay-walling scientific publishers. This can be done in a much more informative way than at present, where the reader only knows that the paper was apparently good enough for the journal, but not why it is a good article nor how it fits into the (later published) literature. With Grassroots scientific publishing we can do a much better job.
One way the reviews at a Grassroots journal can be better is by openly assessing the quality of the work. Now all we know is that the study was sufficiently interesting for some journal at that time for whatever reason. What I did not realise before Berlin is that this wastes a lot of reviewing time. Traditional journals waste resources on manuscripts that are valid but are rejected because they are seen as not important enough for the journal. For example, Frontiers reviews 2.4 million manuscripts and has to bounce about 1 million valid papers.
On average scientists pay $5,000 per published article, while they do most of the work (writing, reviewing, editing) for free and the actual costs are a few hundred dollars. The money we save could be used for research. In the light of these numbers it is actually amazing that Elsevier only makes a profit of 35 to 50%. I guess their CEO's salary eats into the profits.
Preprints also have the advantage of making studies available faster. Open Access makes text and data mining easier, which helps in finding all articles on molecule M or receptor R. The first publishers are using text mining and artificial intelligence to suggest suitable peer reviewers to their editors. (I would prefer editors who know their field.) It would also help in detecting plagiarism and even statistical errors.
(Before our machine overlords find out, let me admit that I did not always write the model description of the weather prediction model I used from scratch.)
Impact factors
Another issue Christopher Jackson highlighted is the madness of Journal Impact Factors (JIF or IF). They measure how often an average article in a journal is cited in the first two or five years after publication. They are quite useful for librarians to get an overview of which journals to subscribe to. The problem begins when the impact factor is used to determine the quality of a journal or of the articles in it.
How common this is, is actually something I do not know. For my own field I would think I have a reasonable feeling for the quality of the journals, which is independent of the impact factor. More focussed journals tend to have smaller impact factors, but that does not signal that they are less good. Boundary Layer Meteorology is certainly not worse than the Journal of Geophysical Research. The former has an Impact Factor of 2.573, the latter of 3.454. If you did a boundary-layer study it would be madness to publish it in a more general geophysical journal, where the chance is smaller that relevant colleagues will read it. Climate journals will have higher impact factors than meteorological journals because meteorologists mainly cite each other, while many sciences build on climatology. When the German meteorological journal MetZet was still a pay-walled journal it had a low impact factor because not many people outside Germany had a subscription, but the quality of the peer review and the articles was excellent.
I would hope that reviewers making funding and hiring decisions know the journals in their field, take these kinds of effects into account and read the articles themselves. The [[San Francisco Declaration on Research Assessment]] (DORA) rejects the use of the impact factor. In Germany it is officially forbidden to judge individual scientists and small groups based on bibliometric measures such as the number of articles times the impact factor of the journals, although I am not sure everybody knows this. Imperial College recently adopted similar rules:
“the College should be leading by example by signalling that it assesses research on the basis of inherent quality rather than by where it is published”
“eliminate undue reliance on the use of journal-based metrics, such as JIFs, in funding, appointment, and promotion considerations”
The relationship between the number of citations an article can expect and the impact factor is weak because there is enormous spread. Jackson showed this figure.
This could well be a feature and not a bug. We would like to measure quality, not estimate the (future) number of citations of an article. For my own articles, I do not see much correlation between my subjective quality assessment and the number of citations. Which journal you can get into may well be a better quality measure than individual citation counts. (The best assessment is reading the articles.)
The biggest problem arises when journals, often commercial entities, start optimising for the number of citations rather than for quality. There are many ways to get more citations, and thus a higher impact factor, other than providing the best possible quality control. An article that reviews the state of a scientific field typically gets a lot of citations, especially if written by the main people in the field; nearly every article will mention it in the introduction. Review papers are useful, but we do not need a new one every year. Articles with many authors typically get more citations. Articles on topics many scientists work on get more citations. For Science and Nature it is important to get coverage in the mainstream press, which is also read by scientists and leads to more citations.
Reading articles is naturally work. I would suggest reducing the number of reviews.
Attribution, credit
Traditionally one gets credit for scientific work by being an author of a scientific paper. However, with increased collaboration and interdisciplinary work, author lists have become longer and longer. The publish-or-perish system likely also contributed: outsourcing part of the work is often more efficient than doing it yourself, while the person doing a small part of the analysis is happy to have another paper on their publish-or-perish list.
What is missing from such a system is credit for a multitude of other important tasks. How does one value non-traditional outputs supplied by researchers: code, software, data, design, standards, models, MOOC lectures, newspaper articles, blog posts, community-engaged research and citizen science? Someone even mentioned musicals.
A related question is who should be credited: technicians, proposal writers, data providers? As far as I know it would be illegal to put people in such roles on the author list, but they do work that is important and needs to be done, and thus needs to be credited somehow. A work-around is to invite them to help edit the manuscript, but it would be good to have systems where various roles are credited. Designing such a system is hard.
One is tempted to make such a credit system very precise, but ambiguity also has its advantages in dealing with the messiness of reality. I once started a study with one colleague. Most of this study did not work out and the final article covered only a part of it. A second colleague helped with that part. For the total work the first colleague had done more; for the part that was published, the second. Both justifiably felt that they should be second author. Do you get credit for the work or for the article?
Later the colleague who had become third author of this paper wrote another study where I helped. It was clear that I should have been the second author, but in retaliation he made me the third author. The second author wrote several emails that this was insane, not knowing what was going on, but to no avail. A too-precise credit system would leave no room for such retaliation tactics to clear the air for future collaborations.
In one session various systems of credit "badges" were shown and tried out. What seemed to work best was a short description of the work done by every author, similar to a detailed credit role at the end of a movie.
This year a colleague wrote on a blog that he did not agree with a sentence of an article he was an author of. I did not know that was possible; in my view authors are responsible for the entire article. Maybe we should split the author list into authors who vouch with their name and reputation for the quality of the full article and honorary authors who only contributed a small part. This colleague could then be an honorary author.
LinkedIn endorsements were criticised because they are not transparent and because they make it harder to change your focus, as the old endorsements and contacts stick.
Pre-registration
Some fields of study have trouble replicating published results. These are mostly empirical fields where single studies — to a large part — stand on their own and are not woven together by a net of theories.
One of the problems is that only interesting findings are published and if no effect is found the study is aborted. In a field with strong theoretical expectations also finding no effect when one is expected is interesting, but if no one expected a relationship between A and B, finding no relationship between A and B is not interesting.
This becomes a problem when there is no relationship between A and B, but multiple experiments/trials are made and some find a fluke relationship by chance. If only those get published, that gives a wrong impression. This problem can be tackled by registering trials before they are made, which is becoming more common in medicine.
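The multiple-comparisons problem described above is easy to simulate. The sketch below (plain Python, no real data; the function names and numbers are my own illustrative choices) runs many batches of 20 experiments in which there is no true effect at all and counts how often at least one of the 20 still comes out "significant":

```python
import random
import statistics

random.seed(42)

def null_experiment(n=30):
    """One study of an effect that does not exist: a sample from a
    null distribution, tested for a nonzero mean with a crude z-test."""
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    z = statistics.mean(sample) / (statistics.stdev(sample) / n ** 0.5)
    return abs(z) > 1.96  # "significant" at roughly the 5% level

def any_fluke(trials=20):
    # A publishable "discovery" appears if any of the 20 null studies passes.
    return any(null_experiment() for _ in range(trials))

batches = 2000
rate = sum(any_fluke() for _ in range(batches)) / batches
# With 20 independent tries the family-wise error rate is about
# 1 - 0.95**20, i.e. roughly two thirds (a bit more with this crude test).
print(f"fraction of 20-study batches with at least one fluke: {rate:.2f}")
```

A single study has a 5% false-positive rate, but with twenty tries at the same non-effect a fluke "discovery" becomes the likely outcome, which is why publishing only the significant trials paints such a misleading picture.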
A related problem is p-hacking and hypothesis generation after results are known (HARKing). A relationship that is statistically significant only if one outlier is left out makes it tempting to find a reason why that outlier is a measurement error and should be removed.
Similarly, the data can be analysed in many different ways to study the same question, one of which may be statistically significant by chance. This is also called "researcher degrees of freedom" or "the garden of forking paths". The Center for Open Science has made a tool where you can pre-register your analysis before the data is gathered/analysed, to reduce the freedom to falsely obtain significant results this way.
These kinds of problems may be less severe in the natural sciences, but avoiding them can still make the science more solid. Before Berlin I was hesitant about pre-registering analyses because in my work every analysis is different, which makes it harder to know in detail in advance how the analysis should go; there are also valid outliers that need to be removed, selecting the best study region requires a look at the data, and so on.
However, what I did not realise, although it is quite trivial, is that you can do the pre-registered analysis and also additional analyses, and simply mark them as such. So if you can do a better analysis after looking at the data, you can still do so. One of the problems with pre-registration is that quite often people do not do the analysis in the registered way and reviewers mostly do not check this.
In the homogenisation benchmarking study of the ISTI we will describe the assessment measures in advance. This is mostly because the benchmarking participants have a right to know how their homogenisation algorithms will be judged, but it can also be seen as pre-registration of the analysis.
To stimulate the adoption of pre-registration, the Center for Open Science has designed Open Science badges, which can be displayed with articles meeting the criteria. The pre-registration has to be done at an external site where the text cannot be changed afterwards. The pre-registration can be kept undisclosed for up to two years. To get things started they even award 1,000 prizes of $1,000 for pre-registered studies.
The next step would be journals that review "registered reports", which are peer reviewed before the results are in. This should stimulate the publication of negative (no effect found) results. (There is still a final review when the results are in.)
Quick hits
Those were the main things I learned, now some quick hits.
With the [[annotation system]] you can add comments to all web pages and PDF files. People may know annotation from Hypothes.is, which is used by ClimateFeedback to add comments to press articles on climate change. A similar initiative is PaperHive. PaperHive sells its system as collaborative reading and showed an example of students jointly reading a paper for class, annotating difficult terms and passages. It additionally provides channels for private collaboration, literature management and search. It has also already been used for the peer review (proof reading) of academic books. Both now have groups/channels that allow groups to make or read annotations, as well as private annotations, which can be used for your own paper archive. Web annotations aimed at the humanities are made by Pund.it.
Since February this year, web annotation is a World Wide Web Consortium (W3C) standard. This will hopefully mean that web browsers start including annotation in their default configuration and it becomes possible to comment on every web page. This will likely lead to public annotation streams going down to the level of YouTube comments. Also for the public channels some moderation will be needed, for example to combat doxing. PaperHive is a German organisation and thus removes hate speech.
Peer Community In (PCI) is a system to collaboratively peer review manuscripts that can later be sent to an official journal.
Do It Yourself Science. Not sure it is science, but great when people are having fun with science. When the quality level is right, you could say it is citizen science led by the citizens themselves. (What happened to the gentlemen scientists?)
Philica: Instant academic publishing with transparent peer-review.
"Standards are like toothbrushes; everyone likes the idea of them, but everyone wants to use their own" #FORCE2017@Metadata2020
I never realised there was an organisation behind the Digital Object Identifiers for scientific articles: CrossRef. It is a collaboration of about eight thousand scientific publishers. For other digital sources there are other organisations, while the main system is run by the International DOI Foundation. DOIs for data are handled, amongst others, by DataCite. CrossRef is working on a system where you can also see the web pages that cite scientific articles, which they call "event data". For example, this blog has cited 142 articles with a DOI. CrossRef will also take web annotations into account.
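CrossRef also exposes this DOI metadata through a public REST API (api.crossref.org). A minimal sketch of how one might query it; the canned JSON response below is illustrative only, standing in for a real network call, though the "message" and "is-referenced-by-count" field names are those used by CrossRef:

```python
import json
import urllib.parse

def crossref_url(doi):
    # CrossRef serves metadata for a work at api.crossref.org/works/<DOI>.
    # Percent-encode the DOI suffix, which may contain odd characters.
    return "https://api.crossref.org/works/" + urllib.parse.quote(doi)

print(crossref_url("10.1175/1520-0477(1998)079<0061:APGTWA>2.0.CO;2"))

# The API returns JSON with the article metadata under "message".
# A canned (made-up) response keeps this sketch offline and runnable:
response = json.loads(
    '{"status": "ok", "message":'
    ' {"title": ["An example article"], "is-referenced-by-count": 142}}'
)
work = response["message"]
print(work["title"][0])                # article title
print(work["is-referenced-by-count"])  # citations known to CrossRef
```

In a real script the URL would be fetched with any HTTP client and the JSON parsed the same way.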
In the Life Sciences they are trying to establish "micro publications", the publication of a small result or dataset, several of which can then later be combined with a narrative into a full article.
A new Open Science Journal: Research Ideas and Outcomes (RIO), which publishes all outputs along the research cycle, from research ideas, proposals, to data, software and articles. They are interested in all areas of science, technology, humanities and the social sciences.
Collaborative writing tools are coming of age, for example Overleaf for people using LaTeX. Google Docs and Microsoft Word Live also do the trick.
Ironically, Elsevier was one of the sponsors. Their brochure suggests they are one of the nice guys, serving humanity with cutting-edge technology.
Publons has set up a system where researchers can get public credit for their (anonymous) peer reviews. It is hoped that this stimulates scientists to do more reviews.
As part of Wikimedia, best known for Wikipedia, people are building up a multilingual database with facts: wikidata. Like in Wikipedia volunteers build up the database and sources need to be cited to make sure the facts are right. People are still working on software to make contributing easier for people who are not data scientists and do not dream of the semantic web every night.
Final thoughts
For a conference about science, there was relatively little science. One could have made a randomized controlled trial to study the influence of publishing your manuscript on a preprint server. Instead, the estimated citation advantage for articles also submitted to ArXiv (18%) was based on observational data, and the difference could simply be that scientists put more work into spreading their best articles.
The data manager at CERN argued that close collaboration with the scientists can help in designing interfaces that promote the use of Open Science tools. Sometimes small changes produce large increases in adoption. More research into the needs of scientists could also help in making the tools genuinely useful.
John P. A. Ioannidis and colleagues, "Bibliometrics: Is your most cited work your best?": a survey finds that highly cited authors feel their best work is among their most cited articles. It is the same for me, although across all my articles the correlation is not strong.
Data was mostly already free for research, but really free data still helps science a lot. If data is only free for research, you have to sign a contract. For a global study that means up to 200 contracts, in the best case where all countries offer one at all, each in the local language, with hard-to-find contact persons, with different conditions every time, and often covering only part of the data. If the data is really free, you can automatically download it, create regional and global collections, enrich them with additional information, add value with data processing (homogenisation, quality control, extremes, etc.) and publish them for everyone to use. It would also make the data streams more transparent.
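The "automatically download and merge" step above is almost trivial once the data is truly open. The sketch below merges per-country station files into one collection; the URLs and CSV layout are hypothetical, since every national service publishes in its own format:

```python
import csv
import io
from urllib.request import urlopen

def collect_stations(urls, fetch=lambda url: urlopen(url).read().decode()):
    """Download per-country CSV station files and merge them into one list.

    `fetch` is pluggable so the merging logic can be tested without a
    network connection. The URLs and column names are hypothetical.
    """
    merged = []
    for url in urls:
        reader = csv.DictReader(io.StringIO(fetch(url)))
        merged.extend(reader)  # one dict per station record
    return merged

# With open data, this loop covers every country; with data that is only
# "free for research", each URL would first require a signed contract.
```

From such a merged collection the regional and global products (homogenised series, quality-controlled extremes) can then be built and republished.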
Strengthen their commitment to the free and unrestricted exchange of [Global Framework for Climate Services] GFCS relevant data and products;
Increase the volume of GFCS relevant data and products accessible to meet the needs for implementation of the GFCS and the requirements of the GFCS partners;
Unfortunately, there still is no legally binding requirement to share the data. The weather services cannot force their governments to do so, but the resolution makes it clear that governments refusing to open their data are hurting their own people.
There is also a downside: the German weather service, the Deutscher Wetterdienst (DWD), currently earns about 3.5 million Euro selling data. To put that in perspective, that is about 1 percent of their 305 million Euro budget. (The DWD earns about 20% of its budget itself and thus costs only about 3 Euro per citizen per year.)
Because of these earnings many weather services are reluctant to open up their data. Especially in poorer countries these earnings can be a considerable part of the budget. On the other hand, the benefits to society of open data are sure to be much higher, because more people and companies will actually use the data and because better data products can be produced. When it comes to climate data, I hope that the international climate negotiations can free the data in return for funding for the observational networks of poorer countries.
The main problem in Germany is, or optimistically was, the commercial weather services. They fear competition, both from the DWD itself and because free data lowers the barrier to entry for other companies to start offering better services. These companies were so successful that for a long time it was even forbidden for the DWD to publish its weather predictions on its homepage, predictions the DWD still had to make because it is its job to warn of dangerous weather. That was an enormous destruction of value created with taxpayer money, just to create an artificial market for (often lower-quality) weather predictions.
There is a similar problem in broadcasting, where commercial media companies have succeeded in limiting the time that public broadcasting organisations can make their programmes available for watching, listening or download. This destruction of public capital is still ongoing.
It is good that, for weather and climate data, common sense has won in Germany. Only a small number of countries have made their data fully open, but I have the impression that there is a trend. It would be great if someone tracked this, if only to create more pressure to open the data holdings.