
Wednesday, 24 June 2015

Overconfidence in the nut test


My apologies: yesterday I promoted an erroneous article on Twitter. Science journalist Dan Vergano wrote about his simple nut test, based on the hallmark of almost any nut: overconfidence. Overconfidence is also common among mitigation sceptics, who are quick to shout fraud rather than first trying to understand the science and asking polite, specific questions to clear up any misunderstandings.

Thus when Vergano explained his nut test, I accepted it as easily as an elder accepts the word of God, as the Dutch say. A typical case of confirmation bias. He writes:

"A decade after my first climate science epiphany, I was interviewing a chronic critic of global warming studies, particularly the 1998 “hockey stick” one that found temperatures in our century racing upward on a slope that mirrored a hockey blade pointed skyward. He argued vociferously that the study’s math was all messed up, and that this meant all of climate science was a sham.

I listened, and at the end of the interview, I gave him the nut test.

“What are the odds that you are wrong?” I asked, or so I remember.

“I’d say zero,” the critic replied. “No chance.”

That’s how you fail the nut test.

I had asked a climate scientist the same question on the phone an hour before.

“I could always be wrong,” the scientist said. Statistically, he added, it could be about a 20% to 5% chance, depending on what he might be wrong about.

That’s how you pass the nut test: by admitting you could be wrong.

And that’s how a climate denier finally convinced me, once and for all, that climate science was on pretty safe ground."

The problem with the test is that it is possible to be confident that a scientific statement is wrong. A scientific hypothesis should be falsifiable. What one should mainly avoid is being too confident that one is right. Making a positive claim about reality is always risky.



For example, you can be confident that studying temperature changes in the distant past (the "hockey stick") cannot show that all of climate science is a sham. That is a logical fallacy. The theory of global warming is not only based on the hockey stick, but also on our physical understanding of radiative transfer and the atmosphere, on our understanding of the atmospheres of the other planets, and on global climate models. Science is confident it is warming not only because of the hockey stick, but also because of historical temperature measurements, other changes in the climate (precipitation, circulation), changes in ecosystems, warming of lakes and rivers, decreases in snow cover and so on.

So in this case, the mitigation sceptic is talking nonsense, but theoretically it would have been possible that he was rightly confident that the maths was wrong. Just like I am confident that many of the claims on WUWT & Co on homogenization are wrong. That does not mean that I am confident the data is flawless, but just that you should not get your science from WUWT & Co.

Three years ago Anthony Watts, host of WUWT, called a conference contribution a peer-reviewed article. I am confident that that is wrong. The abstract claimed, without arguments, that half of the homogenization adjustments should go up and half should go down. I am confident that that assumption is wrong. The conference contribution offered a range of possible values; Anthony Watts put the worst extreme in his headline. That is wrong. Now, after three years with no follow-up, it is clear that the authors accept that the conference contribution contained serious problems.

Anthony Watts corrected his post and admitted that the conference contribution was not a peer-reviewed article. This is rare, and the other errors remain. Besides overconfidence, not admitting to being wrong is also common among mitigation sceptics. Anyone who is truly sceptical and follows the climate "debate", please pay attention: when a mitigation sceptic loses an argument, he ignores this and moves on to the next try. This is so common that one of my main tips for debating mitigation sceptics is to make sure you stay on topic and to point out to the reader when the mitigation sceptic tries to change the topic. (And the reader is the person you want to convince.)

Not being able to admit mistakes is human, but it is also a sure way to end up with a completely wrong view of the world. That may explain this tendency of the mitigation sceptics. It is also possible that the mitigation sceptic knows from the start that his argument is bogus, but hopes to confuse the public. Then it is better not to admit being wrong, because admitting it risks being reminded of it the next time he tries the same scam.

Less common, but also important, is the second-order nut test for people who promote obvious nonsense, or who claim not to know who is right, to give the impression that there is more uncertainty than there really is. Someone claiming to have doubts about a large number of solid results is a clear warning light, one the above mitigation sceptic is apparently also guilty of ("chronic critic"). It takes a lot of expertise to find genuine problems; it is not likely that some average bloke, or even some average scientist, pulls this off.

Not wanting to look like a nut, I make an explicit effort to talk not only about what we are sure about (it is warming, it is us, it will continue if we keep on using fossil fuels), but also about what we are not sure about (the temperature change up to the last tenth of a degree). To distinguish myself from the nuts, I try to apologize even when this is not strictly necessary. In this case, even though the nut test is quite useful and the above conclusions were probably right.



Related reading

Science journalist Dan Vergano wrote a nice article on his journey from conservative Catholic climate "sceptic" to someone who accepts the science (including the nut test): How I Came To Jesus On Global Warming.

The three year old conference contribution: Investigation of methods for hydroclimatic data homogenization.

Anthony Watts calls inhomogeneity in his web traffic a success.

Some ideas on how to talk with mitigation sceptics and some stories of people who managed to find their way back to reality.

Falsifiable and falsification in science. Being falsifiable is essential; falsification itself is neither that important nor that straightforward.

Wednesday, 17 June 2015

Did you notice the recent anti-IPCC article?

You may have missed the latest attack on the IPCC, because the mitigation sceptics did not celebrate it. Normally they like to claim that the job of scientists is to write IPCC-friendly articles. Maybe because that is the world they know, that is how their think tanks function, that is what they would be willing to do for their political movement. The claim is naturally wrong, and it illustrates that they are either willing to lie for their movement or do not have a clue how science works.

It is the job of a scientist to understand the world better and thus to change the way we currently see the world. It is the fun of being a scientist to challenge old ideas.

The case in point last week was naturally the new NOAA assessment of the global mean temperature trend (Karl et al., 2015). The new assessment only produced minimal changes, but NOAA made that interesting by claiming the IPCC was wrong about the "hiatus". The abstract boldly states:
Here we present an updated global surface temperature analysis that reveals that global trends are higher than reported by the IPCC ...
The introduction starts:
The Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report concluded that the global surface temperature “has shown a much smaller increasing linear trend over the past 15 years [1998-2012] than over the past 30 to 60 years.” ... We address all three of these [changes in the observation methods], none of which were included in our previous analysis used in the IPCC report.
Later Karl et al. write that they are better than the IPCC:
These analyses have surmised that incomplete Arctic coverage also affects the trends from our analysis as reported by IPCC. We address this issue as well.
To stress the controversy they explicitly use the IPCC periods:
Our analysis also suggests that short- and long-term warming rates are far more similar than previously estimated in IPCC. The difference between the trends in two periods used in IPCC (1998-2012 and 1951-2012) is an illustrative metric: the trends for these two periods in the new analysis differ by 0.043°C/dec compared to 0.078°C/dec in the old analysis reported by IPCC.
The final punchline goes:
Indeed, based on our new analysis, the IPCC’s statement of two years ago – that the global surface temperature “has shown a much smaller increasing linear trend over the past 15 years than over the past 30 to 60 years” – is no longer valid.
And they make the IPCC periods visually stand out in their main figure.


Figure from Karl et al. (2015) showing the trends of the old and new assessments over a number of periods, both the IPCC periods and their own. The circles are the old dataset, the squares the new one, and the triangles depict the new data with interpolation of the Arctic data gap.

This is a clear example of scientists attacking the orthodoxy, because it is done so blatantly. Normally scientific articles do this more subtly, which has the disadvantage that the public does not notice it happening. Normally scientists would mention the old work casually; often they expect their colleagues to know which specific studies are (partially) criticized. Maybe NOAA found it easier to use this language this time because they did not write about a specific colleague, but about a group, and a strong group at that.


Figure SPM.1. (a) Observed global mean combined land and ocean surface temperature anomalies, from 1850 to 2012 from three data sets. Top panel: annual mean values. Bottom panel: decadal mean values including the estimate of uncertainty for one dataset (black). Anomalies are relative to the mean of 1961−1990. (b) Map of the observed surface temperature change from 1901 to 2012 derived from temperature trends determined by linear regression from one dataset (orange line in panel a).
The attack is also somewhat unfair. The IPCC clearly stated that it is not a good idea to focus on such short periods:
In addition to robust multi-decadal warming, global mean surface temperature exhibits substantial decadal and interannual variability (see Figure SPM.1). Due to natural variability, trends based on short records are very sensitive to the beginning and end dates and do not in general reflect long-term climate trends. As one example, the rate of warming over the past 15 years (1998–2012; 0.05 [–0.05 to 0.15] °C per decade), which begins with a strong El Niño, is smaller than the rate calculated since 1951 (1951–2012; 0.12 [0.08 to 0.14] °C per decade)
What the IPCC missed in this case is that the problem goes beyond natural variability: another question is whether the data quality is high enough to talk about such subtle variations.
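
To make the sensitivity to the period concrete, here is a minimal Python sketch with synthetic data (the 0.16°C per decade trend, the noise level and the warm 1998 are invented numbers for illustration, not the observational record): the 1951-2012 trend is stable, while the short trends ending in 2012 jump around depending on the start year.

```python
import numpy as np

# Synthetic annual global mean temperature anomalies (illustration only):
# a steady 0.16 degC/decade warming plus year-to-year noise and an
# artificially warm 1998 to mimic the strong El Nino start year.
rng = np.random.default_rng(42)
years = np.arange(1951, 2013)
anomalies = 0.016 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)
anomalies[years == 1998] += 0.2

def trend_per_decade(start, end):
    """OLS trend (degC per decade) over the inclusive period start-end."""
    mask = (years >= start) & (years <= end)
    slope = np.polyfit(years[mask], anomalies[mask], 1)[0]
    return 10.0 * slope

print(f"1951-2012: {trend_per_decade(1951, 2012):+.3f} degC/decade")
for start in (1996, 1997, 1998, 1999, 2000):
    print(f"{start}-2012: {trend_per_decade(start, 2012):+.3f} degC/decade")
```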

The mitigation sceptics may have missed that NOAA attacked the IPCC consensus because the article also attacked the one thing they somehow hold dear: the "hiatus".

I must admit that I originally thought that the emphasis the mitigation sceptics put on the "hiatus" was because they mainly value annoying "greenies" and what better way to do so than to give your most ridiculous argument. Ignore the temperature rise over the last century, start your "hiatus" in a hot super El Nino year and stupidly claim that global warming has stopped.

But they really cling to it: they have already written well over a dozen NOAA protest posts at WUWT, an important blog of the mitigation sceptical movement. The Daily Kos even wrote: "climate denier heads exploded all over the internet."

This "hiatus" fad provided Karl et al. (2015) the public interest — or interdisciplinary relevance as these journals call that — and made it a Science paper. Without the weird climate "debate", it would have been an article for a good climate journal. Without challenging the orthodoxy, it would have been an article for a simple data journal.

Let me close this post with a video of Richard Alley explaining, even more enthusiastically than usual, what drives (climate) scientists. Hint: it ain't parroting the IPCC. (Even if their reports are very helpful.)
Suppose Einstein had stood up and said, I have worked very hard and I have discovered that Newton is right and I have nothing to add. Would anyone ever know who Einstein was?







Further reading

My draft was already written before I noticed that at Real Climate Stefan Rahmstorf had written: Debate in the noise.

My previous post on the NOAA assessment asked whether the data is good enough to see something like a "hiatus" and stressed the need for climate data sharing and for building up a global reference network. It was frivolously called: No! Ah! Part II. The return of the uncertainty monster.

Zeke Hausfather: Whither the pause? NOAA reports no recent slowdown in warming. This post provides a comprehensive, well-readable (I think) overview of the NOAA article.

How climatology treats sceptics. My experience fits what you would expect.

References

IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp, doi: 10.1017/CBO9781107415324.

Thomas R. Karl, Anthony Arguez, Boyin Huang, Jay H. Lawrimore, James R. McMahon, Matthew J. Menne, Thomas C. Peterson, Russell S. Vose, Huai-Min Zhang, 2015: Possible artifacts of data biases in the recent global surface warming hiatus. Science. doi: 10.1126/science.aaa5632.

Boyin Huang, Viva F. Banzon, Eric Freeman, Jay Lawrimore, Wei Liu, Thomas C. Peterson, Thomas M. Smith, Peter W. Thorne, Scott D. Woodruff, and Huai-Min Zhang, 2015: Extended Reconstructed Sea Surface Temperature Version 4 (ERSST.v4). Part I: Upgrades and Intercomparisons. Journal of Climate, 28, pp. 911–930, doi: 10.1175/JCLI-D-14-00006.1.

Rennie, Jared, Jay Lawrimore, Byron Gleason, Peter Thorne, Colin Morice, Matthew Menne, Claude Williams, Waldenio Gambi de Almeida, John Christy, Meaghan Flannery, Masahito Ishihara, Kenji Kamiguchi, Albert Klein Tank, Albert Mhanda, David Lister, Vyacheslav Razuvaev, Madeleine Renom, Matilde Rusticucci, Jeremy Tandy, Steven Worley, Victor Venema, William Angel, Manola Brunet, Bob Dattore, Howard Diamond, Matthew Lazzara, Frank Le Blancq, Juerg Luterbacher, Hermann Maechel, Jayashree Revadekar, Russell Vose, Xungang Yin, 2014: The International Surface Temperature Initiative global land surface databank: monthly temperature data version 1 release description and methods. Geoscience Data Journal, 1, pp. 75–102, doi: 10.1002/gdj3.8.

Saturday, 13 June 2015

Free our climate data - from Geneva to Paris

Royal Air Force- Italy, the Balkans and South-east Europe, 1942-1945. CNA1969

Neglecting to monitor the harm done to nature and the environmental impact of our decisions is only the most striking sign of a disregard for the message contained in the structures of nature itself.
Pope Francis

The 17th Congress of the World Meteorological Organization in Geneva ended today. After countless hours of discussions, they managed to pass an almost completely rewritten resolution on sharing climate data in the last hour.

The glass is half full. On the one hand, the resolution clearly states the importance of sharing data. It underlines that sharing data is important to help humanity cope with climate change by making it part of the Global Framework for Climate Services (GFCS), which is there to help all nations adapt to climate change.

The resolution considers and recognises:
The fundamental importance of the free and unrestricted exchange of GFCS relevant data and products among WMO Members to facilitate the implementation of the GFCS and to enable society to manage better the risks and opportunities arising from climate variability and change, especially for those who are most vulnerable to climate-related hazards...

That increased availability of, and access to, GFCS relevant data, especially in data sparse regions, can lead to better quality and will create a greater variety of products and services...

Indeed free and unrestricted access to data can and does facilitate innovation and the discovery of new ways to use, and purposes for, the data.
On the other hand, if a country wants to, it can still refuse to share the most important datasets: the historical station observations. Many datasets will be shared: satellite data and products, ocean and cryosphere (ice) observations, and measurements of the composition of the atmosphere (including aerosols). However, information on streamflow, lakes and most of the climate station data are exempt.

The resolution does urge Members to:
Strengthen their commitment to the free and unrestricted exchange of GFCS relevant data and products;

Increase the volume of GFCS relevant data and products accessible to meet the needs for implementation of the GFCS and the requirements of the GFCS partners;
But there is no requirement to do so.

The most positive development is not on paper. Data sharing may well have been the main discussion topic among the directors of the national weather services at the Congress. They got the message that many of them find this important and they are likely to prioritise data sharing in future. I am grateful to the people at the WMO Congress who made this happen, you know who you are. Some directors really wanted to have a strong resolution as justification towards their governments to open up the databases. There is already a trend towards more and more countries opening up their archives, not only of climate data, but going towards open governance. Thus I am confident that many more countries will follow this trend after this Congress.

Also good about the resolution is that WMO will start monitoring data availability and data policies. This will make visible how many countries are already taking the high road and speed up the opening of the datasets. The resolution requests WMO to:
Monitor the implementation of policies and practices of this Resolution and, if necessary, make proposals in this respect to the Eighteenth World Meteorological Congress;
In a nice twist, the WMO calls the data to be shared "GFCS data", thus basically saying: if you do not share climate data, you are responsible for the national damages from climatic changes that you could have adapted to, and you are responsible for the failed adaptation investments. The term "GFCS data" misses how important this data is for basic climate research, research that is needed to guide expensive political decisions on mitigation and, in the end, again adaptation and, ever more likely, geo-engineering.

If I may repeat myself, we really need all the data we can get for an accurate assessment of climatic changes; a few stations will not do:
To reduce the influence of measurement errors and non-climatic changes (inhomogeneities) on our (trend) assessments we need dense networks. These errors are detected and corrected by comparing one station to its neighbours. The closer the neighbours are, the more accurate we can assess the real climatic changes. This is especially important when it comes to changes in severe and extreme weather, where the removal of non-climatic changes is very challenging.
The problem, as so often, is mainly money. Weather services get some revenue from selling climate data. This revenue cannot be large compared to the impacts of climate change or to the investments needed to adapt, but relative to the budget of a weather service, especially in poorer countries, it does make a difference. At the least, the weather services will have to ask their governments for permission.

Thus we will probably have to up our game. The mandate of the weather services is not enough; we need to make clear to the governments of this world that sharing climate data is of huge benefit to every single country. Compared to the costs of climate change this is a no-brainer. Don Henry writes that "[The G7] also said they would continue efforts to provide US$100 billion a year by 2020 to support developing countries' own climate actions." The revenues from selling climate data are irrelevant compared to that number.

As it happens, a large political climate summit is coming up: COP21 in Paris in December. This week there was a preparatory meeting in Bonn to work on the text of the climate treaty. The proposal already has optional text about climate research:
[Industrialised countries] and those Parties [nations] in a position to do so shall support the [Least Developed Countries] in the implementation of national adaptation plans and the development of additional activities under the [Least Developed Countries] work programme, including the development of institutional capacity by establishing regional institutions to respond to adaptation needs and strengthen climate-related research and systematic observation for climate data collection, archiving, analysis and modelling.
An earlier COP decision (COP4, 1998) already speaks about the exchange of climate data (FCCC/CP/1998/16/Add.1):
Urges Parties to undertake free and unrestricted exchange of data to meet the needs of the Convention, recognizing the various policies on data exchange of relevant international and intergovernmental organizations;
"Urges" is not enough, but that is a basis that could be reinforced. With the kind of money COP21 is dealing with it should be easy to support weather services of less wealthy countries to improve their observation systems and make the data freely available. That would be an enormous win-win situation.

To make this happen, we probably need to show that the climate science community stands behind this. We would need a group of distinguished climate scientists from as many countries as possible to support a "petition" requesting better measurements in data-sparse regions and free and unrestricted data sharing.

To get heard, we would probably also need to write articles for the national newspapers; to get these published, they would again have to be written by well-known scientists. To get attention, it would also be great if many climate blogs wrote about the action on the same day.

Maybe we could make this work. My impression was already that basically everyone in the climate science community finds the free exchange of climate data very important and sees the current situation as a major impediment to better climate research. After last week's article on data sharing the response was enormous and only positive. This may have been the first time that a blog post of mine that did not respond to something in the press got over 1000 views. It was certainly my first tweet that got over 13 thousand views and 100 retweets:


This action of my little homogenization blog was even at the top of the Twitter page on the Congress of the WMO (#MeteoWorld), right next to the photo of the newly elected WMO Secretary-General Petteri Taalas.



With all this internet enthusiasm and the dedication of the people fighting for free data at the WMO and likely many more outside of the WMO, we may be able to make this work. If you would like to stay informed please fill in the form below or write to me. If enough people show interest, I feel we should try. I also do not have the time, but this is important.






Related reading

Congress of the World Meteorological Organization, free our climate data

Why raw temperatures show too little global warming

Everything you need to know about the Paris climate summit and UN talks

Bonn climate summit brings us slowly closer to a global deal by Don Henry (Public Policy Fellow, Melbourne Sustainable Society Institute at University of Melbourne) at The Conversation.

Free climate data action promoted in Italian. Thank you Sylvie Coyaud.

If my Italian (that is, Google Translate) is good enough, this post wants the Pope to put the sharing of climate data in his encyclical. Weather data is a common good.


* Photo at the top: By Royal Air Force official photographer [Public domain], via Wikimedia Commons

Tuesday, 9 June 2015

Comparing the United States COOP stations with the US Climate Reference Network

Last week the mitigation sceptics apparently expected climate data to be highly reliable and were complaining that an update led to small changes. Other weeks they expect climate data to be largely wrong, for example due to non-ideal micro-siting or urbanization. These concerns can be ruled out for the climate-quality US Climate Reference Network (USCRN). This is a guest post by Jared Rennie* introducing a recent study comparing USCRN stations with nearby stations of the historical network, to study the differences in the temperature and precipitation measurements.


Figure 1. These pictures show some of instruments from the observing systems in the study. The exterior of a COOP cotton region shelter housing a liquid-in-glass thermometer is pictured in the foreground of the top left panel, and a COOP standard 8-inch precipitation gauge is pictured in the top right. Three USCRN Met One fan-aspirated shields with platinum resistance thermometers are pictured in the middle. And, a USCRN well-shielded Geonor weighing precipitation gauge is pictured at the bottom.
In 2000 the United States started building a measurement network to monitor climate change, the so-called United States Climate Reference Network (USCRN). These automatic stations have been installed in excellent locations and are expected not to show influences of changes in their direct surroundings for decades to come. To avoid loss of data, the most important variables are measured by three high-quality instruments. A new paper by Leeper, Rennie, and Palecki now compares the measurements of twelve station pairs of this reference network with nearby stations of the historical US network. They find that the reference network records slightly cooler temperatures and less precipitation, and that there are almost no differences in the temperature variability and trend.

COOP and USCRN

The detection and attribution of climate signals often rely upon long, historically rich records. In the United States, the Cooperative Observer Program (COOP) has collected many decades of observations for thousands of stations, going as far back as the late 1800s. While the COOP network has become the backbone of the U.S. climatology dataset, non-climatic factors in the data have introduced systematic biases, which require homogenization corrections before the data can be included in climatic assessments. Such factors include modernization of equipment, time of observation differences, changes in observing practices, and station moves over time. A subset of the COOP stations with long records is known as the US Historical Climatology Network (USHCN), which is the default dataset for reporting on temperature changes in the USA.

Recognizing these challenges, the United States Climate Reference Network (USCRN) was initiated in 2000. Fifteen years after its inception, 132 stations have been installed across the United States, with sub-hourly observations of numerous weather elements using state-of-the-art instrumentation calibrated to traceable standards. For high data quality, the temperature and precipitation sensors are well shielded, and for continuity the stations have three independent sensors, so that no data loss is incurred. Because of these advances, no homogenization correction is necessary.
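
As an aside on the triple-sensor redundancy: the sketch below is a hypothetical illustration, not the official USCRN aggregation procedure, of how three redundant thermometer readings could be combined so that one failing or drifting sensor does not corrupt the record.

```python
import math

def combine_triple(readings, tolerance=0.3):
    """Hypothetical combination of three redundant temperature readings (degC).

    If the available sensors agree within the tolerance, return their mean.
    If one of three sensors disagrees with the other two, fall back to the
    median, which ignores the outlier. Return None if the readings are unusable.
    """
    valid = [r for r in readings if r is not None and not math.isnan(r)]
    if len(valid) < 2:
        return None                      # not enough information
    spread = max(valid) - min(valid)
    if len(valid) == 3 and spread > tolerance:
        return sorted(valid)[1]          # median: robust against one outlier
    return sum(valid) / len(valid)

print(combine_triple([21.40, 21.45, 21.43]))   # all agree -> mean
print(combine_triple([21.40, 21.45, 24.90]))   # one outlier -> median
print(combine_triple([21.40, None, 21.45]))    # one sensor missing -> mean of two
```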

Comparison

The purpose of this study is to compare observations of temperature and precipitation from closely spaced members of the USCRN and COOP networks. While the pairs of stations are near each other, they are not adjacent. Determining the variations in data between the networks allows scientists to develop an improved understanding of the quality of weather and climate data, particularly over time as the periods of overlap between the two networks lengthen.

To ensure observational differences are the result of network discrepancies, comparisons were only evaluated for station pairs located within 500 meters. The twelve station pairs chosen were reasonably dispersed across the lower 48 states of the US. Images of the instruments used in both networks are provided in Figure 1.

The USCRN stations all have the same instrumentation: well-shielded rain gauges and mechanically ventilated temperature sensors. The COOP stations use two types of thermometers: modern automatic electrical sensors known as the maximum-minimum temperature sensor (MMTS) and old-fashioned normal thermometers, which now have to be called liquid-in-glass (LiG) thermometers. Both COOP sensor types are naturally ventilated.

An important measurement problem for rain gauges is undercatchment: due to turbulence around the instrument, not all droplets land in the mouth. This is especially important in high winds and for snow, and can be reduced by wind shields. The COOP rain gauges are unshielded, however, and have been known to underestimate precipitation in windy conditions. COOP gauges also include a funnel, which can be removed before snowfall events. The funnel reduces evaporation losses on hot days, but can also get clogged by snow. Hourly temperature data from USCRN were averaged into 24-hour periods to match daily COOP measurements at the designated observation times, which vary by station. Precipitation data were aggregated into precipitation events and also matched with the respective COOP events.
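
A rough sketch of this matching step, assuming pandas and an invented hourly USCRN series (the dates and the 17:00 observation hour are made up): hourly temperatures are averaged into the 24-hour window ending at the COOP observation hour, which varies by station.

```python
import numpy as np
import pandas as pd

# Hypothetical hourly USCRN temperatures for a few days (degC).
hours = pd.date_range("2012-07-01 00:00", "2012-07-05 23:00", freq="h")
uscrn_hourly = pd.Series(
    20 + 5 * np.sin(2 * np.pi * (hours.hour - 15) / 24), index=hours
)

def daily_means_at_obs_time(hourly, obs_hour):
    """Average hourly data into the 24-hour window ending at the COOP
    observation hour, labelled with the calendar day of that observation."""
    # Shift the timestamps forward so that the window boundary (obs_hour)
    # lands on midnight; a plain calendar-day resample then yields the
    # 24 hours ending at obs_hour of the labelled day.
    shifted = hourly.copy()
    shifted.index = shifted.index + pd.Timedelta(hours=24 - obs_hour)
    return shifted.resample("D").mean()

# The COOP observer at this hypothetical station reads the instruments at 17:00.
print(daily_means_at_obs_time(uscrn_hourly, obs_hour=17).head())
```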

Observed differences and their reasons

Overall, COOP sensors in naturally ventilated shields reported warmer daily maximum temperatures (+0.48°C) and cooler daily minimum temperatures (-0.36°C) than USCRN sensors, which have better solar shielding and fans to ventilate the instrument. The magnitude of the temperature differences was on average larger for stations operating LiG systems than for those with the MMTS system. Part of the reduction in network biases with the MMTS system is likely due to the smaller-sized shielding, which requires less surface wind speed to be adequately ventilated.

While overall mean differences were in line with side-by-side comparisons of ventilated and non-ventilated sensors, there was considerable variability in the differences from station to station (see Figure 2). While all COOP stations observed warmer maximum temperatures, not all saw cooler minimum temperatures. This may be explained by differing meteorological conditions (surface wind speed, cloudiness), local siting (heat sources and sinks), and sensor and human errors (poor calibration, varying observation time, reporting error). While all are important to consider, only the meteorological conditions were examined further, by categorizing temperature differences by wind speed. The range in network differences for maximum and minimum temperatures seemed to decrease with increasing wind speed, although more so for maximum temperature, as sensor shielding becomes better ventilated with increasing wind speed. Minimum temperatures are strongly driven by local radiative and siting characteristics. Under calm conditions one might expect radiative imbalances between naturally and mechanically aspirated shields or between differing COOP sensors (LiG vs MMTS). That, along with local vegetation and elevation differences, may help to drive these minimum temperature differences.
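
A hypothetical sketch of the wind-speed stratification mentioned above (the data, column names and bin edges are assumptions, not those of Leeper et al.): group the daily USCRN-minus-COOP differences into wind classes and compare the spread within each class.

```python
import numpy as np
import pandas as pd

# Hypothetical daily data for one station pair: USCRN-minus-COOP maximum
# temperature differences (degC) and USCRN surface wind speed (m/s).
rng = np.random.default_rng(0)
n = 365
wind = rng.gamma(shape=2.0, scale=1.5, size=n)
# Let the simulated network difference shrink with wind speed, as better
# ventilation reduces the error of the naturally ventilated COOP shield.
tmax_diff = -0.5 * np.exp(-wind / 3.0) + rng.normal(0.0, 0.15, n)

df = pd.DataFrame({"wind": wind, "tmax_diff": tmax_diff})
df["wind_class"] = pd.cut(df["wind"],
                          bins=[0.0, 1.5, 4.6, np.inf],
                          labels=["light", "moderate", "strong"])
print(df.groupby("wind_class", observed=True)["tmax_diff"]
        .agg(["mean", "std", "count"]))
```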


Figure 2. USCRN minus COOP average minimum (blue) and maximum (red) temperature differences for collocated station pairs. COOP stations monitoring temperature with LiG technology are denoted with asterisks.

For precipitation, COOP stations reported slightly more precipitation overall (1.5%). As for temperature, this difference was not uniform across all station pairs. Comparing by season, COOP reported less precipitation than USCRN during winter months and more precipitation in the summer months. The drier wintertime COOP observations are likely due to the lack of gauge shielding, but may also be impacted by the added complexity of observing solid precipitation. An example is removing the gauge funnel before a snowfall event and then melting the snow to calculate the liquid equivalent of the snowfall.

Wetter COOP observations over the warmer months may have been associated with seasonal changes in gauge biases. For instance, observation errors related to gauge evaporation and the wetting factor are more pronounced in warmer conditions. Because of its design, the USCRN rain gauge is more prone to wetting errors (some precipitation sticks to the wall and is thus not counted). In addition, USCRN does not use an evaporative suppressant to limit gauge evaporation during the summer, which is not an issue for the funnel-capped COOP gauge. The combination of elevated biases for USCRN through a larger wetting factor and enhanced evaporation could explain the wetter COOP observations. Another reason could be the spatial variability of convective activity. During summer months, daytime convection can trigger unorganized thundershowers on a scale small enough that rain falls at one station but not the other. For example, in Gaylord, Michigan, the COOP observer reported 20.1 mm more than the USCRN gauge 133 meters away. Rain radar estimates showed nearby convection over the COOP station, but not over the USCRN station, thus creating a valid COOP observation.


Figure 3. Event (USCRN minus COOP) precipitation differences grouped by prevailing meteorological conditions during events observed at the USCRN station. (a) event mean temperature: warm (more than 5°C), near-freezing (between 0°C and 5°C), and freezing conditions (less than 0°C); (b) event mean surface wind speed: light (less than 1.5 m/s), moderate (between 1.5 m/s and 4.6 m/s), and strong (larger than 4.6 m/s); and (c) event precipitation rate: low (less than 1.5 mm/hr), moderate (between 1.5 mm/hr and 2.8 mm/hr), and intense (more than 2.8 mm/hr).

Investigating further, precipitation events were categorized by air temperature, wind speed, and precipitation intensity (Figure 3). Comparing by temperature, the results were consistent with the seasonal analysis, showing lower COOP values (higher USCRN) in freezing conditions and higher COOP values (lower USCRN) in near-freezing and warmer conditions. Stratifying by wind conditions is also consistent, indicating that the unshielded COOP gauges do not catch as much precipitation as they should, resulting in higher USCRN values. On the other hand, COOP reports much more precipitation in lighter wind conditions, due to the higher evaporation rate in the USCRN gauge. For precipitation intensity, USCRN observed less than COOP in all categories.


Figure 4. National temperature anomalies for maximum (a) and minimum (b) temperature between homogenized COOP data from the United States Historical Climatology Network (USHCN) version 2.5 (red) and USCRN (blue).
Comparing the variability and trends between USCRN and homogenized COOP data from USHCN we see that they are very similar for both maximum and minimum national temperatures (Figure 4).

Conclusions

This study compared two observing networks that will be used in future climate and weather studies. Using very different approaches in measurement technologies, shielding, and operational procedures, the two networks provided contrasting perspectives of daily maximum and minimum temperatures and precipitation.

Temperature differences between stations in local pairings were partially attributed to local factors including siting (station exposure), ground cover, and geographical aspects (not fully explored in this study). These additional factors are thought to accentuate or minimize the anticipated radiative imbalances between the naturally and mechanically aspirated systems, which may have also resulted in seasonal trends. Additional analysis with more station pairs may be useful in evaluating the relative contribution of each local factor noted.

For precipitation, network differences also varied due to the seasonality of the respective gauge biases. Stratifying by temperature, wind speed, and precipitation intensity revealed these biases in more detail. COOP gauges recorded more precipitation in warmer conditions with light winds, where local summertime convection and evaporation in USCRN gauges may be a factor. On the other hand, COOP recorded less precipitation in colder, windier conditions, possibly due to observing error and the lack of shielding, respectively.

It should be noted that all observing systems have observational challenges and advantages. The COOP network has many decades of observations from thousands of stations, but it lacks consistency in instrumentation type and observation time in addition to instrumentation biases. USCRN is very consistent in time and by sensor type, but as a new network it has a much shorter station record with sparsely located stations. While observational differences between these two separate networks are to be expected, it may be possible to leverage the observational advantages of both networks. The use of USCRN as a reference network (consistency check) with COOP, along with more parallel measurements, may prove to be particularly useful in daily homogenization efforts in addition to an improved understanding of weather and climate over time.




* Jared Rennie currently works at the Cooperative Institute for Climate and Satellites – North Carolina (CICS-NC), housed within the National Oceanic and Atmospheric Administration’s (NOAA’s) National Centers for Environmental Information (NCEI), formerly known as the National Climatic Data Center (NCDC). He received his master's and bachelor's degrees in Meteorology from Plymouth State University in New Hampshire, USA, and currently works on maintaining and analyzing global land surface datasets, including the Global Historical Climatology Network (GHCN) and the International Surface Temperature Initiative’s (ISTI) Databank.

Further reading

Ronald D. Leeper, Jared Rennie, and Michael A. Palecki, 2015: Observational Perspectives from U.S. Climate Reference Network (USCRN) and Cooperative Observer Program (COOP) Network: Temperature and Precipitation Comparison. Journal of Atmospheric and Oceanic Technology, 32, pp. 703–721, doi: 10.1175/JTECH-D-14-00172.1.

The informative homepage of the U.S. Climate Reference Network gives a nice overview.

A database with parallel climate measurements, which we are building to study the influence of instrumental changes on the probability distributions (extreme weather and weather variability changes).

The post, A database with daily climate data for more reliable studies of changes in extreme weather, provides a bit more background on this project.

Homogenization of monthly and annual data from surface stations. A short description of the causes of inhomogeneities in climate data (non-climatic variability) and how to remove it using the relative homogenization approach.

Previously I already had a look at trend differences between USCRN and USHCN: Is the US historical network temperature trend too strong?

Saturday, 6 June 2015

No! Ah! Part II. The return of the uncertainty monster



Some may have noticed that a new NOAA paper on the global mean temperature has been published in Science (Karl et al., 2015). It is minimally different from the previous one. The reason the press is interested, that this is a Science paper, and that the mitigation sceptics are not happy at all, is that due to these minuscule changes the data no longer shows a "hiatus"; no statistical analysis is needed any more. That such paltry changes make so much difference shows the overconfidence of people talking about the "hiatus" as if it were a thing.

You can see the minimal changes, mostly less than 0.05°C, both warmer and cooler, in the top panel of the graph below. I made the graph extra large, so that you can see the differences. The thick black line shows the new assessment and the thin red line the previously estimated global temperature signal.



It reminds me of the time when a (better) interpolation of the data gap in the Arctic (Cowtan and Way, 2014) made the long-term trend almost imperceptibly larger, but changed the temperature signal enough to double the warming during the "hiatus". Again we see a lot of whining from the people who should not have built their political case on such a fragile feature in the first place. And we will see a lot more. And after that they will continue to act as if the "hiatus" is a thing. At least, after a few years of this dishonest climate "debate", I would be very surprised if they suddenly looked at all the data and made a fair assessment of the situation.

The most paradoxical are the mitigation sceptics who react by claiming that scientists are not allowed to remove biases due to changes in the way temperature was measured. Without accounting for the fact that old sea surface temperature measurements were biased to be too cool, global warming would be larger. Previously I explained the reasons why raw data shows more warming, and you can see the effect in the bottom panel of the above graph. The black line shows NOAA's current best estimate of the temperature change, the thin blue (?) line the temperature change in the raw data. Only alarmists would prefer the raw temperature trend.



The trends over a number of periods are depicted above; the circles are the old dataset, the squares the new one. You can clearly see differences between the trends for the various short periods. Shifting the period by only two years creates a large trend difference. Another way to demonstrate that this feature is not robust.

The biggest change in the dataset is that NOAA now uses the raw data of the land temperature database of the International Surface Temperature Initiative (ISTI). (Disclosure: I am a member of the ISTI.) This dataset contains many more stations than the previously used Global Historical Climatology Network (GHCNv3) dataset. (The land temperatures were homogenized with the same Pairwise Homogenization Algorithm (PHA) as before.)

The new trend in the land temperature is a little larger over the full period; see both graphs above. This was to be expected. The ISTI dataset contains many more stations and is now similar to that of Berkeley Earth, which already had a somewhat stronger temperature trend. Furthermore, we know that there is a cooling bias in the land surface temperatures; with more stations it is easier to detect data problems by comparing stations with each other, and relative homogenization methods can remove a larger part of this trend bias.

However, the largest trend changes in recent periods are due to the oceans: the Extended Reconstructed Sea Surface Temperature (ERSST v4) dataset. Zeke Hausfather explains:
They also added a correction for temperatures measured by floating buoys vs. ships. A number of studies have found that buoys tend to measure temperatures that are about 0.12 degrees C (0.22 F) colder than is found by ships at the same time and same location. As the number of automated buoy instruments has dramatically expanded in the past two decades, failing to account for the fact that buoys read colder temperatures ended up adding a negative bias in the resulting ocean record.
It is not my field, but if I understand it correctly, other ocean datasets, COBE2 and HadSST3, already took these biases into account. Thus the difference between these datasets needs to have another reason. Understanding these differences would be interesting. And NOAA has not yet interpolated over the data gap in the Arctic, which would be expected to make its recent trends even stronger, just as it did for Cowtan and Way. They are working on that; the triangles in the above graph are with interpolation. Thus the recent trend is currently still understated.
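
To make the buoy adjustment concrete, here is a minimal sketch under the assumption quoted above, namely that buoys read about 0.12°C colder than co-located ships; the real ERSST.v4 procedure is more involved and also gives buoys more weight.

```python
BUOY_COLD_BIAS = 0.12  # degC; approximate ship-minus-buoy difference quoted above

def adjust_sst(value, platform, buoy_cold_bias=BUOY_COLD_BIAS):
    """Put ship and buoy sea surface temperatures on a common baseline.

    Buoys read systematically colder than ships, so their values are
    shifted warm-ward before the two platforms are merged. (Equivalently,
    one could cool the ships; only the relative offset matters for trends.)
    """
    if platform == "buoy":
        return value + buoy_cold_bias
    return value  # ship value used as the reference baseline

measurements = [(18.34, "ship"), (18.21, "buoy"), (18.25, "buoy")]
adjusted = [adjust_sst(value, platform) for value, platform in measurements]
merged = sum(adjusted) / len(adjusted)
print(f"merged SST: {merged:.2f} degC")
```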

Personally, I would be most interested in understanding the differences that are important for long-term trends, like the differences shown below in two graphs prepared by Zeke Hausfather. That is hard enough, and such questions are more likely answerable. The recent differences between the datasets are even tinier than the tiny "hiatus" itself; I have no idea whether they can be understood.





I need some more synonyms for tiny or minimal, but the changes are really small. They are well within the statistical uncertainty computed from the year-to-year fluctuations. They are well within the uncertainty due to the fact that we do not have measurements everywhere and need to interpolate. The latter is the typical confidence interval you see in historical temperature plots. For most datasets the confidence interval does not include the uncertainty due to biases that were not perfectly removed. (HadCRUT does this partially.)
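
To give an idea of what "well within the statistical uncertainty" means, here is a minimal sketch that estimates a 15-year trend and its standard error from the year-to-year scatter alone (synthetic numbers for illustration, and ignoring autocorrelation, which would widen the interval further).

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1998, 2013)                       # a 15-year "hiatus" window
anomalies = 0.005 * (years - years[0]) + rng.normal(0.0, 0.09, years.size)

# OLS trend and its standard error from the residual year-to-year scatter.
x = years - years.mean()
slope = np.sum(x * (anomalies - anomalies.mean())) / np.sum(x**2)
residuals = anomalies - (anomalies.mean() + slope * x)
se_slope = np.sqrt(np.sum(residuals**2) / (years.size - 2) / np.sum(x**2))

print(f"trend: {10 * slope:+.3f} +/- {10 * 1.96 * se_slope:.3f} degC/decade (95%)")
```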

This uncertainty becomes relatively more important on short time scales (and for smaller regions); for long time scales and large regions (global), many biases will compensate each other. For land temperatures a 15-year period is especially dangerous, as that is about the typical period between two inhomogeneities (non-climatic changes).

The recent period is, in addition, especially tricky. We are just in an important transition from manual observations with thermometers in Stevenson screens to automatic weather stations. Not only is the measurement principle different, but also the siting. It is difficult, on top of this, to find and remove inhomogeneities near the end of a series, because the computed mean after the inhomogeneity is based on only a few values and has a large uncertainty.

You can get some idea of how large this uncertainty is by comparing the short-term trends of two independent datasets. Ed Hawkins has compared the new US NOAA data and the current UK HadCRUT4.3 dataset at Climate Lab Book and presented these graphs:



By request, he kindly computed the difference between these 10-year trends, shown below. They suggest that if you are interested in short-term trends smaller than 0.1°C per decade (say, the "hiatus"), you should study whether your data quality is good enough to be able to interpret the variability as being due to the climate system. The variability should be large enough or have a stronger regional pattern (say, El Niño).

If the variability you are interested in is somewhat bigger than 0.1°C, you probably still want to put in some work. Both datasets are based on much of the same data and use similar methods. For the homogenization of surface stations, we know that it can reduce biases, but not fully remove them. Thus part of the bias will be the same for all datasets that use statistical homogenization. The difference shown below is thus an underestimate of the uncertainty, and it will take analytic work to compute the real uncertainty due to data quality.



[UPDATE. I thought I had an interesting new angle, but now see that Gavin Schmidt, director of NASA GISS, has been saying this in newspapers since the start: “The fact that such small changes to the analysis make the difference between a hiatus or not merely underlines how fragile a concept it was in the first place.”]

Organisational implications

To reduce the uncertainties due to changes in the way we measure climate, we need to make two major organizational changes: we need to share all climate data with each other to better study the past, and for the future we need to build up a climate reference network. These are, unfortunately, not things climatologists can do alone; they need action by politicians and support from their voters.

To quote from my last post on data sharing:
We need [to share all climate data] to see what is happening to the climate. We already had almost a degree of global warming and are likely in for at least another one. This will change the sea level, the circulation, precipitation patterns. This will change extreme and severe weather. We will need to adapt to these climatic changes and to know how to protect our communities we need climate data. ...

To understand climate, we need a global overview. National studies are not enough. To understand changes in circulation, interactions with mountains and vegetation, to understand changes in extremes, we need spatially resolved information and not just a few stations. ...

To reduce the influence of measurement errors and non-climatic changes (inhomogeneities) on our (trend) assessments we need dense networks. These errors are detected and corrected by comparing one station to its neighbours. The closer the neighbours are, the more accurate we can assess the real climatic changes. This is especially important when it comes to changes in severe and extreme weather, where the removal of non-climatic changes is very challenging. ... For the best possible data to protect our communities, we need dense networks, we need all the data there is.
The main governing body of the World Meteorological Organization (WMO) is currently meeting, until Friday next week (12 June). They are debating a resolution on climate data exchange. To show your support for the free exchange of climate data, please retweet or favourite the tweet below.

We are conducting a (hopefully) unique experiment with our climate system. Future generations of climatologists would not forgive us if we did not observe as well as we can how our climate is changing. To make expensive decisions on climate adaptation, mitigation and burden sharing, we need reliable information on climatic changes: only piggy-backing on meteorological observations is not good enough. We can improve data using homogenization, but homogenized data will always have much larger uncertainties than truly homogeneous data, especially when it comes to long-term trends.

To quote my virtual boss at the ISTI, Peter Thorne:
To conclude, worryingly not for the first time (think tropospheric temperatures in late 1990s / early 2000s) we find that potentially some substantial portion of a model-observation discrepancy that has caused a degree of controversy is down to unresolved observational issues. There is still an undue propensity for scientists and public alike to take the observations as a 'given'. As [this study by NOAA] attests, even in the modern era we have imperfect measurements.

Which leads me to a final proposition for a more scientifically sane future ...

This whole train of events does rather speak to the fact that we can and should observe in a more sane, sensible and rational way in the future. There is no need to bequeath onto researchers in 50 years time a similar mess. If we instigate and maintain reference quality networks that are stable SI traceable measures with comprehensive uncertainty chains such as USCRN, GRUAN etc. but for all domains for decades to come we can have the next generation of scientists focus on analyzing what happened and not, depressingly, trying instead to inevitably somewhat ambiguously ascertain what happened.
Building up such a reference network is hard, because we will only see the benefits much later. But already now, after about 10 years, the USCRN provides evidence that the siting of stations is in all likelihood not a large problem in the USA. The US reference network, with stations at perfectly sited locations not affected by urbanization or micro-siting problems, shows about the same trend as the homogenized historical US temperature data. (The reference network even has a somewhat larger, though non-significant, trend.)

There are a number of scientists working on trying to make this happen. If you are interested, please contact me or Peter. We will have to design such reference networks, show how much more accurate they would make climate assessments (together with the existing networks), and then lobby to make it happen.



Further reading

Metrologist Michael de Podesta seems to agree with the above post and wrote about the overconfidence of the mitigation sceptics in the climate record.

Zeke Hausfather: Whither the pause? NOAA reports no recent slowdown in warming. This post provides a comprehensive, well-readable (I think) overview of the NOAA article.

A similar well-informed article can be found on Ars Technica: Updated NOAA temperature record shows little global warming slowdown.

If you read the HotWhopper post, you will get the most scientific background, apart from reading the NOAA article itself.

Peter Thorne of the ISTI on The Karl et al. Science paper and ISTI. He gives more background on the land temperatures and makes a case for global climate reference networks.

Ed Hawkins compares the new NOAA dataset with HadCRUT4: Global temperature comparisons.

Gavin Schmidt, as a climate modeller, explains how well the new dataset fits the climate projections: NOAA temperature record updates and the ‘hiatus’.

Chris Merchant found about the same recent trend in his satellite sea surface temperature dataset and writes: No slowdown in global temperature rise?

Hotwhopper discusses the main egregious errors of the first two WUWT posts on Karl et al. and an unfriendly email of Anthony Watts to NOAA. I hope Hotwhopper is not planning any holidays. These will be busy times. Peter Thorne has the real back story.

NOAA press release: Science publishes new NOAA analysis: Data show no recent slowdown in global warming.

Thomas R. Karl, Anthony Arguez, Boyin Huang, Jay H. Lawrimore, James R. McMahon, Matthew J. Menne, Thomas C. Peterson, Russell S. Vose, Huai-Min Zhang, 2015: Possible artifacts of data biases in the recent global surface warming hiatus. Science. doi: 10.1126/science.aaa5632.

Boyin Huang, Viva F. Banzon, Eric Freeman, Jay Lawrimore, Wei Liu, Thomas C. Peterson, Thomas M. Smith, Peter W. Thorne, Scott D. Woodruff, and Huai-Min Zhang, 2015: Extended Reconstructed Sea Surface Temperature Version 4 (ERSST.v4). Part I: Upgrades and Intercomparisons. Journal of Climate, 28, pp. 911–930, doi: 10.1175/JCLI-D-14-00006.1.

Rennie, Jared, Jay Lawrimore, Byron Gleason, Peter Thorne, Colin Morice, Matthew Menne, Claude Williams, Waldenio Gambi de Almeida, John Christy, Meaghan Flannery, Masahito Ishihara, Kenji Kamiguchi, Albert Klein Tank, Albert Mhanda, David Lister, Vyacheslav Razuvaev, Madeleine Renom, Matilde Rusticucci, Jeremy Tandy, Steven Worley, Victor Venema, William Angel, Manola Brunet, Bob Dattore, Howard Diamond, Matthew Lazzara, Frank Le Blancq, Juerg Luterbacher, Hermann Maechel, Jayashree Revadekar, Russell Vose, Xungang Yin, 2014: The International Surface Temperature Initiative global land surface databank: monthly temperature data version 1 release description and methods. Geoscience Data Journal, 1, pp. 75–102, doi: 10.1002/gdj3.8.

Wednesday, 3 June 2015

Congress of the World Meteorological Organization, free our climate data



A small revolution is happening at the World Meteorological Organization (WMO). Its main governing body (the WMO Congress) is discussing a draft resolution stating that national weather services shall provide free and unrestricted access to climate data. The problem is the fine print. The fine print makes it possible to keep on refusing to share important climate data with each other.

The data situation is getting better: more and more countries are freeing their climate data. The USA, Canada and Australia have a long tradition of doing so. Germany, The Netherlands, Finland, Sweden, Norway, Slovenia, Brazil and Israel have just freed their data. China and Russia are pretty good at sharing data. Switzerland has concrete plans to free its data. I have probably forgotten many countries, and for Israel you currently still have to be able to read Hebrew, but things are definitely improving.

That there are large differences between countries is illustrated by this map of data availability for daily mean temperature data in the ECA&D database, a dataset that is used to study changes in severe weather. The green dots are stations whose data you can download and work with; the red dots are stations whose data ECA&D is only allowed to use internally to make maps. In the number of stations available you can clearly see many national boundaries; that reflects not just the number of real stations, but to a large part national policies on data sharing.



Sharing data is important

We need this data to see what is happening to the climate. We already had almost a degree of global warming and are likely in for at least another one. This will change the sea level, the circulation, precipitation patterns. This will change extreme and severe weather. We will need to adapt to these climatic changes and to know how to protect our communities we need climate data.

Many countries have set up Climate Service Centres or are in the process of doing so to provide their populations with the information they need to adapt. Here companies, (local) governments, non-governmental organisations and citizens can get advice on how to prepare themselves for climate change.

It makes a large difference how often we will see heat waves like the one in [[Europe in 2003]] (70 thousand additional deaths; Robine et al., 2008), in [[Russia in 2010]] (a death toll of 55,000, a crop failure of about 25% and an economic loss of about 1% of GDP; Barriopedro et al., 2011) or now in India. It makes a large difference how often a [[winter flood like in the UK in 2013-2014]] or [[the flood now in Texas and Oklahoma]] will occur. Once every 10, 100 or 1000 years? If it is 10 years, expensive infrastructural changes will be needed; if it is 1000 years, we will probably decide to live with the risk. It makes a difference how long droughts like the ones in California or in Chile will last, and being able to make regional climate predictions requires high-quality historical climate data.
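To see why record length matters for such statements, here is a minimal sketch with made-up numbers (not any operational method): the empirical return period of an event is simply the number of observed years divided by the number of years in which the event occurred, and with a short or sparse record a rare event may never have been observed at all.

```python
# Toy example with made-up numbers: empirical return period of an extreme
# event estimated from a series of annual maxima at one station.
import numpy as np

rng = np.random.default_rng(42)
annual_maxima = rng.gumbel(loc=50.0, scale=10.0, size=60)  # 60 synthetic years

threshold = 90.0  # the event we care about, e.g. a damaging precipitation sum
exceedances = np.sum(annual_maxima >= threshold)

if exceedances > 0:
    print(f"Roughly one event every {annual_maxima.size / exceedances:.0f} years")
else:
    print("Never observed in 60 years: the empirical estimate is undefined;")
    print("telling a 100-year from a 1000-year event needs more (shared) data.")
```

With only a few decades from a single station, distinguishing a 100-year from a 1000-year event is hopeless; pooling data from many stations internationally is what makes such estimates possible.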

One of the main outcomes of the current 17th WMO Congress will be the adoption of the Global Framework for Climate Services (GFCS). A great initiative to make sure that everyone benefits from climate services, but how will the GFCS succeed in helping humanity cope with climate change if there is almost no data to work with?

In its own resolution (8.1) on the GFCS, Congress recognizes this itself:
Congress noted that EC-66 had adopted a value proposition for the international exchange of climate data and products to support the implementation of the GFCS and recommended a draft resolution on this topic for consideration by Congress.

To understand climate, we need a global overview. National studies are not enough. To understand changes in circulation, interactions with mountains and vegetation, and changes in extremes, we need spatially resolved information, not just a few stations.

Homogenization

To reduce the influence of measurement errors and non-climatic changes (inhomogeneities) on our (trend) assessments we need dense networks. These errors are detected and corrected by comparing one station to its neighbours. The closer the neighbours are, the more accurately we can assess the real climatic changes. This is especially important when it comes to changes in severe and extreme weather, where the removal of non-climatic changes is very challenging.
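To illustrate the basic idea behind this relative homogenization (a minimal sketch with synthetic data, not one of the operational algorithms): in the difference between a candidate station and a close neighbour the shared regional climate signal largely cancels, so a non-climatic jump stands out much more clearly than in the candidate series alone.

```python
# Minimal sketch of relative homogenization with synthetic data (illustration
# only): the difference series between a candidate and a nearby neighbour
# removes most of the shared climate signal, exposing a non-climatic jump.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
regional = 0.007 * (years - 1900) + rng.normal(0.0, 0.5, years.size)  # shared climate

neighbour = regional + rng.normal(0.0, 0.2, years.size)  # local weather noise
candidate = regional + rng.normal(0.0, 0.2, years.size)
candidate[years >= 1960] += 0.8  # artificial inhomogeneity, e.g. a relocation

difference = candidate - neighbour  # shared signal cancels, the jump remains
print(f"Std of candidate series:  {candidate.std():.2f} K")
print(f"Std of difference series: {difference.std():.2f} K")
print(f"Estimated jump in 1960:   "
      f"{difference[years >= 1960].mean() - difference[years < 1960].mean():.2f} K")
```

The closer (and better correlated) the neighbour, the smaller the noise of the difference series, and the smaller the non-climatic jumps that can still be detected; that is why dense networks matter so much.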

For the global mean land temperature the non-climatic changes already represent 25% of the change: after homogenization (to reduce non-climatic changes), the trend in GHCNv3 is 0.8°C per century since 1880 (Lawrimore et al., 2011; table 4), while in the raw data this trend is only 0.6°C per century. That makes a large difference for our assessment of how far climate change has progressed, while for large parts of the world we currently do not have enough data to remove such non-climatic changes well. This results in large uncertainties.
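The 25% follows directly from the two trends just quoted; a quick back-of-the-envelope check, nothing more:

```python
# Back-of-the-envelope check of the numbers quoted above (GHCNv3, since 1880)
trend_homogenized = 0.8  # °C per century, after removing non-climatic changes
trend_raw = 0.6          # °C per century, raw data

adjustment = trend_homogenized - trend_raw
print(f"Adjustment: {adjustment:.1f} °C per century, "
      f"{adjustment / trend_homogenized:.0%} of the homogenized trend")
# Adjustment: 0.2 °C per century, 25% of the homogenized trend
```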

This 25% is a global number, but when it comes to the impacts of climate change, we need reliable local information. Locally the (trend) biases are much larger; on a global scale many biases cancel each other. For (decadal) climate prediction we need accurate variability on annual time scales, not "just" secular trends; this is again harder and has larger uncertainties. In the German climate prediction project MiKlip it was shown that a well-homogenized radiosonde dataset was able to distinguish much better between prediction systems and thus to better guide their development. Based on the physics of the non-climatic changes we expect that (trend) biases are much stronger for extremes than for the mean. For example, errors due to insolation are worst on hot, sunny and calm days, while they are much less of a problem on normal cloudy and windy days and thus matter less for the average. For the best possible data to protect our communities, we need dense networks; we need all the data there is.

WMO resolution

In theory the data exchange resolution will free everything you ever dreamed of. A large number of datasets is mentioned, from satellites to sea and lake level, from greenhouse gases to snow cover and river ice. But exactly for the historical climate station data that is so important to put climate change into perspective, a limitation is made: the international exchange is limited to the [[GCOS]] stations. The total number of GCOS stations is 1017 (1 March 2014). For comparison, Berkeley Earth and the International Surface Temperature Initiative have records for more than 30 thousand stations, and most GCOS stations are likely already included in those. Thus in the end, this resolution will free almost no new climate station data.

The resolution proposes to share "all available data". But it defines that basically as data that is currently already open:
“All available” means that the originators of the data can make them available under this resolution. The term recognizes the rights of Members to choose the manner by, and the extent to, which they make their climate relevant data and products available domestically and for international exchange, taking into consideration relevant international instruments and national policies and legislation.
I have not heard of cases where national weather services denied access to data just for the fun of it. Normally they say it is due to "national policies and legislation". Thus this resolution will not change much.

I have no idea where these counterproductive national policies come from. For new instruments, for expensive satellites, for the [[Argo system]] to measure the ocean heat content, it is normally specified that the data should be open to all so that society benefits maximally from the investment. In America the data is naturally seen as free for all because the taxpayer has already paid for it.

In the past there may have been strategic (military) concerns. Climate and weather information can determine wars. However, nowadays weather and climate models are so good that the military benefit of observations is limited. Had Napoleon had a climate model, his troops would have been given warmer clothes before leaving for Russia. To prepare for war you do not need more accuracy than that.

The ministers of finance seem to like the revenues from selling climate data, but I cannot imagine them making much money that way. It is nothing in comparison to the impacts of climate change or the costs of maladaptation. It is much less than the money society has invested in the climate observations, an investment that is devalued by sitting on the data and not sharing it.

All that while the WMO theoretically recognises how important sharing data is. In another resolution (9.1), ironically on big data, it writes:
With increasing acceptance that the climate is changing, Congress noted that Members are again faced with coming to agreement with respect to the international exchange of data of importance of free and unrestricted access to climate-related information at both global and regional levels.
UN and Data Revolution. In August 2014 UN Secretary-General Ban Ki-moon asked an Independent Expert Advisory Group to make concrete recommendations on bringing about a data revolution in sustainable development. The report indicates that too often existing data remain unused because they are released too late or not at all, not well-documented and harmonized, or not available at the level of detail needed for decision-making. More diverse, integrated, timely and trustworthy information can lead to better decision-making and real-time citizen feedback.
All that while citizen scientists are building up huge meteorological networks in Japan and North America. These citizen scientists are happy to share their data, and the weather services should fear that their closed datasets will soon become a laughing stock.

Free our climate data

My apologies if this post sounds angry. I am angry. If that is reason to fire me as chair of the Task Team on Homogenization of the WMO Commission for Climatology, so be it. I cannot keep my mouth shut while this is happening.

Even if this resolution is a step forward, and I am grateful to the people who made it happen, it is inexcusable that in these times the weather services of the world do not do everything they can to protect the communities they work for and freely share all climate data internationally. I really cannot understand how the limited revenues from selling data can seriously be seen as a reason to accept huge societal losses from climate change impacts and maladaptation.

Don't ask me how to solve this deadlock, but WMO Congress, it is your job to solve it. You have until Friday next week, the 12th of June.

[UPDATE. It might not be visible here because there are only a few comments, but this post is being read a lot for one without a connection to the mass media. That often happens with science posts that do not say anything controversial. (All the scientists I know see the restrictions on data exchange as holding climate science back.) Also the tweet to this post is popular; I have never had one like this before. Please retweet it to show your support for the free exchange of climate data.

]

[UPDATE. Wow, the above tweet has now been seen over 7,000 times (4 June, 19h). Not bad for a small station data blog; I have never seen anything like this. Also Sylvie Coyaud, who blogs at La Repubblica, now reports about freeing climate data (in Italian). If there are journalists in Geneva, do ask the delegates about sharing data, especially when they present the Global Framework for Climate Services as the prime outcome of this WMO Congress.]

Related reading

Nature published a column by Martin Bobrow of the Expert Advisory Group on Data Access, which has just written a report on the governance of scientific data access: Funders must encourage scientists to share.

Why raw temperatures show too little global warming

Just the facts, homogenization adjustments reduce global warming

New article: Benchmarking homogenisation algorithms for monthly data

Statistical homogenisation for dummies

A framework for benchmarking of homogenisation algorithm performance on the global scale - Paper now published

Monday, 1 June 2015

No, blog posts cannot replace scientific articles

Journalist Nate Silver of FiveThirtyEight proposes to replace peer-reviewed scientific articles with blog posts. Well, almost.


That would be a disaster for science, but somehow some twitterers apparently liked the idea. Maybe that is because the journalistic and public view covers only a minute portion of science; the tip of the iceberg could even be an understatement here. For example, together with Ralf Lindau I have just published a nice paper on how well we can determine the date of an abrupt non-climatic change in a temperature time series.


The one "retweet" is me. The two "favourites" are two colleagues working on homogenization. Even I am wondering whether I should write a blog post about this paper and it certainly will not get into the press. That does not mean that it is not important for science, just that it is highly technical and that it will "just" help some scientists to better understand how to remove non-climatic changes. This will hopefully lead to better methods to remove non-climatic changes and finally to better assessments of the climatic changes. The latter study would be interesting to the public, all the studies it is based on, not so much. I may actually write a blog post about this paper, but then mainly explaining why it is important, rather than what is in the paper itself. The why could be interesting.

Of all the articles I have written, only one was marginally interesting for the press: the paper where we showed that homogenization methods to remove non-climatic changes from station data work. The FAZ, a German conservative newspaper, reported on it, ironically concluding that climate change is not a measurement error and that the University of Bonn is setting up a crisis helpline for mitigation sceptics. With that angle they could make it sufficiently interesting for their news ticker. On the other hand, for the scientific community working on homogenization this was one of the main papers. For that community the importance lay in the improved validation methodology and the results showing which kind of method works best; for the press this was already too much detail, and they just reported that the methods work, which is nothing new for us.

Scientific literature

The science the public does not see, neither in the media nor on blogs, is also important for science. We will need a way to disseminate science that also works for the other 99.9% of science. Blogs and "blog review" won't do for this part.

Peer review is sometimes seen as gatekeeping, but it actually helps the underdogs with fringe ideas: ideas that otherwise would not be taken seriously, ideas in which people would otherwise not be willing to invest their time to check them and see how they could build on them. And it saves a lot of time that otherwise every reader would have to invest to check everything much more carefully.

Without peer review, scientists would have a stronger tendency to focus on big-name scientists, whose work is more likely to be worthwhile. Working without peer review was possible a century ago, when all scientists still knew each other; it would not work well with the current large scientific community. To me it also seems a really bad idea for interdisciplinary science, because it is hard to judge how credible a paper from another field is, just like it is hard for journalists to judge the quality of a paper without peer review.

Moving science dissemination to blogs may make this winner-takes-all tendency even stronger. That is how media work: it gives the incumbents much more visibility and power. I do not think that my blog posts are better now than two years ago, but I have many more readers now. Building up an audience takes time.

Quality and deliberation are really important in science. Maybe it moves frustratingly slowly, but it more likely moves in the right direction. The Winnower jumped on the tweet of Nate Silver to promote its tool to give blog posts a DOI, a digital object identifier with which you can cite the object and that guarantees that it is stored for a decade. An interesting tool that may be useful, but personally I do not think I have written a blog post that was good enough that it should have a DOI. The ideas you find on this blog are hopefully useful or inspiring, but this blog is not science. The precision needed in science can only be found in my scientific articles. I actually wrote on my "about" page:
Some of the posts contain ideas, which may be converted into a work of science. If you are interested in this, there is no need to refer to these posts: you are welcome to call the idea your own. The main step is not to write down a vague idea in a few hours, but to recognize that an idea is worth working on for a year to convert it into a scientific study.
Giving my posts a DOI may hinder that. Then people may feel they have to cite me, it may no longer be worthwhile for them to invest so much time in the idea, and the idea would not be turned into science.

It would be wonderful if there were a solution for the dissemination of science away from the scientific publishers. Many publishers have demonstrated with their actions that they no longer see themselves as part of the scientific community, but that their main priority is the protection of their near-monopoly profit margins of 30 to 40%. If the people designing such new dissemination solutions think only of the high-interest paper and do not find a solution for everyday papers, their solution will not help science and it will not be adopted. Every paper will need to be desk rejected or peer reviewed; every paper will have to be woven into a network of trust. Continual review of popular papers would be an interesting innovation, but a review at the beginning is also needed.

Nate Silver implicitly assumes that fraud is important and hinders scientific progress. That could well be the journalistic and public impression, because these cases get a lot of publicity. But fraud is actually extremely rare and hardly a problem in the natural sciences. It is thus not a particularly good reason to change the customs of the scientific community. If such a change makes something else worse, it is probably not worth it; if it makes something else better, we should do it for that reason. (Fraud seems to be more prominent in the medical and social sciences; it is hard for me to judge whether the problem there is big enough to be worth accepting trade-offs to fight it.)

Informal communication


Chinese calligraphy with water on a stone floor. More ephemeral communication can lead to more openness, improve the exchange of views and produce higher-quality feedback.
As a blogger I am naturally not against blogging, even if I mainly see it as a hobby and not as part of my work. It would be great if more colleagues would blog; I would certainly read those posts. Blogging could replace a part of the informal communication that would otherwise happen at conferences, on email distribution lists, in normal email and at the famous water cooler. It is good for keeping people up to date on the latest papers, conferences and datasets. I would include Twitter in that category as well.

Social media will not be able to replace conferences completely. Scientists being human, I feel that real contact is still important, especially at the beginning of collaborations. Discussions also often work better in person. The ephemeral quality of debating is important: on Twitter and blogs everything is stored for eternity, and that stifles debate.

The blog of Nature Chemistry recently commented on social media and how it has changed post-publication review, which used to take place at lab meetings or over coffee at conferences. Now it is written down for all eternity. That creates problems for both sides: the criticized paper is damaged even if the criticism turns out to be unwarranted, while talking about a paper over coffee with people you trust, who may immediately correct you if you are wrong, has much smaller repercussions than writing the same thing under your name on a blog.

If we move to more internet discussions and internet video presentations, we should try to make them more like a real meeting, more like [[snapchat]]: limited in time, and you can see it only once.

Another aspect of conferences that is currently missing on the net is that a group of experts is present at the same time and place. The quick bouncing around of ideas is missing in the virtual world, where discussions go much more slowly and many people are simply not present. Typically only two people discuss with each other on a scientific level, rarely a few more.

Concluding, I would say that social media cannot replace scientific journals. They will not replace scientific conferences and workshops either, but they may create a new place for informal discussions. To make the internet more useful for this, we may have to make it more ephemeral.




Related reading

Nature Chemistry blog: Post-publication peer review is a reality, so what should the rules be?

Peer review helps fringe ideas gain credibility

The value of peer review for science and the press

Three cheers for gatekeeping


* Photo of "Taoist monk" by Antoine Taveneaux - Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons.