
Wednesday, 12 June 2019

The World Meteorological Organisation will build the greatest global climate change network

“Having left a legacy of a changing climate, this [reference climate network] is the very least successive generations can expect from us in order to enable them to more precisely determine how the climate has changed.”
 

Never trust a headline. The WMO cannot build the network. But the highest body of the World Meteorological Organisation (WMO) has approved our plans for a Global Climate Reference Station Network. Its Congress, with the leaders of all member organisations, meets every four years in neutral Geneva, Switzerland, and has approved the report of the Global Climate Observing System (GCOS) Task Team on a Global Surface Reference Network. The WMO is one of the oldest international organisations and coordinates the work of its members, mostly national weather services. So the WMO will not build the network itself; we are now looking for volunteers.

(Disclosure: I am a member of the Task Team.* Funny: in a team with big name climatologists I am somehow the "Climate scientist representative".)

Humanity is performing the greatest experiment in its history. We better measure it accurately. For humanity and for science.

Never trust a headline. What the heck does “greatest” mean? As someone trying to estimate how much the climate has changed, I would have been so happy if people had continued the really poor measurement methods they used in the 19th century: mercury thermometers placed in the north-facing (poleward) window of an unheated room. Being so close to the building is not good for ventilation, the sun could get on the sensor or heat the wall beneath. I would have lost that fight. Mercury thermometers are now forbidden, weather prediction models would be more accurate than such observations, and the finance minister would have forced us to switch to automatic measurements. We may think that how we measure today is good enough, but people in 2100 will likely disagree.

Following at least the biggest technological steps will be unavoidable. If that happens we will make long parallel comparisons between the old and the new set-up; estimating differences in the averages is not enough, as the variability is also affected, which is harder to estimate. The sources of measurement error will change, and with them their dependence on the weather.
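
To make the idea of such a parallel comparison concrete, here is a minimal sketch in Python with synthetic data standing in for a multi-year parallel measurement; the column names and the 0.2 °C offset are assumptions for illustration, not results from any real comparison.

    # Sketch: summarise a parallel measurement of an old and a new set-up.
    # Synthetic daily series; the column names "old"/"new" are assumptions.
    import numpy as np
    import pandas as pd

    def compare_parallel(df):
        """Summarise the difference between two co-located daily series."""
        diff = df["new"] - df["old"]
        return {
            "mean difference": diff.mean(),                          # shift in the average
            "std of difference": diff.std(),                         # day-to-day scatter between set-ups
            "variability ratio": df["new"].std() / df["old"].std(),  # change in variability
            "corr(diff, old)": diff.corr(df["old"]),                 # does the difference depend on the weather?
        }

    rng = np.random.default_rng(0)
    old = pd.Series(15 + 8 * np.sin(np.linspace(0, 20, 2000)) + rng.normal(0, 2, 2000))
    new = old + 0.2 + rng.normal(0, 0.3, 2000)   # assumed: new set-up reads 0.2 degrees warmer
    print(compare_parallel(pd.DataFrame({"old": old, "new": new})))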

Any data processing performed today, even just averaging or applying a calibration factor, will run on hardware and software that is not available in 2100. Any instrument we buy off the shelf will not be available in 2100; the upper-air reference network is being forced to change its instruments because Vaisala will soon no longer sell them. So “best” means that we have open hardware and open software, so that we can keep building the instrument, can redo the data processing from scratch and can recreate the exact same processing on newer computers, or whatever we use after the Butlerian Jihad.

Photo of a station of the US Climate Reference Network with a prominent wind shield for the rain gauges.
A station of the US Climate Reference Network.

Never trust a headline. What does measuring climate mean? I work on improving trend estimates based on historical measurements made in many different ways, by comparing neighbouring stations with each other (statistical homogenisation). This makes me acutely aware that there is only so much you can do with statistical homogenisation; a considerable error remains. It works relatively well for annual average temperatures because the correlations between stations are high. Much harder are estimates of changes in the variability around the means, which are important for changes in extreme weather. Estimates of changes in precipitation, humidity, insolation, cloud cover, snow depth and so on have especially wide confidence intervals because statistical homogenisation is very hard for them. For these observations, having reference data that does not need to be statistically homogenised is crucial. These variables are very important for climate change impacts and for understanding how the climate is changing. Reference networks can not only help in quantifying these confidence intervals, but, as an independent line of evidence, also provide confidence that the confidence intervals are right.
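
As a rough illustration of how relative statistical homogenisation works, the sketch below compares a candidate station with a well-correlated neighbour and looks for the largest mean shift in their difference series. It is a deliberately simple stand-in for the much more sophisticated break-detection tests used in practice; all data and numbers are synthetic.

    # Sketch: inhomogeneities show up as jumps in the difference series between
    # a candidate station and a well-correlated neighbour (synthetic example).
    import numpy as np

    def most_likely_break(candidate, neighbour):
        """Return the index and size of the largest mean shift in the difference series."""
        diff = np.asarray(candidate) - np.asarray(neighbour)
        best_k, best_score, best_size = None, 0.0, 0.0
        for k in range(5, len(diff) - 5):               # require a few years on either side
            left, right = diff[:k], diff[k:]
            size = right.mean() - left.mean()
            # crude signal-to-noise ratio of the shift
            score = abs(size) / np.sqrt(left.var() / len(left) + right.var() / len(right))
            if score > best_score:
                best_k, best_score, best_size = k, score, size
        return best_k, best_size

    # Both stations share the regional climate signal; the candidate gets a
    # 0.5 degree jump (e.g. a relocation) after year 60.
    rng = np.random.default_rng(1)
    regional = np.cumsum(rng.normal(0.01, 0.1, 100))
    neighbour = regional + rng.normal(0, 0.1, 100)
    candidate = regional + rng.normal(0, 0.1, 100)
    candidate[60:] += 0.5
    print(most_likely_break(candidate, neighbour))      # finds a break near year 60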

The preliminary proposal for variables to observe in reference quality is:

  • Air temperature
  • Precipitation
  • Pressure
  • Wind speed and direction (10 m)
  • Relative humidity
  • Surface radiation (down and up)
  • Land Surface Temperature
  • Soil moisture
  • Soil temperature
  • Snow/ice (Snow Water Equivalent)
  • Albedo
If you disagree or have additional ideas please contact us.


Tiered system of systems approach.

Never trust a headline. By itself this network will not be the best way to study climate change; we also need the other stations. The reference network will be the stable backbone of the entire climate observation system: the part that is best at estimating long-term trends, while we need the other stations to reduce sampling errors and to study spatial patterns.

Maintaining a reference station will clearly be more expensive than maintaining a standard climate station, so the number of stations will be limited. For the long-term warming we expect to need about 200 stations, well spread over the world. This takes into account that, even if we select locations where we expect nothing will happen in the next century, we will still lose some stations to conflict or "progress".

At a reference station (or nearby), measurements with the locally standard set-up should preferably also be made, so that the two can be compared and provide information on any measurement problems. This will improve the quality of the entire network. A network of 200 reference stations would on average have about one station per country. For the comparison with the national networks, having at least one station per country would also be desirable, but large countries will need multiple stations, and it is also more efficient when countries with a reference station host several of them, because a large part of the costs are overheads (well-trained operators and well-instrumented laboratories).


A society grows great when old men plant trees whose shade they know they will never sit in - Greek proverb (I did not check the provenance; experience tells me the source of such quotes is always wrong, but do leave a comment).

Never trust a headline. The reference network is not only interesting for studying climate change. If it were, we would need to wait many decades before it becomes useful, and in this age that would likely mean it would not be funded. Due to the metrological standards for computing confidence intervals and the traceability back to SI standards, the measurements will be comparable all over the world within specified confidence intervals for the absolute values, not just for the (e.g., temperature) anomalies mostly used to study climate change. Together with the representativeness of the stations for their region, this makes the network useful for the validation of absolute estimates from satellites or atmospheric models.

The comparison of the reference measurements with the national networks will also produce valuable information within the first decade. For example, the American Climate Reference Network shows that the warming estimates of the national network are reliable and, if anything, underestimate the warming in America; the reference network has the larger trend.

Graph showing the US climate reference network (USCRN) and the normal US network (ClimDiv)
The US Climate Reference Network (USCRN; red line) is below the normal national station network (ClimDiv; green line) at the beginning and above it at the end. The trend of the reference network is thus larger. (The values themselves are quite noisy because America is just a small part of the Earth, and trends over such short periods do not contain information on long-term warming.)

Never trust a headline. We are land animals, and it thus comes naturally to us to see climate stations as prototypical climate observations, but the climate system is much richer. There is already a network for reference upper-air measurements (GRUAN) made with weather balloons (radiosondes). The high metrological quality of the Argo network probably also makes it a reference network; it measures ocean temperature profiles to estimate the ocean heat content.

Both the upper air and the oceans are wonderfully uniform media to measure; characterising the influence of the surroundings and preventing changes therein will be the main additional challenge of a land station network.

Studying climatic changes in urban regions is also important. Here it would be even more important to accurately describe the surroundings, because changes will happen. Thus urban regions would need their own reference network.

We hope that our reference network will stimulate the founding of further reference networks. The cryosphere (the part of the Earth which is frozen) needs specialised observations. Hydrological and marine surface observations in reference quality would be very valuable; we should never forget that 70% of the Earth is water. Observations of tiny airborne particles (aerosols) and clouds could be made in reference quality.



In other news: the WMO Congress has also decided to make and share more real-time observations for weather predictions. The norms for quality and quantity will become stricter and will be monitored.

20-25% of WMO members are already compliant.

25-30% would be compliant if they shared their data internationally. Many of these countries are big, so they represent a larger part of the world.

The rest will need international support to build the capacity to extend their measurement program and share the data.

Hopefully, the Green Climate Fund can help. The 24/7 monitoring by the WMO will give feedback to the funders on the value of their investment.

Climatology has the advantage that national weather services perform observations operationally. This institutional support has produced the long series we can use to study climate change. We currently see huge changes in the biosphere. Insects seem to be vanishing, but this is really hard to study without long-term observations. The ecological long-term observational programs need institutional support.

Where possible these reference networks should aim to use the same locations, so that the observations can support each other, as well as to reduce costs. It may be easier to obtain funding for reference networks in a large coalition than for every network separately. So I hope that these other communities will develop similar plans. If you know of anyone in these communities, please point them to this post or our report.

We estimate that this reference land station network will cost a few million dollars per year. Running this network for a decade would thus still cost much less than a single satellite mission, which measures far fewer climate variables with much less accuracy and less confidence in that accuracy. If you know someone at Lockheed Martin or Airbus who may be interested in building a space-grade reference network and has the right lobbyists, please tell them of this initiative.

Coming back to the first paragraph: we need volunteers. We need weather services interested in setting up reference stations, and we need ones interested in becoming a Lead Centre. A Lead Centre would coordinate the network, organise joint calibrations and comparison campaigns, lead the drawing up of measurement requirements, and so on. To spread the workload, it could be an idea to devote one Lead Centre to each instrument or observation type. Please talk about this with your colleagues and spread this post.

UPDATE November 2020. The World Meteorological Organization Commission for Observation, Infrastructure and Information Systems (INFCOM) has approved the plan. The climate reference network implementation plan is now part of the WMO Infrastructure Commission workplan, which includes among its outputs and deliverables the establishment of a GSRN, the identification of candidate stations and the call for the Lead Centre. Based on this and on the recommendation from the report of the GSRN task team, published in February 2019 (GCOS-226), a new task team has been established to develop (i) a draft implementation plan for the GSRN, (ii) a proposal for management and governance structures of the GSRN, and (iii) a process for nominating and approving stations contributing to the GSRN.


* The opinions in the post are mine, the report represents the opinion of the Task Team.

Further reading

Thorne P.W., H.J. Diamond, B. Goodison, S. Harrigan, Z. Hausfather, N.B. Ingleby, P.D. Jones, J.H. Lawrimore, D.H. Lister, A. Merlone, T. Oakley, M. Palecki, T.C. Peterson, M. de Podesta, C. Tassone, V. Venema and K.M. Willett, 2018: Towards a global land surface climate fiducial reference measurements network. Int J Climatol., 38, pp. 2760–2774. https://doi.org/10.1002/joc.5458

The report of the GCOS Task Team: GCOS Surface Reference Network (GSRN): Justification, requirements, siting and instrumentation options

GCOS, 2017: Report of the 1st Meeting of the GCOS Surface Reference Network (GSRN) Task Team
Maynooth, Ireland, 1-3 November 2017.

My first post trying to get the discussion going in October 2016: A stable global climate reference network

January 2018 GCOS Newsletter on designing a GCOS Surface Reference Network

Thursday, 1 February 2018

GCOS Newsletter on designing a GCOS Surface Reference Network

Outcomes of AOPC Task Team, 1-3 November 2017, Maynooth, Ireland
Article in the GCOS Newsletter of January 2018



While not perfect, the in-situ component of the global climate observing system has been broadly successful in contributing to the detection, attribution, and monitoring of climate change. Measurements of surface meteorological parameters have been made for more than a century in many parts of the world and, together with satellites and other in-situ systems, have provided the evidence for the Intergovernmental Panel on Climate Change to conclude in its last two assessment reports that the evidence for a warming world is unequivocal (IPCC, 2013).

However, the demands on the climate observing systems are ever increasing and a more rigorous assessment of future climate change and variability is needed. This can most plausibly be delivered by a coordinated metrological reference-measurement approach to such monitoring at a sufficient subset of global sites. The principles for such a reference network are traceability, comparability, representativeness, long-term operational viability, full data and metadata retention and open data provision. Reference networks currently exist that have proven value, like the US Climate Reference Network (USCRN), the Global Climate Observing System (GCOS) Reference Upper Air Network (GRUAN), and Cryonet stations from WMO’s Global Cryosphere Watch.

At the request of GCOS Atmospheric Observation Panel for Climate (AOPC) and the WMO Commission for Climatology, a paper outlining the steps toward establishing a GCOS Surface Reference Network (GSRN) was developed and has now been accepted for publication in the International Journal of Climatology. In 2017, the AOPC agreed to the creation of a 2 year task team whose main objective is to assess the feasibility of a global surface reference network by identifying the major stakeholders, the benefits, the practicality of doing this, and the costs.

The task team, chaired by Howard Diamond (US National Oceanic and Atmospheric Administration/Oceanic and Atmospheric Research NOAA/OAR, Air Resources Laboratory), includes experts from the metrology community, WMO's CIMO, Numerical Weather Prediction, the climate community and other GCOS networks. It met for the first time from 1 to 3 November 2017 at Maynooth University, Ireland. The meeting agreed that the primary benefits of a GCOS Surface Reference Network would be:

  • A key step in improving the long-term accuracy, stability and comparability of the observations, resulting in improved confidence in detecting the global increase in temperature, as well as in the link to historical records.
  • Rigorously characterized time series from these sites will lead to a better understanding of important climate-related processes, including extreme events, and will be key to assessing mitigation effectiveness.
  • Observations from a GSRN can be used to improve measurements made at other, non-reference sites, and co-located reference-quality measurements will provide a valuable data set for the calibration and validation of satellite data.
  • New techniques and equipment can be tested at the reference sites, which will also provide good locations to base future field campaigns on.
In addition to WMO Members contributing measurement sites, a key catalyst for the success of the GSRN would be the establishment of a global lead center structure to help ensure the adequate coordination of all GSRN activities.
The task team will produce a concept note that will be used to get feedback from the Members on whether there is interest from their country in participating, and it will include a proposed list of steps to follow in the GSRN implementation.



Related post

A stable global climate reference network. Some first thoughts on how to design and organise such a global reference network.


* Top photo: US Climate Reference Network.
* Last photo of automatic weather station at Cape Morris Jesup, the northernmost point of mainland Greenland, taken by the technicians of the Danish weather service and kindly offered by Ruth Mottram.

Monday, 10 October 2016

A stable global climate reference network


Historical climate data contains inhomogeneities, for example due to changes in the instrumentation or the surroundings. Removing these inhomogeneities to get more accurate estimates of how much the Earth has actually warmed is a really interesting problem. I love the statistical homogenization algorithms we use for this; I am a sucker for beautiful algorithms. As an observationalist it is great to see the historical instruments and to read how scientists understood their measurements better and designed new instruments to avoid errors.

Still, for science it would be better if future climatologists had an easier task and could work with more accurate data. Let's design a climate-change-quality network that is as stable as we can humanly make it, to study the ongoing changes in the climate.

Especially now that the climate is changing, it is important to accurately predict the climate for the coming season, year, decade and beyond, at regional and local scales. That is the information (local) governments, agriculture and industry need to plan, adapt, prepare and limit the societal damage of climate change.

Historian Sam White argues that the hardship of the Little Ice Age in Europe was not just about cold, but also about the turbulent and unpredictable weather. In the coming century, too, much hardship can be avoided with better predictions. To improve decadal climate prediction of regional changes and to understand changes in extreme weather, we need much better measurements. For example, with a homogenized radiosonde dataset, the improvements in the German decadal prediction system became much clearer than with the old dataset.

We are performing a unique experiment with the climate system and the experiment is far from over. It would also be scientifically unpardonable not to measure this ongoing change as well as we can. If your measurements are more accurate, you can see new things; methodological improvements that lead to smaller uncertainties are one of the main factors that bring science forward.



A first step towards building a global climate reference network is agreeing on a concept. This modest proposal for preventing inhomogeneities due to poor observations from being a burden to future climatologists is hopefully a starting point for this discussion. Many other scientists are thinking about this. More formally there are the Rapporteurs on Climate Observational Issues of the Commission for Climatology (CCl) of the World Meteorological Organization (WMO). One of their aims is to:
Advance specifications for Climate Reference Networks; produce a statement of guidance for creating climate observing networks or climate reference stations with aspects such as types of instruments, metadata, and siting;

Essential Climate Variables

A few weeks ago Han Dolman and colleagues wrote a call to action in Nature Geoscience titled "A post-Paris look at climate observations". They argue that while the political limits are defined for temperature, we need climate-quality observations for all essential climate variables listed in the table below.
We need continuous and systematic climate observations of a well-thought-out set of indicators to monitor the targets of the Paris Agreement, and the data must be made available to all interested users.
I agree that we should measure much more than just temperature. It is quite a list, but we need that to understand the changes in the climate system and to monitor the changes in the atmosphere, oceans, soil and biology that we will need to adapt to. Not in this list, but important, are biological changes; ecology especially needs support for long-term observational programs, because it lacks the institutional support the national weather services provide on the physical side.

Measuring multiple variables also helps in understanding measurement uncertainties. For instance, in the case of temperature measurements, additional observations of insolation, wind speed, precipitation, soil temperature and albedo are helpful. The US Climate Reference Network measures this wind speed at the height of the instrument (and of humans) rather than at the meteorologically standard height of 10 m.

Because of my work, I am mainly thinking of the land surface stations, but we need a network for many more observations. Please let me know where the ideas do not fit to the other climate variables.

Table. List of the Essential Climate Variables; see original for footnotes.

Atmospheric (over land, sea and ice)
  • Surface: Air temperature, Wind speed and direction, Water vapour, Pressure, Precipitation, Surface radiation budget.
  • Upper-air: Temperature, Wind speed and direction, Water vapour, Cloud properties, Earth radiation budget (including solar irradiance).
  • Composition: Carbon dioxide, Methane and other long-lived greenhouse gases, Ozone and Aerosol, supported by their precursors.

Oceanic
  • Surface: Sea-surface temperature, Sea-surface salinity, Sea level, Sea state, Sea ice, Surface current, Ocean colour, Carbon dioxide partial pressure, Ocean acidity, Phytoplankton.
  • Sub-surface: Temperature, Salinity, Current, Nutrients, Carbon dioxide partial pressure, Ocean acidity, Oxygen, Tracers.

Terrestrial
  • River discharge, Water use, Groundwater, Lakes, Snow cover, Glaciers and ice caps, Ice sheets, Permafrost, Albedo, Land cover (including vegetation type), Fraction of absorbed photosynthetically active radiation, Leaf area index, Above-ground biomass, Soil carbon, Fire disturbance, Soil moisture.

Comparable networks

There are comparable networks and initiatives, which likely shape how people think about a global climate reference network. Let me thus describe how they fit into the concept and where they are different.

There is the Global Climate Observing System (GCOS), which is mainly an undertaking of the World Meteorological Organization (WMO) and the Intergovernmental Oceanographic Commission (IOC). They observe the entire climate system; the idea of the above list of essential climate variables comes from them (Bojinski and colleagues, 2014). GCOS and its member organizations are important for coordinating the observations, for setting standards so that measurements can be compared, and for defending the most important observational capabilities against government budget cuts.

Especially important from a climatological perspective is a new program to ask governments to recognize centennial stations as part of the world heritage. If such long series are stopped or the station is forced to move, a unique source of information is destroyed or damaged forever. That is comparable to destroying ancient monuments.



A subset of the meteorological stations is designated as the GCOS Surface Network, measuring temperature and precipitation. These stations have been selected for their length, quality and coverage of all regions of the Earth. Their monthly data is automatically transferred to global databases.

National weather services normally take good care of their GCOS stations, but a global reference network would have much higher standards and would also provide data at better temporal resolution than monthly averages, to be able to study changes in extreme weather and weather variability.



There is already a global radiosonde reference network, the GCOS Reference Upper-Air Network (GRUAN, Immler and colleagues, 2010). This network provides measurements with well-characterized uncertainties, and extensive parallel measurements are made when a site transitions from one radiosonde design to the next. No proprietary software is used, to make sure it is known exactly what happened to the data.

Currently they have about 10 sites, a similar number is on the list to be certified, and the plan is to make this a network of about 30 to 40 stations; see map below. Especially welcome would be partners to start a site in South America.



The observational system for the ocean, Argo, is, as far as I can see, similar to GRUAN. It measures temperature and salinity (Roemmich and colleagues, 2009). If your floats meet the specifications of Argo, you can participate. Compared to land stations the measurement environment is wonderfully uniform. The instruments typically work for a few years; their life span is thus between that of a weather station and that of a one-way radiosonde ascent. This means that the instruments may deteriorate somewhat during their lifetime, but maintenance problems are more important for weather stations.

A wonderful explanation of how Argo works, for kids:


Argo has almost four thousand floats. They are working on a network with spherical floats that can go deeper.



Finally, there are a number of climate reference networks of land climate stations. The best known is probably the US Climate Reference Network (USCRN, Diamond and colleagues, 2013). It has 131 stations. Every station has 3 identical high-quality instruments, so that measurement problems can be detected and the outlier attributed to a specific instrument. To find these problems quickly, all data is relayed online and checked at their main office. Regular inspections are performed and everything is well documented.



The USCRN has selected new locations for its stations, which are expected to be free of human changes of the surroundings in the coming decades. This way it takes some time until the data becomes climatologically interesting, but they can already be compared with the normal network and this gives some confidence that its homogenized data is okay for the national mean; see below. The number of stations was sufficient to compute a national average in 2005/2006.



Other countries, such as Germany and the United Kingdom, have opted to make existing stations into a national climate reference network. The UK Reference Climatological Stations (RCS) have a long observational record spanning at least 30 years and their distribution aims to be representative of the major climatological areas, while the locations are unaffected by environmental changes such as urbanisation.


German Climate Reference Station which was founded in 1781 in Bavaria on the mountain Hohenpeißenberg. The kind of weather station photo WUWT does not dare to show.
In Germany the climate reference network consists of existing stations with a very long history. Originally these were the stations where conventional manual observations continued. Unfortunately, they will now also switch to automatic observations; fortunately, only after making a long parallel measurement to see what this does to the climate record*.

An Indian scientist proposes an Indian Climate Reference Network of about 110 stations (Jain, 2015). His focus is on precipitation observations. While temperature is a good way to keep track of the changes, most of the impacts are likely due to changes in the water cycle and storms. Precipitation measurements have large errors; it is very hard to make precipitation measurements with an error below 5%. When these errors change, that produces important inhomogeneities. Such jumps in precipitation data are hard to remove with relative statistical homogenization because the correlations between stations are low. If there is one meteorological parameter for which we need a reference network, it is precipitation.
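
A small sketch of why low inter-station correlation hurts so much: a break has to be detected against the noise of the difference series between neighbours, and that noise grows as the correlation drops. The correlation values in the example are rough illustrative choices, not measured numbers.

    # Sketch: noise of the difference series between two neighbours as a
    # function of their correlation (all values synthetic and illustrative).
    import numpy as np

    rng = np.random.default_rng(5)
    n_years = 100

    def difference_noise(correlation, std=1.0):
        """Standard deviation of the difference series for a given neighbour correlation."""
        shared = rng.normal(0, std, n_years)
        a = np.sqrt(correlation) * shared + np.sqrt(1 - correlation) * rng.normal(0, std, n_years)
        b = np.sqrt(correlation) * shared + np.sqrt(1 - correlation) * rng.normal(0, std, n_years)
        return (a - b).std()

    # Illustrative correlations: high for annual temperature, low for precipitation.
    for r in (0.95, 0.7, 0.3):
        print(f"neighbour correlation {r:.2f}: noise of difference series {difference_noise(r):.2f}")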

Network of networks

For a surface station Global Climate Reference Network, the current US Climate Reference Network is a good template when it comes to the quality of the instrumentation, management and documentation.

A Global Climate Reference Network does not have to do the heavy lifting all alone. I would see it as the temporally stable backbone of the much larger climate observing system. We still have all the other observations that help to make sampling errors smaller and provide the regional information you need to study how energy and mass moves through the climate system (natural variability).

We should combine them in a smart way to benefit from the strengths of all networks.



The Global Climate Reference Network does not have to be large. If the aim is to compute a global mean temperature signal, we need roughly as many samples as we would need to compute the US mean temperature signal, which is on the order of 100 stations. Thus, on average, every country in the world would have one climate reference station.

The figure on the right, from Jones (1994), compares the temperature signal from 172 selected stations (109 in the Northern Hemisphere, 63 in the Southern Hemisphere) with the temperature signal computed from all available stations. There is nearly no difference, especially with respect to the long-term trend.

Callendar (1961) used only 80 stations, but his temperature reconstruction fits quite well to the modern reconstructions (Hawkins and Jones, 2013).
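
A back-of-the-envelope sketch of why a modest, well-spread sample can be enough for the global mean: all station anomalies share the large-scale signal, and the local noise averages out roughly as one over the square root of the number of stations. The sketch uses independent synthetic stations; real stations are spatially correlated, which is why the spread of the stations matters more than their exact number.

    # Sketch: how well does a small network of synthetic stations recover a
    # prescribed global-mean signal? (Independent local noise is an assumption.)
    import numpy as np

    rng = np.random.default_rng(2)
    years = 120
    global_signal = np.linspace(0.0, 1.2, years)          # prescribed century-scale warming

    def network_mean(n_stations, local_noise_std=0.5):
        anomalies = global_signal + rng.normal(0, local_noise_std, (n_stations, years))
        return anomalies.mean(axis=0)

    for n in (30, 100, 1000):
        err = np.abs(network_mean(n) - global_signal).mean()
        print(f"{n:5d} stations: mean absolute error of the global mean {err:.3f} degrees")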

Beyond the global means

The number of samples/stations can be modest, but it is important that all climate regions of the world are sampled; some regions warm/change faster than others. It probably makes sense to have more stations in especially vulnerable regions, such as mountains, Greenland, Antarctica. We really need a stable network of buoys in the Arctic, where changes are fast and these changes also influence the weather in the mid-latitudes.


Crew members and scientists from the US Coast Guard icebreaker Healy haul a buoy across the sea ice during a deployment. In the lead are a polar bear watcher and a rescue swimmer.
To study changes in precipitation we probably need more stations. Rare events contribute a lot to the mean precipitation rate: the threshold to get into the news seems to be a month's worth of rain falling in one day, and enormous downpours below that level are not even newsworthy. This makes precipitation data noisy.

To study changes in extreme events we need more samples and might need more stations as well. How much more depends on how strong the synergy between the reference network and the other networks is and thus how much the other networks could then be used to produce more samples. That question needs some computational work.

The idea of using 3 redundant instruments in the USCRN is something we should also adopt in the GCRN, and I would propose also creating clusters of 3 stations. That would make it possible to detect and correct inhomogeneities by making comparisons. Even in a reference network there may still be inhomogeneities due to unnoticed changes in the surroundings or the management.
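
The point of triplicate instruments can be shown with a tiny sketch: with two readings you only know that they disagree, with three you can attribute the problem to the instrument that deviates from the other two. The tolerance value is an arbitrary illustrative choice.

    # Sketch: attribute a suspicious reading to one of three redundant instruments.
    import numpy as np

    def flag_outlier(readings, tolerance=0.3):
        """readings: three simultaneous values; return the index of the suspect instrument or None."""
        readings = np.asarray(readings, dtype=float)
        deviations = np.abs(readings - np.median(readings))
        suspect = int(np.argmax(deviations))
        return suspect if deviations[suspect] > tolerance else None

    print(flag_outlier([12.1, 12.2, 12.1]))   # None: the instruments agree
    print(flag_outlier([12.1, 12.2, 13.4]))   # 2: the third instrument deviates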


We should also carefully study whether it might be a problem to use only pristine locations: that could mean that the network is no longer representative of the entire world. We should probably include stations in agricultural regions; they cover a large part of the surface and may respond differently from natural regions. But agricultural practices (irrigation, plant types) will change.

Starting a new network at pristine locations has the disadvantage that it takes time until the network becomes valuable for climate change research. Thus I understand why Germany and the UK have opted to use locations with long historical observations. Because we only need 100+ stations, it may be possible to select existing locations, from the 30 thousand stations we have, that are pristine and are likely to stay pristine in the coming century. If not, I would not compromise and would use a new pristine location for the reference network.

Finally, when it comes to the number of stations, we probably have to take into account that, no matter how much we try, some stations will become unsuitable due to war, land-use change and many other unforeseen problems. Just look back a century and consider all the changes we experienced; the network should be robust against such changes for the next century.

Absolute values or changes

Argo (ocean) and GRUAN (upper air) do not specify the instruments, but set specifications for the measurement uncertainties and their characterization. Instruments may thus change, and this change has to be managed. In the case of GRUAN they perform many launches with multiple instruments.

For a climate reference land station I would prefer to keep the instruments exactly the same design for the coming century.

To study changes in the climate, climatologists look at the local changes (compute anomalies) and average those. We have had a temperature increase of about 1°C since 1900 and are confident it is warming, even though the uncertainty in the average absolute temperature is of the same order of magnitude. Determining changes directly is easier than first estimating the absolute level and then looking at whether it is changing. By keeping the instruments the same, you can study changes more easily.
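
A small sketch of the anomaly idea: two synthetic stations that differ by several degrees in absolute temperature (think valley versus hill) nevertheless show nearly the same changes once each series is expressed relative to its own baseline period. All numbers are made up for illustration.

    # Sketch: absolute values differ a lot, anomalies relative to a common
    # baseline period (here 1961-1990) agree closely (synthetic data).
    import numpy as np

    rng = np.random.default_rng(3)
    years = np.arange(1900, 2021)
    warming = 0.01 * (years - 1900)                        # about 1 degree over the period

    valley = 10.0 + warming + rng.normal(0, 0.3, len(years))
    hill = 4.5 + warming + rng.normal(0, 0.3, len(years))  # 5.5 degrees colder in absolute terms

    def anomalies(series, years, base=(1961, 1990)):
        mask = (years >= base[0]) & (years <= base[1])
        return series - series[mask].mean()

    print(abs(valley.mean() - hill.mean()))                                   # large absolute difference
    print(np.abs(anomalies(valley, years) - anomalies(hill, years)).mean())   # anomalies nearly agree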


This is an extreme example, but how much thermometer screens weather and yellow before they are replaced depends on the material (and the climate). Even if we have better materials in the future, we had better keep the material the same for stable measurements.
For GRUAN, managing the change can solve most problems. Upper-air measurements are hard: the sun is strong, the air is thin (bad ventilation), and clouds and rain make the instruments wet. Because the instruments are only used once, they cannot be too expensive. On the other hand, starting each time with a freshly calibrated instrument makes the characterization of the uncertainties easier. Parallel measurements to manage changes are likely more reliable up in the air than at the surface, where two instruments measuring side by side can legitimately measure a somewhat different climate: especially for precipitation, where undercatchment strongly depends on the local wind, or for temperature, when cold air flowing at night hugs the orography.

Furthermore, land observations are used to study changes in extreme weather, not just the mean state of the atmosphere. The uncertainty of the rain rate depends on the rain rate itself. Strongly. Even in the laboratory, and likely more so outside, where the influencing factors (wind, precipitation type) also depend on the rain rate. I see no way to keep undercatchment the same without at least specifying the outside geometry of the gauge and wind shield in minute detail.

The situation for temperature may be less difficult with high-quality instruments, but it is similar. When it comes to extremes, the response time (better: response function) of the instruments also becomes important, as does how much downtime the instrument experiences, which is often related to severe weather. It will be difficult to design new instruments that have the same response functions and the same errors over the full range of values. It will also be difficult to characterize the uncertainties over the full range of values and rates of change.

Furthermore, the instruments of a land station are used for a long time while not being watched. Thus weather, flora, fauna and humans become error sources. Instruments which have the same specifications in the laboratory may thus still perform differently in the field. Rain gauges may be more or less prone to getting clogged by snow or insects, or more or less attractive for drunks to pee in. Temperature screens may be more or less prone to being blocked by icing, or more or less attractive for bees to build their nest in. Weather stations may be more or less attractive to curious polar bears.

This is not a black-and-white situation; which route to prefer will depend on the quality of the instruments. In the extreme case of an error-free measurement, there is no problem with replacing it with another error-free instrument. Metrologists in the UK are building an instrument that acoustically measures the temperature of the air, without needing a thermometer, which should be at the temperature of the air but in practice never is. If instruments really are a lot better in 50 years and we exchange them after 2 or 3 generations, that would still be a huge improvement over the current situation with an inhomogeneity every 15 to 20 years.



The software of GRUAN is all open source, so that when we understand the errors better in the future, we know exactly what we did and can improve the estimates. If we specify the instruments, that would mean that we need Open Hardware as well: the designs would need to be open and specified in detail. Simple materials should be used, to be sure we can still obtain them in 2100. An instrument measuring humidity using the dew point of a mirror will be easier to build in 2100 than one using a special polymer film. These instruments can still be built by the usual companies.

If we keep the instrumentation of the reference network the same, the normal climate network, the GCOS network, will likely have better equipment in 2100. We will discover many ways to make more accurate observations, to cut costs and to make the management easier. There is no way to stop progress for the entire network, which in 2100 may well have over 100 thousand stations. But I hope we can stop progress for a very small climate reference network of just 100 to 200 stations. We should not see the reference network as the top of the hierarchy, but as the stable backbone that complements the other observations.

Organization

How do we make this happen? First the scientific community should agree on a concept and show how much the reference network would improve our understanding of the climatic changes in the 21st century. Hopefully this post is a step in this direction and there is an article in the works. Please add your thoughts in the comments.

With on average one reference station per country, it would be very inefficient if every country managed its own station; keeping up the high metrological and documentation standards is an enormous task. Given that the network would be about the same size as the USCRN, the GCRN could in principle be managed by one global organization, like the USCRN is managed by NOAA. It would, however, probably be more practical to have regional organizations, for better communication with the national weather services and to reduce travel costs for maintenance and inspections.

Funding


The funding of a reference network should be additional funding; otherwise it will be a long, hard struggle in every country involved to build a reference station. In developing countries the maintenance of one reference station may well exceed the budget of their current network. We already see that some meteorologists fear that the centennial stations program will hurt the rest of the observational network. Without additional funding, there will likely be quite some opposition and friction.

In the Paris climate treaty, the countries of the world have already pledged to support climate science to reduce costs and damages. We need to know how close we are to the 2°C limit as feedback to the political process, and we need information on all the other changes as well to assess the damages from climate change. Compared to the economic consequences of these decisions, the costs of a climate reference network are peanuts.

Thus my suggestion would be to ask the global climate negotiators to provide the necessary funding. If we go there, we should also ask the politicians to agree on the international sharing of all climate data. Restrictions on data are holding back climate research and climate services, which are needed to plan adaptation and to limit damages.

The World Meteorological Organization had its congress last year. The directors of the national weather services have shown that they are not able to agree on the international sharing of data. For weather services selling data is often a large part of their budget. Thus the decision to share data internationally should be made by politicians who have the discretion to compensate these losses. In the light of the historical responsibility of the rich countries, I feel a global fund to support the meteorological networks in poor countries would be just. This would compensate them for the losses in data sales and would allow them to better protect themselves against severe weather and climate conditions.

Let's make sure that future climatologists can study the climate in much more detail.

Think of the children.


Related information

Hillary Rosner in the NYT on the global greenhouse gas reference network: The Climate Lab That Sits Empty

Free our climate data - from Geneva to Paris

Congress of the World Meteorological Organization, free our climate data

Climate History Podcast with Dr. Sam White mainly on the little ice age

A post-Paris look at climate observations. Nature Geoscience (manuscript)

Why raw temperatures show too little global warming

References

Bojinski, Stephan, Michel Verstraete, Thomas C. Peterson, Carolin Richter, Adrian Simmons and Michael Zemp, 2014: The Concept of Essential Climate Variables in Support of Climate Research, Applications, and Policy. Bulletin of the American Meteorological Society, doi: 10.1175/BAMS-D-13-00047.1.

Callendar, Guy S., 1961: Temperature fluctuations and trends over the earth. Quarterly Journal Royal Meteorological Society, 87, pp. 1–12. doi: 10.1002/qj.49708737102.

Diamond, Howard J., Thomas R. Karl, Michael A. Palecki, C. Bruce Baker, Jesse E. Bell, Ronald D. Leeper, David R. Easterling, Jay H. Lawrimore, Tilden P. Meyers, Michael R. Helfert, Grant Goodge, Peter W. Thorne, 2013: U.S. Climate Reference Network after One Decade of Operations: Status and Assessment. Bulletin of the American Meteorological Society, doi: 10.1175/BAMS-D-12-00170.1.

Dolman, A. Johannes, Alan Belward, Stephen Briggs, Mark Dowell, Simon Eggleston, Katherine Hill, Carolin Richter and Adrian Simmons, 2016: A post-Paris look at climate observations. Nature Geoscience, 9, September, doi: 10.1038/ngeo2785. (manuscript)

Hawkins, Ed and Jones, Phil. D. 2013: On increasing global temperatures: 75 years after Callendar. Quarterly Journal Royal Meteorological Society, 139, pp. 1961–1963, doi: 10.1002/qj.2178.

Immler, F.J., J. Dykema, T. Gardiner, D.N. Whiteman, P.W. Thorne, and H. Vömel, 2010: Reference Quality Upper-Air Measurements: guidance for developing GRUAN data products. Atmospheric Measurement Techniques, 3, pp. 1217–1231, doi: 10.5194/amt-3-1217-2010.

Jain, Sharad Kumar, 2015: Reference Climate and Water Data Networks for India. Journal of Hydrologic Engineering, 10.1061/(ASCE)HE.1943-5584.0001170, 02515001. (Manuscript)

Jones, Phil D., 1994: Hemispheric Surface Air Temperature Variations: A Reanalysis and an Update to 1993. Journal of Climate, doi: 10.1175/1520-0442(1994)007<1794:HSATVA>2.0.CO;2.

Pattantyús-Ábrahám, Margit and Wolfgang Steinbrecht, 2015: Temperature Trends over Germany from Homogenized Radiosonde Data. Journal of Climate, doi: 10.1175/JCLI-D-14-00814.1.

Roemmich, D., G.C. Johnson, S. Riser, R. Davis, J. Gilson, W.B. Owens, S.L. Garzoli, C. Schmid, and M. Ignaszewski, 2009: The Argo Program: Observing the global ocean with profiling floats. Oceanography, 22, p. 34–43, doi: 10.5670/oceanog.2009.36.

* The transition to automatic weather stations in Germany happened to have almost no influence on the annual means, contrary to what Klaus Hager and the German mitigation sceptical blog propagandise based on badly maltreated data.

** The idea to illustrate the importance of smaller uncertainties by showing two resolutions of the same photo comes from metrologist Michael de Podesta.

Saturday, 6 June 2015

No! Ah! Part II. The return of the uncertainty monster



Some may have noticed that a new NOAA paper on the global mean temperature has been published in Science (Karl et al., 2015). It is minimally different from the previous one. Why the press is interested, why this is a Science paper, and why the mitigation sceptics are not happy at all, is that due to these minuscule changes the data no longer shows a "hiatus"; no statistical analysis needed any more. That such paltry changes make so much difference shows the overconfidence of people talking about the "hiatus" as if it were a thing.

You can see the minimal changes, mostly less than 0.05°C, both warmer and cooler, in the top panel of the graph below. I made the graph extra large, so that you can see the differences. The thick black line shows the new assessment and the thin red line the previously estimated global temperature signal.



It reminds me of the time when a (better) interpolation of the data gap in the Arctic (Cowtan and Way, 2014) made the long-term trend almost imperceptibly larger, but changed the temperature signal enough to double the warming during the "hiatus". Again we see a lot of whining from the people who should not have built their political case on such a fragile feature in the first place. And we will see a lot more. And after that they will continue to act as if the "hiatus" is a thing. After a few years of this dishonest climate "debate", I would be very surprised if they suddenly looked at all the data and made a fair assessment of the situation.

Most paradoxical are the mitigation sceptics who react by claiming that scientists are not allowed to remove biases due to changes in the way temperature was measured. Without accounting for the fact that old sea surface temperature measurements were biased too cool, global warming would be larger. I previously explained the reasons why raw data shows more warming, and you can see the effect in the bottom panel of the above graph. The black line shows NOAA's current best estimate of the temperature change, the thin blue (?) line the temperature change in the raw data. Only alarmists would prefer the raw temperature trend.



The trend changes over a number of periods are depicted above; the circles are the old dataset, the squares the new one. You can clearly see differences between the trends for the various short periods. Shifting the period by only 2 years creates a large trend difference: another way to demonstrate that this feature is not robust.
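
The fragility of short-period trends is easy to reproduce with synthetic data: ordinary least-squares trends over roughly 15-year windows jump around when the window is shifted by only two years, while the long-term trend barely moves. The trend and noise values below are invented for illustration, not fitted to the NOAA data.

    # Sketch: least-squares trends over short, shifted windows of a synthetic
    # "global temperature" series (trend plus interannual noise).
    import numpy as np

    rng = np.random.default_rng(4)
    years = np.arange(1950, 2016)
    temp = 0.015 * (years - 1950) + rng.normal(0, 0.1, len(years))

    def trend(years, temp, start, end):
        """Ordinary least-squares trend in degrees per decade over [start, end]."""
        mask = (years >= start) & (years <= end)
        return 10 * np.polyfit(years[mask], temp[mask], 1)[0]

    for start in (1996, 1998, 2000):
        print(f"{start}-{start + 14}: {trend(years, temp, start, start + 14):+.3f} per decade")
    print(f"1950-2015: {trend(years, temp, 1950, 2015):+.3f} per decade")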

The biggest change in the dataset is that NOAA now uses the raw data of the land temperature databank of the International Surface Temperature Initiative (ISTI). (Disclosure: I am a member of the ISTI.) This dataset contains many more stations than the previously used Global Historical Climatology Network (GHCNv3) dataset. (The land temperatures were homogenized with the same Pairwise Homogenization Algorithm (PHA) as before.)

The new trend in the land temperature is a little larger over the full period; see both graphs above. This was to be expected. The ISTI dataset contains many more stations and is now similar to that of Berkeley Earth, which already had a somewhat stronger temperature trend. Furthermore, we know that there is a cooling bias in the land surface temperatures, and with more stations it is easier to see data problems by comparing stations with each other, so relative homogenization methods can remove a larger part of this trend bias.

However, the largest trend changes in recent periods are due to the oceans; the Extended Reconstructed Sea Surface Temperature (ERSST v4) dataset. Zeke Hausfather:
They also added a correction for temperatures measured by floating buoys vs. ships. A number of studies have found that buoys tend to measure temperatures that are about 0.12 degrees C (0.22 F) colder than is found by ships at the same time and same location. As the number of automated buoy instruments has dramatically expanded in the past two decades, failing to account for the fact that buoys read colder temperatures ended up adding a negative bias in the resulting ocean record.
It is not my field, but if I understand it correctly other ocean datasets, COBE2 and HadSST3, already took these biases into account. Thus the difference between these datasets needs to have another reason. Understanding these differences would be interesting. And NOAA did not yet interpolate over the data gap in the Arctic, which would be expected to make its recent trends even stronger, just like it did for Cowtan and Way. They are working on that; the triangles in the above graph are with interpolation. Thus the recent trend is currently still understated.

Personally, I would be most interested in understanding the differences that are important for long-term trends, like the differences shown below in two graphs prepared by Zeke Hausfather. That is hard enough, and such questions are more likely answerable. The recent differences between the datasets are even tinier than the tiny "hiatus" itself; no idea whether they can be understood.





I need some more synonyms for tiny or minimal, but the changes are really small. They are well within the statistical uncertainty computed from the year-to-year fluctuations. They are well within the uncertainty due to the fact that we do not have measurements everywhere and need to interpolate; the latter is the typical confidence interval you see in historical temperature plots. For most datasets the confidence interval does not include the uncertainty due to imperfectly removed biases. (HadCRUT does this partially.)

This uncertainty becomes relatively more important on short time scales (and for smaller regions); over long time scales and large regions (global), many biases compensate each other. For land temperatures a 15-year period is especially dangerous; that is about the typical period between two inhomogeneities (non-climatic changes).

The recent period is, in addition, especially tricky. We are just in an important transition from manual observations with thermometers in Stevenson screens to automatic weather stations. Not only the measurement principle is different, but also the siting. On top of this, it is difficult to find and remove inhomogeneities near the end of a series, because the computed mean after the inhomogeneity is based on only a few values and has a large uncertainty.

You can get some idea of how large this uncertainty is by comparing the short-term trends of two independent datasets. Ed Hawkins has compared the new NOAA data and the current UK HadCRUT4.3 dataset at Climate Lab Book and presented these graphs:



By request, he kindly computed the difference between these 10-year trends, shown below. They suggest that if you are interested in short-term trends smaller than 0.1°C per decade (say, the "hiatus"), you should study whether your data quality is good enough to interpret the variability as being due to the climate system. The variability should be large enough or have a strong regional pattern (say, El Niño).

If the variability you are interested in is somewhat bigger than 0.1°C, you will probably have to put in work. Both datasets are based on much of the same data and use similar methods. For homogenization of surface stations we know that it can reduce biases, but not fully remove them. Thus part of the bias will be the same for all datasets that use statistical homogenization. The difference shown below is therefore an underestimate of the uncertainty, and it will take analytic work to compute the real uncertainty due to data quality.



[UPDATE. I thought I had an interesting new angle, but now see that Gavin Schmidt, director of NASA GISS, has been saying this in newspapers since the start: “The fact that such small changes to the analysis make the difference between a hiatus or not merely underlines how fragile a concept it was in the first place.”]

Organisational implications

To reduce the uncertainties due to changes in the way we measure climate we need to make two major organizational changes: we need to share all climate data with each other to better study the past and for the future we need to build up a climate reference network. These are, unfortunately, not things climatologists can do alone, but need actions by politicians and support by their voters.

To quote from my last post on data sharing:
We need [to share all climate data] to see what is happening to the climate. We already had almost a degree of global warming and are likely in for at least another one. This will change the sea level, the circulation, precipitation patterns. This will change extreme and severe weather. We will need to adapt to these climatic changes and to know how to protect our communities we need climate data. ...

To understand climate, we need a global overview. National studies are not enough. To understand changes in circulation, interactions with mountains and vegetation, to understand changes in extremes, we need spatially resolved information and not just a few stations. ...

To reduce the influence of measurement errors and non-climatic changes (inhomogeneities) on our (trend) assessments we need dense networks. These errors are detected and corrected by comparing one station to its neighbours. The closer the neighbours are, the more accurate we can assess the real climatic changes. This is especially important when it comes to changes in severe and extreme weather, where the removal of non-climatic changes is very challenging. ... For the best possible data to protect our communities, we need dense networks, we need all the data there is.
The main governing body of the World Meteorological Organization (WMO) is just meeting until next week Friday (12th of June). They are debating a resolution on climate data exchange. To show your support for the free exchange of climate data please retweet or favourite the tweet below.

We are conducting a (hopefully) unique experiment with our climate system. Future generations of climatologists would not forgive us if we did not observe as well as we can how our climate is changing. To make expensive decisions on climate adaptation, mitigation and burden sharing, we need reliable information on climatic changes; only piggy-backing on meteorological observations is not good enough. We can improve data using homogenization, but homogenized data will always have much larger uncertainties than truly homogeneous data, especially when it comes to long-term trends.

To quote my virtual boss at the ISTI Peter Thorne:
To conclude, worryingly not for the first time (think tropospheric temperatures in late 1990s / early 2000s) we find that potentially some substantial portion of a model-observation discrepancy that has caused a degree of controversy is down to unresolved observational issues. There is still an undue propensity for scientists and public alike to take the observations as a 'given'. As [this study by NOAA] attests, even in the modern era we have imperfect measurements.

Which leads me to a final proposition for a more scientifically sane future ...

This whole train of events does rather speak to the fact that we can and should observe in a more sane, sensible and rational way in the future. There is no need to bequeath onto researchers in 50 years time a similar mess. If we instigate and maintain reference quality networks that are stable SI traceable measures with comprehensive uncertainty chains such as USCRN, GRUAN etc. but for all domains for decades to come we can have the next generation of scientists focus on analyzing what happened and not, depressingly, trying instead to inevitably somewhat ambiguously ascertain what happened.
Building up such a reference network is hard because we will only see the benefits much later. But already now, after about 10 years, the USCRN provides evidence that the siting of stations is in all likelihood not a large problem in the USA. The US reference network, with stations at perfectly sited locations not affected by urbanization or micro-siting problems, shows about the same trend as the homogenized historical US temperature data. (The reference network even has a somewhat larger, though non-significant, trend.)

There are a number of scientists working on trying to make this happen. If you are interested please contact me or Peter. We will have to design such reference networks, show how much more accurate they would make climate assessments (together with the existing networks), and then lobby to make it happen.



Further reading

Metrologist Michael de Podesta seems to agree with the above post and wrote about the overconfidence of the mitigation sceptics in the climate record.

Zeke Hausfather: Whither the pause? NOAA reports no recent slowdown in warming. This post provides a comprehensive, well-readable (I think) overview of the NOAA article.

A similar well-informed article can be found on Ars Technica: Updated NOAA temperature record shows little global warming slowdown.

If you read the HotWhopper post, you will get the most scientific background, apart from reading the NOAA article itself.

Peter Thorne of the ISTI on The Karl et al. Science paper and ISTI. He gives more background on the land temperatures and makes a case for global climate reference networks.

Ed Hawkins compares the new NOAA dataset with HadCRUT4: Global temperature comparisons.

Gavin Schmidt, as a climate modeller, explains how well the new dataset fits the climate projections: NOAA temperature record updates and the ‘hiatus’.

Chris Merchant found about the same recent trend in his satellite sea surface temperature dataset and writes: No slowdown in global temperature rise?

Hotwhopper discusses the main egregious errors of the first two WUWT posts on Karl et al. and an unfriendly email of Anthony Watts to NOAA. I hope Hotwhopper is not planning any holidays. It will be busy times. Peter Thorne has the real back story.

NOAA press release: Science publishes new NOAA analysis: Data show no recent slowdown in global warming.

Thomas R. Karl, Anthony Arguez, Boyin Huang, Jay H. Lawrimore, James R. McMahon, Matthew J. Menne, Thomas C. Peterson, Russell S. Vose, Huai-Min Zhang, 2015: Possible artifacts of data biases in the recent global surface warming hiatus. Science. doi: 10.1126/science.aaa5632.

Boyin Huang, Viva F. Banzon, Eric Freeman, Jay Lawrimore, Wei Liu, Thomas C. Peterson, Thomas M. Smith, Peter W. Thorne, Scott D. Woodruff, and Huai-Min Zhang, 2015: Extended Reconstructed Sea Surface Temperature Version 4 (ERSST.v4). Part I: Upgrades and Intercomparisons. Journal of Climate, 28, pp. 911–930, doi: 10.1175/JCLI-D-14-00006.1.

Rennie, Jared, Jay Lawrimore, Byron Gleason, Peter Thorne, Colin Morice, Matthew Menne, Claude Williams, Waldenio Gambi de Almeida, John Christy, Meaghan Flannery, Masahito Ishihara, Kenji Kamiguchi, Albert Klein Tank, Albert Mhanda, David Lister, Vyacheslav Razuvaev, Madeleine Renom, Matilde Rusticucci, Jeremy Tandy, Steven Worley, Victor Venema, William Angel, Manola Brunet, Bob Dattore, Howard Diamond, Matthew Lazzara, Frank Le Blancq, Juerg Luterbacher, Hermann Maechel, Jayashree Revadekar, Russell Vose, Xungang Yin, 2014: The International Surface Temperature Initiative global land surface databank: monthly temperature data version 1 release description and methods. Geoscience Data Journal, 1, pp. 75–102, doi: 10.1002/gdj3.8.