Monday, 10 October 2016

A stable global climate reference network


Historical climate data contains inhomogeneities, for example due to changes in the instrumentation or the surrounding. Removing these inhomogeneities to get more accurate estimates of how much the Earth has actually warmed is a really interesting problem. I love the statistical homogenization algorithms we use for this; I am a sucker for beautiful algorithms. As an observationalist it is great to see the historical instruments, read how scientists understood their measurements better and designed new instruments to avoid errors.

Still, for science it would be better if future climatologists had an easier task and could work with more accurate data. Let's design a climate-change-quality network that is as stable as we can humanly make it, to study the ongoing changes in the climate.

Especially now that the climate is changing, it is important to accurately predict the climate for the coming season, year, decade and beyond at a regional and local scale. That is the information (local) governments, agriculture and industry need to plan, adapt, prepare and limit the societal damage of climate change.

Historian Sam White argues that the hardship of the Little Ice Age in Europe was not just about the cold, but also about the turbulent and unpredictable weather. In the coming century, too, much hardship can be avoided with better predictions. To improve decadal climate prediction of regional changes and to understand the changes in extreme weather we need much better measurements. For example, with a homogenized radiosonde dataset, the improvements in the German decadal prediction system became much clearer than with the old dataset (Pattantyús-Ábrahám and Steinbrecht, 2015).

We are performing a unique experiment with the climate system and the experiment is far from over. It would be scientifically unpardonable not to measure this ongoing change as well as we can. If your measurements are more accurate, you can see new things. Methodological improvements that lead to smaller uncertainties are one of the main factors that bring science forward.



A first step towards building a global climate reference network is agreeing on a concept. This modest proposal for preventing inhomogeneities due to poor observations from becoming a burden to future climatologists is hopefully a starting point for that discussion. Many other scientists are thinking about this. More formally, there are the Rapporteurs on Climate Observational Issues of the Commission for Climatology (CCl) of the World Meteorological Organization (WMO). One of their aims is to:
Advance specifications for Climate Reference Networks; produce a statement of guidance for creating climate observing networks or climate reference stations with aspects such as types of instruments, metadata, and siting;

Essential Climate Variables

A few weeks ago Han Dolman and colleagues wrote a call to action in Nature Geoscience titled "A post-Paris look at climate observations" (Dolman and colleagues, 2016). They argue that while the political limits are defined for temperature, we need climate-quality observations for all essential climate variables listed in the table below.
We need continuous and systematic climate observations of a well-thought-out set of indicators to monitor the targets of the Paris Agreement, and the data must be made available to all interested users.
I agree that we should measure much more than just temperature. It is quite a list, but we need all of it to understand the changes in the climate system and to monitor the changes in the atmosphere, oceans, soil and biology that we will need to adapt to. Not in this list, but also important, are biological changes; ecology especially needs support for long-term observational programs, because it lacks the institutional support the national weather services provide on the physical side.

Measuring multiple variables also helps in understanding measurement uncertainties. For instance, in the case of temperature measurements, additional observations of insolation, wind speed, precipitation, soil temperature and albedo are helpful. The US Climate Reference Network measures wind speed at the height of the instrument (and of humans) rather than at the meteorologically typical height of 10 meters.

Because of my work, I am mainly thinking of the land surface stations, but we need a network for many more observations. Please let me know where the ideas do not fit the other climate variables.

Table. List of the Essential Climate Variables; see original for footnotes.
Domain: GCOS Essential Climate Variables

Atmospheric (over land, sea and ice)
Surface: Air temperature, Wind speed and direction, Water vapour, Pressure, Precipitation, Surface radiation budget.
Upper-air: Temperature, Wind speed and direction, Water vapour, Cloud properties, Earth radiation budget (including solar irradiance).
Composition: Carbon dioxide, Methane and other long-lived greenhouse gases, Ozone and Aerosol, supported by their precursors.

Oceanic
Surface: Sea-surface temperature, Sea-surface salinity, Sea level, Sea state, Sea ice, Surface current, Ocean colour, Carbon dioxide partial pressure, Ocean acidity, Phytoplankton.
Sub-surface: Temperature, Salinity, Current, Nutrients, Carbon dioxide partial pressure, Ocean acidity, Oxygen, Tracers.

Terrestrial
River discharge, Water use, Groundwater, Lakes, Snow cover, Glaciers and ice caps, Ice sheets, Permafrost, Albedo, Land cover (including vegetation type), Fraction of absorbed photosynthetically active radiation, Leaf area index, Above-ground biomass, Soil carbon, Fire disturbance, Soil moisture.

Comparable networks

There are comparable networks and initiatives, which likely shape how people think about a global climate reference network. Let me describe how they fit into the concept and where they differ.

There is the Global Climate Observing System (GCOS), which is mainly an undertaking of the World Meteorological Organization (WMO) and the Intergovernmental Oceanographic Commission (IOC). It observes the entire climate system; the idea of the above list of essential climate variables comes from it (Bojinski and colleagues, 2014). GCOS and its member organizations are important for the coordination of the observations, for setting standards so that measurements can be compared, and for defending the most important observational capabilities against government budget cuts.

Especially important from a climatological perspective is a new program asking governments to recognize centennial stations as part of the world heritage. If such long series are stopped or the station is forced to move, a unique source of information is destroyed or damaged forever. That is comparable to destroying ancient monuments.



A subset of the meteorological stations is designated as the GCOS Surface Network, measuring temperature and precipitation. These stations have been selected for their length and quality, and to cover all regions of the Earth. Their monthly data is automatically transferred to global databases.

National weather services normally take good care of their GCOS stations, but a global reference network would have much higher standards and also provide data at better temporal resolution than monthly averages, to be able to study changes in extreme weather and weather variability.



There is already a global radiosonde reference network, the GCOS Reference Upper-Air Network (GRUAN; Immler and colleagues, 2010). This network provides measurements with well-characterized uncertainties, and extensive parallel measurements are made when it transitions from one radiosonde design to the next. No proprietary software is used, to make sure it is known exactly what happened to the data.

Currently they have about 10 sites, a similar number is on the list to be certified, and the plan is to make this a network of about 30 to 40 stations; see the map below. Especially welcome would be partners to start a site in South America.



The observational system for the ocean, Argo, is, as far as I can see, similar to GRUAN. It measures temperature and salinity (Roemmich and colleagues, 2009). If your floats meet the specifications of Argo, you can participate. Compared to land stations, the measurement environment is wonderfully uniform. The instruments typically work for a few years; their life span thus lies between that of a weather station and that of a one-way radiosonde ascent. This means that the instruments may deteriorate somewhat during their lifetimes, but maintenance problems are more important for weather stations.

A wonderful explanation of how Argo works, for kids:


Argo has almost four thousand floats. They are working on a network with spherical floats that can go deeper.



Finally, there are a number of climate reference networks of land climate stations. The best known is probably the US Climate Reference Network (USCRN; Diamond and colleagues, 2013). It has 131 stations. Every station has 3 identical high-quality instruments, so that measurement problems can be detected and the outlier attributed to a specific instrument. To find such problems quickly, all data is relayed online and checked at their main office. Regular inspections are performed and everything is well documented.
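The redundancy makes this kind of quality control almost trivial. A minimal sketch of the idea (my own illustration, not the actual USCRN processing; the 0.3 °C tolerance is an arbitrary assumption):

```python
def attribute_outlier(readings, tolerance=0.3):
    """Given simultaneous readings from three identical instruments,
    return the indices of instruments deviating from the median by
    more than the tolerance (degrees Celsius)."""
    median = sorted(readings)[1]  # the median of three values
    return [i for i, r in enumerate(readings) if abs(r - median) > tolerance]

# Instrument 2 reads 1.3 degrees above the other two: it is the outlier.
print(attribute_outlier([20.1, 20.2, 21.5]))  # → [2]
```

With only two instruments a disagreement could not be attributed; with three, the odd one out identifies itself.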



The USCRN selected new locations for its stations, which are expected to be free of human changes to the surroundings in the coming decades. This means it takes some time until the data becomes climatologically interesting, but the stations can already be compared with the normal network, which gives some confidence that the homogenized data of the normal network is okay for the national mean; see below. The number of stations was sufficient to compute a national average by 2005/2006.



Other countries, such as Germany and the United Kingdom, have opted to make existing stations into a national climate reference network. The UK Reference Climatological Stations (RCS) have long observational records spanning at least 30 years, and their distribution aims to be representative of the major climatological areas, while the locations are unaffected by environmental changes such as urbanisation.


The German Climate Reference Station founded in 1781 in Bavaria on the mountain Hohenpeißenberg. The kind of weather station photo WUWT does not dare to show.
In Germany the climate reference network consists of existing stations with a very long history. Originally these were the stations where conventional manual observations continued. Unfortunately, they will now also switch to automatic observations; fortunately, only after a long parallel measurement to see what this does to the climate record.*

An Indian scientist has proposed an Indian Climate Reference Network of about 110 stations (Jain, 2015). His focus is on precipitation observations. While temperature is a good way to keep track of the changes, most of the impacts are likely due to changes in the water cycle and storms. Precipitation measurements have large errors; it is very hard to make precipitation measurements with an error below 5%. When these errors change, that produces important inhomogeneities. Such jumps in precipitation data are hard to remove with relative statistical homogenization because the correlations between stations are low. If there is one meteorological parameter for which we need a reference network, it is precipitation.

Network of networks

For a surface station Global Climate Reference Network, the current US Climate Reference Network is a good template when it comes to the quality of the instrumentation, management and documentation.

A Global Climate Reference Network does not have to do the heavy lifting all alone. I would see it as the temporally stable backbone of the much larger climate observing system. We still have all the other observations that help to make sampling errors smaller and provide the regional information you need to study how energy and mass move through the climate system (natural variability).

We should combine them in a smart way to benefit from the strengths of all networks.



The Global Climate Reference Network does not have to be large. If the aim is to compute a global mean temperature signal, we would need just as many samples as we need to compute the US mean temperature signal, which is on the order of 100 stations. Thus on average, every country in the world would have one climate reference station.

The figure on the right from Jones (1994) compares the temperature signal from 172 selected stations (109 in the Northern Hemisphere, 63 in the Southern Hemisphere) with the temperature signal computed from all available stations. There is nearly no difference, especially with respect to the long-term trend.

Callendar (1961) used only 80 stations, but his temperature reconstruction fits quite well with the modern reconstructions (Hawkins and Jones, 2013).

Beyond the global means

The number of samples/stations can be modest, but it is important that all climate regions of the world are sampled; some regions warm and change faster than others. It probably makes sense to have more stations in especially vulnerable regions, such as mountains, Greenland and Antarctica. We really need a stable network of buoys in the Arctic, where changes are fast and these changes also influence the weather in the mid-latitudes.


Crew members and scientists from the US Coast Guard icebreaker Healy haul a buoy across the sea ice during a deployment. In the lead are a polar bear watcher and a rescue swimmer.
To study changes in precipitation we probably need more stations. Rare events contribute a lot to the mean precipitation rate. The threshold to get into the news seems to be a month's worth of rain falling in one day; enormous downpours below that level are not even newsworthy. This makes precipitation data noisy.

To study changes in extreme events we need more samples and might need more stations as well. How much more depends on how strong the synergy between the reference network and the other networks is and thus how much the other networks could then be used to produce more samples. That question needs some computational work.

The idea of using 3 redundant instruments in the USCRN is something we should also adopt in the GCRN, and I would propose in addition to create clusters of 3 stations. That would make it possible to detect and correct inhomogeneities by making comparisons. Even in a reference network there may still be unnoticed inhomogeneities due to changes in the surroundings or the management.
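A cluster of three stations lets you not only detect an inhomogeneity but also attribute it: a break shows up in both difference series that involve the affected station. A toy sketch of the logic (my own illustration; real homogenization uses proper break-detection tests such as SNHT, and the 0.5 °C threshold is an arbitrary assumption):

```python
import statistics

def has_break(a, b, threshold=0.5):
    """Crude break check on the difference series of two stations:
    compare the mean difference in the first and second half."""
    d = [x - y for x, y in zip(a, b)]
    half = len(d) // 2
    return abs(statistics.mean(d[:half]) - statistics.mean(d[half:])) > threshold

def suspect_station(a, b, c, threshold=0.5):
    """Return the index (0, 1 or 2) of the station implicated by both
    of its difference series, or None if attribution fails."""
    ab = has_break(a, b, threshold)
    ac = has_break(a, c, threshold)
    bc = has_break(b, c, threshold)
    votes = [ab + ac, ab + bc, ac + bc]  # breaks involving a, b and c
    return votes.index(2) if votes.count(2) == 1 else None

# Station c has a jump halfway through; both series involving c break.
a, b = [10.0] * 10, [10.2] * 10
c = [9.9] * 5 + [11.0] * 5
print(suspect_station(a, b, c))  # → 2
```

The same voting logic works for the three redundant instruments within one station.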


We should also carefully study whether it might be a problem to only use pristine locations: the network might then no longer be representative of the entire world. We should probably include stations in agricultural regions; they make up a large part of the land surface and may respond differently from natural regions. But agricultural practices (irrigation, plant types) will change.

Starting a new network at pristine locations has the disadvantage that it takes time until the network becomes valuable for climate change research. Thus I understand why Germany and the UK have opted to use locations where there are already long historical observations. Because we only need 100+ stations, it may be possible to select existing locations from the 30 thousand stations we have that are pristine and likely to stay pristine in the coming century. If not, I would not compromise, and would use a new pristine location for the reference network.

Finally, when it comes to the number of stations, we probably have to take into account that no matter how hard we try, some stations will become unsuitable due to war, land-use change and many other unforeseen problems. Just look back a century and consider all the changes we experienced; the network should be robust against such changes for the next century.

Absolute values or changes

Argo (ocean) and GRUAN (upper air) do not specify the instruments, but set specifications for the measurement uncertainties and their characterization. Instruments may thus change, and this change has to be managed. In the case of GRUAN, they perform many launches with multiple instruments.

For a climate reference land station I would prefer to keep the instruments exactly the same design for the coming century.

To study changes in the climate, climatologists look at the local changes (compute anomalies) and average those. We have had a temperature increase of about 1°C since 1900 and are confident it is warming, even though the uncertainty in the average absolute temperature is of the same order of magnitude. Determining changes directly is easier than first estimating the absolute level and then looking at whether it is changing. By keeping the instruments the same, you can study changes more easily.
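The anomaly approach is easy to sketch. Each station's own base-period mean is subtracted first, so a cold mountain station and a warm valley station can be averaged without their absolute offsets mattering (a toy illustration; real products also weight stations by the area they represent):

```python
def anomalies(series, base_years=30):
    """Deviations from the station's own mean over the base period."""
    base = sum(series[:base_years]) / base_years
    return [x - base for x in series]

def mean_anomaly(stations, base_years=30):
    """Average the station anomalies year by year across stations."""
    anoms = [anomalies(s, base_years) for s in stations]
    return [sum(year) / len(year) for year in zip(*anoms)]

# Two stations 8 degrees apart in absolute level but with the same
# 1-degree warming give a clean 1-degree signal in the anomaly mean.
valley = [12.0, 12.0, 13.0]
mountain = [4.0, 4.0, 5.0]
print(mean_anomaly([valley, mountain], base_years=2))  # → [0.0, 0.0, 1.0]
```

Averaging the absolute values instead would mix the warming signal with the (much more uncertain) absolute levels of whichever stations happen to report.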


This is an extreme example, but how much thermometer screens weather and yellow before they are replaced depends on the material (and the climate). Even if we have better materials in the future, we had better keep the material the same for stable measurements.
For GRUAN, managing the change can solve most problems. Upper-air measurements are hard: the sun is strong, the air is thin (bad ventilation), and clouds and rain make the instruments wet. Because the instruments are only used once, they cannot be too expensive. On the other hand, starting each time with a freshly calibrated instrument makes the characterization of the uncertainties easier. Parallel measurements to manage changes are likely more reliable up in the air than at the surface, where two instruments measuring side by side can legitimately measure a somewhat different climate. This is especially true for precipitation, where undercatchment strongly depends on the local wind, and for temperature, when cold air flows at night hugging the orography.

Furthermore, land observations are used to study changes in extreme weather, not just the mean state of the atmosphere. The uncertainty of the rain rate depends strongly on the rain rate itself, even in the laboratory, and likely more so outside, where the influencing factors (wind, precipitation type) also depend on the rain rate. I see no way to keep undercatchment the same without at least specifying the outside geometry of the gauge and wind shield in minute detail.

The situation for temperature may be less difficult with high-quality instruments, but it is similar. When it comes to extremes, the response time (better: the response function) of the instruments also becomes important, as does how much downtime the instrument experiences, which is often related to severe weather. It will be difficult to design new instruments that have the same response functions and the same errors over the full range of values, and difficult to characterize the uncertainties over the full range of values and rates of change.

Furthermore, the instruments of a land station are used for a long time while not being observed. Thus weather, flora, fauna and humans become error sources. Instruments which have the same specifications in the laboratory may thus still perform differently in the field. Rain gauges may be more or less prone to getting clogged by snow or insects, or more or less attractive for drunks to pee in. Temperature screens may be more or less prone to being blocked by icing, or for bees to build their nests in. Weather stations may be more or less attractive to curious polar bears.

This is not a black-and-white situation; which route to prefer will depend on the quality of the instruments. In the extreme case of an error-free measurement, there is no problem with replacing it with another error-free instrument. Metrologists in the UK are building an instrument that acoustically measures the temperature of the air, without needing a thermometer, which should have the temperature of the air but in practice never does. If new instruments really are a lot better in 50 years and we exchange them after 2 or 3 generations, that would still be a huge improvement over the current situation, with its inhomogeneity every 15 to 20 years.



The software of GRUAN is all open source, so that when we understand the errors better in the future, we know exactly what we did and can improve the estimates. If we specify the instruments, that would mean we need open hardware as well: the designs would need to be open and specified in detail. Simple materials should be used, to be sure we can still obtain them in 2100. An instrument measuring humidity using the dew point of a mirror will be easier to build in 2100 than one using a special polymer film. These instruments could still be built by the usual companies.

If we keep the instrumentation of the reference network the same, the normal climate network, the GCOS network, will likely have better equipment in 2100. We will discover many ways to make more accurate observations, cut costs and make the management easier. There is no way to stop progress for the entire network, which in 2100 may well have over 100 thousand stations. But I hope we can stop progress for a very small climate reference network of just 100 to 200 stations. We should not see the reference network as the top of the hierarchy, but as the stable backbone that complements the other observations.

Organization

How do we make this happen? First the scientific community should agree on a concept and show how much the reference network would improve our understanding of the climatic changes in the 21st century. Hopefully this post is a step in that direction; there is also an article in the works. Please add your thoughts in the comments.

With on average one reference station per country, it would be very inefficient if every country managed its own station; keeping up the high metrological and documentation standards is an enormous task. Given that the network would be the same size as the USCRN, the GCRN could in principle be managed by one global organization, like the USCRN is managed by NOAA. It would, however, probably be more practical to have regional organizations, for better communication with the national weather services and to reduce travel costs for maintenance and inspections.

Funding


The funding of a reference network should be additional funding; otherwise it will be a long, hard struggle in every country involved to build a reference station. In developing countries, the maintenance of one reference station may well exceed the budget of the current national network. We already see that some meteorologists fear that the centennial stations program will hurt the rest of the observational network. Without additional funding, there will likely be quite some opposition and friction.

In the Paris climate treaty, the countries of the world have already pledged to support climate science to reduce costs and damages. We need to know how close we are to the 2°C limit as feedback to the political process, and we need information on all other changes as well to assess the damages from climate change. Compared to the economic consequences of these decisions, the costs of a climate reference network are peanuts.

Thus my suggestion would be to ask the global climate negotiators to provide the necessary funding. If we go there, we should also ask the politicians to agree on the international sharing of all climate data. Restrictions on data are holding climate research and climate services back, and both are necessary to plan adaptation and to limit damages.

The World Meteorological Organization had its congress last year. The directors of the national weather services have shown that they are not able to agree on the international sharing of data. For weather services, selling data is often a large part of their budget; thus the decision to share data internationally should be made by politicians, who have the discretion to compensate these losses. In the light of the historical responsibility of the rich countries, I feel a global fund to support the meteorological networks in poor countries would be just. This would compensate them for the losses in data sales and would allow them to better protect themselves against severe weather and climate conditions.

Let's make sure that future climatologists can study the climate in much more detail.

Think of the children.


Related information

Free our climate data - from Geneva to Paris

Congress of the World Meteorological Organization, free our climate data

Climate History Podcast with Dr. Sam White mainly on the little ice age

A post-Paris look at climate observations. Nature Geoscience (manuscript)

Why raw temperatures show too little global warming

References

Bojinski, Stephan, Michel Verstraete, Thomas C. Peterson, Carolin Richter, Adrian Simmons and Michael Zemp, 2014: The Concept of Essential Climate Variables in Support of Climate Research, Applications, and Policy. Bulletin of the American Meteorological Society, doi: 10.1175/BAMS-D-13-00047.1.

Callendar, Guy S., 1961: Temperature fluctuations and trends over the earth. Quarterly Journal Royal Meteorological Society, 87, pp. 1–12. doi: 10.1002/qj.49708737102.

Diamond, Howard J., Thomas R. Karl, Michael A. Palecki, C. Bruce Baker, Jesse E. Bell, Ronald D. Leeper, David R. Easterling, Jay H. Lawrimore, Tilden P. Meyers, Michael R. Helfert, Grant Goodge, Peter W. Thorne, 2013: U.S. Climate Reference Network after One Decade of Operations: Status and Assessment. Bulletin of the American Meteorological Society, doi: 10.1175/BAMS-D-12-00170.1.

Dolman, A. Johannes, Alan Belward, Stephen Briggs, Mark Dowell, Simon Eggleston, Katherine Hill, Carolin Richter and Adrian Simmons, 2016: A post-Paris look at climate observations. Nature Geoscience, 9, September, doi: 10.1038/ngeo2785. (manuscript)

Hawkins, Ed and Jones, Phil. D. 2013: On increasing global temperatures: 75 years after Callendar. Quarterly Journal Royal Meteorological Society, 139, pp. 1961–1963, doi: 10.1002/qj.2178.

Immler, F.J., J. Dykema, T. Gardiner, D.N. Whiteman, P.W. Thorne, and H. Vömel, 2010: Reference Quality Upper-Air Measurements: guidance for developing GRUAN data products. Atmospheric Measurement Techniques, 3, pp. 1217–1231, doi: 10.5194/amt-3-1217-2010.

Jain, Sharad Kumar, 2015: Reference Climate and Water Data Networks for India. Journal of Hydrologic Engineering, 10.1061/(ASCE)HE.1943-5584.0001170, 02515001. (Manuscript)

Jones, Phil D., 1994: Hemispheric Surface Air Temperature Variations: A Reanalysis and an Update to 1993. Journal of Climate, doi: 10.1175/1520-0442(1994)007<1794:HSATVA>2.0.CO;2.

Pattantyús-Ábrahám, Margit and Wolfgang Steinbrecht, 2015: Temperature Trends over Germany from Homogenized Radiosonde Data. Journal of Climate, doi: 10.1175/JCLI-D-14-00814.1.

Roemmich, D., G.C. Johnson, S. Riser, R. Davis, J. Gilson, W.B. Owens, S.L. Garzoli, C. Schmid, and M. Ignaszewski, 2009: The Argo Program: Observing the global ocean with profiling floats. Oceanography, 22, p. 34–43, doi: 10.5670/oceanog.2009.36.

* The transition to automatic weather stations in Germany happened to have almost no influence on the annual means, contrary to what Klaus Hager and the German mitigation sceptical blog propagandise based on badly maltreated data.

** The idea to illustrate the importance of smaller uncertainties by showing two resolutions of the same photo comes from metrologist Michael de Podesta.

16 comments:

Zeke said...

Hear, hear!

It really wouldn't cost that much (in the grand scheme of things) to set up a barebones land-based CRN with, say, 200 stations. Getting permission from nations and having a system to deal with inspections/repairs if things break would be the only difficult parts.

Hans Erren said...

Perhaps you could show the data before 1860 with the complete Hohenpeissenberg data series? http://members.casema.nl/errenwijlens/co2/t_hohenpeissenberg_200512.txt

Victor Venema said...

To start with the original statement: don't you like the siting of these instruments at the mountain observatory in Hohenpeißenberg? If people would look at your data, they would find a temperature increase of around 1.5°C since 1900, completely without bad micro-siting or urbanization. It would be nice if you could acknowledge that.

Naturally the early data of such an observatory is unreliable. Measurements were made 9 meters above the ground in a wall screen on the north side of the parochial house. The ventilation of the early screen before 1842 was not that good, and the sun could get onto the screen until 1849. Compared to the garden screen, the observed temperatures of the wall screen were up to 5°C warmer on bad days (sun during sunrise and sunset and little wind for ventilation).

It is interesting that the mitigation sceptical movement shows so many pictures of bad siting of selected modern stations, but ignores the warming biases in early instrumental data. You could almost think they have a political agenda.

Tonyb said...

Yes, better and more consistent observation, measurement and recording are key, whether on land or ocean. This latter area being particularly poorly served.

However, that still leaves us with the problem of how you compare modern consistent records with historic, highly inconsistent records. All the algorithms in the world can't be a substitute in this instance, where we just don't know how good the historic information is.

I have referenced you this book before and have written about it

https://archive.org/stream/pt1handbookofcli00hannuoft/pt1handbookofcli00hannuoft_djvu.txt

Von Hann saw the inconsistencies between one observer and another in the same country, let alone the different methodologies employed by different countries.

So, yes to better modern data but reconciling it with the historic material will keep you busy!

Best regards

Tonyb

Victor Venema said...

Yes, understanding the problems in the historical data will remain an active area of research.

The algorithms to remove changes in the measurement methods and other inhomogeneities from station data improve the data. The question is how much they improve the data and how much trend bias remains.

If we had a climate reference network we would also still need to homogenize the rest of the data, but with the information from the reference network we could do this with much more accuracy. So you are right maybe my intro was worded a bit too optimistically.

Howard Diamond said...

I disagree with the statement that, "It really wouldn't cost that much (in the grand scheme of things) to set up a barebones land-based CRN with, say, 200 stations. Getting permission from nations and having a system to deal with inspections/repairs if things break would be the only difficult parts." Even at a very low cost of US$15K per station to install such equipment (and I am very much on the low side here), for 200 stations that is US$3M; and that does not even include the site surveys and clearances required in each country. The US Climate Reference Network has NO resources to contribute to this; we have enough problems in just maintaining what we have.

Perhaps in the grand scheme of things that is not a lot of funding, but in my many years of experience with GCOS, I do not see anyone writing any checks for that amount. The on-going costs of monitoring, data quality, archiving, access, and station maintenance and calibration on a regular annual basis of at least that much each year per station then puts one on a US$3M per year budget.

So, even if you could get someone to write a check for the initial stations, the long-term operations and maintenance costs would be a major hurdle. One need look no further than the existing GCOS Surface Network and GCOS Upper Air Network (networks established to ensure long-term climate monitoring) and the gaps in those to see what an issue long-term operations and maintenance is. Sure, a global reference network as talked about here sounds great, and I'd be all for it if resources were available, but that is unfortunately a pipe dream, and I speak from many years of experience in trying to look at maintaining GSN and GUAN stations. Adding a new network to the mix (while a laudable goal) is simply impractical.

Gregor Vertačnik said...

There could be another obvious problem in some countries with the maintenance of a reference climate station. To get stable measurements, the neighborhood of the station also has to be stable. In warm humid regions that implies constantly cutting the grass, or maintaining the same agricultural activity, to keep trees from growing, and that for hundreds of meters around the station. That is very hard to achieve over a long period in regions with high population density. Also, 200 stations is far from enough, as the climate signal depends on elevation and microsite (valley, hilltop etc.) and not just latitude and longitude.

So, although the proposed idea is very nice, it seems rather optimistic to me in the short term.

Victor Venema said...

I agree that it would be nearly impossible to keep the surroundings of stations the same over a century in densely populated areas, even if population growth is declining. We should make sure that we sample similar climates (humans like to live near the coast), but that seems doable.

Keeping agricultural areas the same might be a challenge. Maybe we could collaborate with nature conservation foundations; they also sometimes conserve agricultural areas such as meadows with low-nutrient soils. It may be possible to make a contract with them to keep the management the same; they want to conserve anyway.

If it were just latitude and longitude, maybe even 20 stations would be enough for the long-term trend. I pointed to some studies finding that 100 stations are enough; they already took into account that more factors than these influence climate change.

Tonyb said...

Victor

One of the reasons that I have spent so long studying CET, and am gathering material to try to push its start date back much further, is that it correlates closely with global, or at least Northern Hemisphere, temperatures. I am excited by the idea that if we could get some rough approximation of the ups and downs of CET back to the 11th century, that could tell us a lot about a wider area than just central England.

Anyway, my point is that there must be other stations around the world which are similarly a good approximation not only for a large regional area, but which may even give a continental or hemispheric indication.

If, say, twenty such stations could be identified, they could be closely tracked, their history researched, and current factors that would affect their accuracy taken into account. Twenty stations would be a much easier network to handle than 200 and would be much cheaper.

Tonyb

Victor Venema said...

There is a lot of overhead: the selection criteria for the stations, the training of the auditors, the equipment and administration to stick to metrological standards, the reporting of the results to the public and funders. Thus I would be surprised if a network of 20 stations were much cheaper than 200 stations; maybe 50% less?

The difficulty will be finding an additional funding mechanism. For the kind of groups/governments that could fund this, the amount is peanuts. Thus the decision will be whether the idea is worthwhile. I do not think it will matter much whether it is 1 or 3 million a year. The first million is the hardest part.

tonyb said...

Victor

Whilst I admire and support your aims, I wonder if you are being a little optimistic in expecting the initial and ongoing finance (which will be considerable) to be forthcoming at a time of budget cuts?

Much of the EU funding that might have found its way to a scheme like this will be under pressure as the EU's second biggest paymaster, the UK, removes its funding.

Therefore it may be that you will need to reduce your ambitions, hence my suggestion that 20 historic stations might well be all that you need, or can afford.

In this connection, might it be worth reviving some of the old networks, such as the Mannheim Palatine or Jurin's Royal Society stations?

The opportunity to be part of a living history project might well appeal to some of the older, or more climatically important, stations.

An alternative, and this may well be a long way down the line, would be crowd-funding. Provided the criteria for the project were sorted out and it was seen to be an objective programme that merely wants to confirm the science and is not political, there would surely be many, including me, who would be prepared to make a donation, and no doubt there might be several major individual benefactors.

As I say, this latter option is somewhat down the road, and I hope you manage to achieve your objectives via official channels; but if not, there are alternatives.

tonyb

Victor Venema said...

Now that you mention it, the EU might soon have more money because they no longer have to pay anything to the UK. That is a nice windfall. :-)

We would have to compute how much more accurate climate data would be with such a global climate reference network. We could do that for a few scenarios, including one with only 20 stations. Personally, I expect a lot less benefit; climatology wants to know a lot more than just the global mean. But sometimes it is good to start small and show that you are up for the job before you grow.

The crowd-funding initiatives I have seen were rather modest in size, and sometimes I wondered whether they even covered the salary of the person running the campaign. Have you seen any larger crowd-funding initiatives for something where the funders did not get goodies, just a glowing feeling from contributing to science? I am just a scientist from a modest background, with no aristocratic friends.

tonyb said...

Come on Victor, now that the Brits are leaving this undemocratic and bureaucratic decades-long party, only the Germans are still crazy enough to continue to subsidise everyone else in the way they have been doing. :)

As regards crowd funding, this site is primarily aimed at helping to promote the idea of scientific and other crowd funding itself, but I note there are three German people on the board and also that next month there is a crowd-funding conference in Paris. There also seem to be possibilities for networking. Might it be worth contacting your German peers? (See, you do have aristocratic friends.)

http://eurocrowd.org/work-groups/scientific/

This next link seems to relate more directly to scientific crowd funding. There are some interesting articles on the right hand side.

http://annualconference.astp-proton.eu/blog/crowdfunding-science/

Generally the amounts raised, as you say, seem quite small as they are for individual research projects. Your plans are somewhat more ambitious but the principle remains the same.

The topic might be worth investigating further.

tonyb

Victor Venema said...

Thanks for the links. Will have a look.

Germany is big, I am not friends with everyone. Plus, I am Dutch, not German, I did not start the War. ;-)

The Germans are fortunately still rational enough to realize that the pennies they send to Brussels come back with a huge multiplier via economic growth: growth purely from better synergies in Europe, without having to work harder or longer. That is the best possible wealth you can get. Almost like old-money aristocracy, getting rich without having to work for it.

Trike1300 said...

Victor
Thanks for this contribution to such a key point and discussion. This is of high interest for the metrology community, since there is a lot to do in defining standards, checking instrument characteristics, studying measurement best practice and evaluating measurement uncertainties. To reach full reliability and comparability in space and time, measurement records need documented traceability to the International System of Units and uncertainty evaluations. These metrological aspects are fundamental when measurement sites and stations are designed to provide reference grade data.

Yes, the first step towards building a global climate reference network is agreeing on a concept. And this concept includes a step backward: before defining the characteristics of a reference installation, we should sit around the table and clarify what we intend and need from reference grade data. This will set the basis for defining the reference grade measurement and, most importantly, the target uncertainty.

A global network is needed, and not only in keeping with a successful similar effort like GRUAN. In establishing its reference upper-air network (GRUAN), GCOS adopted strict metrology requirements for participating stations; the GRUAN guide and manual are now well aligned with metrology documents such as the GUM.

Networks of ground-based stations are made up of a multitude of observing sites, managed and maintained in a cost-effective balance. Only a limited number of stations can be raised to the quality level of a "reference site" and become the stable climatological backbone of the networks. The process should be sustainable from an economic point of view: after a first kick-off funding phase, the value of hosting such stations can also be recognized by local authorities, and maintenance can be on a voluntary basis, supervised by a single central institution.

Let's start smaller. At present there is no such approach in Europe, where all nations are operating very different systems and comparability is almost lost: borders are still well present in the EU! This idea should be submitted to the EU soon, as a first (and possibly easier) transnational attempt towards a worldwide concept. Collaboration among EU countries is facilitated by joint research projects (H2020) and by Article 185 focused actions, and even an "Interact" proposal can be considered. Let's just work on that.

The success of this proposal requires a deep interaction between the climate community and the operational departments of weather services on one side, and metrologists on the other. The work proposed needs to be based on a constant exchange of information, through expert team memberships, workshops, conferences, meetings, on-site visits and joint measurement activities. Such interaction is now well established, and it is the right time to act.

An additional benefit of the proposal is that new sites, once established, can also be used for research objectives, comparisons, instrument change management, uncertainty evaluations.

When top-level requirements are defined, downgrading the specification can lead to a better classification of different levels under a cost-efficient approach. Agrometeorology and urban meteorology, for example, can also be involved in this process. Defining appropriate standards for the purpose was one of the key messages in the conclusions of the recent WMO CIMO TECO conference.

Victor Venema said...

Sorry for my slow response. Yes, I think metrology would be very important for such a climate-quality network, to ensure that measurements over decades are comparable. Your tradition of designing and building instruments would also be very important, to make sure that we have designs that can still be built decades from now. Currently we depend on commercial providers that keep changing the designs that can be bought. Even if they build the instruments, we should be the ones who specify what the instrument should look like, to ensure it stays the same.

GRUAN and the USCRN are important trailblazers in showing the merits of high metrological standards. As I tried to argue in the post, land stations may well be more difficult than the upper-air network, and there are still a few points in the USCRN design I would love to improve for even better centennial-scale stability.

The specifications for the quality also depend on what we can get for a reasonable amount of resources. So I think it would be an iterative process between specifying the target uncertainty and designing a network and its procedures and instruments. The higher the quality of the data, the more we can see. And we do not know in advance what we will be able to see.

For temperature, the political debate of the last decade (the "hiatus" stupidity) has shown that 0.1°C per decade globally is important. To keep track of global warming itself, a stability of 0.1°C per century would be good for monitoring and for estimating the climate sensitivity. In both cases, specs that are 10 times as good would be great. Locally the changes due to climate modes are larger, so locally one could allow for larger uncertainties.

Precipitation measurements have huge uncertainties, easily several percent if not more, depending also on the local climate. We expect changes in precipitation of 1-2% per °C. Thus the absolute uncertainty will likely never be good enough, even with the best instruments we can build; the stability over a century would need to be well below 1%. No idea whether that is doable. For extremes the expected changes are larger, a rule of thumb gives 7%/°C, but extremes are also harder to measure.
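The numbers above can be put together in a quick back-of-the-envelope sketch. This is only illustrative: the scaling figures (1-2%/°C for mean precipitation, 7%/°C for extremes) come from the comment, but the 2°C-per-century warming figure is an assumed scenario for the calculation, not a prediction.

```python
def expected_precip_change(warming_c, scaling_pct_per_c):
    """Expected relative precipitation change (%) for a given warming,
    assuming a simple linear scaling with temperature."""
    return warming_c * scaling_pct_per_c

# Assumed warming scenario (illustrative only)
warming_per_century = 2.0  # °C

# ~1-2 %/°C for mean precipitation; take the midpoint
mean_change = expected_precip_change(warming_per_century, 1.5)
# ~7 %/°C rule of thumb for extremes
extreme_change = expected_precip_change(warming_per_century, 7.0)

print(f"Mean precipitation change:    {mean_change:.1f} % per century")
print(f"Extreme precipitation change: {extreme_change:.1f} % per century")

# To see the mean signal of ~3 % per century, the stability of the
# record must be a small fraction of it, i.e. well below 1 % per century.
```

The point of the exercise is that the required stability is set by the expected signal, not by the absolute accuracy of the instrument, which is why relative stability over a century matters more here than the (much larger) absolute uncertainty.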

For humidity and wind, I would have to read up more to make a first estimate.

A European reference network may well be the logical next step after the USCRN and before a GCRN, at least organizationally. Climatologically, better data on other continents is more valuable, but also a lot harder to organize. Tony already mentioned the Mannheim Palatine network; that is a precedent for an international network and shows that it can work.