
Thursday, September 19, 2019

European Meteorological Society Meeting highlights on station data quality and communication #EMS2019

Last week I was at the Annual Meeting of the European Meteorological Society in Copenhagen, Denmark. Here are the highlights for station data (quality) and communication.

Warming in Svalbard

Øyvind Nordli and colleagues estimated the warming on the Arctic island of Svalbard/Spitsbergen; see the figure below. They use the linear red line to estimate the total warming and claim 3.8°C of warming. I would say it warmed a whopping 6°C (11°F). The graph itself already shows that such a linear trend-based estimate underestimates the total warming.
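For readers who want to see the difference in numbers, here is a minimal Python sketch (my own illustration, not the method of the paper; the function and variable names are assumptions) contrasting the two ways of quantifying total warming: the fitted linear trend multiplied by the period length versus the difference between the last and first decadal means. For a record like Svalbard's, with the warming concentrated in the recent years, the post argues the first estimate comes out too low.

```python
import numpy as np
import pandas as pd

def total_warming_estimates(annual: pd.Series) -> tuple[float, float]:
    """annual: series of annual mean temperatures indexed by year (illustrative)."""
    years = annual.index.to_numpy(dtype=float)
    temps = annual.to_numpy(dtype=float)

    # Estimate 1: ordinary least-squares trend times the length of the period.
    slope, _intercept = np.polyfit(years, temps, deg=1)
    trend_based = slope * (years[-1] - years[0])

    # Estimate 2: difference between the last and the first decadal mean.
    decadal_based = temps[-10:].mean() - temps[:10].mean()
    return trend_based, decadal_based
```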

The monthly data were already published in 2014. At that time I would have called it 5°C of warming; the recent years have been very warm.

They put a lot of work into the homogenization and even made modern parallel measurements to estimate the effect of past relocations of the station. The next, almost published, paper is on the daily data, so that we can study changes in the number of growing, freezing or melting days.



Warming in the tropical hot spot

There is a small region high up in the air in the tropics that is dear to many climate "skeptics": the tropical hot spot. It is one of the coldest places on Earth, which warms strongly when the world is warming (for any reason). Because some observations do not show as much warming there, climate "skeptics" have declared this region to be the arbiter of climate truth, and these observations and satellite estimates to be the best we have and the most informative about the changes of our climate.


The warming in a GISS model equilibrium run with a 2% increase in solar forcing, showing a maximum between 20°N and 20°S around 300 mb (10 km).

Back to reality: it is really hard to make good measurements of such a cold place starting from such a tropically warm place. The thermometer needs to be reliable over a range of about 100°C. That is a lot. It is also not easy to launch a weather balloon to such heights and such cold; the balloon expands enormously on the way up. And the countries making these measurements are among the poorest on Earth.

What I had not realized is how few weather balloons make it to such heights. A poster by Souleymane Sy showed this; see the figure below. For trend estimates the sharp drop-off above the 300 mb pressure level is especially worrying. Changes in this drop-off level due to changes in equipment can easily lead to changes in the estimated temperature. There is a part of the tropical hot spot below 300 mb; that would be the part I would prioritize in trend estimates.


Number of radiosonde stations recording at least a given percentage of temperature and relative humidity monthly data at mandatory pressure levels from 1978 to the present for the Tropics (20° North to 20° South).

Weather forecasts in America and Europe

Communication at the EMS mostly means presenting the daily TV weather forecasts. There was a lovely difference between the American and European presenters. The Americans explained how to dumb down your forecast as much as possible: a study found that most high school students in Alabama could not find their county on a map of Alabama, so the advice is to put a city name next to every number on the map. The Europeans presented their educational work.

Our Irish friends had made three one-hour shows about the weather on consecutive days between 7 and 8 pm, when normally the soaps are running; light information in a botanical garden with a small audience.

German weather presenter Karsten Schwanke got a prize for his educational weather forecasts, which add information on climate change; for example, in the case of Dorian, showing the increase in sea surface temperature. For Schwanke, providing context is the main task of TV weather; the local numbers are available from a weather app.


Karsten Schwanke explains the relationship between the jet stream, wild fires and the drought in Europe. In German.

An increasing problem is fake weather predictions. Amateurs who can make a decent map are often seen as reliable sources, which can be dangerous in case of severe weather.

American weather caster Jay Trobec reported that it is common to have weather information three times during a news block: before, in the middle and at the end. In Europe you just get the weather at the end. In America the weather is live, with a presenter explaining that everyone should leave the disaster area they themselves went to for the live broadcast. In Europe the weather is typically reported from the studio and shown in videos. Trobec stated that during severe weather people watch TV rather than use the internet.


Live hurricane weather. :-)

The difference is likely that there is not that much severe weather in Europe; you normally watch the weather to see if you have to take an umbrella with you, rarely to see whether your house will soon be destroyed. Live weather in Europe would be watching a weather presenter slowly getting wet in the drizzle. In addition, European public media have an educational mandate; they are paid by the public to make society better, while in America the media are commercial and will do whatever makes money.

In the harbor of Copenhagen is the famous Little Mermaid. Tourist boats went to see it, had to keep quite a distance and could only show her back. Typically the boats waited only a few seconds because there was little to see. But due to commercial pressure they had to have the Little Mermaid on their tour schedule. They follow demand, whether the outcome is good or not.

Short hits communication

  • When asked what a 30% probability of rain means for a weather prediction, most people gave the wrong answer: that 30% of the region would experience rain. The formally correct answer is that in 30% of the cases in which this prediction is made you will experience rain. To be fair to the people, I often explain the need to give such a percentage by saying that in case of showers we cannot tell whether it will rain in Bonn or Cologne. I feel this is a quite common explanation and the main effect. The German weather service is working on providing more detailed probabilistic information to weather brigades. That seems to be appreciated (and they answered the question mostly right).
  • Amanda Ruggeri won the journalism award for her story on sea level rise in Miami, which was reviewed by ClimateFeedback who found its scientific credibility to be "very high". Recommended read.
  • EUMETSAT operates the European satellites once they are in space. They also make MOOCs (Massive Open Online Courses). They have one on the oceans and one on the atmosphere. They are a great way to introduce these topics to new people, and in future they plan to do more live.
  • Climate change is seen as the top global threat according to global polling by the Pew Institute. In 2018, 67 percent of the world saw climate change as a major threat to their country.
  • During a Q&A someone remarked that it would be good to talk more about the history of climatology, because people are spreading the rumor that climatology is a new field of science in order to make it sound less solid.
  • In case I have any Finnish speaking readers, Finland has a two-yearly bulletin on weather and climate, recently revamped.
  • Copernicus has a "new" journal on statistical climatology, ideally suited for homogenization studies: Advances in Statistical Climatology, Meteorology and Oceanography (ASCMO). It does not have an Impact Factor yet, but seeing the editorial team and reading a few articles it is clearly a serious journal and likely will get one soon. It is worth building up such a journal to have an outlet for statistical/methodological studies on climate. We already published there once; post upcoming.
  • Did you know about STATMOS, an American Research Network for Statistical Methods for Atmospheric and Oceanic Sciences?

Short hits observations

  • I had seen people use measurements of cosmic rays to estimate the soil moisture between the surface and the probe, but it was new to me that they can also be used to measure the amount of snow on top of a glacier.
  • Michal Zak of the Czech Hydrometeorological Institute and colleagues had an interesting way to estimate how urban a station is. They computed the absolute day-to-day differences of the maximum and of the minimum temperature and subtracted them from each other; see the sketch after this list. If the maximum temperature varies more, a station is likely urban; if the minimum varies more, it is likely rural. For Prague and its surroundings the differences between stations were not particularly large and smaller than the seasonal cycle, but it could be a useful check. It could also be a measure that helps to select climatologically similar pairs of stations in relative statistical homogenization.
  • The Homogenization Seminar in Budapest will be from 18 to 21 of May 2020. Announcements will follow, e.g., on the homogenization list. (I should write less mails to the homogenization list; at EMS someone asked to be added to the homogenization newsletter.) 
  • Carla Mateus studied Data Rescue (DARE) as a scientific problem. By creating one really high quality transcribed dataset as a benchmark, she studied how accurately various groups transcribed historical observations. Volunteers of the Irish meteorological society were an order of magnitude more accurate (0.3% errors) than students (3.3%). Great talk.
  • Our colleagues from Catalonia studied the influence of the time of observation. Manual observations tend to be made at 8 am, while automatic measurements often use a normal calendar day. This naturally mattered most for the minimum temperature. With statistical homogenization such small breaks are hard to find, to formulate it diplomatically.
  • Monika Lakato has ambitious plans to study changes in hourly precipitation in Hungary motivated by increases in rain intensity (precipitation amount on rainy days).
  • Peter Domonkos studied how well network-wide trends are corrected in the new MULTITEST benchmark dataset (the presentation as pptx file). He found that his method (ACMANTv4) was able to reduce this error by about 30% and others were worse. It would be interesting to study what is different in the MULTITEST dataset or this analysis because the results of Williams et al. (2012) are much more optimistic; here 50 to 90% of the trend error is removed for similarly dense networks.
  • ACMANTv4 is on GitHub and about to be published. Some colleagues already used it. 
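The urbanity measure of Michal Zak and colleagues mentioned in the list above is simple enough to sketch in a few lines of Python. This is my own illustrative reading of the description, not their code; the function name and the use of daily pandas Series are assumptions.

```python
import pandas as pd

def urbanity_index(tmax: pd.Series, tmin: pd.Series) -> float:
    """Daily Tmax and Tmin series for one station.
    Positive values suggest an urban station (the maximum varies more from day
    to day); negative values suggest a rural one (the minimum varies more)."""
    day_to_day_tmax = tmax.diff().abs().mean()
    day_to_day_tmin = tmin.diff().abs().mean()
    return day_to_day_tmax - day_to_day_tmin
```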

Meteorological Glossaries

Miloslav Müller gave a talk on the new Slovak meteorological glossary, listing many other glossaries. So I now have a bookmark folder full of glossaries.
To finish with a great audience comment on the last day, not directly weather related: "In Russian education everything is explained, you do not have to remember or study." I loved that expression. That is the reason I studied physics: I also loved biology, but there you have to remember so much, and my memory is very poor for random stuff like the names of organisms. When you understand something, you (I?) automatically remember it; it does not even feel like learning.

Related reading

The IPCC underestimates global warming. This post explains why using linear regression underestimates total warming

Annual Meeting of the European Meteorological Society

Thursday, July 27, 2017

WMO Recognition of Long-Term Observing Stations

From the July 2017 newsletter of the WMO [World Meteorological Organization] Integrated Global Observing System (WIGOS). With some additional links & [clarifications].

Long-term meteorological observations are part of the irreplaceable cultural and scientific heritage of mankind that serve the needs of current and future generations for long-term high quality climate records. They are unique sources of past information about atmospheric parameters, thus are references for climate variability and change assessments. To highlight this importance, WMO has a mechanism to recognize long-term observing stations. By so doing, the Organization promotes sustainable observational standards and best practices that facilitate the generation of high-quality time series data.

The initiative is envisaged to maintain long-term observing stations, including in particular stations with more than 100 years of observations — Centennial Stations — in support of climate applications (DRR [Disaster Risk Reduction Programme], GFCS [Global Framework for Climate Services], etc.) and research (climate assessment, climate adaptation, etc.). While acknowledging the efforts by Members to run and maintain appropriate observing systems including long-term observing stations, existing and potential difficulties which Members' NMHSs [National Meteorological and Hydrological Services; mostly national weather services] are facing, due to their overall resource constraints and competing societal interests at national level, are observed at the same time.

The mechanism involves close collaboration between the Commission for Climatology (CCl), the Commission for Basic Systems (CBS), the Commission for Instruments and Methods of Observations (CIMO), the Global Climate Observing System (GCOS) through an ad-hoc advisory board, as well as the WMO Members and the Secretariat. The 69th Session of WMO Executive Council (May 2017) recognized a first set of 60 long-term observing stations following an invitation letter from WMO Secretariat to Members to submit no more than three candidate stations. Further invitation letters will be released every second year to extend the list of WMO recognized long-term observing stations. The next call for the nomination of candidate stations will be issued in early 2018.

The recognition mechanism is based on recognition criteria that address the length, completeness and consistency of observations at a station, the availability of minimum station metadata, data rescue, WMO observing standards including siting classification, observational data quality control and the future of the observing station. A self-assessment template for recognition criteria compliance of individual observing stations has been developed for Members to submit candidate stations, which has to be filled in for each candidate station. After review by the above mentioned advisory board, a list of stations is tabled at Executive Council sessions for final decision. It is envisaged to renew the recognition of observing stations every ten years to ensure criteria compliance.

A special WMO Website has been implemented that provides information on the mechanism and lists candidate and recognized stations:

https://public.wmo.int/en/our-mandate/what-we-do/observations/long-term-observing-stations

Furthermore, the recognition will be reflected in the WIGOS station catalogues. It is also planned to design a certificate per recognized station as well as a metal plate for installation at the station site.

Monday, October 10, 2016

A stable global climate reference network


Historical climate data contains inhomogeneities, for example due to changes in the instrumentation or the surroundings. Removing these inhomogeneities to get more accurate estimates of how much the Earth has actually warmed is a really interesting problem. I love the statistical homogenization algorithms we use for this; I am a sucker for beautiful algorithms. As an observationalist it is great to see the historical instruments and to read how scientists understood their measurements better and designed new instruments to avoid errors.

Still, for science it would be better if future climatologists had an easier task and could work with more accurate data. Let's design a climate-change-quality network that is as stable as we can humanly get it, to study the ongoing changes in the climate.

Especially now that the climate is changing, it is important to accurately predict the climate for the coming season, year, decade and beyond at a regional and local scale. That is information (local) governments, agriculture and industry need to plan, adapt, prepare and limit the societal damage of climate change.

Historian Sam White argues that the hardship of the Little Ice Age in Europe was not just about the cold, but also about the turbulent and unpredictable weather. Also in the coming century much hardship can be avoided with better predictions. To improve decadal climate predictions of regional changes and to understand the changes in extreme weather we need much better measurements. For example, with a homogenized radiosonde dataset, the improvements in the German decadal prediction system became much clearer than with the old dataset.

We are performing a unique experiment with the climate system and the experiment is far from over. It would also be scientifically unpardonable not to measure this ongoing change as well as we can. If your measurements are more accurate, you can see new things. Methodological improvements that lead to smaller uncertainties are one of the main factors that bring science forward.



A first step towards building a global climate reference network is agreeing on a concept. This modest proposal for preventing inhomogeneities due to poor observations from being a burden to future climatologists is hopefully a starting point for this discussion. Many other scientists are thinking about this. More formally there are the Rapporteurs on Climate Observational Issues of the Commission for Climatology (CCl) of the World Meteorological Organization (WMO). One of their aims is to:
Advance specifications for Climate Reference Networks; produce a statement of guidance for creating climate observing networks or climate reference stations with aspects such as types of instruments, metadata, and siting;

Essential Climate Variables

A few weeks ago Han Dolman and colleagues wrote a call to action in Nature Geoscience titled "A post-Paris look at climate observations". They argue that while the political limits are defined in terms of temperature, we need climate-quality observations for all essential climate variables listed in the table below.
We need continuous and systematic climate observations of a well-thought-out set of indicators to monitor the targets of the Paris Agreement, and the data must be made available to all interested users.
I agree that we should measure much more than just temperature. It is quite a list, but we need that to understand the changes in the climate system and to monitor the changes in the atmosphere, oceans, soil and biology we will need to adapt to. Not in this list, but important, are biological changes; ecology especially needs support for long-term observational programs, because it lacks the institutional support that the national weather services provide on the physical side.

Measuring multiple variables also helps in understanding measurement uncertainties. For instance, in the case of temperature measurements, additional observations of insolation, wind speed, precipitation, soil temperature and albedo are helpful. The US Climate Reference Network measures the wind speed at the height of the instrument (and of humans) rather than at the meteorologically typical height of 10 meters.

Because of my work, I am mainly thinking of the land surface stations, but we need a network for many more observations. Please let me know where the ideas do not fit to the other climate variables.

Table. List of the Essential Climate Variables; see original for footnotes.

Atmospheric (over land, sea and ice)
  • Surface: Air temperature, Wind speed and direction, Water vapour, Pressure, Precipitation, Surface radiation budget.
  • Upper-air: Temperature, Wind speed and direction, Water vapour, Cloud properties, Earth radiation budget (including solar irradiance).
  • Composition: Carbon dioxide, Methane, and other long-lived greenhouse gases, Ozone and Aerosol, supported by their precursors.

Oceanic
  • Surface: Sea-surface temperature, Sea-surface salinity, Sea level, Sea state, Sea ice, Surface current, Ocean colour, Carbon dioxide partial pressure, Ocean acidity, Phytoplankton.
  • Sub-surface: Temperature, Salinity, Current, Nutrients, Carbon dioxide partial pressure, Ocean acidity, Oxygen, Tracers.

Terrestrial
  • River discharge, Water use, Groundwater, Lakes, Snow cover, Glaciers and ice caps, Ice sheets, Permafrost, Albedo, Land cover (including vegetation type), Fraction of absorbed photosynthetically active radiation, Leaf area index, Above-ground biomass, Soil carbon, Fire disturbance, Soil moisture.

Comparable networks

There are comparable networks and initiatives, which likely shape how people think about a global climate reference network. Let me thus describe how they fit into the concept and where they are different.

There is the Global Climate Observing System (GCOS), which is mainly an undertaking of the World Meteorological Organization (WMO) and the Intergovernmental Oceanographic Commission (IOC). They observe the entire climate system; the idea of the above list of essential climate variables comes from them (Bojinski and colleagues, 2014). GCOS and its member organizations are important for the coordination of the observations, for setting standards so that measurements can be compared, and for defending the most important observational capabilities against government budget cuts.

Especially important from a climatological perspective is a new program to ask governments to recognize centennial stations as part of the world heritage. If such long series are stopped or the station is forced to move, a unique source of information is destroyed or damaged forever. That is comparable to destroying ancient monuments.



A subset of the meteorological stations is designated as the GCOS Surface Network, measuring temperature and precipitation. These stations have been selected for their length and quality, and to cover all regions of the Earth. Their monthly data is automatically transferred to global databases.

National weather services normally take good care of their GCOS stations, but a global reference network would have much higher standards and would also provide data at better temporal resolutions than monthly averages, to be able to study changes in extreme weather and weather variability.



There is already a global radiosonde reference network, the GCOS Reference Upper-Air Network (GRUAN; Immler and colleagues, 2010). This network provides measurements with well-characterized uncertainties, and extensive parallel measurements are made when a site transitions from one radiosonde design to the next. No proprietary software is used, to make sure it is known exactly what happened to the data.

Currently they have about 10 sites, a similar number is on the list to be certified, and the plan is to make this a network of about 30 to 40 stations; see the map below. Especially welcome would be partners to start a site in South America.



The observational system for the ocean, Argo, is, as far as I can see, similar to GRUAN. It measures temperature and salinity (Roemmich and colleagues, 2009). If your floats meet the specifications of Argo, you can participate. Compared to land stations the measurement environment is wonderfully uniform. The instruments typically work for a few years. Their life span is thus between that of a weather station and that of a one-way radiosonde ascent. This means that the instruments may deteriorate somewhat during their lifetime, but maintenance problems are more important for weather stations.

A wonderful explanation of how Argo works, for kids:


Argo has almost four thousand floats. They are working on a network with spherical floats that can go deeper.



Finally there are a number of climate reference networks of land climate stations. The best known is probably the US Climate Reference Network (USCRN; Diamond and colleagues, 2013). It has 131 stations. Every station has three identical high-quality instruments, so that measurement problems can be detected and the outlier attributed to a specific instrument. To find these problems quickly, all data is relayed online and checked at their main office. Regular inspections are performed and everything is well documented.



The USCRN selected new locations for its stations, which are expected to be free of human changes to the surroundings in the coming decades. This way it takes some time until the data becomes climatologically interesting, but the stations can already be compared with the normal network, and this gives some confidence that its homogenized data is okay for the national mean; see below. The number of stations was sufficient to compute a national average in 2005/2006.



Other countries, such as Germany and the United Kingdom, have opted to make existing stations into a national climate reference network. The UK Reference Climatological Stations (RCS) have a long observational record spanning at least 30 years and their distribution aims to be representative of the major climatological areas, while the locations are unaffected by environmental changes such as urbanisation.


German Climate Reference Station, founded in 1781 in Bavaria on the mountain Hohenpeißenberg. The kind of weather station photo WUWT does not dare to show.
In Germany the climate reference network consists of existing stations with a very long history. Originally they were the stations where conventional manual observations continued. Unfortunately, they will now also switch to automatic observations; fortunately, only after making a long parallel measurement to see what this does to the climate record*.

An Indian scientist proposes an Indian Climate Reference Network of about 110 stations (Jain, 2015). His focus is on precipitation observations. While temperature is a good way to keep track of the changes, most of the impacts are likely due to changes in the water cycle and storms. Precipitation measurements have large errors; it is very hard to make precipitation measurements with an error below 5%. When these errors change, that produces important inhomogeneities. Such jumps in precipitation data are hard to remove with relative statistical homogenization because the correlations between stations are low. If there is one meteorological parameter for which we need a reference network, it is precipitation.

Network of networks

For a surface station Global Climate Reference Network, the current US Climate Reference Network is a good template when it comes to the quality of the instrumentation, management and documentation.

A Global Climate Reference Network does not have to do the heavy lifting all alone. I would see it as the temporally stable backbone of the much larger climate observing system. We still have all the other observations that help to make sampling errors smaller and provide the regional information you need to study how energy and mass move through the climate system (natural variability).

We should combine them in a smart way to benefit from the strengths of all networks.



The Global Climate Reference Network does not have to be large. If the aim is to compute a global mean temperature signal, we would need just as many samples as we would need to compute the US mean temperature signal. This is in the order of 100 stations. Thus on average, every country in the world would have one climate reference station.

The figure on the right from Jones (1994) compares the temperature signal from 172 selected stations (109 in the Northern Hemisphere, 63 in the Southern Hemisphere) with the temperature signal computed from all available stations. There is nearly no difference, especially with respect to the long-term trend.

Callendar (1961) used only 80 stations, but his temperature reconstruction fits quite well with the modern reconstructions (Hawkins and Jones, 2013).

Beyond the global means

The number of samples/stations can be modest, but it is important that all climate regions of the world are sampled; some regions warm/change faster than others. It probably makes sense to have more stations in especially vulnerable regions, such as mountains, Greenland, Antarctica. We really need a stable network of buoys in the Arctic, where changes are fast and these changes also influence the weather in the mid-latitudes.


Crew members and scientists from the US Coast Guard icebreaker Healy haul a buoy across the sea ice during a deployment. In the lead a polar bear watcher and a rescue swimmer.
To study changes in precipitation we probably need more stations. Rare events contribute a lot to the mean precipitation rate. The threshold to get into the news seems to be the rain sum of a month falling in one day; enormous downpours below that level are not even newsworthy. This makes precipitation data noisy.

To study changes in extreme events we need more samples and might need more stations as well. How much more depends on how strong the synergy between the reference network and the other networks is and thus how much the other networks could then be used to produce more samples. That question needs some computational work.

The idea to use three redundant instruments in the USCRN is something we should also use in the GCRN, and I would propose to also create clusters of three stations. That would make it possible to detect and correct inhomogeneities by making comparisons. Even in a reference network there may still be inhomogeneities due to (unnoticed) changes in the surroundings or the management.
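As a minimal sketch of how triple redundancy helps (my own illustration, not USCRN practice; the function name and tolerance are assumptions): if one of three simultaneous readings deviates from the median of the trio by more than a tolerance, the problem can be attributed to that one instrument.

```python
import numpy as np

def flag_deviating_sensor(readings, tolerance=0.3):
    """Three simultaneous readings from redundant instruments (in °C).
    Returns the index of the instrument deviating most from the trio's median,
    or None if all readings agree within the tolerance."""
    values = np.asarray(readings, dtype=float)
    deviations = np.abs(values - np.median(values))
    worst = int(np.argmax(deviations))
    return worst if deviations[worst] > tolerance else None
```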


We should also carefully study whether it might be a problem to only use pristine locations. That could mean that the network is no longer representative of the entire world. We should probably include stations in agricultural regions; they cover a large part of the surface and may respond differently from natural regions. But agricultural practices (irrigation, plant types) will change.

Starting a new network at pristine locations has the disadvantage that it takes time until the network becomes valuable for climate change research. Thus I understand why Germany and the UK have opted to use locations where there are already long historical observations. Because we only need 100+ stations, it may be possible to select existing locations, from the 30 thousand stations we have, that are pristine and likely to stay pristine in the coming century. If not, I would not compromise and would use a new pristine location for the reference network.

Finally, when it comes to the number of stations, we probably have to take into account that no matter how much we try, some stations will become unsuitable due to war, land-use change and many other unforeseen problems. Just look back a century and consider all the changes we experienced; the network should be robust against such changes for the next century.

Absolute values or changes

Argo (ocean) and GRUAN (upper air) do not specify the instruments, but set specifications for the measurement uncertainties and their characterization. Instruments may thus change, and this change has to be managed. In the case of GRUAN they perform many launches with multiple instruments.

For a climate reference land station I would prefer to keep the instrument design exactly the same for the coming century.

To study changes in the climate, climatologists look at the local changes (compute anomalies) and average those. We have had a temperature increase of about 1°C since 1900 and are confident it is warming, even though the uncertainty in the average absolute temperature is of the same order of magnitude. Determining changes directly is easier than first estimating the absolute level and then looking at whether it is changing. By keeping the instruments the same, you can study changes more easily.
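A minimal Python sketch of this anomaly approach (my own illustration; the baseline period and data layout are assumptions): subtract each station's own monthly climatology before averaging, so the uncertain absolute levels drop out and only the local changes are averaged.

```python
import pandas as pd

def regional_mean_anomaly(monthly: pd.DataFrame,
                          baseline=("1961-01", "1990-12")) -> pd.Series:
    """monthly: DataFrame with a DatetimeIndex and one column per station."""
    base = monthly.loc[baseline[0]:baseline[1]]
    climatology = base.groupby(base.index.month).mean()  # 12 values per station
    anomalies = monthly - climatology.reindex(monthly.index.month).to_numpy()
    return anomalies.mean(axis=1)  # average the local changes, not the absolute values
```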


This is an extreme example, but how much thermometer screens weather and yellow before they are replaced depends on the material (and the climate). Even if we have better materials in the future, we'd better keep it the same for stable measurements.
For GRUAN, managing the change can solve most problems. Upper-air measurements are hard; the sun is strong, the air is thin (bad ventilation) and clouds and rain make the instruments wet. Because the instruments are only used once, they cannot be too expensive. On the other hand, starting each time with a freshly calibrated instrument makes the characterization of the uncertainties easier. Parallel measurements to manage changes are likely more reliable up in the air than at the surface, where two instruments measuring side by side can legitimately measure a somewhat different climate: especially for precipitation, where undercatchment strongly depends on the local wind, or for temperature, when cold air flowing at night hugs the orography.

Furthermore, land observations are used to study changes in extreme weather, not just the mean state of the atmosphere. The uncertainty of the rain rate depends on the rain rate itself. Strongly. Even in the laboratory, and likely more so outside, where the influence factors (wind, precipitation type) also depend on the rain rate. I see no way to keep undercatchment the same without at least specifying the outside geometry of the gauge and wind shield in minute detail.

The situation for temperature may be less difficult with high-quality instruments, but is similar. When it comes to extremes, the response time (better: response function) of the instruments also becomes important, as well as how much downtime the instrument experiences, which is often related to severe weather. It will be difficult to design new instruments that have the same response functions and the same errors over the full range of values. It will also be difficult to characterize the uncertainties over the full range of values and rates of change.

Furthermore, the instruments of a land station are used for a long time while not being observed. Thus weather, flora, fauna and humans become error sources. Instruments which have the same specifications in the laboratory may thus still perform differently in the field. Rain gauges may be more or less prone to getting clogged by snow or insects, more or less attractive for drunks to pee in. Temperature screens may be more or less prone to be blocked by icing or for bees to build their nest in. Weather stations may be more or less attractive to curious polar bears.

This is not a black-and-white situation. Which route to prefer will depend on the quality of the instruments. In the extreme case of an error-free measurement, there is no problem with replacing it with another error-free instrument. Metrologists in the UK are building an instrument that acoustically measures the temperature of the air without needing a thermometer, which in theory should have the temperature of the air, but in practice never exactly has. If, after 2 or 3 generations of new instruments, they are really a lot better in 50 years and we exchange them, that would still be a huge improvement over the current situation with an inhomogeneity every 15 to 20 years.



The software of GRUAN is all open source, so that when we understand the errors better in the future, we know exactly what we did and can improve the estimates. If we specify the instruments, that would mean that we need Open Hardware as well. The designs would need to be open and specified in detail. Simple materials should be used to be sure we can still obtain them in 2100. An instrument measuring humidity using the dew point of a mirror will be easier to build in 2100 than one using a special polymer film. These instruments can still be built by the usual companies.

If we keep the instrumentation of the reference network the same, the normal climate network, the GCOS network, will likely have better equipment in 2100. We will discover many ways to make more accurate observations, to cut costs and to make the management easier. There is no way to stop progress for the entire network, which in 2100 may well have over 100 thousand stations. But I hope we can stop progress for a very small climate reference network of just 100 to 200 stations. We should not see the reference network as the top of the hierarchy, but as the stable backbone that complements the other observations.

Organization

How do we make this happen? First the scientific community should agree on a concept and show how much the reference network would improve our understanding of the climatic changes in the 21st century. Hopefully this post is a step in this direction and there is an article in the works. Please add your thoughts in the comments.

With on average one reference station per country, it would be very inefficient if every country managed its own station. Keeping the high metrological and documentation standards is an enormous task. Given that the network would be the same size as the USCRN, the GCRN could in principle be managed by one global organization, like the USCRN is managed by NOAA. It would, however, probably be more practical to have regional organizations for better communication with the national weather services and to reduce travel costs for maintenance and inspections.

Funding


The funding of a reference network should be additional funding. Otherwise it will be a long hard struggle in every country involved to build a reference station. In developing countries the maintenance of one reference station may well exceed the budget of their current network. We already see that some meteorologists fear that the centennial stations program will hurt the rest of the observational network. Without additional funding, there will likely be quite some opposition and friction.

In the Paris climate treaty, the countries of the world have already pledged to support climate science to reduce costs and damages. We need to know how close we are to the 2°C limit as feedback to the political process, and we need information on all other changes as well to assess the damages from climate change. Compared to the economic consequences of these decisions, the costs of a climate reference network are peanuts.

Thus my suggestion would be to ask the global climate negotiators to provide the necessary funding. If we go there, we should also ask the politicians to agree on the international sharing of all climate data. Restrictions on data are holding climate research and climate services back. Both are necessary to plan adaptation and to limit damages.

The World Meteorological Organization had its congress last year. The directors of the national weather services have shown that they are not able to agree on the international sharing of data. For weather services selling data is often a large part of their budget. Thus the decision to share data internationally should be made by politicians who have the discretion to compensate these losses. In the light of the historical responsibility of the rich countries, I feel a global fund to support the meteorological networks in poor countries would be just. This would compensate them for the losses in data sales and would allow them to better protect themselves against severe weather and climate conditions.

Let's make sure that future climatologists can study the climate in much more detail.

Think of the children.


Related information

Hillary Rosner in the NYT on the global greenhouse gas reference network: The Climate Lab That Sits Empty

Free our climate data - from Geneva to Paris

Congress of the World Meteorological Organization, free our climate data

Climate History Podcast with Dr. Sam White mainly on the little ice age

A post-Paris look at climate observations. Nature Geoscience (manuscript)

Why raw temperatures show too little global warming

References

Bojinski, Stephan, Michel Verstraete, Thomas C. Peterson, Carolin Richter, Adrian Simmons and Michael Zemp, 2014: The Concept of Essential Climate Variables in Support of Climate Research, Applications, and Policy. Bulletin of the American Meteorological Society, doi: 10.1175/BAMS-D-13-00047.1.

Callendar, Guy S., 1961: Temperature fluctuations and trends over the earth. Quarterly Journal Royal Meteorological Society, 87, pp. 1–12. doi: 10.1002/qj.49708737102.

Diamond, Howard J., Thomas R. Karl, Michael A. Palecki, C. Bruce Baker, Jesse E. Bell, Ronald D. Leeper, David R. Easterling, Jay H. Lawrimore, Tilden P. Meyers, Michael R. Helfert, Grant Goodge, Peter W. Thorne, 2013: U.S. Climate Reference Network after One Decade of Operations: Status and Assessment. Bulletin of the American Meteorological Society, doi: 10.1175/BAMS-D-12-00170.1.

Dolman, A. Johannes, Alan Belward, Stephen Briggs, Mark Dowell, Simon Eggleston, Katherine Hill, Carolin Richter and Adrian Simmons, 2016: A post-Paris look at climate observations. Nature Geoscience, 9, September, doi: 10.1038/ngeo2785. (manuscript)

Hawkins, Ed and Jones, Phil. D. 2013: On increasing global temperatures: 75 years after Callendar. Quarterly Journal Royal Meteorological Society, 139, pp. 1961–1963, doi: 10.1002/qj.2178.

Immler, F.J., J. Dykema, T. Gardiner, D.N. Whiteman, P.W. Thorne, and H. Vömel, 2010: Reference Quality Upper-Air Measurements: guidance for developing GRUAN data products. Atmospheric Measurement Techniques, 3, pp. 1217–1231, doi: 10.5194/amt-3-1217-2010.

Jain, Sharad Kumar, 2015: Reference Climate and Water Data Networks for India. Journal of Hydrologic Engineering, 10.1061/(ASCE)HE.1943-5584.0001170, 02515001. (Manuscript)

Jones, Phil D., 1994: Hemispheric Surface Air Temperature Variations: A Reanalysis and an Update to 1993. Journal of Climate, doi: 10.1175/1520-0442(1994)007<1794:HSATVA>2.0.CO;2.

Pattantyús-Ábrahám, Margit and Wolfgang Steinbrecht, 2015: Temperature Trends over Germany from Homogenized Radiosonde Data. Journal of Climate, doi: 10.1175/JCLI-D-14-00814.1.

Roemmich, D., G.C. Johnson, S. Riser, R. Davis, J. Gilson, W.B. Owens, S.L. Garzoli, C. Schmid, and M. Ignaszewski, 2009: The Argo Program: Observing the global ocean with profiling floats. Oceanography, 22, p. 34–43, doi: 10.5670/oceanog.2009.36.

* The transition to automatic weather stations in Germany happened to have almost no influence on the annual means, contrary to what Klaus Hager and the German mitigation sceptical blog propagandise based on badly maltreated data.

** The idea to illustrate the importance of smaller uncertainties by showing two resolutions of the same photo comes from metrologist Michael de Podesta.

Saturday, June 13, 2015

Free our climate data - from Geneva to Paris

Royal Air Force- Italy, the Balkans and South-east Europe, 1942-1945. CNA1969

Neglecting to monitor the harm done to nature and the environmental impact of our decisions is only the most striking sign of a disregard for the message contained in the structures of nature itself.
Pope Francis

The 17th Congress of the World Meteorological Organization in Geneva ended today. After countless hours of discussions they managed to pass an almost completely rewritten resolution on sharing climate data in the last hour.

The glass is half full. On the one hand, the resolution clearly states the importance of sharing data. It demonstrates that it is important to help humanity cope with climate change by making it part of the global framework for climate services (GFCS), which is there to help all nations to adapt to climate change.

The resolution considers and recognises:
The fundamental importance of the free and unrestricted exchange of GFCS relevant data and products among WMO Members to facilitate the implementation of the GFCS and to enable society to manage better the risks and opportunities arising from climate variability and change, especially for those who are most vulnerable to climate-related hazards...

That increased availability of, and access to, GFCS relevant data, especially in data sparse regions, can lead to better quality and will create a greater variety of products and services...

Indeed free and unrestricted access to data can and does facilitate innovation and the discovery of new ways to use, and purposes for, the data.
On the other hand, if a country wants to, it can still refuse to share the most important datasets: the historical station observations. Many datasets will be shared: satellite data and products, ocean and cryosphere (ice) observations, and measurements of the composition of the atmosphere (including aerosols). However, information on streamflow and lakes and most of the climate station data are exempt.

The resolution does urge Members to:
Strengthen their commitment to the free and unrestricted exchange of GFCS relevant data and products;

Increase the volume of GFCS relevant data and products accessible to meet the needs for implementation of the GFCS and the requirements of the GFCS partners;
But there is no requirement to do so.

The most positive development is not on paper. Data sharing may well have been the main discussion topic among the directors of the national weather services at the Congress. They got the message that many of them find this important and they are likely to prioritise data sharing in future. I am grateful to the people at the WMO Congress who made this happen, you know who you are. Some directors really wanted to have a strong resolution as justification towards their governments to open up the databases. There is already a trend towards more and more countries opening up their archives, not only of climate data, but going towards open governance. Thus I am confident that many more countries will follow this trend after this Congress.

Also good about the resolution is that WMO will start monitoring data availability and data policies. This will make visible how many countries are already taking the high road and speed up the opening of the datasets. The resolution requests WMO to:
Monitor the implementation of policies and practices of this Resolution and, if necessary, make proposals in this respect to the Eighteenth World Meteorological Congress;
In a nice twist the WMO calls the data to be shared "GFCS data", thus basically saying: if you do not share climate data, you are responsible for the national damages of climatic changes that you could have adapted to, and you are responsible for the failed adaptation investments. The term "GFCS data" misses how important this data is for basic climate research. Research that is important to guide expensive political decisions on mitigation and, in the end, again adaptation and, ever more likely, geo-engineering.

If I may repeat myself, we really need all the data we can get for an accurate assessment of climatic changes, a few stations will not do:
To reduce the influence of measurement errors and non-climatic changes (inhomogeneities) on our (trend) assessments we need dense networks. These errors are detected and corrected by comparing one station to its neighbours. The closer the neighbours are, the more accurate we can assess the real climatic changes. This is especially important when it comes to changes in severe and extreme weather, where the removal of non-climatic changes is very challenging.
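To illustrate the comparison with neighbours mentioned in the quote, here is a minimal Python sketch of the detection idea only, not any specific published homogenization algorithm; the function name and data layout are my own assumptions. The candidate-minus-neighbour difference series removes the common regional climate signal, so a jump in that series points to a non-climatic change at one of the two stations.

```python
import numpy as np

def most_likely_break(candidate: np.ndarray, neighbour: np.ndarray) -> int:
    """Annual means of a candidate station and a nearby reference station.
    Returns the index where the difference series shows the largest shift
    between the mean before and the mean after that point."""
    diff = candidate - neighbour  # the common climate signal largely cancels
    n = len(diff)
    shifts = [abs(diff[:k].mean() - diff[k:].mean()) for k in range(2, n - 1)]
    return int(np.argmax(shifts)) + 2
```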
The problem, as so often, is mainly money. Weather services get some revenues from selling climate data. These can't be big compared to the impacts of climate change or compared to the investments needed to adapt, but relative to the budget of a weather service, especially in poorer countries, it does make a difference. At least the weather services will have to ask their governments for permission.

Thus we will probably have to up our game. The mandate of the weather services is not enough; we need to make clear to the governments of this world that sharing climate data is of huge benefit to every single country. Compared to the costs of climate change this is a no-brainer. Don Henry writes that "[The G7] also said they would continue efforts to provide US$100 billion a year by 2020 to support developing countries' own climate actions." The revenues from selling climate data are irrelevant compared to that number.

There is a large political climate summit coming up, COP21 in Paris in December. This week there was a preparatory meeting in Bonn to work on the text of the climate treaty. The proposal already has an optional text about climate research:
[Industrialised countries] and those Parties [nations] in a position to do so shall support the [Least Developed Countries] in the implementation of national adaptation plans and the development of additional activities under the [Least Developed Countries] work programme, including the development of institutional capacity by establishing regional institutions to respond to adaptation needs and strengthen climate-related research and systematic observation for climate data collection, archiving, analysis and modelling.
An earlier climate treaty (COP4 from 1998) already speaks about the exchange of climate data (FCCC/CP/1998/16/Add.1):
Urges Parties to undertake free and unrestricted exchange of data to meet the needs of the Convention, recognizing the various policies on data exchange of relevant international and intergovernmental organizations;
"Urges" is not enough, but that is a basis that could be reinforced. With the kind of money COP21 is dealing with it should be easy to support weather services of less wealthy countries to improve their observation systems and make the data freely available. That would be an enormous win-win situation.

To make this happen, we probably need to show that the climate science community stands behind this. We would need a group of distinguished climate scientists from as many countries as possible to support a "petition" requesting better measurements in data-sparse regions and free and unrestricted data sharing.

To get heard we would probably also need to write articles for the national newspapers; to get published they would again have to be written by well-known scientists. To get attention it would also be great if many climate blogs wrote about the action on the same day.

Maybe we could make this work. My impression was already that basically everyone in the climate science community finds the free exchange of climate data very important and the current situation a major impediment to better climate research. After last week's article on data sharing the response was enormous and only positive. This may have been the first time that a blog post of mine that did not respond to something in the press got over 1000 views. It was certainly my first tweet that got over 13 thousand views and 100 retweets:


This action of my little homogenization blog was even at the top of the Twitter page on the Congress of the WMO (#MeteoWorld), right next to the photo of the newly elected WMO Secretary-General Petteri Taalas.



With all this internet enthusiasm and the dedication of the people fighting for free data at the WMO and likely many more outside of the WMO, we may be able to make this work. If you would like to stay informed please fill in the form below or write to me. If enough people show interest, I feel we should try. I also do not have the time, but this is important.






Related reading

Congress of the World Meteorological Organization, free our climate data

Why raw temperatures show too little global warming

Everything you need to know about the Paris climate summit and UN talks

Bonn climate summit brings us slowly closer to a global deal by Don Henry (Public Policy Fellow, Melbourne Sustainable Society Institute at University of Melbourne) at The Conversation.

Free climate data action promoted in Italian. Thank you Sylvie Coyaud.

If my Italian (that is, Google Translate) is good enough, that post asks the Pope to put the sharing of climate data in his encyclical. Weather data is a common good.


* Photo at the top: By Royal Air Force official photographer [Public domain], via Wikimedia Commons

Tuesday, November 26, 2013

Are break inhomogeneities a random walk or a noise?

Tomorrow is the next conference call of the benchmarking and assessment working group (BAWG) of the International Surface Temperature Initiative (ISTI; Thorne et al., 2011). The BAWG will create a dataset to benchmark (validate) homogenization algorithms. It will mimic the real mean temperature data of the ISTI, but will include known inhomogeneities, so that we can assess how well the homogenization algorithms remove them. We are almost finished discussing how the benchmark dataset should be developed, but still need to fix some details, such as the question: are break inhomogeneities a random walk or a noise?

Previous studies

The benchmark dataset of the ISTI will be global and is also intended to be used to estimate uncertainties in the climate signal due to remaining inhomogeneities. These are the two main improvements over previous validation studies.

Williams, Menne, and Thorne (2012) validated the pairwise homogenization algorithm of NOAA on a dataset mimicking the US Historical Climate Network. The paper focusses on how well large-scale biases can be removed.

The COST Action HOME performed a benchmarking of several small networks (5 to 19 stations) realistically mimicking European climate networks (Venema et al., 2012). Its main aim was to intercompare homogenization algorithms; the small networks allowed HOME to also test manual homogenization methods.

These two studies were blind, in other words the scientists homogenizing the data did not know where the inhomogeneities were. An interesting coincidence is that the people who generated the blind benchmarking data were outsiders at the time: Peter Thorne for NOAA and me for HOME. This probably explains why we both made an error, which we should not repeat in the ISTI.

Sunday, November 17, 2013

On the reactions to the doubling of the recent temperature trend by Curry, Watts and Lucia

The recent Cowtan and Way study, "Coverage bias in the HadCRUT4 temperature record", in the QJRMS showed that the temperature trend over the last 15 years is more than twice as strong as previously thought. [UPDATE: The paper can be read here; it is now Open Access.]

This created quite a splash in the blog-o-sphere; see my last post. This is probably no wonder. The strange idea that global warming has stopped is one of the main memes of the climate ostriches, and in the USA even of the mainstream media. A recent media analysis showed that half of the reporting on the recent publication of the IPCC report pertained to this meme.

This reporting is in stark contrast to the IPCC having almost forgotten to write about it, as it has little climatological significance. Also after the Cowtan and Way (2013) paper, the global temperature trend between 1880 and now is still about 0.8 degrees per century.

The global warming of the entire climate system is continuing without pause in the warming of the oceans, which are the main absorber of energy in the climate system. The atmospheric temperature increase only accounts for about 2 percent of the total. Because the last 15 years also account for just a short part of the anthropogenic warming period, one can estimate that the discussion is about less than one thousandth of the warming.

Reactions

The study was positively received by, amongst others, the Klimalounge (in German), RealClimate, Skeptical Science, Carbon Brief, QuakeRattled, WottsUpWithThatBlog, OurChangingClimate, Moyhu (Nick Stokes) and Planet 3.0. It is also discussed in the press: Sueddeutsche Zeitung, TAZ, Spiegel Online (three leading newspapers in Germany, in German), The Independent (4 articles), Mother Jones, Hürriyet (a large newspaper in Turkey) and Science Daily.

Lucia at The Blackboard wrote in her first post, Cowtan and Way: Have they killed the pause?, and stated: "Right now, I’m mostly liking the paper. The issues I note above are questions, but they do do quite a bit of checking". And Lucia wrote in her second post: "The paper is solid."

Furthermore, Steve Mosher writes: "I know robert [Way] does first rate work because we’ve been comparing notes and methods and code for well over a year. At one point we spent about 3 months looking at labrador data from enviroment canada and BEST. ... Of course, folks should double and triple check, but he’s pretty damn solid."

The main serious critical voice seems to be Judith Curry at Climate Etc. Her comments have been taken up by numerous climate ostrich blogs. This post discusses Curry's comments, which were also taken up by Lucia. It will also include some erroneous additions by Anthony Watts and discuss one additional point raised by Lucia.
  1. Interpolation
  2. UAH satellite analyses
  3. Reanalyses
  4. No contribution
  5. Model validation
  6. A hiatus in the satellite datasets (Black Board)

Wednesday, November 13, 2013

Temperature trend over last 15 years is twice as large as previously thought

UPDATED: Now with my response to Judith Curry's comments and an interesting comment by Peter Thorne.

Yesterday a study appeared in the Quarterly Journal of the Royal Meteorological Society that suggests that the temperature trend over the last 15 years is about twice as large as previously thought. This study [UPDATE: Now Open Access] is by Kevin Cowtan and Robert G. Way and is called: "Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends".

The reason for the bias is that in the HadCRUT dataset, there is a gap in the Arctic and the study shows that it is likely that there was strong warming in this missing data region (h/t Stefan Rahmstorf at Klimalounge in German; the comments and answers by Rahmstorf there are also interesting and refreshingly civilized; might be worth reading the "translation"). In the HadCRUT4 dataset the temperature trend over the period 1997-2012 is only 0.05°C per decade. After filling the gap in the Arctic, the trend is 0.12 °C per decade.
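For readers who want to check such numbers themselves, a decadal trend is usually computed as the ordinary least-squares slope of the annual global mean anomalies. A minimal sketch, with made-up anomaly values purely to show the mechanics:

```python
import numpy as np

def trend_per_decade(years, anomalies):
    """Ordinary least-squares trend of annual anomalies, in °C per decade."""
    slope_per_year = np.polyfit(years, anomalies, 1)[0]
    return 10.0 * slope_per_year

# Made-up annual anomalies for 1997-2012, only for illustration
years = np.arange(1997, 2013)
rng = np.random.default_rng(0)
anomalies = 0.005 * (years - 1997) + rng.normal(0.0, 0.08, years.size)
print(f"trend: {trend_per_decade(years, anomalies):+.2f} °C per decade")
```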

The study starts with the observation that over the period 1997 to 2012 "GISTEMP, UAH and NCEP/NCAR [which have (nearly) complete global coverage and no large gap at the Arctic, VV] all show faster warming in the Arctic than over the planet as a whole, and GISTEMP and NCEP/NCAR also show faster warming in the Antarctic. Both of these regions are largely missing in the HadCRUT4 data. If the other datasets are right, this should lead to a cool bias due to coverage in the HadCRUT4 temperature series."

Datasets

All datasets have their own strengths and weaknesses. The nice thing about this paper is how they combine the datasets, using their strengths and mitigating their weaknesses.

Surface data. Direct (in-situ) measurements of temperature (used in HadCRUT and GISTEMP) are very important. Because they lend themselves well to homogenization, station data are temporally consistent and their trends are thus the most reliable. Problems are that most observations were not performed with climate change in mind, and the spatial gaps that are so important for this study.

Satellite data. Satellites perform indirect measurements of the temperature (UAH and RSS). Their main strengths are global coverage and spatial detail. A problem for satellite datasets is that the computation of physical parameters (retrievals) needs simplifying assumptions and that other (partially unknown) factors can influence the result. The temperature retrieval needs information on the surface, which is especially important in the Arctic. The other satellite temperature dataset, by RSS, therefore omits the Arctic. UAH is also expected to have biases in the Arctic, but does provide data.
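The general idea of combining the two sources can be illustrated with a toy example (a drastic simplification of my own, not the actual Cowtan and Way method, which uses kriging and a more careful hybrid calibration): where a surface grid cell has no data, take the satellite anomaly and correct it with the offset between the two datasets estimated from cells where both are available.

```python
import numpy as np

def hybrid_infill(surface, satellite):
    """Fill gaps (NaN) in a surface anomaly field using satellite anomalies.

    The satellite anomalies are shifted by the mean surface-minus-satellite
    offset estimated from cells where both datasets are available. A toy 1-D
    version; the real problem is 2-D with spatially varying offsets.
    """
    surface = np.asarray(surface, dtype=float)
    satellite = np.asarray(satellite, dtype=float)
    both = ~np.isnan(surface) & ~np.isnan(satellite)
    offset = np.mean(surface[both] - satellite[both])   # single global offset as a crude stand-in
    filled = surface.copy()
    gaps = np.isnan(surface) & ~np.isnan(satellite)
    filled[gaps] = satellite[gaps] + offset
    return filled

# Example: the last three grid cells (think: the Arctic) are missing in the surface data
surface   = np.array([0.2, 0.3, 0.1, np.nan, np.nan, np.nan])
satellite = np.array([0.1, 0.2, 0.0, 0.5, 0.6, 0.7])
print(hybrid_infill(surface, satellite))              # -> [0.2 0.3 0.1 0.6 0.7 0.8]
```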

Monday, September 30, 2013

Reviews of the IPCC review

The first IPCC report (Working Group One), "Climate Change 2013, the physical science basis", has just been released.

One way to judge the reliability of a source is to see what it states about a topic you are knowledgeable about. I work on homogenization of station climate data and was thus interested in how well the IPCC report presents the scientific state of the art on the uncertainties in trend estimates due to historical changes in climate monitoring practices.

Furthermore, I have asked some colleague climate science bloggers to review the IPCC report on their areas of expertise. You will find these reviews of the IPCC report at the end of the post as they come in. I found most of these colleagues via Doug McNeall's beautiful list of climate science bloggers.

Large-Scale Records and their Uncertainties

The IPCC report is nicely structured. The part that deals with the quality of the land surface temperature observations is in Chapter 2 Observations: Atmosphere and Surface, Section 2.4 Changes in Temperature, Subsection 2.4.1 Land-Surface Air Temperature, Subsubsection 2.4.1.1 Large-Scale Records and their Uncertainties.

The relevant paragraph reads (my paragraph breaks for easier reading):
Particular controversy since AR4 [the last fourth IPCC report, vv] has surrounded the LSAT [land surface air temperature, vv] record over the United States, focussed upon siting quality of stations in the US Historical Climatology Network (USHCN) and implications for long-term trends. Most sites exhibit poor current siting as assessed against official WMO [World Meteorological Organisation, vv] siting guidance, and may be expected to suffer potentially large siting-induced absolute biases (Fall et al., 2011).

However, overall biases for the network since the 1980s are likely dominated by instrument type (since replacement of Stevenson screens with maximum minimum temperature systems (MMTS) in the 1980s at the majority of sites), rather than siting biases (Menne et al., 2010; Williams et al., 2012).

A new automated homogeneity assessment approach (also used in GHCNv3, Menne and Williams, 2009) was developed that has been shown to perform as well or better than other contemporary approaches (Venema et al., 2012). This homogenization procedure likely removes much of the bias related to the network-wide changes in the 1980s (Menne et al., 2010; Fall et al., 2011; Williams et al., 2012).

Williams et al. (2012) produced an ensemble of dataset realisations using perturbed settings of this procedure and concluded through assessment against plausible test cases that there existed a propensity to under-estimate adjustments. This propensity is critically dependent upon the (unknown) nature of the inhomogeneities in the raw data records.

Their homogenization increases both minimum temperature and maximum temperature centennial-timescale United States average LSAT trends. Since 1979 these adjusted data agree with a range of reanalysis products whereas the raw records do not (Fall et al., 2010; Vose et al., 2012a).

I would argue that this is a fair summary of the state of the scientific literature. That naturally does not mean that all statements are true, just that it fits the current scientific understanding of the quality of the temperature observations over land. People claiming that there are large trend biases in the temperature observations will need to explain what is wrong with Venema et al. (2012) (an article of mine) and especially Williams et al. (2012). Williams et al. (2012) provides strong evidence that if there is a bias in the raw observational data, homogenization can improve the trend estimate, but it will normally not remove the bias fully.
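To give readers a feeling for what such relative homogenization does, below is a minimal sketch of detecting a single break in the difference series between a candidate station and a well-correlated neighbour. This is only a crude illustration of the principle; the pairwise algorithm of Menne and Williams (2009) handles multiple breaks, many neighbours and significance testing far more carefully.

```python
import numpy as np

def detect_single_break(candidate, neighbour, min_segment=5):
    """Locate the most likely single break in the difference series candidate - neighbour.

    Returns (break_index, offset): the index at which the mean of the difference
    series shifts and the size of that shift. Only a crude single-break detector,
    not a full pairwise homogenization algorithm.
    """
    diff = np.asarray(candidate, dtype=float) - np.asarray(neighbour, dtype=float)
    n = diff.size
    best_idx, best_stat = None, -np.inf
    for k in range(min_segment, n - min_segment):
        left, right = diff[:k], diff[k:]
        # Between-segment sum of squares explained by a mean shift at position k
        stat = left.size * (left.mean() - diff.mean()) ** 2 \
             + right.size * (right.mean() - diff.mean()) ** 2
        if stat > best_stat:
            best_idx, best_stat = k, stat
    offset = diff[best_idx:].mean() - diff[:best_idx].mean()
    return best_idx, offset

# Synthetic example: a 0.5 °C jump in the candidate series after year 30
rng = np.random.default_rng(1)
neighbour = rng.normal(0.0, 0.3, 60)                 # shared regional climate signal
candidate = neighbour + rng.normal(0.0, 0.1, 60)     # candidate follows the neighbour closely
candidate[30:] += 0.5                                # inhomogeneity, e.g. a relocation
print(detect_single_break(candidate, neighbour))
```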

Personally, I would be very surprised if someone were to find substantial trend biases in the homogenized US temperature observations. Due to the high station density, this dataset can be investigated and homogenized very well.

Tuesday, February 5, 2013

A database with daily climate data for more reliable studies of changes in extreme weather

In summary:
  • We want to build a global database of parallel measurements: observations of the same climatic parameter made independently at the same site
  • This will help research in many fields
    • Studies of how inhomogeneities affect the behaviour of daily data (variability and extreme weather)
    • Improvement of daily homogenisation algorithms
    • Improvement of robust daily climate data for analysis
  • Please help us to develop such a dataset

Introduction



One way to study the influence of changes in measurement techniques is to make simultaneous measurements with historical and current instruments, procedures or screens. This picture shows three meteorological shelters next to each other in Murcia (Spain). The rightmost shelter is a replica of the Montsouri screen, in use in Spain and many other European countries in the late 19th and early 20th century. In the middle, a Stevenson screen equipped with automatic sensors; on the left, a Stevenson screen equipped with conventional meteorological instruments.
Picture: Project SCREEN, Center for Climate Change, Universitat Rovira i Virgili, Spain.


We intend to build a database with parallel measurements to study non-climatic changes in the climate record. This is especially important for studies on weather extremes where the distribution of the daily data employed must not be affected by non-climatic changes.

There are many parallel measurements from numerous previous studies analysing the influence of different measurement set-ups on average quantities, especially average annual and monthly temperature. Increasingly, changes in the distribution of daily and sub-daily values are also being investigated (Auchmann and Brönnimann, 2012; Brandsma and Van der Meulen, 2008; Böhm et al., 2010; Brunet et al., 2010; Perry et al., 2006; Trewin, 2012; Van der Meulen and Brandsma, 2008). However, the number of such studies is still limited, while the number of questions that can and need to be answered is much larger for daily data.
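As a flavour of such an analysis, the sketch below compares selected quantiles of two parallel daily temperature series. The data are synthetic and the approach is much simplified compared to the published studies cited above.

```python
import numpy as np

def quantile_differences(series_new, series_old, quantiles=(0.05, 0.25, 0.50, 0.75, 0.95)):
    """Difference (new minus old) of selected quantiles of two parallel daily series.

    If the differences vary across quantiles, the change in measurement set-up
    affects the tails of the distribution differently from the mean, which is
    what matters for studies of extremes.
    """
    q = np.asarray(quantiles)
    return dict(zip(q, np.quantile(series_new, q) - np.quantile(series_old, q)))

# Synthetic example: the new set-up reads 0.2 °C warmer on average
# and an extra 0.4 °C warmer on the hottest ten percent of days.
rng = np.random.default_rng(2)
old = rng.normal(15.0, 8.0, 3650)                 # ten years of daily temperatures
new = old + 0.2 + 0.4 * (old > np.quantile(old, 0.9))
for q, d in quantile_differences(new, old).items():
    print(f"q{int(round(q * 100)):02d}: {d:+.2f} °C")
```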

Unfortunately, the current common practice is not to share parallel measurements and the analyses have thus been limited to smaller national or regional datasets, in most cases simply to a single station with multiple measurement set-ups. Consequently there is a pressing need for a large global database of parallel measurements on a daily or sub-daily scale.

Datasets from pairs of nearby stations, while not officially parallel measurements, are also interesting for studying the influence of relocations. In particular, typical types of relocations, such as the move of weather stations from urban areas to airports, could be studied this way. In addition, the influence of urbanization can be studied with pairs of nearby stations.

Tuesday, September 18, 2012

Future research in homogenisation of climate data – EMS2012 in Poland

By Enric Aguilar and Victor Venema

The future of research and training in homogenisation of climate data was discussed at the European Meteorological Society meeting in Lodz by 21 experts. Homogenisation of monthly temperature data has improved much in recent years, as seen in the results of the COST-HOME project. On the other hand, the homogenisation of daily and subdaily data is still in its infancy, and this data is used frequently to analyse changes in extreme weather. It is expected that inhomogeneities in the tails of the distribution are stronger than in the means. To make such analyses on extremes more reliable, more work on daily homogenisation is urgently needed. This does not mean that homogenisation at the monthly scale is already optimal; much can still be improved.

Parallel measurements

Parallel measurements with multiple measurement set-ups were seen as an important way to study the nature of inhomogeneities in daily and sub-daily data. It would be good to have a large international database with such measurements. The regional climate centres (RCC) could host such a dataset. Numerous groups are working on this topic, but more collaboration is needed. Also more experiments would be valuable.

When gathering parallel measurements the metadata is very important. INSPIRE (an EU Directive) has a standard format for metadata, which could be used.

It may be difficult to produce an open database with parallel measurements as European national meteorological and hydrological services are often forced to sell their data for profit. (Ironically, in the Land of the Free (markets), climate data is available freely; the public already paid for it with their tax money, after all.) Political pressure to free climate data is needed. Finland is setting a good example and will free its data in 2013.

Thursday, August 2, 2012

A short introduction to the time of observation bias and its correction




Figure 1. A thermo-hygrograph, which measures and records temperature and humidity.
Due to recent events, the time of observation bias in climatological temperature measurements has become a hot topic. What is it, why is it important, and why and how should we correct for it? A short introduction.

Mean temperature

The mean daily temperature can be determined in multiple ways. Nowadays, it is easy to measure the temperature frequently, store it in digital memory and compute the daily average. Something similar was possible in the past using a thermograph (see Figure 1), but such an instrument was expensive and fragile.

Thus other methods were normally used for standard measurements: minimum and maximum thermometers, or a weighted average over observations at 3 or 4 fixed times. Averaging the minimum and maximum temperature is a good approximation for many climate regions; special minimum and maximum thermometers were invented for this task in 1782.
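A minimal sketch of these different daily-mean definitions on a synthetic diurnal cycle. The fixed-hour weights below (observations at 7, 14 and 21 h, with the evening observation counted twice) follow one common historical scheme; national practices differed, so take the numbers as illustration only.

```python
import numpy as np

hours = np.arange(24)
# Synthetic diurnal cycle: coldest around 3 h, warmest around 15 h
hourly_temp = 15.0 + 5.0 * np.sin(2 * np.pi * (hours - 9) / 24)

true_mean = hourly_temp.mean()                                # average of 24 hourly values
minmax_mean = 0.5 * (hourly_temp.min() + hourly_temp.max())   # (Tmin + Tmax) / 2
# For this symmetric cycle the two agree; real, asymmetric cycles give small differences.

# Weighted average over fixed observation hours (one historical scheme; others exist)
obs_hours = [7, 14, 21]
weights = np.array([0.25, 0.25, 0.50])                        # evening observation counted twice
fixed_hour_mean = float(np.sum(weights * hourly_temp[obs_hours]))

print(f"24-hour mean:       {true_mean:.2f} °C")
print(f"(Tmin + Tmax)/2:    {minmax_mean:.2f} °C")
print(f"fixed-hour average: {fixed_hour_mean:.2f} °C")
```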