Showing posts with label temperature. Show all posts

Monday, March 21, 2016

Cooling moves of urban stations



It has been studied over and over again, in many different ways: in global temperature datasets, urban stations show about the same temperature trend as surrounding rural stations.

There is also massive evidence that urban areas are typically warmer than their surroundings. For large urban areas the Urban Heat Island (UHI) effect can increase the temperature by several degrees Celsius.

A constant higher temperature due to the UHI does not influence temperature changes. However, when cities grow around a weather station, this produces an artificial warming trend.

Why don’t we see this in the urban stations of the global temperature collections? There are several reasons; the one I want to focus on in this post is that stations do not stay at the same place.

Urban stations are often relocated to better locations, typically further out of town. It is common for urban stations to be moved to airports, especially when meteorological offices move there to support aviation safety. When meteorological offices can no longer pay the rent in the city center, they are forced to move out and take the station with them. And when urban development makes the surroundings unsuitable, or when a volunteer observer retires, the station has to move; it then makes sense to search for a better location, which will likely be in a less urban area.

In nearly every network, relocations are the most frequent cause of inhomogeneities. For example, Manola Brunet and colleagues (2006) write about Spain:
“Changes in location and setting are the main cause of inhomogeneities (about 56% of stations). Station relocations have been common during the longest Spanish temperature records. Stations were moved from one place to another within the same city/town (i.e. from the city centre to outskirts in the distant past and, more recently, from outskirts to airfields and airports far away from urban influence) and from one setting (roofs) to another (courtyards).”
Since relocations of that kind are likely to result in a cooling, the Parallel Observations Science Team (ISTI-POST) wants to have a look at how large this effect is. As far as we know there is no overview study yet, but papers on the homogenization of a station network often report on adjustments made for specific inhomogeneities.

We, that is mainly Jenny Linden of Mainz University, had a look at the scientific literature. Let’s start in China, where urbanization is strong and can be clearly seen in the raw data of many stations. China also has strong cooling relocations. The graph below from Wenhui Xu and colleagues (2013) shows the distribution of breaks that were detected (and corrected) with statistical homogenization and for which the station history indicated that they were caused by relocations. Both the minimum and the maximum temperature cool by a few tenths of a degree Celsius due to the relocations.


The distribution of the breaks that were due to relocations for the maximum temperature (left) and minimum temperature (right). The red line is a Gaussian distribution for comparison.


Going into more detail, Zhongwei Yan and colleagues (2010) studied two relocations in Beijing. They found that the relocations cooled the observations by −0.81°C and −0.69°C. Yuan-Jian Yang and colleagues (2013) found a cooling relocation of 0.7°C in the data of Hefei. Clearly, for individual urban stations relocations can have a large influence.

Fatemeh Rahimzadeh and Mojtaba Nassaji Zavareh (2014) homogenized the Iranian temperature observations and observed that relocations were frequent:
“The main non-climatic reasons for non-homogeneity of temperature series measured in Iran are relocation and changes in the measuring site, especially a move from town to higher elevations, due to urbanization and expansion of the city, construction of buildings beside the stations, and changes in vegetation.”
They show an example with 5 stations where one station (Khoramabad) has a relocation in 1980 and another station (Shahrekord) has two relocations, in 1980 and 2002. These relocations have a strong cooling effect of 1 to 3 degrees Celsius.


Temperature in 5 stations in Iran, including their adjusted series.


The relocations do not always have a strong effect. Margarita Syrakova and Milena Stefanova (2009) do not find any influence of the inhomogeneities on the annual mean temperature averaged over Bulgaria, even though “Most of the inhomogeneities were caused by station relocations… As there were no changes of the type of thermometers, shelters and the calculation of the daily mean temperatures, the main reasons of inhomogeneities could be station relocations, changes of the environment or changes of the station type (class).”

In Finland, Norway, Sweden and the UK, relocations produced a cooling bias of −0.11°C and appear to be the most common cause of inhomogeneities (Tuomenvirta, 2001). The table below summarises the breaks that were found and, where known from the station histories, their causes. Tuomenvirta writes:
“[Station histories suggest] that during the 1930s, 1940s and 1950s, there has been a tendency to move stations from closed areas in growing towns to more open sites, for example, to airports. This can be seen as a counter-action to increasing urbanization.”


Table with the average bias of the inhomogeneities found in Finland, Sweden, Norway and the UK in winter (DJF), spring (MAM), summer (JJA) and autumn (SON) and in the yearly average. Changes in the surroundings, such as urbanization or micro-siting changes, made the temperatures higher. This was counteracted by more frequent cooling biases from changes in the thermometers and the screens used to protect them, from relocations, and from changes in the formula used to compute the daily mean temperature.


In conclusion, relocations are a frequent type of inhomogeneity and they produce a cooling bias. For urban stations the cooling can be very large. Averaged over a region the values are smaller, but because relocations are so common they most likely have a clear influence on the global warming seen in raw temperature observations.

Future research

One problem with studying relocations is that they are frequently accompanied by other changes. Thus you can study them in two ways: study only relocations where you know that no other changes were made or study all historical relocations whether there was another change or not.

The first set-up allows us to characterize the relocations directly, to understand the physical consequences of moving a station, for example, from the center of a city or village to the airport. In this way the differences are not affected by other changes specific to a network, so the results can easily be compared between regions. The problem is that only a part of the available parallel measurements satisfies these strict conditions.

Conversely, for the second design (taking all historical relocations, even when they coincide with other changes), the characterization of the bias will be limited to the datasets studied and we will need a large sample to say something about the global climate record. On the other hand, we can analyze more data this way.

There are also two possible sources of information. The above studies relied on statistical homogenization comparing a candidate station to its neighbors. All you need to know for this is which inhomogeneities belong to a relocation. A more direct way to study these relocations is by using parallel measurements at both locations. This is especially helpful to study changes in the variability around the mean and in weather extremes. That is where the Parallel Observation Science Team (ISTI-POST) comes into play.
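
To make the first information source concrete, here is a minimal Python sketch (with synthetic data) of how the size of a documented break, such as a relocation, could be estimated from the difference between a candidate station and the average of its neighbours. The function name and the simple before/after comparison are illustrative assumptions, not the algorithms used in the studies cited above.

```python
import numpy as np

def break_size_estimate(candidate, neighbors, break_index):
    """Estimate the size of a known break (e.g. a documented relocation)
    from the difference between a candidate station and the mean of its
    neighbors. A minimal sketch; operational homogenization methods are
    considerably more sophisticated."""
    reference = np.nanmean(neighbors, axis=0)   # average of the neighboring series
    diff = candidate - reference                # removes the shared climate signal
    before = np.nanmean(diff[:break_index])
    after = np.nanmean(diff[break_index:])
    return after - before                       # negative value = cooling break

# Example with synthetic annual means: a station relocated after year 30
rng = np.random.default_rng(0)
climate = np.cumsum(rng.normal(0.02, 0.1, 60))            # shared regional signal
neighbors = climate + rng.normal(0, 0.1, (5, 60))
candidate = climate + rng.normal(0, 0.1, 60)
candidate[30:] -= 0.7                                      # relocation-induced cooling
print(round(break_size_estimate(candidate, neighbors, 30), 2))  # about -0.7
```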

It is also possible to study specific relocations. The relocation of stations to airports was an important transition, especially around the 1940s. The associated temperature change is likely large, and this transition was frequent and well documented. One could also focus on urban stations or on village stations, rather than studying all stations.

One could make a classification of the micro- and macro-siting before and after the relocation. For micro-siting the Michel Leroy (2010) classification could be interesting; as far as I know this classification has not been validated yet: we do not know how large the biases of the five categories are, nor how well defined these biases are. Ian Stewart and Tim Oke (2012) have made a beautiful classification of the local climate zones of (urban) areas, which can also be used to classify the surroundings of stations.


Example of various combinations of building and land use of the local climate zones of Stewart and Oke.


There are many options and what we choose will also depend on what kind of data we can get. Currently our preference is to study parallel data with identical instrumentation at two locations, to understand the influence of the relocation itself as well as possible. In addition to studying the influence on the mean, we are gathering data on the break sizes found by statistical homogenization for breaks due to relocations. The station histories (metadata) are crucial here in order to clearly assign breakpoints to relocations. It will also be interesting to compare those two information sources where possible. This may become one study or two, depending on how involved the analysis becomes.

This POST study is coordinated by Alba Guilabert; Jenny Linden and Manuel Dienst are very active. Please contact one of us if you would like to be involved in a global study like this and tell us what kind of data you have. Also, if anyone knows of more studies reporting the size of inhomogeneities due to relocations, please let us know. I have certainly seen more such tables at conferences, but they may not have been published.



Related reading

Parallel Observations Science Team (POST) of the International Surface Temperature Initiative (ISTI).

The transition to automatic weather stations. We’d better study it now.

Changes in screen design leading to temperature trend biases.

Early global warming.

Why raw temperatures show too little global warming.

References

Brunet M., O. Saladie, P. Jones, J. Sigró, E. Aguilar, et al., 2006: The development of a new daily adjusted temperature dataset for Spain (SDATS) (1850–2003). International Journal of Climatology, 26, pp. 1777–1802, doi: 10.1002/joc.1338.
See also: a case-study/guidance on the development of long-term daily adjusted temperature datasets.

Leroy, M., 2010: Siting classifications for surface observing stations on land. In WMO Guide to Meteorological Instruments and Methods of Observation. "CIMO Guide", WMO-No. 8, Part I, Chapter 1, Annex 1B.

Rahimzadeh, F. and M.N. Zavareh, 2014: Effects of adjustment for non‐climatic discontinuities on determination of temperature trends and variability over Iran. International Journal of Climatology, 34, pp. 2079-2096, doi: 10.1002/joc.3823.

Stewart, I.D. and T.R. Oke, 2012: Local climate zones for urban temperature studies. Bulletin of the American Meteorological Society, 93, pp. 1879–1900, doi: 10.1175/BAMS-D-11-00019.1.
See also the World Urban Database.

Tuomenvirta, H., 2001: Homogeneity adjustments of temperature and precipitation series - Finnish and Nordic data. International Journal of Climatology, 21, pp. 495-506, doi: 10.1002/joc.616.

Xu, W., Q. Li, X.L. Wang, S. Yang, L. Cao, and Y. Feng, 2013: Homogenization of Chinese daily surface air temperatures and analysis of trends in the extreme temperature indices. Journal of Geophysical Research: Atmospheres, 118, doi: 10.1002/jgrd.50791.

Syrakova, M. and M. Stefanova, 2009: Homogenization of Bulgarian temperature series. International Journal of Climatology, 29, pp. 1835-1849, doi: 10.1002/joc.1829.

Yan, Z.W., Z. Li, and J.J. Xia, 2014: Homogenisation of climate series: The basis for assessing climate changes. Science China: Earth Sciences, 57, pp. 2891-2900, doi: 10.1007/s11430-014-4945-x.

* Photo at the top, "High Above Sydney" by Taro Taylor, used under an Attribution-NonCommercial 2.0 Generic (CC BY-NC 2.0) license.

Tuesday, August 11, 2015

History of temperature scales and their impact on the climate trends

Guest post by Peter Pavlásek of the Slovak Institute of Metrology. Metrology, not meteorology: metrologists work on making measurements more precise by developing highly accurate standards, and thus make experimental results better comparable.

Since the beginning of climate observations, temperature has been an important quantity to measure, as its values affect every aspect of human society. Therefore its precise and reliable determination has always been important. Of course the ability to measure temperature precisely depends strongly on the measuring sensor and method. To determine how precisely a sensor measures temperature, it needs to be calibrated against a temperature standard. As science progressed, new temperature scales were introduced and the previous temperature standards naturally changed. In the following sections we will have a look at the importance of temperature scales throughout history and their impact on the evaluation of historical climate data.

The first definition of a temperature standard was created in 1889. At the time thermometers were ubiquitous, and had been used for centuries; for example, they had been used to document the ocean and air temperatures now included in historical records. Metrological temperature standards are based on state transitions of matter (under defined conditions and matter composition) that generate a precise and highly reproducible temperature value, for example the melting of ice or the freezing of pure metals. Multiple standards can be used as the basis for a temperature scale by creating a set of defined temperature points along the scale. An early temperature scale was devised by the medical doctor Sebastiano Bartolo (1635-1676), who was the first to use melting snow and the boiling point of water to calibrate his mercury thermometers. In 1694 Carlo Renaldini, mathematician and engineer, suggested using the ice melting point and the boiling point of water and dividing the interval between these two points into 12 degrees, applying marks on a glass tube containing mercury. Réaumur divided the scale into 80 degrees, while the modern division into roughly 100 degrees was adopted by Anders Celsius in 1742. Common to all the scales was the use of phase transitions as anchor points, or fixed points, to define intermediate temperature values.

It was not until 1878 that the first standardized mercury-in-glass thermometers were introduced, as an accompanying instrument for the metre prototype, to correct for thermal expansion of the length standard. These special thermometers were constructed to guarantee a reproducibility of measurement of a few thousandths of a degree. They were calibrated at the Bureau International des Poids et Mesures (BIPM), established after the signing of the Convention du Mètre in 1875. The first reference temperature scale was adopted by the 1st Conférence Générale des Poids et Mesures (CGPM) in 1889. It was based on constant-volume gas thermometry, and relied heavily on the work of Chappuis at the BIPM, who had used the technique to link the readings of the very best mercury-in-glass thermometers to absolute (i.e. thermodynamic) temperatures.

Meanwhile, the work of Hugh Longbourne Callendar and Ernest Howard Griffiths on the development of platinum resistance thermometers (PRTs) laid the foundations for the first practical scale. In 1913, after a proposal from the main institutes of metrology, the 5th CGPM encouraged the creation of a thermodynamic International Temperature Scale (ITS) with associated practical realizations, thus merging the two concepts. The development was halted by World War I, but the discussions resumed in 1923, when platinum resistance thermometers were well developed and could be used to cover the range from −38 °C, the freezing point of mercury, to 444.5 °C, the boiling point of sulphur, using a quadratic interpolation formula that included the boiling point of water at 100 °C. In 1927 the 7th CGPM adopted the International Temperature Scale of 1927, which even extended the use of PRTs down to −183 °C. The main intention was to overcome the practical difficulties of the direct realization of thermodynamic temperatures by gas thermometry, and the scale was a universally acceptable replacement for the various existing national temperature scales.

In 1937 the CIPM established the Consultative Committee on Thermometry (CCT). Since then the CCT has taken all initiatives in matters of temperature definition and thermometry, including, in recent years, issues concerning environment, climate and meteorology. It was in fact the CCT that in 2010, shortly after the BIPM-WMO workshop on “Measurement Challenges for Global Observing Systems for Climate Change Monitoring”, submitted the recommendation CIPM (T3 2010), encouraging National Metrology Institutes to cooperate with the meteorology and climate communities to establish traceability for those thermal measurements that are important for detecting climate trends.

The first revision of the 1927 ITS took place in 1948, when extrapolation below the oxygen point to −190 °C was removed from the standard, since it had been found to be an unreliable procedure. The IPTS-48 (with “P” now standing for “practical”) extended down only to −182.97 °C. It was also decided to drop the name "degree Centigrade" for the unit and replace it by degree Celsius. In 1954 the 10th CGPM finally adopted a proposal that Kelvin had made a century before, namely that the unit of thermodynamic temperature be defined in terms of the interval between absolute zero and a single fixed point. The fixed point chosen was the triple point of water, which was assigned the thermodynamic temperature of 273.16 °K (equivalently 0.01 °C) and replaced the melting point of ice. Work continued on helium vapour pressure scales, and in 1958 and 1962 the efforts were concentrated on low temperatures below 0.9 K. In 1964 the CCT defined the reference function “W” for interpolating the PRT readings between all the new low-temperature fixed points, from 12 K to 273.16 K, and in 1966 further work on radiometry, noise, acoustic and magnetic thermometry led the CCT to prepare for a new scale definition.

In 1968 the second revision of the ITS was delivered: both thermodynamic and practical units were defined to be identical and equal to 1/273.16 of the thermodynamic temperature of the triple point of water. The unit itself was renamed "the kelvin" in place of "degree Kelvin" and designated "K" in place of "°K". In 1976 further considerations and results at low temperatures between 0.5 K and 30 K were included in the Provisional Temperature Scale, EPT-76. Meanwhile several National Metrology Institutes continued the work to better define the fixed-point values and the PRT characteristics. The International Temperature Scale of 1990 (ITS-90) came into effect on 1 January 1990, replacing the IPTS-68 and the EPT-76, and is still used today to guarantee the traceability of temperature measurements. Among the main features of the ITS-90, with respect to the 1968 scale, are the use of the triple point of water (273.16 K), rather than the freezing point of water (273.15 K), as a defining point, closer agreement with thermodynamic temperatures, and improved continuity and precision.

It follows that any temperature measurement made before 1927 is impossible to trace to an international standard, except for a few nations with a well-defined national definition. Later on, during the evolution of both the temperature unit and the associated scales, changes have been introduced to improve the realization and measurement accuracy.

With each redefinition of the practical temperature scale since the original scale of 1927, the BIPM published official transformation tables to enable conversion between the old and the revised temperature scale (BIPM, 1990). Because of the way the temperature scales have been defined, they really represent an overlap of multiple temperature ranges, each of which may have its own interpolating instrument, fixed points or mathematical equations describing the instrument response. A consequence of this complexity is that no simple mathematical relation can be constructed to convert temperatures acquired according to older scales into the modern ITS-90 scale.
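
To illustrate what such a table-based conversion looks like in practice, here is a minimal Python sketch that interpolates in a scale-difference table. The function name, the linear interpolation and, above all, the table values are assumptions for illustration only; the actual corrections must be taken from the official BIPM transformation tables for the scale and range in question.

```python
import numpy as np

# Hypothetical excerpt of a scale-difference table: temperature on the old
# practical scale (in °C) versus the difference (old minus ITS-90, in mK).
# The numbers below are made-up placeholders, NOT the official BIPM values.
t_old_scale = np.array([-50.0, 0.0, 50.0, 100.0])
delta_mK = np.array([-10.0, 0.0, 8.0, 26.0])

def to_its90(reading):
    """Convert a reading on an older practical scale to ITS-90 by linear
    interpolation in the transformation table for that scale."""
    correction = np.interp(reading, t_old_scale, delta_mK) / 1000.0  # mK -> °C
    return reading - correction

print(to_its90(30.0))  # reading converted to ITS-90 (with the placeholder table)
```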

As an example of the effect of temperature scale alterations, let us examine the correction of the daily mean temperature record at Brera, Milano in Italy from 1927 to 2010, shown in Figure 1. The figure illustrates the consequences of the temperature scale changes and the correction that needs to be applied to convert the historical data to the current ITS-90. The introduction of new temperature scales in 1968 and 1990 is clearly visible as discontinuities in the magnitude of the correction, with significantly larger corrections for data prior to 1968. As can be seen in Figure 1, the correction cycles with the seasonal changes in temperature: the higher summer temperatures require a larger correction.


Figure 1. Example corrections for the weather station at Brera, Milano in Italy. The values are computed for the daily average temperature. The magnitude of the correction cycles with the annual variations in temperature: the inset highlights how the warm summer temperatures are corrected much more (downward) than the cool winter temperatures.

For the same reason the corrections will differ between locations. The daily average temperature at the Milano station typically approaches 30 °C on the warmest summer days, while it may fall slightly below freezing in winter. In a different location with larger differences between typical summer and winter temperatures, the corrections might oscillate around 0 °C, and a more stable climate might see smaller corrections overall: at Utsira, a small island off the south-western coast of Norway, the summertime corrections are typically 50% below the values for Brera. Figure 2 shows the magnitude of the corrections for specific historical temperatures.


Figure 2. The corrections in °C that need to be applied to historical temperatures in the range from −50 °C to +50 °C, depending on the period in which the historical data were measured.

The uncertainty in the temperature readings from any individual thermometer is significantly larger than the corrections presented here. Furthermore, even for the limited timespan since 1927 a typical meteorological weather station has seen many changes which may affect the temperature readings. Examples include instrument replacement; instrument relocations; screens may be rebuilt, redesigned or moved; the schedule for readings may change; the environment close to the station may become more densely populated and therefore enhance the urban heat island effect; and manually recorded temperatures may suffer from unconscious observer bias (Camuffo, 2002; Bergstrøm and Moberg, 2002; Kennedy, 2013). Despite the diligent quality control employed by meteorologists during the reconstruction of long records, every such correction also has an uncertainty associated with it. Thus, for an individual instrument, and perhaps even an individual station, the scale correction is insignificant.

On the other hand, more care is needed for aggregate data. The scale correction represents a bias which is equal for all instruments, regardless of location and use, and simply averaging data from multiple sources will not eliminate it. The scale correction is smaller than, but of the same order of magnitude as, the uncertainty components claimed for monthly average global temperatures in the HadCRUT4 dataset (Morice et al., 2012). To evaluate the actual value of the correction for the global averages would require a recalculation of all the individual temperature records. However, the correction does not alter the warming trend: if anything it would exacerbate it slightly. Time averaging or averaging multiple instruments has been claimed to lower the temperature uncertainty to around 0.03 °C (for example in Kennedy (2013) for aggregate records of sea surface temperature). In our opinion, such claims for the uncertainty need to consider the scale correction to be credible.
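
The usual error model makes this point explicit (my notation, not taken from any of the cited papers): if each of N instruments has an independent random error with standard deviation σ and all share a common scale bias b, the uncertainty of their average is

u(\bar{T}) = \sqrt{\sigma^{2}/N + b^{2}}

Averaging shrinks the random term, but the shared bias b, like the scale correction, is untouched, which is why it has to be accounted for separately.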

Scale correction for temperatures earlier than 1927 is harder to assess. Without an internationally accepted and widespread calibration reference it is impossible to construct a simple correction algorithm, but there is reason to suspect that the corrections become more important for older parts of the instrumental record. Quantifying the correction would entail close scrutiny of the old calibration practices, and hinges on available contemporary descriptions. Conspicuous errors can be detected, such as the large discrepancy which Burnette et al. (2010) found in 1861 records from Fort Riley, Kansas. In that case the decision to correct the dubious values was corroborated by metadata describing a change of observer; however, this also illustrates the calibration pitfall when no widespread temperature standard was available. One would expect that many more instruments were slightly off, and the question is whether this introduced a bias or just random fluctuations which can be averaged away when producing regional averages.

Whether the relative importance of the scale correction increases further back in time remains an open question. Errors from other sources, such as the time schedule of the measurements, also become more important and harder to account for; an example is the transformation from old Italian time to modern western European time described by Camuffo (2002).

This brief overview of the history of temperature scales has shown what impact these changes have on historical temperature data. As discussed earlier, the corrections originating from the temperature scale changes are small compared with other factors. But even if the corrections are small, that does not mean they should be ignored, as their magnitude is far from negligible. More details on this problem, and a conversion equation that enables any historical temperature data from 1927 up to 1989 to be converted to the current ITS-90, can be found in the publication of Pavlasek et al. (2015).



Related reading

Why raw temperatures show too little global warming

Just the facts, homogenization adjustments reduce global warming

References

Camuffo, Dario, 2002: Errors in early temperature series arising from changes in style of measuring time, sampling schedule and number of observations. Climatic Change, 53, pp. 331-352.

Bergstrøm, H. and A. Moberg, 2002: Daily air temperature and pressure series for Uppsala (1722-1998). Climatic Change, 53, pp. 213-252.

Kennedy, John J., 2013: A review of uncertainty in in situ measurements and data sets of sea surface temperature. Reviews of Geophysics, 52, pp. 1-32.

Morice, C.P., et al., 2012: Quantifying uncertainties in global and regional temperature change using an ensemble of observational estimates: The HadCRUT4 data set. Journal of Geophysical Research, 117, pp. 1-22.

Burnette, Dorian J., David W. Stahle, and Cary J. Mock, 2010: Daily-Mean Temperature Reconstructed for Kansas from Early Instrumental and Modern Observations. Journal of Climate, 23, pp. 1308-1333.

Pavlasek P., A. Merlone, C. Musacchio, A.A.F. Olsen, R.A. Bergerud, and L. Knazovicka, 2015: Effect of changes in temperature scales on historical temperature data. International Journal of Climatology, doi: 10.1002/joc.4404.

Tuesday, June 9, 2015

Comparing the United States COOP stations with the US Climate Reference Network

Last week the mitigation sceptics apparently expected climate data to be highly reliable and were complaining that an update led to small changes. Other weeks they expect climate data to be largely wrong, for example due to non-ideal micro-siting or urbanization. These concerns can be ruled out for the climate-quality US Climate Reference Network (USCRN). This is a guest post by Jared Rennie* introducing a recent study comparing USCRN stations with nearby stations of the historical network, to study the differences in the temperature and precipitation measurements.


Figure 1. These pictures show some of instruments from the observing systems in the study. The exterior of a COOP cotton region shelter housing a liquid-in-glass thermometer is pictured in the foreground of the top left panel, and a COOP standard 8-inch precipitation gauge is pictured in the top right. Three USCRN Met One fan-aspirated shields with platinum resistance thermometers are pictured in the middle. And, a USCRN well-shielded Geonor weighing precipitation gauge is pictured at the bottom.
In 2000 the United States started building a measurement network designed to monitor climate change, the so-called United States Climate Reference Network (USCRN). These automatic stations have been installed in excellent locations and are expected not to show influences of changes in their direct surroundings for decades to come. To avoid loss of data, the most important variables are measured by three high-quality instruments. A new paper by Leeper, Rennie, and Palecki now compares the measurements of twelve station pairs of this reference network with nearby stations of the historical US network. They find that the reference network records slightly cooler temperatures and less precipitation, and that there are almost no differences in temperature variability and trend.

COOP and USCRN

The detection and attribution of climate signals often rely upon long, historically rich records. In the United States, the Cooperative Observer Program (COOP) has collected many decades of observations for thousands of stations, going as far back as the late 1800s. While the COOP network has become the backbone of the U.S. climatology dataset, non-climatic factors have introduced systematic biases into the data, which require homogenization corrections before the data can be included in climatic assessments. Such factors include modernization of equipment, time of observation differences, changes in observing practices, and station moves over time. A subset of COOP stations with long records is known as the US Historical Climatology Network (USHCN), which is the default dataset for reporting on temperature changes in the USA.

Recognizing these challenges, the United States Climate Reference Network (USCRN) was initiated in 2000. Fifteen years after its inception, 132 stations have been installed across the United States with sub-hourly observations of numerous weather elements using state-of-the-art instrumentation calibrated to traceable standards. For high data quality, the temperature and precipitation sensors are well shielded, and for continuity the stations have three independent sensors, so no data loss is incurred. Because of these advances, no homogenization correction is necessary.

Comparison

The purpose of this study is to compare observations of temperature and precipitation from closely spaced members of the USCRN and COOP networks. While the paired stations are near each other, they are not adjacent. Determining the variations in data between the networks allows scientists to develop an improved understanding of the quality of weather and climate data, particularly over time as the periods of overlap between the two networks lengthen.

To ensure observational differences are the result of network discrepancies, comparisons were only evaluated for station pairs located within 500 meters. The twelve station pairs chosen were reasonably dispersed across the lower 48 states of the US. Images of the instruments used in both networks are provided in Figure 1.

The USCRN stations all have the same instrumentation: well-shielded rain gauges and mechanically ventilated temperature sensors. The COOP stations use two types of thermometers: modern automatic electrical sensors known as maximum-minimum temperature sensors (MMTS) and old-fashioned normal thermometers, which now have to be called liquid-in-glass (LiG) thermometers. Both COOP sensor types are naturally ventilated.

An important measurement problem for rain gauges is undercatchment: due to turbulence around the instrument, not all droplets land in the mouth. This is especially important in high winds and for snow, and can be reduced by wind shields. The COOP rain gauges are unshielded, however, and have been known to underestimate precipitation in windy conditions. COOP gauges also include a funnel, which can be removed before snowfall events. The funnel reduces evaporation losses on hot days, but can also get clogged by snow. For the comparison, hourly temperature data from USCRN were averaged into 24-hour periods to match the daily COOP measurements at the designated observation times, which vary by station. Precipitation data were aggregated into precipitation events and matched with the respective COOP events.
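
As a concrete illustration of that matching step, the following Python sketch averages hourly data into 24-hour blocks aligned with a COOP observation hour. The function name, the pandas-based layout and the fixed 17:00 observation hour are assumptions for illustration; the processing in the paper also matches daily maxima and minima and screens for missing hours.

```python
import numpy as np
import pandas as pd

def coop_matched_daily_means(hourly, obs_hour=17):
    """Average hourly USCRN temperatures over 24-hour blocks that start at the
    local COOP observation hour, so that the daily means of both networks cover
    the same interval. `hourly` is a pandas Series indexed by local time."""
    shifted = hourly.copy()
    shifted.index = shifted.index - pd.Timedelta(hours=obs_hour)  # move the day boundary
    daily = shifted.resample("D").mean()
    daily.index = daily.index + pd.Timedelta(hours=obs_hour)      # label blocks by their real start time
    return daily

# Usage with synthetic hourly temperatures (diurnal cycle around 20 °C)
idx = pd.date_range("2014-06-01", periods=24 * 10, freq="h")
temps = pd.Series(20 + 5 * np.sin(2 * np.pi * idx.hour / 24), index=idx)
print(coop_matched_daily_means(temps, obs_hour=17).head())
```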

Observed differences and their reasons

Overall, COOP sensors in naturally ventilated shields reported warmer daily maximum temperatures (+0.48°C) and cooler daily minimum temperatures (−0.36°C) than USCRN sensors, which have better solar shielding and fans to ventilate the instrument. The magnitude of the temperature differences was on average larger for stations operating LiG systems than for those with the MMTS system. Part of the reduction in network biases with the MMTS system is likely due to the smaller-sized shielding, which requires less surface wind speed to be adequately ventilated.

While overall mean differences were in line with side-by-side comparisons of ventilated and non-ventilated sensors, there was considerable variability in the differences from station to station (see Figure 2). While all COOP stations observed warmer maximum temperatures, not all saw cooler minimum temperatures. This may be explained by differing meteorological conditions (surface wind speed, cloudiness), local siting (heat sources and sinks), and sensor and human errors (poor calibration, varying observation time, reporting error). While all are important to consider, only the meteorological conditions were examined further, by categorizing temperature differences by wind speed. The range in network differences for maximum and minimum temperatures decreased with increasing wind speed, although more so for maximum temperature, as sensor shielding becomes better ventilated with increasing wind speed. Minimum temperatures are strongly driven by local radiative and siting characteristics. Under calm conditions one might expect radiative imbalances between naturally and mechanically aspirated shields or between the differing COOP sensors (LiG vs MMTS). That, along with local vegetation and elevation differences, may help to drive these minimum temperature differences.


Figure 2. USCRN minus COOP average minimum (blue) and maximum (red) temperature differences for collocated station pairs. COOP stations monitoring temperature with LiG technology are denoted with asterisks.

For precipitation, COOP stations reported slightly more precipitation overall (1.5%). As with temperature, this difference was not uniform across all station pairs. Comparing by season, COOP reported less precipitation than USCRN during winter months and more precipitation in the summer months. The drier wintertime COOP observations are likely due to the lack of gauge shielding, but may also be affected by the added complexity of observing solid precipitation. An example is removing the gauge funnel before a snowfall event and then melting the snow to calculate the liquid-equivalent snowfall.

Wetter COOP observations over the warmer months may be associated with seasonal changes in gauge biases. For instance, observation errors related to gauge evaporation and the wetting factor are more pronounced in warmer conditions. Because of its design, the USCRN rain gauge is more prone to wetting errors (some precipitation sticks to the gauge walls and is thus not counted). In addition, USCRN does not use an evaporative suppressant to limit gauge evaporation during the summer, which is not an issue for the funnel-capped COOP gauge. The combination of elevated USCRN biases, through a larger wetting factor and enhanced evaporation, could explain the wetter COOP observations. Another reason could be the spatial variability of convective activity. During summer months, daytime convection can trigger unorganized thundershowers on a scale small enough that rain falls at one station but not the other. For example, in Gaylord, Michigan, the COOP observer reported 20.1 mm more than the USCRN gauge 133 meters away. Rain radar estimates showed nearby convection over the COOP station, but not the USCRN station, so the COOP observation was valid.


Figure 3. Event (USCRN minus COOP) precipitation differences grouped by prevailing meteorological conditions during events observed at the USCRN station. (a) event mean temperature: warm (more than 5°C), near-freezing (between 0°C and 5°C), and freezing conditions (less than 0°C); (b) event mean surface wind speed: light (less than 1.5 m/s), moderate (between 1.5 m/s and 4.6 m/s), and strong (larger than 4.6 m/s); and (c) event precipitation rate: low (less than 1.5 mm/hr), moderate (between 1.5 mm/hr and 2.8 mm/hr), and intense (more than 2.8 mm/hr).

Investigating further, precipitation events were categorized by air temperature, wind speed, and precipitation intensity (Figure 3). Comparing by temperature, the results were consistent with the seasonal analysis, showing lower COOP values (higher USCRN) in freezing conditions and higher COOP values (lower USCRN) in near-freezing and warmer conditions. Stratifying by wind conditions is also consistent, indicating that the unshielded COOP gauges do not catch as much precipitation as they should, giving higher USCRN values. On the other hand, COOP reports much more precipitation in lighter wind conditions, due to the higher evaporation rate of the USCRN gauge. For precipitation intensity, USCRN observed less than COOP in all categories.
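
The stratification itself is straightforward; a possible Python sketch using the Figure 3 thresholds is shown below. The DataFrame column names and the function name are hypothetical, not those of the study.

```python
import pandas as pd

def stratify_event_differences(events):
    """Group USCRN-minus-COOP precipitation differences by the Figure 3
    categories. `events` is assumed to have columns 'temp' (event mean
    temperature, degC), 'wind' (event mean wind speed, m/s), 'rate'
    (precipitation rate, mm/hr) and 'diff' (USCRN minus COOP, mm)."""
    events = events.copy()
    events["temp_class"] = pd.cut(events["temp"], [-100, 0, 5, 100],
                                  labels=["freezing", "near-freezing", "warm"])
    events["wind_class"] = pd.cut(events["wind"], [0, 1.5, 4.6, 100],
                                  labels=["light", "moderate", "strong"])
    events["rate_class"] = pd.cut(events["rate"], [0, 1.5, 2.8, 1000],
                                  labels=["low", "moderate", "intense"])
    # Mean network difference per category, for each of the three groupings
    return {name: events.groupby(name, observed=True)["diff"].mean()
            for name in ["temp_class", "wind_class", "rate_class"]}
```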


Figure 4. National temperature anomalies for maximum (a) and minimum (b) temperature between homogenized COOP data from the United States Historical Climatology Network (USHCN) version 2.5 (red) and USCRN (blue).
Comparing the variability and trends between USCRN and homogenized COOP data from USHCN, we see that they are very similar for both maximum and minimum national temperatures (Figure 4).

Conclusions

This study compared two observing networks that will be used in future climate and weather studies. Using very different approaches in measurement technologies, shielding, and operational procedures, the two networks provided contrasting perspectives of daily maximum and minimum temperatures and precipitation.

Temperature differences between the stations in the local pairings were partially attributed to local factors including siting (station exposure), ground cover, and geographical aspects (not fully explored in this study). These additional factors are thought to accentuate or minimize the anticipated radiative imbalances between the naturally and mechanically aspirated systems, which may also have resulted in seasonal trends. Additional analysis with more station pairs may be useful in evaluating the relative contribution of each local factor noted.

For precipitation, network differences also varied due to the seasonality of the respective gauge biases. Stratifying by temperature, wind speed, and precipitation intensity revealed these biases in more detail. COOP gauges recorded more precipitation in warmer conditions with light winds, where local summertime convection and evaporation in USCRN gauges may be factors. On the other hand, COOP recorded less precipitation in colder, windier conditions, possibly due to observing error and the lack of shielding, respectively.

It should be noted that all observing systems have observational challenges and advantages. The COOP network has many decades of observations from thousands of stations, but it lacks consistency in instrumentation type and observation time in addition to instrumentation biases. USCRN is very consistent in time and by sensor type, but as a new network it has a much shorter station record with sparsely located stations. While observational differences between these two separate networks are to be expected, it may be possible to leverage the observational advantages of both networks. The use of USCRN as a reference network (consistency check) with COOP, along with more parallel measurements, may prove to be particularly useful in daily homogenization efforts in addition to an improved understanding of weather and climate over time.




* Jared Rennie currently works at the Cooperative Institute for Climate and Satellites – North Carolina (CICS-NC), housed within the National Oceanic and Atmospheric Administration’s (NOAA’s) National Centers for Environmental Information (NCEI), formerly known as the National Climatic Data Center (NCDC). He received his master's and bachelor's degrees in Meteorology from Plymouth State University in New Hampshire, USA, and currently works on maintaining and analyzing global land surface datasets, including the Global Historical Climatology Network (GHCN) and the International Surface Temperature Initiative’s (ISTI) Databank.

Further reading

Ronald D. Leeper, Jared Rennie, and Michael A. Palecki, 2015: Observational Perspectives from U.S. Climate Reference Network (USCRN) and Cooperative Observer Program (COOP) Network: Temperature and Precipitation Comparison. Journal of Atmospheric and Oceanic Technology, 32, pp. 703–721, doi: 10.1175/JTECH-D-14-00172.1.

The informative homepage of the U.S. Climate Reference Network gives a nice overview.

A database with parallel climate measurements, which we are building to study the influence of instrumental changes on the probability distributions (extreme weather and weather variability changes).

The post, A database with daily climate data for more reliable studies of changes in extreme weather, provides a bit more background on this project.

Homogenization of monthly and annual data from surface stations. A short description of the causes of inhomogeneities in climate data (non-climatic variability) and how to remove it using the relative homogenization approach.

Previously I already had a look at trend differences between USCRN and USHCN: Is the US historical network temperature trend too strong?

Thursday, January 29, 2015

Temperature bias from the village heat island

The most direct way to study how alterations in the way we measure temperature affect the recorded temperatures is to make simultaneous measurements the old way and the current way. New technological developments have now made it much easier to study the influence of location. Modern batteries make it possible to simply install an automatically recording weather station anywhere and obtain several years of data. It used to be necessary, in most cases, to have electricity access nearby, permission to use it, and to dig cables.

Jenny Linden used this technology to study the influence of the siting of weather stations on the measured temperature for two villages, one in northern Sweden and one in western Germany. In both cases the center of the village was about half a degree Celsius (one degree Fahrenheit) warmer than the current location of the weather station on grassland just outside the village. This is small compared to the urban heat island found in large cities, but it is comparable in size to the warming we have seen since 1900 and thus important for the understanding of global warming. In urban areas, the heat island can amount to multiple degrees and is much studied because of the additional heat stress it produces. This new study may be the first for villages.

Her presentation (together with Jan Esper and Sue Grimmond) at EMS2014 (abstract) was my biggest discovery in the field of data quality in 2014. Two locations are naturally not enough for strong conclusions, but I hope that this study will be the start of many more, now that the technology has been shown to work and the effects have been shown to be significant for climate change studies.

The experiments


A small map of Haparanda, Sweden, with all measurement locations indicated by a pin. Mentioned in the text are Center and SMHI current met-station.
The Swedish case is the easiest to interpret. The village [[Haparanda]], with 5 thousand inhabitants, lies in the north of Sweden, on the border with Finland. It has a beautifully long record; measurements started in 1859. Observations started on a north wall in the center of the village and were continued there until 1942. Currently the station is on the edge of the village. It is thought that the center has not changed much since 1942. Thus the difference can be interpreted as the cooling bias that the relocation from the center to the current location introduced into the historical observations. The modern measurement was not made at the original north wall, but free standing, so only the influence of the location can be studied.

As so often, the minimum temperature at night is affected most. It has a difference of 0.7°C between the center and the current location. The maximum temperature only shows a difference of 0.1°C. The average temperature has a difference of 0.4°C.

The village [[Geisenheim]] is close to Mainz, Germany, and was the first testing location for the equipment. It has 11.5 thousand inhabitants and lies on the right bank of the Rhine. This station also has a quite long history: it started in 1884 in a park and stayed there until 1915. Now it is well sited outside the village in the meadows. A lot has changed in Geisenheim between 1915 and now, so we cannot make any historical interpretation of the changes, but it is interesting to compare the measurements in the center with the current ones, both to compare with Haparanda and to get an idea of how large the effect could theoretically be.



A small map of Geisenheim, Germany. Compared in the text are Center and DWD current met-station. The station started in Park.
The difference in the minimum temperature between the center and the current location is 0.8°C. In this case also the maximum temperature has a clear difference of 0.4°C. The average temperature has a difference of 0.6°C.

The next village on the list is [[Cazorla]] in Spain. I hope the list will become much longer. If you have any good suggestions, please comment below or write to Jenny Linden. Locations where the center is still mostly as it used to be are of particular interest, and as many different climate regions as possible should be sampled.

The temperature record

Naturally, not all stations started in villages and even fewer started exactly in the center. But this is still a quite common scenario, especially for long series. In the 19th century thermometers were expensive scientific instruments. The people making the measurements were often among the few well-educated people in the village or town: priests, apothecaries, teachers and so on.

Erik Engström, climate communicator of the Swedish weather service (SMHI) wrote:
In Sweden we have many stations that have moved from a central location out to a location outside the village. ... We have several stations located in small towns and villages that have been relocated from the centre to a more rural location, such as Haparanda. In many cases the station was also relocated from the city centre to the airport outside the city. But we also have many stations that have been rural and are still rural today.
Improvements in siting may be even more interesting for urban stations. Stations in cities have often been relocated (multiple times) to better sited locations, if only because meteorological offices cannot afford the rents in the center. Because the Urban Heat Island is stronger, this could lead to even larger cooling biases. What counts is not how much the city is warming due to its growth, but the siting of the first station location versus its current one.

More specifically, it would be interesting to study how much improvements in siting have contributed to a possible temperature trend bias in recent decades. The move to the current locations took place in 2010 in Haparanda and in 2006 in Geisenheim. It should be noted that the cooling bias did not occur in one jump: reasonably sited measurements are likely to have been made since 1977 in Haparanda and since 1946 in Geisenheim (for Geisenheim the information is not very reliable).

It would make sense to me that the more people started thinking about climate change, the more the weather services realized that even small biases due to imperfect siting are important and should be avoided. Also modern technology, automatic weather stations, batteries and solar panels, have made it easier to install stations in remote locations.

An exception here is likely the United States of America. The Surface Stations project has shown many badly sited stations in the USA and the transition to automatic weather stations is thought to have contributed to this. Explanations could be that America started early with automation, the cables were short and the technician had only one day to install the instruments.

If villages also have a small urban effect, it is possible that this effect gradually increases as the village grows. Such a gradual increase can also be removed by statistical homogenization, by comparison with neighboring stations. However, if too many stations have such a gradual inhomogeneity, the homogenization methods will no longer be able to remove this non-climatic increase (well). This finding thus makes it more important to ensure that sufficiently many truly rural stations are used for comparison.

On the other hand, because a village is smaller, one may expect the "gradual" increases to actually be somewhat jumpy. Rather than being due to many changes in a large area around the station, in the case of a village the changes may be expected to occur more often near the station and produce a small jump. Jumps are easier to remove by statistical homogenization than smooth gradual inhomogeneities, because the probability of something happening simultaneously at a neighboring station is smaller.



A parallel measurement in Basel, Switzerland. A historical Wild screen, which is open at the bottom and to the north and has single louvres to reduce radiation errors, measures in parallel with a Stevenson screen (cotton region shelter), which is closed on all sides and has double louvres.

Parallel measurements

These measurements at multiple locations are an example of parallel measurements. The standard case is that an old instrument is compared to a new one while measuring side by side. This helps us to understand the reasons for biases in the climate record.

From parallel measurements we also know, for example, that the way temperature was measured before the introduction of Stevenson screens caused a bias of up to a few tenths of a degree in the old measurements. Differences of 0.5°C have been found for two locations in Spain and two tropical countries, while the differences in north-western Europe are typically small.

To be able to study these historical changes and their influence on the global datasets, we have started an initiative to build a database with parallel measurements under the umbrella of the International Surface Temperature Initiative (ISTI), the Parallel Observations Science Team (POST). We have just started and are looking for members and parallel datasets. Please contact us if you are interested.

[UPDATE. The above study is now published as: Lindén, J., C.S.B. Grimmond, and J. Esper, 2015: Urban warming in villages. Advances in Science and Research, 12, pp. 157-162, doi: 10.5194/asr-12-157-2015.]


Wednesday, January 29, 2014

Testimony Judith Curry on Arctic temperature seems to be a misquotation

Looks like the IPCC is not even wrong.

There has been a heated debate between Judith Curry (Climate Etc.) and Tamino (Open Mind) about the temperature in the Arctic. This debate was initiated by Curry's testimony before Congress two weeks ago.

In her testimony Judith Curry quotes:
“Arctic temperature anomalies in the 1930s were apparently as large as those in the 1990s and 2000s. There is still considerable discussion of the ultimate causes of the warm temperature anomalies that occurred in the Arctic in the 1920s and 1930s.” (AR5 Chapter 10)
Tamino at Open Mind investigated this claim and found that recent temperatures were clearly higher than at the beginning of the 20th century. In his post (One of) the Problem(s) with Judith Curry, Tamino concludes that the last IPCC report and Curry's testimony are wrong about the Arctic temperature increase:
"I think the IPCC goofed on this one — big-time — and if so, then Curry’s essential argument about Arctic sea ice is out the window. I’ve studied the data. Not only does it fail to support the claim about 1930s Arctic temperatures, it actually contradicts that claim. By a wide margin. It ain’t even close."

That sounded convincing, but I am not so sure about the IPCC any more.

Tamino furthermore wonders where Curry got her information from. I guess he found it funny that Judith Curry would quote the IPCC as a reliable source without checking the information. In answer to a question of mine, Judith Curry replied on Twitter that she had indeed got her information from the last (draft) IPCC report:


Later she also wrote a reply on her blog, Climate Etc., starting with the above quote from the IPCC report.

Then the story took a surprising turn, when Steve Bloom, hidden in a large number of comments at AndThenTheresPhysics, noted that the quote is missing important context. The full paragraph in the IPCC report reads (my emphasis; the quote from Curry's testimony in red):
A question as recently as six years ago was whether the recent Arctic warming and sea ice loss was unique in the instrumental record and whether the observed trend would continue (Serreze et al., 2007). Arctic temperature anomalies in the 1930s were apparently as large as those in the 1990s and 2000s. There is still considerable discussion of the ultimate causes of the warm temperature anomalies that occurred in the Arctic in the 1920s and 1930s (Ahlmann, 1948; Veryard, 1963; Hegerl et al., 2007a; Hegerl et al., 2007b). The early 20th century warm period, while reflected in the hemispheric average air temperature record (Brohan et al., 2006), did not appear consistently in the mid-latitudes nor on the Pacific side of the Arctic (Johannessen et al., 2004; Wood and Overland, 2010). Polyakov et al. (2003) argued that the Arctic air temperature records reflected a natural cycle of about 50–80 years. However, many authors (Bengtsson et al., 2004; Grant et al., 2009; Wood and Overland, 2010; Brönnimann et al., 2012) instead link the 1930s temperatures to internal variability in the North Atlantic atmospheric and ocean circulation as a single episode that was sustained by ocean and sea ice processes in the Arctic and north Atlantic. The Arctic wide temperature increases in the last decade contrast with the episodic regional increases in the early 20th century, suggesting that it is unlikely that recent increases are due to the same primary climate process as the early 20th century. IPCC(2014, draft, page 10-43 to 10-44).

Steve Bloom dryly comments: "So it was a question in 2007." In other words, the IPCC was right, but Judith Curry selectively quoted from the report. That first sentence is very important; also, the age of the references could have revealed that this paragraph was not discussing the current state of the art. The data of the last six years make the difference between "with some goodwill in the same range of temperatures" and "clearly higher Arctic temperatures".

This is illustrated by one of the figures from Tamino's post, presenting the data:


This is the annual average temperature in the Arctic from 60 to 90 degrees North as computed by the Berkeley Earth Surface Temperature group. The smooth red line is computed using LOESS smoothing.

And the misquotation is not for lack of space in the testimony. In her blog post, Curry quotes many sections of the IPCC report at length, and also the entire paragraph as it is displayed here, just somehow without the first sentence printed here in bold, the one that provides the important context.


Related reading


The congressional Testimony by Curry: STATEMENT TO THE COMMITTEE ON ENVIRONMENT AND PUBLIC WORKS OF THE UNITED STATES SENATE Hearing on “Review of the President’s Climate Action Plan" 16 January 2014, Judith A. Curry.

(One of) the Problem(s) with Judith Curry by Tamino at Open Mind.

The reply by Curry about Tamino's post on her blog, Climate Etc.

The answer to that by Tamino suggests that Curry's reply is not that convincing.

Also Robert Way contributed to the discussion at Skeptical Science: "A Historical Perspective on Arctic Warming: Part One". Robert Way made the rounds in the blogosphere with the paper Cowtan and Way (2013), in which they studied the recent strong warming in the Arctic and suggested that it may explain part of the recent slowdown in the warming of surface temperatures.

A previous post of mine on Curry's testimony, focusing on her suggestive but non-committal language: "Interesting what the interesting Judith Curry finds interesting".