Wednesday, 15 April 2015

Why raw temperatures show too little global warming

In the last few months I have written several posts on why raw temperature observations may show too little global warming. Let's put it all in perspective.

People who have followed the climate "debate" have probably heard of two potential reasons why raw data may show too much global warming: urbanization and the quality of the siting. These are the two non-climatic changes that mitigation sceptics promote, claiming they are responsible for a large part of the observed warming in the global mean temperature records.

If you only know of biases producing a trend that is artificially too strong, it may come as a surprise that the raw measurements actually have too small a trend and that removing non-climatic changes increases the trend. For example, in the Global Historical Climate Network (GHCNv3) of NOAA, the land temperature change since 1880 is increased by about 0.2°C by the homogenization method that removes non-climatic changes. See figure below.

(If you also consider the adjustments made to ocean temperatures, the net effect of the adjustments is that they make the global temperature increase smaller.)

The global mean temperature estimates from the Global Historical Climate Network (GHCNv3) of NOAA, USA. The red curve shows the global average temperature in the raw data. The blue curve is the global mean temperature after removing non-climatic changes. (Figure by Zeke Hausfather.)

The adjustments are not always that "large". The Berkeley Earth group makes much smaller adjustments. The global mean temperature of Berkeley Earth is shown below. However, as Zeke Hausfather notes in the comments below, even the curve for which the method did not explicitly detect breakpoints is partially homogenized, because the method penalises stations whose trend differs strongly from that of their neighbours. After removal of non-climatic changes, Berkeley Earth comes to a climatic trend similar to that seen in GHCNv3.

The global mean temperature estimates from the Berkeley Earth project (previously known as BEST), USA. The blue curve is computed without using their method to detect breakpoints, the red curve the temperature after adjusting for non-climatic changes. (Figure by Steven Mosher.)
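The core idea behind statistical homogenization can be sketched with a toy example. A candidate station is compared with a well-correlated neighbour; in the difference series the regional climate signal largely cancels, so a non-climatic jump (a relocation, a new screen) stands out and can be removed. The sketch below, with made-up data and a deliberately simple test statistic, is only an illustration of the principle, not the actual pairwise homogenization algorithm used by NOAA or Berkeley Earth.

```python
import numpy as np

def detect_breakpoint(candidate, neighbour):
    """Toy breakpoint detection on the difference series between a
    candidate station and a neighbour. Operational methods are far
    more sophisticated (multiple neighbours, multiple breaks)."""
    diff = candidate - neighbour          # climate signal largely cancels
    n = len(diff)
    best_t, best_score = None, 0.0
    for t in range(5, n - 5):             # leave a margin at both ends
        shift = diff[t:].mean() - diff[:t].mean()
        score = abs(shift) * np.sqrt(t * (n - t) / n)  # t-test-like weight
        if score > best_score:
            best_t, best_score = t, score
    size = diff[best_t:].mean() - diff[:best_t].mean()
    return best_t, size

# Synthetic example: 100 years of annual means sharing a warming trend,
# plus a station relocation in year 60 that cools the candidate by 0.5 C.
rng = np.random.default_rng(0)
years = np.arange(100)
climate = 0.01 * years                            # 1 C per century
neighbour = climate + rng.normal(0, 0.1, 100)
candidate = climate + rng.normal(0, 0.1, 100)
candidate[60:] -= 0.5                             # non-climatic break

t, size = detect_breakpoint(candidate, neighbour)
candidate_adj = candidate.copy()
candidate_adj[t:] -= size                         # remove the jump
```

In this constructed case the raw candidate trend is too small because of the cooling break, and removing the detected jump restores a trend close to the underlying warming, which is the sense in which homogenization can *increase* the trend of a record with cooling biases.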

Let's go over the reasons why the temperature trend may show too little warming.
Urbanization and siting
Urbanization warms the location of a station, but urban stations also tend to move away from the centre to better locations. What matters is how much too warm the station was at the beginning of the observations and how much too warm it is now. This effect has been studied a lot, and urban stations seem to have about the same trend as their surrounding (more) rural stations.
A recent study of two villages showed that the current location of the weather station is half a degree centigrade cooler than the centre of the village. Many stations started in villages (or cities): thermometers used to be expensive scientific instruments, they were operated by highly educated people, and they had to be read daily. Thus the siting of many stations may have improved over time, which would lead to a cooling bias.
When a city station moves to an airport, which happened a lot around WWII, this takes the station (largely) out of the urban heat island. Furthermore, cities are often located near the coast and in valleys. Airports may thus often be located at a higher altitude. Both reasons could lead to a considerable cooling for the fraction of stations that moved to airports.
Changes in thermometer screens
During the 20th century the Stevenson screen was established as the dominant thermometer screen. This screen protects the thermometer much better against radiation (solar and heat) than earlier designs. Deficiencies of the earlier measurement methods artificially warmed the temperatures of the 19th century.
Some claim that early Stevenson screens were painted with inferior paints. The sun consequently heated up the screen more, which in turn warmed the incoming air. The introduction of modern durable white paints may thus have produced a cooling bias.
Currently we are in a transition to Automatic Weather Stations. This can show large changes in either direction for the network they are introduced in. What the net global effect is, is not clear at this moment.
Irrigation
Irrigation on average decreases the 2m-temperature by about 1 degree centigrade. At the same time, irrigation has spread enormously during the last century. People preferentially live in irrigated areas and weather stations serve agriculture. Thus it is possible that weather stations are more likely to be erected in irrigated areas than elsewhere. In this case irrigation could lead to a spurious cooling trend. For suburban stations, an increase in the watering of gardens could also produce a spurious cooling trend.
It is understandable that in the past the focus was on urbanization as a non-climatic change that could make the warming in the climate records too strong. At the time the question was whether climate change was happening at all (detection). To make a strong case, science had to show that even the minimum climatic trend was too large to be due to chance.

Now that we know that the Earth is warming, we no longer need just a minimum estimate of the temperature trend, but the best estimate. For a realistic assessment of models and impacts we need the best estimate of the trend, not just the minimum possible trend. Thus we need to understand the reasons why raw records may show too little warming and quantify these effects.

Just because the mitigation sceptics are talking nonsense about the temperature record does not mean that there are no real issues with the data, nor that statistical homogenization can remove trend errors sufficiently well. This is a strange blind spot in climate science. As Neville Nicholls, one of the heroes of the homogenization community, writes:
When this work began 25 years or more ago, not even our scientist colleagues were very interested. At the first seminar I presented about our attempts to identify the biases in Australian weather data, one colleague told me I was wasting my time. He reckoned that the raw weather data were sufficiently accurate for any possible use people might make of them.
One wonders how this colleague knew this without studying it.

The reasons for a cooling bias have been studied much too little. At this time we cannot tell how important each reason is. Any of these reasons is potentially important enough to explain the 0.2°C per century trend bias found in GHCNv3, especially in the light of the large range of possible values, a range that we often cannot even estimate at the moment. In fact, all the above-mentioned reasons together could explain a much larger trend bias, which could dramatically change our assessment of the progress of global warming.

The fact is that we cannot quantify the various cooling biases at the moment and it is a travesty that we can't.

Other posts in this series

Irrigation and paint as reasons for a cooling bias

Temperature trend biases due to urbanization and siting quality changes

Changes in screen design leading to temperature trend biases

Temperature bias from the village heat island


Zeke said...

Saying that Berkeley's adjustments are smaller is somewhat misleading; some of the difference is accounted for by the fact that Berkeley's spatial fields are constructed in a manner that downweights the impact of locally-divergent trends, which itself is a form of homogenization not shown in the figure. If that step were excluded, the effect of pairwise homogenization would likely be larger and more comparable to NOAA's PHA (as is the case in the U.S.).

Victor Venema said...

Hi Zeke, thank you very much for that valuable comment. That was very important for the interpretation of the size of the adjustments in the BEST dataset. Such comments make blogging worthwhile.