And Anthony Watts writes: "From the “we told you so” department comes this paper out of China that quantifies many of the very problems with the US and global surface temperature record we have been discussing for years: the adjustments add more warming than the global warming signal itself."
Not bad! Those are huge implications for a paper about the homogenization of a single station, written by a first author who can only cite one previous study of his own, written in Chinese and published in a journal with a rather modest impact factor. I will come back to these two statements at the end.
The study
Let's have a look at what the study really tells us. The nice thing is that the paper is open access and the English is mostly okay (more than 95%, medium confidence), so everyone can read it.

The study investigates the influence of the urban heat island (UHI) effect on one measurement station, using two rural stations as reference. To study the influence of this gradual inhomogeneity (the UHI), they need to remove the effect of the break inhomogeneities, which according to the station history are due to relocations.
They do so with a special homogenization method that hardly reacts to gradual inhomogeneities, but can still detect strong breaks. Had they used a standard homogenization method from climatology, they would also have removed the UHI effect, which they wanted to study.
In the homogenization they compare their so-called candidate station (in the city) with the average signal of two rural stations (the reference). For this comparison they compute the difference of the candidate and the reference. In this difference time series, the common regional climate signal is removed. Another advantage of using a reference is that the difference time series is less noisy than the station data itself (all stations experienced about the same weather), so that you can see inhomogeneities better. The difference time series should be a constant value with some random noise; if you see jumps or gradual changes, these are assumed to be non-climatic (inhomogeneities) rather than climatic.
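As a rough illustration of this step, here is a minimal sketch with purely synthetic, hypothetical data (two rural stations and one urban candidate with an invented relocation break in 1985 and an invented gradual urban warming); it is not the paper's data or code:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1960, 2009)

# Shared regional climate signal plus station-specific weather noise.
regional = 0.01 * (years - years[0]) + rng.normal(0, 0.3, years.size)
rural_1 = regional + rng.normal(0, 0.1, years.size)
rural_2 = regional + rng.normal(0, 0.1, years.size)

# Urban candidate: same regional signal, plus a gradual urban warming of
# 0.02 °C per year and a -0.5 °C break (relocation) from 1985 onwards.
candidate = (regional + rng.normal(0, 0.1, years.size)
             + 0.02 * (years - years[0]) - 0.5 * (years >= 1985))

# Reference = average of the rural stations. In the difference series the
# common regional signal cancels, so breaks and trends stand out more clearly.
reference = 0.5 * (rural_1 + rural_2)
difference = candidate - reference
```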
This part is standard, except that the number of time series used to compute the reference is too small. The reference stations will also contain inhomogeneities; by averaging over many stations you can reduce their influence. For this study the small number was likely not a problem, because the rural reference time series did not show clear inhomogeneities themselves.
The non-standard part is that, to detect a break in the difference time series for a certain year, they apply a statistical test (a t-test) to the difference between the mean of the three years before that year and the mean of the three years after it. Three years is very short, the uncertainty in the means is thus quite large, and the test is consequently not very sensitive. The advantage of this method for this study is that the gradual inhomogeneity due to urbanization hardly changes in three years and is thus nearly undetectable. Their detection method can therefore only find strong break inhomogeneities and cannot detect gradual inhomogeneities due to urbanization.
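Continuing the synthetic sketch above, the detection step could look roughly like this, assuming a two-sample t-test on the three annual values before and after each candidate year (the exact settings used in the paper may differ):

```python
from scipy import stats

def detect_breaks(diff, years, window=3, alpha=0.05):
    """Flag years where the mean of `window` values before differs
    significantly from the mean of `window` values after (t-test)."""
    hits = []
    for i in range(window, len(diff) - window):
        t, p = stats.ttest_ind(diff[i - window:i], diff[i:i + window])
        if p < alpha:
            hits.append((years[i], p))
    return hits

# Applied to the synthetic difference series from the previous sketch;
# years adjacent to the inserted 1985 break may also be flagged, in which
# case one would typically keep only the most significant one.
print(detect_breaks(difference, years))
```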
The main result is shown in the figure below. It shows the difference between the city station and the rural stations, that is, the influence of urbanization.
The red curves of the homogenized data provide a more accurate estimate of the influence of urbanization than the black curves showing the raw data. From the raw data one would wrongly conclude that not even a rapidly growing city such as Beijing produces any artificial warming due to urbanization. Surely the climate ostriches would prefer climatologists to use homogenization to estimate this effect more accurately?
The authors estimate that the effect of urbanization is 0.388°C per decade for the minimum temperature and 0.096°C per decade for the maximum temperature.
The differences of annual mean Tmax (a) and Tmin (b) between Huairou station and reference data for the original (dotted black lines) and adjusted (solid red lines) data series during 1960–2008. The solid straight lines denote linear trends. (Figure 5 of Zhang et al., 2014)
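Staying with the synthetic sketch above (not the paper's data), the urbanization trend is then simply the linear trend of the difference series after the detected break has been adjusted; a crude version:

```python
import numpy as np

# Crude break adjustment: shift the segment before the (synthetic) 1985 break
# by the estimated break size, so only the gradual urban signal remains.
adjusted = difference.copy()
break_size = difference[years >= 1985][:3].mean() - difference[years < 1985][-3:].mean()
adjusted[years < 1985] += break_size

# Linear trend of the adjusted difference series, in °C per decade.
slope_per_year = np.polyfit(years, adjusted, 1)[0]
print(f"urbanization effect: {10 * slope_per_year:.2f} °C per decade")
```

In this synthetic example the recovered trend should be close to the 0.2 °C per decade that was put in; the paper's 0.388 and 0.096 °C per decade are of course estimated from the real Huairou and reference data.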
Implications
It could be that the authors selected the right homogenization method for this specific study, but did not realize that their method is not standard and should not be used to study global warming. They do suggest that it should be studied whether their results have implications for homogenization in general, which is not the case. This is what The Hockey Schtick and Anthony Watts cite from the article:

“Our analysis shows that data homogenization for [temperature] stations moved from downtowns to suburbs can lead to a significant overestimate of rising trends of surface air temperature.”

The sentence in the article is:
"Our analysis shows that data homogenization for the stations moved from downtowns to suburbs can lead to a significant overestimate of rising trends of surface air temperature, and this necessitates a careful evaluation and adjustment for urban biases before the data are applied in analyses of local and regional climate change."I will leave it as exercise for the reader, whether our climate ostriches should have cited the full sentence because the second part is important for its understanding. I would at least have indicated citing only the first part by ending with three dots ..., and not adding one.
I would say that the full sentence just states that you should not only homogenize the break inhomogeneities, but also the gradual inhomogeneities. That is right. James Hansen et al. already wrote so in 2001:
"It follows that a necessary concomitant of discontinuity adjustments is an adequate correction for urban warming. Otherwise, if the discontinuities in the temperature record have a predominance of downward jumps over upward jumps, the adjustments may introduce a false warming, as in Figure 1. This might happen, for example, if it is more common for stations to move from population centers toward the suburbs, rather than vice versa."That is a sentence often quoted by climate ostriches, probably hoping that their readers are sufficiently stupid to believe the implied and erroneous statement that homogenization only removes breaks, otherwise I cannot understand their fetish with this statement. The following figure illustrates the effect.
(a) Schematic illustration of a temperature record at a site experiencing urban warming and a station move from the urban center to the urban outskirts. (b) The temperature record adjusted for the discontinuity has a stronger warming trend than that in the undisturbed environment. (Figure 1 from Hansen et al., 2001.)
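The effect in Hansen's figure is easy to reproduce numerically. A minimal synthetic sketch (my own illustration, not Hansen's calculation): a station with gradual urban warming and a downward relocation jump, in an environment with no real trend at all.

```python
import numpy as np

years = np.arange(1900, 2001)
undisturbed = np.zeros(years.size)       # no real climate change in this sketch
urban = 0.02 * (years - years[0])        # gradual urban warming (°C)
raw = urban - 1.0 * (years >= 1970)      # cooling jump at the move to the suburbs

# Adjusting only the break (shifting the pre-1970 segment down to match the
# later one) removes the jump but keeps the urban warming over the full record.
adjusted = raw - 1.0 * (years < 1970)

for name, series in (("undisturbed", undisturbed), ("raw", raw), ("break-adjusted", adjusted)):
    trend = 10 * np.polyfit(years, series, 1)[0]
    print(f"{name:>15}: {trend:+.2f} °C per decade")
```

The break-adjusted series ends up with a stronger warming trend than both the raw series and the undisturbed environment, which is exactly why the urban signal has to be corrected as well.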
So, yes, if you are interested in the global climate, you should use a homogenization method that not only removes break inhomogeneities, but also gradual ones. Thus, in that case, you should not use a detection method that can only detect breaks, as Zhang et al. (2014) did.
Furthermore, you should only use the station history to pin down the date of a break, not to decide whether to remove it. The latter is probably the biggest problem in practice. There are climatologists who use statistical homogenization to detect breaks, but only correct them if they can find evidence for the break in the station history, sometimes going to great lengths and reading the local newspapers of that time.
If you did this wrong, you would notice that the urban station has a stronger trend than the surrounding stations. This is a clear sign that the station is inhomogeneous and that your homogenization efforts failed. A climatologist would thus reconsider the methodology, and such a station would not be used to study changes in the regional or global climate.
Two ostrich quotes
The Hockey Schtick: the article corroborates that "leading meteorological institutions in the USA and around the world have so systematically tampered with instrumental temperature data that it cannot be safely said that there has been any significant net “global warming” in the 20th century."

Let's be generous and interpret the conspiratorial term "systematically tampered" as "homogenization". We now know that the authors did not use a standard homogenization method and that the study thus does not tell us anything about the effect of homogenization on real climatological data.
Even if there were no evidence of global warming from the station networks, there would still be so much independent confirmation of strong warming that it is hard to believe someone would actually write that there is no evidence of "any significant net global warming".
Somehow climate ostriches are capable of making such statements. Eric Worrall also asked recently on this blog: "How do you know the climate didn't actually cool?" After the independent evidence was mentioned, he was no longer interested in the discussion. One sometimes wonders what mushrooms these people are taking.
Anthony Watts: "From the “we told you so” department comes this paper out of China that quantifies many of the very problems with the US and global surface temperature record we have been discussing for years: the adjustments add more warming than the global warming signal itself.
The paper out of China quantified the influence of urbanization on one station in China. You could say that the paper suggests there may be problems that need to be investigated. But "quantifies"? No idea where he got that from. And as I showed in this post, you can only claim that the adjustments add more warming than the global warming signal itself if you are not knowledgeable. That could well be the case, which is kind of sad after "discussing", or rather blogging about, the quality of station data for so many years.
The influence of homogenization is rather modest: even in the raw data the global mean surface temperature trend between 1880 and now is 0.6°C per century; homogenization only increases it to 0.8°C per century (GHCNv3 dataset). Thus, even if you reject homogenization, preferably with good arguments, claiming that the adjustments (0.2°C per century) add more warming than the signal itself (0.6 or 0.8°C per century, depending on your preference) does not add up. As the Killer Rabbit of Caerbannog has shown, anyone with a little programming skill who would like to convince themselves can confirm this.
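If you want to check this yourself, the comparison is conceptually simple. A minimal sketch, assuming you have already reduced the unadjusted and adjusted GHCN-M data to annual global-mean anomaly series and stored them in two CSV files (the file names and columns here are hypothetical placeholders; the gridding and averaging step is left out):

```python
import numpy as np
import pandas as pd

def trend_per_century(series: pd.Series) -> float:
    """OLS linear trend of an annual anomaly series, in °C per century."""
    series = series.dropna()
    return 100 * np.polyfit(series.index, series.values, 1)[0]

# Hypothetical pre-computed annual global-mean anomalies (°C), one value per
# year, derived from the unadjusted (qcu) and adjusted (qca) GHCN-M files.
raw = pd.read_csv("ghcn_raw_annual.csv", index_col="year")["anomaly"]
adj = pd.read_csv("ghcn_adjusted_annual.csv", index_col="year")["anomaly"]

print(f"raw trend since 1880:      {trend_per_century(raw.loc[1880:]):.2f} °C per century")
print(f"adjusted trend since 1880: {trend_per_century(adj.loc[1880:]):.2f} °C per century")
```

With the GHCNv3 data this kind of comparison gives the roughly 0.6 and 0.8°C per century mentioned above, nowhere near a situation where the adjustments dominate the signal.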
Related posts
- Statistical homogenisation for dummies
  A primer on statistical homogenisation with many pictures.
- Homogenisation of monthly and annual data from surface stations
  A short description of the causes of inhomogeneities in climate data (non-climatic variability) and how to remove it using the relative homogenisation approach.
- New article: Benchmarking homogenisation algorithms for monthly data
  Raw climate records contain changes due to non-climatic factors, such as relocations of stations or changes in instrumentation. This post introduces an article that tested how well such non-climatic factors can be removed.
- A short introduction to the time of observation bias and its correction
  The time of observation bias is an important cause of inhomogeneities in temperature data.
References
The Hockey Schtick, 2014: New paper finds adjusted temperature data in China has significantly exaggerated warming.

Watts, Anthony, 2014: Important study on temperature adjustments: ‘homogenization…can lead to a significant overestimate of rising trends of surface air temperature.’
Lei Zhang, Guo-Yu Ren, Yu-Yu Ren, Ai-Ying Zhang, Zi-Ying Chu, and Ya-Qing Zhou, 2014: Effect of data homogenization on estimate of temperature trend: a case of Huairou station in Beijing Municipality. Theor. Appl. Climatol., 115, 365–373, doi: 10.1007/s00704-013-0894-0.