Saturday, 6 July 2013

Five statistically interesting problems in homogenization. Part 1. The inhomogeneous reference problem

This is a series I have been wanting to write for a long time. The final push was last week's conference, the 12th International Meeting on Statistical Climatology (IMSC), a very interesting meeting with an equal mix of statisticians and climatologists. (The next meeting, in three years, will be in the area of Vancouver, Canada; highly recommended.)

At the last meeting in Scotland, there were unfortunately no statisticians present in the parallel session on homogenization. This time it was a bit better. Still, it seems as if homogenization is not seen as the interesting statistical problem it is. I hope that this post can convince some statisticians to become (more) active in the homogenization of climate data, which provides many interesting problems.

As I see it, there are five problems for statisticians to work on. This post discusses the first one. The others will follow in the coming days. UPDATE: they are now linked in the list below.
Problem 1. The inhomogeneous reference problem
Neighboring stations are typically used as reference. Homogenization methods should take into account that this reference is also inhomogeneous
Problem 2. The multiple breakpoint problem
A longer climate series will typically contain more than one break. Methods designed to take this into account are more accurate than ad-hoc solutions based on single-breakpoint methods
Problem 3. Computing uncertainties
We know the remaining uncertainties of homogenized data in general, but we need methods to estimate the uncertainties for a specific dataset or station
Problem 4. Correction as model selection problem
We need objective selection methods for the best correction model to be used
Problem 5. Deterministic or stochastic corrections?
Current correction methods are deterministic. A stochastic approach would be more elegant

Problem 1. The inhomogeneous reference problem

Relative homogenization

Statisticians often work on absolute homogenization. In climatology, relative homogenization methods, which utilize a reference time series, are used almost exclusively. Relative homogenization means comparing a candidate station with multiple neighboring stations (Conrad & Pollak, 1950).

There are two main reasons for using a reference. Firstly, as the weather at two nearby stations is strongly correlated, comparing them removes a lot of weather noise and makes it much easier to see small inhomogeneities. Secondly, it removes the complicated regional climate signal. Consequently, it becomes a good approximation to assume that the difference time series (candidate minus reference) of two homogeneous stations is just white noise. Any deviation from this can then be considered an inhomogeneity.

The example with three stations below shows that you can see breaks more clearly in a difference time series (it only shows the noise reduction, as no nonlinear trend was added). You can see a break in the pairs B-A and C-A; thus station A likely has the break. This is confirmed by there being no break in the difference time series of C and B. With more pairs, such an inference can be made with more confidence; a small numerical sketch follows the figure captions below. For more graphical examples, see the post Homogenization for Dummies.

Figure 1. The temperature of all three stations. Station A has a break in 1940.
Figure 2. The difference time series of all three pairs of stations.
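To make the figures concrete, here is a minimal numerical sketch of the same setup, with made-up station data and parameter values: three stations sharing a regional signal, with a break inserted in station A in 1940. In the difference series the shared signal cancels, so the jump stands out in B-A and C-A but not in C-B.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1900, 1980)
regional = 0.5 * np.sin(2 * np.pi * (years - 1900) / 30)  # shared regional climate signal

# Each station is the regional signal plus station-specific weather noise;
# station A additionally gets a 0.8 K break in 1940.
noise = {s: rng.normal(0.0, 0.5, years.size) for s in "ABC"}
A = regional + noise["A"] + np.where(years >= 1940, 0.8, 0.0)
B = regional + noise["B"]
C = regional + noise["C"]

# In the difference series the regional signal cancels exactly.
for name, diff in [("B-A", B - A), ("C-A", C - A), ("C-B", C - B)]:
    jump = diff[years >= 1940].mean() - diff[years < 1940].mean()
    print(f"{name}: jump across 1940 = {jump:+.2f} K")
# A jump of about -0.8 K appears in B-A and C-A but not in C-B,
# so the break is attributed to station A.
```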

Absolute homogenization

In absolute homogenization, the time series to be homogenized also shows natural decadal variability and secular trends. This makes it very hard to distinguish non-climatic changes from climatic ones. Doing so requires making assumptions about how variable the climate can be, which is one of the things one would like to study with the homogenized data. Datasets that are homogenized absolutely should thus be used with much care to study climate variability and especially regional rapid climate change. Together with the much smaller signal-to-noise ratio, this means that absolute homogenization leads to much larger uncertainties in secular trends and decadal variability.

Benchmarking studies show that relative homogenization of dense networks makes the remaining errors small enough to be insignificant for many studies. With absolute homogenization the remaining errors are likely much larger, which makes estimating the uncertainties (see problem 3) a pressing need. As long as such estimates are not available, I would personally prefer to refrain from using data that would need to be homogenized absolutely, even if that means not being able to make inferences about some regions and periods.
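To give a feeling for the numbers, here is a rough back-of-the-envelope sketch, with assumed values for every quantity, of why relative homogenization has so much more detection power: in an absolute series a break competes with weather noise and climate variability, while in a difference series the climate signal cancels and the correlated part of the weather noise cancels as well.

```python
import numpy as np

sigma_weather = 0.5  # interannual weather noise per station (assumed value)
sigma_climate = 0.4  # decadal climate variability (assumed value)
rho = 0.9            # weather-noise correlation between nearby stations (assumed)
break_size = 0.8     # size of the inhomogeneity in kelvin (assumed)

# Absolute: the break competes with weather noise and climate variability.
snr_absolute = break_size / np.hypot(sigma_weather, sigma_climate)

# Relative: the climate signal cancels and, because the noise is correlated,
# Var(candidate - reference) = 2 * sigma_weather**2 * (1 - rho).
snr_relative = break_size / (sigma_weather * np.sqrt(2.0 * (1.0 - rho)))

print(f"SNR absolute: {snr_absolute:.1f}, SNR relative: {snr_relative:.1f}")
# With these (invented) numbers the difference series makes the break
# about three times easier to detect.
```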

Solutions for the inhomogeneous reference problem

The example above illustrates the homogenization of data using multiple pairs. With pairs, the reference is not assumed to be homogeneous. Rather, once the breaks have been detected in the pairs, an attribution step attributes the breaks to a specific station. Currently this is done by hand (in PRODIGE; Caussinus & Mestre, 2004) or with ad-hoc rules (in the Pairwise Homogenization Algorithm of NOAA; Menne & Williams, 2009); a toy version of such a rule is sketched below.
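The following toy sketch illustrates the flavor of such an ad-hoc attribution rule; it is not the actual PRODIGE or NOAA algorithm, and the station names and detection results are invented.

```python
from collections import Counter
from itertools import combinations

stations = ["A", "B", "C", "D"]
# Invented detection results: pairs flagged with a break around the same date.
flagged_pairs = [("A", "B"), ("A", "C"), ("A", "D")]

votes = Counter(s for pair in flagged_pairs for s in pair)
n_pairs = {s: sum(s in p for p in combinations(stations, 2)) for s in stations}

# The station appearing in the largest fraction of its pairs is the likely culprit.
culprit = max(stations, key=lambda s: votes[s] / n_pairs[s])
print(dict(votes), "->", culprit)  # {'A': 3, 'B': 1, 'C': 1, 'D': 1} -> A
```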

In the homogenization method HOMER (Mestre et al., 2013), a first attempt is made to homogenize all pairs simultaneously using a joint detection method from biostatistics. This approach probably lends itself best to a mathematically tractable and optimal solution. Much more research on such methods is needed.

The alternative to using pairs is a composite reference computed from multiple neighboring stations. For example, the much-used Standard Normal Homogeneity Test (SNHT; Alexandersson & Moberg, 1997) uses this approach. The advantage over performing detection on one pair is that more noise is removed this way. However, any break in the difference time series computed with such a composite reference is typically attributed directly to the candidate station. This is not a bad approximation, but it may lead to errors, as the break can also be in one of the reference stations. It is thus important to select reference stations that have no breaks near the break in the candidate station. In SNHT this is done manually; in MASH it is done automatically (Szentimrey, 1999, 2003). Making this robust is probably more an informatics problem than a statistical one.
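For the interested statistician, the classical single-break SNHT statistic is simple to write down: standardize the difference series to anomalies z and maximize T(k) = k·z̄₁² + (n−k)·z̄₂² over all candidate break positions k, where z̄₁ and z̄₂ are the mean anomalies before and after k. Below is a compact sketch, my own minimal implementation rather than any operational code, applied to a synthetic difference series.

```python
import numpy as np

def snht(diff):
    """Most likely break position and SNHT statistic for a difference series."""
    z = (diff - diff.mean()) / diff.std(ddof=1)  # standardized anomalies
    n = z.size
    t = np.empty(n - 1)
    for k in range(1, n):  # candidate break after position k
        z1, z2 = z[:k].mean(), z[k:].mean()
        t[k - 1] = k * z1**2 + (n - k) * z2**2
    k_best = int(np.argmax(t)) + 1
    return k_best, t[k_best - 1]

rng = np.random.default_rng(0)
diff = rng.normal(0.0, 1.0, 100)
diff[60:] += 1.5  # insert a break after position 60
print(snht(diff))  # break found near position 60 with a large T value
```

The resulting T must be compared with critical values, which depend on the series length, to decide whether the candidate break is significant.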
Figure 3. An important transition in Spain and many other European countries in the late 19th and early 20th century was the change from Montsouri (French) screens (the rightmost shelter is a replica) to Stevenson screens (the middle one is equipped with automatic sensors; the leftmost is a Stevenson screen with conventional meteorological instruments). Brunet et al. (2011) showed that this transition produced a considerable jump in the temperature record, both in the mean and in the tails. Picture: Project SCREEN, Center for Climate Change, Universitat Rovira i Virgili, Spain.


You may wonder why a break in one of the reference stations is such a problem. The reason is that often more than one station contains an inhomogeneity at about the same time. Climate networks go through periods of technological transition. For instance, currently Stevenson screens are being replaced by automatic weather stations; for more examples see this post on homogenization of monthly and annual data. Such transitions are important periods, as they may bias the network-mean trends and they produce many breaks over a short period. Reliably solving the combinatorial problem of nearby breaks in many stations is important for reducing biases due to such technological transitions.

A related problem is that sometimes all stations in a network have a break at the same date, for example when a weather service changes the time of observation. In this case, the relative homogenization principle breaks down. Consequently, such inhomogeneities are typically corrected using additional information, usually from parallel measurements with the old and new set-ups; a minimal sketch of this idea follows below. One could in principle still detect and correct such inhomogeneities by comparison with other nearby networks. That would require an algorithm that additionally knows which stations belong to which network and prioritizes correcting breaks found between stations in different networks. Such algorithms do not exist yet.
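Here is a minimal sketch of the parallel-measurement approach, with invented numbers: the overlap period, during which both set-ups measure side by side, yields the adjustment for the older part of the record.

```python
import numpy as np

rng = np.random.default_rng(1)
overlap = 5  # years of side-by-side measurements (invented)

# Invented parallel data: the new set-up reads about 0.4 K lower.
old_screen = 15.0 + rng.normal(0.0, 0.3, overlap)
new_screen = old_screen - 0.4 + rng.normal(0.0, 0.1, overlap)

# The mean difference over the overlap is the adjustment for the old record.
adjustment = new_screen.mean() - old_screen.mean()
print(f"Shift the pre-transition record by {adjustment:+.2f} K")
# The whole pre-transition record is shifted by this amount so that it is
# comparable with measurements from the new set-up.
```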

Concluding remarks

The inhomogeneous reference problem is important for the relative homogenization of climate data. We are very close to solving it optimally, but not fully there yet.

Related posts

All posts in this series:
Problem 1. The inhomogeneous reference problem
Neighboring stations are typically used as reference. Homogenization methods should take into account that this reference is also inhomogeneous
Problem 2. The multiple breakpoint problem
A longer climate series will typically contain more than one break. Methods designed to take this into account are more accurate than ad-hoc solutions based on single-breakpoint methods
Problem 3. Computing uncertainties
We know the remaining uncertainties of homogenized data in general, but we need methods to estimate the uncertainties for a specific dataset or station
Problem 4. Correction as model selection problem
We need objective selection methods for the best correction model to be used
Problem 5. Deterministic or stochastic corrections?
Current correction methods are deterministic. A stochastic approach would be more elegant
In previous posts, I have discussed future research in homogenization from a climatological perspective:

Future research in homogenisation of climate data – EMS 2012 in Poland

HUME: Homogenisation, Uncertainty Measures and Extreme weather

A database with daily climate data for more reliable studies of changes in extreme weather

References

Alexandersson, H. and A. Moberg. Homogenization of Swedish temperature data. Part I: Homogeneity test for linear trends. International Journal of Climatology, 17, pp. 25-34, doi: 10.1002/(SICI)1097-0088(199701)17:1<25::AID-JOC103>3.0.CO;2-J, 1997.

Brunet, M., J. Asin, J. Sigró, M. Bañón, F. García, E. Aguilar, J.E. Palenzuela, T.C. Peterson and P.D. Jones. The minimisation of the "screen bias" from ancient Western Mediterranean air temperature records: an exploratory statistical analysis. International Journal of Climatology, 31, pp. 1879-1895, doi: 10.1002/joc.2192, 2011.

Caussinus, H. and O. Mestre. Detection and correction of artificial shifts in climate series. Applied Statistics, 53, pp. 405–425, doi: 10.1111/j.1467-9876.2004.05155.x, 2004.

Conrad, V. and C. Pollak. Methods in climatology. Harvard University Press, Cambridge, MA, 459 pp., 1950.

Menne, M.J. and C.N. Williams. Homogenization of temperature series via pairwise comparisons. Journal of Climate, 22, pp. 1700-1717, doi: 10.1175/2008JCLI2263.1, 2009.

Mestre, O., P. Domonkos, F. Picard, I. Auer, S. Robin, E. Lebarbier, R. Böhm, E. Aguilar, J. Guijarro, G. Vertachnik, M. Klancar, B. Dubuisson, and P. Stepanek: HOMER: a homogenization software – methods and applications. Idojaras, Quarterly journal of the Hungarian Meteorological Service, 117, no. 1, 2013.

Szentimrey, T. Multiple Analysis of Series for Homogenization (MASH). Proceedings of the second seminar for homogenization of surface climatological data, Budapest, Hungary, WMO, WCDMP-No. 41, pp. 27-46, 1999.

Szentimrey, T. Multiple Analysis of Series for Homogenization (MASH v3.02). Report, Hungarian Meteorological Service, 2003.

2 comments:

  1. I am looking forward to your post on problem 4. That is important for studies on changes in extremes.

  2. Tim, problem 4 has been renumbered to problem 5. You will have to wait a little longer. Sorry.

