
Friday, 19 July 2013

Statistically interesting problems: correction methods in homogenization

This is the last post in a series on five statistically interesting problems in the homogenization of climate network data. It discusses two problems concerning the correction methods used in homogenization. Especially the correction of daily data is becoming increasingly important, because more and more climatologists work with daily climate data. The main added value of daily data is that it allows one to study climatic changes in the probability distribution, which makes it necessary to study the non-climatic factors (inhomogeneities) in the distribution as well. This is thus a pressing, but also a difficult task.

The five main statistical problems are:
Problem 1. The inhomogeneous reference problem
Neighboring stations are typically used as reference. Homogenization methods should take into account that this reference is also inhomogeneous.
Problem 2. The multiple breakpoint problem
A longer climate series will typically contain more than one break. Methods designed to take this into account are more accurate than ad-hoc solutions based on single-breakpoint methods.
Problem 3. Computing uncertainties
We know the remaining uncertainties of homogenized data in general, but we need methods to estimate the uncertainties for a specific dataset or station.
Problem 4. Correction as model selection problem
We need objective selection methods for the best correction model to be used.
Problem 5. Deterministic or stochastic corrections?
Current correction methods are deterministic. A stochastic approach would be more elegant.

Problem 4. Correction as model selection problem

The number of degrees of freedom (DOF) of the various correction methods varies widely. It ranges from one degree of freedom for annual corrections of the mean, via 12 degrees of freedom for monthly corrections of the mean and 120 for decile corrections applied to every month (as in the higher-order moments (HOM) method for daily data; Della-Marta and Wanner, 2006), up to a large number of DOF for quantile or percentile matching.
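To make the high-DOF end of that range concrete, below is a minimal sketch in Python (with made-up data and variable names) of what decile corrections for one break amount to: the adjustment for each decile is the difference between the decile values of the difference series (candidate minus reference) after and before the break. This only illustrates how many parameters have to be estimated; it is not the actual HOM algorithm of Della-Marta and Wanner (2006), which fits a regression model between candidate and reference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily difference series (candidate minus reference) for one
# calendar month, split at a known break date.
diff_before = rng.normal(0.0, 1.0, size=900)   # homogeneous subperiod before the break
diff_after = rng.normal(0.5, 1.3, size=900)    # after the break: shifted mean, larger variance

# Annual correction of the mean: a single degree of freedom.
# The correction is added to the earlier subperiod to match the latest one.
annual_correction = diff_after.mean() - diff_before.mean()

# Decile corrections: 10 degrees of freedom for this month,
# 120 when estimated separately for every calendar month.
decile_points = np.arange(5, 100, 10)          # 5th, 15th, ..., 95th percentile
decile_corrections = (np.percentile(diff_after, decile_points)
                      - np.percentile(diff_before, decile_points))

print("annual correction :", round(float(annual_correction), 2))
print("decile corrections:", np.round(decile_corrections, 2))
```

With only a few years of daily data per homogeneous subperiod, the 120 decile corrections are clearly much harder to estimate reliably than the single annual constant, which is what makes the choice of correction model important.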

Which correction method is best depends on the characteristics of the inhomogeneity. For a calibration problem, adjusting just the annual mean would be sufficient; for a serious exposure problem (e.g., insolation of the instrument), a seasonal cycle in the monthly corrections is to be expected and the full distribution of the daily temperatures may need to be adjusted.

The best correction method also depends on the reference. Whether the parameters of a certain correction model can be estimated reliably depends on how well correlated the neighboring reference stations are.

Currently, climatologists choose their correction method mainly subjectively. For precipitation, annual corrections are typically applied; for temperature, monthly corrections are typical. The HOME benchmarking study showed that these are good choices. For example, an experimental contribution correcting precipitation on a monthly scale had a larger error than the same method applied on the annual scale, because the data did not allow for an accurate estimation of 12 monthly correction constants.

One correction method is typically applied to the entire regional network, while the optimal correction method depends on the characteristics of each individual break and on the quality of the reference. These vary from station to station and from break to break. Especially in global studies, the number of stations in a region, and thus the signal-to-noise ratio, varies widely, and one fixed choice is likely suboptimal. Studying which correction method is optimal for every break is a lot of work for manual methods; instead, we should work on automatic correction methods that objectively select the optimal correction model, e.g., using an information criterion. As far as I know, no one works on this yet.
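As a sketch of what such an objective, automatic selection could look like, the toy example below (my own illustration, not an existing homogenization method) fits two correction models to a monthly difference series around one break, a single annual constant and twelve monthly constants, and lets the Akaike information criterion pick the one that the data actually support.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical monthly difference series (candidate minus reference) around one break.
months = np.tile(np.arange(12), 10)          # 10 years of monthly values
n = months.size
after = np.arange(n) >= n // 2               # the second half lies after the break

# Simulated truth: a break whose size has a seasonal cycle, plus noise.
true_step = 0.8 + 0.6 * np.cos(2 * np.pi * months / 12)
diff = rng.normal(0.0, 0.5, size=n) + np.where(after, true_step, 0.0)

def aic(rss, n_obs, k):
    # Gaussian AIC up to a constant; the noise variance is common to both
    # models and therefore drops out of the comparison.
    return n_obs * np.log(rss / n_obs) + 2 * k

# Model A: one level before and one level after the break (2 parameters).
fit_a = np.where(after, diff[after].mean(), diff[~after].mean())
rss_a = np.sum((diff - fit_a) ** 2)

# Model B: one level per calendar month before and after the break (24 parameters).
fit_b = np.empty(n)
for m in range(12):
    sel = months == m
    fit_b[sel & ~after] = diff[sel & ~after].mean()
    fit_b[sel & after] = diff[sel & after].mean()
rss_b = np.sum((diff - fit_b) ** 2)

print("AIC annual correction :", round(aic(rss_a, n, k=2), 1))
print("AIC monthly correction:", round(aic(rss_b, n, k=24), 1))  # lower is better
```

With a noisier difference series or shorter subperiods, the penalty term makes the criterion fall back to the simpler annual model, which is exactly the behaviour one would want from an automatic selection.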

Problem 5. Deterministic or stochastic corrections?

Annual and monthly data is normally used to study trends and variability in the mean state of the atmosphere. Consequently, typically only the mean is adjusted by homogenization. Daily data, on the other hand, is used to study climatic changes in weather variability, severe weather and extremes. Consequently, not only the mean, but the full probability distribution describing the variability of the weather should be corrected.

Seen as a variability problem, the correction of daily data is similar to statistical downscaling in many ways. Both methodologies aim to produce data with the right variability, taking into account the local climate and large-scale circulation. A difference is that downscaling adds variability, whereas daily homogenization correction methods may also need to reduce variability.
Figure: illustration of one problem of inflating the variance of a time series.

The figure illustrates one problem of inflation. If your original data are the green dots and have a trend in the mean (solid green line), and you (deterministically) increase the variance of the data by multiplying it with a factor (red dots), you also unintentionally change the trend in the mean (dashed red line). By (stochastically) adding noise, you would not systematically change the trend in the mean.


One lesson from statistical downscaling is that increasing the variance of a time series deterministically by multiplication with a factor, called inflation, is the wrong approach, and that the variance that could not be explained (for instance, by the large-scale circulation) should instead be added stochastically as noise (Von Storch, 1999). Maraun (2013) recently generalized this result to the deterministic quantile matching method, which is also used in daily homogenization. Most statistical correction methods deterministically change the daily temperature distribution and do not stochastically add noise.
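The argument is easy to reproduce numerically. The sketch below (my own toy example, following the reasoning of Von Storch, 1999) generates a series with a known trend, once inflates it by multiplying the anomalies with a factor and once adds independent noise with the same variance increase; only the inflated series ends up with a changed trend.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical daily series: a linear trend plus weather noise.
n = 5000
t = np.arange(n)
series = 0.001 * t + rng.normal(0.0, 1.0, size=n)

def slope(y):
    return np.polyfit(t, y, 1)[0]            # least-squares trend per time step

# Deterministic inflation: multiply the anomalies (about the series mean) by a factor.
factor = 1.5
inflated = series.mean() + factor * (series - series.mean())

# Stochastic alternative: add independent noise chosen such that the total
# variance increases by the same amount as under inflation.
extra_sd = np.sqrt(factor**2 - 1.0) * series.std()
noise_added = series + rng.normal(0.0, extra_sd, size=n)

print(f"trend of original    : {slope(series):.5f}")
print(f"trend after inflation: {slope(inflated):.5f}")     # multiplied by the factor
print(f"trend after noise    : {slope(noise_added):.5f}")  # unchanged, up to sampling noise
```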

A large community of climatologists and statisticians works on statistical downscaling. Transferring their ideas to daily homogenization is likely fruitful. For example, predictor selection methods from downscaling could be useful. Both fields require powerful and robust (time invariant) predictors. Multi-site statistical downscaling techniques aim at reproducing the auto- and cross-correlations between stations (Maraun et al., 2010), which may be interesting for homogenization as well.

Whether the deterministic correction methods lead to severe errors in homogenization still needs to be studied, but stochastic methods that implement the corrections by adding noise would at least fit the problem better. Such stochastic corrections are not trivial: they should have the right variability on all temporal and spatial scales.

For many applications it may be better to only detect the dates of break inhomogeneities and to perform the analysis on the homogeneous subperiods. In the case of trend analysis, this would be similar to the work of the Berkeley Earth Surface Temperature group on the mean temperature signal. Periods with gradual inhomogeneities, e.g. due to urbanization, would have to be detected and excluded from such an analysis.

An outstanding problem is that current correction methods have only been developed for break inhomogeneities; methods for gradual ones are still missing. In the homogenization of the mean of annual and monthly data, gradual inhomogeneities are successfully removed by implementing multiple small breaks in the same direction. However, as daily data is used to study changes in the distribution, this approach may not be appropriate here, as it could produce larger deviations near the small breaks.

Furthermore, most daily correction methods use one reference station; the new method by Trewin (2013) using multiple reference stations is likely an important innovation.

At the moment, all daily correction methods correct the breaks one after another. In monthly homogenization it has been found that correcting all breaks simultaneously (Caussinus and Mestre, 2004) is more accurate (Domonkos et al., 2011). It is thus likely worthwhile to develop multiple-breakpoint correction methods for daily data as well.
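To illustrate what "simultaneously" means, the sketch below sets up a toy network with known break positions and estimates all subperiod levels of all stations in a single least-squares fit, together with a shared regional signal. This is only the skeleton of the idea; it is not the actual procedure of Caussinus and Mestre (2004).

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical tiny network: three stations share one regional climate signal,
# and each station has its own (already detected) break positions.
n_time, n_stations = 80, 3
climate = np.cumsum(rng.normal(0.0, 0.2, size=n_time))   # shared regional signal
breaks = {0: [30], 1: [20, 60], 2: []}
true_steps = {0: [0.8], 1: [-0.5, 0.6], 2: []}

obs = np.empty((n_stations, n_time))
segment = {}
for s in range(n_stations):
    segment[s] = np.searchsorted(breaks[s], np.arange(n_time), side="right")
    levels = np.concatenate(([0.0], np.cumsum(true_steps[s])))
    obs[s] = climate + levels[segment[s]] + rng.normal(0.0, 0.3, size=n_time)

# Joint model: obs[s, t] = climate[t] + level of the subperiod of station s at time t.
# Design matrix: one column per time step plus one column per homogeneous subperiod.
n_seg = [len(breaks[s]) + 1 for s in range(n_stations)]
X = np.zeros((n_stations * n_time, n_time + sum(n_seg)))
y = obs.reshape(-1)
for s in range(n_stations):
    offset = n_time + sum(n_seg[:s])
    rows = s * n_time + np.arange(n_time)
    X[rows, np.arange(n_time)] = 1.0                  # regional climate term
    X[rows, offset + segment[s]] = 1.0                # station's subperiod level

beta, *_ = np.linalg.lstsq(X, y, rcond=None)          # all levels estimated at once

# Corrections that bring every earlier subperiod onto the station's latest level.
for s in range(n_stations):
    offset = n_time + sum(n_seg[:s])
    levels = beta[offset:offset + n_seg[s]]
    print(f"station {s} corrections:", np.round(levels[-1] - levels, 2))
```

The point of the joint fit is that every subperiod is compared with the common regional signal over the full period, rather than propagating pairwise corrections from one break to the next, where the errors of the individual corrections would accumulate.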

Finally, current daily correction methods rely on previously detected breaks and assume that the homogeneous subperiods (HSP) are homogeneous. However, these HSP are currently based on detection of breaks in the mean only. Breaks in higher moments may thus still be present in the "homogeneous" subperiods and affect the corrections. If only for this reason, we should also work on detection of breaks in the distribution.
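A first, very rough step in that direction could look like the sketch below (my own illustration, not an established homogenization test): compare the full distribution of a daily difference series in two adjacent homogeneous subperiods after removing the subperiod means, so that the tests react to changes in the variability and shape only, not to a break in the mean.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical daily difference series: the mean is the same in both
# subperiods, but the variance changes at an undetected break.
before = rng.normal(0.0, 1.0, size=1500)
after = rng.normal(0.0, 1.4, size=1500)

# Remove the subperiod means so that a break in the mean cannot trigger the tests.
res_before = before - before.mean()
res_after = after - after.mean()

# Two-sample Kolmogorov-Smirnov test on the full distribution.
ks = stats.ks_2samp(res_before, res_after)

# Levene's test for a change in variance (more robust than an F-test).
lev = stats.levene(res_before, res_after)

print(f"Kolmogorov-Smirnov: statistic {ks.statistic:.3f}, p-value {ks.pvalue:.1e}")
print(f"Levene            : statistic {lev.statistic:.1f}, p-value {lev.pvalue:.1e}")
```

For real daily data, the serial correlation of the weather would have to be taken into account, as it makes such tests overconfident; the sketch only shows the kind of comparison that is currently missing from the detection step.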

Concluding remarks

Concluding, homogenization provides many interesting statistical problems. Especially the problems around daily data are pressing, as the inhomogeneities in the distribution of daily data are expected to be large and to affect our estimates of changes in extreme weather. The latter is expected to be responsible for a large part of the societal impact of climate change. Consequently, much climatological work is being done in this field, and homogenization science is currently not keeping up.

Related posts

The first three statistical problems can be found in the previous posts of this series.
Problem 1. The inhomogeneous reference problem
Neighboring stations are typically used as reference. Homogenization methods should take into account that this reference is also inhomogeneous.
Problem 2. The multiple breakpoint problem
A longer climate series will typically contain more than one break. Methods designed to take this into account are more accurate than ad-hoc solutions based on single-breakpoint methods.
Problem 3. Computing uncertainties
We know the remaining uncertainties of homogenized data in general, but we need methods to estimate the uncertainties for a specific dataset or station.
Problem 4. Correction as model selection problem
We need objective selection methods for the best correction model to be used.
Problem 5. Deterministic or stochastic corrections?
Current correction methods are deterministic. A stochastic approach would be more elegant.
Previously I have discussed future research in homogenization from a climatological perspective, which may also be of interest.

Future research in homogenisation of climate data – EMS 2012 in Poland

HUME: Homogenisation, Uncertainty Measures and Extreme weather

A database with daily climate data for more reliable studies of changes in extreme weather

References

Caussinus, H. and O. Mestre. Detection and correction of artificial shifts in climate series. Applied Statistics, 53, pp. 405–425, doi: 10.1111/j.1467-9876.2004.05155.x, 2004.

Della-Marta, P.M. and H. Wanner. A method of homogenizing the extremes and mean of daily temperature measurements. Journal of Climate, 19, pp. 4179-4197, doi: 10.1175/JCLI3855.1, 2006.

Domonkos, P., V. Venema and O. Mestre. Efficiencies of homogenisation methods: our present knowledge and its limitation. Seventh seminar for homogenization and quality control in climatological databases, Budapest, Hungary, 24 – 28 October, submitted 2011.

Maraun, D. Bias correction, quantile mapping, and downscaling: revisiting the inflation issue. Journal of Climate, 26, pp. 2137-2143, doi: 10.1175/JCLI-D-12-00821.1, 2013.

Maraun, D., F. Wetterhall, A.M. Ireson, R.E. Chandler, E.J. Kendon, M. Widmann, S. Brienen, H.W. Rust, T. Sauter, M. Themeßl, V.K.C. Venema, K.P. Chun, C.M. Goodess, R.G. Jones, C. Onof, M. Vrac, and I. Thiele-Eich. Precipitation downscaling under climate change. Recent developments to bridge the gap between dynamical models and the end user. Reviews of Geophysics, 48, RG3003, doi: 10.1029/2009RG000314, 2010.

Trewin, B. A daily homogenized temperature data set for Australia. International Journal of Climatology, 33, pp. 1510–1529. doi: 10.1002/joc.3530, 2013.

Von Storch, H. On the use of "inflation" in statistical downscaling. Journal of Climate, 12, pp. 3505-3506, doi: 10.1175/1520-0442(1999)012<3505:OTUOII>2.0.CO;2, 1999.
