Tuesday, 26 November 2013

Are break inhomogeneities a random walk or a noise?

Tomorrow is the next conference call of the benchmarking and assessment working group (BAWG) of the International Surface Temperature Initiative (ISTI; Thorne et al., 2011). The BAWG will create a dataset to benchmark (validate) homogenization algorithms. It will mimic the real mean temperature data of the ISTI, but will include known inhomogeneities, so that we can assess how well the homogenization algorithms remove them. We are almost finished discussing how the benchmark dataset should be developed, but still need to settle some details, such as the question: are break inhomogeneities a random walk or a noise?
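To make the question concrete, here is a minimal sketch (not the BAWG benchmark code) of what the two assumptions mean when simulating the offsets of a single station; the break frequency and break sizes are illustrative assumptions of mine, not ISTI settings.

```python
# Minimal sketch (not the BAWG benchmark code) of two ways to simulate the
# station offsets caused by break inhomogeneities. Break frequency and sizes
# are illustrative assumptions, not ISTI benchmark settings.
import numpy as np

rng = np.random.default_rng(42)
n_years = 150
break_years = np.sort(rng.choice(np.arange(1, n_years), size=5, replace=False))
jumps = rng.normal(0.0, 0.8, size=break_years.size)  # break sizes in degC

# "Random walk": every break shifts the series relative to its previous level,
# so the offsets accumulate and the station can drift far from its baseline.
walk = np.zeros(n_years)
for year, jump in zip(break_years, jumps):
    walk[year:] += jump

# "Noise": every homogeneous subperiod gets an independent offset around zero,
# so the station keeps returning to its baseline.
noise = np.zeros(n_years)
levels = rng.normal(0.0, 0.8, size=break_years.size + 1)
edges = np.concatenate(([0], break_years, [n_years]))
for level, start, stop in zip(levels, edges[:-1], edges[1:]):
    noise[start:stop] = level

print("net drift, random-walk breaks: %.2f degC" % (walk[-1] - walk[0]))
print("net drift, noise-like breaks:  %.2f degC" % (noise[-1] - noise[0]))
```

The difference matters for the benchmark: in the random-walk case the expected offset grows with the number of breaks, while in the noise case it stays bounded, so the two assumptions imply different non-climatic trend errors for the algorithms to remove.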

Previous studies

The benchmark dataset of the ISTI will be global and is also intended to be used to estimate uncertainties in the climate signal due to remaining inhomogeneities. These are the two main improvements over previous validation studies.

Williams, Menne, and Thorne (2012) validated the pairwise homogenization algorithm of NOAA on a dataset mimicking the US Historical Climate Network. The paper focusses on how well large-scale biases can be removed.

The COST Action HOME has performed a benchmarking of several small networks (5 to 19 stations) realistically mimicking European climate networks (Venema et al., 2012). Its main aim was to intercompare homogenization algorithms; the small networks also allowed HOME to test manual homogenization methods.

These two studies were blind, in other words the scientists homogenizing the data did not know where the inhomogeneities were. An interesting coincidence is that the people who generated the blind benchmarking data were outsiders at the time: Peter Thorne for NOAA and me for HOME. This probably explains why we both made an error, which we should not repeat in the ISTI.

Monday, 25 November 2013

Introduction to series on weather variability and extreme events

This is the introduction to a series on changes in the daily weather and extreme weather. The series discusses how much we know about whether and to what extent the climate system experiences changes in the variability of the weather. Variability here denotes changes in the shape of the probability distribution around the mean. The most basic measure of variability would be the variance, but many other measures could be used.

Dimensions of variability

Studying weather variability adds dimensions, and complexities, to our understanding of climate change. This series is mainly aimed at other scientists, but I hope it will be clear enough for everyone interested. If not, just complain and I will try to explain it better, at least as far as that is possible: we do not have many solid results on changes in the weather variability yet.

The quantification of weather variability requires specifying the length of the periods and the size of the regions considered (the extent, i.e. the scope or domain of the data). Different from studying averages, the consideration of variability also adds the dimension of the spatial and temporal averaging scale (the grain, i.e. the minimum resolution of the data); thus variability requires the definition of both an upper and a lower scale. This is important in climate and weather because specific climatic mechanisms may influence variability in certain scale ranges. For instance, observations suggest that near-surface temperature variability is decreasing on scales between one year and decades, while its variability on scales from days to months is likely increasing.
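As a toy illustration of the role of the averaging scale, the sketch below (purely synthetic numbers, not observations) shows how the variance one measures depends on whether daily values, 30-day means or annual means are considered.

```python
# Toy illustration (synthetic data only): the variance you measure depends on
# the temporal averaging scale ("grain") at which you look at the series.
import numpy as np

rng = np.random.default_rng(0)
n_years, days_per_year = 30, 365
# toy daily temperature: slow interannual component plus fast day-to-day noise
slow = np.repeat(rng.normal(0.0, 0.3, size=n_years), days_per_year)
fast = rng.normal(0.0, 2.0, size=n_years * days_per_year)
daily = slow + fast

monthly = daily.reshape(-1, 30).mean(axis=1)                 # crude 30-day means
annual = daily.reshape(n_years, days_per_year).mean(axis=1)  # annual means

print("variance of daily values:", round(daily.var(), 3))
print("variance of 30-day means:", round(monthly.var(), 3))
print("variance of annual means:", round(annual.var(), 3))
```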

Similar to extremes, which can be studied on a range from moderate (soft) extremes to extreme (hard) extremes, variability can be analysed with measures ranging from ones that describe the bulk of the probability distribution to ones that focus more on the tails. Considering the complete probability distribution adds another dimension to the study of anthropogenic climate change. A soft measure of variability could be the variance or the interquartile range. A harder measure could be the kurtosis (the fourth moment) or the distance between the 1st and the 99th percentile. A hard variability measure would be the difference between the 10-year return levels of the maxima and the minima.
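A minimal sketch of such a range of measures, computed on an arbitrary synthetic sample (the return-level example would require a long record and an extreme-value fit, so it is left out here):

```python
# Soft-to-hard measures of variability on a synthetic sample; the numbers are
# placeholders, the point is only which part of the distribution each measure
# is sensitive to.
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(0.0, 1.0, size=10_000)   # stand-in for daily anomalies

variance = sample.var(ddof=1)                                       # soft: bulk of the distribution
iqr = np.percentile(sample, 75) - np.percentile(sample, 25)         # soft: interquartile range
kurtosis = np.mean(((sample - sample.mean()) / sample.std()) ** 4)  # harder: 4th standardized moment
p01_p99 = np.percentile(sample, 99) - np.percentile(sample, 1)      # harder: distance between the tails

print(variance, iqr, kurtosis, p01_p99)
```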

Another complexity to the problem is added by the data: climate models and observations typically have very different averaging scales. Thus any comparisons require upscaling (averaging) or downscaling, which in turn needs a thorough understanding of variability at all involved scales.

A final complexity is added by the need to distinguish between the variability of the weather itself and the apparent variability added by measurement and modelling uncertainties, sampling, and errors. This can even affect trend estimates of the observed weather variability, because improvements in climate observations have likely caused apparent, but non-climatic, reductions in the weather variability. As a consequence, data homogenization is central to the analysis of observed changes in weather variability.

Friday, 22 November 2013

IPCC videos on the working groups The Physical Science Basis and Extremes



The IPCC has just released a video with the main points from the IPCC report on the physical science basis. Hat tip Klimazwiebel. It is beautifully made and I did not notice any obvious errors, which, unfortunately, one normally cannot say of journalistic works.

For the typical reader of this blog it may be a bit too superficial. And the people who do not read this blog will likely never see it. Thus I do wonder for whom the video was made. :-)

Another nice video by the IPCC is the one on the special report on Extremes (SREX) published last year. Same caveat.

Sunday, 17 November 2013

On the reactions to the doubling of the recent temperature trend by Curry, Watts and Lucia

The recent Cowtan and Way study, "Coverage bias in the HadCRUT4 temperature record", in the QJRMS showed that the temperature trend over the last 15 years is more than twice as strong as previously thought. [UPDATE: The paper is now Open Access and can be read here]

This created quite a splash in the blogosphere; see my last post. This is probably no wonder. The strange idea that global warming has stopped is one of the main memes of the climate ostriches and, in the USA, even of the mainstream media. A recent media analysis showed that half of the reporting on the recent publication of the IPCC report pertained to this meme.

This reporting stands in stark contrast to the IPCC almost having forgotten to write about it, as it has little climatological significance. Also after the Cowtan and Way (2013) paper, the global temperature trend between 1880 and now is still about 0.8 degrees per century.

The global warming of the entire climate system is continuing without pause in the warming of the oceans, which are the main absorber of energy in the climate system; the atmospheric temperature increase accounts for only about 2 percent of the total. Because the last 15 years also account for only a short part of the anthropogenic warming period, one can estimate that the discussion is about less than one thousandth of the warming (roughly 2 percent of a few percent).

Reactions

The study was positively received by amongst others the Klimalounge (in German), RealClimate, Skeptical Science, Carbon Brief, QuakeRattled, WottsUpWithThatBlog, OurChangingClimate, Moyhu (Nick Stokes) and Planet 3.0. It is also discussed in the press: Sueddeutsche Zeitung, TAZ, Spiegel Online (three leading newspapers in Germany, in German), The Independent (4 articles), Mother Jones, Hürriyet (a large newspaper in Turkey) and Science Daily.

Lucia at The Blackboard wrote in her first post, Cowtan and Way: Have they killed the pause?, and stated: "Right now, I’m mostly liking the paper. The issues I note above are questions, but they do do quite a bit of checking". And Lucia wrote in her second post: "The paper is solid."

Furthermore, Steve Mosher writes: "I know robert [Way] does first rate work because we’ve been comparing notes and methods and code for well over a year. At one point we spent about 3 months looking at labrador data from enviroment canada and BEST. ... Of course, folks should double and triple check, but he’s pretty damn solid."

The main serious critical voice seems to be Judith Curry at Climate Etc. Her comments have been taken up by numerous climate ostrich blogs. This post discusses Curry's comments, which were also taken up by Lucia, includes some erroneous additions by Anthony Watts, and covers one additional point raised by Lucia:
  1. Interpolation
  2. UAH satellite analyses
  3. Reanalyses
  4. No contribution
  5. Model validation
  6. A hiatus in the satellite datasets (Black Board)

Wednesday, 13 November 2013

Temperature trend over last 15 years is twice as large as previously thought

UPDATED: Now with my response to Judith Curry's comments and an interesting comment by Peter Thorne.

Yesterday a study appeared in the Quarterly Journal of the Royal Meteorological Society that suggests that the temperature trend over the last 15 years is about twice as large as previously thought. This study [UPDATE: Now Open Access] is by Kevin Cowtan and Robert G. Way and is called: "Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends".

The reason for the bias is that the HadCRUT4 dataset has a gap in the Arctic, and the study shows that it is likely that there was strong warming in this missing-data region (h/t Stefan Rahmstorf at Klimalounge, in German; the comments and answers by Rahmstorf there are also interesting and refreshingly civilized; it might be worth reading the "translation"). In the HadCRUT4 dataset the temperature trend over the period 1997-2012 is only 0.05°C per decade. After filling the gap in the Arctic, the trend is 0.12°C per decade.
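For readers who want to reproduce such numbers, trends like these are typically an ordinary least squares fit to the anomaly series, expressed per decade. The sketch below assumes annual anomalies and uses made-up values, not HadCRUT4 or Cowtan and Way data.

```python
# Sketch of a per-decade trend estimate via ordinary least squares.
# The anomalies are made-up placeholders, not HadCRUT4 values.
import numpy as np

years = np.arange(1997, 2013)   # 1997-2012 inclusive
rng = np.random.default_rng(2)
anomalies = 0.005 * (years - years[0]) + rng.normal(0.0, 0.08, size=years.size)

slope_per_year = np.polyfit(years, anomalies, 1)[0]
print("trend: %.3f degC per decade" % (10 * slope_per_year))
```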

The study starts with the observation that over the period 1997 to 2012 "GISTEMP, UAH and NCEP/NCAR [which have (nearly) complete global coverage and no large gap at the Arctic, VV] all show faster warming in the Arctic than over the planet as a whole, and GISTEMP and NCEP/NCAR also show faster warming in the Antarctic. Both of these regions are largely missing in the HadCRUT4 data. If the other datasets are right, this should lead to a cool bias due to coverage in the HadCRUT4 temperature series."

Datasets

All datasets have their own strengths and weaknesses. The nice thing about this paper is how it combines the datasets to use their strengths and mitigate their weaknesses.

Surface data. Direct (in-situ) measurements of temperature (used in HadCRUT and GISTEMP) are very important. Because station data lend themselves well to homogenization, they are temporally consistent and their trends are thus the most reliable. Problems are that most observations were not performed with climate change in mind, and the spatial gaps that are so important for this study.

Satellite data. Satellites perform indirect measurements of the temperature (UAH and RSS). Their main strengths are global coverage and spatial detail. A problem for satellite datasets is that the computation of physical parameters (retrievals) needs simplifying assumptions and that other (partially unknown) factors can influence the result. The temperature retrieval needs information on the surface, which is especially important in the Arctic. The other satellite temperature dataset, by RSS, therefore omits the Arctic; UAH is also expected to have biases in the Arctic, but does provide data.

Tuesday, 12 November 2013

Has COST HOME (2007-2011) passed without true impact on practical homogenisation?

Guest post by Peter Domonkos, one of the leading figures in the homogenization of climate data and developer of the homogenization method ACMANT, which is probably the most accurate method currently available.

A recent investigation done at the Centre of Climate Change of the University Rovira i Virgili (Spain) showed that the share of practical use of HOME-recommended monthly homogenisation methods is very low: only 8.4% in the studies published or accepted for publication in six leading climate journals in the first half of 2013.

The six journals examined are the Bulletin of the American Meteorological Society, Climate of the Past, Climatic Change, International Journal of Climatology, Journal of Climate and Theoretical and Applied Climatology. 74 studies were found in which one or more statistical homogenisation methods were applied to monthly temperature or precipitation datasets; the total number of homogenisation exercises in them is 119. A large variety of homogenisation methods was applied: 34 different methods were used, even without distinguishing among different methods labelled with the same name (as is the case with the procedures SNHT and RHTest). HOME-recommended methods were applied in only 10 cases (8.4%), and the use of objective or semi-objective multiple break methods was even rarer, only 3.4%.

In the international blind test experiments of HOME, the participating multiple break methods produced the highest efficiency in terms of the residual RMSE and trend bias of homogenised time series. (Note that only methods that detect and correct directly the structures of multiple breaks are considered multiple break methods.) The success of multiple break methods was predictable, since their mathematical structures are more appropriate for treating the multiple break problem than the hierarchic organisation of single break detection and correction.

Highlights EUMETNET Data Management Workshop 2013

The Data Management Workshop (DMW) had four main themes: data rescue, homogenization, quality control and data products. Homogenization was clearly the most important topic, with about half of the presentations, and was also the main reason I was there. Please find below the highlights I expect to be of most interest. In retrospect this post has quite a focus on organizational matters, mainly because these were the most new to me.

The DMW differs from the Budapest homogenization workshops in that it focused more on best practices at weather services, while Budapest focuses more on the science and the development of homogenization methods. One idea from the workshop is that it may be worthwhile to have a counterpart to the homogenization workshop in the field of quality control.

BREAKING NEWS: Tamas Szentimrey announced that the 8th Homogenization seminar will be organized together with the 3rd interpolation seminar in Budapest on 12-16 May 2014.

UPDATE: The slides of many presentations can now be downloaded.

Monday, 4 November 2013

Weather variability and Data Management Workshop 2013 in San Lorenzo de El Escorial, Spain

This week I will be at the Data Management Workshop (DMW) in San Lorenzo de El Escorial: three fun-filled days about data rescue, homogenization, quality control and data products (databases), while the weather outside is nice.

It is organized by EUMETNET, a network of 30 European National Meteorological Services. Thus I will be one of the few participants from a university, as is typical for homogenization: it is a topic of high interest to the weather services.

Most European experts will be there. The last meeting I was at was great. The program looks good. I am looking forward to it.

My contribution to the workshop will be to present a joint review of what we know about inhomogeneities in daily data. Much of this information stems from parallel measurements, in other words from simultaneous measurements with a modern and a historical set-up. We need to know about non-climatic changes in extremes and weather variability, to be able to assess the climatic changes.

In the coming time, I hope to blog about some of the topics of this review. It shows that the homogenization of daily data is a real challenge and that we need much more data from parallel measurements to study the non-climatic changes in the probability distribution of daily datasets. Please find our abstract below.

The slides of the presentation can be downloaded here.

Friday, 1 November 2013

Atmospheric warming hiatus: The peculiar debate about the 2% of the 2%

Dana Nuccitelli recently wrote an article for the Guardian whose introduction read: "The slowed warming is limited to surface temperatures, two percent of overall global warming, and is only temporary". As I have been arguing for some time how minute the recent deviation from the predicted warming is, my first response was: good that someone finally computed how small.

However, Dana Nuccitelli followed the line of argumentation of Wotts and argued that the atmosphere is just a small part of the climate system and that you do see the warming continue in the rest, mainly in the oceans. He thus rightly sees focusing on the surface temperatures only as a form of cherry picking. More on that below.

The atmospheric warming hiatus is a minor deviation

There is another two percent. Just look at the graph below of the global mean temperature since increases in greenhouse gasses became important.


The anomalies of the global mean temperature of NOAA's Global Historical Climatology Network dataset version 3 (GHCNv3). The anomalies are computed by subtracting the mean temperature over 1880 to 1899.

The warming we have seen since the beginning of 1900 adds up to about 31 degree-years (the sum of the anomalies over all years). You can easily check that this is about right, because the triangle below the temperature curve has a horizontal base of about 100 years and a vertical size (temperature increase) of about 0.8°C: 0.5*100*0.8 = 40 degree-years; the large green triangle in the figure below. For the modest aims of this post, 31 and 40 degree-years are both fine values.

The hiatus, the temperature deviation the climate ostriches are going crazy about, has lasted at best 15 years and has a size of about 0.1°C. Thus, using the same triangular approach, we can compute that this is 0.5*15*0.1 = 0.75 degree-years; this is the small blue triangle in the figure below.

The atmospheric warming hiatus is thus only 100% * 0.75 / 31 = 2.4% of the total warming since 1900. This is naturally just a coarse estimate of the order of magnitude of the effect; almost any value below 5% would be achievable with other reasonable assumptions. I admit having tried a few combinations before getting the nicely matching value for the title.
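For the record, here is the back-of-the-envelope arithmetic as a small script; the 31 degree-years come from summing the actual GHCNv3 anomalies and are not recomputed here.

```python
# Triangle approximations from the text; only the arithmetic, no data.
total_warming = 0.5 * 100 * 0.8   # ~100 years, ~0.8 degC rise    -> 40 degree-years
hiatus = 0.5 * 15 * 0.1           # ~15 years, ~0.1 degC deficit  -> 0.75 degree-years

print("hiatus vs summed anomalies (31):", round(100 * hiatus / 31, 1), "%")             # ~2.4 %
print("hiatus vs triangle estimate (40):", round(100 * hiatus / total_warming, 1), "%")  # ~1.9 %
```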