
Saturday, 15 December 2012

No trend in global water vapor, another WUWT fail

Forrest M. Mims III
Forrest Mims is an interesting character. To quote from the introduction to his Science article on amateur science: "Forrest M. Mims III is a writer, teacher, and amateur scientist. He received a Rolex Award for developing a miniature instrument that measures the ozone layer and has contributed projects to “The Amateur Scientist” column in Scientific American. His scientific publications have appeared in Nature and other scholarly journals."

Anthony Watts just published a guest post by Forrest M. Mims III with the title: "Another IPCC AR5 reviewer speaks out: no trend in global water vapor". I have no special expertise in this area, but I am privileged to be able to read the article that is being discussed. This is sufficient to see that the article and the post about it are two different worlds. Update: An earlier draft is available (thanks, michael sweet).

First, note that being an "expert reviewer" does not say much. There are over a thousand reviewers; even Anthony Watts himself is an IPCC "expert reviewer". On the other hand, Mims may be an amateur, but he has done valued scientific work on UV measurements.

The trend in global water vapor

The post discusses a paper by Vonder Haar et al. (2012) on the NASA Water Vapor Project (NVAP) dataset. The main piece of information missing from the post is that this dataset is only 22 years long. Almost no climatological measurement will show a statistically significant trend over such a short period, but the story is even weirder.

Just as in the misleading post on homogenization of climate data earlier this year, Anthony Watts again proves to have a keen eye for finding the best misinformation.

Mims added a list with all the comments of his review. In this list, Watts found this comment:

This paper concludes,

“Therefore, at this time, we can neither prove nor disprove a robust trend in the global water vapor data.”

Non-specialist readers must be made aware of this finding and that it is at odds with some earlier papers.

The complete citation from the Geophysical Research Letters article is:

"The results of Figures 1 and 4 have not been subjected to detailed global or regional trend analyses, which will be a topic for a forthcoming paper. Such analyses must account for the changes in satellite sampling discussed in the auxiliary material. Therefore, at this time, we can neither prove nor disprove a robust trend in the global water vapor data."

In other words, they cannot say anything about the trend because they have not yet tried to compute it and estimate its uncertainty. Estimating the error in the trend, in particular, will be very difficult, as different satellites cover different periods of the record, which invariably creates jumps in the dataset that should not be mistaken for true climate variability or trends.
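As a toy illustration (my own, not an analysis of the NVAP data): a record that switches satellites halfway can show an apparent trend purely because of the jump, even when there is no climatic trend at all.

```python
# Toy example (not the NVAP data): a satellite change introduces a jump halfway
# through a 22-year record that has no underlying trend. A naive least-squares
# fit then produces an apparent trend that is purely an artefact of the jump.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1988, 2010)                 # 22 years, as in the NVAP record
noise = rng.normal(0.0, 0.5, years.size)      # made-up interannual variability
jump = np.where(years >= 1999, 0.7, 0.0)      # made-up inhomogeneity (new satellite)
series = noise + jump                         # note: no true climatic trend

slope = np.polyfit(years, series, 1)[0]
print(f"Apparent trend: {10 * slope:+.2f} units per decade (the true trend is zero)")
```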

The paper is thus not at odds with earlier papers. These earlier papers studied longer periods and probably datasets that were more homogeneous, and consequently did find a statistically significant trend. There is thus no contradiction.

Sunday, 9 December 2012

Changing the political dynamics of greenhouse gas reductions


Photo by Caveman Chuck Coker, Creative Commons by-nd licence


Another climate conference failed miserably. Maybe we need a completely different system, a system in which forerunners are rewarded and not punished.

A stable, predictable climate is a common good. There are great benefits to using energy, while the climate costs are spread almost evenly over everyone. No single industry or country contributes much to the problem, but some industries and countries do benefit strongly and have a large incentive to halt the negotiations and to spread doubt. This makes greenhouse gas mitigation arguably the most difficult tragedy of the commons.

It is possible to solve such tragedies: the Montreal protocol to curb emissions of chlorofluorocarbons (CFCs) and protect the ozone layer seems to work well. The ozone layer is now at its thinnest, but scientists expect that it will start to thicken again soon and return to almost normal levels within several decades. However, in the case of the Montreal protocol only the producers of fridges, air conditioners and spray cans were affected. Greenhouse gases are emitted by the energy, agricultural and building sectors. These are powerful parties and this makes a global treaty difficult. Maybe it is better to solve this tragedy of the commons by allowing countries and regions that want to reduce their CO2 emissions to protect themselves against unfair competition.

Friday, 23 November 2012

Traditional milk in Germany: raw and hay milk

In the ancestral health community, raw milk and milk from grass-fed cows are highly praised. See Chris Kresser for an excellent overview of the benefits and the small risks of raw milk. Mark Sisson gives a nice overview of the healthier fat composition of pastured butter. It took me some time to understand the situation in Germany, until I learned the two magic words: Vorzugsmilch and Heumilch.

Vorzugsmilch

Raw milk is not pasteurised and not homogenized. Pasteurisation is the quick heating and cooling of milk to reduce its bacterial load. Milk is white because of all the small fat droplets suspended in the water. In homogenization, milk is pressed through a valve at very high pressure to make these droplets smaller, which prolongs the time until they combine to form cream at the top of the milk.

In Germany, retail of normal raw milk is forbidden, but a farmer is allowed to sell his raw milk directly to consumers. Raw milk sold in shops is called Vorzugsmilch; let's call it merit milk in English, I like alliteration. Cows and farms producing merit milk are inspected regularly and the milk has to reach the consumer within 96 hours. According to Andrea Fink-Keßler (agricultural scientist), the diet of the cows producing Vorzugsmilch is similar to that for hay milk; see below.

Tuesday, 30 October 2012

Radiative transfer and cloud structure

Last month our paper on small-scale cloud structure and radiative transfer using a state-of-the-art 3-dimensional Monte Carlo radiative transfer model was published. It was written together with two radiative transfer specialists: Sebastian Gimeno García and Thomas Trautmann. The paper introduces the new version of this model, called MoCaRT, but the interesting part for this blog on variability is the results on the influence of small-scale variability on radiative transfer. Previously, I have written about cloud structure, whether it is fractal, and the processes involved in creating such complicated and beautiful structures. This post will explain why this structure is important for radiative transfer and thus for remote sensing (for example by weather satellites) and the radiative balance of the earth (which determines the surface temperature). I will try to do so also for people not familiar with radiative transfer.

As an aside, the word radiation in this context should not be confused with radioactive radiation. (It is rumored that the Earth Radiation satellite Mission had to be renamed to the EarthCARE to be funded, as the word radiation sounds negative due to its association with radioactivity.)

Radiative transfer

In theory, radiative transfer is well understood. The radiative transfer equation has long been known and describes how electromagnetic radiation (the intensity) propagates through a medium and is scattered and emitted by it. Climatologically important are solar radiation from the sun and infrared (heat) radiation from the earth's surface and the atmosphere. For remote sensing of the atmosphere, radio waves are also important.
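For readers who want to see it, a standard textbook form of the monochromatic radiative transfer equation looks like this (the notation is mine and may differ from the paper's):

```latex
% Monochromatic radiative transfer along a path s: the intensity I_\nu is
% reduced by extinction (absorption plus scattering out of the beam) and
% increased by thermal emission and by radiation scattered into the beam
% from all other directions \Omega'.
\frac{\mathrm{d}I_\nu(\Omega)}{\mathrm{d}s} =
  -\beta_{e,\nu}\, I_\nu(\Omega)
  + \beta_{a,\nu}\, B_\nu(T)
  + \frac{\beta_{s,\nu}}{4\pi}
    \int_{4\pi} P_\nu(\Omega',\Omega)\, I_\nu(\Omega')\, \mathrm{d}\Omega'
```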

In practice, radiative transfer through the atmosphere is difficult to compute. This starts with the fact that the equation is valid for one frequency of the electromagnetic wave only, while the optical properties of the atmosphere can depend strongly on the frequency. To compute the radiative balance of the earth, a large number of frequencies in the solar and infrared regimes thus needs to be computed (such models are called line-by-line models). More efficient are computations in broader frequency bands, but then approximations need to be made.
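A small numerical sketch (with a made-up absorption spectrum, not a real one) of why band approximations need care: applying Beer's law to a band-averaged optical depth is not the same as averaging the monochromatic transmittances.

```python
# Why line-by-line matters: with strongly frequency-dependent optical depths,
# the mean of the monochromatic transmittances differs from the transmittance
# computed from the band-mean optical depth (Jensen's inequality). The optical
# depths below are random numbers, not a real absorption spectrum.
import numpy as np

rng = np.random.default_rng(1)
tau = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # spread of line optical depths

line_by_line = np.exp(-tau).mean()   # average the monochromatic transmittances
band_mean = np.exp(-tau.mean())      # Beer's law applied to the band-mean optical depth

print(f"line-by-line transmittance: {line_by_line:.3f}")
print(f"band-mean approximation:    {band_mean:.3f}")
```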

Thursday, 4 October 2012

Beta version of a new global temperature database released

Today, a first version of the global temperature dataset of the International Surface Temperature Initiative (ISTI) with 39 thousand stations has been released. The aim of the initiative is to provide an open and transparent temperature dataset for climate research.

The database is designed as a climate "sceptic" wet dream: the entire processing of the data will be performed with automatic open software. This includes every processing step from conversion to standard units, to merging stations to longer series, to quality control, homogenisation, gridding and computation of regional and global means. There will thus be no opportunity for evil climate scientists to fudge the data and create an artificially strong temperature trend.
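To give a feel for what "automatic and open" means in practice, here is a minimal sketch of two such processing steps with made-up numbers (my own illustration, not the actual ISTI code): converting units and computing a crude area-weighted mean.

```python
# Minimal sketch of an automatic, reproducible processing chain (not the ISTI
# software): convert raw Fahrenheit readings to Celsius and compute a crude
# area-weighted mean, weighting each station by the cosine of its latitude.
import numpy as np

def fahrenheit_to_celsius(t_f):
    return (np.asarray(t_f, dtype=float) - 32.0) * 5.0 / 9.0

def area_weighted_mean(temps_c, lats_deg):
    temps_c = np.asarray(temps_c, dtype=float)
    weights = np.cos(np.deg2rad(np.asarray(lats_deg, dtype=float)))
    good = ~np.isnan(temps_c)          # simple quality control: skip missing values
    return np.sum(temps_c[good] * weights[good]) / np.sum(weights[good])

# Made-up readings from three stations at different latitudes:
readings_f = [59.0, 41.0, float("nan")]
latitudes = [10.0, 52.0, 70.0]
print(f"{area_weighted_mean(fahrenheit_to_celsius(readings_f), latitudes):.2f} degC")
```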

It is planned that in many cases you will be able to go back to the digital images of the books or cards on which the observer noted down the temperature measurements. This will not be possible for all data. Many records have been keyed directly in the past, without making digital images. Sometimes the original data is lost, for instance in the case of Austria, where the original daily observations were lost in the Second World War and only the monthly means are still available from annual reports.

The ISTI also has a group devoted to data rescue to encourage people to go into the archives, image and key in the observations, and upload this information to the database.


Tuesday, 18 September 2012

Future research in homogenisation of climate data – EMS2012 in Poland

By Enric Aguilar and Victor Venema

The future of research and training in homogenisation of climate data was discussed at the European Meteorological Society meeting in Lodz by 21 experts. Homogenisation of monthly temperature data has improved much in recent years, as seen in the results of the COST-HOME project. On the other hand, the homogenization of daily and subdaily data is still in its infancy, and this data is used frequently to analyse changes in extreme weather. It is expected that inhomogeneities in the tails of the distribution are stronger than in the means. To make such analyses of extremes more reliable, more work on daily homogenisation is urgently needed. This does not mean that homogenisation at the monthly scale is already optimal; much can still be improved.

Parallel measurements

Parallel measurements with multiple measurement set-ups were seen as an important way to study the nature of inhomogeneities in daily and sub-daily data. It would be good to have a large international database with such measurements. The regional climate centres (RCC) could host such a dataset. Numerous groups are working on this topic, but more collaboration is needed. Also more experiments would be valuable.

When gathering parallel measurements the metadata is very important. INSPIRE (an EU Directive) has a standard format for metadata, which could be used.

It may be difficult to produce an open database with parallel measurements, as European national meteorological and hydrological services are often forced to sell their data for profit. (Ironically, in the Land of the Free (markets), climate data is available freely; the public already paid for it with their tax money, after all.) Political pressure to free climate data is needed. Finland is setting a good example and will free its data in 2013.

Friday, 17 August 2012

The paleo culture

A volunteer of the Ancestral Health Symposium 2012 has criticized the culture of the paleo movement. Richard Nikoley apparently felt attacked and, as a prolific blogger, immediately wrote a hot-tempered post in defence. (In the meantime, the blog with the criticism has been deleted due to the personal attacks and threats.) Richard's defensive post focused on the few lines that went over the top.

The demographic at this event was almost all white, child bearing age, healthy, wealthy, highly educated, libertarian, racist, sexist and bigoted.
I presume these lines were provoked more by a life of discrimination than by a single symposium.

It is normal to be defensive while receiving criticism. The day after, one often notices that honest feedback is actually very valuable, that it gives rare and precious insight into how one is seen from the outside. The valuable points of the criticism were (i) that she did not feel welcome, as someone who is not wealthy and as an older woman, and (ii) that there were many crackpots at the symposium.

Demographics

I must admit that I also sometimes find the paleo culture rather off-putting. The reason I stay is that many good ideas from the paleo community have helped improve my health enormously. The main bloggers are friendly and many focus just on science, which is neutral, but you are often just one click away from the National Rifle Association. The community has a strong focus on the health effects of nature, but I never saw a link to a nature conservation group. Paleo is inspired by the lifestyle of hunter-gatherers, but I had to hear about Survival International, an organisation that helps indigenous peoples protect themselves, on the German radio. There is lots of talk about expensive food, supplements and gear, but not about the anti-hierarchical strategies used by hunter-gatherer groups to keep their band egalitarian and strong. Much of the advice is focused on males, and it may well be, for example, that the standard routines for intermittent fasting are too heavy for women.

Wednesday, 8 August 2012

Statistical homogenisation for dummies

The self-proclaimed climate sceptics keep on spreading fairy tales that homogenisation is smoothing climate data and leads to adjustments of good stations to make them into bad stations. Quite some controversy for such an innocent method to reduce non-climatic influences from the climate record.

In this post, I will explain how homogenisation really works using a simple example with only three stations. Figure 1 shows these three nearby stations. Statistical homogenisation exploits the fact that these three time series are very similar (are highly correlated) as they measure almost the same regional climate. Changes that happen at only one of the stations are assumed to be non-climatic. The aim of homogenisation is to remove such non-climatic changes in the data.

Figure 1. The annual mean temperature data of three hypothetical stations in one climate region.

(In case colleagues of mine are reading this and are wondering about my craftsmanship: I do know how to operate scientific plotting software, but some "sceptics" make fun of people who have no experience with Excel. I just wanted to show off that I can use a spreadsheet.)

For the example, I have added a break inhomogeneity in the middle with a typical size of 0.8 °C (1.5 °F) to the data for station A; see Figure 2.
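For the curious, here is roughly how such a synthetic example can be built and how the break shows up; this is my own sketch, not the spreadsheet used for the figures.

```python
# Sketch of the three-station example: three series share the same regional
# climate signal plus small local noise; station A gets a 0.8 degC break in
# the middle. The break appears as a step in the difference series A-B and
# A-C, but not in B-C, which is how statistical homogenisation pins it on A.
import numpy as np

rng = np.random.default_rng(0)
n_years = 50
regional = rng.normal(10.0, 0.5, n_years)                  # shared regional climate
station = {name: regional + rng.normal(0.0, 0.2, n_years)  # local weather noise
           for name in "ABC"}
station["A"][n_years // 2:] += 0.8                         # non-climatic break in A only

for a, b in [("A", "B"), ("A", "C"), ("B", "C")]:
    diff = station[a] - station[b]
    step = diff[n_years // 2:].mean() - diff[: n_years // 2].mean()
    print(f"{a}-{b}: step in the difference series = {step:+.2f} degC")
```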

Thursday, 2 August 2012

Do you want to help with data discovery?

Reposted from the blog of the International Surface Temperature Initiative

As was alluded to in an earlier posting here, NOAA's National Climatic Data Center has recently embarked on an effort to discover and rescue a plethora of international holdings held in hard copy in its basement and make them usable by the international science community. The resulting images of the records from the first chunk of these efforts have just been made available online. Sadly, it is not realistic at the present time to key these data, so they remain stuck in a half-way house: available, tantalizingly so, but not yet truly usable.

So, if you want to undertake some climate sleuthing, now is your moment to shine ...! The data have all been placed at ftp://ftp.ncdc.noaa.gov/pub/data/globaldatabank/daily/stage0/FDL/ . These consist of images at both daily and monthly resolution - don't be fooled by the "daily" in the ftp site address. If you find a monthly resolution data source, you could digitize years' worth of records in an evening.

Whether you wish to start with Angola ...


A short introduction to the time of observation bias and its correction




Figure 1. A thermo-hygrograph, which measures and records temperature and humidity.
Due to recent events, the time of observation bias in climatological temperature measurements has become a hot topic. What is it, why is it important, and why and how should we correct for it? A short introduction.

Mean temperature

The mean daily temperature can be determined in multiple ways. Nowadays, it is easy to measure the temperature frequently, store it in a digital memory and compute the daily average. Something similar was also possible in the past using a thermograph; see Figure 1. However, such an instrument was expensive and fragile.

Thus, other methods were normally used for standard measurements: computing a weighted average over observations at 3 or 4 fixed times, or using minimum and maximum thermometers. Averaging the minimum and maximum temperature is another good approximation for many climate regions; special minimum and maximum thermometers were invented in 1782 for this task.
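A small sketch of the difference between these definitions, using an idealised daily cycle (the fixed-hour weights below are only illustrative, not any particular national standard):

```python
# Compare three estimates of the daily mean temperature for one idealised,
# made-up daily cycle: the true 24-hour mean, a weighted average of readings
# at three fixed hours (illustrative weights), and (Tmin + Tmax) / 2.
import numpy as np

hours = np.arange(24)
temps = 12.0 + 6.0 * np.sin(np.pi * (hours - 8) / 14.0)      # idealised daily cycle

mean_24h = temps.mean()
fixed_hours = (temps[7] + temps[14] + 2.0 * temps[21]) / 4.0  # illustrative weighting
min_max = (temps.min() + temps.max()) / 2.0

print(f"24-hour mean: {mean_24h:.2f}  fixed hours: {fixed_hours:.2f}  (Tmin+Tmax)/2: {min_max:.2f}")
```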

Sunday, 29 July 2012

Blog review of the Watts et al. (2012) manuscript on surface temperature trends

[UPDATE: Skeptical Science has written an extensive review of the Watts et al. manuscript: "As it currently stands, the issues we discuss below appear to entirely compromise the conclusions of the paper." They mention all the important issues, except maybe for the selection bias mentioned below. Thus my fast preliminary review below can now be considered outdated. Have fun.]

Anthony Watts put his blog on hold for two days because he had to work on an urgent project.
Something’s happened. From now until Sunday July 29th, around Noon PST, WUWT will be suspending publishing. At that time, there will be a major announcement that I’m sure will attract a broad global interest due to its controversial and unprecedented nature.
What has happened? Anthony Watts, President of IntelliWeather, has co-written a manuscript and a press release! As Mr. Watts is a fan of review by bloggers, here is my first reaction after looking through the figures and the abstract.

Tuesday, 17 July 2012

Investigation of methods for hydroclimatic data homogenization

The self-proclaimed climate sceptics have found an interesting presentation held at the General Assembly of the European Geosciences Union.

In the words of Anthony Watts, the "sceptic" with one of the most read blogs, this abstract is a "new peer reviewed paper recently presented at the European Geosciences Union meeting." A bit closer to the truth is that this is a conference contribution by Steirou and Koutsoyiannis, based on a graduation thesis (in Greek), which was submitted to the EGU session "Climate, Hydrology and Water Infrastructure". An EGU abstract is typically half a page; it is not possible to do a real review of a scientific study based on such a short text. The purpose of an EGU abstract is in practice to decide who gets a talk and who gets a poster, nothing more; everyone is welcome to come to EGU.

Monday, 21 May 2012

What is a change in extreme weather?


The reasons for changes in extremes can be divided into two categories: changes in the mean (see panel a of the figure below) and other changes in the distribution (simplified as a change in the variance in panel b). Mixtures are of course also possible (panel c).

If you are interested in the impacts of climate change, you do not care why the extremes are changing. If the dikes need to be made stronger or the sewage system needs larger sewers and larger reservoirs, all you need to know is how likely it is that a certain threshold is reached. Much research into changes in extreme weather is climate change impact research and thus does not care much about this distinction.

If you are interested in understanding the climate system, it does matter why the extremes are changing. Changes in the mean state of the climate are relatively well studied. Interesting questions are, for instance, whether a change in the mean changes the distribution via feedback processes or whether the reduced temperature contrasts between the poles and the equator or between day and night cause changes in the distribution.

If you are interested in understanding the climate system also the spatial and temporal averaging scales matter. If rain fronts move slower, they may locally produce more extreme daily precipitation sums, while on a global scale or instantaneously there is no change in the distribution of precipitation.

I hope scientists will distinguish between these two different ways in which extremes may change in future publications and, for example, not only compute the increase in the number of tropical days, but also how many of these days are due to the change in the mean and how many are due to changes in the distribution. I think this would contribute to a better understanding of the climate system.
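A toy calculation (numbers made up) of why this matters for a fixed threshold such as a "tropical day": a small shift of the mean and a modest widening of the distribution can change the exceedance probability by similar amounts, yet for very different reasons.

```python
# Toy illustration of the two categories: probability of exceeding a fixed
# threshold (a "tropical day" of 30 degC) for a baseline normal distribution,
# for a shifted mean, and for an increased spread. All numbers are made up.
from scipy.stats import norm

threshold = 30.0
cases = {
    "baseline":          norm(loc=25.0, scale=3.0),
    "mean + 1 degC":     norm(loc=26.0, scale=3.0),
    "20% larger spread": norm(loc=25.0, scale=3.6),
}
for name, dist in cases.items():
    print(f"{name:>18}: P(T > {threshold:.0f} degC) = {dist.sf(threshold):.3f}")
```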


Figure is taken from Real Climate, which took it from IPCC (2001).

Saturday, 19 May 2012

Paleo and fruitarian lifestyles have a lot in common

My new fitness trainer eats a lot of fruit. And she looks darn healthy. Now, I know you should not take weight-training advice from a professional bodybuilder, or you risk serious overtraining, but still I was intrigued and did some research. The vegan and paleo communities are often not on friendly terms. Thus what struck me most when researching fruitarian blogs was how similar many of the ideas were.

A very strict fruitarian only eats fruits in the common meaning of the word: sweet and juicy fruits from trees or bushes. Others also include vegetable fruits such as avocados, tomatoes and cucumbers; still others also include nuts; and many regularly eat salad. To get sufficient calories from fruits, a fruitarian has to eat several kilograms of fruit. Some people calling themselves fruitarians actually get most of their calories from nuts and avocados. In this post, fruitarians are people who get most of their calories from simple carbohydrates, that is, from sweet fruits.

The paleolithic lifestyle is inspired by the way people lived before agriculture. As information from the Paleolithic Age is scarce, in practice this often means that existing hunter-gatherers and their diets and lifestyles are studied. Such bands often trade with nearby agriculturalists and thus no longer live a true stone-age life. Still, as long as they are free from the diseases of civilisation, they provide good role models in my view. Similarly, many paleos also look at other existing cultures that are in good health. In this respect the paleo community is close to the Weston A. Price Foundation, which seeks guidance in how people lived a few generations ago. The paleo diet is best defined by what is not eaten: processed foods, grains, sugar and refined seed oils.

Friday, 17 February 2012

HUME: Homogenisation, Uncertainty Measures and Extreme weather

Proposal for future research in homogenisation

To keep this post short, a background in homogenisation is assumed and not every argument is fully rigorous.

Aim

This document aims to start a discussion on the research priorities in homogenisation of historical climate data from surface networks. It will argue that, with the increased scientific work on changes in extreme weather, the homogenisation community should work more on daily data and especially on quantifying the uncertainties remaining in homogenized data. Comments on these ideas are welcome, as are further thoughts. Hopefully we can reach a consensus on research priorities for the coming years. A common voice will strengthen our position with research funding agencies.

State-of-the-art

From homogenisation of monthly and yearly data, we have learned that the size of breaks is typically of the order of the climatic changes observed in the 20th century and that the period between two detected breaks is around 15 to 20 years. These inhomogeneities are thus a significant source of error and need to be removed. The benchmark of the COST Action HOME has shown that these breaks can be removed reliably and that homogenisation improves the usefulness of temperature and precipitation data for studying decadal variability and secular trends. Not all problems are optimally solved yet; for instance, the solutions for the inhomogeneous reference problem are still quite ad hoc. The HOME benchmark found mixed results for precipitation, and the handling of missing data can probably be improved. Furthermore, homogenisation of other climate elements and of data from different, for example dry, regions should be studied. However, in general, annual and monthly homogenisation can be seen as a mature field.

The homogenisation of daily data is still in its infancy. Daily datasets are essential for studying extremes of weather and climate. Here the focus is not on the mean values, but on what happens in the tails of the distributions. Looking at the physical causes of inhomogeneities, one would expect that many of them especially affect the tails of the distributions. Likewise, the IPCC AR4 report warns that changes in extremes are often more sensitive to inhomogeneous climate monitoring practices than changes in the mean.

Monday, 16 January 2012

Homogenisation of monthly and annual data from surface stations

To study climate change and variability, long instrumental climate records are essential, but they are best not used directly. These datasets are essential since they are, amongst other things, the basis for assessing century-scale trends and for studying the natural (long-term) variability of climate. The value of these datasets, however, strongly depends on the homogeneity of the underlying time series. A homogeneous climate record is one where variations are caused only by variations in weather and climate. In our recent article we wrote: "Long instrumental records are rarely if ever homogeneous". A non-scientist would simply write: homogeneous long instrumental records do not exist. In practice there are always inhomogeneities due to relocations, changes in the surroundings, instrumentation, shelters, etc. If a climatologist only writes "the data is thought to be of high quality", then removes half of the data and does not mention the homogenisation method used, it is wise to assume that the data is not homogeneous.

Results from the homogenisation of instrumental western climate records indicate that detected inhomogeneities in mean temperature series occur roughly once every 15 to 20 years. It should be kept in mind that most measurements have not been made specifically for climatic purposes, but rather to meet the needs of weather forecasting, agriculture and hydrology (Williams et al., 2012). Moreover, the typical size of the breaks is often of the same order as the climatic change signal during the 20th century (Auer et al., 2007; Menne et al., 2009; Brunetti et al., 2006; Caussinus and Mestre, 2004; Della-Marta et al., 2004). Inhomogeneities are thus a significant source of uncertainty for the estimation of secular trends and decadal-scale variability.

If all inhomogeneities were purely random perturbations of the climate records, their collective effect on the mean global climate signal would be negligible. However, certain changes are typical for certain periods and occurred in many stations; these are the most important causes, discussed below, as they can collectively lead to artificial biases in climate trends across large regions (Menne et al., 2010; Brunetti et al., 2006; Begert et al., 2005).

In this post I will introduce a number of typical causes for inhomogeneities and methods to remove them from the data.

Tuesday, 10 January 2012

New article: Benchmarking homogenisation algorithms for monthly data

The main paper of the COST Action HOME on homogenisation of climate data has been published today in Climate of the Past. This post briefly describes the problem of inhomogeneities in climate data and how such data problems are corrected by homogenisation. The main part explains the topic of the paper, a new blind validation study of homogenisation algorithms for monthly temperature and precipitation data. All of the most used and best algorithms participated.

Inhomogeneities

To study climatic variability the original observations are indispensable, but not directly usable. Next to real climate signals they may also contain non-climatic changes. Corrections to the data are needed to remove these non-climatic influences; this is called homogenisation. The best known non-climatic change is the urban heat island effect. The temperature in cities can be warmer than in the surrounding countryside, especially at night. Thus, as cities grow, one may expect that temperatures measured in cities become higher. On the other hand, many stations have been relocated from cities to nearby, typically cooler, airports.

Other non-climatic changes can be caused by changes in measurement methods. Meteorological instruments are typically installed in a screen to protect them from direct sun and wetting. In the 19th century it was common to use a metal screen on a north-facing wall. However, the building may warm the screen, leading to higher temperature measurements. When this problem was realised, the so-called Stevenson screen was introduced, typically installed in gardens, away from buildings. This is still the most typical weather screen, with its characteristic double-louvre door and walls. Nowadays automatic weather stations, which reduce labor costs, are becoming more common; they protect the thermometer with a number of white plastic cones. This necessitated changes from manually read liquid-in-glass thermometers to automated electrical resistance thermometers, which reduces the recorded temperature values.



One way to study the influence of changes in measurement techniques is by making simultaneous measurements with historical and current instruments, procedures or screens. This picture shows three meteorological shelters next to each other in Murcia (Spain). The rightmost shelter is a replica of the Montsouri screen, in use in Spain and many European countries in the late 19th century and early 20th century. In the middle, Stevenson screen equipped with automatic sensors. Leftmost, Stevenson screen equipped with conventional meteorological instruments.
Picture: Project SCREEN, Center for Climate Change, Universitat Rovira i Virgili, Spain.


A further example of a change in measurement methods is that the precipitation amounts observed in the early instrumental period (roughly before 1900) are biased, being about 10% lower than nowadays, because the measurements were often made on a roof. At the time, instruments were installed on rooftops to ensure that the instrument was never shielded from the rain, but it was later found that, due to the turbulent flow of the wind over roofs, some rain droplets and especially snowflakes did not fall into the opening. Consequently, measurements are nowadays performed closer to the ground.

Sunday, 8 January 2012

What distinguishes a benchmark?

Benchmarking is a community effort

Science has many terms for studying the validity or performance of scientific methods: testing, validation, intercomparison, verification, evaluation, and benchmarking. Every term has a different, sometimes subtly different, meaning. Initially I had wanted to compare all these terms with each other, but that would have become a very long post, especially as the meaning for every term is different in business, engineering, computation and science. Therefore, this post will only propose a definition for benchmarking in science and what distinguishes it from other approaches, casually called other validation studies from now on.

In my view benchmarking has three distinguishing features.
1. The methods are tested blind.
2. The problem is realistic.
3. Benchmarking is a community effort.
The term benchmark has become fashionable lately. It is also used, however, for validation studies that do not display these three features. This is not wrong, as there is no generally accepted definition of benchmarking. In fact, an important article on benchmarking by Sim et al. (2003) defines "a benchmark as a test or set of tests used to compare the performance of alternative tools or techniques", which would include any validation study. They then limit the topic of their article, however, to interesting benchmarks, which are "created and used by a technical research community." However, if benchmarking is used for any type of validation study, there is no added value to the word. Thus I hope this post can be a starting point for a generally accepted and more restrictive definition.