
Monday, 30 September 2013

Reviews of the IPCC review

The first part of the new IPCC report (Working Group One), "Climate Change 2013: The Physical Science Basis", has just been released.

One way to judge the reliability of a source is to see what it states about a topic you are knowledgeable about. I work on the homogenization of station climate data and was thus interested in how well the IPCC report presents the scientific state of the art on the uncertainties in trend estimates due to historical changes in climate monitoring practices.

Furthermore, I have asked some fellow climate science bloggers to review the IPCC report on their areas of expertise. You will find these reviews of the IPCC review report at the end of this post as they come in. I found most of these colleagues via Doug McNeall's beautiful list of climate science bloggers.

Large-Scale Records and their Uncertainties

The IPCC report is nicely structured. The part that deals with the quality of the land surface temperature observations is in Chapter 2 Observations: Atmosphere and Surface, Section 2.4 Changes in Temperature, Subsection 2.4.1 Land-Surface Air Temperature, Subsubsection 2.4.1.1 Large-Scale Records and their Uncertainties.

The relevant paragraph reads (my paragraph breaks for easier reading):
Particular controversy since AR4 [the fourth and previous IPCC report, vv] has surrounded the LSAT [land surface air temperature, vv] record over the United States, focussed upon siting quality of stations in the US Historical Climatology Network (USHCN) and implications for long-term trends. Most sites exhibit poor current siting as assessed against official WMO [World Meteorological Organisation, vv] siting guidance, and may be expected to suffer potentially large siting-induced absolute biases (Fall et al., 2011).

However, overall biases for the network since the 1980s are likely dominated by instrument type (since replacement of Stevenson screens with maximum minimum temperature systems (MMTS) in the 1980s at the majority of sites), rather than siting biases (Menne et al., 2010; Williams et al., 2012).

A new automated homogeneity assessment approach (also used in GHCNv3, Menne and Williams, 2009) was developed that has been shown to perform as well or better than other contemporary approaches (Venema et al., 2012). This homogenization procedure likely removes much of the bias related to the network-wide changes in the 1980s (Menne et al., 2010; Fall et al., 2011; Williams et al., 2012).

Williams et al. (2012) produced an ensemble of dataset realisations using perturbed settings of this procedure and concluded through assessment against plausible test cases that there existed a propensity to under-estimate adjustments. This propensity is critically dependent upon the (unknown) nature of the inhomogeneities in the raw data records.

Their homogenization increases both minimum temperature and maximum temperature centennial-timescale United States average LSAT trends. Since 1979 these adjusted data agree with a range of reanalysis products whereas the raw records do not (Fall et al., 2010; Vose et al., 2012a).

I would argue that this is a fair summary of the state of the scientific literature. That naturally does not mean that all statements are true, just that it fits the current scientific understanding of the quality of the temperature observations over land. People claiming that there are large trend biases in the temperature observations will need to explain what is wrong with Venema et al. (an article of mine from 2012) and especially Williams et al. (2012). Williams et al. (2012) provide strong evidence that if there is a bias in the raw observational data, homogenization can improve the trend estimate, but will normally not remove the bias fully.

Personally, I would be very surprised if someone were to find substantial trend biases in the homogenized US temperature observations. Due to the high station density, this dataset can be investigated and homogenized very well.
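The core idea behind the relative homogenization discussed above, comparing a candidate station with its neighbours so that the shared regional climate signal cancels and only non-climatic breaks remain, can be sketched in a few lines. This is a toy illustration with synthetic data, not the actual pairwise algorithm of Menne and Williams (2009); the break size, noise levels and the simple SNHT-like break statistic are my own choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual temperature anomalies (degC) for a candidate station
# and three well-correlated neighbours; a 0.5 degC jump is inserted in
# the candidate in year 30 (e.g. an instrument change).
years = 60
climate = rng.normal(0.0, 0.5, years)            # shared regional signal
candidate = climate + rng.normal(0.0, 0.1, years)
candidate[30:] += 0.5                            # non-climatic break
neighbours = np.array([climate + rng.normal(0.0, 0.1, years)
                       for _ in range(3)])

# Relative homogenization: subtract the neighbour mean so the regional
# climate signal cancels and mainly the break remains.
diff = candidate - neighbours.mean(axis=0)

def find_break(d):
    """Return the split point maximizing the between-segment variance
    of the difference series (an SNHT-like break statistic)."""
    n = len(d)
    best_k, best_stat = None, -np.inf
    for k in range(5, n - 5):                    # avoid tiny segments
        stat = (k * (d[:k].mean() - d.mean()) ** 2
                + (n - k) * (d[k:].mean() - d.mean()) ** 2)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k

k = find_break(diff)
jump = diff[k:].mean() - diff[:k].mean()
homogenized = candidate.copy()
homogenized[:k] += jump                          # adjust the early segment

print("detected break year:", k)
print("estimated jump (degC):", round(jump, 2))
```

With a break this large relative to the noise, the detected year and jump land close to the inserted values; in real networks the inhomogeneities are smaller, more numerous, and also present in the neighbours, which is why operational methods such as the pairwise algorithm are considerably more involved.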

Tuesday, 5 February 2013

A database with daily climate data for more reliable studies of changes in extreme weather

In summary:
  • We want to build a global database of parallel measurements: observations of the same climatic parameter made independently at the same site
  • This will help research in many fields
    • Studies of how inhomogeneities affect the behaviour of daily data (variability and extreme weather)
    • Improvement of daily homogenisation algorithms
    • Improvement of robust daily climate data for analysis
  • Please help us to develop such a dataset

Introduction



One way to study the influence of changes in measurement techniques is to make simultaneous measurements with historical and current instruments, procedures or screens. This picture shows three meteorological shelters next to each other in Murcia (Spain). The rightmost shelter is a replica of the Montsouri screen, in use in Spain and many other European countries in the late 19th and early 20th century. In the middle is a Stevenson screen equipped with automatic sensors; leftmost, a Stevenson screen equipped with conventional meteorological instruments.
Picture: Project SCREEN, Center for Climate Change, Universitat Rovira i Virgili, Spain.


We intend to build a database with parallel measurements to study non-climatic changes in the climate record. This is especially important for studies of weather extremes, where the distribution of the daily data employed must not be affected by non-climatic changes.

There are many parallel measurements from numerous previous studies analysing the influence of different measurement set-ups on average quantities, especially average annual and monthly temperature. Increasingly, changes in the distribution of daily and sub-daily values are also being investigated (Auchmann and Bönnimann, 2012; Brandsma and Van der Meulen, 2008; Böhm et al., 2010; Brunet et al., 2010; Perry et al., 2006; Trewin, 2012; Van der Meulen and Brandsma, 2008). However, the number of such studies is still limited, while the number of questions that can and need to be answered is much larger for daily data.

Unfortunately, the current common practice is not to share parallel measurements, and analyses have thus been limited to smaller national or regional datasets, in most cases simply to a single station with multiple measurement set-ups. Consequently, there is a pressing need for a large global database of parallel measurements at a daily or sub-daily scale.

Datasets from pairs of nearby stations, while not parallel measurements in the strict sense, are also interesting for studying the influence of relocations. In particular, typical types of relocations, such as the move of weather stations from urban areas to airports, could be studied this way. In addition, the influence of urbanization can be studied on pairs of nearby stations.
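A minimal sketch of the kind of analysis such a parallel database enables is given below: comparing two parallel daily series quantile by quantile rather than only in the mean. The two series, the size of the radiation error and its growth on warm days are all invented for the illustration; real parallel measurements like those from the SCREEN project would take the place of the simulated data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parallel daily maximum temperatures (degC) at one site:
# an open historical screen that overheats on warm days versus a modern
# Stevenson screen. Both "see" the same simulated weather.
weather = rng.normal(20.0, 7.0, 3650)            # ten years of daily values
stevenson = weather + rng.normal(0.0, 0.3, weather.size)
# Invented radiation error: a small offset that grows on the warmest days.
montsouri = (weather + 0.1 + 0.03 * np.clip(weather - 20.0, 0.0, None)
             + rng.normal(0.0, 0.3, weather.size))

# A comparison of means shows only the small average bias; quantile
# differences reveal that the bias is largest in the warm tail, which
# is exactly what matters for studies of extremes.
quantiles = np.arange(0.05, 1.0, 0.05)
q_diff = (np.quantile(montsouri, quantiles)
          - np.quantile(stevenson, quantiles))
mean_bias = montsouri.mean() - stevenson.mean()

print(f"mean bias: {mean_bias:.2f} degC")
for q, d in zip(quantiles, q_diff):
    print(f"quantile {q:.2f}: {d:+.2f} degC")
```

The point of the exercise: a homogenization adjustment derived from monthly means alone would correct the average bias but leave the tail of the distribution, and thus any derived extreme-weather statistics, distorted.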

Sunday, 29 July 2012

Blog review of the Watts et al. (2012) manuscript on surface temperature trends

[UPDATE: Skeptical Science has written an extensive review of the Watts et al. manuscript: "As it currently stands, the issues we discuss below appear to entirely compromise the conclusions of the paper." They mention all the important issues, except maybe for the selection bias mentioned below. Thus my fast preliminary review below can now be considered outdated. Have fun.]

Anthony Watts put his blog on hold for two days because he had to work on an urgent project.
Something’s happened. From now until Sunday July 29th, around Noon PST, WUWT will be suspending publishing. At that time, there will be a major announcement that I’m sure will attract a broad global interest due to its controversial and unprecedented nature.
What has happened? Anthony Watts, President of IntelliWeather, has co-written a manuscript and a press release! As Mr. Watts is a fan of review by bloggers, here is my first reaction after looking through the figures and the abstract.