Friday, 18 December 2015

Anthony Watts at AGU2015: Comparison of Temperature Trends Using an Unperturbed Subset of The U.S. Historical Climatology Network

[UPDATE. I will never understand how HotWhopper writes such understandable articles so fast, but it might be best to read the HotWhopper introduction first.]

Remember the Watts et al. manuscript in 2012? Anthony Watts putting his blog on hold to urgently finish his draft? This study is now a poster at the AGU conference and Watts promises to submit it soon to an undisclosed journal.

At first sight, the study now has a higher technical quality and some problems have been solved. The two key weaknesses are, however, not discussed in the press release for the poster. This is strange. I have had long discussions with second author Evan Jones about this. Scientists (real sceptics) have to be critical about their own work. You would expect a scientist to devote a large part of a study to any weaknesses: if possible, try to show that they probably do not matter, or else at least honestly confront them rather than simply ignore them.

Watts et al. is about the immediate surroundings, also called micro-siting, of weather stations that measure the surface air temperature. The American weather stations have been assessed for their siting quality in five categories by volunteers of the blog WUWT. Watts and colleagues call the two best categories "compliant" and the three worst ones "non-compliant". For these two groups they then compare the average temperature signal over the 30-year period 1979 – 2008.

An important problem of the 2012 version of this study was that historical records typically also contain temperature changes because the method of observation has changed. An important change in the USA is the time of observation bias. In the past observations were more often made in the afternoon than in the morning. Morning measurements result in somewhat lower temperatures. This change in the time of observation creates a bias of about 0.2°C per century and was ignored in the 2012 study. Even the auditor, Steve McIntyre, who was then a co-author, admitted this was an error. This problem is now fixed; stations with a change in the time of observation have been removed from the study.


A Stevenson screen.
Another important observational change in the USA is the change of the screen used to protect the thermometer from the sun. In the past, so-called Cotton Region Shelters (CRS) or Stevenson screens were used, nowadays more and more automatic weather stations (AWS) are used.

A widely used type of AWS in the USA is the MMTS. America was one of the first countries to automate its network, with the then-analogue equipment not allowing long cables between the sensor and the display, which is installed inside a building. Furthermore, the technicians only had one day per station and as a consequence many of the MMTS systems were badly sited. Although they are badly sited, these MMTS systems typically measure temperatures about 0.15°C cooler. The size of the cooling has been estimated by comparing a station with such a change to a neighbouring station where nothing changed. Because both stations experience about the same weather, the difference signal shows the jump in the mean temperature more clearly.
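The principle behind such an estimate can be illustrated with a minimal sketch (all numbers hypothetical): two nearby stations share most of their weather, so subtracting the neighbouring series makes a small step change visible that would otherwise drown in the year-to-year noise of a single station.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2009)

weather = rng.normal(0, 1.0, size=years.size)          # shared regional weather (~1 °C year-to-year)
candidate = weather + rng.normal(0, 0.3, size=years.size)
candidate[years >= 1995] -= 0.15                        # hypothetical MMTS-like cooling step
neighbour = weather + rng.normal(0, 0.3, size=years.size)

# In the difference series the common weather cancels, leaving the step visible.
difference = candidate - neighbour
print(difference[years < 1995].mean() - difference[years >= 1995].mean())   # close to 0.15
```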

Two weaknesses

Weakness 1 is that the authors only know the siting quality at the end of the period. Stations in the compliant categories may have been less well sited earlier on, while stations in the non-compliant categories may have been better sited before.

pete:
Someone has a weather station in a parking lot. Noticing their error, they move the station to a field, creating a great big cooling-bias inhomogeneity. Watts comes along, and seeing the station correctly set up says: this station is sited correctly, and therefore the raw data will provide a reliable trend estimate.
The study tries to reduce this problem by creating a subset of stations that is unperturbed by Time of Observation changes, station moves, or rating changes. At least according to the station history (metadata). The problem is that metadata is never perfect.

Scientists working on homogenization therefore advise always also detecting changes in the observational methods (inhomogeneities) by comparing a station to its neighbours. I have told Evan Jones how important this is, but they refuse to use homogenization methods because they feel homogenization does not work. In a scientific paper, they will have to provide evidence to explain why they reject an established method that could ameliorate a serious problem with their study. The irony is that the MMTS adjustments, which the Watts et al. study does use, depend on the same principle.

Weakness 2 is that the result is purely statistical and no physical explanation is provided for it. It is clear that bad micro-siting will lead to a temperature bias, but a constant bias does not affect the trend, while the study shows a difference in trend. I do not see how siting quality, whether good or bad, would change a trend as long as it stays constant. The press release also does not offer an explanation.

What makes this trend difference even more mysterious, if it were real, is that it mainly happens in the 1980s and 1990s, but has stopped in the last decade. See the graph below showing the trend for compliant (blue) and non-compliant stations (orange).



[UPDATE. The initial period in which the difference builds up, and the fact that since 1996 the trends for "compliant" and "non-compliant" stations are the same, can be seen better in the graph below, computed from the data in the above figure as digitized by George Bailley. (No idea what the unit of the y-axis is on either of these graphs. Maybe 0.001°C.)


]

That the Watts phenomenon has stopped is also suggested by a comparison of the standard USA climate network (USHCN) and a new climate-quality network with perfect siting (USCRN) shown below. The pristine network even warms a little more. (Too little to be interpreted.)



While I am unable to see a natural explanation for the trend difference, the fact that the difference is mainly seen in the first two decades fits the hypothesis that the siting quality of the compliant stations was worse in the past: that in the past these stations were less compliant and a little too warm. The further you go back in time, the more likely it becomes that some change has happened. And the further you go back in time, the more likely it is that this change is no longer known.

Six key findings

Below I have quoted the six key findings of Watts et al. (2015) according to the press release.

1. Comprehensive and detailed evaluation of station metadata, on-site station photography, satellite and aerial imaging, street level Google Earth imagery, and curator interviews have yielded a well-distributed 410 station subset of the 1218 station USHCN network that is unperturbed by Time of Observation changes, station moves, or rating changes, and a complete or mostly complete 30-year dataset. It must be emphasized that the perturbed stations dropped from the USHCN set show significantly lower trends than those retained in the sample, both for well and poorly sited station sets.

The temperature network in the USA has on average one detectable break every 15 years (and a few more breaks that are too small to be detected, but can still influence the result). The 30-year period studied should thus contain on average 2 breaks and likely only 12.6% of the stations do not have a break (154 stations). According to Watts et al. 410 of 1218 stations have no break. 256 stations (more than half their "unperturbed" dataset) thus likely have a break that Watts et al. did not find.
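As a rough check on this back-of-the-envelope estimate, a minimal sketch (assuming an independent 1-in-15 chance of a detectable break in any given year):

```python
p_clean_year = 1 - 1/15                  # one detectable break every 15 years on average
p_clean_30yr = p_clean_year ** 30        # chance a station stays break-free for 30 years

stations_total = 1218
expected_clean = stations_total * p_clean_30yr

print(f"P(no break in 30 years) = {p_clean_30yr:.3f}")            # ~0.126
print(f"expected break-free stations = {expected_clean:.0f}")     # ~154
print(f"likely undetected breaks among the 410 = {410 - round(expected_clean)}")  # ~256
```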

That the "perturbed" stations have a smaller trend than the "unperturbed" stations confirms what we know: that in the USA the inhomogeneities have a cooling bias. In the "raw" data the "unperturbed" subset has a trend in the mean temperature of 0.204°C per decade; see table below. In the "perturbed" subset the trend is only 0.126°C per decade. That is a whooping cooling difference of 0.2°C over this period.


Table 1 of Watts et al. (2015)

2. Bias at the microsite level (the immediate environment of the sensor) in the unperturbed subset of USHCN stations has a significant effect on the mean temperature (Tmean) trend. Well sited stations show significantly less warming from 1979 – 2008. These differences are significant in Tmean, and most pronounced in the minimum temperature data (Tmin). (Figure 3 and Table 1 [shown above])

The stronger trend difference for the minimum temperature would also need an explanation.

3. Equipment bias (CRS [Cotton Region Shelter] v. MMTS [Automatic Weather station] stations) in the unperturbed subset of USHCN stations has a significant effect on the mean temperature (Tmean) trend when CRS stations are compared with MMTS stations. MMTS stations show significantly less warming than CRS stations from 1979 – 2008. (Table 1 [shown above]) These differences are significant in Tmean (even after upward adjustment for MMTS conversion) and most pronounced in the maximum temperature data (Tmax).

The trend for the stations that use a Cotton Region Shelter is 0.3°C per decade. That is large and deserves further study. The CRS was the typical shelter in the past, so we can be quite sure that in these cases the shelter itself did not change, but there could naturally have been other changes.

4. The 30-year Tmean temperature trend of unperturbed, well sited stations is significantly lower than the Tmean temperature trend of NOAA/NCDC official adjusted homogenized surface temperature record for all 1218 USHCN stations.

It is natural that the trend in the raw data is smaller than the trend in the adjusted data. Mainly for the above-mentioned reasons (TOBS and MMTS), the biases in the USA are large compared to the rest of the world, and the trend in the USA is adjusted upwards by 0.4°C per century.

5. We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.

Well, they already wrote "we believe". There is no evidence for this claim.

6. The data suggests that the divergence between well and poorly sited stations is gradual, not a result of spurious step change due to poor metadata.

The year-to-year variations in a single station series are about 1°C. I am not sure one could tell whether the inhomogeneity is one or more step changes or a gradual change.
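A minimal sketch (with hypothetical numbers) of why this is hard to tell: with about 1°C of year-to-year noise, a single step change and a gradual change of the same total size look almost identical in a 30-year series.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1979, 2009)
noise = rng.normal(0, 1.0, size=years.size)     # ~1 °C year-to-year variability

step = np.where(years >= 1994, 0.3, 0.0)        # one 0.3 °C step change
ramp = np.linspace(0.0, 0.3, years.size)        # gradual change of the same total size

# The difference between the two inhomogeneity shapes is far smaller than the noise around them.
print(np.std(step - ramp), np.std(noise))       # ~0.1 versus ~1.0
```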

Review

If I were a reviewer of this manuscript, I would ask about some choices that seem arbitrary and I would like to know whether they matter. For example, using the period 1979 – 2008 and not continuing the data to 2015. It is fine to also show data until 2008 for better comparison with earlier papers, but stopping 7 years early is suspicious. Also the choice to drop stations with TOBS changes, but to correct stations with MMTS changes, sounds strange. It would be of interest whether any of the other three options show different results. Anomalies should be computed over a base period, not relative to the starting year.
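On the last point, a minimal sketch of the difference (hypothetical numbers): an anomaly relative to a single starting year inherits the full weather noise of that one year, while an anomaly relative to a multi-year base period does not.

```python
import numpy as np

sigma = 1.0                              # ~1 °C year-to-year weather noise in a single station series

# Uncertainty of the reference level that every anomaly is measured against:
ref_single_year = sigma                  # anomalies relative to the starting year alone
ref_base_period = sigma / np.sqrt(20)    # anomalies relative to a 20-year base period

print(ref_single_year, ref_base_period)  # 1.0 versus ~0.22 °C
```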

I hope that Anthony Watts and Evan M. Jones find the above comments useful. Jones wrote earlier this year:
Oh, a shout-out to Dr. Venema, one of the earlier critics of Watts et al. (2012) who pointed out to us things that needed to be accounted for, such as TOBS, a stricter hand on station moves, and MMTS equipment conversion.

Note to Anthony: In terms of reasonable discussion, VV is way up there. He actually has helped to point us in a better direction. I think both Victor Venema and William Connolley should get a hat-tip in the paper (if they would accept it!) because their well considered criticism was of such great help to us over the months since the 2012 release. It was just the way science is supposed to be, like you read about in books.
Watts wrote in the side notes to his press release:
Even input from openly hostile professional people, such as Victor Venema, have been highly useful, and I thank him for it.
Glad to have been of help. I do not recall having been "openly hostile" to this study. It would be hard to come to a positive judgement of the quality of the blog posts at WUWT, whether they are from the pathological misquoter Monckton or greenhouse effect denier Tim Ball.

However, it is always great when people contribute to the scientific literature. When the quality of their work meets the scientific standard, it does not matter what their motivation is, then science can learn something. The surface stations project is useful to learn more about the quality of the measurements; also for trend studies if continued over the coming decades.

Comparison of Temperature Trends Using an Unperturbed Subset of The U.S. Historical Climatology Network

Anthony Watts, Evan Jones, John Nielsen-Gammon and John Christy
Abstract. Climate observations are affected by variations in land use and land cover at all scales, including the microscale. A 410-station subset of U.S. Historical Climatology Network (version 2.5) stations is identified that experienced no changes in time of observation or station moves during the 1979-2008 period. These stations are classified based on proximity to artificial surfaces, buildings, and other such objects with unnatural thermal mass using guidelines established by Leroy (2010). The relatively few stations in the classes with minimal artificial impact are found to have raw temperature trends that are collectively about 2/3 as large as stations in the classes with greater expected artificial impact. The trend differences are largest for minimum temperatures and are statistically significant even at the regional scale and across different types of instrumentation and degrees of urbanization. The homogeneity adjustments applied by the National Centers for Environmental Information (formerly the National Climatic Data Center) greatly reduce those differences but produce trends that are more consistent with the stations with greater expected artificial impact. Trend differences between the Cooperative Observer Network and the Climate Reference Network are not found during the 2005-2014 sub-period of relatively stable temperatures, suggesting that the observed differences are caused by a physical mechanism that is directly or indirectly caused by changing temperatures.

[UPDATE. I forgot to mention the obvious: after homogenization, the trends Watts et al. (2015) computed are nearly the same for all five siting categories, just as they were for Watts et al. (2012) and the published study Fall et al. Thus for the data used by climatologists, the homogenized data, the siting quality does not matter. Just like before, they did not study homogenization algorithms and thus cannot draw any conclusions about them, but unfortunately they do.]



Related reading

Anthony Watts' #AGU15 poster on US temperature trends

Blog review of the Watts et al. (2012) manuscript on surface temperature trends

A short introduction to the time of observation bias and its correction

Comparing the United States COOP stations with the US Climate Reference Network

WUWT not interested in my slanted opinion

Some history from 2010

On Weather Stations and Climate Trends

The conservative family values of Christian man Anthony Watts

Watts not to love: New study finds the poor weather stations tend to have a slight COOL bias, not a warm one

Poorly sited U.S. temperature instruments not responsible for artificial warming

Sunday, 13 December 2015

My theoretical #AGU15 program

This Monday the Fall Meeting of the American Geophysical Union (AGU2015) starts. An important function of conferences is knowing who is working on what. Browsing the program does a lot of that, even if you are not there. Here is an overview of the presentations I would have had a look at, given my interest in climate data quality.

Links, emphasis and [explanations] are mine. The titles are linked to the AGU abstracts, where you can also mail, tweet or facebook the abstracts to others who may be interested.

Session: Taking the Temperature of the Earth: Long-Term Trends and Variability across All Domains of Earth's Surface

(Talks | Posters)

Inland Water Temperature and the recent Global Warming Hiatus

Simon J Hook, Nathan Healey, John D Lenters and Catherine O'Reilly
Extract abstract. We are using thermal infrared satellite data in conjunction with in situ measurements to produce water temperatures for all the large inland water bodies in North America and the rest of the world for potential use as climate indicator. Recent studies have revealed significant warming of inland waters throughout the world. The observed rate of warming is – in many cases – greater than that of the ambient air temperature. These rapid, unprecedented changes in inland water temperatures have profound implications for lake hydrodynamics, productivity, and biotic communities. Scientists are just beginning to understand the global extent, regional patterns, physical mechanisms, and ecological consequences of lake warming.
See also my previous post on the fast warming of rivers and lakes and the decrease in their freezing periods. Unfortunately the abstract does not say much about the "hiatus" mentioned in the title.

There is also a detailed study on the relationship between air and water temperature for Lake Tahoe.

Global near-surface temperature estimation using statistical reconstruction techniques

Colin P Morice, Nick A Rayner and John Kennedy
abstract. Incomplete and non-uniform observational coverage of the globe is a prominent source of uncertainty in instrumental records of global near-surface temperature change. In this study the capabilities of a range of statistical analysis methods are assessed in producing improved estimates of global near-surface temperature change since the mid 19th century for observational coverage in the HadCRUT4 data set. Methods used include those that interpolate according to local correlation structure (kriging) and reduced space methods that learn large-scale temperature patterns.

The performance of each method in estimating regional and global temperature changes has been benchmarked in application to a subset of CMIP5 simulations. Model fields are sub-sampled and simulated observational errors added to emulate observational data, permitting assessment of temperature field reconstruction algorithms in controlled tests in which globally complete temperature fields are known.

The reconstruction methods have also been applied to the HadCRUT4 data set, yielding a range of estimates of global near-surface temperature change since the mid 19th century. Results show relatively increased warming in the global average over the 21st century owing to reconstruction of temperatures in high northern latitudes, supporting the findings of Cowtan & Way (2014) and Karl et al. (2015). While there is broad agreement between estimates of global and hemispheric changes throughout much of the 20th and 21st century, agreement is reduced in the 19th and early 20th century. This finding is supported by the climate model trials that highlight uncertainty in reconstructing data sparse regions, most notably in the Southern Hemisphere in the 19th century. These results underline the importance of continued data rescue activities, such as those of the International Surface Temperature Initiative and ACRE.

The results of this study will form an addition to the HadCRUT4 global near-surface temperature data set.

The EUSTACE project: delivering global, daily information on surface air temperature

Nick A Rayner and Colin P Morice
At first sight you may think my colleagues have gone crazy. A daily spatially complete global centennial high-resolution temperature dataset!

I would be so happy if we could get halfway reliable estimates of changes in weather variability and extremes from some high-quality, high-density station networks for the recent decades. It is really hard to detect and remove changes in variability due to changes in the monitoring practices and these changes are most likely huge.

However, if you read carefully, they only promise to make the dataset and do not promise that the data is fit for any specific use. One of the main ways mitigation sceptics misinform the public is by pretending that datasets provide reliable information for any application. In reality the reliability of a feature needs to be studied first. Let's be optimistic and see how far they will get; they just started, have a nice bag of tricks and mathematical prowess.
Day-to-day variations in surface air temperature affect society in many ways; however, daily surface air temperature measurements are not available everywhere. A global daily analysis cannot be achieved with measurements made in situ alone, so incorporation of satellite retrievals is needed. To achieve this, we must develop an understanding of the relationships between traditional (land and marine) surface air temperature measurements and retrievals of surface skin temperature from satellite measurements, i.e. Land Surface Temperature, Ice Surface Temperature, Sea Surface Temperature and Lake Surface Water Temperature. These relationships can be derived either empirically or with the help of a physical model.

Here we discuss the science needed to produce a fully-global daily analysis (or ensemble of analyses) of surface air temperature on the centennial scale, integrating different ground-based and satellite-borne data types. Information contained in the satellite retrievals would be used to create globally-complete fields in the past, using statistical models of how surface air temperature varies in a connected way from place to place. ...
A separate poster will provide more details on the satellite data.

The International Surface Temperature Initiative

Peter Thorne, Jay H Lawrimore, Kate Willett and Victor Venema
The Initiative is a multi-disciplinary effort to improve our observational understanding of all relevant aspects of land surface air temperatures from the global-mean centennial scale trends to local information relevant for climate services and climate smart decisions. The initiative was started in 2010 with a meeting that set the overall remit and direction. In the intervening 5 years much progress has been made, although much remains to be done. This talk shall highlight: the over-arching initiative framework, some of the achievements and major outcomes to date, as well as opportunities to get involved. It shall also highlight the many challenges yet to be addressed as we move from questions of global long-term trends to more local and regional data requirements to meet emerging scientific and societal needs.

Methodologies and Resulting Uncertainties in Long-Term Records of Ozone and Other Atmospheric Essential Climate Variables Constructed from Multiple Data Sources

Trends in atmospheric temperature and winds since 1959

Steven C Sherwood, Nidhi Nishant and Paul O'Gorman

Sherwood and colleagues have generated a new radiosonde dataset, removing artificial instrumental changes as well as they could. They find that the tropical hotspot does exist and that the model predictions of this hotspot in the tropical tropospheric trends thus fit. They find that the recent tropospheric trend is not smaller than before. I hope there is no Australian Lamar Smith who needs a "hiatus" and is willing to harass scientists for political gain.

That there is not even a non-significant change from the long-term warming trend is surprising because of the recently more frequent cooling El Nino Southern Oscillation (ENSO) phases (La Nina phases). One would expect to see the influence of ENSO even more strongly in the tropospheric temperatures than in the 2-m temperature. This makes it more likely that the insignificant trend change in the 2-m temperature is a measurement artefact.
Extract abstract.We present an updated version of the radiosonde dataset homogenized by Iterative Universal Kriging (IUKv2), now extended through February 2013, following the method used in the original version (Sherwood et al 2008 Robust tropospheric warming revealed by iteratively homogenized radiosonde data J. Clim. 21 5336–52). ...

Temperature trends in the updated data show three noteworthy features. First, tropical warming is equally strong over both the 1959–2012 and 1979–2012 periods, increasing smoothly and almost moist-adiabatically from the surface (where it is roughly 0.14 K/decade) to 300 hPa (where it is about 0.25 K/decade over both periods), a pattern very close to that in climate model predictions. This contradicts suggestions that atmospheric warming has slowed in recent decades or that it has not kept up with that at the surface. ...

Wind trends over the period 1979–2012 confirm a strengthening, lifting and poleward shift of both subtropical westerly jets; the Northern one shows more displacement and the southern more intensification, but these details appear sensitive to the time period analysed. Winds over the Southern Ocean have intensified with a downward extension from the stratosphere to troposphere visible from austral summer through autumn. There is also a trend toward more easterly winds in the middle and upper troposphere of the deep tropics, which may be associated with tropical expansion.

Uncertainty in Long-Term Atmospheric Data Records from MSU and AMSU

In session: Methodologies and Resulting Uncertainties in Long-Term Records of Ozone and Other Atmospheric Essential Climate Variables Constructed from Multiple Data Sources
Carl Mears

This talk presents an uncertainty analysis of known errors in tropospheric satellite temperature changes and an ensemble of possible estimates that makes computing uncertainties for a specific application easier.
The temperature of the Earth’s atmosphere has been continuously observed by satellite-borne microwave sounders since late 1978. These measurements, made by the Microwave Sounding Units (MSUs) and the Advanced Microwave Sounding Units (AMSUs) yield one of the longest truly global records of Earth’s climate. To be useful for climate studies, measurements made by different satellites and satellite systems need to be merged into a single long-term dataset. Before and during the merging process, a number of adjustments made to the satellite measurements. These adjustments are intended to account for issues such as calibration drifts or changes in local measurement time. Because the adjustments are made with imperfect knowledge, they are therefore not likely to reduce errors to zero, and thus introduce uncertainty into the resulting long-term data record. In this presentation, we will discuss a Monte-Carlo-based approach to calculating and describing the effects of these uncertainty sources on the final merged dataset. The result of our uncertainty analysis is an ensemble of possible datasets, with the applied adjustments varied within reasonable bounds, and other error sources such as sampling noise taken into account. The ensemble approach makes it easy for the user community to assess the effects of uncertainty on their work by simply repeating their analysis for each ensemble member.
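The ensemble approach described in the abstract is straightforward to use; a minimal sketch with a synthetic stand-in for such an ensemble (all numbers made up):

```python
import numpy as np

rng = np.random.default_rng(3)
months = np.arange(12 * 35)                        # ~35 years of monthly anomalies

# Synthetic stand-in for an ensemble of merged satellite records: a common signal plus
# per-member structural differences from the varied merging adjustments, plus noise.
ensemble = np.array([
    0.0012 * months                                # common warming (~0.14 °C/decade, made up)
    + rng.normal(0, 0.0003) * months               # member-specific spurious drift
    + rng.normal(0, 0.1, size=months.size)         # measurement and sampling noise
    for _ in range(100)
])

# Repeat the same analysis (here a linear trend) for every member; the spread of the
# results is the structural uncertainty of that analysis.
trends = np.array([np.polyfit(months, m, 1)[0] * 120 for m in ensemble])
print(f"trend = {trends.mean():.3f} ± {trends.std(ddof=1):.3f} °C per decade")
```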

Other sessions

The statistical inhomogeneity of surface air temperature in global atmospheric reanalyses

In session: Evaluating Reanalysis: What Can We Learn about Past Weather and Climate? (Talks I | Talks II | posters)
Craig R Ferguson and Min-Hee Lee
Recently, a new generation of so-called climate reanalyses has emerged, including the 161-year NOAA—Cooperative Institute for Research in Environmental Sciences (NOAA-CIRES) Twentieth Century Reanalysis Version 2c (20CR V2c), the 111-year ECMWF pilot reanalysis of the twentieth century (ERA-20C), and the 55-year JMA conventional reanalysis (JRA-55C). These reanalyses were explicitly designed to achieve improved homogeneity through assimilation of a fixed subset of (mostly surface) observations. We apply structural breakpoint analysis to evaluate inhomogeneity of the surface air temperature in these reanalyses (1851-2011). For the modern satellite era (1979-2013), we intercompare their inhomogeneity to that of all eleven available satellite reanalyses. Where possible, we distinguish between breakpoints that are likely linked to climate variability and those that are likely due to an artificial observational network shift. ERA-20C is found to be the most homogeneous reanalysis, with 40% fewer artificial breaks than 20CR V2c. Despite its gains in homogeneity, continued improvements to ERA-20C are needed. In this presentation, we highlight the most spatially extensive artificial break events in ERA-20C.
There is also a more detailed talk about the quality of humidity in reanalysis over China.

Assessment of Precipitation Trends over Europe by Comparing ERA-20C with a New Homogenized Observational GPCC Dataset

In session: Evaluating Reanalysis: What Can We Learn about Past Weather and Climate?
Elke Rustemeier, Markus Ziese, Anja Meyer-Christoffer, Peter Finger, Udo Schneider and Andreas Becker
...The monthly totals of the ERA-20C reanalysis are compared to two corresponding Global Precipitation Climatology Centre (GPCC) products; the Full Data Reanalysis Version 7 and the new HOMogenized PRecipitation Analysis of European in-situ data (HOMPRA Europe).
ERA-20C...covers the time period 1900 to 2010. Only surface observations are assimilated namely marine winds and pressure. This allows the comparison with independent, not assimilated data.
Sounds interesting; unfortunately the abstract does not give many results yet.

Cyclone Center: Insights on Historical Tropical Cyclones from Citizen Volunteers

In session: Era of Citizen Science and Big Data: Intersection of Outreach, Crowd-Sourced Data, and Scientific Research
Peter Thorne, Christopher Hennon, Kenneth Knapp, Carl Schreck, Scott Stevens, James Kossin, Jared Rennie, Paula Hennon, Michael Kruk
The cyclonecenter.org project started in fall 2012 and has been collecting citizen scientist volunteer tropical cyclone intensity estimates ever since. The project is hosted by the Citizen Science Alliance (zooniverse) and the platform is supported by a range of scientists. We have over 30 years of satellite imagery of tropical cyclones but the analysis to date has been done on an ocean-basin by ocean-basin basis and worse still practices have changed over time. We therefore do not, presently, have a homogeneous record relevant for discerning climatic changes. Automated techniques can classify many of the images but have a propensity to be challenged during storm transitions. The problem is fundamentally one where many pairs of eyes are invaluable as there is no substitute for human eyes in discerning patterns. Each image is classified by ten unique users before it is retired. This provides a unique insight into the uncertainty inherent in classification. In the three years of the project much useful data has accrued. This presentation shall highlight some of the results and analyses to date and touch on insights as to what has worked and what perhaps has not worked so well. There are still many images left to complete so its far from too late to jump over to www.cyclonecenter.org and help out.

Synergetic Use of Crowdsourcing for Environmental Science Research, Applications and Education

In session: Era of Citizen Science and Big Data: Intersection of Outreach, Crowd-Sourced Data, and Scientific Research
Udaysankar Nair
...Contextual information needed to effectively utilize the data is sparse. Examples of such contextual information include ground truth data for land cover classification, presence/absence of species, prevalence of mosquito breeding sites and characteristics of urban land cover. Often, there are no agencies tasked with routine collection of such contextual information, which could be effectively collected through crowdsourcing.

Crowdsourcing of such information, that is useful for environmental science research and applications, also provide opportunities for experiential learning at all levels of education. Appropriately designed crowdsourcing activity can be transform students from passive recipients of information to generators of knowledge. ... One example is crowdsourcing of land use and land cover (LULC) data using Open Data Kit (ODK) and associated analysis of satellite imagery using Google Earth Engine (GEE). Implementation of this activity as inquiry based learning exercise, for both middle school and for pre-service teachers will be discussed. Another example will detail the synergy between crowdsourcing for biodiversity mapping in southern India and environmental education...
There is also a crowd-sourced project for land use and land cover for (urban) climatology: the World Urban Database. A great initiative.


Steps Towards a Homogenized Sub-Monthly Temperature Monitoring Tool

In session: Characterizing and Interpreting Changes in Temperature and Precipitation Extremes
Jared Rennie and Kenneth Kunkel
Land surface air temperature products have been essential for monitoring the evolution of the climate system. Before a temperature dataset is included in such reports, it is important that non-climatic influences be removed or changed so the dataset is considered homogenous. These inhomogeneities include changes in station location, instrumentation and observing practices. Very few datasets are free of these influences and therefore require homogenization schemes. While many homogenized products exist on the monthly time scale, few daily products exist... Using these datasets already in existence, monthly adjustments are applied to daily data [of NOAA's Global Historical Climatology Network – Daily (GHCN-D) dataset]...
Great to see NOAA make first steps towards homogenization of their large daily data collection, a huge and important task.

Note that the daily data is only adjusted for changes in the monthly means. This is an improvement, but for weather extremes, the topic of this session, the rest of the marginal distribution also needs to be homogenized.
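A minimal sketch of why this matters (hypothetical numbers): an adjustment to the monthly mean removes the bias in the mean, but an inhomogeneity that also changed the day-to-day variability, and thus the extremes, survives the adjustment.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical daily temperatures for one month; the inhomogeneity changed both the
# mean (+0.5 °C) and the day-to-day variability (factor 1.5).
true_daily = rng.normal(20, 3, size=31)
observed_daily = 20 + 0.5 + 1.5 * (true_daily - 20)

# A monthly adjustment only corrects the monthly mean ...
adjusted_daily = observed_daily + (true_daily.mean() - observed_daily.mean())

# ... so the mean is right again, but the inflated variability (and the extremes) remains.
print(round(adjusted_daily.mean() - true_daily.mean(), 3))   # ~0.0
print(round(adjusted_daily.std() / true_daily.std(), 2))     # ~1.5
```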


Temperature Trends over Germany from Homogenized Radiosonde Data

In session: Methodologies and Resulting Uncertainties in Long-Term Records of Ozone and Other Atmospheric Essential Climate Variables Constructed from Multiple Data Sources
Wolfgang Steinbrecht and Margit Pattantyús Ábráham
We present homogenization procedure and results for Germany’s historical radiosonde [(RS)] records, dating back to the 1950s. Our manual homogenization makes use of the different RS networks existing in East and West-Germany from the 1950s until 1990.

The largest temperature adjustments, up to 2.5K, are applied to Freiberg sondes used in the East in the 1950s and 1960s. Adjustments for Graw H50 and M60 sondes, used in the West from the 1950s to the late 1980s, and for RKZ sondes, used in the East in the 1970s and 1980s, are also significant, 0.3 to 0.5K. Small differences between Vaisala RS80 and RS92 sondes used throughout Germany since 1990 and 2005, respectively, were not corrected for at levels from the ground to 300 hPa.

Comparison of the homogenized data with other radiosonde datasets, RICH (Haimberger et al., 2012) and HadAT2 (McCarthy et al., 2008), and with Microwave Sounding Unit satellite data (Mears and Wentz, 2009), shows generally good agreement. HadAT2 data exhibit a few suspicious spikes in the 1970s and 1980s, and some suspicious offsets up to 1K after 1995. Compared to RICH, our homogenized data show slightly different temperatures in the 1960s and 1970s. We find that the troposphere over Germany has been warming by 0.25 ± 0.1K per decade since the early 1960s, slightly more than reported in other studies (Hartmann et al., 2013). The stratosphere has been cooling, with the trend increasing from almost no change near 230hPa (the tropopause) to -0.5 ± 0.2K per decade near 50hPa. Trends from the homogenized data are more positive by about 0.1K per decade compared to the original data, both in troposphere and stratosphere.
Statistical relative homogenization can only partially remove trend biases. Given that the trend needed to be corrected upwards, the real temperature trend may thus be larger.

Observed Decrease of North American Winter Temperature Variability

In session: Methodologies and Resulting Uncertainties in Long-Term Records of Ozone and Other Atmospheric Essential Climate Variables Constructed from Multiple Data Sources
Andrew Rhines, Martin Tingley, Karen McKinnon, Peter Huybers
There is considerable interest in determining whether temperature variability has changed in recent decades. Model ensembles project that extratropical land temperature variance will detectably decrease by 2070. We use quantile regression of station observations to show that decreasing variability is already robustly detectable for North American winter during 1979--2014. ...

We find that variability of daily temperatures, as measured by the difference between the 95th and 5th percentiles, has decreased markedly in winter for both daily minima and maxima. ... The reduced spread of winter temperatures primarily results from Arctic amplification decreasing the meridional temperature gradient. Greater observed warming in the 5th relative to the 95th percentile stems from asymmetric effects of advection [air movements] during cold versus warm days; cold air advection is generally from northerly regions that have experienced greater warming than western or southwestern regions that are generally sourced during warm days.
Studies on changes in variability are the best. Guest appearances of the Arctic polar vortex in the eastern USA had given me the impression that the variability had increased, not decreased. Interesting.

Century-Scale of Standard Deviation in Europe Historical Temperature Records

In session: Methodologies and Resulting Uncertainties in Long-Term Records of Ozone and Other Atmospheric Essential Climate Variables Constructed from Multiple Data Sources
Fenghua Xie
The standard deviation (STD) variability in long historical temperature records in Europe is analyzed. It is found that STD is changeable with time, and a century-scale variation is revealed, which further indicates a century-scale intensity modulation of the large-scale temperature variability.

The Atlantic multidecadal oscillation (AMO) can cause significant impacts in standard deviation. During the periods of 1870–1910 and 1950–80, increasing standard deviation corresponds to increasing AMO index, while the periods of 1920-50 and 1980-2000 decreasing standard deviation corresponds to decreasing AMO index. The findings herein suggest a new perspective on the understanding of climatic change
Studies on changes in variability are the best. This intriguing oscillation in the standard deviation of temperature was found before for the Greater Alpine Region by Reinhard Böhm. He also found it in precipitation and pressure (with a little fantasy). One should be careful with such studies: changes in the standard deviation due to changes in monitoring practices (inhomogeneities) are mostly not detected, nor corrected. Only a few national and regional daily datasets have been (partially) homogenized in this respect.


Could scientists ‘peer-review’ the web?

A Town Hall presentation by Emmanuel Vincent of Climate Feedback and Dan Whaley, the founder of Hypothesis, which is the software basis of Climate Feedback. On Tuesday Dec.15, 12:30 - 13:30 pm

I am quite sure that I missed relevant presentations. Please add them in the comments.




Related reading

Why raw temperatures show too little global warming

Lakes are warming at a surprisingly fast rate

Monday, 7 December 2015

Fans of Judith Curry: the uncertainty monster is not your friend



I think that uncertainties in global surface temperature anomalies is substantially understated.
Judith Curry

People often see uncertainty as a failing of science. It's the opposite: uncertainty is what drives science forward.
Dallas Campbell

Imagine you are driving on a curvy forest road and it gets more foggy. Do you slow down or do you keep your foot on the pedal? More fog means more uncertainty, means less predictability, means that you see the deer in your headlights later. Climate change mitigation sceptics like talking about uncertainty. They seem to see it as a reason to keep the foot on the pedal.

While this is madness, psychology suggests that it is an effective political strategy. When you talk about uncertainty, people have a tendency to become less decisive. Maybe people want to postpone the decision until the situation is clearer? That is exactly what the advocates of inaction want, and it neglects that deciding not to start solving the problem is also a decision. For someone who would like to see all of humanity prosper, deciding not to act is a bad, counter-productive decision.

Focusing on uncertainty is so common in politics that this strategy got its own name:
Appeals to uncertainty to preclude or delay political action are so pervasive in political and lobbying circles that they have attracted scholarly attention under the name “Scientific Certainty Argumentation Methods”, or “SCAMs” for short. SCAMs are politically effective because they equate uncertainty with the possibility that a problem may be less serious than anticipated, while ignoring the often greater likelihood that the problem may be more deleterious.

Meaning of uncertainty

Maybe people do not realise that uncertainty can have multiple meanings. In the case of climate change, "uncertainty" does not mean that scientists are not sure. Science is very sure it is warming, that this is due to us and that it will continue if we do not do anything.

When we talk about climate change, "uncertainty" means that we do not know exactly how much it has warmed. It means that the best estimate of the man-made contribution is "basically all", but that it is possible that it is more or that it is less. It means that we know it will warm a lot in the coming century, but not exactly how much, if only because no one knows whether humanity gets its act together. It means that the seas will rise more than one meter, but that we can only give a period in which this threshold will be crossed.

Rather than talking about "uncertainty", I try to talk about "confidence intervals" nowadays, which conveys this latter kind of uncertainty much better. Science may not know the exact value, but the value will most likely be in the confidence interval.

That the term uncertainty can be misunderstood is especially a problem because scientists love to talk about uncertainties. A value is worth nothing if you do not have an idea how accurate it is. Thus most of the time scientists work on making sure that the uncertainties are accurate and as small as possible.

More uncertainty means higher risks

The damages of climate change rise with its magnitude (let's call this "temperature increase" for simplicity). I will argue in the next section that these damages rise faster than linearly. If the relationship were linear, twice as much temperature increase would mean twice as much damage. Super-linear means that damages rise faster than that. Let us for this post assume that the damages are proportional to the square of the temperature increase. Any other super-linear relationship would show the same effect: that more uncertainty means higher risks.

Consider first the case without uncertainty, in which the temperature in 2100 increases by 4 degrees Celsius. For comparison, the temperature increase in 2100 is projected to be between 3 and 5.5°C for the worst scenario considered by the IPCC, RCP8.5. With 4 degrees of warming the damages would be 16*D (4²*D) dollars or 16*H human lives.

In the case with uncertainty, the temperature in 2100 would still increase by 4 degrees on average, but it could also be 3°C or 5°C. The damages for 4 degrees are still 16*D dollars. At 3 degrees the damage would be 9*D and at 5 degrees 25*D dollars, which averages to 17*D. The total damages will thus be higher than the 16*D dollars we had for the case without uncertainty.

If the uncertainty becomes larger and we also have to take 2 and 6 degrees into account, we get 4*D (for 2°C) and 36*D (for 6°C), which averages to 20*D dollars. When we are less certain that the temperature increase is near 4°C and uncertainty forces us to take 2°C and 6°C into account, the average expected damages become higher.

Judith Curry thinks that we should take even more uncertainty into account: "I think we can bound [equilibrium climate sensitivity] between 1 and 6°C at a likely level, I don’t think we can justify narrowing this further. ... [T]here is a 33% probability that that actual [climate] sensitivity could be higher or lower than my bounds. To bound at a 90% level, I would say the bounds need to be 0-10°C."

If the climate sensitivity were zero, the damages in 2100 would be zero. Estimating the temperature increase for a climate sensitivity of 10°C is more challenging. If we still followed the carbon-is-life scenario mitigation skeptics prefer (RCP8.5), we would get a temperature increase of around 13°C in 2100**. It seems more likely that civilization would collapse before then, but 13°C would give climate change damages of 13²*D, which equals 169*D. The average damages for Curry's limiting case are thus about 85*D, a lot more than the 16*D for the case where we are certain. If the uncertainty monster were this big, that would make the risk of climate change a lot higher.
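A minimal sketch of the expected-damage arithmetic used above, assuming quadratic damages and equally weighted temperature outcomes:

```python
def expected_damage(possible_warmings):
    """Expected damage in units of D, assuming damages scale with warming squared."""
    return sum(t ** 2 for t in possible_warmings) / len(possible_warmings)

print(expected_damage([4]))        # no uncertainty:           16 D
print(expected_damage([3, 5]))     # some uncertainty:         17 D
print(expected_damage([2, 6]))     # more uncertainty:         20 D
print(expected_damage([0, 13]))    # Curry-style wide bounds: ~84.5 D
```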

Uncertainty is not the friend of people arguing against mitigation. The same thinking error is also made by climate change activists who sometimes ask scientists to emphasise uncertainty less.

Super-linear damages


Accumulated loss of regional species richness of macro-invertebrates as a function of glacial cover in catchment. They begin to disappear from assemblages when glacial cover in the catchment drops below approximately 50%, and 9 to 14 species are predicted to be lost with the complete disappearance of glaciers in each region, corresponding to 11, 16, and 38% of the total species richness in the three study regions in Ecuador, Europe, and Alaska. Figure RF-2 from IPCC AR5 report.
The above argument rests on the assumption that climate change damages rise super-linearly. If damages rose linearly, uncertainty would not matter for the risk. In theory, if damages rose more slowly than linearly (sub-linearly), the risk would become lower with more uncertainty.

I am not aware of anyone doubting that the damages are super-linear. Weather and climate are variable. Every system will thus be adjusted to a small temperature increase. On the other hand, a 10°C temperature increase will affect, if not destroy, nearly everything. Once a large part of civilization is destroyed, the damages function may become sub-linear again. Whether the temperature increase is 10 or 11 degrees Celsius likely does not matter much any more.

What counts as a small temperature increase depends on the system. For sea level rise, the global mean temperature is important, averaged over centuries to millennia. This did not vary much, thus climate change quickly shows an influence. For the disappearance of permafrost, the long-term temperature is also important, but the damage to the infrastructure built on it depends on the local temperature, which varies more than the global temperature. On the local annual scale the variability is about 1°C, which is about the global warming we have seen up to now and at which new effects are now seen, for example nature moving poleward and up mountains (if it can). In summary, the more the temperature increases, the more systems notice the change and naturally the more they are affected.

Damages can be avoided by adaptation: both natural adaptation to the vagaries of weather and climate, and man-made adaptation in anticipation of the mess we are getting into. In the beginning there will still be low-hanging fruit, but the larger the changes become, the more expensive adaptation becomes. More uncertainty also makes man-made adaptation more costly. If you do not know accurately how much bigger a centennial flood will be, that is costly. To build bigger flood protections you need a best estimate, but you also need to add a few times the uncertainty to this estimate, otherwise you would still be flooded half the time such a flood comes by.
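A minimal sketch of that last point (hypothetical numbers): the design height of a flood defence is roughly the best estimate plus a safety margin of a few times the uncertainty, so a larger uncertainty directly translates into a more expensive defence.

```python
def design_height(best_estimate_m, uncertainty_m, safety_factor=2.0):
    """Hypothetical design rule: best estimate plus a multiple of the uncertainty."""
    return best_estimate_m + safety_factor * uncertainty_m

# The same best estimate of the centennial flood, with small and large uncertainty.
print(design_height(5.0, 0.2))   # 5.4 m of flood defence needed
print(design_height(5.0, 1.0))   # 7.0 m of flood defence needed
```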

An example of super-linear impacts is the species loss in glacier catchments when the glaciers disappear. The figure above shows that the initial small reductions in the glaciers did not impact nature much, but that the species loss rises fast as the glacier nears complete disappearance.


The costs of sea level rise rise super-linearly as a function of the amount of sea level rise. Data from Brown et al. (2011).
Another example is the super-linear cost of sea level rise. The plot to the right was generated by sea level rise expert Aslak Grinsted from data of a large European scientific project on the costs of climate change. He added that this shape is consistent across many studies on the costs of sea level rise.

Those were two examples for specific systems, which I trust most as a natural scientist. Economists have also tried to estimate the costs of climate change. This is hard because many damages cannot really be expressed in money. Climate change is one stressor, and in many cases damages also depend on many other stressors. Furthermore, solutions can often solve multiple problems. As economist Martin Weitzman writes: "The “damages function” is a notoriously weak link in the economics of climate change."


The climate change damages functions of Weitzman (2010b) and Nordhaus (2008). For a zero temperature change Ω(t)=1 (no damage) and for very large temperature changes it approaches 0 (maximum damage).
Consequently, the two damages functions shown to the right differ enormously. Important for this post: they are both super-linear.

Unknown unknowns

If there is one thing I fear about climate change, it is uncertainty. There will be many surprises. We are taking the climate system out of the state we know well and are performing a massive experiment with it. Things we did not think of are bound to happen. Some might be nice; more likely the surprises will not be nice. As Judith Curry calls it: climate change is a wicked problem.

Medical researchers like to study rare diseases. They do so much more than the number of patients would justify. But seeing things go wrong helps you understand how things work. The other way around, this means that until things go wrong, we will often not even know we should have studied them. Some surprises will be that an impact that science did study turns out to be better or worse; the known unknowns. The most dangerous surprises are bound to be the unknown unknowns, which we never realised we would have had to study.

The uncertainty monster is my main reason as a citizen to want to solve this problem. Call me a conservative. The climate system is one of the traditional pillars of our society. Something you do not change without considering long and hard what the consequences will be. The climate is something you want to understand very well before you change it. If we had perfect climate and weather predictions, climate change would be a much smaller problem.


Gavin Schmidt
I think the first thing to acknowledge is that there will be surprises. We’re moving into a regime of climate that we have not experienced as humans, most ecosystems have not experienced since the beginning of the ice age cycle some three million years ago. We don’t know very well what exactly was happening then. We know some big things, like how much the sea level rose and what the temperatures were like, but there’s a lot of things that we don’t know. And so we are anticipating “unknown unknowns”. But, of course, they’re unknown, so you don’t know what they’re going to be.


One irony is that climate models reduce the uncertainty by improving the scientific understanding of the climate system. Without climate models we would still know that the Earth warms when we increase greenhouse gasses. The greenhouse effect can be directly measured by looking at the infrared radiation from the sky. When you make that bigger, it gets warmer. When you put on a second sweater, you get warmer. We know it gets warmer from simple radiative transfer computations. We know that CO2 influences temperature by studying past climates.

The climate models have about the same climate sensitivity as those simple radiative transfer computations and the estimates from past climates. It could have been different because the climate is more complicated than the simple radiative transfer computation. It could have been different because the increase in greenhouse gasses goes so fast this time. That all those climate sensitivity estimates fit together reduces the uncertainty, especially the uncertainty from the unknown unknowns.

Without climate models it would be very hard to estimate all the other changes in the climate system: the increases in precipitation and especially in severe precipitation, in floods, in droughts, the changes in circulation patterns, how fast sea level rise will go. Without climate models these risks would be much higher; without climate models we would have to adapt to a much wider range of changes in weather and climate. This would make adaptation a lot more intrusive and expensive.

Without climate models and climate research in general, the risks of changing our climate would be larger. We would need to be more careful, and the case for reductions of greenhouse gas emissions would be even stronger.

Especially for mitigation sceptics advocating adaptation-only policies, climate research should be important. Adaptation needs high-quality local information. For a 1000-year event such as the downpour in South Carolina earlier this year or the record precipitation this week in the UK, we may have to live with the damages. If such events happen much more often under climate change, we will have to change our infrastructure. If we do not know what is coming, we will have to prepare for everything. That is expensive. Reductions in this uncertainty save money by reducing unnecessary adaptation measures and by reducing damages through more effective adaptation.

People who are against mitigation policies should be cheering for climate research, rather than try to defund it or harass scientists for their politically inconvenient message. Let's be generous and assume that they do not know what they are doing.

In summary. Uncertainty makes the risk of climate change larger. Uncertainty makes adaptation a less attractive option relative to solving the problem (mitigation). The more we take the climate system out of known territories the more surprises (unknown unknowns) we can expect. In a logical world uncertainty would be the message of the environmental movement. In the real world uncertainty is one of the main fallacies of the mitigation skeptics and their "think" tanks.




Related reading

Tail Risk vs. Alarmism by Kerry Emanuel

Stephan Lewandowsky, Richard Pancost and Timothy Ballard making the same case and adding psychological and sociological explanations why SCAMs still work: Uncertainty is Exxon's friend, but it's not ours

Talking climate: Communicating uncertainty in climate science

European Climate Adaptation Platform: How to communicate uncertainty?

Dana Nuccitelli: New research: climate change uncertainty translates into a stronger case for tackling global warming

Uncertainty isn’t cause for climate complacency – quite the opposite

Michael Tobis in 1999 about uncertainty and risk: Wisdom from USENET


References

Botzen, W.J.W. and J.C. van den Bergh, 2012: How sensitive is Nordhaus to Weitzman? climate policy in DICE with an alternative damage function. Economics Letters, 117, pp. 372–374, doi: 10.1016/j.econlet.2012.05.032.

Nordhaus, W.D., 2008: A Question of Balance: Weighing the Options on Global Warming Policies. Yale University Press, New Haven.

Brown, Sally, Robert Nicholls, Athanasios Vafeidis, Jochen Hinkel and Paul Watkiss, 2011: The Impacts and Economic Costs of Sea-Level Rise in Europe and the Costs and Benefits of Adaptation. Summary of Results from the EC RTD ClimateCost Project. In Watkiss, P. (Editor), 2011. The ClimateCost Project. Final Report. Volume 1: Europe. Published by the Stockholm Environment Institute, Sweden, 2011. ISBN 978-91-86125-35-6.

Lewandowsky, Stephan , James S. Risbey, Michael Smithson, Ben R. Newell, John Hunter, 2014: Scientific uncertainty and climate change: Part I. Uncertainty and unabated emissions. Climatic Change, 124, pp. 21–37, doi: 10.1007/s10584-014-1082-7.

Tomassini, L., R. Knutti, G.-K. Plattner, D.P. van Vuuren, T.F. Stocker, R.B. Howarth and M.E. Borsuk, 2010: Uncertainty and risk in climate projections for the 21st century: comparing mitigation to non-intervention scenarios. Climatic Change, 103, pp. 399–422.

Weitzman, M.L., 2010a: What is the damages function for global warming and what difference might it make? Climate Change Economics, 1, pp. 57–69, doi: 10.1142/S2010007810000042.

Weitzman, M.L., 2010b: GHG Targets as Insurance Against Catastrophic Climate Damages. Mimeo, Department of Economics, Harvard University.


Acknowledgements.
* Thanks to Michael Tobis for the foggy road analogy, which he seems to have gotten from [[Stephen Schneider]].

** Judith Curry speaks of the equilibrium climate sensitivity being between 0 and 10°C per doubling of CO2. The TCR to ECS ratio peaks at around 0.6, so an ECS of 10°C could correspond to a TCR of 6°C. Since a doubling of CO2 is a forcing of 3.7 W/m2 and RCP8.5 is defined as 8.5 W/m2 in 2100, that would mean a warming of 8.5/3.7 × 6 ≈ 14°C (Thank you ATTP).

For comparison, the IPCC estimates the climate sensitivity to be between 1.5 and 4.5°C. The last IPCC report states: "Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence), and very unlikely greater than 6°C (medium confidence)".
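
For those who want to check the back-of-the-envelope arithmetic in the footnote above, here is a minimal sketch using only the numbers quoted there; it is of course not a climate model.

```python
# Rough check of the footnote's arithmetic; all numbers are the ones quoted above.
ecs_upper = 10.0           # upper end of the ECS range Judith Curry mentions (°C per CO2 doubling)
tcr_to_ecs_ratio = 0.6     # approximate peak of the TCR/ECS ratio
forcing_2xco2 = 3.7        # radiative forcing of a CO2 doubling (W/m2)
forcing_rcp85_2100 = 8.5   # forcing of the RCP8.5 scenario in the year 2100 (W/m2)

tcr_upper = ecs_upper * tcr_to_ecs_ratio                      # ~6 °C per doubling
warming_2100 = forcing_rcp85_2100 / forcing_2xco2 * tcr_upper
print(f"Transient warming in 2100: {warming_2100:.1f} °C")    # ~13.8 °C
```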


*** Top photo by Paige Jarreau.

Friday, 20 November 2015

Sad that for Lamar Smith the "hiatus" has far-reaching policy implications



Earlier this year, NOAA made a new assessment of the surface temperature increase since 1880. Republican Congressman Lamar Smith, Chair of the House science committee, did not like the adjustments NOAA made and started a harassment campaign. In the Washington Post he wrote about his conspiracy theory (my emphasis):
In June, NOAA employees altered temperature data to get politically correct results and then widely publicized their conclusions as refuting the nearly two-decade pause in climate change we have experienced. The agency refuses to reveal how those decisions were made. Congress has a constitutional responsibility to review actions by the executive branch that have far-reaching policy implications.
I guess everyone reading this blog knows that all the data and code are available online.

The debate is about the minor difference you see at the top right. Take your time. Look carefully. See it? The US mitigation sceptical movement has made the trend since the super El Nino year 1998 a major part of their argumentation that climate change is no problem. If such minute changes have "far-reaching policy implications" for Lamar Smith, then maybe he is not a particularly good policy maker. The people he represents in the Texas TX-21 district deserve better.

I have explained to the mitigation sceptics so many times that they should drop their "hiatus" fetish. That it would come back to haunt them. That such extremely short-term trends have huge uncertainties and that interpreting them as climatic changes assumes a data quality that I see as unrealistic. With their constant wailing about data quality, they should theoretically see it that way. But well, they did not listen.

Some political activists like to claim that the "hiatus" means that global warming has stopped. It seems like Lamar Smith is in this group; at least I see no other reason why he would think it is policy relevant. But only 2 percent of global warming warms the atmosphere (most warms the oceans) and this "hiatus" is about 2% of the warming we have seen since 1880. It is thus a peculiar debate about 2% of 2% of the warming and not about global warming.
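
In numbers, a rough illustration with the two percentages mentioned above:

```python
# Rough illustration of how small the disputed quantity is.
heat_fraction_atmosphere = 0.02  # ~2% of the extra heat warms the atmosphere
hiatus_fraction_warming = 0.02   # the disputed difference is ~2% of the warming since 1880
print(f"{heat_fraction_atmosphere * hiatus_fraction_warming:.2%} of the total")  # 0.04% of the total
```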

This peculiar political debate is the reason this NOAA study became a Science paper (Science magazine prefers articles of general interest) and why NOAA's Karl et al. (2015) paper was heavily attacked by the mitigation sceptical movement.

Before this reassessment, NOAA's trend since 1998 was rather low compared to the other datasets. The right panel of the figure below, made by Zeke Hausfather, shows the trends since 1998. In the figure, the old NOAA assessment is shown by the green dots, the new assessment by the black dots.

The new assessment solved problems in the NOAA dataset that were already solved in the HadCRUT4 dataset from the UK (red dots). The trends in HadCRUT4 are somewhat lower because it does not fully take the Arctic into account, where much of the warming of the last decade occurred. The version of HadCRUT4 where this problem is fixed is indicated as "Cowtan & Way" (brownish dots).

The privately funded Berkeley Earth also takes the Arctic into account and already had somewhat larger recent trends.

Thus the new assessment of NOAA is in line with our current understanding. Given how minute this feature is, it is actually pretty amazing how similar the various assessments are.


"Karl raw" (open black circle) is the raw data of NOAA before any adjustments, the green curve in the graph at the top of this post. "Karl adj" (black dot) is the new assessment, the thick black line in the graph at the top. The previous assessment is "NCDA old" (green dot). The other dots, four well-known global temperature datasets.

Whether new assessments are seen as having "far-reaching policy implications" by Lamar Smith may also depend on the direction in which the trends change. Around the same time as the NOAA article, Roy Spencer and John Christy published a new dataset with satellite estimates of tropospheric temperatures. As David Appell reports, they made considerable changes to their dataset. Somehow I have not heard anything about a subpoena against them yet.



More important adjustments to the surface temperatures are made for data before 1940. Looking at the figure below, most would probably guess that Lamar Smith did not like the strong adjustments that make global warming a lot smaller. Maybe he liked that direction better.

The adjustments before 1940 are necessary because in that period the dominant way to measure the sea surface temperature was by taking a bucket of water out of the sea. During the measurement the water would cool due to evaporation. How large this adjustment should be is uncertain; anything between 0 and 0.4°C is possible. That makes a huge difference for the scientific assessment of how much warming we have seen up to now.

The size of the temperature peak during the Second World War is also highly uncertain; the merchant ships were replaced by war ships, which made their measurements in a different way.

This is outside of my current expertise, but the first article I read about this, a small study for the Baltic Sea, suggested that the cooling bias due to evaporation is small, but that there is a warming bias of 0.5°C because the thermometer was stored in the warm cabin and the sailors did not wait long enough for the thermometer to equilibrate. Such uncertainties are important, and only a handful of scientists are working on sea surface temperature. And now a political witch hunt keeps some of them from their work.



Whether the bucket adjustments are 0 or 0.4°C may well be policy relevant, at least if we were already close to an optimal policy response. This adjustment affects the data over a long period and can thus influence estimates of the climate sensitivity. What counts for the climate sensitivity is basically the area under the temperature graph: a change of 0.4°C over 60 years is a lot more than 0.2°C over 15 years. Nic Lewis and Judith Curry (2014), who I hope Lamar Smith will trust, also do not see the "hiatus" as important for the climate sensitivity.
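
As a crude illustration of the "area under the temperature graph" argument, with the round numbers from the paragraph above (a sketch of the reasoning, not of how the climate sensitivity is actually computed):

```python
# Compare the "area under the temperature curve" of the two changes mentioned above.
bucket_uncertainty = 0.4 * 60   # up to 0.4 °C sustained over ~60 years -> 24 °C·years
hiatus_difference = 0.2 * 15    # ~0.2 °C over ~15 years                ->  3 °C·years
print(bucket_uncertainty / hiatus_difference)   # the bucket uncertainty weighs about 8 times more
```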

For those who still think that global warming has stopped, climatologist John Nielsen-Gammon (and friend of Anthony Watts of WUWT) made the wonderful plot below, which immediately helps you see that most of the deviations from the trend line can be explained by variations in El Nino (archived version).



It is somewhat ironic that Lamar Smith claims that NOAA rushed the publication of their dataset. It would have been more logical for him to hasten his own campaign. It is now shortly before the Paris climate conference, and the strong El Nino does not bode well for his favourite policy justification, as the plot below shows. You no longer need statistics to be completely sure that there was no change in the trend in 1998.









Related reading

WOLF-PAC has a good plan to get money out of US politics. Let's first get rid of this weight vest before we run the century-long climate change marathon.

Margaret Leinen, president of the American Geophysical Union (AGU): A Growing Threat to Academic Freedom

Keith Seitter, Executive Director of the American Meteorological Society (AMS): "The advancement of science depends on investigators having the freedom to carry out research objectively and without the fear of threats or intimidation whether or not their results are expedient or popular."

Chris Mooney's article in the Washington Post is very similar to mine, but naturally better written and with more quotes: Even as Congress investigates the global warming ‘pause,’ actual temperatures are surging

Letters to the Editor of the Washington Post: Eroding trust in scientific research. The writer, a Republican, is chairman of the House Committee on Science, Space and Technology and represents Texas’s 21st District in the House.

House science panel demands more NOAA documents on climate paper

Michael Halpern of the Union of Concerned Scientists in The Guardian: The House Science Committee Chair is harassing US climate scientists

And Then There's Physics on the hypocrisy of Judith Curry: NOAA vs Lamar Smith.

Michael Tobis: House Science, Space, and Technology Committee vs National Oceanic and Atmospheric Administration

Ars Technica: US Congressman subpoenas NOAA climate scientists over study. Unhappy with temperature data, he wants to see the e-mails of those who analyze it.

Ars Technica: Congressman continues pressuring NOAA for scientists’ e-mails. Rep. Lamar Smith seeks closed-door interviews, in the meantime.

Guardian: Lamar Smith, climate scientist witch hunter. Smith got more money from fossil fuels than he did from any other industry.

Wired: Congress’ Chief Climate Denier Lamar Smith and NOAA Are at War. It’s Benghazi, but for nerds. I hope the importance of independent science is also clear to people who do not consume it on a daily basis.

Mother Jones: The Disgrace of Lamar Smith and the House Science Committee.

Eddie Bernice Johnson, Democrat member of the Committee on Science from Texas, reveals temporal inconsistencies in the explanations offered by Lamar Smith for his harassment campaign.

Raymond S. Bradley in Huffington Post: Tweet This and Risk a Subpoena. "OMG! [NOAA] tweeted the results! They actually tried to communicate with the taxpayers who funded the research!"

David Roberts at Vox: The House science committee is worse than the Benghazi committee

Union of Concerned Scientists: The House Science Committee’s Witch Hunt Against NOAA Scientists

Reference

Karl, T.R., A. Arguez, B. Huang, J.H. Lawrimore, J.R. McMahon, M.J. Menne, T.C. Peterson, R.S. Vose and H. Zhang, 2015: Possible artifacts of data biases in the recent global surface warming hiatus. Science, 348, pp. 1469–1472, doi: 10.1126/science.aaa5632.

Thursday, 15 October 2015

Invitation to participate in a PhD research project on climate blogging

My name is Giorgos Zoukas and I am a second-year PhD student in Science, Technology and Innovation Studies (STIS) at the University of Edinburgh. This guest post is an invitation to the readers and commenters of this blog to participate in my project.

This is a self-funded PhD research project that focuses on a small selection of scientist-produced climate blogs, exploring the way these blogs connect into, and form part of, broader climate science communication. The research method involves analysis of the blogs’ content, as well as semi-structured in-depth interviewing of both bloggers and readers/commenters.

Anyone who comments on this blog, on a regular basis or occasionally, or anyone who just reads this blog without posting any comments, is invited to participate as an interviewee. The interview will focus on the person’s experience as a climate blog reader/commenter.*

The participation of readers/commenters is very important to this study, one of the main purposes of which is to increase our understanding of climate blogs as online spaces of climate science communication.

If you are interested in getting involved, or if you have any questions, please contact me at: G.Zoukas -at- sms.ed.ac.uk (Replace the -at- with the @ sign)

(Those who have already participated through my invitation on another climate blog do not need to contact me again.)

*The research complies with the University of Edinburgh’s School of Social and Political Sciences Ethics Policy and Procedures, and an informed consent form will have to be signed by both the potential participants (interviewees) and me.



VV: I have participated as a blogger. For science.

I was a little sceptical at first, given all the bad experiences with the everything-is-a-social-construct fundamentalists in the climate “debate”. But Giorgos Zoukas seems to be a good guy and gets science.

I even had to try to convince him that science is very social; science is hard to do on your own.

A good social environment, a working scientific community, increases the speed of scientific progress. That science is social does not mean that its imperfections lead to completely wrong results for social reasons, or that the results are just a social construct.

Sunday, 4 October 2015

Measuring extreme temperatures in Uccle, Belgium


Open thermometer shelter with a single set of louvres.

That changes in the measurement conditions can lead to changes in the mean temperature is hopefully known by most people interested in climate change by now. That such changes are likely even more important when it comes to weather variability and extremes is unfortunately less known. The topic is studied much too little given its importance for the study of climatic changes in extremes, which are expected to be responsible for a large part of the impacts from climate change.

Thus I was enthusiastic when a Dutch colleague sent me a news article on the topic from the homepage of the Belgian weather service, the Koninklijk Meteorologisch Instituut (KMI). It describes a comparison of two different measurement set-ups, old and new, made side by side in [[Uccle]], the main office of the KMI. The main difference is the screen used to protect the thermometer from the sun. In the past these screens were often more open, which makes ventilation better; nowadays they are more closed to reduce (solar and infrared) radiation errors.

The more closed screen is a [[Stevenson screen]], invented in the last decades of the 19th century. I had assumed that most countries had switched to Stevenson screens before the 1920s, but I recently learned that Switzerland changed in the 1960s and that Uccle changed in 1983. Making any change to the measurements is a difficult trade-off between improving the system and breaking the homogeneity of the climate record. It would be great to have an overview of such historical transitions in the way climate is measured for all countries.

I am grateful to the KMI for their permission to republish the story here. The translation, clarifications between square brackets and the related reading section are mine.



Closed thermometer screen with double-louvred walls [Stevenson screen].
In the [Belgian] media one reads regularly that the highest temperature in Belgium is 38.8°C and that it was recorded in Uccle on June 27, 1947. Sometimes, one also mentions that the measurement was conducted in an "open" thermometer screen. On warm days the question typically arises whether this record could be broken. In order to be able to respond to this, it is necessary to take some facts into account that we will summarize below.

It is important to know that temperature measurements are affected by various factors, the most important one being the type of thermometer screen in which the observations are carried out. One wants to measure the air temperature and therefore has to prevent a warming of the measuring equipment by protecting the instruments from the distorting effects of solar radiation. The type of thermometer screen is particularly important on sunny days and this is reflected in the observations.

Since 1983, the reference measurements of the weather station Uccle have been made in a completely "closed" thermometer screen [a Stevenson screen] with double-louvred walls. Until May 2006, the reference thermometers were mercury thermometers for the daily maximums and alcohol thermometers for the daily minimums. [A typical combination nowadays because mercury freezes at -38.8°C.] Since June 2006, the temperature measurements have been carried out continuously by means of an automatic sensor in the same type of closed screen.

Before 1983, the measurements were carried out in an "open" thermometer screen with only a single set of louvres, which on top of that offered no protection on the north side. For the reasons mentioned above, the maximum temperatures in this type of shelter were too high, especially during the summer period with intense sunshine. On July 19, 2006, one of the hottest days in Uccle, for example, the reference [Stevenson] screen measured a maximum temperature of 36.2°C, compared to 38.2°C in the "open" shelter on the same day.

As the air temperature measurements in the closed screen are more relevant, it is advisable to study the temperature records that would be or have been measured in this type of reference screen. Recently we have therefore adjusted the temperature measurements of the open shelter from before 1983, to make them comparable with the values from the closed screen. These adjustments were derived from the comparison between the simultaneous [parallel] observations measured in the two types of screens during a period of 20 years (1986-2005). Today we therefore have two long series of daily temperature extremes (minimum and maximum), beginning in 1901, corresponding to measurements from a closed screen.

When one uses the alignment method described above, the estimated value of the maximum temperature in a closed screen on June 27, 1947, is 36.6°C (while a maximum value of 38.8°C was measured in an open screen, as mentioned in the introduction). This value of 36.6°C should therefore be recognized as the record value for Uccle, in accordance with the current measurement procedures. [For comparison, David Parker (1994) estimated that the cooling from the introduction of Stevenson screens was less than 0.2°C in the annual means in North-West Europe.]

For the specialists, we note that the daily maximum temperatures shown in the synoptic reports of Uccle are usually up to a few tenths of a degree higher than the reference climatological observations mentioned previously. This difference can be explained by the time intervals over which the temperature is averaged in order to reduce the influence of atmospheric turbulence. The climatic extremes are calculated over a period of ten minutes, while the synoptic extremes are calculated from values that were averaged over a time span of one minute. In the future, we will make these calculation methods the same by always applying the climatic procedure.
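
[VV: To give an idea of how such an adjustment can be derived from parallel measurements, here is a minimal sketch. The numbers are invented and a constant offset is the simplest possible transfer function; the KMI adjustments are more sophisticated, so this toy example will not reproduce the 36.6°C quoted above.]

```python
import numpy as np

# Invented parallel daily maximum temperatures (°C) from the two screen types;
# KMI used 20 years (1986-2005) of simultaneous observations.
open_screen   = np.array([31.4, 33.0, 35.1, 38.2, 29.8])
closed_screen = np.array([30.1, 31.6, 33.4, 36.2, 29.0])

# Simplest possible transfer function: a constant offset.
offset = np.mean(closed_screen - open_screen)

# Apply the offset to a value measured in the open screen before 1983,
# for example the 38.8 °C of 27 June 1947.
adjusted = 38.8 + offset
print(f"offset = {offset:.2f} °C, adjusted value = {adjusted:.1f} °C")
```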

Related reading

KMI: Het meten van de extreme temperaturen te Ukkel [Measuring the extreme temperatures in Uccle]

To study the influence of such transitions in the way the climate is measured using parallel data we have started the Parallel Observations Science Team (ISTI-POST). One of the POST studies is on the transition to Stevenson screens, which is headed by Theo Brandsma. If you have such data please contact us. If you know someone who might, please tell them about POST.

Another parallel measurement showing huge changes in the extremes is discussed in my post: Be careful with the new daily temperature dataset from Berkeley

More on POST: A database with daily climate data for more reliable studies of changes in extreme weather

Introduction to series on weather variability and extreme events

On the importance of changes in weather variability for changes in extremes

A research program on daily data: HUME: Homogenisation, Uncertainty Measures and Extreme weather

Reference

Parker, David E., 1994: Effect of changing exposure of thermometers at land stations. International Journal of Climatology, 14, pp. 1–31, doi: 10.1002/joc.3370140102.