Friday, 29 July 2022

The 10th anniversary of the still unpublished Watts et al. (2012) manuscript

Anthony Watts:
Something’s happened. From now until Sunday July 29th [2012], around Noon PST, WUWT will be suspending publishing. At that time, there will be a major announcement that I’m sure will attract a broad global interest due to its controversial and unprecedented nature.

Watts suspended his holiday plans and put his blog on hold over the weekend to work on something really important. With this announcement PR expert Watts created a nice buzz. Out came a deeply flawed manuscript on the influence of the immediate surroundings of weather stations (micro-siting) on temperature trends.

Even before reading it, the science internet was disappointed. David Appell responded: "Clunk. That, to me, seems to be the sound of the drama queen's preprint hitting the Internet." William Connolley: "Watts disappoints ... its just a paper preprint. All over the world scientists produce draft papers and send them off for peer review. Only dramah queens pimp them up like this."

Roger Pielke Sr. burned another part of his scientific reputation, built by his regional climate modelling work, by writing a press release about his godson's manuscript:

"This paper is a game changer ... this type of analysis should have been performed by Tom Karl and Tom Peterson at NCDC, Jim Hansen at GISS and Phil Jones at the University of East Anglia (and Richard Muller). However, they apparently liked their answers and did not want to test the robustness of their findings.. ... Anthony’s new results also undermine the latest claims by Richard Muller of BEST ... His latest BEST claims are, in my view, an embarrassment."

After all the obvious problems became clear, problems which this eminent scientist somehow could not find himself, he wrote a new blog post:

"To be very specific, I did not play a role in their data analysis. He sent me the near final version of the discussion paper and I recommended added text and references. I am not a co-author on their paper. I am now working with them to provide suggestions as to how to examine the TOB question regarding its effect on the difference in the trends found in Watts et al 2012."

The Watts et al. (2012) study is so fundamentally wrong in its basic design and execution that it is still not published now, ten years later. Watts naturally keeps citing it to claim one cannot trust observed temperature trends. This fits his new job at the Heartland Institute, an organization so immoral that it still works for Big Tobacco.

Below you can find some details on a recent study from Italy, which suggests that had Watts' study been done right, it would have found that micro-siting is a minor problem for climate trends.

The question of how micro-siting influences temperature observations is an interesting one. Expecting to see an influence on trends is another matter. I have no clue how that was supposed to work and Watts et al. (2012) also did not explain the extraordinary physics.

Even if such a thing existed, Watts et al. (2012) could not have found convincing evidence on trends. The most fundamental problem of the study setup is that it tries to analyse trends, which requires at least two points in time, while it only had siting information for one point in time. Why this is a problem was explained well at the time by Pete:

Someone has a weather station in a parking lot. Noticing their error, they move the station to a field, creating a great big cooling-bias inhomogeneity. Watts comes along, and seeing the station correctly set up says: this station is sited correctly, and therefore the raw data will provide a reliable trend estimate.

To see an influence of micro-siting you need something to compare with. You either need two points in time with information on micro-siting, or two or more points in space. Our Italian metrological colleagues of Meteomet (metrology is the science of measurement, not to be confused with meteorology, the science of the weather) did the latter.

Coppa et al. (2021) installed a weather station only one meter from a road and, as a comparison, a weather station 100 meters away from the road, perfectly sited in the middle of a grass field. More precisely, they installed seven stations at 1, 5, 10, 20, 30, 50 and 100 m from the road, a two-lane asphalt road that lies half a meter above the grass and leads to an airport near Turin, Italy.

Climatologically the most important plot in the paper is the one below. Let me walk you through it. On the y-axis are the temperature differences in degrees Celsius compared to the seventh station, the one 100 meters from the road. The plot shows six box-plot triplets; these are the six temperature differences. The three colors are for the daily maximum temperature (white), the daily average temperature (red) and the daily minimum temperature (blue). Careful: it is more common for red to denote the maximum temperature. The thick part of each box plot spans 50% of all observed temperature differences, and the horizontal bar inside it marks the mean temperature difference.
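
If you want to play with this kind of plot yourself, a minimal sketch in Python could look like the following. The ΔT values are made-up placeholders to illustrate the layout, not the data of Coppa et al. (2021).

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    distances = [1, 5, 10, 20, 30, 50]  # meters from the road; the station at 100 m is the reference

    # Made-up daily temperature differences (station minus 100 m reference) in °C
    data = {d: {"Tmax": rng.normal(0.12 * (1 / d) ** 0.3, 0.25, 365),
                "Tavg": rng.normal(0.20 * (1 / d) ** 0.3, 0.15, 365),
                "Tmin": rng.normal(0.30 * (1 / d) ** 0.3, 0.15, 365)}
            for d in distances}

    fig, ax = plt.subplots()
    colors = {"Tmax": "white", "Tavg": "red", "Tmin": "blue"}
    for i, d in enumerate(distances):
        for j, var in enumerate(["Tmax", "Tavg", "Tmin"]):
            # showmeans/meanline draw the mean as a horizontal line inside the box
            bp = ax.boxplot(data[d][var], positions=[4 * i + j], widths=0.8,
                            patch_artist=True, showmeans=True, meanline=True,
                            showfliers=False)
            bp["boxes"][0].set_facecolor(colors[var])
    ax.axhline(0, color="grey", linewidth=0.5)
    ax.set_xticks([4 * i + 1 for i in range(len(distances))])
    ax.set_xticklabels([f"{d} m" for d in distances])
    ax.set_ylabel("ΔT relative to the station at 100 m (°C)")
    plt.show()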

So the temperature difference between the station closest to the road and the well-sited station is ΔT₁, the triplet on the left. The maximum temperature close to the road is 0.12 °C warmer, the average temperature is about 0.2 °C warmer and the minimum temperature is 0.3 °C warmer. With increasing distance from the road these small effects gradually become smaller, which gives confidence that the differences, while small, are real. This is somewhat less true for the maximum temperature, which behaves more erratically.

This metrological study is important for climatology, even if it basically found a null effect. Understanding uncertainties in measurements helps us focus on the real problems. Unfortunately such studies are not cited much, and too often the importance of science is judged by the number of citations. This study clearly illustrates why that is a bad way to micro-manage science.

What does this mean for observed global warming trends? To make a worst-case estimate, one could assume that all stations were perfectly sited on lush grasslands in the past and are now close to a road in a subtropical climate with harsh sunlight, giving a trend error of 0.2 °C in the mean temperature of land stations, which represent about a third of the Earth's surface. Even with such unrealistic assumptions, this would change the global temperature trend by much less than 10%.
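
As a back-of-the-envelope check of that claim (the land fraction and the observed warming below are my own round numbers, not values from the paper):

    # Worst case: every land station drifts from perfect siting to right next to
    # a road and picks up the roughly 0.2 °C bias in the mean temperature.
    siting_bias_land = 0.2   # °C, assumed worst-case bias for land stations
    land_fraction = 1 / 3    # rough share of the Earth's surface that is land
    observed_warming = 1.1   # °C, round number for observed global warming

    global_bias = siting_bias_land * land_fraction
    print(f"Global-mean bias: {global_bias:.2f} °C")                            # about 0.07 °C
    print(f"Share of observed warming: {global_bias / observed_warming:.0%}")   # about 6%, well below 10%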

The opposite scenario might be more realistic. Climate stations often started close to buildings as the then expensive scientific instruments had to be read by observers. Nowadays it is easy to build an automatic climate station with autonomous power and radio communication far from buildings.

The upside of this being the 10th anniversary is that people could check the micro-siting of the stations again and thus have two points in time. It would likely give a null result, but that would also be a valid result.

Related reading

My quick review of the Watts et al. (2012) manuscript.

Reference

Coppa, G., Quarello, A., Steeneveld, G.-J., Jandrić, N., and Merlone, A., 2021: Metrological evaluation of the effect of the presence of a road on near-surface air temperatures. International Journal of Climatology, 41, 3705–3724. https://doi.org/10.1002/joc.7044
Leeper, R.D., Kochendorfer, J., Henderson, T.A., and Palecki, M.A., 2019: Impacts of Small-Scale Urban Encroachment on Air Temperature Observations. Journal of Applied Meteorology and Climatology, 58, 1369–1380. https://doi.org/10.1175/JAMC-D-19-0002.1

9 comments:

  1. Interesting discussion. One point/question:

    "The thick part of the box plots spans 50% of all observed temperature differences, the horizontal bar inside it the mean temperature difference."

    Unless these are non-standard boxplots the horizontal bar represents the MEDIAN temp diff, not the mean, no?

    Replies
    1. The median is more traditional, but I have seen both. I have even made boxplots with both the mean (cross) and the median (line), as the mean is a pretty important number.

      For this dataset I do not expect much difference between mean and median. When the authors discuss the boxplot they mention these average differences, so I have assumed that is what they computed and plotted.

    2. Fair enough, although I do find that an odd choice, since the mean doesn't fit with the rest of the five number summary.

      Enjoyed the discussion -- need to say that again.
      Lee

  2. I remember back around 2005, when RP Sr. was first peddling this stuff, I suggested to him that he run a few experiments along these lines to see if there was a significant effect. He declined, saying he lacked funding. It would have been a very modest grant proposal, and at the time he was still CO state climatologist and hadn't entirely wrecked his reputation.

    I say not entirely since there was an incident a couple years prior where he had been appointed chair of a committee (maybe an NAS subcommittee, but I'm not sure) tasked with issuing a report on something closely related and refused to let the report get filed even though he was the only dissenter. Ultimately they simply appointed another chair, leading to much squawkage from RP Sr. IIRC it's why he started his blog. But perhaps this bizarre episode was some value of entirely, at least as regards getting grants to test his wacky ideas.

  3. I think the most convincing refutation is that ClimDiv, made up of these "corrupted" stations, and USCRN, purpose-built and which most people accept as being well sited, give identical results:
    https://www.ncei.noaa.gov/access/monitoring/national-temperature-index/time-series/anom-tavg/12/12

    Replies
    1. That would be a strong argument for normal people. While writing the post I had thought of this comparison and had wanted to revisit it. Last time I looked, a few years ago, you could already see this slightly faster warming of USCRN. It starts to become scientifically interesting.

      P.S. I did not publish your second comment. I do not like linking to misinformation without debunking it. But thanks for the heads up.

    2. I took the data from the link in Nick's post, subtracted USCRN − ClimDiv for 2005 to present, and calculated trend and trend confidence to get 0.16 ± 0.04 °F/decade (2 sigma) as the trend difference.

      If I've done this correctly and USCRN is assumed "right" it suggests the ClimDiv series is trending cold.
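
      For what it is worth, a minimal sketch of that calculation in Python could look like the one below. The file and column names are placeholders; the actual CSV from the NCEI link above may be organised differently.

        import numpy as np
        import pandas as pd

        # Hypothetical CSV with the monthly anomalies from the NCEI page linked above
        df = pd.read_csv("uscrn_vs_climdiv.csv")    # columns: Date, USCRN, ClimDiv (°F anomalies)
        diff = (df["USCRN"] - df["ClimDiv"]).to_numpy()

        t = np.arange(len(diff)) / 120.0            # monthly data: 120 months per decade

        # Ordinary least-squares trend of the difference series
        slope, intercept = np.polyfit(t, diff, 1)

        # Naive 2-sigma uncertainty of the slope (ignores autocorrelation)
        resid = diff - (intercept + slope * t)
        se = np.sqrt(np.sum(resid**2) / (len(t) - 2) / np.sum((t - t.mean())**2))

        print(f"USCRN - ClimDiv trend: {slope:.2f} ± {2 * se:.2f} °F/decade (2 sigma)")
        print(f"That is about {slope * 5 / 9 * 10:.2f} °C per century")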

    3. I am not so good with Fahrenheit, but that is almost a degree Celsius per century, right? That is climatologically significant. When I computed it last time I was surprised, like now, at how small the uncertainty is. The temperature of such a small piece of land as America is highly variable, but the noise in the difference is surprisingly small.

      That makes a blog post.

      Your unit "per decade" could be more intuitive as it could be something only seen near the end of the series: A good explanation could be that statistical homogenization cannot improve the trend estimates much near the edges because there is not much data after any breaks to detect them. So near the edges the trend in homogenized data is likely more like the raw data than like the actual climatic trend. I am not aware of a paper showing this. This result suggests that writing such a paper would be valuable.

