Saturday 6 June 2015

No! Ah! Part II. The return of the uncertainty monster



Some may have noticed that a new NOAA paper on the global mean temperature has been published in Science (Karl et al., 2015). It is minimally different from the previous one. The reason the press is interested, the reason this is a Science paper, and the reason the mitigation sceptics are not happy at all is that, due to these minuscule changes, the data no longer shows a "hiatus"; no statistical analysis is needed any more. That such paltry changes make so much difference shows the overconfidence of people talking about the "hiatus" as if it were a thing.

You can see the minimal changes, mostly less than 0.05°C, both warmer and cooler, in the top panel of the graph below. I made the graph extra large, so that you can see the differences. The thick black line shows the new assessment and the thin red line the previous estimated global temperature signal.



It reminds me of the time when a (better) interpolation of the data gap in the Arctic (Cowtan and Way, 2014) made the long-term trend almost imperceptibly larger, but changed the temperature signal enough to double the warming during the "hiatus". Again we see a lot of whining from the people who should not have built their political case on such a fragile feature in the first place. And we will see a lot more. And after that they will continue to act as if the "hiatus" is a thing. At least, after a few years of this dishonest climate "debate", I would be very surprised if they suddenly looked at all the data and made a fair assessment of the situation.

The most paradoxical are the mitigation sceptics who react by claiming that scientists are not allowed to remove biases due to changes in the way temperature was measured. Without accounting for the fact that old sea surface temperature measurements were biased to be too cool, global warming would be larger. Previously, I explained the reasons why the raw data shows more warming; you can see the effect in the bottom panel of the above graph. The black line shows NOAA's current best estimate for the temperature change, the thin blue (?) line the temperature change in the raw data. Only alarmists would prefer the raw temperature trend.



The trend changes over a number of periods are depicted above; the circles are the old dataset, the squares the new one. You can clearly see differences between the trends for the various short periods. Shifting the period by only two years creates large trend differences. This is another way to demonstrate that this feature is not robust.
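To get a feeling for how sensitive such short-period trends are, here is a minimal sketch (my own toy example, not NOAA's computation): it fits an ordinary least-squares trend to an annual global mean series and shows how much the trend of a roughly 15-year window moves when the start year shifts by two years. The series is synthetic and only stands in for real anomalies.

```python
import numpy as np

def decadal_trend(years, anomalies, start, end):
    """Ordinary least-squares trend in degrees C per decade over [start, end]."""
    mask = (years >= start) & (years <= end)
    slope_per_year = np.polyfit(years[mask], anomalies[mask], 1)[0]
    return 10.0 * slope_per_year

# Hypothetical annual global mean anomalies; replace with a real series.
rng = np.random.default_rng(0)
years = np.arange(1970, 2015)
anomalies = 0.017 * (years - years[0]) + rng.normal(0.0, 0.09, years.size)

# Shifting a ~15-year window by two years changes the short-term trend a lot,
# while the long-term trend barely moves.
for start in (1998, 2000, 2002):
    print(start, round(decadal_trend(years, anomalies, start, 2014), 3))
print("full period", round(decadal_trend(years, anomalies, 1970, 2014), 3))
```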

The biggest change in the dataset is that NOAA now uses the raw data of the land temperature database of the International Surface Temperature Initiative (ISTI). (Disclosure: I am a member of the ISTI.) This dataset contains many more stations than the previously used Global Historical Climatology Network (GHCNv3) dataset. (The land temperatures were homogenized with the same Pairwise Homogenization Algorithm (PHA) as before.)

The new trend in the land temperature is a little larger over the full period; see both graphs above. This was to be expected. The ISTI dataset contains many more stations and is now similar to that of Berkeley Earth, which already had a somewhat stronger temperature trend. Furthermore, we know that there is a cooling bias in the land surface temperatures; with more stations it is easier to see data problems by comparing stations with each other, and relative homogenization methods can remove a larger part of this trend bias.

However, the largest trend changes in recent periods are due to the oceans, i.e. the new Extended Reconstructed Sea Surface Temperature (ERSSTv4) dataset. Zeke Hausfather:
They also added a correction for temperatures measured by floating buoys vs. ships. A number of studies have found that buoys tend to measure temperatures that are about 0.12 degrees C (0.22 F) colder than is found by ships at the same time and same location. As the number of automated buoy instruments has dramatically expanded in the past two decades, failing to account for the fact that buoys read colder temperatures ended up adding a negative bias in the resulting ocean record.
It is not my field, but if I understand it correctly, other ocean datasets, COBE2 and HadSST3, already took these biases into account. Thus the difference between these datasets must have another reason. Understanding these differences would be interesting. Also, NOAA did not yet interpolate over the data gap in the Arctic, which would be expected to make its recent trends even stronger, just as it did for Cowtan and Way. They are working on that; the triangles in the above graph are with interpolation. Thus the recent trend is currently still understated.
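Zeke's description of the buoy correction can be illustrated with a toy calculation (my sketch; the names blended_sst and BUOY_OFFSET are made up, this is not the ERSSTv4 code). If buoys read about 0.12°C cooler than co-located ships and the buoy share of the measurements grows over time, a naive average of the raw readings drifts downwards even when the true sea surface temperature does not change; adding the offset to the buoy readings removes that spurious cooling.

```python
import numpy as np

BUOY_OFFSET = 0.12  # degrees C; buoys read ~0.12 C cooler than co-located ships

def blended_sst(ship_mean, buoy_mean, buoy_fraction, correct_buoys=True):
    """Combine ship and buoy averages for one region and month.

    With correct_buoys=False the growing buoy share drags the blend down
    even if the true SST is unchanged.
    """
    buoy = buoy_mean + (BUOY_OFFSET if correct_buoys else 0.0)
    return buoy_fraction * buoy + (1.0 - buoy_fraction) * ship_mean

true_sst = 20.0
for frac in (0.1, 0.5, 0.9):  # buoy share rising over the decades
    raw = blended_sst(true_sst, true_sst - BUOY_OFFSET, frac, correct_buoys=False)
    adj = blended_sst(true_sst, true_sst - BUOY_OFFSET, frac, correct_buoys=True)
    print(f"buoy fraction {frac:.1f}: raw {raw:.3f} C, adjusted {adj:.3f} C")
```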

Personally, I would be most interested in understanding the differences that are important for long-term trends, like the differences shown below in two graphs prepared by Zeke Hausfather. That is hard enough, and such questions are more likely answerable. The recent differences between the datasets are even tinier than the tiny "hiatus" itself; I have no idea whether they can be understood.





I need some more synonyms for tiny or minimal, but the changes are really small. They are well within the statistical uncertainty computed from the year-to-year fluctuations. They are well within the uncertainty due to the fact that we do not have measurements everywhere and need to interpolate. The latter is the typical confidence interval you see in historical temperature plots. For most datasets the confidence interval does not include the uncertainty due to biases not being perfectly removed. (HadCRUT does this partially.)
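As a rough illustration of the first kind of uncertainty (a sketch that ignores autocorrelation, which would make the interval wider still), the statistical uncertainty of a short trend can be estimated from the residual year-to-year scatter around the fit; changes of a few hundredths of a degree are small compared to this interval.

```python
import numpy as np
from scipy import stats

def trend_with_ci(years, anomalies, confidence=0.95):
    """OLS trend (C/decade) with a confidence interval from the residual scatter.

    Ignores serial correlation, so the real interval is somewhat wider.
    """
    res = stats.linregress(years, anomalies)
    half_width = stats.t.ppf(0.5 + confidence / 2, years.size - 2) * res.stderr
    return 10 * res.slope, 10 * half_width

# Hypothetical 15-year series with realistic year-to-year noise.
rng = np.random.default_rng(1)
years = np.arange(2000, 2015)
anoms = 0.01 * (years - 2000) + rng.normal(0, 0.09, years.size)
trend, half = trend_with_ci(years, anoms)
print(f"trend {trend:+.2f} +/- {half:.2f} C per decade (95% CI)")
```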

This uncertainty becomes relatively more important on short time scales (and for smaller regions); on long time scales and for large regions (global), many biases will compensate each other. For land temperatures a 15-year period is especially dangerous; that is about the typical period between two inhomogeneities (non-climatic changes).

The recent period is, in addition, especially tricky. We are just in an important transition from manual observations with thermometers in Stevenson screens to automatic weather stations. Not only is the measurement principle different, but also the siting. On top of this, it is difficult to find and remove inhomogeneities near the end of a series, because the computed mean after the inhomogeneity is based on only a few values and thus has a large uncertainty.
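Why breaks near the end of a series are so uncertain can be seen from the simplest possible estimator of a jump, the difference between the means before and after the break (a toy sketch, not a real homogenization algorithm): with only a few values after the break, the standard error of the estimated jump is dominated by that short segment.

```python
import numpy as np

def jump_estimate(difference_series, break_index):
    """Estimate a break size in a station-minus-neighbour difference series
    as the change in mean, with a rough standard error (white-noise assumption)."""
    before = difference_series[:break_index]
    after = difference_series[break_index:]
    jump = after.mean() - before.mean()
    stderr = np.sqrt(before.var(ddof=1) / before.size + after.var(ddof=1) / after.size)
    return jump, stderr

rng = np.random.default_rng(2)
series = rng.normal(0.0, 0.3, 30)
series[-3:] += 0.5                # a 0.5 C break only three years before the end
print(jump_estimate(series, 27))  # large standard error: few values after the break
```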

You can get some idea of how large this uncertainty is by comparing the short-term trends of two independent datasets. Ed Hawkins has compared the new NOAA (USA) dataset and the current HadCRUT4.3 (UK) dataset at Climate Lab Book and presented these graphs:



By request, he kindly computed the difference between these 10-year trends, shown below. They suggest that if you are interested in short-term trends smaller than 0.1°C per decade (say, the "hiatus"), you should study whether your data quality is good enough to be able to interpret the variability as being due to the climate system. The variability should be large enough or have a strong regional pattern (say, El Niño).

If the variability you are interested in is somewhat bigger than 0.1°C, you probably still want to put in some work. Both datasets are based on much of the same data and use similar methods. For homogenization of surface stations we know that it can reduce biases, but not fully remove them. Thus part of the bias will be the same for all datasets that use statistical homogenization. The difference shown below is therefore an underestimate of the uncertainty, and it will need analytic work to compute the real uncertainty due to data quality.
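If you want to make such a comparison yourself, a minimal sketch looks like the following (assuming you have two annual global mean series on the same years, for example downloaded NOAA and HadCRUT4 anomalies; the synthetic series below only stand in for them): compute running 10-year trends for both datasets and look at their difference, whose spread is, as argued above, a lower bound on the trend uncertainty due to data quality.

```python
import numpy as np

def running_decadal_trends(years, anomalies, window=10):
    """Trend (C/decade) of every `window`-year period, labelled by its start year."""
    starts, trends = [], []
    for i in range(years.size - window + 1):
        slope = np.polyfit(years[i:i + window], anomalies[i:i + window], 1)[0]
        starts.append(years[i])
        trends.append(10 * slope)
    return np.array(starts), np.array(trends)

def trend_differences(years, series_a, series_b, window=10):
    """Difference of running decadal trends between two datasets on the same years."""
    starts, trends_a = running_decadal_trends(years, series_a, window)
    _, trends_b = running_decadal_trends(years, series_b, window)
    return starts, trends_a - trends_b

# Demo with synthetic series standing in for the two datasets.
rng = np.random.default_rng(3)
years = np.arange(1950, 2015)
common = 0.015 * (years - 1950) + rng.normal(0, 0.08, years.size)
noaa_like = common + rng.normal(0, 0.02, years.size)     # small dataset-specific differences
hadcrut_like = common + rng.normal(0, 0.02, years.size)
starts, diff = trend_differences(years, noaa_like, hadcrut_like)
print("max |trend difference|:", round(np.abs(diff).max(), 3), "C/decade")
```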



[UPDATE. I thought I had an interesting new angle, but now see that Gavin Schmidt, director of NASA GISS, has been saying this in newspapers since the start: “The fact that such small changes to the analysis make the difference between a hiatus or not merely underlines how fragile a concept it was in the first place.”]

Organisational implications

To reduce the uncertainties due to changes in the way we measure climate, we need to make two major organizational changes: we need to share all climate data with each other to better study the past, and for the future we need to build up a climate reference network. These are, unfortunately, not things climatologists can do alone; they need action by politicians and support from their voters.

To quote from my last post on data sharing:
We need [to share all climate data] to see what is happening to the climate. We already had almost a degree of global warming and are likely in for at least another one. This will change the sea level, the circulation, precipitation patterns. This will change extreme and severe weather. We will need to adapt to these climatic changes and to know how to protect our communities we need climate data. ...

To understand climate, we need a global overview. National studies are not enough. To understand changes in circulation, interactions with mountains and vegetation, to understand changes in extremes, we need spatially resolved information and not just a few stations. ...

To reduce the influence of measurement errors and non-climatic changes (inhomogeneities) on our (trend) assessments we need dense networks. These errors are detected and corrected by comparing one station to its neighbours. The closer the neighbours are, the more accurate we can assess the real climatic changes. This is especially important when it comes to changes in severe and extreme weather, where the removal of non-climatic changes is very challenging. ... For the best possible data to protect our communities, we need dense networks, we need all the data there is.
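The neighbour comparison mentioned in the quote works roughly as follows (a much simplified sketch, not the Pairwise Homogenization Algorithm itself): subtracting the mean of nearby stations removes most of the shared climate signal, so a non-climatic jump in the candidate station stands out, and the denser the network, the smaller the noise in this difference series.

```python
import numpy as np

def difference_series(candidate, neighbours):
    """Candidate minus the mean of its neighbours.

    The shared climate signal largely cancels, so non-climatic jumps in the
    candidate stand out; the denser the network, the smaller the noise.
    """
    return candidate - np.mean(neighbours, axis=0)

rng = np.random.default_rng(4)
climate = np.cumsum(rng.normal(0.02, 0.1, 60))       # shared regional signal
neighbours = climate + rng.normal(0, 0.2, (5, 60))   # five nearby stations
candidate = climate + rng.normal(0, 0.2, 60)
candidate[30:] += 0.8                                 # a non-climatic jump (e.g. a relocation)
diff = difference_series(candidate, neighbours)
print("mean before break:", round(diff[:30].mean(), 2),
      " after break:", round(diff[30:].mean(), 2))
```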
The main governing body of the World Meteorological Organization (WMO) is meeting right now, until Friday next week (12 June). They are debating a resolution on climate data exchange. To show your support for the free exchange of climate data, please retweet or favourite the tweet below.

We are conducting a (hopefully) unique experiment with our climate system. Future generations of climatologists would not forgive us if we did not observe as well as we can how our climate is changing. To make expensive decisions on climate adaptation, mitigation and burden sharing, we need reliable information on climatic changes: merely piggy-backing on meteorological observations is not good enough. We can improve data using homogenization, but homogenized data will always have much larger uncertainties than truly homogeneous data, especially when it comes to long-term trends.

To quote my virtual boss at the ISTI Peter Thorne:
To conclude, worryingly not for the first time (think tropospheric temperatures in late 1990s / early 2000s) we find that potentially some substantial portion of a model-observation discrepancy that has caused a degree of controversy is down to unresolved observational issues. There is still an undue propensity for scientists and public alike to take the observations as a 'given'. As [this study by NOAA] attests, even in the modern era we have imperfect measurements.

Which leads me to a final proposition for a more scientifically sane future ...

This whole train of events does rather speak to the fact that we can and should observe in a more sane, sensible and rational way in the future. There is no need to bequeath onto researchers in 50 years time a similar mess. If we instigate and maintain reference quality networks that are stable SI traceable measures with comprehensive uncertainty chains such as USCRN, GRUAN etc. but for all domains for decades to come we can have the next generation of scientists focus on analyzing what happened and not, depressingly, trying instead to inevitably somewhat ambiguously ascertain what happened.
Building up such a reference network is hard because we will only see the benefits much later. But already now, after about 10 years, the USCRN provides evidence that the siting of stations is in all likelihood not a large problem in the USA. The US reference network, with stations at perfectly sited locations, not affected by urbanization or micro-siting problems, shows about the same trend as the homogenized historical USA temperature data. (The reference network even has a somewhat larger, though not statistically significant, trend.)

A number of scientists are working on trying to make this happen. If you are interested, please contact me or Peter. We will have to design such reference networks, show how much more accurate they would make climate assessments (together with the existing networks) and then lobby to make it happen.



Further reading

Metrologist Michael de Podesta seems to agree with the above post and wrote about the overconfidence of the mitigation sceptics in the climate record.

Zeke Hausfather: Whither the pause? NOAA reports no recent slowdown in warming. This post provides a comprehensive, very readable (I think) overview of the NOAA article.

A similarly well-informed article can be found on Ars Technica: Updated NOAA temperature record shows little global warming slowdown.

The HotWhopper post gives the most scientific background, short of reading the NOAA article itself.

Peter Thorne of the ISTI on The Karl et al. Science paper and ISTI. He gives more background on the land temperatures and makes a case for global climate reference networks.

Ed Hawkins compares the new NOAA dataset with HadCRUT4: Global temperature comparisons.

Gavin Schmidt, as a climate modeller, explains how well the new dataset fits the climate projections: NOAA temperature record updates and the ‘hiatus’.

Chris Merchant found about the same recent trend in his satellite sea surface temperature dataset and writes: No slowdown in global temperature rise?

HotWhopper discusses the most egregious errors of the first two WUWT posts on Karl et al. and an unfriendly email from Anthony Watts to NOAA. I hope HotWhopper is not planning any holidays; it will be busy times. Peter Thorne has the real back story.

NOAA press release: Science publishes new NOAA analysis: Data show no recent slowdown in global warming.

Thomas R. Karl, Anthony Arguez, Boyin Huang, Jay H. Lawrimore, James R. McMahon, Matthew J. Menne, Thomas C. Peterson, Russell S. Vose, Huai-Min Zhang, 2015: Possible artifacts of data biases in the recent global surface warming hiatus. Science. doi: 10.1126/science.aaa5632.

Boyin Huang, Viva F. Banzon, Eric Freeman, Jay Lawrimore, Wei Liu, Thomas C. Peterson, Thomas M. Smith, Peter W. Thorne, Scott D. Woodruff, and Huai-Min Zhang, 2015: Extended Reconstructed Sea Surface Temperature Version 4 (ERSST.v4). Part I: Upgrades and Intercomparisons. Journal of Climate, 28, pp. 911–930, doi: 10.1175/JCLI-D-14-00006.1.

Rennie, Jared, Jay Lawrimore, Byron Gleason, Peter Thorne, Colin Morice, Matthew Menne, Claude Williams, Waldenio Gambi de Almeida, John Christy, Meaghan Flannery, Masahito Ishihara, Kenji Kamiguchi, Albert Klein Tank, Albert Mhanda, David Lister, Vyacheslav Razuvaev, Madeleine Renom, Matilde Rusticucci, Jeremy Tandy, Steven Worley, Victor Venema, William Angel, Manola Brunet, Bob Dattore, Howard Diamond, Matthew Lazzara, Frank Le Blancq, Juerg Luterbacher, Hermann Maechel, Jayashree Revadekar, Russell Vose, Xungang Yin, 2014: The International Surface Temperature Initiative global land surface databank: monthly temperature data version 1 release description and methods. Geoscience Data Journal, 1, pp. 75–102, doi: 10.1002/gdj3.8.

9 comments:

  1. "I need some more synonyms for tiny or minimal, but the changes are really small."

    Minute
    Infinitesimal
    Miniscule

    Virtuoso post, Victor.

  2. The graph is labelled "Global Average Temperature by Year"?
    Aren't we talking about "Global Average Surface Temperature"?

    Considering that the global surface temperature contains roughly 10% of the heat within our global heat distribution engine isn't that grossly misleading?

    After all, consider the average citizen who looks at that? If we/they can't even get simple complexities like that across, it's no wonder that public perceptions are so confused. {excuse me for sounding like a broken record, but when it comes to communicating science such subtleties are pivotal. }

  3. Yes, the default temperature is the surface temperature.

    No, that is not misleading. The graph shows temperature.

    90% of the heat increase is in the oceans, but the graph is not labelled "atmospheric heat content", which would be just a few percent. (Do not forget the ice.)

    Looking at the heat content is convenient for people focussed on short-term changes because it is less noisy. It is useful to assess the radiative imbalance and if you want to debunk a political activist that stupidly claims that global warming has stopped.

    But temperature is important for humans, and its change is thus also important. In addition, temperature is something people know; heat content is very abstract. Then we still have the changes in precipitation, in the acidity of the ocean, in the variability of the weather, in severe weather, in specific weather phenomena, in the timing of biological processes, in the land temperature and in the ocean temperature, in the Southern and in the Northern Hemisphere. There are many ways to look at climate change; they all have their strengths and weaknesses, and different datasets have different lengths, resolutions and quality. A good scientist looks at all of them.

  4. The new adjustments by NOAA seem fair and justified, and it is quite natural that climate sceptics don't like them for ideological reasons.
    However, the hiatus can be erased by the opposite approach as well, i.e. using no adjustments at all. Actually, the highest trend of all global temperature indices in the 21st century is shown by Nick Stokes' TempLSmesh (1.358 C/century), which uses unadjusted GHCNv3 + ERSSTv3. Running TempLSmesh with adjusted data reduces the trend by about 0.3 C. The reason for this effect is mainly the seemingly unfair down-adjustment of the few high Arctic temperature stations (with big area importance), which have been warming so fast that the homogenisation algorithms find them "suspicious". Cowtan and Way have addressed this issue as well. (Other important areas of the world that are cooled by GHCN are Sudan and the Amundsen-Scott base.)
    Right now, I can't see that Karl et al. 2015 have addressed the unfair down-adjustments of Arctic stations, i.e. no change of the pairwise comparison procedure. However, the introduction of more high Arctic stations should theoretically fix the bias, by giving more support that the rapid warming is real. But the land trend has not changed with the NOAA revision; the main difference is with the SST...

    I wonder what will happen if Nick Stokes converts to ERSSTv4 and GHCNv4 (unadjusted)? Will the trend become 1.7 C/century?

  5. Olof R, that is interesting. Do you have the link to that post of Nick Stokes?

    The homogenization algorithm of GHCNv3 suppressed the strong warming in the Arctic because of a combination of a large spatial gradient in the trend and having only a few stations in the Arctic. Relative homogenization algorithms assume that the neighbouring stations have the same climate signal, and that is no longer true under such circumstances. The number of stations is now much larger, so this problem should indeed be much reduced now.

    It would be interesting to investigate how much this contributes. There would be a compensating effect for the global mean, in that the larger dataset probably also allowed for a better correction of cooling biases in the raw data.

    There is work coming up on GHCNv4. That will also use the ISTI dataset as its basis. Thus we may soon learn more about the reasons.

  6. I suspect the new ISTI-based land record from Karl et al. will not have as large an Arctic issue for the same reason that Berkeley seems not to: significantly more station coverage. It will be interesting to look at Iceland as well, since GHCNv3 likely has an issue with removing localized climate change there too.

  7. Victor, I am a frequent reader of Nick's blog, and I have picked up stuff and inspiration from several blog posts: gadgets for trend viewing, comparing effects of adjustment, etc. But I think that this is one of the more important posts:
    http://www.moyhu.blogspot.com.au/2015/02/homogenisation-makes-little-difference.html
    with a tool that enables trend comparison of adjusted and unadjusted TempLS versions and other global indices.
    It is only TempLSmesh, which uses a full global kriging-like infill, that gets a +0.3 trend with unadjusted GHCN data.
    I guess that the reason for this is that down-adjustments by GHCN happen to be more common in the vicinity of "empty areas" (the Arctic, eastern Sahara, the South Pole), hence the infill procedure gives those stations a relatively larger weight by area.

  8. Olof R, sorry for the question, I should have thought first and simply looked at a previous post on the differences due to homogenization.

    There is an effect for GHCNv3, but it is really small. The raw data has a somewhat stronger recent trend than the homogenized GHCNv3 data.

    Interestingly, the HadSST3 dataset, which already includes the SST adjustments that NOAA has now made, shows, like NOAA, a somewhat stronger warming in the recent trend due to homogenization.

    The net effect for the global temperature is a small recent trend increase due to homogenization.

  9. Victor, C&W have a paper on the GHCN v3 Arctic cooling bias: http://www-users.york.ac.uk/~kdc3/papers/coverage2013/update.140404.pdf
    but I guess you have already read it...

