Some may have noticed that a new NOAA paper on the global mean temperature has been published in Science (Karl et al., 2015). The dataset is minimally different from the previous one. The reason the press is interested, the reason this is a Science paper, and the reason the mitigation sceptics are not happy at all, is that due to these minuscule changes the data no longer show a "hiatus"; no statistical analysis is needed any more. That such paltry changes make so much difference shows the overconfidence of people talking about the "hiatus" as if it were a thing.
You can see the minimal changes, mostly less than 0.05°C, both warmer and cooler, in the top panel of the graph below. I made the graph extra large, so that you can see the differences. The thick black line shows the new assessment and the thin red line the previous estimated global temperature signal.
It reminds me of the time when a (better) interpolation of the data gap in the Arctic (Cowtan and Way, 2014) made the long-term trend almost imperceptibly larger, but changed the temperature signal enough to double the warming during the "hiatus". Again we see a lot of whining from the people who should not have built their political case on such a fragile feature in the first place. And we will see a lot more. And after that they will continue to act as if the "hiatus" is a thing. After a few years of this dishonest climate "debate", at least, I would be very surprised if they suddenly looked at all the data and made a fair assessment of the situation.
The most paradoxical are the mitigation sceptics who react by claiming that scientists are not allowed to remove biases due to changes in the way temperature was measured. Without accounting for the fact that old sea surface temperature measurements were biased to be too cool, global warming would be larger.
Previously I explained the reasons why raw data shows more warming and you can see the effect in the bottom panel of the above graph. The black line shows NOAA's current best estimate for the temperature change, the thin blue (?) line the temperature change in the raw data. Only alarmists would prefer the raw temperature trend.
The trend changes over a number of periods are depicted above; the circles are the old dataset, the squares the new one. You can clearly see differences between the trends for the various short periods. Shifting the period by only two years creates a large trend difference. That is another way to demonstrate that this feature is not robust.
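For programming-minded readers, here is a minimal sketch of that fragility, using synthetic data rather than NOAA's actual series or code; the noise level and trend are assumptions chosen to be roughly realistic:

```python
# Minimal sketch with synthetic data: sensitivity of short-term trends
# to the choice of analysis period.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1970, 2015)
# Hypothetical annual anomalies: steady 0.17 degC/decade warming plus noise.
anomalies = 0.017 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

def trend_per_decade(start, end):
    """OLS trend in degC per decade over the years start..end inclusive."""
    mask = (years >= start) & (years <= end)
    return 10.0 * np.polyfit(years[mask], anomalies[mask], 1)[0]

# Shifting the start year by only two years changes the short-term trend a lot.
for start in (1998, 2000):
    print(f"{start}-2014: {trend_per_decade(start, 2014):+.2f} degC/decade")
```

With only 15 or so annual values, the year-to-year noise dominates the estimated slope; run it with a few different seeds to see how unstable such short trends are.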
The biggest change in the dataset is that NOAA now uses the raw data of the land temperature database of the International Surface Temperature Initiative (ISTI). (Disclosure: I am a member of the ISTI.) This dataset contains many more stations than the previously used Global Historical Climate Network (GHCNv3) dataset. (The land temperatures were homogenized with the same Pairwise Homogenization Algorithm (PHA) as before.)
The new trend in the land temperature is a little larger over the full period; see both graphs above. This was to be expected. The ISTI dataset contains many more stations and is now similar in size to that of Berkeley Earth, which already had a somewhat stronger temperature trend. Furthermore, we know that there is a cooling bias in the land surface temperatures; with more stations it is easier to see data problems by comparing stations with each other, and relative homogenization methods can remove a larger part of this trend bias.
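For readers who want the gist of relative homogenization, here is a toy single-breakpoint sketch, much simpler than the actual PHA: because neighbouring stations share the regional climate signal, a non-climatic jump in one station stands out in the difference series with its neighbour. All numbers below are illustrative assumptions.

```python
# Toy relative homogenization (illustrative only, not the PHA):
# detect and correct one break using a neighbouring station.
import numpy as np

rng = np.random.default_rng(1)
n = 60  # years of annual means
regional = np.cumsum(rng.normal(0.02, 0.05, n))  # shared climate signal
station = regional + rng.normal(0.0, 0.1, n)
station[35:] += 0.4                              # non-climatic break (e.g. relocation)
neighbour = regional + rng.normal(0.0, 0.1, n)

diff = station - neighbour   # climate signal cancels, the break remains
d = diff - diff.mean()
# SNHT-like statistic: pick the split that maximizes the squared mean shift.
scores = [k * d[:k].mean() ** 2 + (n - k) * d[k:].mean() ** 2
          for k in range(5, n - 5)]
k_hat = 5 + int(np.argmax(scores))

jump = diff[k_hat:].mean() - diff[:k_hat].mean()  # estimated break size
homogenized = station.copy()
homogenized[k_hat:] -= jump
print(f"break found at year index {k_hat}, estimated jump {jump:+.2f} degC")
```

The more (and closer) neighbours you have, the better the regional signal cancels in the difference series, which is why the larger ISTI network helps.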
However, the largest trend changes in recent periods are due to the oceans, that is, to the new Extended Reconstructed Sea Surface Temperature (ERSST v4) dataset. Zeke Hausfather:
They also added a correction for temperatures measured by floating buoys vs. ships. A number of studies have found that buoys tend to measure temperatures that are about 0.12 degrees C (0.22 F) colder than is found by ships at the same time and same location. As the number of automated buoy instruments has dramatically expanded in the past two decades, failing to account for the fact that buoys read colder temperatures ended up adding a negative bias in the resulting ocean record.
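To illustrate the kind of adjustment Zeke describes, here is a minimal sketch; the function name and the simple averaging are my assumptions, ERSSTv4 itself combines the data more carefully:

```python
# Illustrative ship-buoy merge (not ERSSTv4's actual procedure):
# buoys read about 0.12 degC cooler than ships, so shift buoy values
# onto the ship reference before combining them.
import numpy as np

SHIP_BUOY_OFFSET = 0.12  # degC, from collocated ship-buoy comparisons

def merge_sst(ship_sst, buoy_sst):
    """Average collocated ship and buoy SSTs on a common reference.

    Both inputs are arrays with NaN where a platform has no observation.
    """
    adjusted_buoys = buoy_sst + SHIP_BUOY_OFFSET
    return np.nanmean(np.vstack([ship_sst, adjusted_buoys]), axis=0)

# Example: a grid cell sampled mostly by buoys in recent years.
ship = np.array([0.30, np.nan, np.nan, 0.28])
buoy = np.array([np.nan, 0.15, 0.17, 0.16])
print(merge_sst(ship, buoy))  # buoy-only values are lifted by 0.12 degC
```

Without the offset, the growing share of buoy observations would drag recent averages down, which is exactly the spurious cooling the correction removes.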
It is not my field, but if I understand it correctly, other ocean datasets, COBE2 and HadSST3, already took these biases into account. Thus the difference between these datasets needs to have another reason. Understanding these differences would be interesting. Also, NOAA did not yet interpolate over the data gap in the Arctic, which would be expected to make its recent trends even stronger, just as it did for Cowtan and Way. They are working on that; the triangles in the above graph are with interpolation. Thus the recent trend is currently still understated.
Personally, I would be most interested in understanding the differences that are important for long-term trends, like the differences shown below in two graphs prepared by Zeke Hausfather. That is hard enough, and such questions are more likely answerable. The recent differences between the datasets are even tinier than the tiny "hiatus" itself; I have no idea whether those can be understood.
I need some more synonyms for tiny or minimal, but the changes are really small. They are well within the statistical uncertainty computed from the year-to-year fluctuations. They are well within the uncertainty due to the fact that we do not have measurements everywhere and need to interpolate. The latter is the typical confidence interval you see in historical temperature plots. For most datasets the confidence interval does not include the uncertainty due to biases that were not perfectly removed. (HadCRUT does this partially.)
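As a rough sketch of the first of these uncertainties, here is the standard error of an OLS trend computed from the year-to-year scatter, assuming for simplicity that the annual values are independent (real series are autocorrelated, which widens the interval):

```python
# Sketch: statistical uncertainty of an OLS trend from year-to-year
# fluctuations, assuming independent residuals (no autocorrelation).
import numpy as np

def trend_and_ci(years, anomalies):
    """Return the OLS trend and its ~95% half-width, both in degC/decade."""
    slope, intercept = np.polyfit(years, anomalies, 1)
    residuals = anomalies - (slope * years + intercept)
    n = years.size
    # Standard error of the slope: sqrt(SSR/(n-2)) / sqrt(sum((x - mean(x))^2))
    se = residuals.std(ddof=2) / (years.std() * np.sqrt(n))
    return 10.0 * slope, 10.0 * 1.96 * se
```

Over a 15-year window with roughly 0.1 degC annual scatter, this interval is of the same order as the trend itself, which is why dataset changes of a few hundredths of a degree can flip the sign of a "hiatus".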
The uncertainty due to remaining biases becomes relatively more important on short time scales (and for smaller regions); on long time scales and for large regions (global), many biases will compensate each other. For land temperatures a 15-year period is especially dangerous; that is about the typical period between two inhomogeneities (non-climatic changes).
The recent period is, in addition, especially tricky. We are just in an important transitional period from manual observations with thermometers in Stevenson screens to automatic weather stations. Not only is the measurement principle different, but also the siting. On top of this, it is difficult to find and remove inhomogeneities near the end of a series, because the computed mean after the inhomogeneity is based on only a few values and has a large uncertainty.
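A back-of-the-envelope sketch of that last point; the noise level is an assumption:

```python
# Why end-of-series breaks are hard to size: the post-break mean is
# based on few values, so its standard error is large.
import numpy as np

sigma = 0.1  # assumed year-to-year noise in degC
for n_after in (3, 10, 30):
    print(f"{n_after:2d} years after the break: "
          f"mean uncertain by +/- {sigma / np.sqrt(n_after):.3f} degC")
```

With only three years after a break, its estimated size is uncertain by several hundredths of a degree, comparable to the dataset changes discussed in this post.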
You can get some idea of how large this data-quality uncertainty is by comparing the short-term trends of two independent datasets.
Ed Hawkins has compared the new USA NOAA data and the current UK HadCRUT4.3 dataset at Climate Lab Book and presented these graphs:
By request, he kindly computed the differences between these 10-year trends, shown below. They suggest that if you are interested in short-term trends smaller than 0.1°C per decade (say, the "hiatus"), you should study whether your data quality is good enough to interpret the variability as being due to the climate system. The variability should be large enough, or have a strong regional pattern (say, El Niño).
If the variability you are interested in is only somewhat bigger than 0.1°C per decade, you probably want to put in some work. Both datasets are based on much of the same data and use similar methods. We know that homogenization of surface stations can reduce biases, but not fully remove them. Thus part of the bias will be the same for all datasets that use statistical homogenization. The difference shown below is thus an underestimate of the uncertainty, and it will take analytic work to compute the real uncertainty due to data quality.
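The comparison Ed Hawkins made can be sketched in a few lines; the dataset names below are placeholders for two aligned annual series, not real data loaded here:

```python
# Sketch of comparing rolling 10-year trends of two datasets; the
# difference gives a lower bound on the structural uncertainty.
import numpy as np

def rolling_decadal_trends(years, anomalies, window=10):
    """OLS trend (degC/decade) for every consecutive 10-year window."""
    return np.array([
        10.0 * np.polyfit(years[i:i + window], anomalies[i:i + window], 1)[0]
        for i in range(years.size - window + 1)
    ])

# With two aligned annual series (placeholder names):
# diff = rolling_decadal_trends(years, noaa) - rolling_decadal_trends(years, hadcrut)
# Differences approaching 0.1 degC/decade show how soft such short trends are.
```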
[UPDATE. I thought I had an interesting new angle, but now see that Gavin Schmidt, director of NASA GISS, has been saying this in newspapers since the start: “
The fact that such small changes to the analysis make the difference between a hiatus or not merely underlines how fragile a concept it was in the first place.”]
Organisational implications
To reduce the uncertainties due to changes in the way we measure climate, we need to make two major organisational changes: we need to share all climate data with each other to better study the past, and for the future we need to build up a climate reference network. These are, unfortunately, not things climatologists can do alone; they need action by politicians and support by their voters.
To quote from my
last post on data sharing:
We need [to share all climate data] to see what is happening to the climate. We already had almost a degree of global warming and are likely in for at least another one. This will change the sea level, the circulation, precipitation patterns. This will change extreme and severe weather. We will need to adapt to these climatic changes and to know how to protect our communities we need climate data. ...
To understand climate, we need a global overview. National studies are not enough. To understand changes in circulation, interactions with mountains and vegetation, to understand changes in extremes, we need spatially resolved information and not just a few stations. ...
To reduce the influence of measurement errors and non-climatic changes (inhomogeneities) on our (trend) assessments we need dense networks. These errors are detected and corrected by comparing one station to its neighbours. The closer the neighbours are, the more accurate we can assess the real climatic changes. This is especially important when it comes to changes in severe and extreme weather, where the removal of non-climatic changes is very challenging. ... For the best possible data to protect our communities, we need dense networks, we need all the data there is.
The main governing body of the World Meteorological Organization (WMO) is meeting right now, until Friday next week (12th of June). They are debating a resolution on climate data exchange. To show your support for the free exchange of climate data, please retweet or favourite the tweet below.
We are conducting a (hopefully) unique experiment with our climate system. Future generations of climatologists would not forgive us if we did not observe as well as we can how our climate is changing. To make expensive decisions on climate adaptation, mitigation and burden sharing, we need reliable information on climatic changes: only piggy-backing on meteorological observations is not good enough. We can improve data using homogenization, but homogenized data will always have much larger uncertainties than truly homogeneous data, especially when it comes to long-term trends.
To quote my virtual boss at the ISTI, Peter Thorne:
To conclude, worryingly not for the first time (think tropospheric temperatures in late 1990s / early 2000s) we find that potentially some substantial portion of a model-observation discrepancy that has caused a degree of controversy is down to unresolved observational issues. There is still an undue propensity for scientists and public alike to take the observations as a 'given'. As [this study by NOAA] attests, even in the modern era we have imperfect measurements.
Which leads me to a final proposition for a more scientifically sane future ...
This whole train of events does rather speak to the fact that we can and should observe in a more sane, sensible and rational way in the future. There is no need to bequeath onto researchers in 50 years time a similar mess. If we instigate and maintain reference quality networks that are stable SI traceable measures with comprehensive uncertainty chains such as USCRN, GRUAN etc. but for all domains for decades to come we can have the next generation of scientists focus on analyzing what happened and not, depressingly, trying instead to inevitably somewhat ambiguously ascertain what happened.
Building up such a reference network is hard because we will only see the benefits much later. But already now, after about 10 years, the USCRN provides evidence that the siting of stations is in all likelihood not a large problem in the USA. The US reference network, with stations at perfectly sited locations, not affected by urbanization or micro-siting problems, shows about the same trend as the homogenized historical USA temperature data. (The reference network even has a somewhat larger, though non-significant, trend.)
A number of scientists are working on trying to make this happen. If you are interested, please contact me or Peter. We will have to design such reference networks, show how much more accurate they would make climate assessments (together with the existing networks) and then lobby to make it happen.
Further reading
Metrologist Michael de Podesta seems to agree with the above post and wrote about the overconfidence of the mitigation sceptics in the climate record.
Zeke Hausfather:
Whither the pause? NOAA reports no recent slowdown in warming. This post provides a comprehensive and (I think) very readable overview of the NOAA article.
A similar well-informed article can be found on Ars Technica:
Updated NOAA temperature record shows little global warming slowdown.
If you read the HotWhopper post, you will get the most scientific background, short of reading the NOAA article itself.
Peter Thorne of the ISTI on
The Karl et al. Science paper and ISTI. He gives more background on the land temperatures and makes a case for global climate reference networks.
Ed Hawkins compares the new NOAA dataset with HadCRUT4:
Global temperature comparisons.
Gavin Schmidt, as a climate modeller, explains how well the new dataset fits the climate projections:
NOAA temperature record updates and the ‘hiatus’.
Chris Merchant found about the same recent trend in his satellite sea surface temperature dataset and writes:
No slowdown in global temperature rise?
Hotwhopper discusses the most egregious errors of the first two WUWT posts on Karl et al. and an unfriendly email of Anthony Watts to NOAA. I hope Hotwhopper is not planning any holidays. These will be busy times.
Peter Thorne has the real back story.
NOAA press release:
Science publishes new NOAA analysis: Data show no recent slowdown in global warming.
Thomas R. Karl, Anthony Arguez, Boyin Huang, Jay H. Lawrimore, James R. McMahon, Matthew J. Menne, Thomas C. Peterson, Russell S. Vose, Huai-Min Zhang, 2015:
Possible artifacts of data biases in the recent global surface warming hiatus. Science.
doi: 10.1126/science.aaa5632.
Boyin Huang, Viva F. Banzon, Eric Freeman, Jay Lawrimore, Wei Liu, Thomas C. Peterson, Thomas M. Smith, Peter W. Thorne, Scott D. Woodruff, and Huai-Min Zhang, 2015:
Extended Reconstructed Sea Surface Temperature Version 4 (ERSST.v4). Part I: Upgrades and Intercomparisons. Journal of Climate,
28, pp. 911–930, doi:
10.1175/JCLI-D-14-00006.1.
Rennie, Jared, Jay Lawrimore, Byron Gleason, Peter Thorne, Colin Morice, Matthew Menne, Claude Williams, Waldenio Gambi de Almeida, John Christy, Meaghan Flannery, Masahito Ishihara, Kenji Kamiguchi, Albert Klein Tank, Albert Mhanda, David Lister, Vyacheslav Razuvaev, Madeleine Renom, Matilde Rusticucci, Jeremy Tandy, Steven Worley, Victor Venema, William Angel, Manola Brunet, Bob Dattore, Howard Diamond, Matthew Lazzara, Frank Le Blancq, Juerg Luterbacher, Hermann Maechel, Jayashree Revadekar, Russell Vose, Xungang Yin, 2014:
The International Surface Temperature Initiative global land surface databank: monthly temperature data version 1 release description and methods.
Geoscience Data Journal,
1, pp. 75–102, doi:
10.1002/gdj3.8.