
Wednesday, June 17, 2015

Did you notice the recent anti-IPCC article?

You may have missed the latest attack on the IPCC, because the mitigation sceptics did not celebrate it. Normally they like to claim that the job of scientists is to write IPCC-friendly articles. Maybe because that is the world they know: that is how their think tanks function, that is what they would be willing to do for their political movement. The claim is naturally wrong, and it illustrates that they are either willing to lie for their movement or do not have a clue how science works.

It is the job of a scientist to understand the world better and thus to change the way we currently see the world. It is the fun of being a scientist to challenge old ideas.

The case in point last week was naturally the new NOAA assessment of the global mean temperature trend (Karl et al., 2015). The new assessment only produced minimal changes, but NOAA made that interesting by claiming the IPCC was wrong about the "hiatus". The abstract boldly states:
Here we present an updated global surface temperature analysis that reveals that global trends are higher than reported by the IPCC ...
The introduction starts:
The Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report concluded that the global surface temperature “has shown a much smaller increasing linear trend over the past 15 years [1998-2012] than over the past 30 to 60 years.” ... We address all three of these [changes in the observation methods], none of which were included in our previous analysis used in the IPCC report.
Later Karl et al. write that they do better than the IPCC:
These analyses have surmised that incomplete Arctic coverage also affects the trends from our analysis as reported by IPCC. We address this issue as well.
To stress the controversy they explicitly use the IPCC periods:
Our analysis also suggests that short- and long-term warming rates are far more similar than previously estimated in IPCC. The difference between the trends in two periods used in IPCC (1998-2012 and 1951-2012) is an illustrative metric: the trends for these two periods in the new analysis differ by 0.043°C/dec compared to 0.078°C/dec in the old analysis reported by IPCC.
The final punchline goes:
Indeed, based on our new analysis, the IPCC’s statement of two years ago – that the global surface temperature “has shown a much smaller increasing linear trend over the past 15 years than over the past 30 to 60 years” – is no longer valid.
And they make the IPCC periods visually stand out in their main figure.


Figure from Karl et al. (2015) showing the trend difference for the old and new assessment over a number of periods: the IPCC periods and their own. The circles show the old dataset, the squares the new one, and the triangles the new data with interpolation over the Arctic data gap.

This is a particularly clear example of scientists attacking the orthodoxy, because it is done so blatantly. Normally scientific articles do this more subtly, which has the disadvantage that the public does not notice it happening. Normally scientists would mention the old work casually; often they expect their colleagues to know which specific studies are (partially) criticized. Maybe NOAA found it easier to use this language this time because they did not criticize a specific colleague, but a group, and a strong group at that.


Figure SPM.1. (a) Observed global mean combined land and ocean surface temperature anomalies, from 1850 to 2012 from three data sets. Top panel: annual mean values. Bottom panel: decadal mean values including the estimate of uncertainty for one dataset (black). Anomalies are relative to the mean of 1961−1990. (b) Map of the observed surface temperature change from 1901 to 2012 derived from temperature trends determined by linear regression from one dataset (orange line in panel a).
The attack is also somewhat unfair. The IPCC clearly stated that it is not a good idea to focus on such short periods:
In addition to robust multi-decadal warming, global mean surface temperature exhibits substantial decadal and interannual variability (see Figure SPM.1). Due to natural variability, trends based on short records are very sensitive to the beginning and end dates and do not in general reflect long-term climate trends. As one example, the rate of warming over the past 15 years (1998–2012; 0.05 [–0.05 to 0.15] °C per decade), which begins with a strong El Niño, is smaller than the rate calculated since 1951 (1951–2012; 0.12 [0.08 to 0.14] °C per decade)
What the IPCC missed in this case is that the problem goes beyond natural variability: another question is whether the data quality is high enough to talk about such subtle variations.
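To see how fragile such short-period trends are, here is a minimal sketch in Python; it is not anyone's official analysis, and all numbers in it are illustrative. It fits ordinary least-squares trends to a synthetic temperature series and shows how the short trend jumps when the start year is shifted.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1951, 2013)
# Synthetic "global temperature": 0.12 °C per decade warming plus noise,
# with an extra 0.2 °C spike in the El Niño year 1998.
temp = 0.012 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)
temp[years == 1998] += 0.2

def trend(start, end):
    """OLS trend in °C per decade over the years start..end inclusive."""
    mask = (years >= start) & (years <= end)
    return 10.0 * np.polyfit(years[mask], temp[mask], 1)[0]

print(f"1951-2012: {trend(1951, 2012):+.3f} °C/decade")  # close to 0.12
for start in (1996, 1997, 1998, 1999, 2000):
    # The short-period trend swings strongly with the start year.
    print(f"{start}-2012: {trend(start, 2012):+.3f} °C/decade")
```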

The mitigation sceptics may have missed that NOAA attacked the IPCC consensus because the article also attacked the one thing they somehow hold dear: the "hiatus".

I must admit that I originally thought that the emphasis the mitigation sceptics put on the "hiatus" was because they mainly value annoying "greenies", and what better way to do so than to pick your most ridiculous argument: ignore the temperature rise over the last century, start your "hiatus" in a hot super El Niño year, and stupidly claim that global warming has stopped.

But they really cling to it: they have already written well over a dozen NOAA protest posts at WUWT, an important blog of the mitigation-sceptical movement. The Daily Kos even wrote: "climate denier heads exploded all over the internet."

This "hiatus" fad provided Karl et al. (2015) the public interest — or interdisciplinary relevance as these journals call that — and made it a Science paper. Without the weird climate "debate", it would have been an article for a good climate journal. Without challenging the orthodoxy, it would have been an article for a simple data journal.

Let me close this post with a video of Richard Alley explaining, even more enthusiastically than usual, what drives (climate) scientists. Hint: it ain't parroting the IPCC. (Even if its reports are very helpful.)
Suppose Einstein had stood up and said, I have worked very hard and I have discovered that Newton is right and I have nothing to add. Would anyone ever know who Einstein was?

Further reading

My draft was already written before I noticed that at Real Climate Stefan Rahmstorf had written: Debate in the noise.

My previous post on the NOAA assessment asked whether the data is good enough to see something like a "hiatus", and stressed the need for climate data sharing and for building up a global reference network. It was frivolously called: No! Ah! Part II. The return of the uncertainty monster.

Zeke Hausfather: Whither the pause? NOAA reports no recent slowdown in warming. This post provides a comprehensive, well-readable (I think) overview of the NOAA article.

How climatology treats sceptics. My experience fits what you would expect.

References

IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp, doi: 10.1017/CBO9781107415324.

Thomas R. Karl, Anthony Arguez, Boyin Huang, Jay H. Lawrimore, James R. McMahon, Matthew J. Menne, Thomas C. Peterson, Russell S. Vose, Huai-Min Zhang, 2015: Possible artifacts of data biases in the recent global surface warming hiatus. Science. doi: 10.1126/science.aaa5632.

Boyin Huang, Viva F. Banzon, Eric Freeman, Jay Lawrimore, Wei Liu, Thomas C. Peterson, Thomas M. Smith, Peter W. Thorne, Scott D. Woodruff, and Huai-Min Zhang, 2015: Extended Reconstructed Sea Surface Temperature Version 4 (ERSST.v4). Part I: Upgrades and Intercomparisons. Journal of Climate, 28, pp. 911–930, doi: 10.1175/JCLI-D-14-00006.1.

Rennie, Jared, Jay Lawrimore, Byron Gleason, Peter Thorne, Colin Morice, Matthew Menne, Claude Williams, Waldenio Gambi de Almeida, John Christy, Meaghan Flannery, Masahito Ishihara, Kenji Kamiguchi, Albert Klein Tank, Albert Mhanda, David Lister, Vyacheslav Razuvaev, Madeleine Renom, Matilde Rusticucci, Jeremy Tandy, Steven Worley, Victor Venema, William Angel, Manola Brunet, Bob Dattore, Howard Diamond, Matthew Lazzara, Frank Le Blancq, Juerg Luterbacher, Hermann Maechel, Jayashree Revadekar, Russell Vose, Xungang Yin, 2014: The International Surface Temperature Initiative global land surface databank: monthly temperature data version 1 release description and methods. Geoscience Data Journal, 1, pp. 75–102, doi: 10.1002/gdj3.8.

Saturday, June 6, 2015

No! Ah! Part II. The return of the uncertainty monster



Some may have noticed that a new NOAA paper on the global mean temperature has been published in Science (Karl et al., 2015). It is minimally different from the previous assessment. The reason the press is interested, why this is a Science paper, and why the mitigation sceptics are not happy at all, is that due to these minuscule changes the data no longer shows a "hiatus"; no statistical analysis is needed any more. That such paltry changes make so much difference shows the overconfidence of people talking about the "hiatus" as if it were a thing.

You can see the minimal changes, mostly less than 0.05°C, both warmer and cooler, in the top panel of the graph below. I made the graph extra large, so that you can see the differences. The thick black line shows the new assessment and the thin red line the previously estimated global temperature signal.



It reminds me of the time when a (better) interpolation over the data gap in the Arctic (Cowtan and Way, 2014) made the long-term trend almost imperceptibly larger, but changed the temperature signal enough to double the warming during the "hiatus". Again we see a lot of whining from people who should not have built their political case on such a fragile feature in the first place. And we will see a lot more. And after that they will continue to act as if the "hiatus" is a thing. After a few years of this dishonest climate "debate", I would be very surprised if they suddenly looked at all the data and made a fair assessment of the situation.

The most paradoxical mitigation sceptics are the ones who react by claiming that scientists are not allowed to remove biases due to changes in the way temperature was measured. Without accounting for the fact that old sea surface temperature measurements were biased to be too cool, global warming would be larger. I have previously explained the reasons why the raw data shows more warming, and you can see the effect in the bottom panel of the above graph: the black line shows NOAA's current best estimate of the temperature change, the thin blue (?) line the temperature change in the raw data. Only alarmists would prefer the raw temperature trend.



The trend changes over a number of periods are depicted above; the circles are the old dataset, the squares the new one. You can clearly see differences between the trends for the various short periods. Shifting the period by only two years creates a large trend difference: another way to demonstrate that this feature is not robust.

The biggest change in the dataset is that NOAA now uses the raw data of the land temperature database of the International Surface Temperature Initiative (ISTI). (Disclosure: I am a member of the ISTI.) This dataset contains many more stations than the previously used Global Historical Climatology Network (GHCNv3) dataset. (The land temperatures were homogenized with the same Pairwise Homogenization Algorithm (PHA) as before.)
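As an aside, the principle behind such relative (pairwise) homogenization can be shown in a few lines. This is a deliberately naive toy, not the PHA itself: the shared regional climate signal cancels in the difference between a candidate station and a close neighbour, so a non-climatic jump in the candidate stands out.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
regional = np.cumsum(rng.normal(0.0, 0.1, n))    # shared regional climate signal
neighbour = regional + rng.normal(0.0, 0.05, n)  # plus local weather noise
candidate = regional + rng.normal(0.0, 0.05, n)
candidate[60:] += 0.5                            # inhomogeneity, e.g. a relocation

diff = candidate - neighbour                     # regional signal cancels out
# Naive break detection: the split point that maximizes the shift in means.
scores = [abs(diff[:i].mean() - diff[i:].mean()) for i in range(10, n - 10)]
print("break detected at index", 10 + int(np.argmax(scores)))  # near 60
```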

The new trend in the land temperature is a little larger over the full period; see both graphs above. This was to be expected. The ISTI dataset contains many more stations and is now similar to that of Berkeley Earth, which already had a somewhat stronger temperature trend. Furthermore, we know that there is a cooling bias in the land surface temperatures; with more stations it is easier to see data problems by comparing stations with each other, and relative homogenization methods can remove a larger part of this trend bias.

However, the largest trend changes in recent periods are due to the oceans: the Extended Reconstructed Sea Surface Temperature (ERSST.v4) dataset. Zeke Hausfather:
They also added a correction for temperatures measured by floating buoys vs. ships. A number of studies have found that buoys tend to measure temperatures that are about 0.12 degrees C (0.22 F) colder than is found by ships at the same time and same location. As the number of automated buoy instruments has dramatically expanded in the past two decades, failing to account for the fact that buoys read colder temperatures ended up adding a negative bias in the resulting ocean record.
It is not my field, but if I understand it correctly, other ocean datasets, COBE2 and HadSST3, already took these biases into account. Thus the difference between these datasets must have another reason, and understanding it would be interesting. Also, NOAA does not yet interpolate over the data gap in the Arctic, which would be expected to make its recent trends even stronger, just as it did for Cowtan and Way. They are working on that; the triangles in the above graph are with interpolation. Thus the recent trend is currently still understated.
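For readers who like to see the idea in code, here is a minimal sketch of the ship-buoy adjustment. It is my illustration of the principle, not the actual ERSST.v4 procedure; the real analysis is considerably more elaborate (for instance, it also weights the less noisy buoy observations more strongly).

```python
import numpy as np

SHIP_BUOY_OFFSET = 0.12  # °C; buoys read about this much cooler than ships

def blend_sst(ship_obs, buoy_obs):
    """Average ship and bias-adjusted buoy observations for one grid cell."""
    adjusted_buoys = np.asarray(buoy_obs, float) + SHIP_BUOY_OFFSET
    return np.mean(np.concatenate([np.asarray(ship_obs, float), adjusted_buoys]))

# As the share of buoys grows over time, skipping the adjustment would
# drag the record artificially cool even for a constant true temperature.
print(blend_sst(ship_obs=[15.4, 15.5], buoy_obs=[15.3, 15.3]))
```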

Personally, I would be most interested in understanding the differences that are important for long-term trends, like the differences shown below in two graphs prepared by Zeke Hausfather. That is hard enough, and such questions are more likely to be answerable. The recent differences between the datasets are even tinier than the tiny "hiatus" itself; I have no idea whether they can be understood.





I need some more synonyms for tiny or minimal, but the changes are really small. They are well within the statistical uncertainty computed from the year-to-year fluctuations. They are well within the uncertainty due to the fact that we do not have measurements everywhere and need to interpolate; the latter is the typical confidence interval you see in historical temperature plots. For most datasets the confidence interval does not include the uncertainty due to imperfectly removed biases. (HadCRUT does this partially.)
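For the first of these uncertainties, a back-of-the-envelope check is easy: compute the trend and its standard error from the year-to-year scatter. A minimal sketch follows; it ignores autocorrelation and shared biases, so it understates the real uncertainty.

```python
import numpy as np

def trend_with_error(years, temps):
    """OLS trend and its 2-sigma standard error, both in °C per decade."""
    years = np.asarray(years, float)
    temps = np.asarray(temps, float)
    slope, intercept = np.polyfit(years, temps, 1)
    residuals = temps - (slope * years + intercept)
    stderr = residuals.std(ddof=2) / (years.std() * np.sqrt(years.size))
    return 10.0 * slope, 10.0 * 2.0 * stderr

rng = np.random.default_rng(1)
yrs = np.arange(1998, 2013)
t = 0.005 * (yrs - yrs[0]) + rng.normal(0.0, 0.1, yrs.size)
trend, err = trend_with_error(yrs, t)
print(f"{trend:+.2f} ± {err:.2f} °C/decade")  # the ± dwarfs a 15-year trend
```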

This uncertainty becomes relatively more important on short time scales (and for smaller regions); on large time scales and for large regions (global), many biases will compensate each other. For land temperatures a 15-year period is especially dangerous: that is about the typical period between two inhomogeneities (non-climatic changes).

The recent period is, in addition, especially tricky. We are just in an important transition from manual observations with thermometers in Stevenson screens to automatic weather stations. Not only the measurement principle is different, but also the siting. On top of this, it is difficult to find and remove inhomogeneities near the end of a series, because the computed mean after the inhomogeneity is based on only a few values and has a large uncertainty.

You can get some idea of how large this uncertainty is by comparing the short-term trends of two largely independent datasets. Ed Hawkins has compared the new USA NOAA data and the current UK HadCRUT4.3 dataset at Climate Lab Book and presented these graphs:



By request, he kindly computed the difference between these 10-year trends, shown below. It suggests that if you are interested in short-term trends smaller than 0.1°C per decade (say, the "hiatus"), you should study whether your data quality is good enough to be able to interpret the variability as being due to the climate system. The variability should be large enough, or have a strong regional pattern (say, El Niño).

Even if the variability you are interested in is somewhat bigger than 0.1°C per decade, you probably want to put in that work. Both datasets are based on much of the same data and use similar methods. For the homogenization of surface stations, we know that it can reduce biases, but not fully remove them. Thus part of the bias will be the same for all datasets that use statistical homogenization. The difference shown below is therefore an underestimate of the uncertainty, and it will take analytic work to compute the real uncertainty due to data quality.



[UPDATE. I thought I had an interesting new angle, but now see that Gavin Schmidt, director of NASA GISS, has been saying this in newspapers since the start: “The fact that such small changes to the analysis make the difference between a hiatus or not merely underlines how fragile a concept it was in the first place.”]
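If you want to reproduce the spirit of the comparison above yourself, a sketch like the following will do. The two series here are random stand-ins, not the real NOAA and HadCRUT4.3 data, and the resulting spread is, as argued above, only a lower bound on the uncertainty.

```python
import numpy as np

def running_trends(years, temps, window=10):
    """10-year OLS trends in °C per decade, one per window start year."""
    return np.array([
        10.0 * np.polyfit(years[i:i + window], temps[i:i + window], 1)[0]
        for i in range(len(years) - window + 1)
    ])

rng = np.random.default_rng(2)
years = np.arange(1950, 2015)
truth = 0.012 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)
dataset_a = truth + rng.normal(0.0, 0.03, years.size)  # stand-in "NOAA"
dataset_b = truth + rng.normal(0.0, 0.03, years.size)  # stand-in "HadCRUT4.3"

diff = running_trends(years, dataset_a) - running_trends(years, dataset_b)
print(f"largest 10-year trend difference: {np.abs(diff).max():.2f} °C/decade")
```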

Organisational implications

To reduce the uncertainties due to changes in the way we measure climate, we need to make two major organisational changes: we need to share all climate data with each other to better study the past, and for the future we need to build up a climate reference network. These are, unfortunately, not things climatologists can do alone; they need action from politicians and support from their voters.

To quote from my last post on data sharing:
We need [to share all climate data] to see what is happening to the climate. We already had almost a degree of global warming and are likely in for at least another one. This will change the sea level, the circulation, precipitation patterns. This will change extreme and severe weather. We will need to adapt to these climatic changes and to know how to protect our communities we need climate data. ...

To understand climate, we need a global overview. National studies are not enough. To understand changes in circulation, interactions with mountains and vegetation, to understand changes in extremes, we need spatially resolved information and not just a few stations. ...

To reduce the influence of measurement errors and non-climatic changes (inhomogeneities) on our (trend) assessments we need dense networks. These errors are detected and corrected by comparing one station to its neighbours. The closer the neighbours are, the more accurate we can assess the real climatic changes. This is especially important when it comes to changes in severe and extreme weather, where the removal of non-climatic changes is very challenging. ... For the best possible data to protect our communities, we need dense networks, we need all the data there is.
The main governing body of the World Meteorological Organization (WMO) is meeting right now, until next week Friday (12 June). They are debating a resolution on climate data exchange. To show your support for the free exchange of climate data, please retweet or favourite the tweet below.

We are conducting a (hopefully) unique experiment with our climate system. Future generations of climatologists would not forgive us if we did not observe, as well as we can, how our climate is changing. To make expensive decisions on climate adaptation, mitigation and burden sharing, we need reliable information on climatic changes: merely piggy-backing on meteorological observations is not good enough. We can improve data using homogenization, but homogenized data will always have much larger uncertainties than truly homogeneous data, especially when it comes to long-term trends.

To quote my virtual boss at the ISTI, Peter Thorne:
To conclude, worryingly not for the first time (think tropospheric temperatures in late 1990s / early 2000s) we find that potentially some substantial portion of a model-observation discrepancy that has caused a degree of controversy is down to unresolved observational issues. There is still an undue propensity for scientists and public alike to take the observations as a 'given'. As [this study by NOAA] attests, even in the modern era we have imperfect measurements.

Which leads me to a final proposition for a more scientifically sane future ...

This whole train of events does rather speak to the fact that we can and should observe in a more sane, sensible and rational way in the future. There is no need to bequeath onto researchers in 50 years time a similar mess. If we instigate and maintain reference quality networks that are stable SI traceable measures with comprehensive uncertainty chains such as USCRN, GRUAN etc. but for all domains for decades to come we can have the next generation of scientists focus on analyzing what happened and not, depressingly, trying instead to inevitably somewhat ambiguously ascertain what happened.
Building up such a reference network is hard, because we will only see the benefits much later. But already now, after about 10 years, the USCRN provides evidence that the siting of stations is in all likelihood not a large problem in the USA. The US reference network, with perfectly sited stations not affected by urbanization or micro-siting problems, shows about the same trend as the homogenized historical USA temperature data. (The reference network even has a somewhat larger, though non-significant, trend.)

A number of scientists are working on trying to make this happen. If you are interested, please contact me or Peter. We will have to design such reference networks, show how much more accurate they would make climate assessments (together with the existing networks), and then lobby to make it happen.



Further reading

Metrologist Michael de Podesta seems to agree with the above post and wrote about the overconfidence of the mitigation sceptics in the climate record.

Zeke Hausfather: Whither the pause? NOAA reports no recent slowdown in warming. This post provides a comprehensive, well-readable (I think) overview of the NOAA article.

A similar well-informed article can be found on Ars Technica: Updated NOAA temperature record shows little global warming slowdown.

If you read the HotWhopper post, you will get the most scientific background, apart from reading the NOAA article itself.

Peter Thorne of the ISTI on The Karl et al. Science paper and ISTI. He gives more background on the land temperatures and makes a case for global climate reference networks.

Ed Hawkins compares the new NOAA dataset with HadCRUT4: Global temperature comparisons.

Gavin Schmidt, as a climate modeller, explains how well the new dataset fits the climate projections: NOAA temperature record updates and the ‘hiatus’.

Chris Merchant found about the same recent trend in his satellite sea surface temperature dataset and writes: No slowdown in global temperature rise?

HotWhopper discusses the most egregious errors of the first two WUWT posts on Karl et al., and an unfriendly email from Anthony Watts to NOAA. I hope HotWhopper is not planning any holidays; these will be busy times. Peter Thorne has the real back story.

NOAA press release: Science publishes new NOAA analysis: Data show no recent slowdown in global warming.

Thomas R. Karl, Anthony Arguez, Boyin Huang, Jay H. Lawrimore, James R. McMahon, Matthew J. Menne, Thomas C. Peterson, Russell S. Vose, Huai-Min Zhang, 2015: Possible artifacts of data biases in the recent global surface warming hiatus. Science. doi: 10.1126/science.aaa5632.

Boyin Huang, Viva F. Banzon, Eric Freeman, Jay Lawrimore, Wei Liu, Thomas C. Peterson, Thomas M. Smith, Peter W. Thorne, Scott D. Woodruff, and Huai-Min Zhang, 2015: Extended Reconstructed Sea Surface Temperature Version 4 (ERSST.v4). Part I: Upgrades and Intercomparisons. Journal of Climate, 28, pp. 911–930, doi: 10.1175/JCLI-D-14-00006.1.

Rennie, Jared, Jay Lawrimore, Byron Gleason, Peter Thorne, Colin Morice, Matthew Menne, Claude Williams, Waldenio Gambi de Almeida, John Christy, Meaghan Flannery, Masahito Ishihara, Kenji Kamiguchi, Albert Klein Tank, Albert Mhanda, David Lister, Vyacheslav Razuvaev, Madeleine Renom, Matilde Rusticucci, Jeremy Tandy, Steven Worley, Victor Venema, William Angel, Manola Brunet, Bob Dattore, Howard Diamond, Matthew Lazzara, Frank Le Blancq, Juerg Luterbacher, Hermann Maechel, Jayashree Revadekar, Russell Vose, Xungang Yin, 2014: The International Surface Temperature Initiative global land surface databank: monthly temperature data version 1 release description and methods. Geoscience Data Journal, 1, pp. 75–102, doi: 10.1002/gdj3.8.

Thursday, August 2, 2012

A short introduction to the time of observation bias and its correction




Figure 1. A thermo-hygrograph, which measures and records temperature and humidity.
Due to recent events, the time of observation bias in climatological temperature measurements has become a hot topic. What is it, why is it important, and why and how should we correct for it? A short introduction.

Mean temperature

The mean daily temperature can be determined in multiple ways. Nowadays, it is easy to measure the temperature frequently, store it in a digital memory and compute the daily average. In the past something similar was possible using a thermograph (see Figure 1), but such an instrument was expensive and fragile.

Thus, normally, other methods were used for standard measurements: computing a weighted average over observations at 3 or 4 fixed times, or, a good approximation for many climate regions, averaging the daily minimum and maximum temperature. Special minimum and maximum thermometers were invented for this task in 1782. The sketch below compares these options on an idealized daily cycle.
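A minimal sketch of the three options follows. The fixed-hour weights below are the historical "Mannheim hours" formula used in several European networks; other networks used other hours and weights, so treat them as an example only.

```python
import numpy as np

hours = np.arange(24)
# Idealized diurnal cycle: maximum around 14h, minimum around 2h.
temps = 15.0 + 5.0 * np.cos(2.0 * np.pi * (hours - 14) / 24.0)

full_mean = temps.mean()                         # frequent (modern) sampling
minmax_mean = 0.5 * (temps.min() + temps.max())  # (Tmin + Tmax) / 2
mannheim = (temps[7] + temps[14] + 2 * temps[21]) / 4.0  # fixed-hour average

print(f"24-hour mean     : {full_mean:.2f} °C")
print(f"(Tmin + Tmax) / 2: {minmax_mean:.2f} °C")
print(f"fixed-hour mean  : {mannheim:.2f} °C")
# For this symmetric cycle the first two agree exactly; real diurnal
# cycles are skewed, so each method has its own (correctable) bias.
```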