Friday, 20 November 2015

Sad that for Lamar Smith the "hiatus" has far-reaching policy implications

Earlier this year, NOAA made a new assessment of the surface temperature increase since 1880. Republican Congressman Lamar Smith, Chair of the House science committee, did not like the adjustments NOAA made and started a harassment campaign. In the Washington Post he wrote about his conspiracy theory (my emphasis):
In June, NOAA employees altered temperature data to get politically correct results and then widely publicized their conclusions as refuting the nearly two-decade pause in climate change we have experienced. The agency refuses to reveal how those decisions were made. Congress has a constitutional responsibility to review actions by the executive branch that have far-reaching policy implications.
I guess everyone reading this blog knows that all the data and code are available online.

The debate is about the minor difference you see at the top right. Take your time. Look carefully. See it? The US mitigation sceptical movement has made the trend since the super El Nino year 1998 a major part of its argument that climate change is no problem. If such minute changes have "far-reaching policy implications" for Lamar Smith, then maybe he is not a particularly good policy maker. The people he represents in Texas's TX-21 district deserve better.

I have explained to the mitigation sceptics so many times that they should drop their "hiatus" fetish, that it would come back to haunt them. Such extremely short-term trends have huge uncertainties, and interpreting such changes as climatic changes assumes a data quality that I see as unrealistic. With their constant wailing about data quality, they should certainly see it that way. But well, they did not listen.
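To illustrate why such short trends are so uncertain, here is a toy Monte Carlo sketch. The trend and interannual noise levels are illustrative assumptions, not values from any particular dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
true_trend = 0.017  # assumed warming rate in °C per year (illustrative)
noise_sd = 0.1      # assumed interannual variability in °C (illustrative)

def trend_spread(n_years, n_sim=2000):
    """Fit a linear trend to n_years of simulated annual temperatures
    and return the spread (standard deviation) of the fitted trends."""
    t = np.arange(n_years)
    trends = [np.polyfit(t, true_trend * t + rng.normal(0, noise_sd, n_years), 1)[0]
              for _ in range(n_sim)]
    return np.std(trends)

# The spread of fitted trends for a 17-year record is several times
# larger than for a 60-year record (it scales roughly as n**-1.5).
print(trend_spread(17), trend_spread(60))
```

With realistic serially correlated noise and ENSO variability, the uncertainty of a 1998–2015 trend would be larger still.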

Some political activists like to claim that the "hiatus" means that global warming has stopped. It seems that Lamar Smith is in this group; at least I see no other reason why he would think that it is policy relevant. But only 2 percent of global warming warms the atmosphere (most warms the oceans) and this "hiatus" is about 2% of the warming we have seen since 1880. It is thus a peculiar debate about 2% of 2% of the warming and not about global warming.

This peculiar political debate is the reason this NOAA study became a Science paper (Science magazine prefers articles of general interest) and why NOAA's Karl et al. (2015) paper was heavily attacked by the mitigation sceptical movement.

Before this reassessment, NOAA's trend since 1998 was rather low compared to the other datasets. The right panel of the figure below, made by Zeke Hausfather, shows the trends since 1998. In the figure the old NOAA assessment is shown as green dots, the new assessment as black dots.

The new assessment solved problems in the NOAA dataset that were already solved in the HadCRUT4 dataset from the UK (red dots). The trends in HadCRUT4 are somewhat lower because it does not take the Arctic fully into account, where much of the warming of the last decade occurred. The version of HadCRUT4 where this problem is fixed is indicated as "Cowtan & Way" (brownish dots).

The privately funded Berkeley Earth also takes the Arctic into account and already had somewhat larger recent trends.

Thus the new assessment of NOAA is in line with our current understanding. Given how minute this feature is, it is actually pretty amazing how similar the various assessments are.

"Karl raw" (open black circle) is the raw data of NOAA before any adjustments, the green curve in the graph at the top of this post. "Karl adj" (black dot) is the new assessment, the thick black line in the graph at the top. The previous assessment is "NCDC old" (green dot). The other dots show four well-known global temperature datasets.

Whether new assessments are seen as having "far-reaching policy implications" by Lamar Smith may also depend on the direction in which the trends change. Around the same time as the NOAA article, Roy Spencer and John Christy published a new dataset with satellite estimates of tropospheric temperatures. As David Appell reports, they made considerable changes to their dataset. Somehow I have not heard anything about a subpoena against them yet.

More important adjustments to the surface temperatures are made for the data before 1940. Looking at the figure below, most people would probably expect Lamar Smith to object to these much stronger adjustments, which made global warming a lot smaller. Maybe he liked their direction better.

The adjustments before 1940 are necessary because in that period the dominant way to measure sea surface temperature was by taking a bucket of water out of the sea. During the measurement the water would cool due to evaporation. How large this adjustment should be is uncertain, anything between 0 and 0.4°C is possible. That makes a huge difference for the scientific assessment of how much warming we have seen up to now.

Also the size of the peak during the Second World War is highly uncertain; the merchant ships were replaced by war ships, which changed how the measurements were made.

This is outside my current expertise, but the first article I read about this, a small study for the Baltic Sea, suggested that the cooling bias due to evaporation is small, but that there is a warming bias of 0.5°C because the thermometer was stored in the warm cabin and the sailors did not wait long enough for it to equilibrate. Such uncertainties are important, and only a handful of scientists are working on sea surface temperature. And now a political witch hunt keeps some of them from their work.

Whether the adjustments for buckets are 0.4°C or 0°C may be policy relevant, at least if we were already close to an optimal policy response. This adjustment affects the data over a long period and can thus influence estimates of climate sensitivity. What counts for the climate sensitivity is basically the area under the temperature graph: a change of 0.4°C over 60 years is a lot more than 0.2°C over 15 years. Nic Lewis and Judith Curry (2014), whom I hope Lamar Smith will trust, also do not see the "hiatus" as important for the climate sensitivity.
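As a back-of-the-envelope check, using the rough numbers from the text, the two "areas under the temperature graph" can be compared directly:

```python
# Rough comparison of the "area under the temperature graph" (°C · years)
# affected by each issue, using the approximate numbers in the text.
bucket_area = 0.4 * 60   # up to 0.4 °C sustained over ~60 years of data
hiatus_area = 0.2 * 15   # roughly 0.2 °C over the ~15 "hiatus" years

# The bucket adjustment matters about an order of magnitude more for
# estimates that integrate the whole record.
print(bucket_area / hiatus_area)
```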

For those who still think that global warming has stopped, climatologist John Nielsen-Gammon (a friend of Anthony Watts of WUWT) made the wonderful plot below, which immediately helps you see that most of the deviations from the trend line can be explained by variations in El Nino (archived version).

It is somewhat ironic that Lamar Smith claims that NOAA rushed the publication of their dataset. It would be more logical to say that he hastened his campaign. It is now shortly before the Paris climate conference, and the strong El Nino does not bode well for his favourite policy justification, as the plot below shows. You no longer need statistics to be completely sure that there was no change in the trend in 1998.

Related reading

WOLF-PAC has a good plan to get money out of US politics. Let's first get rid of this weight vest before we run the century-long climate change marathon.

Margaret Leinen, president of the American Geophysical Union (AGU): A Growing Threat to Academic Freedom

Keith Seitter, Executive Director of the American Meteorological Society (AMS): "The advancement of science depends on investigators having the freedom to carry out research objectively and without the fear of threats or intimidation whether or not their results are expedient or popular."

The article of Chris Mooney in the Washington Post is very similar to mine, but naturally better written and with more quotes: Even as Congress investigates the global warming ‘pause,’ actual temperatures are surging

Letters to the Editor of the Washington Post: Eroding trust in scientific research. The writer, a Republican, is chairman of the House Committee on Science, Space and Technology and represents Texas’s 21st District in the House.

House science panel demands more NOAA documents on climate paper

Michael Halpern of the Union of Concerned Scientists in The Guardian: The House Science Committee Chair is harassing US climate scientists

And Then There's Physics on the hypocrisy of Judith Curry: NOAA vs Lamar Smith.

Michael Tobis: House Science, Space, and Technology Committee vs National Oceanic and Atmospheric Administration

Ars Technica: US Congressman subpoenas NOAA climate scientists over study. Unhappy with temperature data, he wants to see the e-mails of those who analyze it.

Ars Technica: Congressman continues pressuring NOAA for scientists’ e-mails. Rep. Lamar Smith seeks closed-door interviews, in the meantime.

Guardian: Lamar Smith, climate scientist witch hunter. Smith got more money from fossil fuels than he did from any other industry.

Wired: Congress’ Chief Climate Denier Lamar Smith and NOAA Are at War. It’s Benghazi, but for nerds. I hope the importance of independent science is also clear to people who do not consume it on a daily basis.

Mother Jones: The Disgrace of Lamar Smith and the House Science Committee.

Eddie Bernice Johnson, Democrat member of the Committee on Science from Texas, reveals temporal inconsistencies in the explanations offered by Lamar Smith for his harassment campaign.

Raymond S. Bradley in Huffington Post: Tweet This and Risk a Subpoena. "OMG! [NOAA] tweeted the results! They actually tried to communicate with the taxpayers who funded the research!"

David Roberts at Vox: The House science committee is worse than the Benghazi committee

Union of Concerned Scientists: The House Science Committee’s Witch Hunt Against NOAA Scientists


Karl, T.R., A. Arguez, B. Huang, J.H. Lawrimore, J.R. McMahon, M.J. Menne, T.C. Peterson, R.S. Vose, and H. Zhang, “Possible artifacts of data biases in the recent global surface warming hiatus”, Science, vol. 348, pp. 1469–1472, 2015. doi: 10.1126/science.aaa5632


Anonymous said...

Victor: Consider a series of linear relationships:

y1 = mt + b1
y2 = mt + b2
y3 = mt + b3

where y1, y2, y3 ... yn are different sources of temperature data, t is time, m is the rate at which temperature is rising and b1, b2, b3 ... bn are systematic errors associated with each measurement method. We wish to prepare a composite temperature record from all yi when the amount of data coming from each measurement is changing with time. It is easy to see that the slope of any composite record prepared from the yi will vary with the accuracy with which the relative biases b1-b2, b1-b3, ..., b1-bn are corrected. And there are a wide variety of estimates available for these biases. Question: How can an outsider tell if appropriate choices were made?

Answer: Transparency comes from plotting data from homogeneous sources. For simplified data like this, one only needs to plot y1 and y2 vs time on the same graph, followed by y1 and y2' vs time, where y2' is y2-(b2-b1). With the proper correction, the two lines superimpose.
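Frank's toy model can be checked numerically. A minimal sketch, with made-up offsets and trend: a composite whose mix of two biased sources changes over time gets a distorted slope, and correcting the relative bias (here written as subtracting b2 − b1 from y2) makes the records superimpose and restores the slope.

```python
import numpy as np

t = np.arange(100)    # time in arbitrary units
m = 0.01              # common true trend
b1, b2 = 0.0, -0.3    # systematic offsets of the two methods (made up)
y1 = m * t + b1
y2 = m * t + b2

# Composite whose data mix shifts from method 1 to method 2 over time.
w = t / t.max()                      # share of method-2 data grows 0 -> 1
slope_raw = np.polyfit(t, (1 - w) * y1 + w * y2, 1)[0]

# Correct the relative bias: y2' = y2 - (b2 - b1) superimposes on y1.
y2_corr = y2 - (b2 - b1)
slope_corr = np.polyfit(t, (1 - w) * y1 + w * y2_corr, 1)[0]

print(slope_raw, slope_corr)  # distorted slope vs. the true m
```

In this setup the uncorrected composite underestimates the trend because the negatively biased source gains weight over time; with the sign of b2 flipped it would overestimate it.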

This is obviously much harder to do with real temperature data. In the early days of a new measurement technique, the coverage often isn't global. One is forced to do comparisons within grid cells. And we expect m to vary with time and latitude.

At Judy's, Zeke and Kevin compare a buoy-only record with ERSST v3 and ERSST v4, but these are composite records with potentially dubious bias corrections, not homogeneous records. In the most recent years, both composites are dominated by buoy records. So I would like to see as many homogeneous records of SST as possible overlaid on the buoy record. And it would be useful to include satellite records of SST (which are perturbed by aerosols) and near-surface records from ARGO. Even UAH/RSS tropospheric records might have some value if their greater variability (due to low heat capacity) could be taken into account.

Are you aware of anywhere that reliable comparisons between homogenous records have been made?


Victor Venema said...

Frank, the comparison of the trend of all ERSST data with the trend from buoys is a good sanity check. I do not work on SST, but I would not a priori call the buoy dataset homogeneous, just likely more homogeneous than the full dataset.

The problem is that there are nearly no homogeneous datasets. So much has happened economically, technically and socially in the last 150 years that it is very difficult to keep observation methods constant over such a long time. I hope we can do this in future; the US Climate Reference Network is hopefully something that will spread. But it was certainly not the case in the past.

Under the assumption that there are no homogeneous datasets, what we do in statistical homogenization is similar to what you propose. We first detect breaks and then combine the data into one regional climate signal with an equation that is very similar to yours.

We assume that the observations are given by:
1) A regional climate signal, which is the same for all stations (in your case: mt, we compute one value per year and compute trend at the end).
2) A step function with the biases (b1, b2, b3, ...)
3) Noise due to measurement noise or local weather.

Then we minimize the noise to get the regional climate signal. The method is described in Caussinus and Mestre (rather briefly, though).
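A minimal sketch of this kind of joint estimation on synthetic data, assuming the break positions have already been detected (in reality, detection is the hard part); this is not the actual Caussinus and Mestre code:

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_stations = 50, 5
climate = np.cumsum(rng.normal(0.02, 0.1, n_years))  # shared regional signal

# One known break per station with a step bias afterwards (synthetic).
breaks = [20, 25, 30, 35, 40]          # break year of each station
biases = [0.5, -0.3, 0.4, -0.2, 0.3]   # step size after the break

years = np.arange(n_years)
obs = np.array([climate + np.where(years >= br, b, 0.0)
                + rng.normal(0, 0.05, n_years)
                for br, b in zip(breaks, biases)])

# Least-squares fit of one regional value per year plus one step per
# station (the pre-break segments serve as the zero-offset reference).
n_par = n_years + n_stations
A = np.zeros((n_stations * n_years, n_par))
y = obs.ravel()
for i in range(n_stations):
    for t in range(n_years):
        A[i * n_years + t, t] = 1.0                # regional value c_t
        if t >= breaks[i]:
            A[i * n_years + t, n_years + i] = 1.0  # step bias s_i
est, *_ = np.linalg.lstsq(A, y, rcond=None)
est_climate, est_steps = est[:n_years], est[n_years:]
print(np.round(est_steps, 2))  # close to the true step sizes
```

Because the yearly regional values and the step sizes are estimated together, the recovered climate signal is not distorted by the breaks.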

I hope that is a satisfactory answer.

Anonymous said...

Victor: Thanks for your reply. I refuse to accept comparisons to ERSST v3 or v4 - they contain a variable amount of data from buoys. Given the simple linear problem I presented above, why would you overlay any of the y1, y2, y3 records with a composite containing them (especially a somewhat mysterious composite you hadn't created)? The hypothesis we need to test concerns b1, b2 and b3.

There are METHODOLOGICALLY homogeneous data sets. Satellite microwave temperature data is a homogeneous data set, but I don't know if it is useful because of interference from changing aerosols. The change across a long period with similar aerosols at both ends could be useful. For the periods of when buoys were being introduced, I suspect there is enough metadata about engine intake temperatures to create a more homogeneous data set.

One viable interpretation of this episode is that we need to add uncertainty to the central estimate of the warming trend representing the uncertainty about whether the right values for bias correction (b1, b2 .. bn) have been identified. The existing error bars reflect random noise common to all composites; the difference in central values represents uncertainty in bias correction - until someone unambiguously proves which is best.


Victor Venema said...

Frank, you need to estimate the right values of your biases b1, b2, and b3. That is exactly what NOAA did in their updated dataset. They took the bias between buoys and ship inlet measurements into account, which they had not done before, but which previous work by the UK Hadley Centre had shown to be important.

I am not sure the term "methodologically homogeneous" is clearly defined. However, the satellite temperatures are surely not homogeneous and need large adjustments. The earliest dataset, which did not yet have these, even erroneously showed a cooling trend. The new update from UAH v5 to the unpublished UAH v6 again makes rather large adjustments. I would not a priori call that homogeneous and would not assume that the m (of your mt) would be the same as that of the other datasets or the m of reality.

I agree that we need error estimates for remaining inhomogeneities, not just for interpolation. The surface temperature dataset HadCRUT partially does so. I do not know of such estimates for the satellite temperatures; that may be due to my lack of expertise. If you know of any, please let me know.

Estimating the error introduced by unknown unknowns is hard. For the land station temperatures this may be possible because you have so much redundant information (many stations measuring the same regional climate signal); for the satellite or SST temperatures, the unknown unknowns are a challenge, to say the least.