Wednesday, 30 September 2015

UK Global Warming Policy Foundation (GWPF) not interested in my slanted questions

Mitigation sceptics like to complain that climate scientists do not want to debate them, but I actually do not get many informed questions about station data quality on my blog, and when I go to their blogs my comments are regularly snipped. Watts Up With That (WUWT) is a prominent blog of the mitigation sceptical movement in the US, and the hobby of its host, Anthony Watts, is the quality of station measurements. He even set up a project to take pictures of weather stations. One might expect him to be thrilled to talk to me, but Watts hardly ever answers. In fact, last year he tweeted: "To be honest, I forgot Victor Venema even existed." I already had the impression that Watts does not read science blogs that often, not even about his own main topic.

Two years ago Matt Ridley, adviser to the Global Warming Policy Foundation (GWPF), published an erroneous post on WUWT about the work of two Greek colleagues, Steirou and Koutsoyiannis. I had already explained the errors in a three-year-old blog post and thus wanted to point the WUWT readers to this mistake in a polite comment. This comment got snipped and replaced with:
[sorry, but we aren't interested in your slanted opinion - mod]
Interesting. I think such a response tells you a lot about a political movement and whether they believe themselves that they are a scientific movement.

Now the same happened on the homepage of the Global Warming Policy Foundation (GWPF).

To make accurate estimates of how much the climate has changed, scientists need to remove non-climatic changes from the observations. For example, a century ago thermometers were not protected as well against (solar) radiation as they are nowadays, and the observed land station temperatures were thus a little too high. In the same period the sea surface temperature was measured by taking a bucket of water out of the sea. While the measurement was going on, the water cooled by evaporation and the measured temperature was a little too low. Removing such changes makes the land temperature trend 0.2°C per century stronger in the NOAA dataset, while removing such changes from the sea surface temperature makes that trend smaller by about the same amount. Because the oceans are larger, the net effect of the climatologists' adjustments is to make the global mean trend smaller.
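A back-of-the-envelope sketch in Python of that last point, with illustrative round numbers only (a land fraction of roughly 29% and the approximately equal and opposite adjustments mentioned above); this is not NOAA's actual procedure:

```python
# Illustrative only: how adjustments of similar size but opposite sign
# combine into a smaller global mean trend, because the oceans dominate.
land_fraction = 0.29            # approximate fraction of the Earth that is land
ocean_fraction = 1.0 - land_fraction

land_adjustment = +0.2          # °C per century: adjustments strengthen the land trend
ocean_adjustment = -0.2         # °C per century: adjustments weaken the SST trend

global_adjustment = (land_fraction * land_adjustment
                     + ocean_fraction * ocean_adjustment)
print(f"Net effect on the global mean trend: {global_adjustment:+.2f} °C per century")
# about -0.08 °C per century: the adjustments make global warming look smaller
```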

Selecting two regions where the upward land surface temperature adjustments were relatively large, Christopher Booker accused scientists of fiddling with the data. In these two Telegraph articles he naturally did not explain to his readers how large the effect is globally, nor why it is necessary, nor how it is done. That would have made his conspiracy theory less convincing.

That was the start of the review by the Global Warming Policy Foundation (GWPF). Christopher Booker wrote:
Paul [Homewood], I thought you were far too self-effacing in your post on the launching of this high-powered GWPF inquiry into surface temperature adjustments. It was entirely prompted by the two articles I wrote in the Sunday Telegraph on 24 January and 7 February, which as I made clear at the time were directly inspired by your own spectacular work on South America and the Arctic.
Not a good birth, and Stoat is less impressed by the higher powers of the GWPF.

This failed birth resulted in a troubled childhood by giving the review team a list of silly loaded questions.

This troubled childhood was followed by an adolescence in disarray. The Policy Foundation asked everyone to send them responses to the silly loaded questions. I have no idea why. A review team should know the scientific literature themselves. It is a good custom to ask colleagues for advice on a manuscript, but a review team normally has the expertise to write a first draft themselves.

I was surprised that there were people willing to submit something to this organization. Stoat found two submissions. If Earth First! were to review the environmental impact of coal power plants, I would not expect many submissions from respected sources either.

When you ask people to help you and they invest their precious time in writing responses for you, the least you can do is read the submissions carefully, publish them and give a serious response. The Policy Foundation promised: "After review by the panel, all submissions will be published and can be examined and commented upon by anyone who is interested."

Nick Stokes submitted a report in June and recently found out that the Policy Foundation had wimped out and changed its plans in July:
"The team has decided that its principal output will be peer-reviewed papers rather than a report.
Further announcements will follow in due course."
To which Stokes replied on his blog:
"So...no report! So what happens to the terms of reference? The submissions? How do they interact with "peer-reviewed papers"?"
The review team of the Policy Foundation has now walked this back. Its chairman, Terence Kealey, a British biochemist, wrote this Tuesday:
"The panel has decided that its primary output should be in the form of peer-reviewed papers rather than a non-peer reviewed report. Work is ongoing on a number of subprojects, each of which the panel hopes will result in a peer reviewed paper.
One of our projects is an analysis of the numerous submissions made to the panel by members of the public. We anticipate that the submissions themselves will be published as an appendix to that analysis when it is published."
That sounded good. The review panel would focus on doing something useful, rather than answering their ordained silly loaded questions. And they would still take the submissions somewhat seriously. Right? The text is a bit vague, so I asked in the comments:
"How many are "numerous submissions"?
Any timeline for when these submissions will be published?"
I thought that was reasonably politely formulated. But these questions were removed within minutes. Nick Stokes happened to have seen them. Expecting this kind of behaviour by now, after a few years in this childish climate "debate", I naturally made the screenshot below.

Interesting. I think such a response tells you a lot about a political movement and whether they believe themselves that they are a scientific movement.

[UPDATE. Reminder to self: next time look in the spam folder before publishing a blog post.

Yesterday evening, I got a friendly mail from the administrator of the GWPF review homepage, Andrew Montford, better known to most as the administrator of the UK mitigation sceptical blog Bishop Hill. A blog where people think it is hilarious to remove the V from my last name.

He wrote that the GWPF news page was not supposed to have comments and that my comment was therefore (?) removed. Montford was also so kind as to answer my questions:
1. Thirty-five.
2. This depends on the progress on the paper in question. Work is currently at an early stage.

Still a pity that the people interested in this review cannot read these answers on the homepage. And no timeline.
]

[UPDATE 2019: The GWPF seems to have stopped paying for their PR page about their "review", https://www.tempdatareview.org/. It now hosts Chinese advertisements for pills. I am not aware of anything coming out of the "review", no report, no summary of the submitted comments written by volunteers in their free time for the GWPF, no article. If you thought this was a PR move to attack science from the start, you may have had a point.]






Related reading

Moyhu: GWPF wimps out

And Then There's Physics: Some advice for the Global Warming Policy Foundation

Stoat: What if you gave a review and nobody came?

Sunday, 27 September 2015

AP, how about the term "mitigation sceptic"?


The Associated Press has added an entry to their stylebook on how to refer to those who reject mainstream climate science. The stylebook provides guidance to the journalists of the news agency, but is also used by many other newspapers. No one has to follow such rules, but journalists and many other writers often follow such style guides for accurate and consistent language. It probably also has an entry on whether you should write stylebook or style book.

The new entry advises steering clear of the terms "climate sceptic" and "climate change denier", and using instead the long form "those who reject mainstream climate science" or, if that is too long, "climate doubter".

Peter Sinclair just published an interview by US National Public Radio (NPR) with the Associated Press’ Seth Borenstein, who wrote the entry. Peter writes: the sparks are flying. It also sounds as if those sparks are read from paper.

What do you call John Christy, a scientist who rejects mainstream science? What do you call his colleague Roy Spencer, who wrote a book titled "The Great Global Warming Blunder: How Mother Nature Fooled the World’s Top Climate Scientists"? What do you call the Republican US Senator with the snowball, [[James Inhofe]], who wrote a book claiming that climate science is a hoax? What do you call the Catholic Republican US Representative [[Paul Gosar]], who did not want to listen to the Pope talking about climate change? What do you call Anthony Watts, a blogger who is willing to publish everything he can spin into a story against mitigation and science? What do you call Tim Ball, a retired geography professor who likes to call himself a climatology professor, and who cites from Mein Kampf to explain that climate science is similar to the Big Lie of the Jewish World Conspiracy?

I would suggest: by their name. If you talk about a specific person, it is best to simply use their name. Most labels are inaccurate for a specific person. If a label is positive, we may be happy to accept it even if it is inaccurate. A negative label will naturally be disputed, and can normally be disputed.

We are thus not looking for a word for a specific person. We are looking for a term for the political movement that rejects mainstream climate science. I feel it was an enormous strawman of AP's Seth Borenstein to talk about John Christy. He is an enormous outlier: he may reject mainstream science, but as far as I know he talks like a scientist. He is not representative of the political movement of Inhofe, Rush Limbaugh and Fox News. Have a look at the main blogs of this political movement: Watts Up With That, Climate Etc., Bishop Hill, Jo Nova. And please do not have a look at the even more disgusting smaller active blogs.

That is the political movement we need a name for. Doubters? That would not be the first term I would think of after some years of participating in this weird climate "debate". If there is one problem with this political movement, it is a lack of doubt. These people are convinced they see obvious mistakes in dozens of scientific problems, mistakes which the experts in those fields are unable to see, while they themselves just need to read a few blog posts to get it. If you claim there are obvious mistakes, you have two options: either all scientists are incompetent or they are all in a conspiracy. These are the non-scientists who know better than scientists how science is done. These are the people who understand the teachings of Jesus better than the Pope. Without any doubt.

It would be an enormous step forward in the climate "debate" if these people had some doubts. Then you would be able to talk to them. Then they might also search for information themselves to understand their problems better. Instead they like to call every source of information on mainstream science an activist resource, as an excuse not to try to understand the very problem they claim to doubt.

I do think that the guidance of the AP is a big step forward. It stops the defamation of the term that stands for people who advocate sceptical scientific thinking in every aspect of life. The sceptic organisation the Center for Inquiry has lobbied news organisations for a long time to stop the inappropriate use of the word sceptic. The problem the word doubter has applies even more strongly to the term "sceptic". These people are not sceptical at all; in particular, they do not question their own ideas.

The style guide of The Guardian and the Observer states:
climate change denier
The [Oxford English Dictionary] defines a sceptic as "a seeker of the truth; an inquirer who has not yet arrived at definite conclusions".

Most so-called "climate change sceptics", in the face of overwhelming scientific evidence, deny that climate change is happening, or is caused by human activity, so denier is a more accurate term
I fully agree with The Guardian and NPR that "climate change denier" is the most accurate term for this group. They will complain about it because it does not put them in a good light. Which is rather ironic, because this is the same demographic that normally complains about Political Correctness when asked to use an accurate term rather than a derogatory one.

The typical complaint is that the term climate change denier associates them with holocaust deniers. I did not have that association before they voiced it; they are the group that promotes this association most actively. A denier is naturally simply someone who denies something, typically something that is generally accepted. The word existed long before the holocaust. The Oxford English Dictionary defines a denier as:
A person who denies something, especially someone who refuses to admit the truth of a concept or proposition that is supported by the majority of scientific or historical evidence:
a prominent denier of global warming
a climate change denier
In one-way communication, I see no problem with simply using the most accurate term. When you are talking with someone about climate science, however, I would say it is best to avoid the term. It will be used to talk about semantics rather than about the science, and the science is our strong point in this weird climate "debate".

When you talk about climate change in public, you do so for the people who are listening in. Those are many more people and they may have an open mind. The best way to show you have science on your side is to stick to one topic and go in depth: define your terms, ask for evidence and try to understand why you disagree. That is also what scientists would do when they disagree. Staying on topic is the best way to demonstrate their ignorance. You will notice that they will try everything to change the topic. Alert your listeners to this behaviour and keep asking questions about the initial topic. Using the term "denier" would only make it easier for them to change the topic.

An elegant alternative is the term "climate ostrich". With apologies to this wonderful bird, which does not actually put its head in the sand when trouble is in sight, but everyone immediately gets the connection: a climate ostrich is someone who does not see climate change as a problem. When climate ostriches venture out into the real world, they sometimes wrongly claim that no one has ever denied the greenhouse effect, but they are very sure it is not really a problem.

However, I am no longer convinced that everyone in this political movement fails to see the problem. Part of this movement may accept the framing of the environmental movement and of development groups that climate change will hit poor and vulnerable people hardest, and may like that a lot. Not everyone has the same values. Wanting to see people of other groups suffer is not a nice thing to say in public. What is socially acceptable in the US is to claim to reject mainstream science.

To also include this fraction, I have switched to the term "mitigation sceptic". If you listen carefully, you will hear that adaptation is no problem for many. The problem is mitigation. Mitigation is a political response to climate change. This term thus automatically makes clear that we are not talking about scientific scepticism, but about political scepticism. The rejection of mainstream science stems from a rejection of the solutions.



I have used "mitigation sceptic" for some time now and it seems to work. They cannot complain about the "sceptic" part. They will not claim to be fans of mitigation. Only once did someone answer that he was in favour of some mitigation policies, for reasons other than climate change. But then these are policies to reduce American dependence on the Saudi Arabian torture dictatorship, or policies to reduce air pollution, or policies to reduce unemployment by shifting the tax burden from labour to energy. These may happen to be the same policies, but then they are not policies to mitigate the impacts of climate change.

Post Scriptum. I will not publish any comments claiming that denier is a reference to the holocaust. No, that is not an infringement of your freedom of speech. You can start your own blog for anyone who wants to read that kind of stuff. That does not include me.

[UPDATE. John Mashey suggests the term "dismissives": Global Warming’s Six Americas 2009 carefully characterized the belief patterns of Americans, which they survey regularly. The two groups Doubtful and Dismissive are different enough to have distinct labels.

Ceist, in a comment suggested: "science rejecters".

Many options, no need for the very inaccurate term "doubter" for people who display no doubt. ]



Related reading


Newsweek: The Real Skeptics Behind the AP Decision to Put an End to the Term 'Climate Skeptics'.

Eli has a post on the topic, not for the faint of heart: Eli Explains It All.

Greg Laden defends Seth Borenstein as an excellent journalist, but also sees no "doubt": Analysis of a recent interview with Seth Borenstein about Doubt cf Denial.

My immature and neurotic fixation on WUWT or how to talk to mitigation sceptics in public.

How to talk to uncle Bob, the climate ostrich or how to talk to mitigation sceptics in your social circles.

Do dissenters like climate change?

Planning for the next Sandy: no relative suffering would be socialist.

Thursday, 24 September 2015

Model spread is not uncertainty #NWP #ClimatePrediction

Comparison of a large set of climate model runs (CMIP5) with several observational temperature estimates. The thick black line is the mean of all model runs. The grey region is the model spread. The dotted lines show the model mean and spread with new estimates of the climate forcings. The coloured lines are 5 different estimates of the global mean annual temperature from weather stations and sea surface temperature observations. Figures: Gavin Schmidt.

It seems as if 2015, and likely also 2016, will become very hot years. So hot that you no longer need statistics to see that there was no decrease in the rate of warming; you can easily see it by eye now. Maybe the graph also looks less deceptive now that the very prominent super El Nino year 1998 is clearly no longer the hottest.

The "debate" is therefore now shifting to the claim that "the models are running hot". This claim ignores the other main option: that the observations are running cold. Even assuming the observations to be perfect, it is not that relevant that some years the observed annual mean temperatures were close to lower edge of the spread of all the climate model runs (ensemble spread). See comparison shown at the top.

Now that we will not have this situation for some years, it may be a neutral occasion to explain that the spread of all the climate model runs does not equal the uncertainty of these model runs. Because some scientists also seem to make this mistake, I thought this was worthy of a post. One hint is naturally that the words are different. That is for a reason.

Long, long ago, at a debate at the scientific conference EGU, there was an older scientist who was really upset by ClimatePrediction.net, where the public can donate their computer resources to produce a very large dataset of climate model runs with a range of settings for parameters we are uncertain about. He worried that the modelled distribution would be used as a statistical probability distribution. He was assured that everyone was well aware the model spread was not the uncertainty. But it seems he was right and this awareness has faded.



Ensemble weather prediction

It is easiest to explain this difference in the framework of ensemble weather prediction, rather than going to climate directly. Much more work has been done in this field (meteorology is bigger and decadal climate prediction has only just started). Furthermore, daily weather predictions offer much more data to study how good the prediction was and how well the ensemble spread fits the uncertainty.

While it is popular to complain about weather predictions, they are quite good and continually improving. The prediction for three days ahead is now as good as the prediction for the next day was when I was young. If people really thought the weather prediction was bad, you have to wonder why they pay attention to it. I guess complaining about the weather and its predictions is just a safe conversation topic. Except when you stumble upon a meteorologist.

Part of the recent improvement of weather predictions is that not just one, but a large number of predictions are computed, which scientists call ensemble weather prediction. Not only is the mean of such an ensemble more accurate than the single realization we used to have, the ensemble spread also gives you an idea of the uncertainty of the prediction.
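A toy numerical illustration of why averaging an ensemble helps, under the optimistic assumption that the errors of the members are independent; real ensemble errors are correlated, so the actual gain is smaller:

```python
import numpy as np

rng = np.random.default_rng(42)
truth = 15.0                     # the "true" temperature to be predicted (°C)
n_cases, n_members = 10_000, 20  # forecast cases and ensemble size

# Toy assumption: each member's error is independent with a 2 °C spread.
members = truth + rng.normal(0.0, 2.0, size=(n_cases, n_members))

rmse_single = np.sqrt(np.mean((members[:, 0] - truth) ** 2))
rmse_mean = np.sqrt(np.mean((members.mean(axis=1) - truth) ** 2))
print(f"RMSE of one member:    {rmse_single:.2f} °C")  # ~2.0
print(f"RMSE of ensemble mean: {rmse_mean:.2f} °C")    # ~2.0/sqrt(20), ~0.45
```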

Somewhere in the sunny middle of a large high-pressure system you can be quite confident that the prediction is right; errors in the position of the high are then not that important. If this is combined with a blocking situation, where the highs and lows do not move eastwards much, it may be possible to make very confident predictions many days in advance. If a front is approaching it becomes harder to tell well in advance whether it will pass your region or miss it. If the weather will be showery, it is very hard to tell where exactly the showers will hit.

Ensembles give information on how predictable the weather is, but they do not provide reliable quantitative information on the uncertainties. Typically the ensemble is overconfident: the ensemble spread is smaller than the real uncertainty. You can test this by comparing predictions with many observations. In the figure below you can read that when the raw model ensemble (black line) was 100% certain (forecast probability) that it would rain more than 1 mm/hr, it should only have been 50% sure. And when 50% of the model ensemble showed rain, the observations showed rain in only 30% of such cases.


The "reliability diagram" for an ensemble of the regional weather prediction system of the German weather service for the probability of more than 1 mm of rain per hour. On the x-axis is the probability of the model, on the y-axis the observed frequency. The thick black line is the raw model ensemble. Thus when all ensemble members (100% probability) showed more than 1mm/hr, it was only rain that hard half the time. The light lines show results two methods to reduce the overconfidence of the model ensemble. Figure 7a from Ben Bouallègue et al. (2013).
To generate this "raw" regional model ensemble, four different global models were used for the state of the weather at the borders of this regional weather prediction model, the initial conditions of the regional atmosphere were varied and different model configurations were used.

The raw ensemble is still overconfident because the initial conditions are given by the best estimate of the state of the atmosphere, which has less variability than the actual state. The atmospheric circulation varies on spatial scales from millimetres to the size of the planet. Weather prediction models cannot model this completely, because the computers are not big enough; rather, they compute the circulation using a large number of grid boxes, which are typically 1 to 25 km in size. The flows on smaller scales do influence the larger-scale flow; this influence is computed with strongly simplified models for the turbulence: so-called parameterizations. These parameterizations are based on measurements or on more detailed models. Typically, they aim to predict the mean influence of the turbulence, but the small-scale flow is not always the same and would have varied had it been possible to compute it explicitly. This variability is missing.

The same goes for the parameterizations for clouds, their water content and the cloud cover. The cloud cover is a function of the relative humidity. If you look at the data, this relationship is very noisy, but the parameterization only takes the best guess. The parameterization for solar radiation takes these clouds in the various model layers and makes assumptions about how they overlap from layer to layer. In the model this is always the same; in reality it varies. The same goes for precipitation, for the influence of the vegetation, for the roughness of the surface, and so on. Scientists have started working on developing parameterizations that also simulate the variations, but this field is still in its infancy.
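To make the idea concrete, here is a deliberately toy sketch; the functional form and the numbers are invented for illustration and do not come from any real cloud scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def cloud_cover_deterministic(rel_hum):
    # Invented best-guess relation: no clouds below 60% relative
    # humidity, full cover at 100% (illustrative form only).
    return np.clip((rel_hum - 0.6) / 0.4, 0.0, 1.0)

def cloud_cover_stochastic(rel_hum):
    # Same best guess plus random scatter, mimicking the variability
    # of the unresolved small-scale flow that the mean relation misses.
    noise = rng.normal(0.0, 0.1, size=np.shape(rel_hum))
    return np.clip(cloud_cover_deterministic(rel_hum) + noise, 0.0, 1.0)

rel_hum = np.full(5, 0.8)                  # five grid boxes, same humidity
print(cloud_cover_deterministic(rel_hum))  # always exactly 0.5
print(cloud_cover_stochastic(rel_hum))     # scatters around 0.5
```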

Also the data for the boundary conditions, such as the height and roughness of the vegetation, the brightness of the vegetation and soil, the ozone concentrations and the amount of dust particles in the air (aerosols), are normally taken to be constant.

For the raw data fetishists out there: part of this improvement in weather predictions is due to the statistical post-processing of the raw model output. From simple to complicated: it may be seen in the observations that a model is on average 1 degree too cold; it may be known that this is 2 degrees for a certain region; and this may be due to biases especially during sunny high-pressure conditions. The statistical processing of weather predictions to reduce such known biases is known as model output statistics (MOS). (This is methodologically very similar to the homogenization of daily climate data.)
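A minimal sketch of the simplest form of MOS, assuming we have past pairs of raw forecasts and verifying observations; the numbers are made up for illustration:

```python
import numpy as np

# Invented example data: past raw model forecasts and what was observed.
forecasts_past = np.array([10.2, 12.5, 8.9, 15.1, 11.3])   # raw model output (°C)
observed_past  = np.array([11.0, 13.6, 10.1, 16.0, 12.4])  # verifying observations (°C)

# Least-squares fit of: observation ≈ a * forecast + b
a, b = np.polyfit(forecasts_past, observed_past, deg=1)

new_raw_forecast = 13.0
corrected = a * new_raw_forecast + b
print(f"raw {new_raw_forecast:.1f} °C -> MOS-corrected {corrected:.1f} °C")
```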

The same statistical post-processing used for the average can also be used to correct the overconfidence of the model spread of the weather prediction ensembles. Again from the simple to the complicated: when the above model ensemble is 100% sure it will rain, this can be corrected to 50%. The next step is to make this correction dependent on the rain rate; when all ensemble members show strong precipitation, the probability of precipitation is larger than when most only show drizzle.
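In the same spirit, a minimal sketch of the simplest probability correction: estimate the observed frequency for each bin of past forecast probabilities, as read off a reliability diagram like the one above, and use that frequency as the calibrated probability (toy data, invented numbers):

```python
import numpy as np

def calibration_table(past_prob, past_rained, n_bins=10):
    # For each bin of past forecast probabilities, the calibrated
    # probability is simply how often it actually rained in that bin.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(past_prob, edges) - 1, 0, n_bins - 1)
    return edges, np.array([past_rained[idx == k].mean() for k in range(n_bins)])

def calibrate(prob, edges, table):
    # Replace a raw forecast probability by the calibrated one.
    return table[np.clip(np.digitize(prob, edges) - 1, 0, len(table) - 1)]

# Toy training data for an overconfident ensemble: when the raw
# probability is 100%, it actually rains only about 60% of the time.
rng = np.random.default_rng(1)
raw_prob = rng.uniform(0.0, 1.0, 5000)
rained = rng.uniform(0.0, 1.0, 5000) < (0.5 * raw_prob + 0.1)

edges, table = calibration_table(raw_prob, rained)
print(calibrate(1.0, edges, table))  # roughly 0.6 rather than 1.0
```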

Climate projection and prediction

There is no reason whatsoever to think that the model spread of an ensemble of climate projections is an accurate estimate of the uncertainty. My inexpert opinion would be that for temperature the spread is likely again too small, I would guess by up to a factor of two. The better informed authors of the last IPCC report seem to agree with me when they write:
The CMIP3 and CMIP5 projections are ensembles of opportunity, and it is explicitly recognized that there are sources of uncertainty not simulated by the models. Evidence of this can be seen by comparing the Rowlands et al. (2012) projections for the A1B scenario, which were obtained using a very large ensemble in which the physics parameterizations were perturbed in a single climate model, with the corresponding raw multi-model CMIP3 projections. The former exhibit a substantially larger likely range than the latter. A pragmatic approach to addressing this issue, which was used in the AR4 and is also used in Chapter 12, is to consider the 5 to 95% CMIP3/5 range as a ‘likely’ rather than ‘very likely’ range.
The confidence interval of the "very likely" range is normally about twice as large as that of the "likely" range.

The ensemble of climate projections is intended to estimate the long-term changes in the climate. It was never intended to be used on the short term. Scientists have just started doing that under the header of "decadal climate prediction", and that is hard. It is hard because then we need to model the influence of internal variability of the climate system: variations in the oceans, ice cover, vegetation and hydrology. Many of these influences are local. Local and short-term variations that are not important for long-term projections of global means thus need to be accurate for decadal predictions. The variations in the global mean temperature that are to be predicted are small; that we can do this at all is probably because regionally the variations are larger. Peru and Australia see a clear influence of El Nino, which makes it easier to study. While El Nino is the biggest climate mode, globally its effect is just a (few) tenth(s) of a degree Celsius.

Another interesting climate mode is the [[Quasi-Biennial Oscillation]] (QBO), an oscillation of the wind direction in the stratosphere. If you do not know it, no problem, that is one for the climate mode connoisseur. To model it with a global climate model, you need a model with a very high top (about 100 km) and many model layers in the stratosphere. That takes a lot of computational resources and there is no indication that the QBO is important for long-term warming. Thus naturally most, if not all, global climate model projections ignore it.

Ed Hawkins has a post showing the internal variability of a large number of climate models. I love the name of the post: Variable Variability. It shows the figure below. How much the simulated variability differs between the models shows how much effort modellers still have to put into modelling internal variability. For that reason alone, I see no reason to simply equate the model ensemble spread with the uncertainty.



Natural variability

Next to the internal variability there is also natural variability due to volcanoes and solar variations. Natural variability has always been an important part of climate research. The CLIVAR (climate variability and predictability) programme is a component of the World Climate Research Programme and its predecessor started in 1985. Even if in 2015 and 2016 the journal Nature will probably publish fewer "hiatus" papers, natural variability will certainly stay an important topic for climate journals.

The studies that sought to explain the "hiatus" are still useful to understand why the temperatures were lower in some years than they otherwise would have been. At least the studies that hold up; I am not fully convinced yet that the data is good enough to study such minute details. In the Karl et al. (2015) study we have seen that small updates and reasonable data processing differences can produce small changes in the short-term temperature trends that are, however, large relative to something as minute as this "hiatus" thingy.

One reason the study of natural variability will continue is that we need this for decadal climate prediction. This new field aims to predict how the climate will change in the coming years, which is important for impact studies and prioritizing adaptation measures. It is hoped that by starting climate models with the current state of the ocean, ice cover, vegetation, chemistry and hydrology, we will be able to make regional predictions of natural variability for the coming years. The confidence intervals will be large, but given the large costs of the impacts and adaptation measures, any skill has large economic benefits. In some regions such predictions work reasonably well. For Europe they seem to be very challenging.

This is not only challenging from a modelling perspective, but also puts much higher demands on the quality and regional detail of the climate data. Researchers in our German decadal climate prediction project, MiKlip, showed that the differences between the different model systems could only be assessed well using a well homogenized radiosonde dataset over Germany.

Hopefully, the research on decadal climate prediction will give scientists a better idea of the relationship between model spread and uncertainty. The figure below shows a prediction from the last IPCC report, the hatched red shape. While this is not visually obvious, this uncertainty is much larger than the model spread. The likelihood of staying within the red shape is 66%, while the model spread shown covers 95% of the model runs. Had the red shape also shown the 95% level, it would have been about twice as high. How much larger the uncertainty is than the model spread is currently to a large part expert judgement. If we can formally compute this, we will have understood the climate system a little bit better again.
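That factor of about two can be checked with a quick back-of-the-envelope computation, assuming a roughly Gaussian uncertainty:

```python
from statistics import NormalDist

# How much wider is a central 95% interval than a central 66%
# ("likely") interval for a Gaussian distribution?
z66 = NormalDist().inv_cdf(0.5 + 0.66 / 2)  # ~0.95 standard deviations
z95 = NormalDist().inv_cdf(0.5 + 0.95 / 2)  # ~1.96 standard deviations
print(f"The 95% range is {z95 / z66:.2f} times as wide")  # ~2.05
```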






Related reading

In a blind test, economists reject the notion of a global warming pause

Are climate models running hot or observations running cold?

References

Ben Bouallègue, Zied, Theis, Susanne E., Gebhardt, Christoph, 2013: Enhancing COSMO-DE ensemble forecasts by inexpensive techniques. Meteorologische Zeitschrift, 22, pp. 49-59, doi: 10.1127/0941-2948/2013/0374.

Rowlands, Daniel J., David J. Frame, Duncan Ackerley, Tolu Aina, Ben B. B. Booth, Carl Christensen, Matthew Collins, Nicholas Faull, Chris E. Forest, Benjamin S. Grandey, Edward Gryspeerdt, Eleanor J. Highwood, William J. Ingram, Sylvia Knight, Ana Lopez, Neil Massey, Frances McNamara, Nicolai Meinshausen, Claudio Piani, Suzanne M. Rosier, Benjamin M. Sanderson, Leonard A. Smith, Dáithí A. Stone, Milo Thurston, Kuniko Yamazaki, Y. Hiro Yamazaki & Myles R. Allen, 2012: Broad range of 2050 warming from an observationally constrained large climate model ensemble. Nature Geoscience, 5, pp. 256–260, doi: 10.1038/ngeo1430.

Thursday, 17 September 2015

Are climate models running hot or observations running cold?

“About thirty years ago there was much talk that geologists ought only to observe and not theorise; and I well remember some one saying that at this rate a man might as well go into a gravel-pit and count the pebbles and describe the colours. How odd it is that anyone should not see that all observation must be for or against some view if it is to be of any service!”
Charles Darwin

“If we had observations of the future, we obviously would trust them more than models, but unfortunately…"
Gavin Schmidt

"What is the use of having developed a science well enough to make predictions if, in the end, all we're willing to do is stand around and wait for them to come true?"
Sherwood Rowland

This is a post in a new series on whether we have underestimated global warming; this installment is inspired by a recent article on climate sensitivity discussed at And Then There's Physics.

With his quirky quote, Gavin Schmidt naturally wanted to say something similar to Sherwood Rowland, but where it contrasts with Darwin's, I have to agree with Darwin and disagree with Schmidt. Schmidt got the quote from Knutson & Tuleya (thank you ATTP in the comments).

The point is that you cannot look at data without a model, at least a model in your head. Some people may not be aware of their model, but models and observations always go hand in hand. Either without the other is nothing. The naivete so often displayed at WUWT & Co., that you only need to look at the data, is completely unscientific, especially when all they look at is their cherry-picked miniature part of the data.

Philosophers of science, please skip this paragraph. You could say that initially, in ancient Greece, philosophers only trusted logic and heavily distrusted the senses. This is natural for that time: if you put a stick in the water it looks bent, but if you feel it with your hand it is still straight. In the 17th century, British empiricism went to the other extreme and claimed that knowledge mainly comes from sensory experience. However, for science you need both: you cannot make sense of the senses without theory, and theory helps you to ask the right questions of nature, without which you could observe whatever you'd like for eternity without making any real scientific progress. How many red Darwinian pebbles are there on Earth? Does that question help science? What do you mean by red pebbles?

In the hypothetical case of observations from the future, we would do the same. We would not prefer the observations, but use both observations and theory to understand what is going on. I am sure Gavin Schmidt would agree; I took his beautiful quote out of context.

Why am I writing this? What is left of "global warming has stopped" or "don't you know warming has paused?" is the claim that models predicted more warming than we see in the observations. Or as a mitigation sceptic would say: "the models are running hot". This difference is not big; this year we will probably get a temperature that fits the mean of the projections, but we also have an El Nino year, thus we would expect the temperature to be on the high side this year, which it is not.


Figure from Cowtan et al. (2015). Caption by Ed Hawkins: Comparison of 84 RCP8.5 simulations against HadCRUT4 observations (black), using either air temperatures (red line and shading) or blended temperatures using the HadCRUT4 method (blue line and shading). The shaded regions represent the 90% range (i.e. from 5-95%) of the model simulations, with the corresponding lines representing the multi-model mean. The upper panel shows anomalies derived from the unmodified RCP8.5 results, the lower shows the results adjusted to include the effect of updated forcings from Schmidt et al. [2014]. Temperature anomalies are relative to 1961-1990.

If there is such a discrepancy, the naive British empiricist might say:
  • "the models are running hot",
but the other two options are:
  • "the observations are running cold", or
  • the comparison between models and observations is flawed.
And each of these three options has an infinity of possibilities. As this series will show, there are many observations that suggest that the station temperature "observations are running cold". This is just one of them. Then one has to weigh the evidence.

If there is any discrepancy, a naive falsificationist may say that the theory is wrong. However, discrepancies always exist; most are stupid measurement errors. If a leaf does not fall to the ground, we do not immediately conclude that the theory of gravity is wrong. We start investigating. There is always the hope that a discrepancy can help us understand the problem better. It is from this better understanding that scientists conclude that the old theory was wrong.

Estimates of equilibrium climate sensitivity from the recent IPCC report. The dots indicate the mean estimates, the horizontal lines the confidence intervals. Only studies new to this IPCC report are labelled.

Looking at projections covers "only" the last few decades; how does it look over the entire instrumental record? People have estimated the climate sensitivity from the global warming observed until now. The equilibrium climate sensitivity indicates how much warming is expected in the long term for a doubling of the CO2 concentration. The figure to the right shows that several lines of evidence suggest that the equilibrium climate sensitivity is about 3°C. This value is not only estimated from the climate models, but also from climatological constraints (such as the Earth having escaped from [[snowball Earth]]), from the response to volcanoes, and from a diverse range of paleo reconstructions of past changes in the climate. And recently Andrew Dessler estimated the climate sensitivity to be 3°C based on decadal variability.

The outliers are the "instrumental" estimates. Not only do they scatter a lot and have large confidence intervals; that is to be expected, because global warming has only increased the temperature by 1°C up to now. But these estimates are on average also below 3°C. This is a reason to critically assess the climate models, climatological constraints and paleo reconstructions, but the most likely resolution would be that the outlier category, the "instrumental" estimates, are not accurate.

The term "instrumental" estimate refers to highly simplified climate models that are tuned to the observed warming. They need additional information on the change in CO2 (quite reliable) and on changes in atmospheric dust particles (so-called aerosols) and their influence on clouds (highly uncertain). The large spread suggests that these methods are not (yet) robust and some of the simplifications also seem to produce biases towards too low sensitivity estimates. That these estimates are on average below 3 is likely mostly due to such problems with the method, but it could also suggest that "the observations are running cold".

In this light, the paper discussed over at And Then There's Physics is interesting. The paper reviews the scientific literature on the relationship between how well climate models simulate changes in the climate for which we have good observations and which are important for the climate sensitivity (water vapour, clouds, tropical thunderstorms and ice), and the climate sensitivity these models have. It argues that:
the collective guidance of this literature [shows] that model error has more likely resulted in ECS underestimation.
Given that these "emergent constraint" studies find that the climate sensitivity of dynamic climate models may well be too low rather than too high, it makes sense to investigate whether the estimates from the "instrumental" category, the highly simplified climate models, are too low. One reason could be that we have underestimated the amount of surface warming.

The top panel (A) shows a measure of the mixing between the lower and middle troposphere (LTMI) over warm tropical oceans. The observed range is between the two vertical dashed lines. Every coloured dot is a climate model. Only the models with a high equilibrium climate sensitivity are able to reproduce the observed lower-tropospheric mixing.
The lower panel (B) shows a qualitative summary of the studies in this field. The vertical line is the climate sensitivity averaged over all climate models. For the models that reproduce water vapour well, this average is about the same. For the models that reproduce ice (cryosphere), clouds or tropical thunderstorms (ITCZ) well, the climate sensitivity is higher.

Concluding, climate models and other estimates of the climate sensitivity suggest that we may underestimate the warming of the surface temperature. This is certainly not conclusive, but there are many lines of evidence that climate change is going faster than expected, as we will see in further posts in this series: Arctic sea ice and snow cover, precipitation, sea level rise predictions, lake and river warming, etc. In combination, the [[consilience of evidence]] suggests at least that "the observations are running cold" is something we need to investigate.

Looking at the way station measurements are made, there are also several reasons why the raw observations may show too little warming. The station temperature record is rightly seen as a reliable source of information, but in the end it is just one piece of evidence and we should consider all of the evidence.

There are so many lines of evidence for underestimating global warming that science historian Naomi Oreskes wondered if climate scientists had a tendency to "err on the side of least drama" (Brysse et al., 2013). Rather than such a bias, all these underestimates of the speed of climate change could also have a common cause: an underestimate of global warming.

I did my best to give a fair view of the scientific literature, but as for most posts in this series, this topic goes beyond my expertise (station data). Thus a main reason to write these posts is to get qualified feedback. Please use the comments for this or write to me.




Related information

Gavin Schmidt wrote the same two years ago from a modeller's perspective: On mismatches between models and observations.

Gavin Schmidt's TED talk: The emergent patterns of climate change and corresponding article.

Climate Scientists Erring on the Side of Least Drama

Why raw temperatures show too little global warming

First post in this series wondering about a cooling bias: Lakes are warming at a surprisingly fast rate

References

Cowtan, Kevin, Zeke Hausfather, Ed Hawkins, Peter Jacobs, Michael E. Mann, Sonya K. Miller, Byron A. Steinman, Martin B. Stolpe, and Robert G. Way, 2015: Robust comparison of climate models with observations using blended land air and ocean sea surface temperatures. Geophysical Research Letters, 42, 6526–6534, doi: 10.1002/2015GL064888.

Fasullo, John T., Benjamin M. Sanderson and Kevin E. Trenberth, 2015: Recent Progress in Constraining Climate Sensitivity With Model Ensembles. Current Climate Change Reports, first online: 16 August 2015, doi: 10.1007/s40641-015-0021-7.

Schmidt, Gavin A. and Steven Sherwood, 2015: A practical philosophy of complex climate modelling. European Journal for Philosophy of Science, 5, no. 2, 149-169, doi: 10.1007/s13194-014-0102-9.

Brysse, Keynyn, Naomi Oreskes, Jessica O’Reilly and Michael Oppenheimer, 2013: Climate change prediction: Erring on the side of least drama? Global Environmental Change, 23, Issue 1, February 2013, Pages 327–337, doi: 10.1016/j.gloenvcha.2012.10.008.