It looks like the "Competitive Enterprise Institute" (CEI) just conned their dark money overlords with a stupid report rehashing all the same old claims of the mitigation sceptical movement that the BEST project of Richard Muller already studied as a Red Team.
Conservative physics professor Richard Muller claimed that before his BEST project he "did not know whether global warming was real, was completely bogus or maybe was twice as bad as people said". He was at least open to all sides.
Joe D’Aleo, co-author of the CEI-affiliated report, made the embarrassingly uninformed and wrong claim that “nearly all of the warming they are now showing are in the adjustments.”
The report "On the Validity of NOAA, NASA and Hadley CRU Global Average Surface Temperature Data & The Validity of EPA’s CO2 Endangerment Finding - Abridged Research Report" (with annotations) by James P. Wallace III, Joseph S. D’Aleo, and Craig D. Idso provides no evidence for this claim; the graph above shows the opposite is true.
They don't do subtle. But really? You want to claim the Earth is not warming? In 2017?
Glaciers are melting, from the tropical [[Kilimanjaro]] glaciers, to the ones in the Alps and Greenland. Arctic sea ice is shrinking. The growing season in the mid-latitudes has become weeks longer. Trees bud and blossom earlier. Wine can be harvested earlier. Animals migrate earlier. The habitat of plants, animals and insects is shifting poleward and up the mountains. Lakes and rivers freeze later and break-up the ice earlier. The oceans are rising.
Even without looking at any thermometer data, even if we had never invented the thermometer, physics professor Muller was not sure the Earth was warming? Some corporate lobbyists of the CEI claim the Earth is hardly warming? Really? And the same group of people likes to say scientists should get out of the lab more often.
Richard Muller explained in the New York Times the main objections of the mitigation sceptics, which he studied and the CEI wants to study:
We carefully studied issues raised by skeptics: biases from urban heating (we duplicated our results using rural data alone), from data selection (prior groups selected fewer than 20 percent of the available temperature stations; we used virtually 100 percent), from poor station quality (we separately analyzed good stations and poor ones) and from human intervention and data adjustment (our work is completely automated and hands-off). In our papers we demonstrate that none of these potentially troublesome effects unduly biased our conclusions.

In the end Muller and his team found:
Our results turned out to be close to those published by prior groups. We think that means that those groups had truly been very careful in their work, despite their inability to convince some skeptics of that. They managed to avoid bias in their data selection, homogenization and other corrections.
The CEI report carefully avoids any mention of the BEST project. In fact it avoids any mention of previous studies on their "issues". That could be because they are uninformed henchmen, because they want to con their even dumber sponsors or because they want to deceive their friends and keep the public "debate" going ad nauseam.
If they were real sceptics they would inform themselves and, if they did not agree with a claim, respond to the arguments. A scientific article thus starts with a description of what is already known and then puts forward new arguments or new evidence. Just repeating ancient accusations, ignoring previous studies, does not lead to a better understanding or a better conversation.
Global mean temperature estimates

Before going over the main mistakes of the report, let me explain how much the Earth is estimated to have warmed, why adjustments need to be made and how these adjustments are made.
The graph below shows the warming since 1880. The red line is the raw data, the blue line the warming estimate after adjustments to account for changes in the way temperature was measured. Directly using raw data, the warming estimate would have been larger. Due to adjustments about 10% of the warming is removed.
This would be a good point to remember that Joe D’Aleo wrongly claimed that “nearly all of the warming they are now showing are in the adjustments.” It is really, really hard to be more wrong. Joe D’Aleo gets points for effort.
The main reason why the raw data suggests more warming is how the sea surface temperature was historically measured. The ocean surface warming estimates of the UK Hadley Centre are shown below. The main adjustment necessary is for the transition from bucket observations to measurements at the engine cooling water inlet, which mostly happened in the decades around WWII. The war itself is an especially difficult case.
Bucket measurements are made by hauling a bucket of water from the ocean and stirring a thermometer until it has the temperature of the water. The problem is that the water cools due to evaporation between the time it is lifted from the ocean and the time the thermometer is read.
This is not only a large adjustment, but also a large uncertainty. Initially it was estimated that the bucket measurements were about 0.4 °C colder. Nowadays the estimate, depending on the bucket and the period, is about 0.2 °C, but it can be anywhere between 0.4 °C and zero. We studied these biases with experiments on research vessels, in labs, with numerical modelling and by comparing measurements made by different nearby ships/platforms.
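The direction of this adjustment is easy to see in a few lines of Python. This is a toy sketch, not any operational method: the roughly 0.2 °C bucket bias is the ballpark figure from the text, while the sharp 1941 transition year, the function name and the anomaly values are made up for illustration.

```python
# Toy sketch of the bucket-bias adjustment: bucket-era sea surface
# temperatures read too cold due to evaporation, so past values are warmed
# to make them comparable with later engine-intake measurements.
def adjust_sst(years, raw_sst, transition_year=1941, bucket_bias=0.2):
    """Warm pre-transition (bucket) values by the estimated evaporative bias."""
    return [t + bucket_bias if y < transition_year else t
            for y, t in zip(years, raw_sst)]

years = [1900, 1920, 1940, 1960, 1980, 2000]
raw = [-0.5, -0.4, -0.3, -0.1, 0.0, 0.3]   # made-up anomalies in °C
adjusted = adjust_sst(years, raw)

# Warming the past reduces the century-scale trend:
raw_trend = raw[-1] - raw[0]                 # about 0.8 °C
adjusted_trend = adjusted[-1] - adjusted[0]  # about 0.6 °C
```

Raising the cold bucket-era values brings the past closer to the present, so the adjusted ocean record shows less warming than the raw data, exactly the direction D'Aleo gets wrong.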
The graph below shows the warming over land as estimated from weather station data by US NOAA (GHCNv3). Over land the warming was larger than the raw observations suggest. The adjustments are made by comparing every candidate station with its nearby neighbours. Changes in the regional climate will be the same in all stations, any change that only happens at the candidate station is not representative for the region, but likely a change in how temperature was observed.
There are many reasons why stations may not measure the regional climate correctly. The best known is urbanization of the local surrounding of the station. Cities are often warmer than the surrounding region and when cities grow this can produce a warming signal. This is a correct measurement, but not the large-scale warming of interest and should thus be removed. The counterpart of urbanization is that city stations are often moved to the outskirts, which typically produces a cooling jump that also needs to be removed. This can even be important for small villages.
City stations moving to cooler airports can produce an artificial cooling. Also modern equipment generally measures a bit cooler than early instruments.
Where the CEI report gives examples of data before and after adjustment, do you want to guess whether they showed the sea surface temperature or the land surface temperature?
Sea or land? What do you think? I'll wait.
If you guessed the land surface temperature you won the prize: a free Twitter account to tell the Competitive Enterprise Institute what you think of the quality of their propaganda.
The ocean is 71% of the Earth's surface. Thus if you combine these two temperature signals, taking the areas of the land and the ocean into account, the net effect of the adjustments is a reduction of global warming.
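The area-weighting arithmetic is simple enough to spell out. A minimal sketch with made-up adjustment sizes; only the signs follow the text (the ocean adjustment reduces the trend, the smaller land adjustment increases it):

```python
# Sketch: combining the effects of land and ocean adjustments on the global
# trend with area weights. The adjustment magnitudes are made up; only the
# signs and the 71%/29% area split are taken from the text.
OCEAN_FRACTION = 0.71
LAND_FRACTION = 0.29

ocean_adjustment = -0.10  # °C/century removed from the ocean trend (made up)
land_adjustment = +0.05   # °C/century added to the land trend (made up)

net = OCEAN_FRACTION * ocean_adjustment + LAND_FRACTION * land_adjustment
print(f"net effect on the global trend: {net:+.4f} °C/century")
# negative: globally, the adjustments reduce the estimated warming
```

Because the ocean covers more than twice the area of land, its downward adjustment dominates the global average even when the land adjustment points the other way.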
A semi-regular reminder that adjustments in surface temp analyses for non-climate-related biases LOWER 20th C trends. You're welcome. pic.twitter.com/jPRAehujYN— Gavin Schmidt (@ClimateOfGavin) 6 July 2017
The Daily Caller interviewed D'Aleo and calls the report a "peer-reviewed study", suggesting that it underwent the quality control by scientists with expertise in the field that is typical for scientific publications. There is no evidence that the report was published in the scientific literature, and its blog-science quality, the lack of clarity about how the figures were computed and where their data comes from, the lack of evidence for the claims and the lack of references to the scientific literature make it highly unlikely that this work is peer reviewed, to say it in a friendly way. There is no quality bar they will not limbo underneath; they don't do subtle.
Homogenisation

The estimation of the climatic changes at a station using neighbouring stations to remove local artefacts is called statistical homogenisation. The basic idea of comparing a candidate station with its neighbours is easy, but with typically multiple jumps at one station and also jumps in the neighbouring stations it becomes a beautiful statistical problem people can work on for decades.
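The basic idea of comparing a candidate station with a neighbour can be sketched in a few lines. This is a toy illustration, not an operational algorithm: one artificial 0.8 °C jump, a noiseless difference series and a single-break search, whereas real pairwise methods must handle noisy differences, many neighbours and multiple breaks.

```python
# Toy illustration of relative homogenisation: the shared regional signal
# cancels in the difference between a candidate station and its neighbour,
# leaving only the non-climatic jump. All numbers are made up.
n_years = 60
regional = [0.01 * i for i in range(n_years)]    # shared regional warming
candidate = [r + (0.8 if i >= 30 else 0.0)       # jump inserted in year 30
             for i, r in enumerate(regional)]
neighbour = list(regional)                       # homogeneous neighbour

diff = [c - n for c, n in zip(candidate, neighbour)]  # regional trend cancels

# Locate the break as the split that maximises the difference in means
# between the two segments.
best_split = max(range(1, n_years),
                 key=lambda k: abs(sum(diff[k:]) / (n_years - k)
                                   - sum(diff[:k]) / k))
print(best_split)  # → 30, the year the artificial jump was inserted
```

In real data the difference series still contains local weather noise and jumps from the neighbour itself, which is what makes the full problem so much harder than this sketch.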
Naturally scientists also study how well they can remove these artefacts. It is sad this needs to be mentioned, but the more friendly blog posts of the mitigation sceptical movement (implicitly) assume scientists are stupid and don't do any due diligence. Right, that is how we got to the moon and produced smart phones.
Such a study was actually how I started with this topic. The homogenisation community needed an outsider to make a blind benchmarking of their methods. So I generated a dataset with homogeneous station data, where you need to get both the variability of the stations and the variability (correlations) between the stations right. As the name of this blog suggests, just the job I like.
To this homogeneous data we added inhomogeneities. For me that was the biggest task: talking with dozens of experts from many different countries about what inhomogeneities typically look like. How many (about one per 20 years), how big (about 0.8 °C per jump), how many gradual inhomogeneities and how big (to model urbanization), and how often multiple stations have a simultaneous jump (for example, due to a central change in the time of observation).
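The insertion of jump inhomogeneities can be sketched in a few lines of Python. This is a toy version, not the benchmark code: the break frequency (about one per 20 years) and jump size (about 0.8 °C) come from the text, while the function name, the trivial weather model and the spread on the jump sizes are made up for illustration.

```python
# Toy sketch: turning a homogeneous station series into an inhomogeneous
# one by inserting random break points, after which all later values are
# shifted by a random jump.
import random

random.seed(42)  # make the example reproducible

def add_inhomogeneities(series, break_rate=1 / 20, jump_scale=0.8):
    """Each year has a break_rate chance of a new break; a break shifts
    all subsequent values by a random jump of roughly jump_scale °C."""
    offset = 0.0
    out = []
    for value in series:
        if random.random() < break_rate:
            offset += random.choice([-1, 1]) * random.gauss(jump_scale, 0.2)
        out.append(value + offset)
    return out

homogeneous = [random.gauss(0.0, 0.5) for _ in range(100)]  # toy station data
inhomogeneous = add_inhomogeneities(homogeneous)
```

The benchmark then hands only the inhomogeneous series to the homogenisation methods, keeping the homogeneous "truth" hidden so the test stays blind.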
I gave this inhomogeneous station dataset to my colleagues, who homogenised it and, after everyone returned the data, we analysed the results. We found that all methods improved the quality of monthly temperature data. More importantly for us was that modern homogenisation packages were clearly better than traditional methods. The work of the last decade had paid off.
A similar blind validation study was made for the homogenisation method NOAA used to homogenise GHCNv3, and it shows something important. The four panels are four different assumptions about what the inhomogeneities and the climate look like. This study chose to make some inhomogeneity cases really easy and some really hard.
On the horizontal axis are three periods. The red crosses are the trends in the inhomogeneous data, the green crosses the ones in the homogeneous data, which the homogenisation algorithms are supposed to recover and the yellow/orange crosses are the trends of the homogenised data.
The important thing is that the yellow cross is always in between: homogenisation improved the trend estimates, but part of the error in the trend remains. In the most difficult case of this study, which I consider unrealistic, the homogenised result was in the middle. Half of the trend error was removed, half remained.
Because real raw station data shows too little warming and statistical homogenisation makes the trend larger, better homogenisation also means stronger temperature trends over land. Homogenisation became better because of better homogenisation algorithms and because we have more data due to continual digitisation efforts. With more data, the stations will on average be closer together and thus experience more similar weather. This means that it becomes easier to see inhomogeneities in their differences.
CEI claims from the Daily Caller

Michael Bastasch of the Daily Caller makes several unsupported or wrong claims about the report. Other claims are already wrong in the report.
A new study found adjustments made to global surface temperature readings by scientists in recent years “are totally inconsistent with published and credible U.S. and other [New Zealand and upper air] temperature data.”

No shit, Sherlock. Next you will tell me that cassoulet does not taste like a McDonald’s hamburger, sea food or a cream puff. The warming of different air masses is different? Who would have thought so?
This becomes most Kafkaesque when the authors want to see the high number of 100 °F days of the 1930s US [[Dust Bowl]] in the global monthly average temperature and call this a "cyclical pattern". Not sure whether a report aimed at the Tea Party folks should insult American farmers and claim they will mismanage their lands to produce a Dust Bowl in regular cycles.
JJA 2016 compared directly to JJA 1936. Only western US and ~sub-polar N. Atl cooler last year. In annual mean, only N. Atl. pic.twitter.com/GsGcQsJfT9— Gavin Schmidt (@ClimateOfGavin) 7 July 2017
The report is drenched in conspiratorial thinking:
Basically, “cyclical pattern in the earlier reported data has very nearly been ‘adjusted’ out” of temperature readings taken from weather stations, buoys, ships and other sources.

They do not critique the methods used, or even mention them, and they acknowledge that adjustments are necessary; the mere fact that the outcome is inconvenient for their donors is enough to complain.
It also illustrates that the mitigation sceptical movement is preoccupied with the outcome and not with the quality of a study. Whether a new study is praised or criticized on their movement blog Watts Up With That depends on the outcome, on whether it can be spun as making their case against solving climate change stronger or weaker. On science blogs it depends on the quality and the strength of the evidence.
As I already showed above, the adjustments make the estimated warming smaller. The exact opposite is claimed by the Daily Caller:
In fact, almost all the surface temperature warming adjustments cool past temperatures and warm more current records, increasing the warming trend, according to the study's authors.

The study provides no evidence for this. They do not show the warming before and after adjustment for the global temperature, only for the land temperature.
Is it too much to ask to inform yourself before you accuse scientists of wrongdoing? Is it too much to ask if you write a report about the global temperature to read some scientific articles on data problems in the sea surface temperature? Is it too much to ask if you talk about the 1940s to wonder whether WWII might have influenced the measurements?
“Each dataset pushed down the 1940s warming and pushed up the current warming.”

The war increased the percentage of American navy vessels, which make engine intake measurements, and decreased the percentage of merchant ships, which make bucket measurements. That produces a spurious warm peak in the raw data.
Modern data also have a better coverage over the Earth. Locally there is more decadal variability, what they call "cyclical pattern". A better coverage will remove spurious decadal variability from the global average.
I have no clue why they would think this:
“You would think that when you make adjustments you’d sometimes get warming and sometimes get cooling. That’s almost never happened,” said D’Aleo, who co-authored the study with statistician James Wallace and Cato Institute climate scientist Craig Idso.

The transitions in the measurement methods due to technological and economic changes can naturally affect the global average temperature. For example, ships in the 19th century used bucket measurements, while now most sea surface temperature data comes from buoys.
If you assume inhomogeneities can have no influence on the global mean, like D'Aleo, then why are the mitigation sceptics claiming to be worried about the influence of urbanization on the global mean temperature? If that were the main problem, the adjustments would tend to produce cooling more often than warming to remove this problem. They would not "sometimes get warming and sometimes get cooling".
The report was an embarrassing mixture of the worst of blog science. The Daily Caller post managed to make it worse.
The positive side of Trump claiming that his inauguration was the biggest evah is that the public now understands where such wild claims come from. Science is harder to check than crowd sizes. Even if you do not know them personally, there are people on this globe willing to deny the existence of global warming without blinking an eye.
Quality of climate data

The climate scientists of Climate Feedback had a look at a Breitbart article on the same report. Seven scientists analyzed the article and estimated its overall scientific credibility to be 'very low'. The Breitbart article falsely claims that measured global warming has been “fabricated”.
Fact checker of urban legends Snopes judged the Breitbart article to be: False. Surprise. Had Breitbart known it to be true, they would not have published it.
Ars Technica: Thorough, not thoroughly fabricated: The truth about global temperature data. How thermometer and satellite data is adjusted and why it must be done.
John Timmer at Ars Technica is fed up with being served the same story about some upward adjusted stations every year: Temperature data is not “the biggest scientific scandal ever” Do we have to go through this every year?
Steven Mosher, a climate sceptic and member of the BEST project: all the adjustments demanded by the "sceptics".
The astronomer behind And Then There's Physics writes why the removal of non-climatic effects makes sense. In the comments he talks about adjustments made to astronomical data. Probably every numerical observational discipline of science performs data processing to improve the accuracy of their analysis.
Nick Stokes, an Australian scientist, has a beautiful post that explains the small adjustments to the land surface temperature in more detail.
Two posts of mine about some reasons for temperature trend biases: Temperature bias from the village heat island and Changes in screen design leading to temperature trend biases.
You may also be interested in the posts on how homogenization methods work (Statistical homogenisation for dummies) and how they are validated (New article: Benchmarking homogenisation algorithms for monthly data).
Just the facts, homogenization adjustments reduce global warming.
Zeke Hausfather: Major correction to satellite data shows 140% faster warming since 1998.
If you would like to read a peer-reviewed scientific article showing the adjustments: the influence of the adjustments on the global mean temperature is also shown in Karl et al. (2015).
NOAA's benchmarking study: Claude N. Williams, Matthew J. Menne, Peter W. Thorne, 2012: Benchmarking the performance of pairwise homogenization of surface temperatures in the United States. Journal of Geophysical Research, doi: 10.1029/2011JD016761.
On my benchmarking study: New article: Benchmarking homogenisation algorithms for monthly data.
Corporate war on science

The Guardian on the CEI report and their attempt to attack the endangerment finding: Conservatives are again denying the very existence of global warming.
Another post on the CEI report: Silly Non-Study Supposedly Strengthens Endangerment Challenge.
My first post on the Red Cheeks Team.
My last post on the Red Team idea: The Trump administration proposes a new scientific method just for climate studies.
Great piece by climate scientist Ken Caldeira: Red team, blue team.
Phil Newell: One Team, Two Team, Red Team, Blue Team.
Why doesn't Big Oil fund alternative climate research? Hopefully a rhetorical question. They would have had a lot to gain if they thought the science were wrong, but they fund PR not science.
Union of Concerned Scientists on the funding of the war by Exxon: ExxonMobil Talks A Good Game, But It’s Still Funding Climate Science Deniers.
The New Republic on several attacks on science by Scott Pruitt: The End Goal of Trump’s War on Science.
Mother Jones: A Jaw-Dropping List of All the Terrible Things Trump Has Done to Mother Earth. Goodbye regulations designed to protect the environment and public health.