Showing posts with label Berkeley Earth.

Friday, May 29, 2020

What does statistical homogenization tell us about the underestimated global warming over land?

Climate station data contains inhomogeneities, which are detected and corrected by comparing a candidate station to its neighbouring reference stations. The most important inhomogeneities are the ones that lead to errors in the station network-wide trends and in global trend estimates. 

An earlier post in this series argued that statistical homogenization will tend to under-correct errors in the network-wide trends in the raw data. Simply put: some of the trend error will remain. The catalyst for this series is the new finding that when the signal to noise ratio is too low, homogenization methods make large errors in the positions of the jumps/breaks. For much of the earlier data and for networks in poorer countries this probably means that any trend errors will be seriously under-corrected, if they are corrected at all.
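
To make this concrete, here is a minimal Python sketch of relative homogenization with a single neighbour. It is my own toy illustration, not the pairwise algorithm NOAA or Berkeley Earth actually use, and every number is invented. The difference series between a candidate and a reference station removes the shared regional climate signal, and a simple test then searches for the most likely break; when the break is small compared to the remaining noise, the estimated break position quickly becomes unreliable, which is the signal to noise problem mentioned above.

import numpy as np

rng = np.random.default_rng(42)
n_years = 100
regional_climate = rng.normal(0.0, 0.8, n_years)  # weather shared by nearby stations

def simulate_pair(break_size, break_year=60, local_noise=0.5):
    """Candidate station with one artificial jump; the neighbour is homogeneous."""
    candidate = regional_climate + rng.normal(0.0, local_noise, n_years)
    candidate[break_year:] += break_size
    neighbour = regional_climate + rng.normal(0.0, local_noise, n_years)
    return candidate, neighbour

def most_likely_break(diff):
    """Year that maximises the mean shift between the two segments,
    a bare-bones stand-in for the test statistics real methods use."""
    best_year, best_score = None, -np.inf
    for k in range(5, len(diff) - 5):  # avoid very short segments
        shift = abs(diff[k:].mean() - diff[:k].mean())
        score = shift * np.sqrt(k * (len(diff) - k) / len(diff))
        if score > best_score:
            best_year, best_score = k, score
    return best_year

for break_size in (1.0, 0.2):  # high versus low signal to noise ratio
    candidate, neighbour = simulate_pair(break_size)
    diff = candidate - neighbour  # the regional signal cancels in the difference
    print(f"break of {break_size} °C inserted in year 60, "
          f"detected near year {most_likely_break(diff)}")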

The questions for this post are: 1) what do the corrections in global temperature datasets do to the global trend, and 2) what can we learn from these adjustments for global warming estimates?

The global warming trend estimate

In the global temperature station datasets statistical homogenization leads to larger warming estimates. Since we tend to underestimate how much correction is needed, this suggests that the Earth warmed more than current estimates indicate.

Below is the warming estimate in NOAA’s Global Historical Climate Network (versions 3 and 4) from Menne et al. (2018). You see the warming in the raw data (before homogenization; dashed lines) and in the homogenized data (solid lines). The new version 4 is drawn in black, the previous version 3 in red. For both versions homogenization makes the estimated warming larger.

After homogenization the warming estimates of the two versions are quite similar; the difference is in the raw data. Version 4 is based on the raw data of the International Surface Temperature Initiative and has many more stations. Version 3 had many stations that report automatically; these are typically professional stations and a considerable part of them are at airports. One reason the raw data may show less warming in version 3 is that many airport stations were in cities before. Taking them out of the urban heat island, and often also improving the local siting of the station, may have produced a systematic artificial cooling in the raw observations.

Version 4 has more stations and thus a higher signal to noise ratio, so one may expect its homogenized data to show more warming. That this is not the case is a first hint that the situation is not that simple, as explained at the end of this post.


The global land warming estimates from 1880 based on NOAA’s Global Historical Climate Network dataset. The red lines are for version 3, the black lines for the new version 4. The dashed lines show the warming before homogenization and the solid lines after homogenization. Figure from Menne et al. (2018).

The difference due to homogenization in the global warming estimates is shown in the figure below, also from Menne et al. (2018). The study also added an estimate for the data of the Berkeley Earth initiative.

(Background information. Berkeley Earth started as a US Culture War initiative where non-climatologists computed the observed global warming. Before the results were in, climate “sceptics” claimed their methods were the best and they would accept any outcome. The moment the results turned out to be scientifically correct, but not politically correct, the climate “sceptics” dropped them like a hot potato.)

We can read from the figure that over the full period homogenization increases the warming estimate by about 0.3 °C per century in GHCNv3, by 0.2 °C per century in GHCNv4 and by 0.1 °C per century in the Berkeley Earth dataset. GHCNv3 has more than 7000 stations (Lawrimore et al., 2011). GHCNv4 is based on the ISTI dataset (Thorne et al., 2011), which has about 32,000 stations, but GHCN only uses those with at least 10 years of data and thus contains about 26,000 stations (Menne et al., 2018). Berkeley Earth is based on 35,000 stations (Rohde et al., 2013).


The difference due to homogenization in the global warming estimates (Menne et al., 2018). The red line is for the smaller GHCNv3 dataset, the black line for GHCNv4 and the blue line for Berkeley Earth.

What does this mean for global warming estimates?

So, what can we learn from these adjustments for global warming estimates? At the moment, I am afraid, not yet a whole lot. The sign, however, is quite likely right: if we could do a perfect homogenization, I expect this would make the warming estimates larger. But it is difficult to estimate how large the corrections should have been from the corrections that were actually made in the above datasets.

In the beginning, I was thinking: if the signal to noise ratio in some network is too low, we may be able to estimate that in such a case we under-correct, say, 50% and then make the adjustments unbiased by making them, say, twice as large.
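
In code form, that naive rescaling would look like this. It is a toy calculation only; every number below is invented and not an estimate from any real dataset.

# Toy arithmetic for the naive rescaling idea above; all numbers are invented.
applied_correction = 0.15             # °C per century actually added by homogenization
estimated_correction_fraction = 0.5   # fraction of the bias corrected, e.g. from benchmarking studies

# If we trusted that fraction, an "unbiased" correction would simply be scaled up:
inflated_correction = applied_correction / estimated_correction_fraction
print(f"inflated correction: {inflated_correction:.2f} °C per century")  # prints 0.30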

However, especially doing this globally is a huge leap of faith.

The first assumption this would make is that the trend bias in data sparse regions and periods is the same as that of data rich regions and periods. However, the regions with high station density are in the mid-latitudes, where atmospheric measurements are relatively easy. The data sparse periods are also the periods in which large changes in the instrumentation were made as we were still learning how to make good meteorological observations. So we cannot reliably extrapolate from data rich regions and periods to data sparse regions and periods.

Furthermore, there will not be one correction factor to account for under-correction, because the signal to noise ratio is different everywhere. Maybe America is only under-corrected by 10% and needs just a little nudge to make the trend correction unbiased. However, homogenization adjustments in data sparse regions may only be able to correct such a small part of the trend bias that correcting for the under-correction becomes adventurous or may even make trend estimates more uncertain. So we would at least need to make such computations for many regions and periods.

Finally, another reason not to take such an estimate too seriously lies in the spatial and temporal characteristics of the bias. The signal to noise ratio is not the only problem; one would expect that it also matters how the network-wide trend bias is distributed over the network. In the case of relocations of city stations to airports, a small number of stations will have a large jump. Such a large jump is relatively easy to detect, especially as its neighbouring stations will mostly be unaffected.

A harder case is the time of observation bias in America, where over many decades a large part of the stations experienced a cooling shift from afternoon to morning measurements. Here, in most cases the neighbouring stations were not affected around the same time, but the smaller shift makes these breaks harder to detect.

(NOAA has a special correction for this problem, but when it is turned off statistical homogenization still finds the same network-wide trend. So for this kind of bias the network density in America is apparently sufficient.)

Among the hardest cases are changes in the instrumentation, for example the introduction of Automatic Weather Stations in recent decades or the introduction of the Stevenson screen a century ago. These relatively small breaks often happen over a period of only a few decades, if not years, which means that the neighbouring stations are affected as well. That makes them hard to detect in a difference time series.
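
A small continuation of the earlier toy sketch shows why such network-wide transitions are so hard to see. Again, this is only an invented illustration, not any operational method: when the neighbour experiences a similar jump only a few years after the candidate, the jump largely cancels in their difference series.

import numpy as np

rng = np.random.default_rng(0)
n_years = 100
regional = rng.normal(0.0, 0.8, n_years)  # shared regional signal

def station(jump_year, jump_size, noise=0.5):
    series = regional + rng.normal(0.0, noise, n_years)
    series[jump_year:] += jump_size
    return series

def jump_in_difference(candidate, reference, year=60):
    diff = candidate - reference
    return diff[year:].mean() - diff[:year].mean()

candidate = station(60, 0.5)                          # e.g. a new screen installed in year 60
isolated = regional + rng.normal(0.0, 0.5, n_years)   # neighbour without a break
network_wide = station(63, 0.5)                       # neighbour gets the same change in year 63

print("isolated break, visible in the difference series:",
      round(jump_in_difference(candidate, isolated), 2))
print("network-wide transition, largely cancels:",
      round(jump_in_difference(candidate, network_wide), 2))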

Studying from the data how the biases are distributed is hard. One could homogenize the data and study the detected breaks, but the breaks that are difficult to detect will then be under-represented. This is a tough problem; please leave suggestions in the comments.

Because of how the biases are distributed, it is perfectly possible that the trend biases corrected in GHCN and Berkeley Earth are due to the easy-to-correct problems, such as the relocations to airports, while the hard ones, such as the transition to Stevenson screens, are hardly corrected. In this case, the corrections that could be made do not provide information on the ones that could not be made; they have different causes and different difficulties.

So if we had a network where the signal to noise ratio is around one, we could not say that the under-correction is, say, 50%. One would have to specify for which kind of distribution of the bias this is valid.

GHCNv3, GHCNv4 and Berkeley Earth

Coming back to the trend estimates of GHCN version 3 and version 4: one may have expected that version 4, having more stations, is better able to correct trend biases and should thus show a larger trend than version 3. This would go even more so for Berkeley Earth. But the final trend estimates are quite similar. Similarly, in the most data rich period, after the Second World War, the smallest corrections are made.

The datasets with the largest number of stations showing the strongest trend would have been a reasonable expectation if the trend estimates of the raw data had been similar. But these raw data trends are the reason for the differences in the size of the corrections, while the trend estimates based on the homogenized data are quite similar.

Many additional stations will be in regions and periods where we already had many stations and where the station density was no problem. On the other hand, adding some stations to data sparse regions may not be sufficient to fix the low signal to noise ratio. So the largest improvements would be expected for the moderate cases where the signal to noise ratio is around one. Until we have global estimates of the signal to noise ratio for these datasets, we do not know for which percentage of stations this is relevant, but it could be relatively small.

The arguments of the previous section also apply here; the relationship between station density and adjustments may not be that simple. It is especially suspicious that the corrections in the period after the Second World War are so small; we know quite a lot happened to the measurement networks then. Maybe these effects all average out, but that would be quite a coincidence. Another possibility is that these changes in observational methods were made over relatively short periods to entire networks, making them hard to correct.

A reason for the similar outcomes for the homogenized data could be that all datasets successfully correct trend biases due to problems like the transition to airports, while for every dataset the signal to noise ratio is not high enough to correct problems like the transition to Stevenson screens. GHCNv4 and Berkeley Earth, which use as many stations as they could find, could well contain more badly sited stations than GHCNv3, which was more selective. In that case the smaller effective corrections of these two datasets would be due to compensating errors.

Finally, a small disclaimer: the main change from version 3 to 4 was the number of stations, but there were other small changes, so this is not a pure comparison of two datasets where only the signal to noise ratio differs. Such a pure comparison still needs to be made. The homogenization methods of GHCN and Berkeley Earth are even more different.

My apologies for all the maybes and could-bes, but this is more complicated than it may look, and I would not be surprised if it turns out to be impossible to estimate how much correction is needed from the corrections that homogenization algorithms actually make. The only thing I am confident about is that homogenization improves trend estimates; I am just not confident about how much.

Parallel measurements

Another way to study these biases in the warming estimates is to go into the books and study station histories in 200 plus countries. This is basically how sea surface temperature records are homogenized. To do this for land stations is a much larger project due to the large number of countries and languages.

Still, there are such parallel measurement experiments, which give a first estimate of some of the biases in the global mean temperature (do not expect regional detail). In the next post I will try to estimate the missing warming this way. We do not have much data from such experiments yet, but I expect that this will be the future.
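
As a sketch of the principle behind parallel measurements (again with invented numbers, not real parallel data): the old and the new set-up are operated side by side for a few years, and the mean difference over the overlap period estimates the bias the transition introduced into the record.

import numpy as np

rng = np.random.default_rng(1)
days = 365 * 3                                                    # three years of overlap
true_temperature = 15 + rng.normal(0.0, 3.0, days)
old_setup = true_temperature + 0.4 + rng.normal(0.0, 0.2, days)   # e.g. a warm-biased screen
new_setup = true_temperature + rng.normal(0.0, 0.2, days)

transition_bias = np.mean(old_setup - new_setup)
print(f"estimated bias of the old set-up: {transition_bias:+.2f} °C")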

Other posts in this series






References

Chimani, Barbara, Victor Venema, Annermarie Lexer, Konrad Andre, Ingeborg Auer and Johanna Nemec, 2018: Inter-comparison of methods to homogenize daily relative humidity. International Journal of Climatology, 38, pp. 3106–3122. https://doi.org/10.1002/joc.5488

Gubler, Stefanie, Stefan Hunziker, Michael Begert, Mischa Croci-Maspoli, Thomas Konzelmann, Stefan Brönnimann, Cornelia Schwierz, Clara Oria and Gabriela Rosas, 2017: The influence of station density on climate data homogenization. International Journal of Climatology, 37, pp. 4670–4683. https://doi.org/10.1002/joc.5114

Lawrimore, Jay H., Matthew J. Menne, Byron E. Gleason, Claude N. Williams, David B. Wuertz, Russel S. Vose and Jared Rennie, 2011: An overview of the Global Historical Climatology Network monthly mean temperature data set, version 3. Journal of Geophysical Research, 116, D19121. https://doi.org/10.1029/2011JD016187

Lindau, Ralf and Victor Venema, 2018: On the reduction of trend errors by the ANOVA joint correction scheme used in homogenization of climate station records. International Journal of Climatology, 38, pp. 5255–5271. https://doi.org/10.1002/joc.5728 (manuscript: https://eartharxiv.org/r57vf/)

Menne, Matthew J., Claude N. Williams, Byron E. Gleason, J. Jared Rennie and Jay H. Lawrimore, 2018: The Global Historical Climatology Network Monthly Temperature Dataset, Version 4. Journal of Climate, 31, pp. 9835–9854. https://doi.org/10.1175/JCLI-D-18-0094.1

Rohde, Robert, Richard A. Muller, Robert Jacobsen, Elizabeth Muller, Saul Perlmutter, Arthur Rosenfeld, Jonathan Wurtele, Donald Groom and Charlotte Wickham, 2013: A New Estimate of the Average Earth Surface Land Temperature Spanning 1753 to 2011. Geoinformatics & Geostatistics: An Overview, 1, no.1. https://doi.org/10.4172/2327-4581.1000101

Sutton, Rowan, Buwen Dong and Jonathan Gregory, 2007: Land/sea warming ratio in response to climate change: IPCC AR4 model results and comparison with observations. Geophysical Research Letters, 34, L02701. https://doi.org/10.1029/2006GL028164

Thorne, Peter W., Kate M. Willett, Rob J. Allan, Stephan Bojinski, John R. Christy, Nigel Fox, Simon Gilbert, Ian Jolliffe, John J. Kennedy, Elizabeth Kent, Albert Klein Tank, Jay Lawrimore, David E. Parker, Nick Rayner, Adrian Simmons, Lianchun Song, Peter A. Stott and Blair Trewin, 2011: Guiding the creation of a comprehensive surface temperature resource for twenty-first century climate science. Bulletin of the American Meteorological Society, 92, ES40–ES47. https://doi.org/10.1175/2011BAMS3124.1

Wallace, Craig and Manoj Joshi, 2018: Comparison of land–ocean warming ratios in updated observed records and CMIP5 climate models. Environmental Research Letters, 13, no. 114011. https://doi.org/10.1088/1748-9326/aae46f 

Williams, Claude, Matthew Menne and Peter Thorne, 2012: Benchmarking the performance of pairwise homogenization of surface temperatures in the United States. Journal of Geophysical Research, 117, D05116. https://doi.org/10.1029/2011JD016761


Sunday, July 2, 2017

The Trump administration proposes a new scientific method just for climate studies



What could possibly go wrong?

Scott Pruitt is the former Oklahoma Attorney General who copied and pasted letters from pro-pollution lobbyists onto his letterhead. Much of his previous work was devoted to suing the EPA. Now he works for the big money donors as head of the EPA. This Scott Pruitt is allegedly working on formulating a new scientific method to be used for studying climate change alone. E&E News just reported that this special scientific method will use "red team, blue team" exercises to conduct an "at-length evaluation of U.S. climate science."

Let's ignore that it makes no sense to speak of US climate science when it comes to the results. Climate science is the same in every country. There tends to be only one reality.

Previously Rick Perry, head of the Department of Energy (DOE) who campaigned on closing the DOE before he knew what it does, had joined the group calling to replace the scientific method with a Red Team Blue Team exercise.



A Red Team is supposed to challenge the claims of the Blue Team. It is an idea from hierarchical organisations, like the military and multinationals, where challenging the orthodoxy is normally not appreciated and thus needs to be specially encouraged when management welcomes it.

Poking holes is our daily bread

It could naturally be that the climate "sceptics" do not know that challenging other studies is built into everything scientists do; they do not give the impression of knowing science that well. In their think tanks and multinational corporations they are probably happy to bend the truth to get ahead. They may think that that is how science works and may not be able to accept that a typical scientist is intrinsically motivated to figure out how reality works.
At every step of a study a scientist is aware that at the end it has to be written up very clearly to be criticised by peer reviewers before publication and by any expert in the field after publication, and that people will build on the study and in doing so may find flaws. Scientific claims should be falsifiable: one should be able to show them wrong. The main benefit of this is that it forces scientists to describe the work very clearly and make it vulnerable to attack.

The first time new results are presented is normally in a working group seminar, where the members of the Red Team are sitting around the table, ask specific questions during the talk and criticise the main ideas after the talk. These are scientists working with similar methods, but also ones who work on very different problems. All of them, and especially the group leaders, have an interest in defending the reputation of the group and making sure no nonsense spoils it.

The results are normally also presented at workshops, conferences and invited talks at other groups. At workshops leading experts will be there, working on similar problems but with a range of different methods and backgrounds. At conferences and invited talks there are also many scientists from adjacent fields in the audience, or scientists working with similar methods on other problems. A senior scientist will get blunt questions after the talk if anything is wrong with it. Younger scientists will get nicer questions in public and the blunt ones in private.

An important Red Team consists of your co-authors. Modern science is mostly done in teams: that is more efficient, reduces the chances of rookie errors and is very easy thanks to the internet. The co-authors vouch with their reputation for the quality of the study, especially for the parts where they have expertise.

None of these steps are perfect and journalists should get away from their single-study fetish. But together these steps ensure that the quality of the scientific literature as a whole is high.

(It is actually good that none of these steps are perfect. Science works on the boundary of what is known; scientists who do not make errors are not pushing themselves enough. If peer review only passed perfect articles, that would be highly inefficient and not much would be published; it normally takes several people and studies until something is understood. It is helpful that the scientific literature is high quality, but it does not need to be perfect.)

Andrew Revkin should know not to judge the quality of science by single papers or single scientists, that peer review does not need to be perfect and that it did not exist for most of the scientific era. But being a false balance kind of guy, he regrettably uses "Peer review is often not as adversarial as intended" as an argument to see merit in a Red Team exercise, while simultaneously acknowledging that "All signs point to political theater".

Red Team science

An optimistic person may think that the Red Team proposal of the Trump administration will follow the scientific method. We already had the BEST project of the conservative physics professor Richard Muller. BEST was a team of outside people having a look at the warming over land estimated from weather station observations. This project was funded in part by the Charles G. Koch Foundation, one of the Koch Brother organisations that fund hard core deniers such as the Heartland Institute.

The BEST project found that the previous scientific assessments of the warming were right.



The BEST project is also a reason not to be too optimistic about Pruitt's proposal. Before BEST published its results, mitigation sceptics were very enthusiastic about the work, and one of their main bloggers, Anthony Watts, claimed that the methods were so good that he would accept the outcome no matter the result. That changed when the result was in.

Judith Curry was part of BEST, but left before she would have had to connect her name with the results. Joseph Majkut of the Niskanen Center, who wrote an optimistic Red Team article, claims there were people who changed their minds due to BEST, but has not given any examples yet.

It also looks as if BEST was punished for the result that was inconvenient for the funders. The funders are apparently no longer interested in studying the quality of climate observations: Berkeley Earth now mainly works on air pollution, even though BEST has not even looked at the largest part of the Earth yet, the oceans. The nice thing about being funded by national science foundations is that they care about the quality of the work, but not about the outcomes.

If coal or oil corporations thought there was a minute possibility that climate science was wrong, they would fund their own research. Feel free to call that Red Team research. That they invest in PR instead shows how confident they are that the science is right. Initially Exxon did fund research; when it became clear climate change was a serious risk, they switched to PR.

Joseph Majkut thinks that a well-executed Red Team exercise could convince people. In the light of the BEST project, the corporate funding priorities and the behaviour of mitigation sceptics in the climate "debate", I am sceptical. People who did not arrive at their position because of science will not change their position because of science.

Washington Republicans will change their mind when the bribes, aka campaign contributions, of the renewable energy sector are larger than those of the fossil fuel sector. Or when the influence of money is smaller than that of the people, like in the good old days.


Science lives on clarity

As a scientist, I would suggest we just wait and see at this time. Let the Trump administration make a clear plan for this new scientific method. I am curious.

Let them tell us how they will select the members of the Red Team. Given that scientists are always critiquing each other's work, I am curious how they plan to keep serious scientists out of their Red Team. I would be happy to join; there is still a lot of work to do on the quality of station data. Scientific articles typically end with suggestions for future research. That is the part I like writing the most.

Because the Trump administration is also trying to cut funding for (climate) science, I get the impression that scientists doing science is not what they want. I would love to see how they excuse keeping scientists like me out of the Red Team.

It would also be interesting to see how they will keep the alarmists out. Surely Peter Wadhams would like to defend his position that the Arctic will be ice free this year or the next. Surely Guy McPherson would like to explain why we are doomed and why mainstream science, aka science, understates the problem in every imaginable way. I am sure Reddit Collapse of Civilization can suggest many more people with just as much scientific credibility as the people Scott Pruitt would like to invite. I hope they will apply to the Red Team.

That is just one question. Steven Koonin proposes in the Opinion section of the Wall Street Journal that:
A commission would coordinate and moderate the process and then hold hearings to highlight points of agreement and disagreement, as well as steps that might resolve the latter
Does this commission select the topics? Who are these organisers? Who selects them? What are the criteria? After decades of an unproductive blog climate "debate" we already know that there is no evidence that will convince the unreasonable. Will the commission simply write that the Red Team and the Blue Team disagreed about everything? Or will they assess whether it is reasonable for the Red Team to disagree with the evidence?

Clearly Scott Pruitt himself would be the worst possible choice to select the commission. Then the outcome would trivially be: the two teams disagree and Commission Coal Industry declares the Red Team the winner. We already have an NIPCC report with a collection of blog "science". There is no need for a second one.

The then right-wing government of The Netherlands set up a similar exercise: Climate Dialogue. It had a somewhat balanced commission and a few interesting debates on, for instance, climate sensitivity, the tropical hotspot, long-term persistence and Arctic sea ice. It was discontinued when it failed to find incriminating evidence, just like funding for BEST stopped, confirming the general theme of the USA climate "debate": scientists judge studies based on their quality, mitigation "sceptics" based on the outcome.

A somewhat similar initiative in the US was the Climate Change National Forum, where a journalist determined the debating topics by selecting newspaper articles. The homepage is still there, but no longer current. Maybe Pruitt has a few bucks.


"This is yet another example of politicians engaging in unhelpful meddling in things they know nothing about."
Ken Caldeira


How will Pruitt justify not asking the National Academy of Sciences (NAS), whose job this kind of assessment is, to organise the exercise? Surely the donors of Pruitt will not find the NAS acceptable; they already did an assessment and naturally found the answer that does not fit their economic interests. (Just like the findings on climate change of every other scientific organisation from all over the world do not fit their corruption-fuelled profits.)

I guess they will also not ask the Science Division of the White House.

Climate scientist Ken Caldeira called on Scott Pruitt to clarify the hypothesis he wants to test. Given the Trumpian overconfidence, the continual Trumpian own-goals, the Trumpian China-hoax extremism, the Trumpian incompetence and Trump's irrational donors wanting to go after the endangerment finding, I would not be surprised if they go after the question of whether the greenhouse effect exists, whether CO2 is a greenhouse gas or whether the world is warming. Pruitt said he wanted a "discussion about CO2 [carbon dioxide]."

That would be a party. There are many real and difficult questions and sources of uncertainties in climate science (regional changes, changes in extremes, the role of clouds, impacts etc.), but these stupid greenhouse-CO2-warming questions that dominate the low-rated US public "debate" are not among them.

The mitigation sceptical groups are not even able to agree among themselves which of these three stupid questions is the actual problem. I would thus suggest that the climate "sceptics" first use their new "scientific method" themselves to turn their chaotic mess of incompatible claims into something.


Red Team PR exercise

Donald Trump has already helped climate action in America enormously by cancelling the voluntary Paris climate agreement. Climate change is slow and global; everyone hopes someone else will solve it some time and attends to more urgent personal problems. When the climate hoaxer president cancelled the Paris agreement, the situation became more dangerous and Americans started paying attention. This surge is seen above in the Google searches for climate change in the USA; it was also noticeable on Reddit, where there was a huge demand for reliable information on climate science and climate action.

The Red Team exercise would give undue weight to a small group of fringe scientists. This is a general problem in America, where many Americans have the impression that extremist positions are still under debate because the fossil fuel industries bought many politicians who in turn say stupid things on cable TV and in opinion sections. These industries also place many ads and in return corporate media is happy to put "experts" on TV that represent their positions. Reality is that 97% of scientists and scientific studies agree that climate change is real and caused by us.

On the optimistic side, just like cancelling Paris made Americans discover that Washington is completely isolated on the world stage in its denial that climate change is a risk, the Red Team exercise could also lead to more Americans learning how broad the support for climate science in the scientific community is and how strong the evidence is.



If the rules of the exercise are clearly unfair, scientists will easily be able to explain why they do not join and ask Pruitt why he thinks he needs such unfair rules. While scientists are generally trusted, the opposite is true for Washington and the big corporations behind Pruitt.

The political donors have set up a deception industry with politicians willing to lie for them, media dedicated to spreading misinformation or at least willing to let their politicians deceive the public, "think tanks", their own fake version of the IPCC report and a stable of terrible blogs. These usual suspects writing another piece of misinformation for the EPA will hardly add to the load.

The trickiest thing could be to make clear to the public that science is not resolved in debates. The EPA official E&E News talked to was thinking of a "back-and-forth critique" by government-recruited experts. In science that back and forth is done on paper, to make sure it is clearly formulated, with time to check the claims, read the cited articles and crunch the data. If it is just talk, it is easy to make false claims, which cannot be fact checked on the spot. Unfortunately, history has shown that the Red Team will likely be willing to make false claims in public.

If the rules of the exercise are somewhat fair, science will win big time; we have the evidence on our side. At this time, when America pays attention to climate change, that could be a really good advertisement for science and the strength of the evidence that climate change is a huge risk that cannot be ignored.

Concluding, I am optimistic. Either they make the rules unfair, and it seems likely they will try to make this exercise into political theatre. Then we can ask them in public why they made the rules so unfair. Don't they have confidence in their position that climate change is a hoax?

If they make the rules somewhat fair, science will win big time. Science will win so much, you will be tired of all the winning, you will be begging, please mister scientist no more winning, I cannot take it any more.

Let me close with John Oliver on coal. Oliver was sued over this informative and funny piece by coal baron Robert Murray, who also stands behind Scott Pruitt and Trump.



Related reading

Red/Blue & Peer Review by the presidents of the American Geophysical Union (AGU) & the National Academy of Sciences: "Is this a one-off proposal targeting only climate science, or will it be applied to the scientific community’s research on vaccine safety, nuclear waste storage, or any of a number of important policies that should be informed by science?"

Are debatable scientific questions debatable?

Why doesn't Big Oil fund alternative climate research?

My previous post on the Red Cheeks Team.

Great piece by climate scientist Ken Caldeira: Red team, blue team.

Josh Voorhees in Slate: EPA Chief Scott Pruitt Wants to Enlist a “Red Team” to Sow Doubts About Climate Change.

Andrew Freedman in Mashable: EPA to actually hold 'red-team' climate debates, and scientists are livid.

Ars Technica: Playing fossil’s advocate — EPA intends to form “red team” to debate climate science. Agency head reported to desire “back-and-forth critique” of published research by Scott K. Johnson.

The pro-climate libertarian Niskanen Center: Can a Red Team Exercise Exorcise the Climate Debate? May I summarise this optimistic post as: if this new "Red Team" scientific method turns out to be the normal scientific method it would be useful.

Talking Points Memo: Pruitt Is Reportedly Starting An EPA Initiative To Challenge Climate Science.

Audubon's letter to Scott Pruitt: "The oil and gas industry manufactures a debate to avoid legal responsibility for their pollution and to eke out a few more years of profit and power."

Rebecca Leber in Mother Jones (May 2017): Leading Global Warming Deniers Just Told Us What They Want Trump to Do.

Scott Pruitt will likely not ask a court of law. Then they would lose again.

The Red Team method would still be a better scientific method than the authoritarian Soviet method proposed by a comment on a large mitigation sceptical blog, WUWT: Does anyone know if the [American Meteorological Society] gets any federal funding like the National Academy of Science does? ... People sometimes can change their tune when their health of their pocketbook is at stake. Do you really want to get your science from authoritarians abusing the power of the state to determine the truth?

Our wise and climate-cynical bunny thinks the Red Team exercise is a Team B exercise, which is the kind of exercise a Red Team should prevent.

Brad Plumer and Coral Davenport in the New York Times: E.P.A. to Give Dissenters a Voice on Climate, No Matter the Consensus.

Steven Koonin in the Opinion section of Rupert Murdoch's Wall Street Journal (April 2017): A ‘Red Team’ Exercise Would Strengthen Climate Science. (pay-walled)

Kelly Levin of the World Resources Institute: Pruitt’s “Red Team-Blue Team” Exercise a Bad Fit for EPA Climate Science.

Statement by Ken Kimmell, President, Union of Concerned Scientists: EPA to Launch Program Critiquing Climate Science


* Photo at the top of Scott Pruitt at CPAC 2017 by Gage Skidmore under a Creative Commons Attribution-ShareAlike 2.0 Generic (CC BY-SA 2.0) license.