Sunday, 26 May 2013

Christians on the climate consensus

Dan Kahan thinks that John Cook and colleagues should shut up about the climate consensus: the consensus among climatologists that the Earth is warming and that human action is the main cause. Kahan claims that research shows that talking about consensus is:
a style of advocacy that is more likely to intensify opposition ... than [to] ameliorate it
It sounds as if his main argument is that Cook's efforts are counterproductive because Cook is not an American Republican, which is hard to fix.

Katharine Hayhoe

As an example of how to communicate climate science the right way, Kahan mentions Katharine Hayhoe. Hayhoe is an evangelical climate change researcher and stars in three beautifully made videos in which she talks about God and climate change.

Apart from the fact that she also talks about her religion, I personally see no difference from any other message for the general public on climate change. She also speaks openly about the disinformation campaign by the climate ostriches.
The most frustrating thing about her position, she says, is the amount of disinformation which is targeted at her very own Christian community.
Perhaps naively, I was surprised that the Christian community is a special target. While I am not a Christian myself, my mother was a wise, environmentally conscious woman and a devout Christian. Also, when it comes to organized religion, I remember mainly expressions of concern about climate change. Thus I thought that Christians were a positive, maybe even activist, force with respect to climate change.

Thus let's have a look at what the Christian Churches think about climate change.

Monday, 20 May 2013

On consensus and dissent in science - consensus signals credibility

Since Skeptical Science published the Pac Man of The Consensus Project, the benign word consensus has stirred a surprising amount of controversy. I had already started drafting this post before, as I had noticed that consensus is an abomination to the true climate ostrich. Consensus in this case means that almost all scientists agree that the global temperature is increasing and that human action is the main cause. That the climate ostriches do not like this fact, I can imagine, but acting as if consensus is a bad thing in itself sounds weird to me. Who would be against the consensus that all men have to die?

The Greek hydrology professor Demetris Koutsoyiannis echoes this idea and seems to think that consensus is a bad thing (my emphasis):
I also fully agree with your statement. "This [disagreement] is what drives science forward." The latter is an important agreement, given a recent opposite trend, i.e. towards consensus building, which unfortunately has affected climate science (and not only).
So, what is the role of consensus in science? Is it good or bad, is it helpful or destructive, and should we care at all?

Credibility

In a recent post on the value of peer review for science and the press, I argued that one should not overstate the importance of peer review, but that it is a helpful filter to determine which ideas are likely worth studying. A paper that has passed peer review has some a priori credibility.

In my view, consensus is very similar: consensus lends an idea credibility. It does not say that an idea is true; when formulating carefully, a scientist will never state that something is true, not even about the basics of statistical mechanics or evolution, which are nearly truisms and have been confirmed via many different lines of research.

Wednesday, 15 May 2013

Readership of all major "sceptic" blogs is going down

In the first post of this series I showed that the readership of WUWT and Climate Audit has gone down considerably according to the web traffic statistics service Alexa; see below. (It also showed that the number of comments at WUWT is down by 100 comments a day since the beginning of 2012.)


reach of WUWT according to Alexa

reach of Climate Audit according to Alexa

I looked a bit further on Alexa and this good news is not limited to these two. All the "sceptic" blogs I knew of and could find statistics for are going down: Bishop Hill, Climate Depot, Global Warming, Judith Curry, Junk Science, Motls, and The Blackboard (Rank Exploits). Interestingly, the curves look very different for every site and unfortunately they show some artificial spikes. Did I miss a well-known blog?

Friday, 10 May 2013

Decline in the number of climate "sceptics", reactions and new evidence

My last post, showing that the numbers of readers of Watts Up With That and Climate Audit are declining according to the web traffic statistics service Alexa, has provoked some interesting reactions. A little research suggests that the response post by Tom Nelson, "Too funny: As global warming and Al Gore fall off the general public's radar, cherry-pickin' warmist David Appell argues that WUWT is 'Going Gently Into That Good Night'", could be a boomerang and another sign of the decline. More on that and two more indications that the climate change ostriches are in decline.

Public interest in climate change

An anonymous reader had the same idea as Tom Nelson, but rather than writing a mocking post, politely asked:
"how do you know it is not a general diminution of interest in climate change?".
That is naturally possible and hard to check without access to the statistics of all climate-related blogs and news pages. However, as you can see below, the numbers of readers of SkepticalScience and RealClimate seem to be stable according to Alexa. This suggests that the decline is not general, but specific to the "sceptic" community.

Sunday, 5 May 2013

The age of Climategate is almost over

It seems as if the age of Climategate is (almost) over. Below you can see the number of Alexa users that visited Watts Up With That? At the end of 2009 you see a jump upwards. That is when Anthony Watts made his claim to fame by violating the privacy of climate scientist Phil Jones of the Climate Research Unit (CRU) and some of his colleagues.

Criminals broke into the CRU backup servers and stole and published their email correspondence. What was Phil Jones' crime? The reason why manners and constitutional rights were suddenly unimportant? The reason it was acceptable to damage his professional network? He is a climate scientist!

According to Watts and co, the emails showed deliberate deception. However, there have been several investigations into Climategate, none of which found evidence of fraud or scientific misconduct. It would thus be more appropriate to rename Climategate to Scepticgate. And it is a good sign that this post-normal age is (almost) over and that the number of visitors to WUWT is going back to its level before Climategate.

Since the beginning of 2012, the number of readers of WUWT has been in steady decline. It is an interesting coincidence that I started commenting there once in a while in February 2012. Unfortunately for the narcissistic part of my personality: correlation is not causation.

The peak in mid-2012 is Anthony Watts' first, failed attempt at writing a scientific study.

According to WUWT's Year in review (WordPress statistics), WUWT was viewed about 31,000,000 times in 2011 and 36,000,000 times in 2012. However, a large part of the visitors to my blog are robots, and that problem is likely worse here than for my little-read German-language blog. Alexa more likely counts only real visitors.


Sunday, 28 April 2013

The value of peer review for science and the press

The value of peer review keeps on producing heated debates. An interesting example was the weekend that physics professor Richard Muller wrote an op-ed in the New York Times. Some claim that Anthony Watts halted his blog for two days and released a scientific manuscript and an accompanying press release that same weekend to steal attention away from Muller's op-ed. Both the op-ed and the press release were about scientific claims that had not passed peer review. Thus the Washington Post asked: is it okay to seek publicity for a work that is not peer reviewed?
Watts et al. manuscript

The eventful weekend at the end of July 2012 resulted in two worthwhile blog posts in the New York Times (Andrew C. Revkin at dotEarth) and the Washington Post (Jason Samenow).

The manuscript was clearly released prematurely and had serious methodological problems. A few days after the press release and the blog reviews, Anthony Watts still wrote: "I’m hoping to post up a revised draft, addressing many of those comments and corrections in the next day or two." And he opened a "work page" for the manuscript, which is so quiet you can hear crickets. Just when no one expected it any more, the zombie manuscript came back from the undead; this March Watts wrote about this manuscript: "we are preparing a paper for submission".

I am not a native speaker. May I ask: if you write "we are preparing", that indicates an ongoing action, right? Is there any lower limit on the intensity of this action?

The other side of the question about seeking the press before peer review is: should a journalist only write about peer-reviewed studies? Further questions that have come up since are: Is it unscientific to cite non-reviewed studies? Should the IPCC limit itself to reviewing only the peer-reviewed literature? Is peer review gatekeeping? Is peer review necessary?

As so often, the context is important. What the value of peer review is depends on who you are: an expert or not, a journalist or a newspaper reader? Another important part of the context is how controversial the finding is.

The Value of Peer Review for Science

Peer review gives an article credibility. As such, peer review is "just" a filter; it does not guarantee that an article is right. Many peer-reviewed articles contain errors, and many ideas outside of the peer-reviewed literature are worthwhile. However, on average the quality of peer-reviewed work is better. Thus peer-reviewed work is more likely to be worthy of your attention.

If you are a scientist and an idea or study is about something you are knowledgeable about, there is no reason to limit yourself exclusively to peer-reviewed articles, but it is smart to prefer them. A scientist will use peer review only to preselect, because you simply cannot read and check everything. Life is short and attention a very limited resource. I also see no problem in citing studies that are not peer-reviewed, whether scientific reports or conference contributions. I do feel that by citing such studies you give them some of your reputation; you become partially a reviewer and should read them as carefully as a reviewer would.

Peer review is far from perfect. It is not intended to and cannot prevent fraud. Some bad papers will get through and some good ones will be rejected. This can be annoying for the scientists involved, but given that peer review is just a filter, it is not that bad for science in general. It is only since the Second World War that peer review has become the standard; we had scientific progress before that time, too.

Friday, 29 March 2013

Special issue on homogenisation of climate series

The open access Quarterly Journal of the Hungarian Meteorological Service "Időjárás" has just published a special issue on homogenization of climate records. This special issue contains eight research papers. It is an offspring of the COST Action HOME: Advances in homogenization methods of climate series: an integrated approach (COST-ES0601).

To be able to discuss eight papers, this post does not contain as much background information as usual and is aimed at readers already knowledgeable about the homogenization of climate networks.

Contents

Mónika Lakatos and Tamás Szentimrey: Editorial.
The editorial explains the background of this special issue: the importance of homogenisation and the COST Action HOME. Mónika and Tamás, thank you very much for your efforts to organise this special issue. I think every reader will agree that it has become a valuable journal issue.

Monthly data

Ralf Lindau and Victor Venema: On the multiple breakpoint problem and the number of significant breaks in homogenization of climate records.
My article with Ralf Lindau is already discussed in a previous post on the multiple breakpoint problem.
José A. Guijarro: Climatological series shift test comparison on running windows.
Longer time series typically contain more than one inhomogeneity, but statistical tests are mostly designed to detect one break. One way to resolve this conflict is to apply these tests on short moving windows. José compares six statistical detection methods (t-test, Standard Normal Homogeneity Test (SNHT), two-phase regression (TPR), Wilcoxon-Mann-Whitney test, Durbin-Watson test and SRMD: squared relative mean difference), which are applied on running windows with a length between 1 and 5 years (12 to 60 values (months) on either side of the potential break). The smart trick of the article is that all methods are calibrated to a false alarm rate of 1% for better comparison. In this way, he can show that the t-test, SNHT and SRMD are best for this problem and almost identical. To get good detection rates, the window needs to span at least 2 × 3 years, that is, three years on either side of the potential break. As this harbours the risk of having two breaks in one window, José has decided to change his homogenization method CLIMATOL to use the semi-hierarchical scheme of SNHT instead of windows. The methods are tested on data with just one break; it would have been interesting to also simulate the more realistic case with multiple independent breaks.
Olivier Mestre, Peter Domonkos, Franck Picard, Ingeborg Auer, Stéphane Robin, Emilie Lebarbier, Reinhard Böhm, Enric Aguilar, Jose Guijarro, Gregor Vertachnik, Matija Klančar, Brigitte Dubuisson, and Petr Stepanek: HOMER: a homogenization software – methods and applications.
HOMER is a new homogenization method and is developed using the best methods tested on the HOME benchmark. Thus theoretically, this should be the best method currently available. Still, sometimes interactions between parts of an algorithm can lead to unexpected results. It would be great if someone would test HOMER on the HOME benchmark dataset, so that we can compare its performance with the other algorithms.

Sunday, 24 March 2013

New article on the multiple breakpoint problem in homogenization

An interesting paper by Ralf Lindau and me on the multiple breakpoint problem has just appeared in a Special issue on homogenization of the open access Quarterly Journal of the Hungarian Meteorological Service "Időjárás".

Multiple break point problem

Long instrumental time series contain non-climatological changes, called inhomogeneities, for example because of relocations or changes in the instrumentation. To study real changes in the climate more accurately, these inhomogeneities need to be detected and removed in a data-processing step called homogenization (in statistics also called segmentation).

Statisticians have worked a lot on the detection of a single break point in data. Unfortunately, however, long climate time series typically contain more than just one break point. There are two ad hoc methods to deal with this.

The most used method is the hierarchical one: first detect the largest break, then redo the detection on the two subsections, and so on until no more breaks are found or the segments become too short. A variant is the semi-hierarchical method, in which previously detected breaks are retested and removed if they are no longer significant. For example, SNHT uses a semi-hierarchical scheme, and thus so does the pairwise homogenization algorithm of NOAA, which uses SNHT for detection.
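As a rough illustration, the hierarchical scheme can be sketched in a few lines of Python. The function names and the simple t-type statistic are my own choices for this sketch, not the actual SNHT implementation:

```python
import numpy as np

def best_break(x, min_seg=5):
    """Find the single most likely break: the split position that maximizes
    a t-type statistic comparing the means of the two subsections."""
    best_i, best_t = None, 0.0
    for i in range(min_seg, len(x) - min_seg + 1):
        a, b = x[:i], x[i:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        t = abs(a.mean() - b.mean()) / se
        if t > best_t:
            best_i, best_t = i, t
    return best_i, best_t

def hierarchical_breaks(x, threshold, min_len=10, offset=0):
    """Detect the largest break first, then recurse on both subsections
    until no significant break is found or the segments get too short."""
    if len(x) < min_len:
        return []
    i, t = best_break(x)
    if i is None or t < threshold:
        return []
    return (hierarchical_breaks(x[:i], threshold, min_len, offset)
            + [offset + i]
            + hierarchical_breaks(x[i:], threshold, min_len, offset + i))
```

On a series with two large jumps, the full-series scan should first find one of the two breaks and the recursion on the subsections should then recover the other; a real implementation would use a properly calibrated test statistic instead of a fixed threshold.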

The second ad hoc method is to detect the breaks on a moving window. This window should be long enough for sensitivity, but should not be too long because that increases the chance of two breaks in the window. In the Special issue there is an article by José A. Guijarro on this method, which is used for his homogenization method CLIMATOL.
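A minimal sketch of the moving-window approach in Python, including the trick from José Guijarro's article of calibrating the test to a fixed false-alarm rate by Monte Carlo. The function names and details are my own illustration, not CLIMATOL's code:

```python
import numpy as np

def window_tstat(x, i, half=36):
    """t-type statistic comparing the two window halves around position i."""
    a, b = x[i - half:i], x[i:i + half]
    se = np.sqrt(a.var(ddof=1) / half + b.var(ddof=1) / half)
    return abs(a.mean() - b.mean()) / se

def calibrate_threshold(half=36, alpha=0.01, n_sim=5000, seed=0):
    """Monte Carlo threshold such that pure white noise exceeds it
    in a fraction alpha of windows (here alpha = 1%)."""
    rng = np.random.default_rng(seed)
    stats = [window_tstat(rng.standard_normal(2 * half), half, half)
             for _ in range(n_sim)]
    return float(np.quantile(stats, 1 - alpha))

def moving_window_breaks(x, half=36, threshold=3.0):
    """Flag all positions where the windowed statistic exceeds the threshold.
    In practice neighbouring flags would be merged into a single break."""
    return [i for i in range(half, len(x) - half)
            if window_tstat(x, i, half) > threshold]
```

With a three-year window of monthly data (half = 36), the calibrated threshold comes out close to the 1% critical value of the two-sample t-test, as it should.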

While these two ad hoc methods work reasonably well, detecting all breaks simultaneously is more powerful. This can be performed as an exhaustive search over all possible combinations (used by the homogenization method MASH). With on average one break per 15 to 20 years, the number of breaks, and thus of combinations, can get very large. Modern homogenization methods consequently use an optimization technique called dynamic programming (used by the homogenization methods PRODIGE, ACMANT and HOMER).
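The dynamic-programming idea can be sketched as follows: precompute the cost of every possible segment (here the sum of squared deviations from the segment mean), then build the best partition into s segments from the best partitions into s − 1 segments. This is a generic textbook illustration, not the code of PRODIGE, ACMANT or HOMER:

```python
import numpy as np

def segment_cost(x):
    """Cost of every segment x[i:j]: sum of squared deviations from its mean,
    computed from cumulative sums."""
    n = len(x)
    cum = np.concatenate([[0.0], np.cumsum(x)])
    cum2 = np.concatenate([[0.0], np.cumsum(x ** 2)])
    cost = np.full((n + 1, n + 1), np.inf)
    for i in range(n):
        for j in range(i + 1, n + 1):
            s, s2, m = cum[j] - cum[i], cum2[j] - cum2[i], j - i
            cost[i, j] = s2 - s * s / m
    return cost

def optimal_breaks(x, k):
    """Positions of the k breaks minimizing the total within-segment
    sum of squares, found simultaneously by dynamic programming."""
    n = len(x)
    cost = segment_cost(x)
    # best[s, j]: minimal cost of fitting s segments to x[:j]
    best = np.full((k + 2, n + 1), np.inf)
    back = np.zeros((k + 2, n + 1), dtype=int)
    best[1] = cost[0]
    for s in range(2, k + 2):
        for j in range(s, n + 1):
            cands = best[s - 1, :j] + cost[:j, j]
            back[s, j] = int(np.argmin(cands))
            best[s, j] = cands[back[s, j]]
    # backtrack the break positions from the k+1-segment solution
    breaks, j = [], n
    for s in range(k + 1, 1, -1):
        j = back[s, j]
        breaks.append(j)
    return sorted(breaks)
```

The number of segments (or a penalty per break) still has to be chosen; the combinatorial explosion of testing every break combination is avoided because each stage reuses the optimal sub-partitions of the previous stage.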

All the mentioned homogenization methods have been compared with each other on a realistic benchmark dataset by the COST Action HOME. In the corresponding article (Venema et al., 2012) you can find references to all the mentioned methods. The results of this benchmarking showed that the multiple breakpoint methods were clearly the best. However, this is not only because of the elegant solution to the multiple breakpoint problem; these methods also had other advantages.