Tuesday, 20 June 2017

On the recent warming surge

"Incidentally, when in the journal Science in 2007 we pointed to the exceptionally large warming trend of the preceding 16 years, which was at the upper end of the [climate] model range, nobody cared, because there is no powerful lobby trying to exaggerate global warming."

"And of course in our paper we named natural intrinsic variability as the most likely reason. But when a trend at the lower end of the model range occurs it suddenly became a big issue of public debate, because that was pushed by the fossil fuel climate sceptics’ lobby. There is an interesting double standard there."

Someone on Reddit was making exactly the same errors as the fans of the infamous "hiatus", arguing that global warming since 2011 is exploding and we're soon gonna die. Maybe the comment has been deleted in shame; at least I cannot find it any more. Deleted comments can be freely paraphrased.

The data

So let's have a look at what the data actually says. Below are warming curves of two surface temperature datasets and two upper air temperature satellite retrievals. I shortened the warming curve of Berkeley Earth to match its period to that of GISTEMP. All the other datasets are shown over their full lengths. Taking this step back and looking at the overview, there clearly is no "hiatus" and no "warming surge".

The red dots are the annual temperature anomalies, which are connected by a thin grey line. The long-term trend is plotted by a thick blue line, which is a [[LOESS]] estimate.
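The figures in this post were made in R, whose loess() function adds local quadratic fits and robustness iterations; but the core idea of a LOESS estimate is small enough to sketch. Everything below is illustrative: a hand-rolled degree-1 smoother applied to an invented smooth "warming" curve, not to the real datasets.

```python
import math

def loess_point(x, y, x0, span=0.5):
    """Degree-1 LOESS: tricube-weighted local linear fit, evaluated at x0."""
    n = len(x)
    k = max(2, math.ceil(span * n))               # number of points in the local window
    d = sorted(abs(xi - x0) for xi in x)[k - 1]   # bandwidth: distance to k-th nearest point
    w = [(1 - (abs(xi - x0) / d) ** 3) ** 3 if abs(xi - x0) < d else 0.0
         for xi in x]
    # weighted least squares for a local line y = a + b*(x - x0); a is the fit at x0
    sw  = sum(w)
    sx  = sum(wi * (xi - x0) for wi, xi in zip(w, x))
    sy  = sum(wi * yi for wi, yi in zip(w, y))
    sxx = sum(wi * (xi - x0) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - x0) * yi for wi, xi, yi in zip(w, x, y))
    return (sy * sxx - sx * sxy) / (sw * sxx - sx * sx)

# invented smooth "warming" curve (°C), purely for illustration
years = list(range(1880, 2017))
temps = [0.00005 * (yr - 1880) ** 2 for yr in years]
trend_1950 = loess_point(years, temps, 1950)      # close to the true value of 0.245
```

Sliding x0 over all years traces out the thick blue line; the span controls how much of the short-term variability is smoothed away.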

If you want to see the "hiatus" you have to think the last two data points away and start in the temperature peak of 1998. Don't worry, I'll wait while you search for it.

A hiatus state of mind

After seeing reality, let's now get into the mindset of a climate "sceptic" claiming to have evidence for a "hiatus", but do this differently by looking at the data since 2011.

Naturally we only plot the recent part of the data, so that context is lost. I am sure the climate "sceptics" do not mind, they also prefer to make their "hiatus" plots start around the huge 1998 El Nino warming peak and not show the fast warming before 1998 for context.

The thick blue lines are quadratic functions fitted to the data. They fit snugly. As you can see both the linear and quadratic coefficients are statistically significant. So clearly the recent warming is very fast, faster than linear and we can soon expect to cross the 2 °C limit, right?
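To see what that naive test looks like, here is a sketch of the fit. The anomalies below are invented round numbers that merely resemble recent global means; the point is only the mechanics of fitting y = a + b·t + c·t² by least squares and reading off the t statistics.

```python
def gauss_jordan_inverse(A):
    """Invert a small matrix by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [row[n:] for row in M]

# invented annual anomalies (°C) for 2011-2016, not the actual GISTEMP values
t = [0, 1, 2, 3, 4, 5]
y = [0.61, 0.65, 0.74, 0.86, 0.99, 1.18]

X = [[1.0, ti, ti * ti] for ti in t]                       # columns: 1, t, t^2
XtX = [[sum(X[i][a] * X[i][b] for i in range(6)) for b in range(3)] for a in range(3)]
Xty = [sum(X[i][a] * y[i] for i in range(6)) for a in range(3)]
C = gauss_jordan_inverse(XtX)                              # (X'X)^-1, the covariance factor
beta = [sum(C[a][b] * Xty[b] for b in range(3)) for a in range(3)]

resid = [yi - sum(Xi[a] * beta[a] for a in range(3)) for Xi, yi in zip(X, y)]
s2 = sum(r * r for r in resid) / (6 - 3)                   # residual variance, 3 dof
t_lin  = beta[1] / (s2 * C[1][1]) ** 0.5                   # t statistic of the linear term
t_quad = beta[2] / (s2 * C[2][2]) ** 0.5                   # t statistic of the quadratic term
# with these numbers both exceed the 5% two-sided critical value of ~3.18 for 3 dof
```

That "significance" is exactly the trap: the test knows nothing about the period having been picked because it looked dramatic.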

I am sure the climate "sceptics" do not mind what I did. They also cherry picked a period for their "hiatus" and applied a naive statistical test as if they had not cherry picked the period at all. The climate "sceptics" agreeing with Christopher Monckton doing this on WUWT will surely not object to our Redditor doing the same and will conclude with him that the end is nigh.

By the way, I also cherry picked the dataset. The curves of the upper air temperature retrievals are not as smooth and their quadratic terms were not statistically significant. But in the case of the "hiatus" debate our "sceptical" friends also only showed data for the datasets showing the least warming. So I am sure they do not object to this now.

If you look at the full period you can see that the variability of the temperature signal is much larger than the variability around the quadratic fit. It is thus clearly a complete coincidence that the curve is so smooth. But, well, the "hiatus" proponents also just look statistically at their cherry picked period and ignore the actual uncertainties (including slowly varying ones).

Some more knowledgeable catastrophists may worry that it looks as if 2017 may not be another record scorching year. No worries. Also no worries if 2018 is colder again. The Global Warming Policy Foundation thinks it is perfectly acceptable to ignore a few politically inconvenient years and claims that we just have to think 2015 and 2016 away and that thus the death of the "hiatus" has been greatly exaggerated. I kid you not. I have not seen any climate "sceptic" or any "luckwarmer" complaining about this novel statistical analysis method, so surely our Redditor can do the same. Catastrophic warming surge claims are safe till at least 2019.

Don't pick cherries

Back to reality. What to do against cherry picking periods? My advice would be: don't cherry pick periods. Not for the global mean temperature, not for the temperature of the Antarctic Peninsula, not for any other climate variable. Just don't do it.

If you have a physical reason to expect a trend change, by all means use that date as start of the period to compute a trend. But for 1998 or 2011 there is no reason to expect a trend change.

Our cheerful Redditor even had a bit more physics in his claim. He said that the Arctic was warming and releasing more carbon dioxide and methane. However, these emissions are not large enough to make the increase in global greenhouse gas concentrations speed up, and it is those concentrations that count. From our good-natured climate "sceptics" I have never heard a physical explanation of why global warming would have stopped in 1998. But maybe I have missed their interest in what happens in the climate system.

If you have no reason to expect a trend change, the appropriate test is one for a trend change at an unknown date. Such a test "knows" that cherry picked periods can have hugely different trends and thus correctly only sees larger trend changes over longer periods as statistically significant. Applied to the temperature data, such a test finds neither a "hiatus" nor a "warming surge".
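I cannot reproduce a full changepoint analysis here, but a toy Monte Carlo (all numbers invented) shows why such a test must set the bar higher: even in purely trendless noise, the largest trend you can find by freely choosing the start year is typically an order of magnitude larger than the trend over the full period.

```python
import random
import statistics

def ols_slope(y):
    """Ordinary least squares slope of y against 0, 1, 2, ..."""
    n = len(y)
    tbar = (n - 1) / 2
    ybar = sum(y) / n
    num = sum((t - tbar) * (yt - ybar) for t, yt in enumerate(y))
    den = sum((t - tbar) ** 2 for t in range(n))
    return num / den

random.seed(0)
full, picked = [], []
for _ in range(500):
    noise = [random.gauss(0.0, 0.1) for _ in range(40)]   # 40 trendless "years"
    full.append(abs(ols_slope(noise)))
    # cherry-pick: largest |trend| over all periods of >= 10 years ending "now"
    picked.append(max(abs(ols_slope(noise[s:])) for s in range(31)))

ratio = statistics.median(picked) / statistics.median(full)
# the freely picked trend is typically an order of magnitude larger
```

A test for a trend change at an unknown date implicitly compares against the "picked" distribution, not the "full" one, which is why short cherry picked trends fail it.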

This test gave the right result during the entire "hiatus" madness period. Don't fool yourself. Use good stats.

Related reading

Cranberry picking short-term temperature trends

Statistically significant trends - Short-term temperature trend are more uncertain than you probably think

How can the pause be both ‘false’ and caused by something?

Atmospheric warming hiatus: The peculiar debate about the 2% of the 2%

Sunday, 11 June 2017

My unsolicited advice for a UK government

After my spectacular success as UK election pollster, let me try my luck as UK political adviser, offering an outsider's continental perspective from a citizen of a country where coalition governments are normal.

The electoral results could not have been more complicated. The Conservatives lost their majority. Against British custom, collaboration is necessary.

Conservative Prime Minister Theresa May has proposed to govern together with the [[Democratic Unionist Party]] (DUP), a small, deeply conservative Northern Irish Protestant party with ties to terrorists after decades of religious terrorism.

That coalition has a majority of only 2 seats.

Other possible Conservative coalitions would be with the Liberal Democrats, with the Scottish National Party, and the taboo option with Labour. Conversely, the only realistic coalition for Labour would be with the Conservatives.

A Tory-DUP coalition is highly problematic. The DUP will have to get something in return for their collaboration. Typically a small coalition partner needs to get a lot more than their vote share to make it worthwhile and to survive the next election. Given how extreme the DUP is, a majority of only 2 seats will often mean that more moderate Conservatives will not be willing to give them what they bargained for and need.

The UK government is, furthermore, supposed to be neutral in Northern Irish problems. I have the impression that the UK does not fully realise that Northern Ireland will also be an extremely difficult problem in the Brexit negotiations. If the UK really wants a hard border, this would cut Northern Ireland off from the rest of Ireland, which will cause economic problems for everyone and will make it harder (for Catholics) to visit family and friends.

The DUP would at least make sure this problem is not ignored until it is too late, but their role would be far from neutral.

h/t Simon Noone‏

The two-seat majority will likely dissipate quite quickly in future by-elections. That could lead to another election in the middle of the Brexit negotiations.

The Brexit clock is ticking and the UK will have to make decades' worth of political decisions in about one year's time. I do not see that happening with a two-seat majority. It would have been a herculean task even with a large majority and a capable prime minister.

More logical coalition partners for the Conservatives would be the Liberal Democrats and the Scottish National Party. However, neither wants to be an add-on to a dominant Conservative party, especially after the bad experiences the Liberal Democrats had in their recent coalition with the Conservatives. Still, given the situation, I feel these parties should give this option more thought (after the disposal of May).

For the UK it would be highly unorthodox, but I see a coalition of the Conservatives and Labour as the best option, possibly together with the Liberal Democrats and the Scottish National Party. In Germany and The Netherlands we have such coalitions of conservatives and social democrats more often, although only as coalitions of last resort. In Switzerland the government is always made up of all parties governing in partnership.

Not being used to coalition governments, the UK lacks a culture of adult behaviour and professional political debate, especially between the Tories and Labour. Maybe a neutral Prime Minister (of the Liberal Democrats) could help to hold a grand coalition together. I realise that that makes the coalition even more unorthodox. But these are also not normal times.

This would be nothing for a non-communicative prime minister with a brittle personality and a penchant for kabuki theatre. Boris Johnson would also be an utter catastrophe; I hope this does not need explaining, but he may be a useful tool for regicide.

The kind of personality able to lead a grand coalition would also be the best personality for the Brexit negotiations with the EU. Many in the UK seem to see the EU as the enemy who is now laughing at UK's misery. On Twitter someone asked for a German word for the current predicament. The most offered was "Schadenfreude". I would say "Mitleid" (sympathy), "Verunsicherung" (confusion) and "Beunruhigung" (worry).

The UK gutter press likes to cite anonymous antagonistic EU assholes. Anyone on Twitter will be able to confirm that anonymous assholes are never in short supply. But the fact that a few percent of humanity deserves that title does not change geography: Europe and the UK will always be neighbours. People in England often seem to forget that we are even neighbours with a land border.

The EU is interested in friendly relations with its neighbours. A concept that may be difficult to comprehend for (formerly) fascist rags. We are grateful for Monty Python, British expertise on the weather, and for the liberation from fascist occupation. A Russian liberation would have been much less pleasant. Just like young Brits got to know other Europeans as mostly nice people with shared values, Europeans appreciate the UK. One day these young people may want to join their EU friends again.

No deal or a bad deal is damaging for the UK, and thus also unfavourable for the EU. A no-deal Brexit in no way fits the nearly 50/50 outcome of the referendum.

Thus negotiations based on a common appreciation of long-term friendly relations would give the best results for all. There are so many topics that need to be negotiated that kabuki theatre on Gibraltar or holding EU citizens in the UK hostage is an enormous distraction. Next to the EU negotiations, the UK will have to negotiate trade relations with dozens of countries and trading blocs. The UK will have to build up institutions on the EU borders and institutions for the safety of drugs, chemical consumer products, banking products, etc.

Before the Brexit referendum we had ludicrous have-your-cake-and-eat-it propaganda. We are now nearly a year after the referendum and it is about time to have a public discussion about what Brexit means. It is unfortunately only a slight exaggeration that this discussion has been limited to "Brexit is Brexit". It is not.

It beats me why the UK also wants to leave [[Euratom]], which is not part of the EU. If necessary, that could have been done later. Euratom does not only deal with nuclear power, but also with nuclear medicine. Does the UK want the European Centre for Medium-Range Weather Forecasts, [[ECMWF]], to stay in Reading? It is not officially part of the EU, but has many of the same member states.

Does the UK want to keep student exchanges with the continent, [[Erasmus]]? Does the UK want to stay part of the European research community? If yes, how? There are programmes for applied joint European research, [[Horizon 2020]], for fundamental science, [[ERC]], and for scientific coordination, [[COST]]. The UK could pay into those programmes. Switzerland pays its researchers directly if they take part in such a project and in the case of COST even gives them some bonus funding for additional research to stimulate participation.

Those are just the examples from my field, science. Similarly hard choices will have to be made on many other topics. Personally I would suggest changing as little as possible and (initially) contributing to EU programmes. Just the minimal Brexit programme, the rights of EU citizens currently in the UK and UK citizens currently in the EU (residence, work, healthcare, benefits, pensions, voting rights), plus trade and borders (seas, fishing, exploration, Northern Ireland, Gibraltar), will be tough to get done in about a year. Loosening the ties on all the other topics can be done later.

That requires a friendly and communicative leader and government. Clearly Theresa May should go and Boris Johnson is not an alternative. A grand coalition may be the best way to get a concrete national debate going and to have fruitful negotiations.

[UPDATE. Oh my. When even a reasonable guy like John Oliver buys into the argument that the EU will be extra hard on the UK to scare others from leaving. Because everyone wants to be a member of a union that has no upsides, but will beat you up when you leave and diligently works on making it worse to leave.

The EU exists because it has benefits for its members. The UK may not be able to see this after decades of propaganda, but the other EU states surely do. Leaving the EU will mean losing these benefits. No need for additional punishments. Except if you think it is a malicious punishment not to be in the applied research programme Horizon 2020 when not paying for it, or not to get the discounts of a customer loyalty card without having the card.

Otherwise an excellent video, with a guest appearance by Lord Buckethead.]


Related reading

Political support grows for cross-party approach to Brexit negotiations. Theresa May under pressure from across political spectrum to build plural coalition – including Jeremy Corbyn – before EU talks.

Der Spiegel looks at the years ahead: A Wave of Anger Crashes over Britain - Brexit Is Dead.

New York Review of Books: Britain: The End of a Fantasy

Desmog UK: Let's Take A Closer Look at the DUP's Climate Science Denial

Tuesday, 6 June 2017

Comrade Trend predicts the UK general election outcome

Poll whisperer Nate Silver himself just predicted that in the UK elections the conservatives would win by 7 %. My prediction is 2 %.

The prediction of Silver is a simple average of the last polls of the ten main polling organisations. He uses the same data as I do, kindly gathered by the volunteers of Wikipedia, who also made the plot below.

The statistics normally used to estimate the outcome of elections assumes that there is a fixed value and that the surveys are noise around this value. In that case you have to average (in a smart way) to remove the noise of the surveys and get a better estimate of the fixed value.

Wikipedia uses the same assumption to compute the curve of the running average of all polls and takes the average over the 10 most recent polls. The distance between the curves of the Conservatives and Labour in the Wikipedia graph above is about 8 %.

Comrade Trend

However, if the polls are moving, they typically keep moving in the same direction. Germans call this Genosse Trend, Comrade Trend. If you are going up in the polls, Comrade Trend is a great friend.

This could be because it takes time until "everyone" has heard the news; only a small part of the population are news junkies. You may also want to wait and see whether refuting information comes in some time later or whether a campaign changes its position, or you may want to talk about the news with your family and friends.

It takes some time until people have found time to read the manifestos (Conservative | Labour). Hopefully many do; there is a real choice this time. Also the strengths and weaknesses of the campaigns mostly stay the same during the campaign. Each time Theresa May openly refuses to answer basic policy questions she loses voters, at least voters like me.

Comrade Trend seems like a good assumption to me. The polls in the UK move so fast that the difference in assumptions really matters this time. The smoothing method [[LOESS]] estimates the trend at a certain time to make the best estimate at that time. It fits a line to the data over a subperiod. Estimating a fixed value would correspond to assuming the line is horizontal over this subperiod. LOESS thus takes Comrade Trend into account. It gives the figure below.

The predictions for election day, this Thursday, were made by assuming a linear trend for the period since the 1st of May.
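A toy calculation with invented, noise-free poll leads (chosen only to resemble a narrowing race) shows how the two assumptions diverge: the average of the last ten polls lags behind, while a fitted line extrapolated to election day does not.

```python
# day 0 = 1 May; day 38 = election day (8 June); hypothetical daily polls up to day 37
days = list(range(38))
lead = [18.0 - 0.42 * d for d in days]      # Conservative lead narrowing linearly

# "fixed value plus noise" view: average the 10 most recent polls
avg_last10 = sum(lead[-10:]) / 10           # lags behind: 4.35

# "Comrade Trend" view: least-squares line, extrapolated to election day
n = len(days)
dbar = sum(days) / n
lbar = sum(lead) / n
slope = sum((d - dbar) * (l - lbar) for d, l in zip(days, lead)) \
        / sum((d - dbar) ** 2 for d in days)
intercept = lbar - slope * dbar
prediction = intercept + slope * 38         # recovers the trend: 2.04
```

With real, noisy polls LOESS does this locally instead of with one global line, but the contrast between the two assumptions is the same.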

Election turnout

An important reason that it is hard for polling to predict the outcome of an election is that it is not known who will actually get off the couch and vote. The figure/tweet below shows for one of the polls how much difference various turnout assumptions make. The spread in the polls is very large this election. That may well be because turnout is very important this time. Older people tend to favour the Conservatives and faithfully show up. The question we cannot answer until election day is how many young people will show up.

Polling bias

Then there is the question whether polls are not just noisy estimates, but whether they are also biased relative to the election. Nate Silver writes:
Exactly how strong the Conservative tendency to outperform their polls has been depends on where you measure from. Since 1992, Conservatives have beaten their final polling margin over Labour by an average of 4.5 percentage points, and have done so in all but one election. (That was 2010, when both Conservatives and Labour gained ground as Liberal Democrats’ support collapsed, but Labour slightly outperformed its polling margin against the Tories.) Go all the way back to 1945, however, and the average Conservative overperformance is just 1.8 percentage points and is not statistically significant.
I prefer to look at all the data and would say that the bias is not statistically significant. It would be weird for the bias to have become worse, except maybe for really recent elections where parts of the population can no longer be reached with landline telephones. That you can find a period with a higher bias may well be cherry picking. As long as no one can provide a reason for a change in bias since 1992, picking a specific period after looking at the data is statistically suspect.


According to Nate Silver UK polling tends to have a relatively high uncertainty and misses the election outcome typically by about 4 %. Estimating a trend is harder than estimating a mean, thus it could be that my 2 % prediction is even a bit more uncertain. Thus if my best estimate is right and the conservatives are only 2 % ahead the election is a toss up: the difference is less than the uncertainty.
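To put "toss up" in numbers: if the true lead is normally distributed around my 2 % estimate with a standard deviation of about 4.5 % (a round number based on the typical polling miss quoted above), the implied probability of a Conservative win is only modestly better than a coin flip.

```python
import math

lead = 2.0     # my estimated Conservative lead over Labour, percentage points (assumed)
error = 4.5    # rough standard deviation of the polling miss (assumed round number)

# probability that the true lead is positive, under a normal distribution
p_con_ahead = 0.5 * (1.0 + math.erf(lead / (error * math.sqrt(2.0))))
# roughly two chances in three, i.e. far from certain
```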

Let's see who turns out.

[UPDATE. If you put in my numbers for the conservatives and Labour and add 8 % for the Liberal Democrats, 4 % for UKIP and 3 % for the Greens (estimated from Wikipedia graph), the Electoral Calculus app computes the following seat distribution:

National Prediction: Conservative majority 4

Party    2015 Votes   2015 Seats   Pred Votes   Gains   Losses   Net Change   Pred Seats
CON         37.8%        331          40.0%        6      10         -4          327
LAB         31.2%        232          38.1%       14       1        +13          245
LIB          8.1%          8           7.6%        0       5         -5            3
UKIP        12.9%          1           3.8%        0       1         -1            0
Green        3.8%          1           2.9%        0       0         +0            1
SNP          4.9%         56           4.7%        0       1         -1           55
PlaidC       0.6%          3           0.5%        0       2         -2            1
Minor        0.8%          0           2.5%        0       0         +0            0
N.Ire          -          18             -         0       0         +0           18

[UPDATE. Just before the polls close, let me publish my last updated graph here. Currently the Tories are 4.2 % ahead. That is still within the uncertainty and while it is quite likely that the Conservatives will win, there is still a Trump chance that Labour will win. Let's see who shows up to vote.]


[POST ELECTION UPDATE. One of the best polls was by YouGov. Good with respect to statistical methodology and accuracy in this election. Interestingly, they did not find a steep trend towards Labour, but had a stable 3% lead for the conservatives since 27 May. Thus Comrade Trend may be more the inertia of the other polling organisations than that of the public.]

Related reading

YouGov, the heavily criticized poll that got it right: UK election: The day after. "But the general picture is clear: the model was a huge success in an election which most politicians, pollsters and commentators got badly wrong."

Nate Silver at 538: Are The U.K. Polls Skewed?

Response to Nate Silver, part one (because it’s early!) This post warns that the averaging method in Nate Silver's post is very basic and not comparable to the advanced methods he uses for US polls.

UK polling report: How the polls have changed since 2015

* The code used to generate my prediction plot is on Github.

Tuesday, 30 May 2017

The IPCC underestimates global warming

A month ago the New York Times insulted its subscribers with a climate change column by Bret Stephens, their new hire from the Wall Street Journal. The bizarre text was mostly a sequence of claims that did not follow from the arguments presented.

The column also contained one fact. Which was wrong and later corrected. Stephens claimed:
Anyone who has read the 2014 report of the Intergovernmental Panel on Climate Change knows that, while the modest (0.85 degrees Celsius, or about 1.5 degrees Fahrenheit) warming of the Northern Hemisphere since 1880 is indisputable, as is the human influence on that warming, much else that passes as accepted fact is really a matter of probabilities.
As a dutiful watcher of Potholer54, which any real skeptic should be, you know that it is a good idea to check the source and Stephens helpfully provided a link to the Summary for Policymakers of the 5th assessment synthesis report of the Intergovernmental Panel on Climate Change (IPCC). This summary mentions the number "0.85" in the sentence:
The globally averaged combined land and ocean surface temperature data as calculated by a linear trend show a warming of 0.85 [0.65 to 1.06] °C over the period 1880 to 2012, when multiple independently produced datasets exist (Figure SPM.1a). {1.1.1, Figure 1.1}

Figure SPM.1a. Annually and globally averaged combined land and ocean surface temperature anomalies relative to the average over the period 1986 to 2005. Colours indicate different data sets.

Thus Stephens confused the global temperature with the temperature of the Northern Hemisphere. Not a biggy, but indicative of the quality of Stephens' writing.

A related weird claim is that the "warming of the earth since 1880 is indisputable, as is the human influence on that warming, much else that passes as accepted fact is really a matter of probabilities."

As quoted above the warming since 1880 is not exactly known, but probably between 0.65 and 1.06 °C. That it was warming is so certain that a journalist may call it "indisputable". There is thus no conflict between probabilities and certainty. In fact they go hand in hand. When scientists talk about uncertainties, they are quantifying how certain we are.

However, I did not want to attack the soft target Bret Stephens. The hard target IPCC is much more interesting. They put some thought into their writing. More precisely, I have problems when they write in the summary for policy makers: "temperature data as calculated by a linear trend show a warming of 0.85". That means that they fitted a linear function to the data — using [[least squares regression]] — and used this trend and the length of the period to estimate the total warming over this period.

This is a problem because calculating the total amount of warming using a linear trend underestimates global warming.* I show this below for two global temperature datasets by comparing the linear warming estimate with a nonlinear (LOESS) warming estimate. The linear estimate is smaller: For NASA's GISTEMP it is 0.05 °C smaller and for Berkeley Earth it is 0.1 °C smaller.
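How large can this bias get? A toy curve makes it concrete. The series below is an invented, idealised accelerating warming with exactly 1 °C of total change; the linear-trend estimate recovers only about 0.9 °C of it.

```python
def linear_change_estimate(y):
    """Total change implied by the OLS linear trend: slope times record length."""
    n = len(y)
    tbar = (n - 1) / 2
    ybar = sum(y) / n
    slope = sum((t - tbar) * (yt - ybar) for t, yt in enumerate(y)) \
            / sum((t - tbar) ** 2 for t in range(n))
    return slope * (n - 1)

# idealised accelerating warming over 1880-2012, exactly 1 degree C in total
years = range(1880, 2013)
y = [((yr - 1880) / 132) ** 3 for yr in years]

true_change = y[-1] - y[0]                  # 1.0 degree C by construction
linear_est = linear_change_estimate(y)      # about 0.90 degree C: a 10% underestimate
```

The 0.05 to 0.1 °C differences quoted above for GISTEMP and Berkeley Earth are of exactly this kind, just estimated from the real curves with LOESS instead of from a toy cubic.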

Such linear estimates are well suited for comparing different datasets because it is well defined how to compute a linear trend and the bias will be similar in the different datasets. That is why linear estimates are used a lot in the scientific literature and scientists reading this know that a linear estimate can be biased when the curve itself is not linear.

But this was a warming estimate for the summary for policy makers. Policy makers and the public in general should get an unbiased estimate of the climatic changes we have seen and of those ahead of us.

Tending to underplay the problem is quite typical. There is even an article on climate scientists "Erring on the Side of Least Drama", and The Copenhagen Diagnosis also gives several examples, such as the very low predictions for the decline in sea ice or the increase in sea level.

When it comes to the warming found in station data, we did study the main possible warming bias (urbanization) in depth, but hardly did any work on the cooling biases that may lead us to underestimate the amount of warming.

In a recent study to define what "pre-industrial" means when it comes to the 2 °C warming limit, the authors suggest a comparison period with relatively few volcanoes, which is thus relatively warm. This underestimates the warming since "pre-industrial". The authors wanted to be "conservative". I think we should be unbiased.

I understand that scientists want to be careful before crying wolf, whether we have a problem or not. However, when it comes to the size of the wolf, we should give our best estimate and not continually err on the side of a Chihuahua.

Related reading

Climate Scientists Erring on the Side of Least Drama

Why raw temperatures show too little global warming

The NY Times promised to fact check their new climate denier columnist — they lied

* The linear estimate is typically smaller, sometimes a lot, whether the actual underlying function is convex or concave. I had expected this estimate to always be smaller, but noticed while writing this post that for polynomial functions, f(t) = t^p, it can also be a few percent higher for p between 1 and 2. Below you can see 4 example curves, where the time runs between zero and one and thus also f(t) goes from zero to one. The total "warming" in all cases is exactly one. The linear estimates are generally less than one, except for the f(t) = t^1.5 example.

The bottom graph shows these linear estimates as a function of exponent p, where you can see that for an exponent between 1 (linear) and 2 (quadratic) the estimates can be a little higher than one, while they are generally lower. Probably Carl Friedrich Gauss or at least Paul Lévy already wrote an article about this, but it was a small surprise to me.
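This little surprise is easy to check numerically. The sketch below evaluates the linear estimate on a fine grid for several exponents; in the continuous limit it equals 6p/((p+1)(p+2)), which peaks at about 1.03 near p = √2 and equals one for both p = 1 and p = 2.

```python
def linear_estimate(p, n=2001):
    """OLS-trend estimate of the total change of f(t) = t**p over [0, 1].

    The true change is exactly 1; slope * period length = slope here."""
    t = [i / (n - 1) for i in range(n)]
    y = [ti ** p for ti in t]
    tbar = sum(t) / n
    ybar = sum(y) / n
    slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) \
            / sum((ti - tbar) ** 2 for ti in t)
    return slope

ests = {p: linear_estimate(p) for p in (0.5, 1.0, 1.5, 2.0, 3.0)}
# below 1 for p = 0.5 and p = 3, slightly above 1 for p = 1.5,
# and (in the continuous limit) exactly 1 for p = 1 and p = 2
```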

** Top photo of a Chihuahua is licensed under the Creative Commons Attribution 3.0 Unported license.
Bottom Chihuahua photo by Inanishi is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0) license.

*** The R code to generate the plots with the linear and nonlinear warming from two global temperature datasets is on GitHub. The R code to study the influence of the polynomial exponent on the linear estimate is also on GitHub.

Thursday, 11 May 2017

Trump's I-am-not-a-witch moment

Dear Director Comey,
While I greatly appreciate you informing me, on three separate occasions, that I am not under investigation, I nevertheless concur with the judgment of the Department of Justice that you are not able to effectively lead the Bureau.
Donald J. Trump
Donald, even if Comey were prime evil himself, you can not fire someone who is investigating you. (Even if Democrats did or do not like him.)

Donald, it would have been better not to talk about the FBI investigating you. (It only puts more focus on these investigations.)

Donald, you can not talk to Comey about ongoing investigations, especially investigations about yourself. (And there is no evidence that Comey informed you.)

Donald, do not write about your FBI investigations when your official reason for firing Comey was that he was not fair to Clinton. (And a months-old incident is not a good excuse for firing someone in a panic.)

Donald, you cannot write that refraining from investigating you would be a reason not to fire Comey. (The rule of law makes America a free country.)

Related reading

Vox: Experts on authoritarianism are absolutely terrified by the Comey firing

NBC News: Trump Interview With Lester Holt: President Asked Comey If He Was Under Investigation. And admits that his Clinton story about Comey’s firing was an excuse at best.

Mother Jones: What the Hell Is Going on With Trump's Delay on the All-Important Paris Decision? Ivanka saves the world? Hah.

Secular Talk: Congressman Is Now Officially A Justice Democrat

Sunday, 30 April 2017

Red Cheeks Team

The idea has been simmering for a few years, but now that their president is in office, the mitigation-sceptical movement is increasingly pushing the idea of a "Red Team". In the last "hearing" of Lamar Smith in the US House of Representatives, John Christy and Judith Curry repeated the claim that climate science needs a "Red Team". Roger Pielke Sr. contributed some tweets. Steven Koonin also made this call on the opinion pages of Murdoch's Wall Street Journal (pay-walled). This is the part of the newspaper the journalists of the news section are embarrassed by.

The [[Red Team]] should aim to destroy the results of climate science. The idea comes from the military and corporations: monolithic, strongly hierarchical organisations where dissent is normally not appreciated and where there is thus a need to break this culture by explicitly ordering a group to show that plans may not work out. Many mitigation sceptics work in such organisations and this seems to guide their erroneous thinking about how science works.

Red Teams galore

Science is organised in thousands of Red Teams already. Every national weather service is independent. Some countries even have multiple ones. That makes about 200 Red Teams. I do not know of any weather service that does not find it is warming.

If you are a conspiracy theorist, thinking they are making up the warming, these 200 Red Teams would have to coordinate intensively to make sure that the weather variability is smoothly correlated from one country to its neighbours, that the climate modes (El Nino, North Atlantic Oscillation, etc.) look similar in a region, including [[teleconnections]] to other continents, including modes and teleconnections not known at the time, and make sure that the spatial pattern of the long-term warming fits to the physics and all the other observed variables.

Proof of how devious climatologists are is that no communications were intercepted showing this biggest coordination project in human history. Scientists may be too stupid to fool WUWT, but they are at least smarter than the NSA and GCHQ.

In Germany alone there are easily over a hundred university groups and research institutes (partially) working on climate and climate change. There are close collaborations with related fields, from statistics and physics to geography and economics. Also counting these there could be hundreds of Red Teams in a medium-sized country like Germany already.

Scientific progress

Every single one of these groups would love to be the one to show there is a problem that needs attention. That is why scientists become scientists. No one has ever gotten funding by saying: my field has solved all its problems, but I would like to keep on doing what I have always done.

Many of these groups only partially work on climate change. Meteorology is a much larger field than climatology; meteorology is directly needed to save lives and avoid economic damages, while climatology "only" produces politically inconvenient results about the future.

Thus for many of these groups it would be no problem whatsoever if they found evidence for the most extreme case: that the world is not warming or that humans are not responsible. On the contrary, that would be an enormous boost to their reputations, and they could use that social capital to work on related problems.

Explicit Red Teams are for organisations that work like planned economies. Science is a free market system.

Climate “sceptics” sometimes give the impression that they think that a single study will vindicate their political battle and settle it once and for all. Reality is that there are many different lines of evidence that it should be warming, that it is warming (see graph below), and that we are creating the increase in atmospheric CO2 (see also additional arguments by Richard Alley) and the warming.

A challenge to one of these lines of evidence would have to refute a lot of evidence, and that normally takes years. It would also not be enough to disprove one line of evidence; all of them would have to fall. Not only the instrumental temperature record would need to be wrong, but also the melting of glaciers, the sea level rise, the earlier start of spring, the decrease in Arctic sea ice and so on.

If I had the winning idea that something is fundamentally flawed, it would take decades of research until we again have a consensus on how the climate is (not) changing, just like our current understanding of climate change took decades to centuries to develop.

Science or PR?

John Christy:
One way to aid congress in understanding more of the climate issue than what is produced by biased “official” panels of the climate establishment is to organize and fund credible “Red Teams” that look at issues such as natural variability, the failure of climate models and the huge benefits to society from affordable energy, carbon-based and otherwise. I would expect such a team would offer to congress some very different conclusions regarding the human impacts on climate.
The benefits from affordable energy do not change the human impacts on climate. That kind of sloppy thinking hints at the Red Team being intended as a PR shop.

As an aside, the benefits of energy are naturally large and the energy sector used to be substantial, but it is nowadays just a few percent of the economy. Even if another energy source were a lot more expensive, that would nowadays hardly change the economy.

The idea that climatology does not study natural variability is ludicrous. I used to think I was not a climatologist because I had never computed an empirical orthogonal function ([[EOF]]) to study the North Atlantic Oscillation. The biggest group in the World Climate Research Program is CLIVAR, studying, wait for it: climate variability. What the mitigation sceptical movement knows about natural variability, from El Nino to the QBO, it knows from the scientific community.

The "failure of climate models" presupposes that the models failed. If we assume they failed and cannot be used to assess the quagmire we are in, the uncertainties would be even larger. We would still have a lot of physics and observations of the (deep) past making clear that climate change is real. Uncertainties can go both ways and mean higher risks. This is thus a topic the Red Team should avoid.

The suggestion to study only the "failure of climate models" is a strange way of doing science. Normal science would be to study what the main discrepancies between models and observations are, try to understand their causes and then try to fix them. It sounds like Christy is more interested in the first step than in understanding and fixing.

This fits his approach to his UAH tropospheric temperature dataset. Already in the 1990s, when the UAH dataset still showed cooling and contained major errors (it did not take the change of the orbit of the satellites into account, had a sign error in the software, etc.), he blamed models for the differences without first trying to understand the reasons.

Chinese calligraphy with water on a stone floor. Do not dig in, but let your position flow with the evidence.
The main reason for the discrepancy is that there is an amplification of temperature changes in the tropical upper troposphere. The stronger long-term warming this causes in models is called the "tropical hotspot". Christy's UAH temperature trends do not show it. It is seen in the stronger response to El Nino in the tropospheric temperatures, both in models and in observations. The hotspot is observed in the radiosonde winds and in a recent carefully homogenized radiosonde temperature dataset.

Christy seems to be happy with claiming that the models failed. I would not ignore the possibility that there are remaining errors in the UAH estimates, especially after all the errors that have already been found. Had I been Christy, I would have tried to strengthen my claim that the models are the problem by trying to understand the reasons. Which processes in the models produce this hotspot, but should not? Whatever it is would still need to produce the hotspot signal in the winds and show the amplification for El Nino on shorter time scales. That sounds hard to me, but Christy has had a few decades to study it.

A similar audit Red Team gave us the Berkeley Earth initiative, funded in part by the Koch Brothers. Judith Curry was part of it, but got out before the politically inconvenient result was published. Anthony Watts, the host of the mitigation sceptical blog WUWT, claimed he would accept the results no matter what. That vow lasted until Berkeley Earth found the same result as every other global temperature dataset. Call me sceptical that this Red Team hullabaloo will have any impact on the US climate "debate".

I am sure it is a coincidence that the terms Red Team and Blue Team fit the political configuration of the country where the climate "debate" takes place, the United States of America. A country where the elite stays in power by pitting the Red Team and the Blue Team against each other. The main reason to support the Red Team is to at least not be the Blue Team. In Georgia the Republicans courted voters with the slogan: Make a liberal cry. Just the thing the coal and oil oligarchs would love to promote in the US climate "debate". Just the thing science should not want to replicate.

Scientists are human; one of the main biases a scientist needs to fight is the urge to dig in and defend their own old studies, claims, methods or datasets. I respect scientists who developed good homogenisation methods and talk about their downsides and the strengths of other methods. I respect that because it is hard and promotes scientific progress. To force a scientist to take the Red or Blue position strengthens defensiveness and thus hurts progress. It is political thinking. In science the evidence determines your position. The position is the end, not the start.

At least John Christy seems to mostly think of doing research; Judith Curry and Steven Koonin want to produce yet another audit or report. This would be an alternative IPCC report, following the example of the NIPCC report, the Nonsense IPCC, an embarrassing regurgitation of zombie WUWT myths authored by tobacco stooge Fred Singer. The NIPCC report is clearly PR. If the Red Team advocates think they have a scientific case, one would expect them to seek funding for science that would convince scientists, rather than bypassing the scientific literature and going straight to the public.

Red and Blue Team framing not only contributes to the politicization of science, it also promotes the false-balance media narrative of two equal groups. Judith Curry, John Christy and Steven Koonin should be debating Peter Wadhams, Guy McPherson and Reddit Collapse.

An important political strategy of the mitigation sceptical movement is to pretend that the science is not in yet. That is why they keep on claiming there is no scientific consensus, provoking consensus studies that find that nearly all scientists and articles agree on the basics. (And then complain that consensus exists.)

Once people understand that scientists agree there is a problem, they want solutions. Some extremists in the mitigation sceptical movement may claim that solar and wind energy spell the end of civilization, but that is a hard sell. Sun and wind have enormous and bipartisan support in the USA.

The 97% of climate scientists who agree on the basics include many conservative scientists. That there is a problem is not a partisan issue. How to solve it, that is politics. The Paris climate agreement was signed by nearly 200 countries and thus many conservative governments. They accept that climate change is a real problem. European conservative parties may be less active, but do not deny there is a problem.

In Europe only Trumpian racist parties deny there is a problem. That the climate "debate" is mostly an American problem shows that the problem is not conservative versus liberal, that it is not a lack of scientific evidence, it is not a problem of the communication of science. The problem is the corrupting influence of money in US politics and media. A Red Team will not solve this.

There is PLENTY of ROOM within the climate science to hold extreme views, both about the science and policy. However, there is no room for the lunacies of unicorns, for sun nuts, for folks who don't get chaos, for radiative physics deniers
Steve Mosher

Details please

ATTP asks very good questions on how this exercise should be organised. "Who would make up the team/teams?" Who are the organisers, the arbiters? Who selects them? What are the criteria? How do they want to prevent normal scientists from joining the Red Team? Scientists normally determine themselves what they work on. Should they be forced to waste their time on this? "How would this work be funded?" Is special funding needed because Red Team ideas do not have sufficient merit to be funded normally? "How would the programme be assessed?"

With all those Red Teams in science, I am curious how the Red Team advocates want to make sure their Red Team will be on their political side and stay there. Writing explicitly into the funding conditions that the applicant has to have a history of deceiving the public in the media and producing bullshit blog posts would probably be too much honesty.

Who would be in the Red Teams? Will they fund conspiracy theorists like Tim Ball and Christopher Monckton and corporate smoking shills like Fred Singer and Steven Milloy? That honesty would be a great sight.


The tension is clear when John Christy writes:
Decisions regarding funding for “Red Teams” should not be placed in the hands of the current “establishment” but in panels populated by credentialed scientists who have experience in examining these issues.
The people with experience and credentials are the "establishment" in science.

The mitigation sceptical movement could maybe organise a Red Team Blue Team exercise themselves and in that way figure out what their position is beyond the only thing they agree on: that climate science is wrong. That way they could demonstrate how enormously valuable this new scientific method is, with the added benefit that they waste their own time. I am curious whether they can come to an agreement about whether the Earth is warming or cooling and whether the greenhouse effect exists and CO2 can produce warming.

I look forward to a detailed proposal for Red Team research and would expect that it will demonstrate how ludicrous the idea is.

There is no real need for government funding of Red Teams whatsoever. Every large oil and coal corporation has a huge incentive to show climate science wrong. If they thought there was a chance of one in a million that climate science was wrong, they would pour millions into studying that rather than into PR misinformation campaigns by networks of thoughtless tanks.

If John Christy were making the case that science funding should not only be based on scientific merit, but that part of the funding should also be based on what is politically important, he might have a point. Already a considerable part of the funding goes to studies that inform (local) governments and companies on how to adapt to climate change. This is mostly a service to society and scientifically less inspiring. If it weren’t so hard to do, it would be a task for engineering firms.

Similarly, it would be politically important to study the tropospheric temperature trend in more detail because of its importance in the American climate “debate”. It has nearly no scientific value because the tropospheric temperature series is so short and buggy. With each update there are huge changes in the trend estimates, and the two available series show large differences, although both try to take into account the currently known problems of the raw data. There is also no societal need for this dataset; no one lives in the tropical troposphere. Consequently only a few people at the University of Alabama in Huntsville and at Remote Sensing Systems work on these datasets once in a while. Occasionally external scientists help in finding problems with this data.

More independent research in this field, leading to the development of new high quality tropospheric temperature datasets would be politically valuable. Maybe that should also be a funding consideration.

Related reading

How a scheme to discredit climate science spread from conservative media to the EPA chief. Scott Pruitt has embraced the “red team/blue team” idea that got exposure from Daily Caller and WSJ.

And Then There's Physics: Red Team vs Blue Team

Stoat on Red teams: The East is Red

Benjamin Santer, Kerry Emanuel and Naomi Oreskes in the Washington Post: Attention Scott Pruitt: Red teams and blue teams are no way to conduct climate science

The killer Rabbet: The Squeegee Kid Returns or Steve Koonin on Team B

The Blue Team at Daily Kos: Deniers Calling for a Red Team to Create Debate on Climate Science

Why doesn't Big Oil fund alternative climate research?

* Top photo with birds, shame by Tiago Almeida used with a Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0) license.

* Black and white photo of boy, shame, by Lee Carson used with a Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0) license.

* Photo of "Taoist monk" by Antoine Taveneaux - Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons.

* Photo of ashamed woman by Naika Lieva used with a Creative Commons Attribution-ShareAlike 2.0 Generic (CC BY-SA 2.0) license.

Monday, 24 April 2017

"Hiatus": Signal and Variability

Stefan Rahmstorf, Grant Foster and Niamh Cahill just summarized the statistical evidence for the mirage people call the "pause" of global warming in their new article: "Global temperature evolution: recent trends and some pitfalls."

The Open Access paper is clearly written; any natural scientist should be able to follow the arguments. The most important part may be a clear explanation of the statistical fallacies that lead some people to falsely claim there was such a thing as a "hiatus" or "slowdown".

Suppose that Einstein had stood up and said: I have worked very hard and I have discovered that Newton got everything right and I have nothing to add. Would anyone ever know who Einstein was? ... The idea that we would not want to be Einstein, if we could overturn global warming ... how exciting would that be? Of the tens of thousands of scientists there is not one who has the ego to do that? It's absurd, it is absolutely unequivocally absurd! We are people.

I have studied the "hiatus" problem hard (1, 2, 3, 4), read this new paper and I have nothing to add. Unfortunately.

Well, okay, maybe one thing. Just because a trend change is not statistically significant does not mean you cannot study why it changed. It only means that you are likely looking at noise and thus will likely not find a reason. But if you think there may be a great reward in the result, that can make high-risk research worthwhile. Looking at how small the trend differences are and knowing how uncertain short-term trends are, I am not going to do it, but anyone else is welcome.
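How uncertain such short-term trends are can be illustrated with a small simulation: generate temperature series with a fixed long-term trend plus year-to-year noise, and look at the spread of all the 10-year trend estimates. This is only a sketch; the trend of 0.018 °C per year and the noise level of 0.1 °C are assumed round numbers for illustration, not fitted to any dataset.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative round numbers (assumed, not fitted to any dataset):
# a steady 0.018 degC/yr warming trend and 0.1 degC annual noise.
trend, noise_sd, n_years, window = 0.018, 0.1, 50, 10

years = np.arange(n_years)
slopes = []
for _ in range(2000):
    series = trend * years + rng.normal(0.0, noise_sd, n_years)
    # Least-squares trend of every 10-year window in this series
    for start in range(n_years - window + 1):
        slopes.append(np.polyfit(years[start:start + window],
                                 series[start:start + window], 1)[0])

slopes = np.array(slopes)
print(f"true trend:     {trend:.3f} degC/yr")
print(f"10-year trends: mean {slopes.mean():.3f} degC/yr, "
      f"std {slopes.std():.3f} degC/yr")
print(f"share of 10-year windows with trend <= 0: {(slopes <= 0).mean():.1%}")
```

With these numbers a noticeable fraction of the 10-year windows even shows cooling, although the underlying trend never changed; such a window is noise, not a "hiatus".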

That there was no decline in the long-term trends also does not mean that it is not interesting to study the noise around this trend. The biggest group in the World Climate Research Program studies Climate variability. That by itself shows how important it is.

This blog is called Variable Variability. I love variability. It is an intrinsic property of complex systems and its behaviour over temporal and spatial averaging scales can tell us a lot about the climate system. It also has large impacts. Droughts and floods fuelled by El Nino are just one example. It is a pity most people just want to average this away.
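How variability behaves over averaging scales can be made concrete with a toy comparison: for uncorrelated white noise the standard deviation of n-point averages falls as 1/√n, while autocorrelated ("red") noise averages down much more slowly. A minimal sketch; the AR(1) lag-1 autocorrelation of 0.7 is an assumed illustrative value, not a fitted climate parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_average_std(x, n):
    """Standard deviation of non-overlapping n-point averages of x."""
    m = len(x) // n
    return x[:m * n].reshape(m, n).mean(axis=1).std()

n_samples = 200_000
white = rng.normal(size=n_samples)

# AR(1) "red" noise: each value remembers 70% of the previous one
# (the 0.7 is an arbitrary illustrative choice).
phi = 0.7
red = np.empty(n_samples)
red[0] = 0.0
for t in range(1, n_samples):
    red[t] = phi * red[t - 1] + rng.normal()

for n in (1, 10, 100):
    print(f"averaging over {n:3d} points: "
          f"white noise std {block_average_std(white, n):.3f}, "
          f"red noise std {block_average_std(red, n):.3f}")
```

The red-noise averages keep much more of their variance at every scale, which is one reason short climate averages are noisier than naive error bars suggest.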

One man's noise may be another man's music

Now that we are taking the climate system into unknown territory, predictions of seasonal, annual and decadal variability have become even more important to plan ahead and protect communities. Historian Sam White suggests that the problem of the little ice age in Europe was not the cold winters, but the unpredictability of the weather. Better predictions will help a lot in coping with climate change and already produce useful results for the tropics.

Variability lovers of the world, let's stand up for the importance of our work and not try to faithlessly justify it with middle-of-the-road research on overstudied averages.

Related reading

Science Media Centre asked three scientists for a reaction to the study: expert reaction to climate hiatus statistics

Cranberry picking short-term temperature trends

Statistically significant trends - Short-term temperature trend are more uncertain than you probably think

How can the pause be both ‘false’ and caused by something?

Atmospheric warming hiatus: The peculiar debate about the 2% of the 2%


Rahmstorf, Stefan, Grant Foster and Niamh Cahill, 2017: Global temperature evolution: recent trends and some pitfalls. Environmental Research Letters, 12, No. 5, https://doi.org/10.1088/1748-9326/aa6825.

Monday, 10 April 2017

Upcoming meetings for homogenisation scientists

There are several new meetings coming up that may be interesting for people working on homogenisation. If you know of more, please write a comment. Please note that the abstract submission deadline for EMS is already in 11 days.

Urban climate summer school
21-26 August 2017 | Bucharest, Romania. Registration deadline: 15 May 2017
Climate monitoring; data rescue, management, quality and homogenization
4–8 September 2017 | Dublin, Ireland. Abstract deadline: 21 April 2017.
11th EUMETNET Data Management Workshop
18–20 October 2017 | Zagreb, Croatia. Abstracts deadline: 31 May 2017
C3S Data Rescue Service Capacity Building and 10th ACRE Workshops
4-8 December 2017 | Auckland, New Zealand.
Workshop - Data Management for Climate Services
April 2018 | Lima, Peru.

Climate monitoring; data rescue, management, quality and homogenization

EMS Annual Meeting: European Conference for Applied Meteorology and Climatology 2017 | 4–8 September 2017 | Dublin, Ireland
The abstract submission deadline is 21 April 2017.

OSA3.1. Climate monitoring; data rescue, management, quality and homogenization
Convener: Manola Brunet-India
Co-Conveners: Ingeborg Auer, Dan Hollis, Victor Venema

Robust and reliable climatic studies, particularly those assessments dealing with climate variability and change, greatly depend on availability and accessibility to high-quality/high-resolution and long-term instrumental climate data. At present, a restricted availability and accessibility to long-term and high-quality climate records and datasets is still limiting our ability to better understand, detect, predict and respond to climate variability and change at lower spatial scales than global. In addition, the need for providing reliable, opportune and timely climate services deeply relies on the availability and accessibility to high-quality and high-resolution climate data, which also requires further research and innovative applications in the areas of data rescue techniques and procedures, data management systems, climate monitoring, climate time-series quality control and homogenisation.

In this session, we welcome contributions (oral and poster) in the following major topics:
  • Climate monitoring , including early warning systems and improvements in the quality of the observational meteorological networks
  • More efficient transfer of the data rescued into the digital format by means of improving the current state-of-the-art on image enhancement, image segmentation and post-correction techniques, innovating on adaptive Optical Character Recognition and Speech Recognition technologies and their application to transfer data, defining best practices about the operational context for digitisation, improving techniques for inventorying, organising, identifying and validating the data rescued, exploring crowd-sourcing approaches or engaging citizen scientist volunteers, conserving, imaging, inventorying and archiving historical documents containing weather records
  • Climate data and metadata processing, including climate data flow management systems, from improved database models to better data extraction, development of relational metadata databases and data exchange platforms and networks interoperability
  • Innovative, improved and extended climate data quality controls (QC), including both near real-time and time-series QCs: from gross-errors and tolerance checks to temporal and spatial coherence tests, statistical derivation and machine learning of QC rules, and extending tailored QC application to monthly, daily and sub-daily data and to all essential climate variables
  • Improvements to the current state-of-the-art of climate data homogeneity and homogenisation methods, including methods intercomparison and evaluation, along with other topics such as climate time-series inhomogeneities detection and correction techniques/algorithms, using parallel measurements to study inhomogeneities and extending approaches to detect/adjust monthly and, especially, daily and sub-daily time-series and to homogenise all essential climate variables
  • Fostering evaluation of the uncertainty budget in reconstructed time-series, including the influence of the various data processes steps, and analytical work and numerical estimates using realistic benchmarking datasets

Related are the sessions: Metrology for meteorology and climate and Climate change detection, assessment of trends, variability and extremes.

Urban climate summer school

University of Bucharest, Bucharest, Romania
August 21-26, 2017
Registration deadline: 15 May 2017

Organizers : Research Institute of University of Bucharest (ICUB), Urban Climate Research Center at Arizona State University (ASU), Urban Water Innovation Network (ASU-CSU), Society for Urban Ecology (SURE), Interdisciplinary Center of Advanced Research on Territorial Dynamics (CICADIT)

Rationale and goals : Urban areas impart significant local to regional scale environmental perturbation. Urban-induced effects, simultaneously with impacts owing to long-lived emissions of greenhouse gases, may trigger additional physical and socioeconomic consequences that affect the livelihoods of urban dwellers. While urban areas amass more than 50% of the world population, and three of four Europeans live in a city, the systematic monitoring and assessment of urban climates, mitigation of and adaptation to adverse effects, and the strategic prioritization of potential solutions may enable enhanced preparedness of populations and local authorities. Such challenges call for enduring scientific advancements, improved training and increased awareness of topical issues.

This summer school aims to provide structured information and skill-building capabilities related to climate change challenges in urban areas, with a primary focus of creating an active pool of young scientists that tackle the major sustainability challenges facing future generations. The critical areas to be covered refer to
(1) modern monitoring of urban environments
(2) modelling tools used in urban meteorology and climatology
(3) adaptation and mitigation strategies and their prioritization
(4) exploring critical linkages among environmental factors and emerging and chronic health threats and health disparities. Those attending can expect to gain an understanding of the state-of-the-art and be capable to use the most appropriate tools to address specific problems in their respective fields of interest.
The summer school is intended for doctoral and post-doctoral students who already have basic knowledge and interest for urban climate issues.

More information ...

11th EUMETNET Data Management Workshop

Zagreb, Croatia, 18 – 20 October 2017
More information will appear later on the homepage: http://meteo.hr/DMW_2017

Main Topics

  • Data rescue: investigation, cataloguing, digitization, imaging
  • Climate observations: standards and best practices, definition of climatological day, mean values
  • Metadata: WMO Information System (WIS), INSPIRE, climate networks rating guides
  • Quality control: automatic/manual of climate time-series, on-line data, real-time observations
  • Homogenisation of climate time-series from sub-daily to monthly scale, homogenisation methods, assessment of inhomogeneity
  • Archiving: retention periods, depository, climate service centres and data collections for scientific and public use, databases, data access, user interface, data distribution

Call for Abstracts

Presentations will be oral or posters. Abstracts should be written in English and be short, clear and concise. Figures, tables, mathematical symbols and equations should not be included. Abstracts should be sent before 31 May 2017 to dmw2017@zamg.ac.at. Authors will be informed about the acceptance of their papers by the scientific committee early in September.

Conference Venue and Programme

The workshop will be held in the building of Croatian State Archives: Marulićev trg 21, Zagreb, Croatia.

Wednesday, October 18th 2017

08:30-09:30 registration
09:30-16:00 sessions
17:00 - guided tour, ice breaker

Thursday, October 19th 2017
09:00-17:00 sessions
19:00 workshop dinner

Friday, October 20th 2017
09:00-15:30 sessions

Further Information

Conference registration fee is 80 €. Details on registration procedures and the workshop in general will be available
on the website: meteo.hr/DMW_2017 (later)
Contact: dmw@cirus.dhz.hr

Scientific Organization

Ingeborg Auer (ZAMG)
Peer Hechler (WMO)
Dan Hollis (UKMO)
Yolanda Luna (AEMET)
Dubravka Rasol (DHMZ)
Ole Einar Tveito (MET Norway)

C3S Data Rescue Service Capacity Building and 10th ACRE Workshops

The C3S Data Rescue Service Capacity Building and 10th ACRE Workshops will be held at NIWA in Auckland, New Zealand, during the week of 4-8 December this year. There is no homepage for this meeting yet, but more information will follow later on: www.met-acre.net. This homepage also gives information on the previous annual ACRE workshops.

Workshop - Data Management for Climate Services

Taller – Gestión de Datos para los Servicios Climáticos

Location: Lima, Peru
Time: April 2018 (date to be defined)
Organized by: CLIMANDES - Climate services to support decision making in the Andes Supported by: Swiss Agency for Development and Cooperation (SDC) and the World Meteorological Organization (WMO)
Region: Ibero-American Countries
Duration: 3 days (9:00 a.m. - 5 p.m.)
Number of participants: 80 - 100


The implementation of the WMO-led Global Framework for Climate Services (GFCS) strengthens the capabilities of National Meteorological and Hydrological Services (NMHSs) through its five pillars (Observations and Monitoring; Capacity Development; User Interface Platform; Research, Modeling and Prediction; Climate Services Information System). In this context, SENAMHI and MeteoSwiss are developing the first workshop on "Data Management for Climate Services" focusing mainly on the first three of the mentioned pillars. The workshop will be carried out in Peru by members of the CLIMANDES project with the support of SDC and WMO.

The workshop "Data Management for Climate Services" is addressed towards both the technical and the academic community involved in the implementation of national climate services. The workshop focuses on sharing knowledge and experiences from the provision of high-quality climate services targeted at WMO's priority areas and their citizens. The methodologies will cover topics such as quality control, homogenization, gridded data, climate products, use of open source software, and will include practical examples of climate services implemented in the Ibero-American region. The workshop will contribute to the continuous improvement of technical and academic capacities by creating a regional and global network of professionals active in the generation of climate products and services.


  • Strengthen data management systems for the provision of climate services.
  • Share advances in the implementation of climate services in the Ibero-American region.
  • Interchange with other NMHSs on best practices in climate methodologies and products.
  • Improve the regional and global collaborations of the NMHSs of the Ibero-American region.
  • Show the use of open-source software.

Outcome The following outcomes of the workshop are envisaged:
  • A final report providing a synthesis of the main results and recommendations resulting from the event.
  • The workshop builds the first platform to exchange technical and scientific knowhow in Ibero-America (WMO RA-III and IV), and among participants from all other regions.
  • Hence, the workshop contributes to the creation of a regional and global network in which knowhow, methodologies, and data are continuously shared.


The workshop will consist of four sessions consisting of presentations, posters and open discussions on:

● Session 1:
  • Data rescue methods: methods for data rescue and cataloguing; data rescue projects
  • Metadata: methods of metadata rescue for the past and the present; systems for metadata storage; applications and use of metadata
  • Quality control methods: methods for quality control of different meteorological observations of different specifications; processes to establish operational quality control

● Session 2:
  • Homogenization: methods for the homogenization of monthly climate data; projects and results from homogenization projects; investigations on parallel climate observations; use of metadata for homogenization

● Session 3:
  • Gridded data: verification of gridded data based on observations; products based on gridded data; methods to produce gridded data; adjustments of gridded data in complex topographies such as the Andes

● Session 4:
  • Products and climate information: methods and tools of climate data analysis; presentation of climate products and information; products on extreme events
  • Climate services in Ibero-America: projects on climate services in Ibero-America
  • Interface with climate information users: approaches to building the interface with climate information users; experiences from exchanges with users; user requirements on climate services

Furthermore, hands-on sessions on capacity building, e-learning, the use of open-source software, and on ancestral knowledge in Ibero-America will take place during the workshop. The workshop is complemented by an additional training day on climate data homogenization and a field visit at the end of the workshop.


The Meteorological and Hydrological Service of Peru SENAMHI will organize the workshop on “Data Management for Climate Services” in close collaboration with the Federal Office of Meteorology and Climatology MeteoSwiss. The workshop is part of the project CLIMANDES 2 (Climate services to support decision making in the Andes) which is supported by the Swiss Agency for Development and Cooperation SDC and by the World Meteorological Organization (WMO).

For more information and to get notified when the date is known please contact: Climandes.

Sunday, 19 March 2017

Did the lack of an election threshold save The Netherlands?

The Netherlands. Also known as flat Switzerland and as the inventors of the stock market crash. A country you think of so little that we were surprised by the international attention for the Dutch election last week. Although The Netherlands is the 17th-largest economy in the world, we are used to being ignored,* typically not making any trouble.

But this time the three-part question was whether, after Brexit and Trump, The Netherlands, France and Germany would also destroy their societies in response to radical fundamentalist grandpas campaigning against radical fundamentalist Muslims. The answer for the Dutch part is: no.

To be honest, this was clear before the election. The Netherlands has a representative democracy. The government is elected by the parliament. The seats in parliament depend closely on the percentage of votes a party gets. This is a very stable system, and even when Trump was inaugurated, the anti-Muslim party PVV polled at 20%, nowhere near enough to govern. The PVV survey results plotted below are in seats; 20% corresponds to 30 seats. Every line is one polling organization.

Due to the Syrian refugee crisis the PVV jumped up in September 2015. They went down during the primaries as the Dutch people got to know Trump and the refugees turned out to be humans in need of our help. After Trump was elected, his favorability went up; Americans gave him the benefit of the doubt. The same happened to the PVV: if America elects Trump, he cannot be that bad, right? Right? While Trump was trampling America as president and filling his cabinet with shady, corrupt characters, the PVV dropped from 20% to 13% (20 seats).

There is no guarantee the drop of the PVV was due to Trump, but the temporal pattern fits and the leader of the PVV, Geert Wilders, is a declared fan of Trump. People campaigning against the PVV made sure to tie Wilders to Trump. For example in this AVAAZ advertisement below. I hope AVAAZ will also make such videos for France and Germany.

I would certainly not have minded the election being a few months later, to give Trump the chance to demonstrate his governing skills more clearly. That would also help France and Germany. In addition, Germans know their history very well and know that German fascism ended with the Holocaust; it did not start with it. It started with hatred and discrimination. The most dangerous case is France, with its winner-takes-all presidential system.

Fascism: I sometimes fear... (by Michael Rosen)

I sometimes fear that
people think that fascism arrives in fancy dress
worn by grotesques and monsters
as played out in endless re-runs of the Nazis.

Fascism arrives as your friend.
It will restore your honour,
make you feel proud,
protect your house,
give you a job,
clean up the neighbourhood,
remind you of how great you once were,
clear out the venal and the corrupt,
remove anything you feel is unlike you...

It doesn't walk in saying,
"Our programme means militias, mass imprisonments, transportations, war and persecution."

I expect that it also hurt the PVV that Wilders did not show up for most of the debates. Without the solution-free animosity of Wilders it was possible to have an adult debate about solutions to the problems in The Netherlands. Refreshing and interesting. In the last days, when he did show up, the level immediately dropped, making clear what the main Dutch political problem is: Wilders.

As the graph below shows, the Dutch parliament will have 13 parties. This has triggered a debate about whether we need an election threshold.

A poll made around the election shows that a majority of 68% would be in favor of an election threshold of at least 2 seats (1.3%) and 28% even favor a threshold of 5 seats (3.3%). As the map below shows such a threshold would fortunately still be on the low side internationally.
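The percentages behind those seat counts follow directly from the 150 seats of the Dutch parliament; a minimal sketch of the conversion (function name is mine):

```python
# Seats <-> vote-share conversion for the 150-seat Dutch parliament.
TOTAL_SEATS = 150

def seats_to_share(seats, total=TOTAL_SEATS):
    """Vote share (in percent) corresponding to a number of seats."""
    return 100.0 * seats / total

print(round(seats_to_share(2), 1))  # 2 seats -> 1.3 (percent)
print(round(seats_to_share(5), 1))  # 5 seats -> 3.3 (percent)
```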

[Map of election thresholds by country, colored in one-percentage-point bands from ≥1% up to <7%; in some countries each chamber has a different threshold.]

I think a threshold, even a low one, is a bad idea. The short-term gains are small, the short-term problems are big and we risk a long-term decline of the Dutch political culture, which is already at a low due to Wilders. The arguments are not specific for The Netherlands. I hope these thresholds go down everywhere they exist.

The main argument in favor is that small parties make it harder to form a coalition government. This is true: small parties need visible influence to make governing worthwhile and to survive the next election, which means they get a disproportionate piece of the pie. This makes the other coalition partners worse off, which makes negotiations harder.

However, next to the small parties, which are hard to include in a government, we also have the PVV, which is hard to include because of their ideology and lack of workable ideas. The small parties in this election (PvdD, 50+, SGP, DENK, FvD) have 17 seats combined, while PVV has 20 seats. Getting rid of the small parties would thus reduce the problem by less than half. Not having large toxic parties in parliament would be at least as important.

Even without the small parties we would still need four parties to build a government. The election threshold would need to be very high to reduce that to three parties. So the benefits are small.

If the threshold were that high, an immediate problem would be that people voting for small parties are not represented in parliament and get less attention in the media. This is unfair.

This can have severe consequences. In Turkey the election threshold is 10%, and in 2002 there was a case where 7 sitting parties fell below this threshold and a whopping 46% of all votes were left without representation in parliament. That is a big price to pay for making it somewhat easier to build a government.
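The mechanism can be illustrated with a small sketch: the wasted share is simply the sum of the vote shares of all parties below the threshold. The vote shares below are hypothetical, chosen only to show how many mid-sized parties just under a high threshold add up; they are not the actual Turkish 2002 results.

```python
def wasted_share(vote_shares, threshold):
    """Percentage of votes cast for parties below the electoral threshold.

    vote_shares: iterable of party vote shares in percent.
    threshold: electoral threshold in percent.
    """
    return sum(s for s in vote_shares if s < threshold)

# Hypothetical distribution: two large parties pass a 10% threshold,
# many mid-sized parties just miss it.
shares = [34, 20, 9.5, 8.5, 7, 6, 5, 4, 3, 3]  # percent, sums to 100
print(wasted_share(shares, 10))  # -> 46.0
```

With such a distribution almost half the electorate ends up unrepresented, even though no single excluded party was small.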

An election threshold also stimulates strategic voting, where people do not vote for the party they agree with, but for a party that will get into parliament or government. In the last Dutch election a quarter of the voters voted strategically. The right-wing VVD and the social democratic PvdA were competing for the number one spot. In the end they formed a coalition government, which was thus not supported by the population, was highly unpopular and lost heavily this election. That is not a dynamic you want to reinforce.

Strategic voting can also mean that a new party that does have sufficient support to pass the threshold does not get votes, because many do not trust that it will make it and keep voting for an existing party they like less.

Last week's Dutch election had a turnout of 80%. Having more parties means that people can find a better match for their ideas. A faithful ideologue may need just two parties: his own and that of the enemy. If you think only of the left-right axis, you may be tempted to think you need only two or maybe four parties to cover all ideas. Whatever "left" and "right" mean. The axis feels real, but has those funny names because it is so hard to define.

Political scientists often add a second axis: conservative to progressive. The graph below puts the Dutch parties on both axes: left to right on the horizontal axis, with progressive at the top and conservative at the bottom. The parties that care most about the environment and poor people (GroenLinks, SP, Christen Unie, D66) are still all over the map. The vertical axis also shows how materialistic the parties are, with parties that care about the distribution of money and power in the middle and parties that find immaterial values important at the top and the bottom. In other words: we need multiple parties to span the range of political thought and to have parties that fit well enough to get people out to vote.

Having a choice also means that it pays to pay attention to what happens in politics. American pundits like to complain that Americans are badly informed about politics and the world, but why would the voter pay attention? The US set up an electoral system in which the voter has nearly no choice. The US has two parties that are way out there for most people.

Because of the district system a vote nearly never matters, especially after [[Gerrymandering]]. There are just a few swing districts and swing states where a vote matters. That is really bad for democracy. Changing the system is more helpful than blaming the voters.

Let me translate the party names for the foreigners. GroenLinks is a left-wing green party. D66 is an individual-freedom-loving (liberal) party with a focus on democratic renewal. PvdA is traditionally a social democratic party, but has lost its moorings. SP is a social democratic party like the PvdA was two decades ago. GroenLinks and SP typically vote with each other, but GroenLinks attracts the educated people and SP the working class. (It is sad that those groups do not mix.)

VVD used to be a pro-business individual liberty party, but has become more conservative and brown. CDA is a center-right Christian democratic party. Christen Unie is an actually Christian party that tries to follow the teachings of Christ and cares about the environment and the (global) poor. SGP is a quite fundamentalist Christian party that likes the Old Testament more. PVV is the anti-Muslim authoritarian party. For the Americans: most of the policies of Bernie Sanders are Christian democratic (although they would use different words to justify them).

That politics is much more than one axis can also be seen in a transition matrix. The one below shows how voters (or non-voters) in 2003 voted in 2006. A reading example: people who voted CDA in 2003 voted CDA again in 2006 in 71% of the cases and voted PvdA in 3% of the cases. There are many transitions that do not follow the left-right axis or the conservative-progressive axis. People are complicated and have a range of interests.

2003 \ 2006     CDA  PvdA  VVD  SP  GroenLinks  D66  Christen Unie  PVV  Other  Non-voters
CDA              71     3    6   6           0    0              4    2      1           6
PvdA              3    59    2  20           3    1              1    1      1           9
VVD              23     3   55   3           0    1              1    5      2           7
SP                4    11    0  70           6    0              2    4      2           2
GroenLinks        3     7    1  25          46    1              4    0      2           9
D66               8    17   17  15          12   23              2    0      5           0
Christen Unie     2     2    0   2           0    0             91    2      0           0
LPF               7     4   18  14           0    1              0   36      5          15
Other            10     2    2  10           2    0              7    2     57           7
Non-voters        6     6    3   9           0    0              0    5      1          70
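To make the reading example concrete, here is a minimal sketch (the dictionary layout and function name are mine) that looks up transitions in the matrix above, here filled in for the first two rows only:

```python
# 2003 -> 2006 voter transitions; rows are the 2003 vote, values in percent.
columns = ["CDA", "PvdA", "VVD", "SP", "GroenLinks", "D66",
           "Christen Unie", "PVV", "Other", "Non voters"]
transitions = {
    "CDA":  [71, 3, 6, 6, 0, 0, 4, 2, 1, 6],
    "PvdA": [3, 59, 2, 20, 3, 1, 1, 1, 1, 9],
}

def transition(from_party, to_party):
    """Percentage of from_party's 2003 voters who chose to_party in 2006."""
    return transitions[from_party][columns.index(to_party)]

print(transition("CDA", "CDA"))   # -> 71
print(transition("CDA", "PvdA"))  # -> 3
```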

The main problem is in the long term. An election threshold limits competition between parties. A threshold makes it harder to split up a party or to start a new one. That is nice for the people in power, but not good for democracy within the party or for the voters. Parties become more vehicles of power and less places to discuss problems and ideas.

With a high threshold the party establishment can kick people or small groups out without having to fear many consequences. A wing of a party can take over power and neutralize others with near impunity. When a party does not function well, becomes corrupt, starts to hold strange positions or sticks to outdated ideas, voters cannot easily go to an alternative. In the map with thresholds above you can see that high thresholds are typical of unpleasant, not too democratic countries.

You see it in the USA, where the corporate Democrats thought they could completely ignore the progressives, because lacking a real alternative and in the face of grave danger to the Republic the progressives would be forced to vote for them. Politics in Germany (with a 5% threshold) is much more about power than in The Netherlands, where politicians make compromises and try to get many people on board. There is no way to prove this, but I think the election threshold is important for this difference.

That is why countries with low thresholds have parties with new ideas, such as environmentalism or the hatred of Muslims, or old-fashioned niche ideas like general racism. In the latter cases you may like that these ideas are not represented in parliament, but the danger is that they suddenly blow up and a Trump becomes president. It is much better to have Wilders in parliament making a fool of himself, making public that many of his politicians have lurid and criminal pasts, and demonstrating that he cannot convert his hatred into working policies and legislation. It also gives the decent parties the possibility to respond in time to the real problems the voters of such parties have, which they project on minorities.

The lack of competition also promotes corruption, by making corruption less dangerous. In the extreme American case of two parties, a lobbyist only has to convince party D that he can also bribe party R, and both parties can vote for a bill that transfers power to corporations on a Friday evening without voters having any possibility to intervene. In the extreme case the corruption becomes legalized and the politicians mostly respond to the wishes of the donor class and ignore everyday citizens. The disillusionment with democracy this creates makes it possible for anti-democratic politicians like Trump or Wilders to go beyond their small racist niche.

So my clear advice is: Netherlands, do not introduce an election threshold. America, get rid of your district system or at least introduce more competition with a [[ranked voting system]].

Related reading

In Dutch: Which effects would an election threshold have had on the 2012 election? Welke effecten zou een kiesdrempel hebben?

To my surprise The Netherlands already has a small election threshold: a party needs enough votes for at least one full seat, and there is no rounding up. See Wikipedia in Dutch on election thresholds: Kiesdrempel

In Dutch: How good were the polls? Hoe dicht zaten de peilingen bij de uitslag?

* Also, Angela Merkel has visited The Netherlands only 6 times in her 12 years in office.