Sunday, 31 January 2016

The difference between Bernie Sanders and Hillary Clinton on climate change?



Just two days ago 350 Action published a comparison of the plans of the presidential candidates to combat climate change. 350 Action is the political arm of climate action group 350.org, which was founded by Bill McKibben. They tried to ask all candidates 70 questions. A summary of the differences between Sanders and Clinton can be found above. They clearly found that Bernie Sanders plans to do more.

For the non-Americans reading this blog let me add that on the Republican side they had "more luck eliciting declarations of climate denial and defenses of the fossil fuel industry than any significant evolution on the issue." The US Senate voted this week on whether human activity significantly contributes to climate change. A weird thing in itself, the more so in 2016! Of the 54 Republican senators just five accepted that statement. Relative to that extremism, the differences between the Democrats are small.

In December, Think Progress also compared the three Democratic candidates and similarly found that Bernie Sanders holds more positions favored by the environmental movement. For the record, Martin O’Malley scored even better.

Both organizations note that Hillary Clinton only improved her climate change plans after Bernie Sanders had been campaigning against her for some time.

Bernie Sanders clearly came out as the favorite presidential candidate during a “Climate Emergency Caucus” of the environmental group The Climate Mobilization, which fights for zero greenhouse gas emissions by 2025. Sanders won 69 percent of votes at their mock Democratic caucus. Clinton, O’Malley and uncommitted all got about 10 percent.



If you care about climate change, Bernie Sanders is clearly your man. However, even if Hillary Clinton had the better plans, I would go for Sanders: you also have to be able to execute the plans, and there are other important issues apart from climate change.

Democracy

In my assessment the main problem in America is the excessive influence of money. Everywhere rich people unfortunately have more influence, but the way corporations and billionaires determine US policies destroys the democratic heart of America. This is to a large part made possible by unlimited campaign contributions, which amount to legal bribery.

The oligarchy is first of all deeply undemocratic. Corporations have different interests than the people. It is amazing what kind of obvious, highly popular policies cannot pass Congress. Renewable energy is enormously popular with the public, but does not get much political support. People who are not allowed to fly because they are on the terrorist watch list can buy an automatic weapon; the US Congress explicitly voted against a bill fixing this problem. In 2008 the population had to bail out the banks to avoid an even larger depression because they are too big to fail. That is the end of the market mechanism: the upsides stay private while the downsides get socialized. That invites taking too much risk, yet the banks are now bigger than in 2008. Because companies have legally bribed so many politicians, politics is not able to fix these obvious problems.

The money makes rational debate impossible. The politicians cannot negotiate and compromise because they have to do what their donors want them to do. That is why you get the childish exchanges we see in the climate "debate": a real debate is not possible. It would be more honest if the donors sat at the table themselves, like in medieval times when the local warlords were "advising" the king.


"If government is to play its role in creating a successful economy, we must restore comity, compromise, openness to evidence"
Ben Bernanke


The bribed politicians also have a huge influence on the public and published opinion. When crazy things are said in the media, such as James Inhofe calling climate change a hoax and Ted Cruz calling climate science dogma, such statements start to sound acceptable. Most people do not take the time to carefully review the evidence; people are social animals, we normally negotiate our opinions by interacting with others, and the opinion of leaders is very influential. Authoritarians especially are susceptible to picking up the opinions of their leaders. If only a weather presenter from Chico, California, blogged daily about all those obvious problems with climate science that the experts do not see, the situation in the USA would be completely different. Money in politics is an important reason for the American exceptionalism in the climate "debate".

Bernie Sanders sees money in politics as the main problem that needs fixing. For that reason alone, I would vote for him if I could. Without fixing this, it is nearly impossible to fix other problems. Without fixing this, solving climate change is like running a marathon with a 50 kg sack of rice on your back. First the weight needs to be removed.

Money became so dominant in large part due to disastrous Supreme Court decisions that money is speech and that corporations are people. One way to fix that is a better Supreme Court. The next president will likely nominate one to three Supreme Court justices. Executive actions can make the money streams more transparent, which would likely reduce them and make them less influential. The president can press for a constitutional amendment. (Simultaneously, the people can try to get an amendment via the states.)

Winning

In national polls Clinton has more support among likely Democratic primary voters, but at this stage national polls are not very informative. Just imagine someone calling you up to ask you about something you normally do not think much about. Would you like to carpet bomb Agrabah? Polls are very different from elections and referendums. National polling results at the moment largely reflect name recognition. When an election comes up, people start paying more attention and talking to each other. Only closer to the election do polls start to have value.

National polls are especially uninformative at this point because Clinton has much higher name recognition than Sanders.

This Monday Iowa holds the first caucus, followed by the New Hampshire primary. In New Hampshire Sanders is well ahead by now. In Iowa Clinton and Sanders are too close to call and the polls do not agree with each other. The main problem is determining who is a likely voter, and especially whether young people will show up. Young people overwhelmingly support Sanders. Normally they hardly show up, except in 2008, when they thought they could get real change. I would expect this to happen again; this time it is worth it. Some polling organizations even classify only people who voted in previous caucuses as likely voters, which completely excludes young people.

[UPDATE after Iowa. The result was basically a tie between Clinton and Sanders. Clinton had 0.4% more "votes". Interestingly, it was a tie for almost all subgroups (income, education), except for young people, who support Sanders more, and women, who support Clinton more. It was even a tie for people who voted Clinton in 2008.

You can see this as a win for Clinton: Iowa is quite white and Clinton does better among people of color, so you could argue that Sanders should have won Iowa outright. I do not find this argument very convincing. There is nothing special about Clinton's policies when it comes to minorities compared to Sanders'; that is a policy tie, and it can easily change.

I find it more convincing to say that Sanders won. He came from nothing, while she was the clear favorite at the beginning of the campaign. Clinton has a lot of name recognition, support from other (local) politicians, more money (from large donors), and had several years to prepare herself. It now becomes harder to ignore Sanders in the media, where he did not get covered much up to now. And when people get to know him, they like him and his policies. So I would say: in Iowa a tie, nationally Sanders won.]

(For the same reason, polling results for Trump are unreliable: many of his supporters normally do not go to caucuses and one can only guess whether they will go to the Iowa caucus this time. [UPDATE. While winning in the polls, Trump lost and almost came in third. Not good for his image.])

If Iowa and New Hampshire go to Sanders, the primaries start to get interesting. That is when the corporations will start fighting back and when people will start to inform themselves about Sanders. That is when we will learn how he handles stress and whether he will do a good job in the general presidential election.

I expect he will, but then I am biased as a European. The published opinion will try to convince the public that Sanders' plans are impossible. For me it is hard to imagine they can pull such nonsense off: most of what Sanders wants is completely mainstream in Europe. No matter how right-wing a European party is, I cannot imagine it accepting that people die because they waited too long before going to the doctor. That sounds as if death panels are okay as long as the hands of the panelists are invisible. It is mainstream in Europe that college is not only a personal benefit but contributes to society and prosperity as well, and that everyone who has the skills and the drive should be able to go to university.

I am afraid that after the first primaries Clinton will also show her inner Republican even more. In the last two weeks she has already started deceiving the electorate to attack political opponents. I have no problem with playing hard, but I do like politics to be about ideas.

Winning the primaries also depends on whether the voters believe you can win the general election. That is hard to judge, but the evidence at this moment does not support Clinton's claim that she is more electable. There is no strong case yet that Sanders is more electable either, but his numbers go up as people get to know him, while Clinton's numbers are stable or going down.

In match-up polls between one Democratic and one Republican candidate, Sanders on average performs better. Clinton wins over Donald Trump with a margin of +2.7%, but Sanders wins with +5.3%, and the two most recent polls are even above 10%.

Clinton versus Cruz would be won by Cruz by 1.3%, although Clinton wins the most recent poll. Sanders versus Cruz would be won by Sanders by 3.3%, although Cruz wins the most recent ones marginally.

Like normal polls, these match-ups do not say much. It is very hard for people to imagine the real choice they would have to make, and they hardly know the candidates yet. Thus rather than looking at the current polls, we have to try to understand the dynamics of the campaigns. The Daily Kos writes:
[Bernie Sanders] has the overwhelming support of independents, whereas Hillary has lukewarm support from them at best, giving him a huge general election advantage. He also has crossover appeal to Republicans, earning up to 25% of their support in his home state. Already, numerous Republicans for Bernie have been documented. But Bernie is also best positioned to win because he will bring new voters to the polls, who are then likely to vote Democrat—the young, the poor, and the disillusioned.
I would expect the real difference between a Republican and any Democrat to be large in the end. For now the Republican candidates can hide their ignorance or lack of empathy in an enormous field, in debates that cannot go into depth. In the main campaign there will be only two candidates on the debate stage; any ignorant Republican outsider will be destroyed there. And if a debate contrasts Cruz or Rubio with a human being, they will look even more extreme and even less sympathetic.

No sitting Republican Senator has endorsed either Trump or Cruz. Their celebrity couple name is Crump (ht Stephen Colbert). The deeply conservative magazine the National Review devoted an entire issue to being Against Trump.



There are many decent Republicans who will be put in a tough spot when one of the currently leading candidates becomes the official Republican nominee. I expect that easily 20% will not vote for one of these radicals. If Hillary Clinton is the Democratic candidate, they will mostly stay at home. If Bernie Sanders is the candidate, a considerable part will vote for him.


“It’s like being shot or poisoned… what does it really matter?”
Sen. Lindsey Graham on Ted Cruz and Donald Trump


Climate change will become an ever larger liability for the Republicans. In this primary they cannot soften on the issue, but in the general election they will look completely out of touch with reality. Even people who do not care about climate change itself will have some doubts about giving such people the nuclear codes. And that in a year that quite likely again becomes a record warm year, the third record in a row.



In Vermont Sanders got a decent amount of votes from Republicans; they hate money in politics as much as Democrats do. Independents like Sanders more than Clinton. He is sympathetic and trustworthy, with a very consistent voting record. People are fed up with the establishment, as you can see from the popularity of completely incompetent outsiders in the Republican primary. Sanders, who does not take money from the establishment and runs for real change, can distance himself from that.

In a race against Trump, Trump can claim to be his own man and that Clinton has to do what her donors want. Sanders, in contrast, can equally claim to be independent, and he actually wants to stop the legal bribery; Trump does not.

In the past, a candidate in the middle had an advantage: they take some voters from the other party, and the wings of their own party are forced to vote for them to prevent worse. Nowadays, however, with only 50 to 60% of the electorate actually voting, the most important job of a candidate is to get their supporters to actually vote. I would expect Sanders to generate more enthusiasm than Clinton; he has more supporters and larger rallies. Both are helped by a radicalized Republican party that makes clear to Democrats that they need to vote.

Last month, I made this prediction.


I am reasonably confident Sanders will win; naturally this is not science, just my personal assessment. The Democrats also winning both chambers is a more daring prediction. On the positive side, many more Republican seats are up for election. Let's concentrate on the House, which is more difficult than the Senate. Charlie Cook and David Wasserman:
Today, the Cook Political Report counts just 33 [House] seats out of 435 as competitive, including 27 held by Republicans and six held by Democrats.
Still, to win the House, the Democrats "would need to win as much as 55 percent of the popular vote, according to the Cook Political Report's David Wasserman". A ten percent difference is large, but it has been done before.

Making this happen will depend on turnout, and thus on enthusiasm and the hope of finally transferring power back from the corporations to the citizens. This will not be easy, but it is easier with Sanders.

If Congress does not change color, the climate "debate" suggests to me that reaching out, Clinton's strategy, does not help one bit. We have seen how well it worked for Obama. The only thing that helps is pressure from the electorate. Without sticks, the carrots do not work. Without disinfecting sunlight on the ugly spots and fear of being unseated, nothing good will happen in Washington. If the Congressmen expect that their donors can no longer help them (as much) in the next election, they may feel freer to actually do their job.

The problem that remains is getting money out of the media. Getting money out of politics partially solves that problem, because the media gets a large part of that money from political ads. I sometimes wonder whether ads are meant to influence consumers or the media. Getting all money out of the media is tough, because the freedom of the press should not be endangered in the process. Any suggestions?



Related reading

Democracy is more important than climate change #WOLFPAC

National Review: Conservatives against Trump: http://www.nationalreview.com/article/430126/donald-trump-conservatives-oppose-nomination

In 50-49 vote, US Senate says climate change not caused by humans

On Climate Questions, Only One Candidate Has All the Right Answers

Voter's Guide: How the Candidates Compare on Climate and Energy

Thursday, 21 January 2016

Ars Technica: Thorough, not thoroughly fabricated: The truth about global temperature data

How thermometer and satellite data is adjusted and why it must be done.
Published on Ars Technica by Scott K. Johnson - Jan 21, 2016 4:30pm CET


“In June, NOAA employees altered temperature data to get politically correct results.”

At least, that's what Congressman Lamar Smith (R-Tex.) alleged in a Washington Post letter to the editor last November. The op-ed was part of Smith's months-long campaign against NOAA climate scientists. Specifically, Smith was unhappy after an update to NOAA’s global surface temperature dataset slightly increased the short-term warming trend since 1998. And being a man of action, Smith proceeded to give an anti-climate change stump speech at the Heartland Institute conference, request access to NOAA's data (which was already publicly available), and subpoena NOAA scientists for their e-mails.

Smith isn't the only politician who questions NOAA's results and integrity. During a recent hearing of the Senate Subcommittee on Space, Science, and Competitiveness, Senator Ted Cruz (R-Tex.) leveled similar accusations against the entire scientific endeavor of tracking Earth’s temperature.

“I would note if you systematically add, adjust the numbers upwards for more recent temperatures, wouldn’t that, by definition, produce a dataset that proves your global warming theory is correct? And the more you add, the more warming you can find, and you don’t have to actually bother looking at what the thermometer says, you just add whatever number you want.”

There are entire blogs dedicated to uncovering the conspiracy to alter the globe's temperature. The premise is as follows—through supposed “adjustments,” nefarious scientists manipulate raw temperature measurements to create (or at least inflate) the warming trend. People who subscribe to such theories argue that the raw data is the true measurement; they treat the term “adjusted” like a synonym for “fudged.”

Peter Thorne, a scientist at Maynooth University in Ireland who has worked with all sorts of global temperature datasets over his career, disagrees. “Find me a scientist who’s involved in making measurements who says the original measurements are perfect, as are. It doesn’t exist,” he told Ars. “It’s beyond a doubt that we have to—have to—do some analysis. We can’t just take the data as a given.”

Speaking of data, the latest datasets are in and 2015 is (as expected) officially the hottest year on record. It's the first year to hit 1°C above levels of the late 1800s. And to upend the inevitable backlash that news will receive (*spoiler alert*), using all the raw data without performing any analysis would actually produce the appearance of more warming since the start of records in the late 1800s.

We're just taking the temperature—how hard can it be?

So how do scientists build datasets that track the temperature of the entire globe? That story is defined by problems. On land, our data comes from weather stations, and there’s a reason they are called weather stations rather than climate stations. They were built, operated, and maintained only to monitor daily weather, not to track gradual trends over decades. Lots of changes that can muck up the long-term record, like moving the weather station or swapping out its instruments, were made without hesitation in the past. Such actions simply didn’t matter for weather measurements.

The impacts of those changes are mixed in with the climate signal you’re after. And knowing that, it’s hard to argue that you shouldn’t work to remove the non-climatic factors. In fact, removing these sorts of background influences is a common task in science. As an incredibly simple example, chemists subtract the mass of the dish when measuring out material. For a more complicated one, we can look at water levels in groundwater wells. Automatic measurements are frequently collected using a pressure sensor suspended below the water level. Because the sensor feels changes in atmospheric pressure as well as water level, a second device near the top of the well just measures atmospheric pressure so daily weather changes can be subtracted out.

If you don't make these sorts of adjustments, you’d simply be stuck using a record you know is wrong.
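To make that well-pressure correction concrete, here is a minimal sketch of the subtraction (my own illustration, not code from the article; the function name and the numbers are invented):

```python
# The submerged sensor feels the water column plus the atmosphere; the
# barometer at the top of the well measures the atmosphere alone.
# Subtracting the two removes the daily weather signal from the record.
KPA_PER_M = 9.81  # one metre of fresh water exerts roughly 9.81 kPa

def water_level_m(total_kpa, baro_kpa):
    """Height of the water column above the submerged sensor, in metres."""
    return (total_kpa - baro_kpa) / KPA_PER_M

# A passing high-pressure system raises both readings by the same amount...
print(water_level_m(120.3, 101.3))  # ~1.94 m
# ...so the corrected water level stays put, as it should.
print(water_level_m(121.0, 102.0))  # ~1.94 m
```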

You can continue reading at Ars Technica. The article further explains several reasons for inhomogeneities in the temperature observations, how they are removed with statistical homogenization methods, how these methods have been validated, the uncertainties in the sea surface temperature and in satellite estimates of the tropospheric temperature, and why those are so hard to get right. It finishes with Lamar Smith's harassment campaign against NOAA over a minimal update.
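For readers who wonder what such a statistical homogenization method does at its core, here is a deliberately crude sketch (my own toy version, not the article's and not an operational algorithm such as SNHT): compare the candidate station with a well-correlated neighbour so that the regional climate cancels in the difference series, then search that difference series for a step change.

```python
import numpy as np

def find_break(candidate, neighbour):
    """Return (index, size) of the most likely non-climatic jump.

    The shared regional climate largely cancels in candidate - neighbour,
    so a station change shows up as a step in the difference series."""
    diff = np.asarray(candidate) - np.asarray(neighbour)
    best_k, best_t = None, 0.0
    for k in range(5, len(diff) - 5):  # keep at least 5 values on each side
        a, b = diff[:k], diff[k:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        t = abs(a.mean() - b.mean()) / se  # two-sample t-like statistic
        if t > best_t:
            best_k, best_t = k, t
    return best_k, diff[best_k:].mean() - diff[:best_k].mean()

# Synthetic test: both stations share the climate, but the candidate gets
# a 0.5 degree jump in year 30 (say, a screen replacement).
rng = np.random.default_rng(0)
climate = rng.normal(0.0, 0.5, 60)
neighbour = climate + rng.normal(0.0, 0.1, 60)
candidate = climate + rng.normal(0.0, 0.1, 60)
candidate[30:] += 0.5
print(find_break(candidate, neighbour))  # ~(30, ~0.5)
```

Operational methods add a significance test for the break statistic, handle multiple breaks and multiple neighbours, and then adjust the series; the validation studies mentioned in the article benchmark exactly that machinery.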

Enjoy and bookmark, it is a very thorough and accessible overview.

Saturday, 16 January 2016

The transition to automatic weather stations. We’d better study it now.

This is a POST post.

The Parallel Observations Science Team (POST) is looking across the world for climate records which simultaneously measure temperature, precipitation and other climate variables with a conventional sensor (for example, a thermometer) and modern automatic equipment. You may wonder why we take the painstaking effort of locating and studying these records. The answer is easy: the transition from manual to automated records has an effect on climate series and the analyses we perform on them.

In the last decades we have seen a major transition of the climate monitoring networks from conventional manual observations to automatic weather stations. It is recommended to compare the old and new instruments with side-by-side measurements, which we call parallel measurements, before the substitution takes effect. Climatologists have also set up many longer experimental parallel measurements. These tell us that in most cases the two systems do not measure the same temperature or collect the same amount of precipitation. A temperature difference is not only due to the change of the sensor itself: automatic weather stations often use a different, much smaller screen to protect the sensor from the sun and the weather, and their introduction is often accompanied by a change in location and siting quality.

From studies of single temperature networks that made such a transition we know that it can cause large jumps; the observed temperatures at a station can go up or down by as much as 1°C. Thus potentially this transition can bias temperature trends considerably. We are now trying to build a global dataset with parallel measurements to be able to quantify how much the transition to automatic weather stations influences the global mean temperature estimates used to study global warming.

Temperature

This study is led by Enric Aguilar and the preliminary results below were presented at the Data Management Workshop in Saint Gallen, Switzerland last November. We are still in the process of building up our dataset. Up to now we have data from 10 countries: Argentina (9 pairs), Australia (13), Brazil (4), Israel (5), Kyrgyzstan (1), Peru (31), Slovenia (3), Spain (46), Sweden (8), USA (6); see map below.


Global map in which we only display the 10 countries for which we have data. The left map is for the maximum temperature (TX) and the right for the minimum temperature (TN). Blue dots mean that the automatic weather station (AWS) measures cooler temperatures than the conventional observation; red dots mean the AWS is warmer. The size indicates how large the difference is; open circles denote statistically non-significant differences.

The impact of automation can be better assessed in the box plots below.


The biases of the individual pairs are shown as dots and summarized per country with box plots. For countries with only a few pairs the box plots should be taken with a grain of salt. Negative values mean that the automatic weather stations are cooler. We have data for Argentina (AR), Australia (AU), Brazil (BR), Spain (ES), Israel (IL), Kyrgyzstan (KG), Peru (PE), Sweden (SE), Slovenia (SI) and the USA (US). Panels show the maximum temperature (TX), minimum temperature (TN), mean temperature (TM) and diurnal temperature range (DTR, TX minus TN).

On average there are no real biases in this dataset. However, if you remove Peru (PE), the differences in the mean temperature are either small or negative. That one country is so important shows that our dataset is currently too small.
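To show what is behind the dots and significance circles above, here is a minimal sketch of the per-pair computation (my own simplification; a serious analysis would also handle seasonality, autocorrelation and data gaps):

```python
import numpy as np

def pair_bias(aws, manual):
    """Mean bias (AWS minus conventional) of one parallel pair, plus a
    paired t-statistic; roughly, |t| > 2 corresponds to the filled
    (significant) dots on the map."""
    d = np.asarray(aws) - np.asarray(manual)  # paired daily differences
    t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
    return d.mean(), t

# Synthetic pair: one year of daily maxima, AWS reading 0.2 degrees cooler.
rng = np.random.default_rng(1)
manual = 15 + rng.normal(0, 3, 365)
aws = manual - 0.2 + rng.normal(0, 0.3, 365)
print(pair_bias(aws, manual))  # bias ~ -0.2, |t| well above 2: significant
```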

To interpret the results we need to look at the main causes of the differences. Important reasons are that Stevenson screens can heat up in the sun on calm days, while automatic sensors are sometimes mechanically ventilated. The automatic sensors are, furthermore, typically smaller and thus less affected by direct radiation hitting them than thermometers. On the other hand, in the case of conventional observations, maintenance of the Stevenson screens (cleaning and painting) and detection of other problems may be easier because they have to be visited daily. There are concerns that plastic screens become greyer and heat up more in the sun. Stevenson screens have more thermal inertia: they smooth fast temperature fluctuations and will thus show lower highs and higher lows.

The location also often changes with the installation of automatic weather stations. The USA was one of the early adopters: the US National Weather Service installed analogue semi-automatic equipment (MMTS) that did not allow for long cables between the sensor and the display inside a building. Furthermore, the technicians only had one day per station, and as a consequence many of the MMTS systems were badly sited. Nowadays technology has advanced a lot and made it easier to find good sites for weather stations. This is maybe even easier now than it used to be for manual observations: modern communication is digital and, if necessary, uses radio, making distance much less of a concern. The instruments can be powered by batteries, solar or wind, which frees them from the electricity grid. Some instruments store years of data and need just batteries.

In the analysis we thus need to consider whether the automatic sensors are placed in Stevenson screens and whether the automatic weather station is at the same location. Where the screen and the location did not change (Israel and Slovenia), the temperature jumps are small. Whether the automatic weather station reduces radiation errors by mechanical ventilation is likely also important. Because of these different categories, the number of datasets needed to get a good global estimate becomes larger. Up to now, these factors seem to be more important than the climate.

Precipitation

For most of these countries we also have parallel measurements for precipitation. The figure below was made by Petr Stepanek, who leads this part of the study.


Boxplots of the differences in monthly precipitation sums due to automation. Positive values mean that the manual observations record more precipitation. Countries are: Argentina (AG), Brazil (BR), the Czech Republic (CZ), Israel (IS), Kyrgyzstan (KG), Peru (PE), Sweden (SN), Spain (SP) and the USA (US). The width of the boxplots corresponds to the size of the given dataset.

For most countries the automatic weather stations record less precipitation. This is mainly due to smaller recorded amounts of snow in winter. Observers often put a snow cross in the gauge in winter to make it harder for snow to blow out again, and they simply melt the snow gathered in the pot to measure the precipitation. Early automatic weather stations, in contrast, did not work well with snow, and sticky snow piling up in the gauge may not be noticed. These problems can be solved by heating the gauge, but unfortunately the heating can also increase the amount of precipitation that evaporates before it is registered. Such problems are known, and more modern rain gauges use different designs and likely have a smaller bias again.

Database with parallel data

The above results are very preliminary, but we wanted to show the promise of a global dataset with parallel data for studying biases in the climate record due to changes in observing practices. To proceed we need more datasets and better information on how the measurements were performed to make this study more solid.

In the future we also want to look more at how the variability around the mean is changing. We expect that changes in monitoring practices have a strong influence on the tails of the distribution and thus on estimates of changes in extreme weather. Parallel data offer a unique opportunity to study this otherwise hard problem.
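As a toy illustration of why the tails deserve separate attention (my own sketch; the actual study would use standard extreme-value indices), two parallel series can agree in the mean while the faster-responding automatic sensor still records warmer extremes:

```python
import numpy as np

def tail_vs_mean_bias(aws, manual, q=0.95):
    """Compare the automation bias in a high quantile with the bias in
    the mean; a mean-only adjustment would miss the tail difference."""
    aws, manual = np.asarray(aws), np.asarray(manual)
    return (np.quantile(aws, q) - np.quantile(manual, q),
            aws.mean() - manual.mean())

# Ten years of synthetic daily maxima: same mean, but the fast sensor
# catches short-lived peaks that a sluggish screen smooths away.
rng = np.random.default_rng(2)
manual = 25 + rng.normal(0, 4, 3650)
aws = manual + rng.normal(0, 1.5, 3650)  # extra fast fluctuations
print(tail_vs_mean_bias(aws, manual))  # tail bias ~ +0.4, mean bias ~ 0
```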

Most of the current data comes from Europe and South America. If you know of any parallel datasets, especially from Africa or Asia, please let us know. Up to now, the main difficulty for this study has been finding the people who know where the data is. Fortunately, data policies do not seem to be a problem: parallel data is mostly seen as experimental data. In some cases we "only" got a few years of data from a longer dataset, which would otherwise be seen as operational data.

We would like to publish the dataset after publishing our papers about it. Again, this does not seem to lead to larger problems. Sometimes people prefer to first publish an article themselves, which causes some delays, and sometimes we cannot publish the daily data itself, but "only" monthly averages and extreme value indices. This makes the results less transparent, but these summary values contain most of the information.

Knowledge of the observing practices is very important in the analysis. Thus everyone who contributes data is invited to help in the analysis of the data and co-author our first paper(s). Our studies are focused on global results, but we will also provide everyone with results for their own dataset to gain a better insight into their data.

Most climate scientists would agree that it is important to understand the impact of automation on our records. So does the World Meteorological Organization. In case it helps you to convince your boss: the Parallel Observations Science Team is part of the International Surface Temperature Initiative (ISTI). It is endorsed by the Task Team on Homogenization (TT-HOM) of the World Meteorological Organization (WMO).

We expect that this endorsement and our efforts to raise awareness of our goals and their importance will help us locate and study parallel observations from other parts of the world, especially Africa and Asia. We also expect to be able to get more data from Europe; the WMO's regional association for Europe has designated the transition to automatic weather stations as one of its priorities and is helping us get access to more data. We want to have datasets from all over the world to be able to assess whether the station settings (sensors, screens, data quality, etc.) have an impact, but also to understand whether different climates produce different biases.

If you would like to collaborate or have information, please contact me.



Related reading

The ISTI has made a series of brochures on POST in English, Spanish, French and German. If anyone is able to make further translations, that would be highly appreciated.

Parallel Observations Science Team of the International Surface Temperature Initiative.

Irrigation and paint as reasons for a cooling bias

Temperature trend biases due to urbanization and siting quality changes

Changes in screen design leading to temperature trend biases

Temperature bias from the village heat island

Thursday, 7 January 2016

Interesting EGU sessions and conferences in 2016

Just a quick post to advertise some interesting (new) EGU sessions and conferences this year.

At EGU there will be four interesting sessions that fit the topic of this blog. The abstract deadline is already next Wednesday, the 13th of January, at 13:00 CET. The conference takes place in mid-April in Vienna, Austria.

Climate Data Homogenization and Climate Trend and Variability Assessment
The main session for all things homogenization.

Taking the temperature of Earth: Variability, trends and applications of observed surface temperature data across all domains of Earth's surface
On measuring temperatures: the surface itself (skin temperature), surface air over land, sea surface temperature, marine air temperature. With a large range of observational methods, including satellites.

Transition into the Anthropocene - causes of climate change in the 19th and 20th century
A session on climate change in the very challenging early instrumental period, where the variability of station observations has large uncertainties. This session is new, as far as I can see. But EGU is big; I hope I did not miss it last year.

Historical Climatology
Even further back in time is the session on historical climatology where people mainly look at non-instrumental evidence of climatic changes and their importance for human society.

This year there will also be an EMS conference. Like every second year, in 2016 it will be combined with the European Conference on Applied Climatology (ECAC) and thus has more climate goodies than average. This year it takes place in mid-September in Trieste, Italy.

The main session for fans of homogenization is: Climate monitoring; data rescue, management, quality and homogenization.

Fans of variability may like the session on Spatial Climatology.

A conference I really enjoyed the last two times I was there is the International Meeting on Statistical Climatology. Its audience is half statisticians and half climatologists. Everyone loves beautiful statistical and methodological questions. Great!! This year it will be in June in Canmore, Alberta, Canada.

It also has a session on homogenization: Climate data homogenization and climate trends/variability assessment.

If I missed any interesting sessions or conferences do let us know in the comments (also if it is your own).




Descriptions

Climate Data Homogenization and Climate Trend and Variability Assessment
Convener: Xiaolan L. Wang
Co-Conveners: Enric Aguilar, Rob Roebeling, and Petr Stepanek

The accuracy and homogeneity of climate data are indispensable for many aspects of climate research. In particular, a realistic and reliable assessment of historical climate trends and variability is hardly possible without a long-term, homogeneous time series of climate data. Accurate and homogeneous climate data are also indispensable for the calculation of related statistics that are needed and used to define the state of climate and climate extremes. Unfortunately, many kinds of changes (such as instrument and/or observer changes, and changes in station location and environment, observing practices and procedure, etc.) that took place in the period of data record could cause non-climatic changes (artificial shifts) in the data time series. Such artificial shifts could have huge impacts on the results of climate analysis, especially those of climate trend analysis. Therefore, artificial changes shall be eliminated, to the extent possible, from the time series prior to its application, especially its application in climate trends assessment.

This session calls for contributions that are related to bias correction and homogenization of climate data, including bias correction and validation of various climate data from satellite observations and from GCM and RCM simulations, as well as quality control/assurance of observations of various variables in the Earth system. It also calls for contributions that use high quality, homogeneous climate data to assess climate trends and variability and to analyze climate extremes, including the use of bias-corrected GCM or RCM simulations in statistical downscaling. This session will include studies that inter-compare different techniques and/or propose new techniques/algorithms for bias-correction and homogenization of climate data, for assessing climate trends and variability and analysis of climate extremes (including all aspects of time series analysis), as well as studies that explore the applicability of techniques/algorithms to data of different temporal resolutions (annual, monthly, daily) and of different climate elements (temperature, precipitation, pressure, wind, etc) from different observing network characteristics/densities, including various satellite observing systems.



Transition into the Anthropocene - causes of climate change in the 19th and 20th century
Convener: Gabriele Hegerl
Co-Convener: Stefan Brönnimann

This session focuses on the long view of climate variability and change as available from long records, reconstructions, reanalysis efforts and modelling, and we welcome analysis of temperature, precipitation, extreme events, sea ice, and ocean. Contributions are welcome that evaluate changes from historical data on the scale of large regions to the globe, analyse particular unusual climatic events, estimate interdecadal climate variability and climate system properties from long records, attribute causes to early observed changes and model or data assimilate this period. We anticipate that bringing observational, modelling and analysis results together will improve understanding and prediction of the interplay of climate variability and change.



Taking the temperature of Earth: Variability, trends and applications of observed surface temperature data across all domains of Earth's surface
See also their homepage.
Convener: Darren Ghent
Co-Conveners: Nick Rayner, Stephan Matthiesen, Simon Hook, G.C. Hulley, Janette Bessembinder


Surface temperature (ST) is a critical variable for studying the energy and water balances of the Earth surface, and underpinning many aspects of climate research and services. The overarching motivation for this session is the need for better understanding of in-situ measurements and satellite observations to quantify ST. The term "surface temperature" encompasses several distinct temperatures that differently characterize even a single place and time on Earth’s surface, as well as encompassing different domains of Earth’s surface (surface air, sea, land, lakes and ice). Different surface temperatures play inter-connected yet distinct roles in the Earth’s surface system, and are observed with different complementary techniques.

The EarthTemp network was established in 2012 to stimulate new international collaboration in measuring and better understanding ST across all domains of the Earth’s surface including air, land, sea, lakes, ice. New and existing international projects and products have evolved from network collaboration (e.g. ESA Climate Change Initiative SST project, EUSTACE, FIDUCEO, International Surface Temperature Initiative, ESA GlobTemperature, HadISST, CRUTEM and HadCRUT). Knowledge gained during this EarthTemp session will be documented and published as part of the user requirements exercises for such projects and will thus benefit the wider community. A focus of this session is the use of ST's for assessing variability and long-term trends in the Earth system. In addition there will be opportunity for users of surface temperature over any surface of Earth on all space and timescales to showcase their use of the data and their results, to learn from each others' practice and to communicate their needs for improvements to developers of surface temperature products. Suggested contributions can include, but are not limited to, topics like:

* The application of ST in climate science
* How to improve remote sensing of ST in different environments
* Challenges from changes of in-situ observing networks over time
* Current understanding of how different types of ST inter-relate
* Nature of errors and uncertainties in ST observations
* Mutual/integrated quality control between satellite and in-situ observing systems.
* What do users of surface temperature data require in practical applications?



Historical Climatology
Convener: Stefan Grab
Co-Conveners: Rudolf Brazdil, David Nash, Georgina Endfield


Historical Climatology has gained momentum and worldwide recognition over the last couple of decades, particularly in the light of rapid global climate and environmental change. It is now well recognized that in order to better project future changes and be prepared for those changes, one should look to, and learn from, the past. To this end, historical documentary sources, in many cases spanning back several hundred years and far beyond instrumental weather records, offer detailed descriptive (qualitative) accounts on past weather and climate. Such documentary sources typically include, amongst others: weather diaries, ship log books, missionary reports and letters, historical newspapers, chronicles, accounting and government documents etc. Such proxies have particular advantages in that they in most cases offer details on the specific timing and placement of an event. In addition, valuable insights may be gained on environmental and anthropogenic consequences and responses to specific weather events and climate anomaly. Similarly, oral history records, based on people’s personal accounts and experiences of past weather offer important insights on perceptions of climate change, and details on past and sometimes ‘forgotten’ weather events and their consequences.

This session welcomes all studies using documentary, historical instrumental and oral history based approaches to: produce historical climate chronologies (multi-decadal to centennial scale), gain insights into past climatic periods or specific weather events, detail environmental and human consequences to past climate and weather, share people’s experiences and perceptions of past climate, weather events and climate change, and reflect on lessons learnt (coping and adaptation) from past climate and weather events. Whilst welcoming contributions from all global regions, we particularly appeal for contributions from Asia and the Middle East.



Climate monitoring; data rescue, management, quality and homogenization
Convener: Manola Brunet-India
Co-Conveners: Hermann Mächel, Victor Venema, Ingeborg Auer, Dan Hollis


Robust and reliable climatic studies, particularly those assessments dealing with climate variability and change, greatly depend on availability and accessibility to high-quality/high-resolution and long-term instrumental climate data. At present, a restricted availability and accessibility to long-term and high-quality climate records and datasets is still limiting our ability to better understand, detect, predict and respond to climate variability and change at lower spatial scales than global. In addition, the need for providing reliable, opportune and timely climate services deeply relies on the availability and accessibility to high-quality and high-resolution climate data, which also requires further research and innovative applications in the areas of data rescue techniques and procedures, data management systems, climate monitoring, climate time-series quality control and homogenisation.

In this session, we welcome contributions (oral and poster) in the following major topics:

• Climate monitoring, including early warning systems and improvements in the quality of the observational meteorological networks

• More efficient transfer of the data rescued into the digital format by means of improving the current state-of-the-art on image enhancement, image segmentation and post-correction techniques, innovating on adaptive Optical Character Recognition and Speech Recognition technologies and their application to transfer data, defining best practices about the operational context for digitisation, improving techniques for inventorying, organising, identifying and validating the data rescued, exploring crowd-sourcing approaches or engaging citizen scientist volunteers, conserving, imaging, inventorying and archiving historical documents containing weather records

• Climate data and metadata processing, including climate data flow management systems, from improved database models to better data extraction, development of relational metadata databases and data exchange platforms and networks interoperability

• Innovative, improved and extended climate data quality controls (QC), including both near real-time and time-series QCs: from gross-errors and tolerance checks to temporal and spatial coherence tests, statistical derivation and machine learning of QC rules, and extending tailored QC application to monthly, daily and sub-daily data and to all essential climate variables

• Improvements to the current state-of-the-art of climate data homogeneity and homogenisation methods, including methods intercomparison and evaluation, along with other topics such as climate time-series inhomogeneities detection and correction techniques/algorithms (either absolute or relative approaches), using parallel measurements to study inhomogeneities and extending approaches to detect/adjust monthly and, especially, daily and sub-daily time-series and to homogenise all essential climate variables

• Fostering evaluation of the uncertainty budget in reconstructed time-series, including the influence of the various data processes steps, and analytical work and numerical estimates using realistic benchmarking datasets



Spatial Climatology
Convener: Ole Einar Tveito
Co-Conveners: Mojca Dolinar, Christoph Frei


Gridded representation of past and future weather and climate with high spatial and temporal resolution is getting more and more important for assessing the variability of and impact of weather and climate on various environmental and social phenomena. They are also indispensable as validation and calibration input for climate models. This increased demand requires new efficient methods and approaches for estimating spatially distributed climate data as well as new efficient applications for managing and analyzing climatological and meteorological information at different temporal and spatial scales. This session addresses topics related to generation and application of gridded climate data with an emphasis on statistical methods for spatial analysis and spatial interpolation applied on observational data.

An important aspect in this respect is the creation and further use of reference climatologies. The new figures calculated for the latest normal period 1981-2010 are now recommended as reference period for assessments of regional and local climatologies. For this period new observation types (e.g. satellite and radar data) are available, and contributions taking advantage of multiple data sources are encouraged.

Spatial analysis using e.g. GIS is a very strong tool for visualizing and disseminating climate information. Examples showing developments, application and products of such analysis for climate services are particularly welcome.

The session intends to bring together experts, scientists and other interested people analyzing spatio-temporal characteristics of climatological elements, including spatial interpolation and GIS modeling within meteorology, climatology and other related environmental sciences.



Climate data homogenization and climate trends/variability assessment
Convener: Xiaolan Wang, Lucie Vincent, Markus Donat and Lisa Alexander

The accuracy and homogeneity of climate data are indispensable for many aspects of climate research. In particular, a realistic and reliable assessment of historical climate trends and variability is hardly possible without a long-term, homogeneous time series of climate data. Accurate and homogeneous climate data are also indispensable for the calculation of related statistics that are needed and used to define the state of climate and climate extremes. Unfortunately, many kinds of changes (such as instrument and/or observer changes, and changes in station location and exposure, observing practices and procedure, etc.) that took place in the period of data record could cause non-climatic sudden changes (artificial shifts) in the data time series. Such artificial shifts could have huge impacts on the results of climate analysis, especially those of climate trend analysis. Therefore, artificial changes shall be eliminated, to the extent possible, from the time series prior to its application, especially its application in climate trends assessment.

This session calls for contributions that are related to bias correction and homogenization of climate data, including bias correction and validation of various climate data from satellite observations and from GCM and RCM simulations, as well as quality control/assurance of observations of various variables in the Earth system. It also calls for contributions that use high quality, homogeneous climate data to assess climate trends and variability and to analyze climate extremes, including the use of bias-corrected GCM or RCM simulations in statistical downscaling.

Sunday, 3 January 2016

Harassment and that powerful male PI

After decades of sexual harassment astronomer Geoffrey Marcy has been fired. What surprised me reading many comments was how invincible he was thought to be. I am not nearly as powerful, but can maybe still offer a view from the other side. My impression is naturally subjective and I work in a different field and in another country, but I wonder whether Geoffrey Marcy was really that invincible. Unfortunately, people having that impression is partially a self-fulfilling prophecy. Making this picture more realistic may thus help reduce harassment, which is why I wanted to write about this.

Your scientific network is crucial

This impression of invincibility may partially stem from the lone-genius syndrome in the media, which is not even right for Einstein. To make a story more pleasant to read, journalists like to personalize everything and simplify their story enormously, which leaves little space for history and multiple contributors.

More realistic is that it is very hard to do science in isolation. Feedback is very important. If you only get feedback after publication, you will progress very slowly. Collaboration is also important to get new skills into a study. Understanding new methods and datasets takes a long time. Without collaboration you would have to invest a lot of time and still have a high risk of making rookie mistakes. Without collaboration your productivity and quality will be much lower.

The peer review of research proposals and scientific articles also makes it important to keep friendly relationships with most colleagues. A research proposal always has weaknesses; if everything were clear already, it would not be a science project. Even if not, a reviewer can always claim a proposal is too ambitious or not ambitious enough. Judging a proposal to be "very good" rather than "excellent" is always possible and reduces the funding probability a lot, given the low percentage that is funded.

Negotiating conflicts

Due to the importance of collaboration and good relationships open conflicts are rare. There are open debates about specific scientific questions; questions that have an answer and enhance your scientific reputation if you are right. Vaguer questions that do not have an answer are not worth an open conflict that may damage relationships.

That most scientists are normally on talking terms with each other may give outsiders and young scientists the feeling that the seniors are one solid block. This is certainly not the case: from the inside you can see the cracks. Most conflicts between groups exist for long-forgotten historical reasons; most conflicts between people arise because their personalities do not match.

(Hardly ever do you see conflicts for political reasons. I just sent an email to Iran and got a friendly answer. My colleague from Serbia still talks to me after we bombed his country based on flimsy evidence. As far as I can judge, the climate "debate" is nearly never a source of conflict; the quality of someone's work naturally does influence collaboration decisions.)

Conflicts in science are not resolved by a strong man and presented at a press conference, rather it is a continual collective negotiation. That makes the conflicts and the influence of evidence of harassment less visible.

Two examples. Suppose I had a preliminary result I was not yet sure about, presented it at a conference to get feedback, and had thus added a no-tweet sign to my talk. In this hypothetical case, our friend of the freedom to tweet, Gavin Schmidt, could naturally tweet about it anyway. I would then see that as a breach of trust and would only collaborate with Gavin if really no other scientist had the competencies I needed for a specific project. Gavin likely could not care less about a dwarf from an adjacent field. More important is how the scientific community sees the situation. If they see my request as reasonable, the tweet would hurt Gavin's reputation. For something as small as this he will not lose his post as director of GISS, but people might be a little more reluctant to collaborate and less willing to share preliminary results with him lest he tweet them. His closest friends may communicate that to Gavin in private; most will just draw their conclusions and not say anything. That is what makes those close friends so important: it is better to know.

That Gavin has tweeted that he does not agree with a conference presentation of Peter Wadhams may cool their relationship. However, there was no breach of trust because Wadhams had said similar things in public before. Thus Wadhams’ unreasonable complaints about Gavin’s tweet probably only lead to further reputation losses for Wadhams.

One of the famous quotes from the stolen emails of Climategate is Phil Jones writing that he wants to keep two articles out of the IPCC report even if he has to redefine what peer reviewed means. What the mitigation sceptics normally do not mention, because it does not fit their narrative, is that these two articles were referenced and discussed in the IPCC report.

Given that there are no further Climategate emails on this topic, I assume that Phil himself thought better of it later. In that case there is no loss of reputation; scientists are also humans, he corrected his initial error and no damage was done. If not, his colleagues made clear what they thought of this and Phil decided that it was not worth a loss of reputation. If the articles in question were really bad, the loss of reputation would have been limited; if they were halfway decent, the loss of reputation would have been large.

Summarizing: while visibly maybe nothing seems to happen and it is difficult to fire faculty members, spreading evidence of harassment does result in reputation loss, in reduced collaboration and productivity, and in a reduced ability to do the research you would like to do.

Young and female scientists

Young scientists and female scientists are also more important than they may realise. When you are young, doing science centres around writing articles and, somewhat later, on getting funding for your own (small) project. The next step in a scientific career is leading larger projects. At least in Germany and the EU, it is impossible to get funding for such projects without involving female scientists. If you lose the support of (nearly) all female colleagues, your career stops right there.

The final step in a scientific career is getting your people into permanent positions. This stabilises your power and somehow professors tend to think that their topic is so important that it should grow at the expense of others. For this, and for the successful completion of scientific projects, you need to be a good talent scout and find young people you can build up.

When I studied physics in Groningen, rumour had it that there were so many professors that they were fighting to be allowed to give lectures and thus get access to the students. That way you can interest students in your topic and assess who the good ones are. Companies also often pay for professorships to be able to search for talent and lure the good students to work for them.

Binding good students to your group with interesting bachelor and master projects and with student assistant positions gives professors the chance to interest the good ones in pursuing a PhD in their group. The best of those will hopefully become important scientists and grow the field.

Gossip and punishing bad behaviour

When I was a PhD student and two American scientists misbehaved, I made sure that everyone knew, especially all Americans I knew. If they did it to you, they likely did it to others. Multiple similar bad stories can do real damage to a scientist's reputation and their work.

Even if your case is not strong or big enough to go to the police, talk to other female scientists. There are bound to be more victims, which makes the case stronger and makes it more important that the harassment stops. Talk to male scientists too; you may be surprised how many will support you. I know of one multiple-harassment case. It was too small to be a legal case, but once the leaders of that community heard about it, they talked to the guy and made clear that any future complaint would have severe consequences. I did not hear of new complaints.

Fighting back is necessary; make sure people know what happened. If bad behaviour goes unpunished, it will proliferate. Economic games have shown that collaborative behaviour only prevails if cooperative people are willing to punish bad behaviour. They have also shown that people from all over the world are willing to punish bad behaviour at a cost to themselves, and derive pleasure from doing so. Gossip and punishing bad behaviour are the two main pillars of our human civilization.

Takeaways

The media likes to portray scientists as solitary geniuses. In reality scientists need to be good collaborators to become heroes in their field. Many young scientists think that senior scientists are one big pack and may feel that it would not help to talk to senior scientists about the poor behaviour of another. But this is not true either: there are conflicts, they are just not very visible.

It would be good if (female) scientists spoke up about (sexual) harassment at work; it likely does the harasser more damage than they think and see. I know actually doing this is difficult, and there are unfortunately likely repercussions if you do so openly. In addition, much should change in the organization of science and in society to reduce harassment. I just want to say: you have more power than you may think.



Related reading

How to end sexual harassment in astronomy, on the organizational changes that are needed.

Nature: Scientific groups revisit sexual-harassment policies. Interesting: the American Geophysical Union has not received any harassment complaints since it started tracking them 5 years ago. It could be underreporting; I hope it means that the situation is not as bad in the geosciences.

Nature: Science and sexism: In the eye of the Twitterstorm. Social media is shaking up how scientists talk about gender issues.

Astropixy: How not to "deal with it". An example of harassment by a non-famous astronomer; makes clear that unfortunately getting rid of a faculty member is hard.

Nature Blog. The faculty series: Learning to collaborate. "Collaborations are the key to success in modern scientific research, says Michelle Ma."

The ultimatum game, a key experiment showing intrinsic fairness and altruism among strangers

*PI: Principal Investigator. The leader of a scientific project and often of a group.