Tuesday, 31 March 2015

Temperature trend biases due to urbanization and siting quality changes

The temperature in urban areas can be several degrees higher than in their surroundings due to the Urban Heat Island (UHI). The additional heat stress is an important medical problem and is studied by biometeorologists. Many urban geographers study the UHI and ways to reduce the heat stress. Their work suggests that the UHI is due to a reduction in evaporation from bare soil and vegetation in city centers. The solar energy that is not used for evaporation goes into warming the air. In the case of high-rise buildings there are, in addition, more surfaces and thus more storage of heat in the buildings during the day, which is released during the night. High-rise buildings also reduce radiative (infrared) cooling at night because the surface sees a smaller part of the cold sky. Recent work suggests that cities also influence convection (often visible as towering cumulus clouds).

To study changes in the temperature, a constant UHI bias is no problem. The problem is an increase in urbanization. For some city stations this can be clearly seen in a comparison with nearby rural stations. A clear example is the station in Tokyo, where the temperature has risen faster since 1920 than at surrounding stations.



Scientists like to make a strong case; thus, before they confidently stated that the global temperature is increasing, they naturally studied the influence of urbanization in detail. An early example is Joseph Kincer of the US Weather Bureau (HT @GuyCallendar), who studied the influence of growing cities in 1933.

While urbanization can be clearly seen for some stations, the effect on the global mean temperature is small. The Fourth Assessment Report of the IPCC states:
Studies that have looked at hemispheric and global scales conclude that any urban-related trend is an order of magnitude smaller than decadal and longer time-scale trends evident in the series (e.g., Jones et al., 1990; Peterson et al., 1999). This result could partly be attributed to the omission from the gridded data set of a small number of sites (<1%) with clear urban-related warming trends. ... Accordingly, this assessment adds the same level of urban warming uncertainty as in the TAR: 0.006°C per decade since 1900 for land, and 0.002°C per decade since 1900 for blended land with ocean, as ocean UHI is zero.
In addition to the removal of urban stations, the influence of urbanization is reduced by the statistical removal of non-climatic changes (homogenization). The most overlooked aspect, however, may be that urban stations often do not stay at the same location: they are relocated when the surroundings are seen as no longer suited, when the meteorological offices simply cannot pay the rent any more, or when the offices are moved to airports to help with air traffic safety.

Thus urbanization leads not only to a gradual increase in temperature, but also to downward jumps. Such a non-climatic change often looks like an (irregular) sawtooth. This can lead to artificial trends in both directions; see the sketch below. In the end, what counts is how strong the UHI was in the beginning and how strong it is now.
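To make the sawtooth argument concrete, here is a minimal numeric sketch. All numbers are made up for illustration: a hypothetical UHI that grows steadily, one relocation jump, and no real climate trend at all. The sign of the resulting artificial trend depends on the size of the jump relative to the accumulated UHI:

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1900, 2001)

def sawtooth_trend(jump):
    """Linear trend (°C/century) of a series with a growing UHI and one relocation."""
    uhi = 0.02 * (years - 1900)          # hypothetical UHI grows by 0.02 °C per year
    uhi[years >= 1950] -= jump           # relocation out of town: a downward jump
    observed = uhi + rng.normal(0.0, 0.2, years.size)  # no real climate trend
    return np.polyfit(years, observed, 1)[0] * 100

print(f"small jump (0.5 °C): {sawtooth_trend(0.5):+.2f} °C/century (artificial warming)")
print(f"large jump (3.0 °C): {sawtooth_trend(3.0):+.2f} °C/century (artificial cooling)")
```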



The first post of this series was about a new study showing that even villages have a small "urban heat island". For a village in Sweden (Haparanda) and one in Germany (Geisenheim), the study found that the current location of the weather station is about 0.5°C (1°F) colder than the village center. For cities you would expect a larger effect.

Around the Second World War, many city stations were moved to airports, which largely takes the stations out of the urban heat island. Comparing the temperature trend of stations that are currently at airports with non-airport stations, a number of people have found that this effect is only about 0.1°C, which would suggest that it is not important for the entire dataset.

This 0.1°C sounds rather small to me. If urban heat islands can reach multiple degrees, and people worry about small increases in the urban heat island effect, then taking a station (mostly) out of the heat island should lead to a strong cooling. Furthermore, cities often lie in valleys or on coasts, and the airports built later are thus often at higher, and therefore cooler, locations.

A preliminary study by citizen scientist Caerbannog suggests that airport relocations can explain a considerable part of the adjustments. These calculations need to be performed more carefully, and we need to understand why the apparently small difference for airport stations translates into a considerable effect on the global mean. A more detailed scientific study on relocations to airports is unfortunately still missing.

The period in which the bias in GHCNv3 increases also corresponds to the period around the Second World War in which many stations were relocated to airports; see the figure below. Finally, the fact that the temperature trend bias in the raw GHCNv3 data is larger than the bias in the Berkeley Earth dataset also suggests that airport relocations could be important: airport stations are overrepresented in GHCNv3.



With some colleagues, I have started the Parallel Observations Science Team (POST) in the International Surface Temperature Initiative. Some people are interested in using parallel measurements (simultaneous measurements in cities and at airports) to study the influence of these relocations. There seems to be more data than one may think. We are, however, still looking for a lead author (hint).



In the 19th century and earlier, thermometers were expensive scientific instruments and meteorological observations were made by educated people: apothecaries, teachers, clergymen, and so on. These people lived in the city. Many stations have subsequently been moved to better and colder locations. Whether urbanization produces a cold or a warm bias is thus an empirical and historical question. The evidence seems to show that on average the effect is small. It would be valuable if the effects of urbanization and relocations were studied together. That may lead to an understanding of this paradox.



Related posts

Changes in screen design leading to temperature trend biases

Temperature bias from the village heat island

Climatologists have manipulated data to REDUCE global warming

Homogenisation of monthly and annual data from surface stations

Sunday, 8 March 2015

How can the pause be both ‘false’ and caused by something?

Judith Curry asked Michael Mann:
"How can the pause be both ‘false’ and caused by something?"
She really did; if you do not believe me, here is the link to her blog post.

I have trouble seeing a contradiction, but I have seen this meme more often among the mitigation sceptics. Questions like: how can you claim there is no hiatus when so many scientists are studying it?

Let's first formulate it abstractly, then give a neutral example, before we go to the climate change case where some people suddenly become too creative.

"How can the pause be false" can be translated to: how can you claim A is still related to t?

While "caused by something" can be translated to: A is also related to X, Y, and Z.

I hope the abstract case makes clear that you can claim that A is related to X, Y and Z without claiming that A is not related to t.
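For readers who prefer numbers to letters, here is a minimal sketch with made-up data (the trend of 0.02 per step and the factor X are purely illustrative), showing that both claims can be verified at the same time:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(100.0)
X = rng.normal(0.0, 1.0, t.size)   # a short-term factor, e.g. an ENSO-like index

# A depends on BOTH the long-term driver t and the short-term factor X.
A = 0.02 * t + 0.3 * X + rng.normal(0.0, 0.1, t.size)

# Claim 1: A is still related to t (the "pause" is not a change in the trend).
fit = np.polyfit(t, A, 1)
print(f"long-term trend: {fit[0]:.3f} per step (true value 0.02)")

# Claim 2: the deviations from that trend are related to X.
residuals = A - np.polyval(fit, t)
print(f"residuals vs X: {np.polyfit(X, residuals, 1)[0]:.3f} (true value 0.3)")
# Both claims hold at the same time: no contradiction.
```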

The neutral, I hope, example is: how can the claim that economic growth needs free markets, property rights and the rule of law be false, while economists are studying the influence of the Lehman Brothers crash on economic growth?

I know, analogies do not work in the climate "debate". Someone will always claim that they do not fit. Which is always true; that is why they are called analogies.

There is no statistically significant change in the trend. People who think they see one in the temperature signal are often just shown a small part of the data, and they overestimate the significance of short-term trends. The uncertainty of a 10-year trend is not merely 10 times as large as that of a 100-year trend: for uncorrelated year-to-year noise, the standard error of a least-squares trend scales with the length of the period to the power 3/2, so a 10-year trend is roughly 30 times more uncertain.
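A quick way to check this scaling yourself, assuming for simplicity uncorrelated year-to-year noise (real temperature series are autocorrelated, which makes short trends even less certain):

```python
import numpy as np

def trend_se(n_years, sigma=0.1):
    """Standard error of an OLS trend through n_years annual values
    with independent noise of standard deviation sigma (°C, assumed)."""
    t = np.arange(n_years)
    return sigma / np.sqrt(np.sum((t - t.mean()) ** 2))

se10, se100 = trend_se(10), trend_se(100)
print(f"10-year trend s.e.:  {se10:.4f} °C/yr")
print(f"100-year trend s.e.: {se100:.5f} °C/yr")
print(f"ratio: {se10 / se100:.0f}")   # about 32, not 10
```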

That there is no change in the temperature trend can be clearly seen in these two elegant graphs made by Tamino.




What causes these deviations from the trend line, or the deviations from the average model projections, is naturally an interesting question. It is something that climatologists used to simply call natural variability: small stuff, impossible to understand in detail.

It is a great feat that climatologists now dare to say something about these minor deviations. Remember that we had more than half a degree of warming over many decades before climatologists said with any kind of confidence that global warming is real.

Even if these daredevils turn out to be wrong, it says a lot about the quality of our modern climate monitoring capabilities, climate models and analysis tools that scientists are willing to stick their necks out and say: I think I know what might have caused these minimal deviations of a tenth, maybe two tenths, of a degree Celsius. Pretty amazing.


Friday, 27 February 2015

Stop all harassment of all scientists now

This week Greenpeace revealed that aerospace engineer Willie Soon received funding of over one million dollars from the fossil fuel industry and related organizations. However, Soon did not declare these conflicts of interest, although many scientific journals he wrote for require this. His home institution is now investigating these irregularities.

Possibly inspired by this event, Raúl M. Grijalva (Dem), Member of the U.S. House of Representatives for Arizona, has requested information on the funding of seven mitigation-sceptical scientists who have testified before the US Congress. In addition to funding, the drafts of the testimonies and any correspondence about them were also requested. The seven letters went to the employers of David Legates, John Christy, Judith Curry, Richard Lindzen, Robert Balling, Roger Pielke and Steven Hayward.

This is harassment of scientists for their politically inconvenient positions. That is wrong and should not happen. This is a violation of the freedom of research.

Judith Curry and Roger Pielke Jr. are on Twitter, and I was happy to see that basically everyone agreed that targeting scientists for political aims is wrong. There was a lot of support for them from all sides of this weird US climate "debate".


[UPDATE. The American Meteorological Society also sees these letters as wrong: "Publicly singling out specific researchers based on perspectives they have expressed and implying a failure to appropriately disclose funding sources — and thereby questioning their scientific integrity — sends a chilling message to all academic researchers... We encourage the Committee to rely on the full corpus of peer-reviewed literature on climate science as the most reliable source for knowledge and understanding that can be applied to the policy options before you."]

After these signs of solidarity, maybe this is the right moment to agree that all harassment of all scientists is wrong and to change the rules. I would personally prefer that the freedom of science be given constitutional rank everywhere, as it already has in Germany, a lesson learned from the meddling in science during the Third Reich. At the least we should change the law and stop the harassment, especially by governments and politicians.

Besides the affair around Willie Soon, the open records requests by Democratic Rep. Grijalva may also have been inspired by the plans of Oklahoma Republican Sen. Jim Inhofe and California Republican Rep. Dana Rohrabacher. They announced this week their plans to investigate the politically inconvenient temperature record of NASA-GISS. This not long after a congressional audit that kept the scientists of the National Climatic Data Center (NOAA-NCDC) from their important work.

The new investigation of GISS is all the more ironic as the GISS dataset is nowadays basically the NCDC dataset, but with a reduced trend due to an additional attempt to reduce the effect of urbanization. Probably these two mitigation-sceptical politicians do not even know that they will investigate an institution that makes the trend smaller. Just like most mitigation sceptics do not seem to know that the net effect of all adjustments for non-climatic changes is a reduction in global warming.

It is not as if this was the first harassment of inconvenient climate scientists. As the Union of Concerned Scientists writes:
Notably, the requests from Rep. Grijalva are considerably less invasive than a request made in 2005 by Rep. Joe Barton for materials from Penn State climate scientist Michael Mann. Rep. Barton’s request sought not only funding information but also data, computer code, research methods, information related to his participation in the Intergovernmental Panel on Climate Change (including report reviewers), and detailed justifications of several of his scientific calculations. The Barton requests were roundly condemned by scientists, and were part of a long history of harassment of Dr. Mann and his colleagues.

And we have not yet mentioned the report in which Republican Senator James Inhofe called for the prosecution of 17 named inconvenient climate scientists, and more scientists not named. Nor the political attack of Republican Attorney General Ken Cuccinelli on Michael Mann, which the Virginia Supreme Court halted after it had cost the university nearly $600,000 in legal fees.

The harassment is not limited to the USA. There was a campaign by mitigation sceptics to ask Phil Jones of the UK Climatic Research Unit (CRU) for the data and the contracts with the weather services of five random countries. The same Phil Jones of the politically inconvenient CRU global temperature curve, whose emails were stolen and widely distributed by the mitigation sceptics. Emails which did not contain any evidence of wrongdoing, but spreading them is an efficient way to punish Jones for his inconvenient work by hurting his professional network.

In New Zealand, Member of Parliament Rodney Hide filed 80 parliamentary questions to the maintainers of the temperature record of New Zealand. As a result the minister requested an audit by the Australian weather service, which cost the researchers several months of work. The mitigation sceptics in New Zealand went on to start a foundation for legal attacks. When the judge ordered these mitigation sceptics to pay the costs of the trial, NZ$89,000, the foundation filed for bankruptcy, leaving the taxpayers with the costs.

Open records laws

Open records laws do not help to check a scientific consensus. Scientists judge the evidence collectively, as historian of science Naomi Oreskes states. Chris Mooney, in a beautiful piece on motivated reasoning, explains that this means that individual scientists are not important:
We should trust the scientific community as a whole but not necessarily any individual scientist. Individual scientists will have many biases, to be sure. But the community of scientists contains many voices, many different agendas, many different skill sets. The more diverse, the better.
This may come as a surprise to people who get their science from blogs or the mass media. Mitigation-sceptical blogs like to single out a small number of public scientists: possibly to hide how many scientists stand behind our understanding of climate change, possibly to dissuade other scientists from speaking in public.

Journalists like to focus on people and tell stories about how single persons revolutionised entire scientific fields. Even for the most hyped examples of clearly brilliant scientists, such as Einstein and Feynman, this is not true. If other scientists had not checked the consequences of their claims, these claims would have been a cry in a deep vacuum. Journalists also tend to exaggerate the importance of single (new) scientific articles with spectacular results and thus make the huge number of less sexy articles invisible.

These journalists would like to be allowed to do almost anything for a juicy story. In the Columbia Journalism Review, a journalist complained that exemptions for science "made it difficult for journalists to look into (Penn State's) football sex abuse scandal." I fail to see why journalists should have extra rights to investigate sex scandals when they happen at public institutions. If they want more rights, then everyone should be affected: private universities, companies and journalists as well. Personally, I am not sure I want to live in such a post-privacy world.

This kind of invasion of privacy has real consequences. I notice that colleagues in countries with Freedom of Information Act (FoIA) harassment write short and formal mails, whereas they are just as talkative at conferences or on the telephone. That is only natural: when you write for the front page of the New York Times, you put in more effort to make sure that every detail is right and clear. If you have to put in that amount of effort for every mail to a colleague, you naturally write less. This hinders scholarly communication.

Mitigation sceptics who are convinced that top scientists control science and determine what is acceptable for publication should welcome strong confidentiality even more. That would allow the honest scientists to build a coalition and get rid of these supposedly dishonest scientists. More realistically, scientists need to be able to warn their colleagues about errors in the work or character of other people.

Finally, scientists are also humans and write about private things. That is important to keep networks strong. The more so because for many science is not just a profession, but a vocation. Thus boundaries between private and work can be vague.

Thus I would argue that scientific findings should be checked by scientists. That includes everyone who has invested the time to become an expert, not just the professionals. It is, however, not the role of journalists and certainly not of politicians.

Openness

There are naturally cases where the public may want to know more about what is happening at academic institutions. The funding of science is a good example. It has been shown that the funding source influences the results of medical research. This should thus be disclosed.

Industry funding of science is welcome, but it should happen openly. Representatives of industry have complained that this would make it impossible for them to collaborate with universities, because it would give competing firms strategic information and would allow them to copy the ideas. I would argue that when the research is that straightforward, it is not something for a university.

At some universities in Switzerland and at the University of Cologne there have recently been protests against companies funding large amounts of research and against contractual conditions that limit scientists in their work. This can also limit the freedom of science, and we need rules for it. It should not lead to a situation where industry research is sold as independent academic research.

It should never be possible for firms to stop the publication of results. A contract with Willie Soon stated that he was not allowed to disclose his funding. I feel such conditions should not be allowed. Some people complained about contractual obligations for scientists to inform the firms of their work. I would feel that that is still okay; it is natural that the firms are interested in a transfer of the knowledge. Also, in the case of larger joint projects, scientists are expected to show up at general meetings of the project.

Sometimes replication of scientific work is hard, for example in the case of large dietary studies or clinical trials. This makes it hard for scientists to check such research without access to the original data. Also in climatology it would be very beneficial if governments around the world would allow a free transfer of observational data. Climate data are often withheld for commercial and military reasons. While climate data are valuable, the number of people willing to pay for them is small and the fees minimal. The military value of climate data is disappearing due to the accuracy of modern global weather predictions.

In all the above cases, and maybe more, we should not work with open records laws, but make the information open for everyone in every case. In this way these laws cannot be abused for attacks on scientists doing politically inconvenient work. Criminal investigations should naturally be possible, scientists are human, but they should never, ever be ordered by politicians, as in the case of Senator Inhofe. We should protect the separation of powers; such political orders should be illegal.



Related reading

Climate of Incivility - Climate McCarthyism is Wrong Whether Democratic or Republican
Michael Shellenberger of the Breakthrough Institute agrees that the political intimidation of science from any side should stop.

An insider’s story of the global attack on climate science.
Jim Salinger, who produced a temperature dataset for New Zealand, talks about how he and his employer were harassed by politicians and mitigation sceptics.

What Kinds of Scrutiny of Scientists are Legitimate?
Michael Halpern of the Union of Concerned Scientists argues that the information requests sent to the 7 mitigation sceptics go too far: the funding requests are appropriate, but communication between researchers should be protected.

He is also the author of a detailed report: Freedom to Bully: How Laws Intended to Free Information Are Used to Harass Researchers.

No Scientist Should Face Harassment. Period.
Gretchen Goldman of the Union of Concerned Scientists wants to limit open records requests: "Science is an iterative process and researchers should be free to discuss, challenge, and develop ideas with a certain level of privacy."

Stoat: Raúl M. Grijalva is an idiot. He's a politician and should be relying on his own ability to evaluate what's said; or in the case of climate science, just read the IPCC report, it's what it's there for.

My Depressing Day With A Famous Climate Skeptic
Astrophysics professor Adam Frank: "What I had seen was a scientist whose work, in my opinion, was simply not very good. ... But Soon's little string of papers were being heralded in the highest courts of public opinion as a significant blow to everyone else's understanding of Earth's climate."

Why scientists often hate records requests, The shadow side of sunlight laws
Anna Clark of the Columbia Journalism Review writes about open records requests from the side of journalists and advocates teaching people how to make the requests not too intrusive.

Why we all believe our own favorite scientific ‘experts’ — and why they believe themselves
Chris Mooney on motivated reasoning. Great, clear exposition.

Democrats on climate 'witch hunt', conservatives say
Politico reports on the 7 letters and lists several mitigation sceptics who complain now, but did not see any problems with attacks against climate scientists in the past.

Thursday, 12 February 2015

Just the facts, homogenization adjustments reduce global warming

Climatologists make adjustments to climate data to remove non-climatic changes (homogenization). This fact is used to accuse them of fiddling with temperature data to create or exaggerate global warming. This is often done by showing a small piece of the data and suggesting it is typical. Often mentioned is the USA, where the raw data show only half the warming of the adjusted data. However, the USA, while big, is still only 2% of the Earth's surface.

In recent weeks we had a similar case in The Telegraph about Paraguay. Last year we had similar misleading stories about two stations in Australia and the stations in New Zealand.

Global temperature collections contain thousands of stations. CRUTEM contains 4,842 quality stations and Berkeley Earth collected 39,000 unique stations. No wonder some are strongly adjusted up, just as some happen to be strongly adjusted down. In fact, it would be easy to present a station where the raw data show a cooling trend of several degrees being adjusted to a warming trend. However, then the reader might start to wonder whether the raw data are really better.

The information on small regions or a few stations is normally not put into perspective: the average trend over all stations is only adjusted upwards slightly.

It is normally not explained why these adjustments are made nor how these adjustments are made.

Zeke Hausfather, an independent researcher who is working with Berkeley Earth, made a beautiful series of plots to show the size of the adjustments.

The first plot is for the land surface temperature from climate stations. The data are from the Global Historical Climatology Network dataset (GHCNv3) of NOAA (USA). NOAA's method to remove non-climatic effects (homogenization) is well validated and recommended by the homogenization community.

NOAA adjusts the trend upwards: in the raw data the trend is 0.6°C per century since 1880, while after removal of non-climatic effects it becomes 0.8°C per century. See the graph below. But this is far from changing a cooling trend into strong warming. (A small part of the GHCNv3 raw data was already homogenized before NOAA received it, but this will not change the story much.)



Not many people know, however, that the sea surface temperature trend is adjusted downward. These downward adjustments happen to be about the same size, but go in the other direction. See below the sea surface temperature of the Hadley Centre (HadSST3) of the UK Met Office.



Being land creatures, people do not always realise how big the ocean is, but 71% of the Earth is ocean. Thus, if you combine these two temperature signals, taking the areas of the land and the ocean into account, you get the result below. The net effect of the adjustments is a reduction of global warming.
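As a back-of-the-envelope check of this blending, using the land numbers quoted above and assuming, for illustration, a net sea surface adjustment of the same size but opposite sign (this is only a sanity check, not the actual gridded calculation behind the figures):

```python
# Rough area-weighted blend of the adjustment effects.
land_frac, ocean_frac = 0.29, 0.71

land_effect = land_frac * (0.8 - 0.6)   # adjustments add ~0.2 °C/century on land
ocean_effect = ocean_frac * (-0.2)      # assumed: they remove ~0.2 °C/century at sea

print(f"net effect of adjustments: {land_effect + ocean_effect:+.2f} °C per century")
# Negative: globally, the adjustments reduce the estimated warming.
```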



It is pure coincidence that this happens; the reasons for the adjustments are completely different.

The land surface temperature trend has to be adjusted upward because old temperatures were often too high due to insufficient protection against warming by the sun, possibly because the siting of the stations improved, and likely for more reasons.

The old sea surface temperatures are adjusted downward because old measurements were made by taking a bucket of water out of the ocean, and the water cooled by evaporation during the measurement. Furthermore, modern measurements are made at the water inlet of the engine, and the hull of the ship warms the water a little before it is measured.

But while it is pure coincidence, and while other datasets may show somewhat different numbers (the BEST adjustments are smaller), the downward adjustment does clearly show that climatologists do not have an agenda to exaggerate global warming. That would still be true if the adjustments had happened to go upward.



Related reading

Phil Plait at Bad Astronomy comments on the Telegraph piece: No, Adjusting Temperature Measurements Is Not a Scandal

Kevin Cowtan made two videos on the claim of the Telegraph on Paraguay and the Arctic. The second video shows how to check such claims yourself.

John Timmer at Ars Technica is also fed up with being served the same story about some upward adjusted stations every year: Temperature data is not “the biggest scientific scandal ever” Do we have to go through this every year?

The astronomer behind And Then There's Physics writes why the removal of non-climatic effects makes sense. In the comments he talks about adjustments made to astronomical data. Probably every numerical observational discipline of science performs data processing to improve the accuracy of their analysis.

Steven Mosher, a climate "sceptic" who has studied the temperature record in detail and is no longer sceptical about it, reminds us of all the adjustments demanded by the "sceptics".

Nick Stokes, an Australian scientist, has a beautiful post that explains the small adjustments to the land surface temperature in more detail.

My two most recent posts were about some reasons for temperature trend biases: Temperature bias from the village heat island and Changes in screen design leading to temperature trend biases

You may also be interested in the posts on how homogenization methods work (Statistical homogenisation for dummies) and how they are validated (New article: Benchmarking homogenisation algorithms for monthly data)

Tuesday, 10 February 2015

Climatologists have manipulated data to REDUCE global warming

Climatologists are continually accused by political activists of fiddling with the data to make global warming stronger for political purposes.

A typical scam is to show a few stations that have been adjusted upwards and act as if that is typical. For example, the recent Telegraph article, "The fiddling with temperature data is the biggest science scandal ever", wrote about someone comparing
temperature graphs for three weather stations in Paraguay against the temperatures that had originally been recorded. In each instance, the actual trend of 60 years of data had been dramatically reversed, so that a cooling trend was changed to one that showed a marked warming.
Three, I repeat: 3 stations. For comparison, global temperature collections contain thousands of stations. CRUTEM contains 4,842 quality stations and Berkeley Earth collected 39,000 unique stations. No wonder some are strongly adjusted up, just as some happen to be strongly adjusted down. In fact, it would be easy to present a station where the raw data show a decreasing trend of several degrees being adjusted upwards, but then the reader might start to wonder whether the raw data are really better.

What these people do not tell their readers is that the average trend over all stations is only adjusted upwards slightly. That would put things too much in perspective. What these people do not tell their readers is why these adjustments are made. That might make some think that they may make sense. What these people normally do not tell their readers is how these adjustments are made. That would not sound sufficiently arbitrary and conspiratorial.

Last year we had similar scams about two stations in Australia and the stations in New Zealand.

In an internet poll, 88% of the readers of the abysmal Telegraph piece agreed with the question: "Has global warming been exaggerated by scientists?"

I hope that after reading this post, these 88% will agree that they have been conned by The Telegraph. That scientists have actually made global warming smaller.

Zeke Hausfather, an independent researcher who is working with Berkeley Earth, made a beautiful series of plots to show the size of the adjustments.

The first plot is for the land surface temperature from climate stations. The data are from the Global Historical Climatology Network dataset (GHCNv3) of NOAA (USA). NOAA's method to remove non-climatic effects (homogenization) is well validated and recommended by the homogenization community.

NOAA adjusts the trend upwards: in the raw data the trend is 0.6°C per century since 1880, while after removal of non-climatic effects it becomes 0.8°C per century. See below. But this is far from changing a cooling trend into strong warming.

(In case you believe many national weather services are also in the conspiracy: a small part of the GHCNv3 raw data was already homogenized before they received it.)



Not many people know, however, that the sea surface temperature trend is adjusted downward. That does not fit the narrative of WUWT & Co. It sounds like even many scientists did not know this. These downward adjustments happen to be about the same size, but go in the other direction. See below the sea surface temperature of the Hadley Centre (HadSST3) of the UK Met Office.



Being land creatures, people do not always realise how big the ocean is. Thus, if you combine these two temperature signals, taking the areas of the land and the ocean into account, you get the result below. The net effect of the adjustments is a reduction of global warming.



It is pure coincidence that this happens; the reasons for the adjustments are completely different.

The land surface temperature trend has to be adjusted up because old temperatures were often too high due to insufficient protection against warming by the sun and possibly because the siting of the stations improved. There are likely more reasons.

The sea surface temperatures are adjusted downward because old measurements were made by taking a bucket of water out of the ocean, and the water cooled by evaporation during the temperature measurement. Furthermore, modern measurements are made at the water inlet of the engine, and the hull of the ship warms the water a little before it is measured.

But while it is pure coincidence, and while other datasets may show somewhat different numbers (the BEST adjustments are smaller), the downward adjustment does clearly show that climatologists do not have an agenda to exaggerate global warming. As all reasonable people already knew. That would still be true if the adjustments had happened to go upward.

[UPDATE:

Small networks

The smaller the network, the larger the non-climatic changes typically are.

A recent paper about the US mountain network (SNOTEL) explained that their mountain stations showed more warming than the lower-lying USHCN stations. This could have been a snow-albedo feedback (warming reduces the white snow cover and reveals the dark surface, leading to more warming). However, they found it was a non-climatic change in the temperature due to the installation of new equipment. The new instruments recorded about 1.5°C higher minimum temperatures, an extraordinarily large change (the maximum temperature was hardly affected). Accurate data are not just important for trends, but also for physics (the snow-albedo feedback).

That is another case of climatologists reducing warming and a feedback.

But what did a well-known blog of the mitigation sceptics, WUWT, write? It headlined: "Another bias in temperature measurements discovered" and opened: "From the 'temperature bias only goes one way department'".

The second comment is by "cg": "Lying in Weather Reporting is common place and shamelessly just like the Global Financiers want it. Pure Evil."
Brute: "You sound insane."
KaiserDerden: "no more insane than you do claiming CO2 controls the weather/climate… actually less so in fact ...
Brute: "I have never said a single word regarding how “CO2 controls the weather/climate”. It is curious how much paranoia one finds around here... just about as much as one finds among the warmist cults...."
Sun Spot: "@Brute, you sound sanctimonious"
Ofay Cat: "CG ... you have it right ... those others are uninformed or misinformed. Which means Liberal."
cg: "Thanks"
]

Let's end on a depressing note. Rob Honeycutt says:
Take note. Proving the conspiracy wrong is sure to be taken as proof you’re part of the conspiracy.

It would be interesting to track, but I somehow doubt the number of “skeptic” posts with accusations of fraud is going to change. And I think this is merely because the source of the “skepticism” isn’t rooted in true scientific skepticism. It’s formed on an ideological basis. So, asking them to accept the data as correct is the same, from their standpoint, as asking them to change their ideology.

End of rant. Sorry for the tone. One sometimes gets the impression that WUWT & Co. select the most stupid memes possible to produce the largest antagonistic effect possible. It would be too easy to talk about the real caveats, the ones also mentioned by the enemy in the IPCC reports. For example, that assessing the impacts of climate change is enormously difficult because it involves ecosystems and humans. Or that estimating trends in extreme weather is very challenging and very much current research, partially due to non-climatic changes in the daily data.

[UPDATE. This version got a bit snarkier than usual, which may be warranted when talking to hardcore mitigation sceptics. To link to in discussions with people who might be open to debate, I have written a second, matter-of-fact version: Just the facts, homogenization adjustments reduce global warming. In case of doubt, when you do not know people well, that is probably also the better version.]



Related reading

Phil Plait at Bad Astronomy comments on the Telegraph piece: No, Adjusting Temperature Measurements Is Not a Scandal

John Timmer at Ars Technica is also fed up with being served the same story about some upward adjusted stations every year: Temperature data is not “the biggest scientific scandal ever” Do we have to go through this every year?

The astronomer behind And Then There's Physics writes why the removal of non-climatic effects makes sense. In the comments he talks about adjustments made to astronomical data. Probably every numerical observational discipline of science performs data processing to improve the accuracy of their analysis.

Steven Mosher, a climate "sceptic" who has studied the temperature record in detail and is no longer sceptical about it, reminds us of all the adjustments demanded by the "sceptics".

Nick Stokes, an Australian scientist, has a beautiful post that explains the small adjustments to the land surface temperature in more detail.

My two most recent posts were about some reasons for temperature trend biases: Temperature bias from the village heat island and Changes in screen design leading to temperature trend biases

You may also be interested in the posts on how homogenization methods work (Statistical homogenisation for dummies) and how they are validated (New article: Benchmarking homogenisation algorithms for monthly data)

Sunday, 8 February 2015

Changes in screen design leading to temperature trend biases

In the lab, temperature can be measured with amazing accuracy. Outside, exposed to the elements, measuring the temperature of the air is much harder. For example, if the temperature sensor gets wet due to rain or dew, the evaporation leads to a cooling of the sensor. The largest causes of exposure errors are solar and heat radiation. For these reasons, thermometers need to be protected from the elements by a screen. Changes in the radiation error are an important source of non-climatic changes in station temperature data. Innovations leading to reductions in these errors are a major source of temperature trend biases.

A wall measurement at the Mathematical Tower in Kremsmünster. One mainly sees the bright board that protects the instruments against rain; it is on the first floor, at the base of the window, a little to the right of the entrance.

History

The history of changes in exposure is different in every country, but broadly follows this pattern. In the beginning, thermometers were installed in unheated rooms or in front of a window of an unheated room on the North (poleward) side of a building.

When this was found to lead to temperatures that were too high, a period of innovation and diversity started. For example, small metal cages were added to the North-wall measurements. More importantly, free-standing structures were designed: stands, shelters, houses and screens. In the Commonwealth the Glaisher (Greenwich) stand was prevalent. It has a vertical wooden board, a small roof and sides, but it is fully open at the front, and in summer you have to rotate it to ensure that no direct sun gets onto the thermometer.

Shelters were built with larger roofs and sides, but still open at the front and the bottom, for example the Montsouris and Wild screens. Sometimes even small houses or garden sheds were built, in the tropics with a thick thatched roof.

In the end, the Stevenson screen (Cotton Region Shelter) won the day. This screen is closed on all sides. It has double Louvre walls, double boards as a roof and a board as a bottom. (Early designs sometimes did not have a bottom.)

In recent decades there has been a move to Automatic Weather Stations (AWS), which do not have a normal (liquid-in-glass) thermometer, but an electrical resistance temperature sensor, typically screened by multiple round plastic cones. These instruments are sometimes mechanically ventilated. Some countries have installed their automatic sensors in Stevenson screens to reduce the non-climatic change.


The photo on the left shows an open shelter for meteorological instruments at the edge of the school square of the primary school of La Rochelle, in 1910. On the right one sees the current situation: a Stevenson-like screen located closer to the ocean, along the Atlantic shore, in a place named "Le bout blanc". Picture: Olivier Mestre, Meteo France, Toulouse, France.

Radiation errors

To understand when and where the temperature measurements have most bias, we need to understand how solar and heat radiation leads to measurement errors.

The temperature sensor should have the temperature of the air and should thus not be warmed or cooled by solar or heat radiation. The energy exchange between sensor and air due to ventilation should thus be large relative to the radiative exchanges. One of the reasons why temperature measurements outside are so difficult is that these are conflicting requirements: closing the screen to radiation will also limit the air flow. However, with a smart design, mechanical ventilation and small sensors this conflict can be partially resolved.
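A toy energy balance illustrates the conflict. In equilibrium, the net radiation absorbed by the sensor is balanced by convective exchange with the air, so the radiation error is roughly the net radiation divided by a heat transfer coefficient that grows with ventilation. All numbers below are illustrative assumptions, not measured values:

```python
def radiation_error(r_net, wind):
    """Radiation error (°C) of a sensor in a simple equilibrium model:
    net absorbed radiation = h * (T_sensor - T_air).

    r_net: net radiation absorbed by the sensor (W/m2); positive for stray
           sunlight entering the screen, negative for infrared loss to a
           clear night sky.
    wind:  wind speed (m/s); ventilation increases the convective exchange.
    """
    h = 5.0 + 4.0 * wind   # crude convective heat transfer coefficient (W/m2/K)
    return r_net / h

for wind in (0.5, 2.0, 5.0):
    day = radiation_error(+10.0, wind)   # some sun leaking into an open screen
    night = radiation_error(-8.0, wind)  # infrared cooling on a clear, dry night
    print(f"wind {wind:3.1f} m/s: day {day:+.2f} °C, night {night:+.2f} °C")
```

The errors shrink as the wind increases, which is why the effects described below are seen most clearly in calm weather.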

For North-wall observations, direct solar radiation on the sensor was sometimes a problem during sunrise and sunset. In addition, the sun may heat the wall below the thermometer and warm the rising air. Even in Stevenson screens some solar radiation still gets into the screen. Furthermore, the sun shining on the screen warms it, which can then warm the air flowing through the screen. For this reason it is important that the screen is regularly painted white and cleaned.

Scattered solar radiation (from clouds, vegetation and the surface) is important for older screens, which were open at the front. The open front also leads to a direct cooling of the sensor as it emits heat radiation. The net heat radiation loss is especially large when the back radiation of the atmosphere is low, thus when there are no clouds and the air is dry. Since warm air can contain more humidity, these effects are generally also largest when it is cold.

Because older screens did not have a bottom, a hot surface below the screen could be a problem during the day and a cold surface during the night. This especially happens when the soil is dry and bare.

All these effects are most clearly seen when the wind is calm.

In conclusion, we expect the cooling bias at night to be largest when the weather is calm and cloud-free and the air is dry (cold). We also expect the warming bias during the day to be largest when the weather is calm and cloud-free. In addition, we can get a warm bias when the soil is dry and bare, and in summer during sunrise and sunset.

Thus, all things being equal, the radiation error is expected to be largest in subtropical, tropical and continental climates and small in maritime, moderate and cold climates.


Schematic drawing of the various factors that can lead to radiation errors.

Parallel measurements

We know how large these effects are from parallel measurements, where an old and a new measurement set-up are compared side by side. Unfortunately, there are not that many parallel measurements for the transition to Stevenson screens. Many of them are from North-West Europe, with its maritime, moderate or cold climate, where the effects are expected to be small. These are described in a wonderful review article by David Parker (1994), who concludes that in the mid-latitudes the past warm bias will be smaller than 0.2°C. In the following, I will have a look at the parallel measurements outside of this region.

In the tropics, the bias can be larger. Parker also describes two parallel measurements comparing a tropical thatched house with a Stevenson screen, one in India and one in Ceylon (Sri Lanka). Both show a bias of about 0.4°C. The bias naturally depends on the design: a comparison in Samoa of a normal Stevenson screen with one with a thatched roof shows almost no difference.


This picture shows three meteorological shelters next to each other in Murcia (Spain). The rightmost shelter is a replica of the Montsouris (French) screen, in use in Spain and many European countries in the late 19th and early 20th century. In the middle, a Stevenson screen equipped with automatic sensors. Leftmost, a Stevenson screen equipped with conventional meteorological instruments.
Picture: Project SCREEN, Center for Climate Change, Universitat Rovira i Virgili, Spain.


Recently, two beautiful studies were made with modern automatic equipment to study the influence of the screens. With automatic sensors you can make measurements every 10 minutes, which helps in understanding the reasons for the differences. In Spain, two replicas were built of the French screen used around 1900. One was installed in La Coruña (more Atlantic) and one in Murcia (more Mediterranean). They showed that the old measurements had a temperature bias of about 0.3°C; the Mediterranean location had, as expected, a somewhat larger bias than the Atlantic one (Brunet et al., 2011).

The second modern study was in Austria, at the Mathematical Tower in Kremsmünster (depicted at the top of this post). This North-wall measurement was compared to a Stevenson screen (Böhm et al., 2010) and showed a temperature bias of about 0.2°C. The wall is oriented North-North-East, and during sunrise in summer the sun could shine on the instrument.

For both the Spanish and the Austrian examples it should be noted that small modern sensors were used. It is possible that the radiation errors would have been larger had the original thermometers been used.

Comparing a Wild screen with a Stevenson screen at the astronomical observatory in Basel, Switzerland, Renate Auchmann and Stefan Brönnimann (2012) found clear signs of radiation errors, but the annual mean temperature was somehow not biased.


Parallel measurement with a Wild screen and a Stevenson screen in Basel, Switzerland.
In Adelaide, Australia, we have a beautiful long parallel measurement of the Glaisher (Greenwich) stand with a Stevenson screen (Cotton Region Shelter). It runs for 61 complete years (1887-1947) and shows that the historical Glaisher stand recorded on average 0.2°C higher temperatures; see the figure with the annual cycle below. The negative bias in the minimum temperature at night is almost constant throughout the year; the positive bias is larger and strongest in summer. Radiation errors thus affect not only the mean, but also the size of the annual cycle. They will also affect the daily cycle, as well as the weather variability and extremes in the temperature record.

The exact size of the bias of this parallel measurement has a large uncertainty; it varies considerably from year to year, and the data also show clear inhomogeneities themselves. For such old measurements, the exact measurement conditions are hard to ascertain.

The annual cycle of the temperature difference between a Glaisher stand and a Stevenson screen, for both the daily maximum and the daily minimum temperature. (Figure 1 from Nicholls et al., 1996.)

Conclusions

Our understanding of the measurements and limited evidence from parallel measurements suggest that there is a bias of a few tenths of a degree Celsius in observations made before the introduction of Stevenson screens. The Stevenson screen was designed in 1864; most countries switched in the decades around 1900, but some countries did not switch until the 1960s.

In the last few decades there has been a new transition, to automatic weather stations (AWS). Some countries have installed the automatic probes in Stevenson screens, but most have installed single-unit AWS with multiple plastic cones as a screen. The smaller probe and mechanical ventilation could make the radiation errors smaller, but depending on the design possibly more radiation also gets into the screen, and the maintenance may be worse now that the instrument is no longer visited daily. A review article on this topic is still dearly missing.

Last month we founded the Parallel Observations Science Team (POST) as part of the International Surface Temperature Initiative (ISTI) to gather and analyze parallel measurements and see how these transitions affect the climate record (not only with respect to the mean, but also changes in the daily and annual cycles, weather variability and weather extremes). Theo Brandsma will lead our study on the transition to Stevenson screens and Enric Aguilar the one on the transition from conventional observations to automatic weather stations. If you know of any dataset and/or want to collaborate, please contact us.

Acknowledgement

With some colleagues I am working on a review paper on inhomogeneities in the distribution of daily data. This work, especially with Renate Auchmann, has greatly helped me understand radiation errors. Mistakes in this post are naturally my own. More on non-climatic changes in daily data later.



Further reading

Just the facts, homogenization adjustments reduce global warming: The adjustments to the land surface temperature increase the trend, but the adjustments to the sea surface temperature decrease the trend.

Temperature bias from the village heat island

A database with parallel climate measurements describes the database we want to build with parallel measurements

A database with daily climate data for more reliable studies of changes in extreme weather gives somewhat more background

Statistical homogenisation for dummies

New article: Benchmarking homogenisation algorithms for monthly data

References

Auchmann, R., and S. Brönnimann, 2012: A physics-based correction model for homogenizing sub-daily temperature series. Journal of Geophysical Research, 117, D17119, doi: 10.1029/2012JD018067.

Böhm, R., P.D. Jones, J. Hiebl, D. Frank, M. Brunetti, and M. Maugeri, 2010: The early instrumental warm-bias: a solution for long central European temperature series 1760-2007. Climatic Change, 101, no. 1-2, pp. 41-67, doi: 10.1007/s10584-009-9649-4.

Brunet, M., Asin, J., Sigró, J., Bañón, M., García, F., Aguilar, E., Palenzuela, J.E., Peterson, T.C., and Jones, P., 2011: The minimization of the screen bias from ancient Western Mediterranean air temperature records: an exploratory statistical analysis. International Journal of Climatology, 31, pp. 1879-1895, doi: 10.1002/joc.2192.

Nicholls, N., R. Tapp, K. Burrows, and D. Richards, 1996: Historical thermometer exposures in Australia. International Journal of Climatology, 16, pp. 705-710, doi: 10.1002/(SICI)1097-0088(199606)16:6<705::AID-JOC30>3.0.CO;2-S.

Parker, D. E., 1994: Effects of changing exposure of thermometers at land stations. International Journal of Climatology, 14, pp. 1–31, doi: 10.1002/joc.3370140102.

Thursday, 29 January 2015

Temperature bias from the village heat island

The most direct way to study how alterations in the way we measure temperature affect the recorded temperatures is to make simultaneous measurements the old way and the current way. New technological developments have made it much easier to study the influence of location. Modern batteries make it possible to install an automatically recording weather station almost anywhere and obtain several years of data. Previously, in most cases it was necessary to have electricity access nearby, permission to use it, and to dig cables.

Jenny Linden used this technology to study the influence of the siting of weather stations on the measured temperature for two villages, one in North Sweden and one in the west of Germany. In both cases the center of the village was about half a degree Celsius (one degree Fahrenheit) warmer than the current location of the weather station on grassland just outside the village. This is small compared to the urban heat island found in large cities, but it is comparable in size to the warming we have seen since 1900 and thus important for the understanding of global warming. In urban areas, the heat island can amount to multiple degrees and is much studied because of the additional heat stress it produces. This new study may be the first for villages.

Her presentation (together with Jan Esper and Sue Grimmond) at EMS2014 (abstract) was my biggest discovery in the field of data quality in 2014. Two locations are naturally not enough for strong conclusions, but I hope that this study will be the start of many more, now that the technology has been shown to work and the effects have been shown to be significant for climate change studies.

The experiments


A small map of Haparanda, Sweden, with all measurement locations indicated by pins. Mentioned in the text are the Center and the current SMHI met-station.
The Swedish case is the easiest to interpret. The village of Haparanda, with 5,000 inhabitants, is in the North of Sweden, on the border with Finland. It has a beautiful long record; measurements started in 1859. Observations started on a North wall in the center of the village and were continued there until 1942. Currently the station is on the edge of the village. It is thought that the center has not changed much since 1942. Thus the difference can be interpreted as the cooling bias, in the historical observations, due to the relocation from the center to the current location. The modern measurement was not made at the original North wall, but free-standing; thus only the difference in location can be studied.

As so often, the minimum temperature at night is affected most: it shows a difference of 0.7°C between the center and the current location. The maximum temperature only shows a difference of 0.1°C. The average temperature shows a difference of 0.4°C.

The village of Geisenheim is close to Mainz, Germany, and was the first testing location for the equipment. It has 11.5 thousand inhabitants and lies on the right bank of the Rhine. This station also has a quite long history: it started in 1884 in a park and stayed there until 1915. Now it is well sited outside the village in the meadows. A lot has changed in Geisenheim between 1915 and now, so we cannot give any historical interpretation of the changes, but it is interesting to compare the measurements in the center with the current ones, both for comparison with Haparanda and to get an idea of how large the maximum effect would theoretically be.



A small map of Geisenheim, Germany. Compared in the text are the Center and the current DWD met-station. The station started in the Park.
The difference in the minimum temperature between the center and the current location is 0.8°C. In this case the maximum temperature also shows a clear difference, of 0.4°C. The average temperature shows a difference of 0.6°C.

The next village on the list is Cazorla in Spain. I hope the list will become much longer. If you have any good suggestions, please comment below or write to Jenny Linden. Locations where the center is still mostly as it used to be are of particular interest, and as many different climate regions as possible should be sampled.

The temperature record

Naturally, not all stations started in villages, and even fewer exactly in the center. But this is still a quite common scenario, especially for long series. In the 19th century thermometers were expensive scientific instruments. The people making the measurements were often among the few well-educated people in the village or town: priests, apothecaries, teachers and so on.

Erik Engström, climate communicator of the Swedish weather service (SMHI), wrote:
In Sweden we have many stations that have moved from a central location out to a location outside the village. ... We have several stations located in small towns and villages that have been relocated from the centre to a more rural location, such as Haparanda. In many cases the station was also relocated from the city centre to the airport outside the city. But we also have many stations that have been rural and are still rural today.
Improvements in siting may be even more important for urban stations. Stations in cities have often been relocated (multiple times) to better-sited locations, if only because meteorological offices cannot afford the rents in the center. Because the urban heat island is stronger there, this could lead to even larger cooling biases. What counts is not how much the city has warmed due to its growth, but the siting of the first station location versus its current one.

More specifically, it would be interesting to study how much improvements in siting have contributed to a possible temperature trend bias in recent decades. The move to the current location took place in 2010 in Haparanda and in 2006 in Geisenheim. It should be noted that the cooling bias did not take place in one jump: decent measurements are likely to have been made since 1977 in Haparanda and since 1946 in Geisenheim (for Geisenheim the information is not very reliable).

It would make sense to me that as more people started thinking about climate change, the weather services increasingly realized that even small biases due to imperfect siting are important and should be avoided. Modern technology (automatic weather stations, batteries and solar panels) has also made it easier to install stations in remote locations.

An exception here is likely the United States of America. The Surface Stations project has documented many badly sited stations in the USA, and the transition to automatic weather stations is thought to have contributed to this. Possible explanations are that America started early with automation, that the cables were short, and that the technicians had only one day to install the instruments.

If even villages have a small urban effect, it is also possible that this effect gradually increases while the village grows. Such a gradual increase can likewise be removed by statistical homogenization, by comparison with neighboring stations. However, if too many stations have such a gradual inhomogeneity, the homogenization methods will no longer be able to remove this non-climatic increase (well). This finding thus makes it more important to ensure that sufficiently many truly rural stations are used for comparison.

On the other hand, because a village is smaller, one may expect that the "gradual" increases are actually somewhat jumpy. Rather than being due to many changes in a large area around the station, in the case of a village the changes may more often be near the station and produce a small jump. Jumps are easier to remove by statistical homogenization than smooth gradual inhomogeneities, because the probability that something happens simultaneously at a neighboring station is small.
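As a toy illustration, assuming the simplest possible setup of one village station and one rural neighbor: a relocation appears as a clean step in the difference series, which even a crude scan for the largest mean shift (a much simplified relative of tests like SNHT) can locate.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
neighbor = rng.normal(0.0, 0.5, n)               # rural reference station (anomalies, °C)
village = neighbor + rng.normal(0.0, 0.3, n)     # nearby village station
village[60:] -= 0.5                              # invented relocation jump in year 60

diff = village - neighbor                        # difference series isolates the break

def detect_break(d):
    """Return the split point with the largest mean shift relative to its standard error."""
    best_t, best_k = 0.0, None
    for k in range(5, len(d) - 5):
        a, b = d[:k], d[k:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        t = abs(a.mean() - b.mean()) / se
        if t > best_t:
            best_t, best_k = t, k
    return best_k, best_t

k, t = detect_break(diff)
print(f"break detected at index {k} (true position: 60), t-statistic: {t:.1f}")
```

A slow drift shared by many neighbors would leave no such step in the difference series, which is exactly why sufficiently rural reference stations matter.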



A parallel measurement in Basel, Switzerland. A historical Wild screen, which is open to the bottom and to the north and has single louvres to reduce radiation errors, measures in parallel with a Stevenson screen (Cotton Region Shelter), which is closed on all sides and has double louvres.

Parallel measurements

These measurements at multiple locations are an example of parallel measurements. The standard case is that an old instrument is compared to a new one while measuring side by side. This helps us to understand the reasons for biases in the climate record.

From parallel measurements we also know, for example, that the way temperature was measured before the introduction of Stevenson screens caused a bias in the old measurements of up to a few tenths of a degree. Differences of 0.5°C have been found for two locations in Spain and two tropical countries, while the differences in north-west Europe are typically small.
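As a sketch of how such a parallel pair is evaluated (with invented numbers, not the actual Basel data): the paired daily differences give the bias of the old set-up directly, with a confidence interval from their spread.

```python
import numpy as np

rng = np.random.default_rng(2)
n_days = 365
temperature = rng.normal(15.0, 8.0, n_days)      # "true" daily mean temperature (°C)

# Invented instruments: the old screen reads 0.3 °C too warm on average.
old_screen = temperature + 0.3 + rng.normal(0.0, 0.2, n_days)
new_screen = temperature + rng.normal(0.0, 0.2, n_days)

diff = old_screen - new_screen                   # paired differences remove the weather
bias = diff.mean()
ci95 = 1.96 * diff.std(ddof=1) / np.sqrt(n_days)
print(f"estimated bias of the old set-up: {bias:.2f} ± {ci95:.2f} °C")
```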

To be able to study these historical changes and their influence on the global datasets, we have started an initiative to build a database with parallel measurements under the umbrella of the International Surface Temperature Initiative (ISTI), the Parallel Observations Science Team (POST). We have just started and are looking for members and parallel datasets. Please contact us if you are interested.


Sunday, 25 January 2015

We have a new record

Screenshot of the Daily Mail with a stupid headline and the caption: "Data: Gavin Schmidt, of Nasa's Goddard Institute for Space Studies, admits there's a margin of error." The look on Gavin Schmidt's face accurately portrays my feelings for the Daily Mail.
It seems the word "record" has a new meaning.

2014 was a record warm year for the global temperature datasets maintained by the Americans: NOAA, GISS and BEST, as well as for the Japanese dataset. For HadCRUT from the UK it was not yet clear which year would be highest.*

[UPDATE: data is now in: HadCRUT4 global temperature anomalies:
2014 0.563°C
2010 0.555°C
I could imagine that that is too close to call; the value for 2014 could still change with new data coming in.]

The method of Cowtan and Way (C&W) is expected to see 2014 as the second warmest year. [It now does.]

(The method of C&W is currently seen as the most accurate, at least for short-term trends; it makes recent temperature estimates more accurate by using satellite tropospheric temperatures to fill the gaps between the temperature stations.)

Up to now I had always thought that you set a record when you achieve the highest or lowest value, whichever is harder. The world record in the marathon is the fastest time in an official race. The world's best football player is the one who gets the most votes from sports journalists. And so on.

Climate change, however, has a special place in the hearts of some Americans. These people do not see the question of whether 2014 was a record in the datasets, the normal definition, as interesting. Rather they claim that you are only allowed to call a year a record if you are sure that it had the highest value of the unknown actual global mean temperature. That is not the same thing.

Last September a new marathon world record was set in Berlin. Dennis Kimetto set the world record with a time of 2:02:57, while the number two in the same race, Emmanuel Mutai, set the world's second-best time with 2:03:13. Two records in one race! Clearly the conditions were ideal (the temperature, the wind, the flat track profile). Had other good runners participated in this race, they might well have been faster.

Should we call it a record? According to the traditional definition, Kimetto ran fastest and holds the record.

According to the new definition, we cannot be sure that Kimetto is really the fastest marathon runner in the world, and we do not know what the world record is. Still, newspapers around the world simply wrote about the record as if it were a fact.

When Cristiano Ronaldo was voted world footballer of the year 2014 with 37.66% of the votes, the BBC simply headlined: Cristiano Ronaldo wins Ballon d’Or over Lionel Messi & Manuel Neuer.

According to the traditional definition, Ronaldo is fairly seen as the best football player. According to the new definition, we cannot tell who the best football player is: he got such a small percentage of the votes, journalists are clearly error-prone, and they have a bias for forwards and against keepers.

In the sports cases it is clear that the probabilities are far from certainty, but they are hard to quantify. In the case of the global mean temperature we can quantify them, and statistics is fun. All the American groups were very active in communicating the probability that the global mean temperature itself was the highest in 2014. An interesting bit of information for the science nerd, but one that may have put some people on the wrong foot.
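For those who want to see how simple such a computation is, here is a minimal Monte Carlo sketch using the two HadCRUT4 anomalies quoted above. The 0.05°C standard uncertainty, and the independence of the errors between the years, are my assumptions for illustration, not the official HadCRUT4 error model.

```python
import numpy as np

rng = np.random.default_rng(3)

t2014, t2010 = 0.563, 0.555   # HadCRUT4 anomalies (°C) quoted above
sigma = 0.05                  # assumed 1-sigma uncertainty per year (illustrative)

# Draw both years independently from Gaussians and count how often 2014 wins.
n = 1_000_000
draws_2014 = rng.normal(t2014, sigma, n)
draws_2010 = rng.normal(t2010, sigma, n)
p = (draws_2014 > draws_2010).mean()
print(f"P(2014 warmer than 2010) = {p:.1%}")  # only slightly above 50%: too close to call
```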




And just for the funsies.


* It is interesting that Germany, France and China do not have their own global temperature datasets. Okay, Germany makes an effort not to look like a world power, but one would have expected France to have one. China has been making a considerable effort in homogenization lately and already has a large network. I would not be surprised if they had their own global dataset soon, maybe using the raw data collection of the International Surface Temperature Initiative.

[UPDATE. I swear, I did not know, but Ronan Connolly pointed me to a new article on a Chinese global dataset. :) It integrates the long series of four other global datasets: CRUTEM3, GHCN-V3, GISTEMP and Berkeley.]



More information

A Deeper Look: 2014′s Warming Record and the Continued Trend Upwards
An informative article by Zeke Hausfather puts the 2014 record into perspective. The trend is important.

How ‘Warmest Ever’ Headlines and Debates Can Obscure What Matters About Climate Change
A long piece by Andrew C. Revkin with a similar opinion.

Thoughts on 2014 and ongoing temperature trends
The article by Gavin Schmidt at RealClimate is very informative, but more technical; it is for those who like stats. He begins with some media critique: for the media a record is clearly an important hook. (They want news.)

Sunday, 4 January 2015

How climatology treats sceptics

2014 was an exciting year for me; a lot happened. It could have gone wrong: my science project, and thus my employment, ended. This would have been the ideal moment to get rid of me easily, no questions asked. But my follow-up project proposal (Daily HUME) to develop a new homogenization method for global temperature datasets was approved by the German Science Foundation.

It was an interesting year. The work I presented at conferences was very skeptical of our abilities to remove non-climatic changes from climate records (homogenization). Mitigation skeptics sometimes claim that my job, the job of all climate scientists, is to defend the orthodoxy. They might think that my skeptical work would at least hurt my career, if not make me an outright outcast, like they are.

Knowing science, I did not fear this. What counts is the quality of your arguments, not whether a trend goes up or down, or whether a confidence interval becomes larger or smaller. As long as your arguments are strong, the more skeptical the work, the better and the more interesting it is. What would hurt my reputation would be if my arguments were just as flimsy as those of the mitigation skeptics.

With a bunch of colleagues I am working on a review paper on non-climatic changes in daily data. Daily data are used to study climatic changes in extreme weather: heat waves, cold spells, heavy rain, and so on. Much simplified, we found that the limited evidence suggests that non-climatic changes affect the extremes more than the mean and that removing them is very hard, while most large daily data collections are not homogenized, or homogenized only for changes in the mean. In other words, we found that the scientific literature supports the hunch of the climate skeptics of the IPCC:
"This [inhomogeneous data] affects, in particular, the understanding of extremes, because changes in extremes are often more sensitive to inhomogeneous climate monitoring practices than changes in the mean." Trenberth et al. (2007)
Not a nice message, but a large number of wonderful colleagues are happy to work with me on this review paper. Thank you for your trust.

Last May at the homogenization seminar in Budapest, I presented this work, while my colleague presented our joint work on homogenization when the size of the breaks is small. Or, formulated more technically: homogenization when the variance of the break signal is small relative to the variance of the difference time series (the difference between two nearby stations). In this case the positions of the detected breaks are not much better than random ones. This problem was found by Ralf, a great analytical thinker and skeptic. Thank you for working with me.
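A quick toy version of this problem (my own sketch, not Ralf's analysis): when the break is large relative to the noise of the difference series, a simple scan for the largest mean shift finds its position reliably; when it is small, the detected positions are hardly better than random.

```python
import numpy as np

rng = np.random.default_rng(4)

def break_position(d):
    # Split point with the largest mean shift relative to its standard error.
    best_t, best_k = 0.0, None
    for k in range(5, len(d) - 5):
        a, b = d[:k], d[k:]
        t = abs(a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        if t > best_t:
            best_t, best_k = t, k
    return best_k

n, true_pos, trials = 100, 60, 200
for size in (2.0, 1.0, 0.5, 0.2):   # break size in units of the noise standard deviation
    hits = sum(
        abs(break_position(rng.normal(0.0, 1.0, n) +
                           size * (np.arange(n) >= true_pos)) - true_pos) <= 3
        for _ in range(trials)
    )
    print(f"break size {size:.1f} sigma: position within ±3 in {hits / trials:.0%} of trials")
```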

Because my project ended and I did not know whether I would get the next one, and especially not whether I would get it in time, I asked two groups in Budapest whether they could support me during this bridge period. Both promised they would try. The next week the University of Bern offered me a job. Thank you Stefan and Renate, I had a wonderful time in Bern and learned a lot.

Thus my skeptical job is on track again, and more good things happened. For the next piece of good news I first have to explain some acronyms. The World Meteorological Organisation ([[WMO]]) coordinates the work of the (national) meteorological services around the world, for example by defining standards for measurements and data transfer. The WMO has a Commission for Climatology (CCl). For the coming four-year term this commission has a new Task Team on Homogenization (TT-HOM). It cannot be much more than two years ago that I asked a colleague what the abbreviation "CCl" he had used stood for. Last spring they asked whether I wanted to be a member of the TT-HOM. This autumn they made me chair. Thank you CCl, and especially Thomas and Manola. I hope to be worthy of your trust.

Furthermore, I was asked to be co-convener of the session on Climate monitoring; data rescue, management, quality and homogenization at the Annual Meeting of the European Meteorological Society. That is quite an honor for a homogenization skeptic who is just an upstart.

More good things happened. While in Bern, Renate and I started working on a database with parallel measurements. In a parallel measurement an old measurement set-up stands next to a new one to directly compare the difference between them and thus to determine the non-climatic change this difference in set-ups produces. Because I am skeptical of our abilities to correct non-climatic changes in daily data, I hope that in this way we can study how important they are. A real skeptic does not just gloat when finding a problem, but tries to solve it as well. The good news is that the group of people working on this database is now an expert team of the International Surface Temperature Initiative (ISTI). Thank you ISTI steering committee, and especially Peter.

In all this time, I had only one negative experience. After presenting our review article on daily data a colleague asked me whether I was a climate "skeptic". That was clearly intended as a threat, but knowing all those other colleagues behind me I could just laugh it off. In retrospect, my choice of words was also somewhat unfortunate. As an example, I had said that climatic changes in 20-year return levels (an extreme that happens on average every 20 years) probably cannot be studied using homogenized data given that the typical period between two non-climatic changes is 20 years. Unfortunately, this colleague afterwards presented a poster on climatic changes in the 20-year return period. Had I known that, I would have chosen another example. No hard feelings.
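For readers unfamiliar with the term: a 20-year return level is the value exceeded on average once every 20 years. A hedged sketch of how it is typically estimated from annual maxima, with invented data and the common generalized extreme value (GEV) fit:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(5)

# Invented annual maximum temperatures (°C) for a 40-year record.
annual_max = rng.gumbel(loc=33.0, scale=2.0, size=40)

c, loc, scale = genextreme.fit(annual_max)            # fit a GEV distribution
level_20yr = genextreme.isf(1 / 20, c, loc, scale)    # exceeded with probability 1/20 per year
print(f"estimated 20-year return level: {level_20yr:.1f} °C")
```

With only a few decades of data and a non-climatic change roughly every 20 years, it is easy to see why such estimates are fragile.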

That is how climatology treats skeptics. I cannot complain. On the contrary, a lot of people supported me.

If you can complain, if you feel like a persecuted heretic (and not only claim that as part of your political fight), you may want to reconsider whether your arguments are really that strong. You are always welcome back.


A large part of the homogenization community at a project meeting in Bucharest in 2010. They make a homogenization skeptic feel at home. Love you guys.

Related posts


On consensus and dissent in science - consensus signals credibility


Why doesn't Big Oil fund alternative climate research?

Are debatable scientific questions debatable?

Falsifiable and falsification in science

Peer review helps fringe ideas gain credibility


Reference

Trenberth, K.E., et al., 2007: Observations: Surface and Atmospheric Climate Change. In: Climate Change 2007: The Physical Science Basis. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.