Wednesday 13 September 2017

My EMS2017 highlights

When I did my PhD, our professor wanted everyone to write short reports about the conferences they had attended. It was a quick way for him to see what was happening, but it also helps you remember what you learned, and it is often interesting to read again some time later. Here is my short report on last week's Annual Meeting of the European Meteorological Society (EMS), the European Conference for Applied Meteorology and Climatology 2017, 4–8 September 2017, Dublin, Ireland.

This post is by its nature a bit of loose sand, but there were some common themes: more accurate temperature measurements by estimating radiation errors, the eternal problems of estimating various trends, collaborations between WEIRD and developing countries, and global stilling.

Radiation errors

Air temperature sounds so easy, but it is hard to measure. What we actually measure is the temperature of the sensor, and because air is a good insulator, the temperature of the air and that of the sensor can easily differ. This can happen, for example, due to self-heating of electric resistance sensors or heat flows from the sensor holder, but the most important heat flow is from radiation: the sun shining on the sensor, or the sensor losing heat via infra-red radiation to the cold atmosphere.



In the [[metrology]] (not meteorology) session there was a talk and several posters on the beautiful work by the Korea Research Institute of Standards and Science to reduce the influence of radiation on temperature measurements. They used two thermometers, one dark and one light coloured, to estimate how large the radiation errors are and to be able to correct for them. This set-up was tested outside and in their amazing calibration laboratory.
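
The basic idea can be written down in a few lines. This is only a minimal sketch of the principle, not the KRISS algorithm, and the absorptivity values are invented for illustration: if the radiation error of each sensor is roughly proportional to how much solar radiation its coating absorbs, two differently coated sensors let you extrapolate to an imaginary sensor that absorbs nothing.

```python
# Minimal sketch of the dual-thermometer principle (not the KRISS
# algorithm; absorptivity values are invented for illustration).

def air_temperature(t_dark, t_light, a_dark=0.9, a_light=0.1):
    """Estimate the true air temperature from a dark and a light sensor.

    t_dark, t_light : measured temperatures (degrees Celsius)
    a_dark, a_light : assumed solar absorptivities of the two coatings
    """
    radiation_error_light = a_light * (t_dark - t_light) / (a_dark - a_light)
    return t_light - radiation_error_light

# Example: the dark sensor reads 1.6 degrees warmer than the light one.
print(air_temperature(t_dark=21.8, t_light=20.2))  # -> 20.0
```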

These were sensors to measure the vertical temperature profile, going up to 15 km. Thus they needed to study the sensors over a huge range of temperatures (-80°C to 25°C); it is terribly cold at the tropopause. The dual sensor was also exposed to a large range of solar irradiances, from 0 to 1500 watts per square meter; the sun is much stronger up there. The pressure ranged from 10 hPa to the 1000 hPa we typically have at the surface; the low pressure makes the air an even better insulator. The radiosondes drift with the wind, which reduces ventilation, so the wind only needed to be tested from 0 to 10 meters per second.

I have seen this set-up used to study radiation errors for automatic weather stations; it would be great to also use it for operational stations to reduce radiation errors.

The metrological organisation of the UK is working on a thermometer that does not have a radiation error because it directly measures the temperature of the air. Michael de Podesta does so by measuring the speed of sound very accurately. The irony is that it is hard to see how well this new sound thermometer works outside the lab, because the comparison thermometer has radiation errors.
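
The physics behind it can be illustrated with the textbook ideal-gas relation between the speed of sound and temperature. The real instrument is far more sophisticated and also accounts for humidity and the exact composition of the air, so treat this only as a sketch of the principle.

```python
import math

# Textbook ideal-gas relation behind acoustic thermometry (the real
# instrument also corrects for humidity, CO2 and acoustic effects):
#   c = sqrt(gamma * R * T / M)   =>   T = c**2 * M / (gamma * R)

R = 8.314462           # universal gas constant, J mol-1 K-1
M_DRY_AIR = 0.0289647  # molar mass of dry air, kg mol-1
GAMMA = 1.4            # heat capacity ratio of dry air

def temperature_from_sound_speed(c_m_per_s):
    """Air temperature in degrees Celsius from the speed of sound in m/s."""
    t_kelvin = c_m_per_s**2 * M_DRY_AIR / (GAMMA * R)
    return t_kelvin - 273.15

print(temperature_from_sound_speed(343.2))  # roughly 20 degrees Celsius
```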

Michael de Podesta's live experiments with the most accurate thermometer in human history:



To lighten up this post: I was asked to chair the metrology session because the organiser of the session (the convener) gave a talk himself. Talks are supposed to be 12 minutes, with 3 minutes for questions and changing to the next speaker. Because multiple sessions run at the same time and people may switch between them, it is important to stick to the schedule. People also need some time between the time blocks to recharge.

One speaker went past the 12 minutes and had his back towards me, so that I could not signal that his time was up. Thus I walked across the screen in front of him to the other side. This earned some praise on Twitter.

If you speak a foreign language (and are nervous) it can be hard to deviate from the prepared talk.

Satellite climate data

There were several talks on trying to make stable datasets from satellite measurements so that they are useful for climate change studies. Especially the early satellites were not intended for quantitative use, but only to look at the moving cloud systems. Also the later satellites were mostly designed for meteorological use rather than climate studies.

Ralf Quast gave an interesting talk on how the spectral response of the satellites deteriorated while in space. The sensitivity for visible light did not decline equally for all colours, but deteriorated faster for blues than for reds. This was studied by looking at several calibration targets that are expected to be stable: the Sahara desert, the dark oceans, and the bright tops of tropical convective clouds. The estimates for the post-launch measurements were similar to the pre-launch calibrations in the lab.
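
To make the idea concrete, here is a toy version of such in-flight monitoring; it is not Ralf Quast's method and all numbers are invented. If a target is assumed stable, the ratio of the measured to the expected reflectance should stay at one, and its downward drift estimates the loss of sensitivity per channel.

```python
import numpy as np

# Toy version of in-flight degradation monitoring (not Ralf Quast's
# method; all numbers are invented). For a stable target the ratio of
# measured to expected reflectance should stay at one; fitting
# ratio = exp(-rate * t) per channel estimates the loss of sensitivity.

def degradation_rate(years_since_launch, measured_over_expected):
    """Least-squares fit of log(ratio) = -rate * t through the origin."""
    t = np.asarray(years_since_launch, dtype=float)
    y = np.log(np.asarray(measured_over_expected, dtype=float))
    return -np.sum(t * y) / np.sum(t * t)

years = np.array([0.5, 1.0, 2.0, 4.0, 6.0])
blue  = np.array([0.990, 0.980, 0.955, 0.910, 0.870])  # degrades faster
red   = np.array([0.998, 0.995, 0.990, 0.980, 0.970])
print(degradation_rate(years, blue), degradation_rate(years, red))
```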

Gerrit Hall explained that there are 17 sources of uncertainty for visible satellite measurements, from the noise when looking at the Earth and when looking at space for partial calibration, to several calibration constants and comparisons to [[SI]] standards (the measurement units everyone but the USA uses).

The noise levels also change over time, typically going up over the lifetime of the instrument, but sometimes also going down for a period. The constant noise level in the design specification, which is often used for computations of uncertainties, is just a first estimate. When looking at space, the channels (measuring different frequencies of light) should be uncorrelated, but they are not always.
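
Two of these quantities are easy to illustrate with simulated space-view counts (this is only an illustration, not any project's actual implementation): the instrument noise as the spread of the deep-space view, and the correlation of that noise between channels, which ideally should be zero.

```python
import numpy as np

# Illustration only: estimate instrument noise from simulated deep-space
# views and check whether that noise is correlated between channels.

rng = np.random.default_rng(0)
space_counts = rng.normal(loc=1000.0, scale=2.5, size=(500, 3))  # 3 channels

noise_per_channel = space_counts.std(axis=0, ddof=1)   # changes over the lifetime
channel_correlation = np.corrcoef(space_counts, rowvar=False)

print(noise_per_channel)
print(channel_correlation)  # off-diagonal terms should be close to zero
```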

Global Surface Reference Network

Peter Thorne gave a talk about a future global surface climate reference network. I wrote about this network for climate change studies before.

A manuscript describing the main technical features of such a network is almost finished. The Global Climate Observing System of WMO is now setting up a group to study how we can make this vision a reality to make sure that future climatologists can study climate change with a much higher accuracy. The first meeting will be in November in Maynooth.

Global stilling

The 10-meter wind speed seems to be declining in much of the mid-latitudes, which is called "global stilling". It is especially prevalent in middle Europe (as the locals say; in my youth this was called eastern Europe). Over the last decade there seems to be an uptick again; see the graph to the right from The State of the Climate 2016.

Cesar Azorin-Molina presented the work of his EU project STILLING in a longer talk in the climate monitoring session, giving an overview of global stilling research. Stilling is also expected to be one of the reasons for the reduction in pan evaporation.

The stilling could be due to forest growth and urbanisation, which both make the surface rougher for the wind, but it could also be due to changes in the large-scale circulation. Looking at vertical wind profiles, one can get an idea of the roughness of the surface and thus study whether that is the reason, but not much such data is available over longer periods.
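
For the interested reader, here is a minimal sketch of how a wind profile constrains the surface roughness, assuming neutral conditions and the standard logarithmic wind law; the tower levels and wind speeds are invented. Because the wind speed is linear in the logarithm of the height, a straight-line fit gives the roughness length as the height at which the fitted wind drops to zero.

```python
import numpy as np

# Sketch under neutral conditions, with invented tower levels and winds:
# the logarithmic wind law u(z) = (u*/kappa) * ln(z / z0) is linear in
# ln(z), so a straight-line fit gives the roughness length z0 as the
# height at which the fitted wind speed drops to zero.

def roughness_length(heights_m, wind_speeds_m_per_s):
    slope, intercept = np.polyfit(np.log(heights_m), wind_speeds_m_per_s, 1)
    return np.exp(-intercept / slope)

heights = np.array([10.0, 40.0, 80.0, 120.0])  # measurement levels (invented)
winds   = np.array([4.1, 5.6, 6.3, 6.7])       # mean wind speeds (invented)
print(roughness_length(heights, winds))  # larger z0 means a rougher surface
```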

If you have such data, or know of such data, please contact Cesar. The same goes for normal wind data, which is hard to get, especially observations from developing countries. The next talk was about a European wind database and its quality control; this will hopefully improve the data situation in Europe.

This was part of the climate monitoring session, which has a focus on data quality; fittingly, Cesar also studied the influence of the ageing of the cup anemometers that measure the wind speed. Their ball bearings tend to wear out, producing lower observed wind speeds. By making parallel measurements with new equipment and instruments that were a few years old, he quantified this problem, which is quite big.

Because these anemometers are normally calibrated and replaced regularly, I would not expect this to produce problems for the long-term trend. Only if the wear is larger now than it was in the past would it create a trend bias. But it does create quite a lot of noise in the difference time series between one station and a neighbour, which makes relative homogenisation harder.
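
A small simulated example of why this matters for relative homogenisation (all numbers invented): the difference series between two neighbours removes the climate signal they share, so anything they do not share, such as the gradual slow-down of one station's anemometer between replacements, shows up as extra noise or spurious breaks.

```python
import numpy as np

# Simulated example (all numbers invented): wear reduces one station's
# readings between sensor replacements and adds noise to the difference
# series with its neighbour.

rng = np.random.default_rng(1)
months = 120
shared_climate = rng.normal(5.0, 1.5, months)             # signal both stations see
station = shared_climate + rng.normal(0.0, 0.2, months)
neighbour = shared_climate + rng.normal(0.0, 0.2, months)

# wear reduces the reading by up to 0.4 m/s, reset when the sensor is replaced
ageing = np.concatenate([np.linspace(0.0, -0.4, 60), np.linspace(0.0, -0.4, 60)])
difference = (station + ageing) - neighbour

print(difference.std())  # larger spread than for a well-maintained pair
```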



Marine humidity observations

My ISTI colleague Kate Willett was the recipient of the WCRP/GCOS International Data Prize 2016. She leads the ISTI benchmarking group and is especially knowledgeable when it comes to humidity observations. The prize was a nice occasion to invite her to talk about the upcoming HadISD marine humidity dataset. It looks to become a beautiful dataset with carefully computed uncertainties.

There has been a decline in the 2-meter relative humidity over land since about 2000, and it is thus interesting to see how it is changing over the ocean. Preliminary results suggest that the relative humidity is also declining over the ocean. Both the quality control of individual values and bias corrections are important.

Developing countries

There was a workshop on the exchange of information about European initiatives in developing countries. Saskia Willemse of Meteo Swiss organised it after her experiences during a sabbatical in Bhutan. As in the rest of science, a large problem is that funding is often only available for projects and equipment, while it takes a long time to lift an organisation to a higher level; people need to learn how to use the equipment in practice, and the equipment is often not interoperable.

More collaboration could benefit both sides. Developing countries need information to adapt to climate change and to improve weather predictions. To study the climate system, science needs high-quality observations from all over the world. For me it is, for example, hard to find out how measurements are made now, and especially how they were made in the past. We have no parallel measurements in Africa and few in Asia. The Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) has far too few observations in developing countries. We will probably run into the same problem again with a global station reference network.

At the next EMS (in Budapest) there will be a session on this topic to get a discussion going on how we can collaborate better. The organisers will reach out to groups already doing this kind of work at the WMO, UNEP and the World Bank. One idea was to set up a blog to get an overview of what is already happening.

I hope that it will be possible to have sustainable funding for weather services in poor countries, for capacity building and for making observations, in return for opening up their data stores. That would be something the UN climate negotiations could do via the [[Green Climate Fund]]. Compared to the costs of reducing greenhouse gases and adapting our infrastructure, the costs of weather services are small, and we need to know what will happen for efficient planning.

Somewhat related to this is the upcoming Data Management Workshop (DMW) in Peru, modelled after the European EUMETNET DMWs, but hopefully with more people from South and Central America. The Peru workshop is organised by Stefanie Gubler of the Swiss Climandes project and will be held from the 28th of May to the 1st of June 2018. More information will follow later.

Wet bulb temperature

For the heat stress of workers, the wet bulb temperature is important. This is the temperature of a well-ventilated thermometer covered in a wet piece of cloth. If there is some wind, the wet bulb temperature gives an indication of the thermal comfort of a sweating person.
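
For readers who want to compute it themselves, a commonly used empirical approximation is the fit by Stull (2011), which takes the air temperature in degrees Celsius and the relative humidity in percent and is valid near sea-level pressure; it is an approximation, not the formula used operationally for heat stress warnings.

```python
import math

# Wet bulb temperature from air temperature (degrees Celsius) and relative
# humidity (%), using the empirical fit of Stull (2011); an approximation
# valid near sea-level pressure for ordinary conditions.

def wet_bulb_temperature(t_celsius, rh_percent):
    t, rh = t_celsius, rh_percent
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh**1.5 * math.atan(0.023101 * rh)
            - 4.686035)

print(wet_bulb_temperature(20.0, 50.0))  # about 13.7 degrees Celsius
```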

The fun fact I discovered is that the weather forecasts for the wet bulb temperature are more accurate than those for the temperature and the relative humidity individually. There is even some skill up to three weeks in advance. Skill here only means that the weather prediction is better than using the climatic value. Any skill can have economic value, but forecasts that are sufficiently useful for the public would have to be much shorter-term.
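
One common way to express this notion of skill is a skill score relative to climatology, for example based on the mean squared error; the sketch below uses invented numbers and is only meant to make the definition concrete.

```python
import numpy as np

# "Skill" relative to climatology, expressed with the mean squared error:
#   skill = 1 - MSE_forecast / MSE_climatology
# Positive values mean the forecast beats always predicting the climatic
# value. All numbers below are invented.

def skill_score(observed, forecast, climatology):
    observed = np.asarray(observed)
    mse_forecast = np.mean((np.asarray(forecast) - observed) ** 2)
    mse_climatology = np.mean((climatology - observed) ** 2)
    return 1.0 - mse_forecast / mse_climatology

obs = np.array([14.2, 15.1, 13.8, 16.0])
fc  = np.array([13.9, 15.4, 14.1, 15.5])
print(skill_score(obs, fc, climatology=14.5))  # > 0 means some skill
```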

Plague

The prize for the best Q&A goes to the talk on the plague in the Middle Ages and its relationship with the weather in the preceding period (a somewhat cool previous summer, a somewhat warm previous winter and a warm summer: good rat weather).

Question: why did you only study the plague in the Middle Ages?
Answer: I am a mediaevalist.

Other observational findings

Ian Simpson studied different ways to compute climate normals (the averages over 30 years). The main differences between temperature datasets were in China, due to a difference between how China itself computes the daily mean temperature (from synoptic fixed-hour measurements at 0, 6, 12 and 18 hours universal time) and how most climatological datasets do it (from the minimum and maximum temperature). Apart from that, the main differences were seen when data was incomplete, because datasets use different methods to handle this.
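
The two conventions can give noticeably different daily means. A small illustration with one invented day of hourly temperatures (with a realistically asymmetric diurnal cycle; the synoptic hours are simply taken as array indices here, ignoring the offset between local and universal time):

```python
import numpy as np

# One invented day of hourly temperatures with a realistic, asymmetric
# diurnal cycle (hour 0 to hour 23):
temps = np.array([12.0, 11.6, 11.2, 10.9, 10.7, 10.5, 10.8, 12.0,
                  13.8, 15.6, 17.2, 18.5, 19.5, 20.2, 20.6, 20.5,
                  19.9, 18.8, 17.4, 16.2, 15.1, 14.2, 13.4, 12.7])

synoptic_mean = temps[[0, 6, 12, 18]].mean()       # fixed-hour mean
min_max_mean = (temps.min() + temps.max()) / 2.0   # (Tmin + Tmax) / 2

print(synoptic_mean, min_max_mean)  # the two daily means differ by ~0.6 degrees
```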

There was another example where the automatic mode (joint detection) of HOMER produced bad homogenisation results. The manual mode of HOMER is very similar to PRODIGE, which is a good, HOME-recommended method, but the joint detection part is new and has not been studied well yet. I would advise against using it on its own.

Lisa Hannak of the German weather service looked at inhomogeneities in parallel data: manual observations made next to automatic measurements. Because the two are so highly correlated, it is possible to see very small and quite frequent inhomogeneities. An interesting new field. Not directly related to EMS, but there will be a workshop on parallel data in November as part of the Spanish IMPACTRON project.

The European daily climate dataset ECA&D, which is often used to study changes in extreme weather, will soon have a homogenised version. Some breaks in earlier periods were not corrected because there were no good reference stations in those periods. I would suggest at least correcting the mean in such cases; that is better than doing nothing, and a large inhomogeneity in a dataset people expect to be homogenised is a problem.
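
To show the bookkeeping of such a minimal correction, here is a sketch with invented numbers and a hypothetical helper function: the segment before a known break is shifted so that its mean matches the segment after it. In practice the adjustment should come from comparison with whatever neighbours are available, because the jump in the candidate series itself also contains real climate change; this only removes the step in the mean, not changes in variability or seasonality.

```python
import numpy as np

# Bookkeeping sketch only (invented numbers, hypothetical helper): shift
# the segment before a known break so that its mean matches the segment
# after it.

def correct_mean_break(series, break_index):
    series = np.asarray(series, dtype=float).copy()
    adjustment = series[break_index:].mean() - series[:break_index].mean()
    series[:break_index] += adjustment
    return series

data = np.array([9.8, 10.1, 9.9, 10.2, 11.1, 10.9, 11.2, 11.0])
print(correct_mean_break(data, break_index=4))
```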

One of the things that seems to help us free meteorological and climate data is the trend towards open government. This means that, as much as possible, the data the government has gathered is made available to the public via an [[API]]. Finland is working on such an initiative and has also freed the data of its weather service. There are many people, and especially consultants, who use such data. We can piggyback on this trend.

One can also estimate humidity with GPS satellites. Such data naturally also need to be homogenised. Roeland Van Malderen works on a benchmark to study how well this homogenisation would work.

The Austrian weather service ZAMG is working on an update for the HISTALP dataset with temperature and precipitation for the Greater Alpine Region. The new version will use HOMER. Two regions are ready.

It was great to see that Mexico is working on the homogenisation of some of their data. Unfortunately the network is very sparse after the 90s, which makes homogenisation difficult and the uncertainty in the trends large.

2 comments:

  1. A thermometer that directly measures the temperature of the air?

    What is its practical point? How accurate is it compared to the usual means of measurement? Are we talking here about 100ths of a degree more accurate?

    Do we need accuracy to this level, or is it intended for highly specialist applications?

    Tonyb

  2. The radiation error is a considerable error, especially for summer maximum temperatures. Changes in this error are thus also an important error source for studying changes. More accuracy normally means that you can see new things. If the price were right, I would not mind making such measurements everywhere, but I was especially thinking of the global climate reference network, which should become the future stable climate backbone of the global climate observing system.

