
Thursday, 19 September 2019

European Meteorological Society Meeting highlights on station data quality and communication #EMS2019

Last week I was at the Annual Meeting of the European Meteorological Society in Copenhagen, Denmark. Here are the highlights for station data (quality) and communication.

Warming in Svalbard

Øyvind Nordli and colleagues estimated the warming on the Arctic island of Svalbard/Spitsbergen; see the figure below. They use the linear red line to estimate the total warming and claim 3.8°C of warming. I would say it warmed a whopping 6°C (11°F). The graph itself already suggests that such a linear-trend-based estimate will underestimate the total warming.

The monthly data was already published in 2014. At that time I would have called it 5°C of warming; recent years were very warm.
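To illustrate the statistical point with made-up numbers (not the Svalbard series): when most of the warming happens late in the record, a fitted linear trend multiplied by the period length stays well below the difference between the recent level and the early level. A minimal sketch:

```python
# Synthetic example (hypothetical numbers): flat until 1990, then strong warming,
# roughly 6 °C above the early level by the end of the record.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1900, 2020)
signal = np.where(years < 1990, 0.0, (years - 1990) * 0.2)
temps = signal + rng.normal(0.0, 1.0, years.size)          # add weather noise

slope = np.polyfit(years, temps, 1)[0]
trend_estimate = slope * (years[-1] - years[0])             # "total warming" from the trend line

decadal_estimate = temps[-10:].mean() - temps[:10].mean()   # last decade minus first decade

print(f"Trend x period length  : {trend_estimate:.1f} °C")
print(f"Last minus first decade: {decadal_estimate:.1f} °C")
```

The trend-based number comes out clearly lower, which is essentially the argument of the post on linear regression linked under related reading.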

They put a lot of work into the homogenization and even made modern parallel measurements to estimate the effect of past relocations of the station. The next paper, nearly published, is on the daily data, so that we can study changes in the number of growing, freezing or melting days.



Warming in the tropical hot spot

There is a small region high up in the air in the tropics that is dear to many climate "skeptics": the tropical hot spot. It is one of the coldest places on Earth and it warms strongly when the world is warming (for any reason). Because some observations do not show as much warming there, climate "skeptics" have declared this region to be the arbiter of climate truth and these observations and satellite estimates to be the best we have and the most informative about the changes of our climate.


The warming for a GISS model equilibrium run with a 2% increase in solar forcing, showing a maximum between 20°N and 20°S at around 300 mb (about 10 km).

Back to reality: it is really hard to make good measurements of such a cold place starting from such a tropically warm surface. The thermometer needs to be reliable over a range of about 100°C; that is a lot. It is not that easy to get a weather balloon up to such heights and such cold; the balloon expands enormously. On top of that, the countries making these measurements are among the poorest on Earth.

What I had not realized is how few weather balloons make it to such heights. A poster by Souleymane Sy showed this; see the figure below. For trend estimates the sharp drop-off above the 300 mb pressure level is especially worrying. Changes in this drop-off level due to changes in equipment can easily lead to changes in the estimated temperature. There is a part of the tropical hot spot below 300 mb; that is the part I would prioritize for trend estimates.


Number of radiosonde stations recording at least a given percentage of temperature and relative humidity monthly data at mandatory pressure levels from 1978 to the present for the Tropics (20° North to 20° South).

Weather forecasts in America and Europe

Communication at the EMS mostly means presenting the daily TV weather forecasts. There was a lovely difference between American and European presenters. The Americans explained how to dumb down your forecast as much as possible: a study found that most high school students in Alabama could not find their county on a map of Alabama, so the advice is to put a city name next to every number on the map. The Europeans presented their educational work.

Our Irish friends had made three one-hour shows about the weather on consecutive days between 7 and 8 pm, when normally the soaps are running: light information, presented in a botanical garden with a small audience.

German weather presenter Karsten Schwanke got a prize for his educational weather forecasts, which add information on climate change; for example, in the case of Dorian, showing the increase in sea surface temperature. For Schwanke providing context is the main task of TV weather; the local numbers are available from a weather app.


Karsten Schwanke explains the relationship between the jet stream, wildfires and the drought in Europe. In German.

An increasing problem is fake weather predictions. Amateurs who can make a decent map are often seen as reliable sources, which can be dangerous in case of severe weather.

American weather caster Jay Trobec reported that it is common to have weather information three times during a news block: at the beginning, in the middle and at the end. In Europe you just get the weather at the end. In America the weather is covered live, with a presenter explaining that everyone should leave the disaster area they themselves travelled to for the live broadcast. In Europe it is typically reported from the studio, with the weather shown in videos. Trobec stated that during severe weather people watch TV rather than use the internet.


Live hurricane weather. :-)

The difference is likely that there is not that much severe weather in Europe; you normally watch the weather to see if you have to take an umbrella with you, rarely to see whether your house will soon be destroyed. Live weather would be watching a presenter slowly getting wet in the drizzle. In addition, European public media have an educational mandate; they are paid by the public to make society better, while in America the media are commercial and will do whatever makes money.

In the harbor of Copenhagen is the famous Little Mermaid. Tourist boats went to see it, had to keep quite a distance and could only show her back. Typically the boats waited only a few seconds because there was nothing to see, but due to commercial pressure they had to have the Little Mermaid on their tour schedule. They follow demand, whether the outcome is good or not.

Short hits: communication

  • When asked what 30% probability of rain means for a weather prediction, most people gave the wrong answer: that 30% of the region would experience rain. The formally correct answer is that in 30% of the cases in which this prediction is made you will experience rain. To be fair to the people, I often explain the need for such a percentage by saying that in the case of showers we cannot tell whether it will rain in Bonn or Cologne. I feel this is quite a common explanation and the main effect. The German weather service is working on providing more detailed probabilistic information to weather brigades. That seems to be appreciated (and they answered the question mostly right).
  • Amanda Ruggeri won the journalism award for her story on sea level rise in Miami, which was reviewed by Climate Feedback, who found its scientific credibility to be "very high". Recommended read.
  • EUMETSAT operates the European meteorological satellites once they are in space. They also make MOOCs ([[Massive Open Online Courses]]): one on the oceans and one on the atmosphere. They are a great way to introduce these topics to new people, and in the future they plan to do more live.
  • Climate change is seen as the top global threat according to global polling by the Pew Research Center: in 2018, 67 percent of those polled saw climate change as a major threat to their country.
  • During a Q&A someone remarked that it would be good to talk more about the history of climatology, because people are spreading the rumor that climatology is a new field of science, trying to make it sound less solid.
  • In case I have any Finnish speaking readers, Finland has a two-yearly bulletin on weather and climate, recently revamped.
  • Copernicus has a "new" journal on statistical climatology, ideally suited for homogenization studies: Advances in Statistical Climatology, Meteorology and Oceanography (ASCMO). It does not have an Impact Factor yet, but judging from the editorial team and a few articles it is clearly a serious journal and will likely get one soon. It is worth building up such a journal to have an outlet for statistical/methodological studies on climate. We already published there once; post upcoming.
  • Did you know about STATMOS, an American Research Network for Statistical Methods for Atmospheric and Oceanic Sciences?

Short hits: observations

  • I had seen people use measurements of cosmic rays to estimate the soil moisture between the surface and the probe, but it was new to me to see them used to measure the amount of snow on top of a glacier.
  • Michal Zak of the Czech Hydrometeorological Institute and colleagues had an interesting way to estimate how urban a station is. They computed the absolute day-to-day differences of the maximum and of the minimum temperature and subtracted them from each other. If the maximum temperature varies more, a station is likely urban; if the minimum varies more, it is likely rural. For Prague and its surroundings the differences between stations were not particularly large and smaller than the seasonal cycle, but it could be a useful check. This could also be a measure that helps to select climatologically similar pairs of stations in relative statistical homogenization; a minimal sketch of the measure follows below this list.
  • The Homogenization Seminar in Budapest will be held from 18 to 21 May 2020. Announcements will follow, e.g., on the homogenization list. (I should write fewer mails to the homogenization list; at EMS someone asked to be added to the homogenization newsletter.)
  • Carla Mateus studied Data Rescue (DARE) as a scientific problem. By creating one really high quality transcribed dataset as a benchmark, she studied how accurately various groups transcribed historical observations. Volunteers of the Irish meteorological society were an order of magnitude more accurate (0.3% errors) than students (3.3%). Great talk.
  • Our colleagues from Catalonia studied the influence of the time of observation. Manual observations tend to be made at 8am, while automatic measurements often use a normal calendar day. This naturally mattered most for the minimum temperature. With statistical homogenization the small breaks are hard to find, to formulate it diplomatically.
  • Mónika Lakatos has ambitious plans to study changes in hourly precipitation in Hungary, motivated by increases in rain intensity (precipitation amount on rainy days).
  • Peter Domonkos studied how well network-wide trends are corrected in the new MULTITEST benchmark dataset (the presentation as pptx file). He found that his method (ACMANTv4) was able to reduce this error by about 30% and others were worse. It would be interesting to study what is different in the MULTITEST dataset or this analysis because the results of Williams et al. (2012) are much more optimistic; here 50 to 90% of the trend error is removed for similarly dense networks.
  • ACMANTv4 is on GitHub and about to be published. Some colleagues already used it. 
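A minimal sketch of the day-to-day variability measure described above, as I read it (not the authors' code): the indicator is positive when the maximum temperature is the more variable one, hinting at an urban setting.

```python
import numpy as np

def urbanness_indicator(tmax, tmin):
    """Mean absolute day-to-day change of Tmax minus that of Tmin (in °C).

    Positive values hint at an urban station, negative values at a rural one.
    """
    return np.mean(np.abs(np.diff(tmax))) - np.mean(np.abs(np.diff(tmin)))

# Tiny made-up example: one year of daily values with Tmax a bit noisier than Tmin.
rng = np.random.default_rng(1)
season = 8.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 365))
tmax = 15.0 + season + rng.normal(0.0, 2.0, 365)
tmin = 5.0 + season + rng.normal(0.0, 1.5, 365)
print(f"Indicator: {urbanness_indicator(tmax, tmin):+.2f} °C")
```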

Meteorological Glossaries

Miloslav Müller gave a talk on the new Slovak meteorological glossary, listing many other glossaries. So I now have a bookmark folder full of glossaries.
To finish with a great audience comment from the last day, not directly weather related: "In Russian education everything is explained, you do not have to remember or study." I loved that expression. That is the reason I studied physics. I also loved biology, but there you have to remember so much, and my memory is very poor for random stuff like the names of organisms. When you understand something, you (I?) automatically remember it; it does not even feel like learning.

Related reading

The IPCC underestimates global warming. This post explains why using linear regression underestimates total warming

Annual Meeting of the European Meteorological Society

Wednesday, 20 December 2017

Where is a climate data scientist to go in 2018?

Where is a climate data scientist to go in the next year? There are two oldies in Old Europe (EGU and EMS), a new conference on Early Instrumental Meteorological Series in Bern, and two new opportunities in the Southern Hemisphere (AMOS and the Data Management Workshop in Peru).

Early Instrumental Meteorological Series - Conference and Workshop

The Oeschger Centre for Climate Change Research (OCCR) organises a conference and workshop on Early Instrumental Meteorological Series. Hosts are Stefan Brönnimann (Institute of Geography) and Christian Rohr (Institute of History).

The first two days are organised like a conference, the last two days like a workshop. It will take place from 18 to 21 June 2018 at the University of Bern, Switzerland. Registration and abstract submission are due by 15 March 2018.

The goal of this conference and workshop is to discuss the state of knowledge on early instrumental meteorological series from the 18th and early 19th century. The first two days will be in conference-style and will encompass invited talks from different regions of the world (including participation by Skype) on existing compilations and on individual records, but also on instruments and archives as well as on climate events and processes. Contributed presentations (most will be posters) are welcome.

The third and fourth days target a smaller audience and are in workshop-style. The goal is to compile a detailed inventory of all early instrumental records: What has been measured, where, when and by whom? Is the location of the original data known? Have they been imaged, digitised, homogenised, or are they already in existing archives? This work will help to focus future data rescue activities.

Data Management Workshop in Peru

New is the “Data Management for Climate Services” Workshop taking place in Lima, Peru, from 28 May to 1 June 2018. I am trying to learn Spanish, but languages are clearly not my strong point. At a restaurant I would now be able to order cat, turtle and chicken. I think I will eat a lot of chicken. Fortunately the workshop will be held in two languages, with professional translation from Spanish to English and English to Spanish.

The workshop is inspired by the series of EUMETNET Data Management Workshops held every two years in Europe. It would be great if similar initiatives would be tried on other continents.

The abstract submission deadline is soon: 15 January 2018.
  • Session 1: METADATA
    • Methods for data rescue and cataloguing; data rescue projects.
    • Methods of metadata rescue for the past and the present; systems for metadata storage; applications and use of metadata.
    • Methods for quality control of different meteorological observations of different specifications; processes to establish operational quality control.
  • Session 2: DATA HOMOGENIZATION
    • Methods for the homogenization of monthly climate data; projects and results from homogenization projects; investigations on parallel climate observations; use of metadata for homogenization.
  • Session 3: GRIDDED DATA
    • Verification of gridded data based on observations; products based on gridded data; methods to produce gridded data; adjustments of gridded data in complex topographies such as the Andes.
  • Session 4: CLIMATE SERVICES
    • Products and climate information: methods and tools of climate data analysis; presentation of climate products and information; products on extreme events
    • Climate services in Ibero-America: projects on climate services in Ibero-America.
    • Interface with climate information users: approaches to building the interface with climate information users; experiences from exchanges with users; user requirements on climate services.

EMS Annual Meeting

This time the EMS Annual Meeting: European Conference for Applied Meteorology and Climatology will be in Budapest, Hungary, from 3 to 7 September 2018. The abstract submission deadline is still far away, but if you have ideas for sessions this is the moment to say something. The call for sessions is open until January 4th. New sessions can be proposed.

The session on "Climate monitoring: data rescue, management, quality and homogenization" is organised by Manola Brunet-India, Ingeborg Auer, Dan Hollis and me. If you have any suggestions for improvements of our please tell me. New this year is that we have explicitly added marine data. The forgotten 70% will be forgotten no longer.
Robust and reliable climatic studies, particularly those assessments dealing with climate variability and change, greatly depend on availability and accessibility to high-quality/high-resolution and long-term instrumental climate data. At present, a restricted availability and accessibility to long-term and high-quality climate records and datasets is still limiting our ability to better understand, detect, predict and respond to climate variability and change at lower spatial scales than global. In addition, the need for providing reliable, opportune and timely climate services deeply relies on the availability and accessibility to high-quality and high-resolution climate data, which also requires further research and innovative applications in the areas of data rescue techniques and procedures, data management systems, climate monitoring, climate time-series quality control and homogenisation.
In this session, we welcome contributions (oral and poster) in the following major topics:
  • Climate monitoring, including early warning systems and improvements in the quality of the observational meteorological networks.
  • More efficient transfer of the data rescued into the digital format by means of improving the current state-of-the-art on image enhancement, image segmentation and post-correction techniques, innovating on adaptive Optical Character Recognition and Speech Recognition technologies and their application to transfer data, defining best practices about the operational context for digitisation, improving techniques for inventorying, organising, identifying and validating the data rescued, exploring crowd-sourcing approaches or engaging citizen scientist volunteers, conserving, imaging, inventorying and archiving historical documents containing weather records.
  • Climate data and metadata processing, including climate data flow management systems, from improved database models to better data extraction, development of relational metadata databases and data exchange platforms and networks interoperability.
  • Innovative, improved and extended climate data quality controls (QC), including both near real-time and time-series QCs: from gross-errors and tolerance checks to temporal and spatial coherence tests, statistical derivation and machine learning of QC rules, and extending tailored QC application to monthly, daily and sub-daily data and to all essential climate variables.
  • Improvements to the current state-of-the-art of climate data homogeneity and homogenisation methods, including methods intercomparison and evaluation, along with other topics such as climate time-series inhomogeneities detection and correction techniques/algorithms, using parallel measurements to study inhomogeneities and extending approaches to detect/adjust monthly and, especially, daily and sub-daily time-series and to homogenise all essential climate variables.
  • Fostering evaluation of the uncertainty budget in reconstructed time-series, including the influence of the various data processes steps, and analytical work and numerical estimates using realistic benchmarking datasets.
The next step is to analyse the data to understand what happens with the climate system. For this there is the session: "Climate change detection, assessment of trends, variability and extremes".

AMOS-ICSHMO

It is too late to submit abstracts, but you can still visit the Joint 25th AMOS National Conference and 12th International Conference for Southern Hemisphere Meteorology and Oceanography, AMOS-ICSHMO 2018, to be held at UNSW Sydney from 5 to 9 February 2018.

New is a session on "Data homogenisation and other statistical challenges in climatology" organised by Blair Trewin and Sandy Burden.
This session is intended as a forum to present work addressing major statistical challenges in climatology, from the perspectives of both climatologists and statisticians. It is planned to have a particular focus on climate data homogenisation, including the potential for merging observations from multiple sources. However, papers on all aspects of statistics in climatology are welcome, including (but not limited to) spatial analysis and uncertainty, quality control, cross-validation, and extreme value and threshold analysis. Statistical analyses of temperature and rainfall will be of most interest, but studies using any meteorological data are welcome.
If I see it right, the session has four talks:
  • Testing for Collective Significance of Temperature Trends (Radan Huth)
  • Investigating Australian Temperature Distributions using Record Breaking Statistics and Quantile Regression (Elisa Jager)
  • A Fluctuation in Surface Temperature in Historical Context: Reassessment and Retrospective on the Evidence (James Risbey)
  • The Next-Generation ACORN-SAT Australian Temperature Data Set (Blair Trewin)
And there is a session on "Historical climatology in the Southern Hemisphere" organised by Linden Ashcroft, Joëlle Gergis, Stefan Grab, Ruth Morgan and David Nash.
Historical instrumental and documentary records contain valuable weather and climate data, as well as detailed records of societal responses to past climatic conditions. This information offers valuable insights into current and future climate research and climate change adaptation strategies. While the use of historical climate information is a well-developed field in the Northern Hemisphere, a vast amount of untapped resources exist in the southern latitudes. Recovering this material has the potential to dramatically improve our understanding of Southern Hemisphere climate variability and change. In this session we welcome interdisciplinary submissions on the rescue, interpretation and analysis of historical weather, climate, societal and environmental information across the Southern Hemisphere. This can include:
  • Instrumental data rescue (land and ocean) projects and practices
  • Comparison of documentary, instrumental and palaeoclimate reconstructions
  • Historical studies of extreme events
  • Past social engagement with weather, climate and the natural environment
  • Development of long-term climate records and chronologies.
It has five talks:
  • An Australian History of Anthropogenic Climate Change (Ruth Morgan)
  • Learning from the Present to Understand the Past: The Case of Precipitation Covariability between Tasmania and Patagonia (Martin Jacques-Coper)
  • Learning from Notorious Maritime Storms of the Late 1800’s (Stuart Browning)
  • Climate Data Rescue Activities at Meteo-France in the Southern Hemisphere (Alexandre Peltier)
  • Recovering Historic Southern Ocean Climate Data using Ships’ Logbooks and Citizen Science (Petra Pearce)

EGU General Assembly

EGU will be held in Vienna, Austria, from 8 to 13 April 2018. The abstract submission deadline is looming: the 10th of January.

The main session from my perspective is: "Climate Data Homogenization and Analysis of Climate Variability, Trends and Extremes", organised by Xiaolan Wang, Rob Roebeling, Petr Stepanek, Enric Aguilar and Cesar Azorin-Molina.
Accurate, homogeneous, and long-term climate data records are indispensable for many aspects of climate research and services. Realistic and reliable assessments of historical climate trends and climate variability are possible with accurate, homogeneous and long-term time series of climate data and their quantified uncertainties. Such climate data are also indispensable for assimilation in a reanalysis, as well as for the calculation of statistics that are needed to define the state of climate and to analyze climate extremes. Unfortunately, many kinds of changes (such as instrument and/or observer changes, changes in station location and/or environment, observing practices, and/or procedures) that took place during data collection period could cause non-climatic changes (artificial shifts) in the data time series. Such shifts could have huge impacts on the results of climate analysis, especially when it concerns climate trend analysis. Therefore, artificial shifts need to be eliminated, as much as possible, from long-term climate data records prior to their application.

The above described factors can influence different essential climate variables, including atmospheric (e.g., temperature, precipitation, wind speed), oceanic (e.g., sea surface temperature), and terrestrial (e.g., albedo, snow cover) variables from in-situ observing networks, satellite observing systems, and climate/earth-system model simulations. Our session calls for contributions that are related to:
  • Correction of biases, quality control, homogenization, and validation of essential climate variables data records.
  • Development of new datasets and their analysis (spatial and temporal characteristics, particularly of extremes), examining observed trends and variability, as well as studies that explore the applicability of techniques/algorithms to data of different temporal resolutions (annual, monthly, daily, sub-daily).
  • Rescue and analysis of centennial meteorological observations, with focus on wind data prior to the 1960s, as a unique source to fill in the gap of knowledge of wind variability over century time-scales and to better understand the observed slowdown (termed “stilling”) of near-surface winds in the last 30-50 years.
Also the session on "Atmospheric Remote Sensing with Space Geodetic Techniques" contains a fair bit of homogenisation. For most satellite datasets homogenisation is done very differently as they do not have as much redundant data, but the homogenisation of humidity datasets based on the geodetic data of the global navigation satellite system ([[GNSS]], consisting of GPS, GLONASS and Galileo) is very similar.
Today atmospheric remote sensing of the neutral atmosphere with space geodetic techniques is an established field of research and applications. This is largely due to the technological advances and development of models and algorithms as well as, the availability of regional and global ground-based networks, and satellite-based missions. Water vapour is under sampled in current operational meteorological and climate observing systems. Advancements in Numerical Weather Prediction Models (NWP) to improve forecasting of extreme precipitation, requires GNSS troposphere products with a higher resolution in space and shorter delivery times than are currently in use. Homogeneously reprocessed GNSS observations on a regional and global scale have high potential for monitoring water vapour climatic trends and variability, and for assimilation into climate models. Unfortunately, these time series suffer from inhomogeneities (for example instrumental changes, changes in the station environment), which can affect the analysis of the long-term variability. NWP data have recently been used for deriving a new generation of mapping functions and in Real-Time GNSS processing these data can be employed to initialise Precise Point Positioning (PPP) processing algorithms, shortening convergence times and improving positioning. At the same time, GNSS-reflectometry is establishing itself as an alternative method for retrieving soil moisture.
We welcome, but not limit, contributions on the subjects below:
  • Physical modelling of the neutral atmosphere using ground-based and radio-occultation data.
  • Multi-GNSS and multi-instruments approaches to retrieve and inter-compare tropospheric parameters.
  • Real-Time and reprocessed tropospheric products for forecasting, now-casting and climate monitoring applications.
  • Assimilation of GNSS measurements in NWP and in climate models.
  • Methods for homogenization of long-term GNSS tropospheric products.
  • Studies on mitigating atmospheric effects in GNSS positioning and navigation, as well as observations at radio wavelengths.
  • Usage of NWP data in PPP processing algorithms.
  • Techniques on retrieval of soil moisture from GNSS observations and studies of ground-atmosphere boundary interactions.
Also for ecological data homogenisation is often needed. Thus the session "Digital environmental models for Ecosystem Services mapping" by Miquel Ninyerola, Xavier Pons and Lluis Pesquer may also be interesting.
The session aims to focus on understanding, modelling, analysing and improving each step of the process chain for producing digital environmental surface grids (terrain, climate, vegetation, etc.) able to be used in Ecosystem Services issues: from the sensors (in situ as well as Earth Observation data) to the map dissemination. In this context, topics as data acquisition/ingestion, data assimilation, data processing, data homogenization, uncertainty and quality controls, spatial interpolation methods, spatial analysis tools, derived metrics, downscaling techniques, box-tools, improvements on metadata and web map services are invited. Spatio-temporal analyses and model contribution of large series of environmental data and the corresponding auxiliary Earth Observation data are especially welcome as well as studies that combine cartography, GIS, remote sensing, spatial statistics and geocomputing. A rigorous geoinformatics and computational treatment is required in all topics.
EGU also has a nice number of Open Science, Science communication and Publishing sessions [update: you can find links in my new post]. I hope I will find the time to also write about them in a next post.

Other conferences

The Budapest homogenisation workshop was this year, so I do not expect another one in 2018. In case you missed it, the proceedings are now published and contain many interesting extended abstracts.

Also the last EUMETNET Data Management Workshop was in 2017. If there are any interesting meetings that I missed, please tell us in the comments.

Wednesday, 13 September 2017

My EMS2017 highlights

When I did my PhD, our professor wanted everyone to write short reports about conferences they had attended. It was a quick way for him to see what was happening, but it is also helpful to remember what you learned and often interesting to read yourself again some time later. Here is my short report on last week's Annual Meeting of the European Meteorological Society (EMS), the European Conference for Applied Meteorology and Climatology 2017, 4–8 September 2017, Dublin, Ireland.

This post is by its nature a bit of loose sand, but there were some common themes: more accurate temperature measurements by estimating the radiation errors, the eternal problems of estimating various trends, collaborations between WEIRD and developing countries, and global stilling.

Radiation errors

Air temperature sounds so easy, but it is hard to measure. What we actually measure is the temperature of the sensor, and because air is a good insulator, the temperature of the air and of the sensor can easily differ; for example, due to self-heating of electric resistance sensors or heat flows from the sensor holder. The most important heat flow, however, is from radiation: the sun shining on the sensor, or the sensor losing heat via infra-red radiation to the cold atmosphere.



In the [[metrology]] (not meteorology) session there was a talk and several posters on the beautiful work by the Korea Research Institute of Standards and Science to reduce the influence of radiation on temperature measurements. They used two thermometers, one dark and one light coloured, to estimate how large the radiation errors are and to be able to correct for them. This set-up was tested outside and in their amazing calibration laboratory.
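The general idea, as I understand it (a minimal sketch, not the KRISS algorithm): each sensor reads the air temperature plus a radiation error proportional to the absorbed radiation, so two sensors with different, lab-calibrated sensitivities give two equations from which the radiation-free air temperature can be solved.

```python
def air_temperature(t_dark, t_light, c_dark, c_light):
    """Solve t_dark = t_air + c_dark * S and t_light = t_air + c_light * S for t_air.

    The sensitivities c_dark and c_light are hypothetical, lab-determined constants;
    S is the (unknown) effective radiative heating acting on both sensors.
    """
    s = (t_dark - t_light) / (c_dark - c_light)  # effective radiation term
    return t_dark - c_dark * s                   # radiation-corrected air temperature

# Made-up readings: the dark sensor heats up more in full sun.
print(f"{air_temperature(t_dark=21.3, t_light=20.7, c_dark=0.003, c_light=0.001):.1f} °C")
```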

These were sensors to measure the vertical temperature profile, going up to 15 km. Thus they needed to study the sensors over a huge range of temperatures (-80°C to 25°C); it is terribly cold at the tropopause. The dual sensor was also exposed to a large range of solar irradiances, from 0 to 1500 Watts per square meter; the sun is much stronger up there. The pressure ranged from 10 hPa to the 1000 hPa we typically have at the surface; the low pressure makes the air an even better insulator. The radiosondes drift with the wind, which reduces ventilation, so the wind only needed to be tested from 0 to 10 meters per second.

I have seen this kind of set-up used to study radiation errors for automatic weather stations; it would be great to also use it at operational stations to reduce radiation errors.

The metrological organisation of the UK is working on a thermometer that does not have a radiation error because it directly measures the temperature of the air. Michael de Podesta does so by measuring the speed of sound very accurately. The irony is that it is hard to see how well this new acoustic thermometer works outside the lab, because the comparison thermometers have radiation errors.

Michael de Podesta's live experiments with the most accurate thermometer in human history:



To lighten up this post: I was asked to chair the metrology session because the organiser of the session (the convener) gave a talk himself. The talks are supposed to be 12 minutes, with 3 minutes for questions and changing to the next speaker. Because multiple sessions run at the same time and people may switch, it is important to stick to the schedule. Also, people need some time between the time blocks to recharge.

One speaker went over his 12 minutes and had his back towards me, so that I could not signal that his time was up. Thus I walked across the screen, in front of him, to the other side. This earned some praise on Twitter.

If you speak a foreign language (and are nervous) it can be hard to deviate from the prepared talk.

Satellite climate data

There were several talks on making stable datasets from satellite measurements so they become useful for climate change studies. Early satellites especially were not intended for quantitative use, but only to look at moving cloud systems. Also later satellites were mostly designed for meteorological use rather than for climate studies.

Interesting was Ralf Quast's look at how the spectral response of satellites deteriorates while in space. The sensitivity to visible light did not decline similarly for all colours, but deteriorated faster for blues than for reds. This was studied by looking at several calibration targets expected to be stable: the Sahara desert, the dark oceans, and the bright tops of tropical convective clouds. The estimates for post-launch measurements were similar to pre-launch calibrations in the lab.

Gerrit Hall explained that there are 17 sources of uncertainty for visible satellite measurements, from the noise when looking at the Earth and when looking at space for partial calibration, to several calibration constants and comparisons to [[SI]] standards (the measurement units everyone but the USA uses).

The noise levels also change over time, typically going up over the lifetime, but sometimes also going down for a period. The constant noise level in the design specification, often used for computations of uncertainties, is just a first estimate. When looking at space the channels (measuring different frequencies of light) should be uncorrelated, but they are not always.

Global Surface Reference Network

Peter Thorne gave a talk about a future global surface climate reference network. I wrote about this network for climate change studies before.

A manuscript describing the main technical features of such a network is almost finished. The Global Climate Observing System of WMO is now setting up a group to study how we can make this vision a reality to make sure that future climatologists can study climate change with a much higher accuracy. The first meeting will be in November in Maynooth.

Global stilling

The 10-meter wind speed seems to be declining in much of the mid-latitudes, which is called "global stilling". It is especially prevalent in central Europe (as the locals say; in my youth this was called eastern Europe). Over the last decade there seems to be an uptick again; see the graph to the right from the State of the Climate 2016.

Cesar Azorin-Molina presented the work of his EU project STILLING in a longer talk in the climate monitoring session, giving an overview of global stilling research. Stilling is also expected to be one of the reasons for the reduction in pan evaporation.

The stilling could be due to forest growth and urbanization, both of which make the surface rougher to the wind, but it could also be due to changes in the large-scale circulation. Looking at vertical wind profiles one can get an idea of the roughness of the surface and thus study whether that is the reason, but not much such data is available over longer periods.

If you have such data, or know of such data, please contact Cesar. The same goes for normal wind data, which is hard to get, especially observations from developing countries. The next talk was about a European wind database and its quality control; this will hopefully improve the data situation in Europe.

This fit the climate monitoring session's focus on data quality, because Cesar also studied the influence of the ageing of the cup anemometers that measure the wind speed. Their ball bearings tend to wear out, producing lower observed wind speeds. By making parallel measurements with new equipment and instruments that were a few years old, he quantified this problem, which is quite big.

Because these anemometers are normally calibrated and replaced regularly, I would not expect this to produce problems for the long-term trend. Only if the wear is larger now than it was in the past would it create a trend bias. But it does create quite a lot of noise in the difference time series between one station and a neighbour, making relative homogenisation harder.



Marine humidity observations

My ISTI colleague Kate Willett was the recipient of the WCRP/GCOS International Data Prize 2016. She leads the ISTI benchmarking group and is especially knowledgeable when it comes to humidity observations. The prize was a nice occasion to invite her to talk about the upcoming HadISDH marine humidity dataset. It looks to become a beautiful dataset with carefully computed uncertainties.

There is a decline in the 2-meter relative humidity over land since about 2000, so it is interesting to see how this changes over the ocean. Preliminary results suggest that the relative humidity is also declining over the ocean. Both quality control of individual values and bias corrections are important.

Developing countries

There was a workshop on the exchange of information about European initiatives in developing countries. Saskia Willemse of MeteoSwiss organised it after her experiences from a sabbatical in Bhutan. As in the rest of science, a large problem is that funding is often only available for projects and equipment, while it takes a long time to lift an organisation to a higher level, people need to learn how to use the equipment in practice, and the equipment is often not interoperable.

More collaboration could benefit both sides. Developing countries need information to adapt to climate change and to improve weather predictions. To study the climate system, science needs high-quality observations from all over the world. For me it is, for example, hard to find out how measurements are made now, and especially how they were made in the past. We have no parallel measurements in Africa and few in Asia. The Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) has far too few observations in developing countries. We will probably run into the same problem again with a global station reference network.

At the next EMS (in Budapest) there will be a session on this topic to get a discussion going on how we can collaborate better. The organisers will reach out to groups already doing this kind of work at WMO, UNEP and the World Bank. One idea was to set up a blog to get an overview of what is already happening.

I hope that it will be possible to have sustainable funding for weather services in poor countries, for capacity building and for making observations, in return for opening up their data stores. That would be something the UN climate negotiations could do via the [[Green Climate Fund]]. Compared to the costs of reducing greenhouse gases and adapting our infrastructure, the costs of weather services are small, and we need to know what will happen for efficient planning.

Somewhat related to this is the upcoming Data Management Workshop (DMW) in Peru, modelled after the European EUMETNET DMWs, but hopefully with more people from South and Central America. The Peru workshop is organised by Stefanie Gubler of the Swiss Climandes project and will be held from 28 May to 1 June 2018. More information will follow later.

Wet bulb temperature

For the heat stress of workers, the wet bulb temperature is important. This is the temperature of a well-ventilated thermometer covered in a wet piece of cloth. If there is some wind, the wet bulb temperature gives an indication of the thermal comfort of a sweating person.
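For readers who want to play with it, here is a rough sketch of how a wet bulb temperature can be computed from air temperature and relative humidity, using the standard psychrometric relation and the Magnus formula (a simplified illustration, not the formulation behind the forecasts discussed below):

```python
import math

def saturation_vapour_pressure(t_c):
    """Magnus approximation for the saturation vapour pressure in hPa (t_c in °C)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def wet_bulb(t_c, rh_percent, pressure_hpa=1013.0):
    """Find Tw such that es(Tw) - gamma * (T - Tw) equals the actual vapour pressure."""
    e = saturation_vapour_pressure(t_c) * rh_percent / 100.0
    gamma = 6.5e-4 * pressure_hpa        # psychrometric "constant" in hPa/K
    lo, hi = -40.0, t_c                  # the wet bulb lies below the air temperature
    for _ in range(60):                  # simple bisection
        mid = 0.5 * (lo + hi)
        if saturation_vapour_pressure(mid) - gamma * (t_c - mid) > e:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(f"Wet bulb at 30 °C and 50% relative humidity: {wet_bulb(30.0, 50.0):.1f} °C")  # about 22 °C
```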

The fun fact I discovered is that the weather forecasts for the wet bulb temperature are more accurate than for the temperature and the relative humidity individually. There is even some skill up to 3 weeks in advance. Skill here only means that the weather prediction is better than using the climatic value. Any skill can have economic value, but sufficiently useful forecasts for the public would be much shorter-term.

Plague

The prize for the best Q&A goes to the talk on the plague in the Middle Ages and its relationship with the weather in the preceding period (a somewhat cool previous summer, a somewhat warm previous winter and a warm summer: good rat weather).

Question: why did you only study the plague in the Middle Ages?
Answer: I am a mediaevalist.

Other observational findings

Ian Simpson studied different ways to compute climate normals (the averages over 30 years). The main differences between temperature datasets were in China, due to a difference between how China itself computes the daily mean temperature (from synoptic fixed-hour measurements at 0, 6, 12 and 18 hours universal time) and how most climatological datasets do it (from the minimum and maximum temperature). Apart from that, the main differences were seen when data was incomplete, because datasets use different methods to handle this.
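To make the two conventions concrete, here is a toy comparison with made-up hourly temperatures for a single day (illustrative numbers only, not Ian Simpson's analysis):

```python
# Hypothetical hourly temperatures (°C) for one day, hour 0 to hour 23.
hourly = [16.0, 15.5, 15.0, 14.6, 14.3, 14.2, 14.8, 16.2, 18.0, 19.8, 21.4, 22.8,
          23.9, 24.7, 25.2, 25.4, 25.1, 24.4, 23.2, 21.8, 20.3, 18.9, 17.8, 16.8]

synoptic_mean = sum(hourly[h] for h in (0, 6, 12, 18)) / 4   # fixed synoptic hours
minmax_mean = (min(hourly) + max(hourly)) / 2                # (Tmin + Tmax) / 2

print(f"Mean of 0, 6, 12, 18 UTC: {synoptic_mean:.2f} °C")
print(f"(Tmin + Tmax) / 2       : {minmax_mean:.2f} °C")
```

The two numbers differ by a few tenths of a degree even for this smooth made-up day; systematic differences of this kind are exactly what shows up when datasets built on different conventions are compared.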

There was another example where the automatic mode (joint detection) of HOMER produced bad homogenisation results. The manual mode of HOMER is very similar to PRODIGE, which is a good HOME-recommended method, but the joint detection part is new and has not been studied well yet. I would advise against using it by itself.

Lisa Hannak of the German weather service looked at inhomogeneities in parallel data: manual observations made next to automatic measurements. Because they are so highly correlated it is possible to see very small inhomogeneities and quite frequent ones. An interesting new field. Not directly related to EMS, but there will be a workshop on parallel data in November as part of the Spanish IMPACTRON project.

The European daily climate dataset ECA&D, which is often used to study changes in extreme weather, will soon have a homogenised version. Some breaks in earlier periods were not corrected because there were no good reference stations in that period. I would suggest at least correcting the mean in such a case; that is better than doing nothing, and having a large inhomogeneity in a dataset people expect to be homogenised is a problem.

One of the things that seems to help us free meteorological/climate data is the trend towards open government. This means that, as much as possible, the data the government has gathered is made available to the public via an [[API]]. Finland is working on such an initiative and has also freed the data of its weather service. There are many people, especially consultants, using such data. We can piggyback on this trend.

One can also estimate humidity with GPS satellites. Such data naturally also need to be homogenised. Roeland Van Malderen works on a benchmark to study how well this homogenisation would work.

The Austrian weather service ZAMG is working on an update for the HISTALP dataset with temperature and precipitation for the Greater Alpine Region. The new version will use HOMER. Two regions are ready.

It was great to see that Mexico is working on the homogenisation of some of their data. Unfortunately the network is very sparse after the 90s, which makes homogenisation difficult and the uncertainty in the trends large.

Wednesday, 15 October 2014

Scientific meetings. The freedom to tweet and the freedom not to be tweeted



Some tweets from a meeting on Arctic sea ice reduction organised by the Royal Society recently caused a stir, when the speaker cried "defamation" and wrote letters to the employers of the tweeters. Stoat and Paul Matthews have the story.

The speaker's reaction was much too strong, in my opinion; most tweets were professional, and respectful critique should be allowed. I have only seen one tweet that should not have been written ("now back to science").

I do understand that the speaker feels like people are talking behind his back. He is not on twitter and even if he were: you cannot speak and tweet simultaneously. Yes, people do the same on the conference floors and in bars, but then you at least do not notice it. For balance it should be noted that there was also plenty of critique given after the talk; that people were not convinced was thus not behind his back.

Related to this (a blog post is just a long tweet), Paige Brown Jarreau asks:

Almost all scientists use both papers and meetings for communication. Tweets and blogs do not have that status; they could complement the informal discussions at meetings, but do differ in that everyone can read them, for all time. Social media will never be and should never be a substitute for the scientific literature.

Imagine that I had some preliminary evidence that the temperature increase since 1900 is nearly zero or that we may already have passed the two degree limit. I would love to discuss such evidence with my colleagues, to see if they notice any problems with the argumentation, to see if I had overlooked something, to see if there are better methods or data that would make the evidence stronger. I certainly would not like to see such preliminary ideas as a headline in the New York Times until I had gathered and evaluated all the evidence.

The problem with social media is that the boundaries between public and private are blurring. After talking about such a work at a conference, someone may tweet about it and before you know it the New York Times is on the telephone.

Furthermore, you always communicate with a certain person or audience and tailor your message to the receiver. When I write on my blog, I explain much more than when I talk to a colleague. Conversely, if someone hears or reads my conversation with a colleague, this may be confusing because of the lack of explanation and give the wrong impression. In person at a conference a sarcastic remark is easily detected; on the written internet sarcasm does not work, especially when it comes to the climate "debate", where no opinion is too exotic.

This is not an imaginary concern. The OPERA team at CERN that found that neutrinos could travel faster than light got into trouble this way. The team was forced to inform the press prematurely because blogs started writing about their finding. The team made it very clear that this was still very likely a measurement error: “If this measurement is confirmed, it might change our view of physics, but we need to be sure that there are no other, more mundane, explanations. That will require independent measurements.” But a few months after the error, a stupid loose cable, was found, the spokesperson and physics coordinator of OPERA had to resign. I would think that that would not have happened without all the premature publicity.

If I were to report that the two degree limit has already been reached, or that the raw temperature data had a severe cooling bias, a multimedia smear campaign without equal would start. Then I had better have the evidence in my pocket. The OPERA example shows that even if you do not overstate your case, your job is in jeopardy. Furthermore, such a campaign would make further work extremely difficult, even in a country like Germany that has Freedom of Research in its constitution to prevent political interference with science:
Arts and sciences, research and teaching shall be free.
(Kunst und Wissenschaft, Forschung und Lehre sind frei)
This fortunate fact, for example, disallows FOIA harassment of scientists.

That openness is not necessary in the preliminary stages fits with the pivotal role of the scientific literature in science. In an article a scientist describes the findings in all the detail necessary for others to replicate them and build on them. That is the moment everything comes into the open. If the article is written well, that is all one should need.

I hope that one day all scientific articles will be open access so that everyone can read them. I personally prefer to publish my data and code, if relevant, and would encourage all scientists to do so. However, how such a scientific article came into existence is nobody else's business.

All the trivial and insightful mistakes that were made along the way are nobody else's business either. And we need a culture in which people are allowed to make mistakes to get ahead in science. As a saying goes: if you are not wrong half of the time, you are not pushing yourself enough to the edge of our understanding. By putting preliminary ideas in the limelight too soon you stifle experimentation and exploration.

In the beginning of a project I often request a poster to be able to talk about it with my most direct colleagues, rather than requesting a talk, which would broadcast the ideas to a much broader audience. (A secondary reason is that a well-organised poster session also provides much more feedback.) Once the ideas have matured a talk is great to tell everyone about it.

If a scientist chooses to show preliminary work before publication, that is naturally fine. For certain projects the additional feedback may be valuable or even necessary, as in the case of collaboration with citizen scientists. And normally the New York Times will not be interested. However, we should not force people to work that way. It may not be ideal for every scientific question or person.

Opening up scientific meetings with social media and webcasts may intimidate (young) researchers and in this way limit discussion. Even at an internal seminar, students are often too shy to ask questions. On the days the professor is not able to attend, there are often many more questions. External workshops are even more intimidating, large conferences are worse still, and having to talk to a global audience because of social media is the worst of all.

More openness is not automatically more or better debate. It can stifle debate and also move it to smaller closed circles, which would be counterproductive.

Personally I do not care much who is listening; as long as the topic is science I feel perfectly comfortable. The self-selected group of scientists that blogs and tweets probably feels the same. However, not everyone is that way. Some people who are much smarter than I am would like to first sharpen their pencils and think a while before they comment. I know from feedback by mail and at conferences that many more of my colleagues read this blog than I had expected, because they hardly write comments. Writing something for eternity without first thinking about it for a few days, weeks or months is not everyone's thing. This is something we should take into account before we open up informal communication too much.

In spring I asked the organisers of a meeting how we should handle social media:
A question we may want to discuss during the introduction on Monday morning: Do people mind about the use of social media during the meeting? Twitter and blogs, for example. What we discuss is also interesting for people unable to attend the meeting, but we should also not make informal discussions harder by opening up to the public too much.
I was thinking about people saying in advance if they do not want their talk to be public, and maybe we should also keep the discussions after the talks private, so that people do not have to think twice about the correctness of every single sentence.
The organisation kindly asked me to refrain from tweeting. Maybe that was the reply because they were busy and had never considered the topic, but that reply was fine by me. How appropriate social media is depends on the context, and this was a small meeting, where opening it up to the world would have been a large change in atmosphere.

I guess social media is less of a problem at the general assembly of the European Geosciences Union (EGU), where you know that there is much press around, especially for some of the larger sessions, where there can be hundreds of scientists and some journalists in the audience. You would not use such large audiences to bounce around new ideas, but to explain the current state of the art.

Even at EMS and EGU the organisation provides some privacy: it is officially not allowed to take photos of the posters. I would personally prefer that every scientist can indicate whether this is okay for his or her poster (and if you make rules, you should also enforce them).

Another argument against tweeting is that it distracts the tweeter. At last week's EMS2014 there was no free Wi-Fi in the conference rooms (just in a separate working room). I thought that was a good thing. People were again listening to the talks, like in the past, and not tweeting, surfing or doing their email.

[UPDATE. Doug McNeall, the MetOffice guy that convinced me to start tweeting, has written a response on his blog.]

Related Reading

Letter to Science by Germán Orizaola and Ana Elisa Valdés: Free the tweet at scientific conferences

Kathleen Fitzpatrick (Director of Scholarly Communication) gives some sensible Advice on Academic Blogging, Tweeting, Whatever. For example: “If somebody says they’d prefer not to be tweeted or blogged, respect that” and “Do not let dust-ups such as these stop you from blogging/tweeting/whatever”.

I previously wrote about: The value of peer review for science and the press. It would be nice if the press would at least wait until a study is published. Even better would be to wait until several studies have been made. But that is something we, as scientists, cannot control.

* Photo by Juan Emilio used with a Creative Commons CC BY-SA 2.0 licence.

Tuesday, 18 September 2012

Future research in homogenisation of climate data – EMS2012 in Poland

By Enric Aguilar and Victor Venema

The future of research and training in homogenisation of climate data was discussed at the European Meteorological Society meeting in Lodz by 21 experts. Homogenisation of monthly temperature data has improved much in recent years, as seen in the results of the COST-HOME project. On the other hand, the homogenisation of daily and sub-daily data is still in its infancy, and this data is used frequently to analyse changes in extreme weather. It is expected that inhomogeneities in the tails of the distribution are stronger than in the means. To make such analyses of extremes more reliable, more work on daily homogenisation is urgently needed. This does not mean that homogenisation at the monthly scale is already optimal; much can still be improved.

Parallel measurements

Parallel measurements with multiple measurement set-ups were seen as an important way to study the nature of inhomogeneities in daily and sub-daily data. It would be good to have a large international database with such measurements. The regional climate centres (RCC) could host such a dataset. Numerous groups are working on this topic, but more collaboration is needed. Also more experiments would be valuable.

When gathering parallel measurements the metadata is very important. INSPIRE (an EU Directive) has a standard format for metadata, which could be used.

It may be difficult to produce an open database with parallel measurements, as European national meteorological and hydrological services are often forced to sell their data for profit. (Ironically, in the Land of the Free (markets), climate data is available freely; the public already paid for it with their tax money, after all.) Political pressure to free climate data is needed. Finland is setting a good example and will free its data in 2013.