
Saturday, January 16, 2016

The transition to automatic weather stations. We’d better study it now.

This is a POST post.

The Parallel Observations Science Team (POST) is looking across the world for climate records which simultaneously measure temperature, precipitation and other climate variables with a conventional sensor (for example, a thermometer) and modern automatic equipment. You may wonder why we go to the painstaking effort of locating and studying these records. The answer is easy: the transition from manual to automated records has an effect on climate series and on the analyses we perform on them.

In recent decades we have seen a major transition of the climate monitoring networks from conventional manual observations to automatic weather stations. It is recommended to compare the old and new instruments with side-by-side measurements, which we call parallel measurements, before the substitution takes effect. Climatologists have also set up many longer experimental parallel measurements. They tell us that in most cases the two sensors do not measure the same temperature or collect the same amount of precipitation. A different temperature is not only due to the change of the sensor itself: automatic weather stations also often use a different, much smaller screen to protect the sensor from the sun and the weather. Often the introduction of automatic weather stations is accompanied by a change in location and siting quality.

From studies of single temperature networks that made such a transition we know that it can cause large jumps; the observed temperatures at a station can go up or down by as much as 1°C. Thus potentially this transition can bias temperature trends considerably. We are now trying to build a global dataset with parallel measurements to be able to quantify how much the transition to automatic weather stations influences the global mean temperature estimates used to study global warming.

Temperature

This study is led by Enric Aguilar and the preliminary results below were presented at the Data Management Workshop in Saint Gallen, Switzerland last November. We are still in the process of building up our dataset. Up to now we have data from 10 countries: Argentina (9 pairs), Australia (13), Brazil (4), Israel (5), Kyrgyzstan (1), Peru (31), Slovenia (3), Spain (46), Sweden (8), USA (6); see map below.


Global map in which we only display the 10 countries for which we have data. The left map is for the maximum temperature (TX) and the right for the minimum temperature (TN). Blue dots mean that the automatic weather station (AWS) measures cooler temperatures than the conventional observation, red dots mean the AWS is warmer. The size indicates how large the difference is, open circles are for statistically not significant differences.

The impact of the automation can be better assessed in the box plots below.


The biases of the individual pairs are shown as dots and summarized per country with box plots. For countries with only a few pairs the box plots should be taken with a grain of salt. Negative values mean that the automatic weather stations are cooler. We have data for Argentina (AR), Australia (AU), Brazil (BR), Spain (ES), Israel (IL), Kyrgyzstan (KG), Peru (PE), Sweden (SE), Slovenia (SI) and the USA (US). Panels show the maximum temperature (TX), minimum temperature (TN), mean temperature (TM) and diurnal temperature range (DTR, TX-TN).

On average there are no real biases in this dataset. However, if you remove Peru (PE), the differences in the mean temperature are either small or negative. That a single country matters so much shows that our dataset is currently too small.
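For readers who want to make this kind of comparison with their own parallel series, the sketch below shows the basic computation. It is a minimal sketch rather than the POST team's actual code; the column names (country, pair_id, tx_aws, tx_conv) are hypothetical, and a plain one-sample t-test ignores the autocorrelation of daily differences, so treat its significance flags as a first indication only.

```python
# Minimal sketch (not the POST team's code): per-pair bias of the automatic
# weather station (AWS) relative to the conventional observation, plus a
# simple significance test. Column names are hypothetical.
import pandas as pd
from scipy import stats

def pair_biases(df):
    rows = []
    for (country, pair), g in df.groupby(["country", "pair_id"]):
        diff = (g["tx_aws"] - g["tx_conv"]).dropna()  # AWS minus conventional, in degrees C
        t_stat, p_value = stats.ttest_1samp(diff, 0.0)
        rows.append({"country": country, "pair_id": pair,
                     "bias": diff.mean(), "significant": p_value < 0.05})
    return pd.DataFrame(rows)

# Per-country summary corresponding to the box plots above:
# biases = pair_biases(parallel_data)
# print(biases.groupby("country")["bias"].describe())
```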

To interpret the results we need to look at the main causes of the differences. Important reasons are that Stevenson screens can heat up in the sun on calm days, while automatic sensors are sometimes mechanically ventilated. The automatic sensors are, furthermore, typically smaller and thus less affected by direct radiation hitting them than thermometers. On the other hand, in the case of conventional observations, the maintenance of the Stevenson screens (cleaning and painting) and the detection of other problems may be easier because they have to be visited daily. There are concerns that plastic screens become greyer with age and heat up more in the sun. Stevenson screens have more thermal inertia; they smooth fast temperature fluctuations and will thus show lower highs and higher lows.

The location also often changes with the installation of automatic weather stations. The USA was one of the early adopters. The US National Weather Service installed analogue semi-automatic equipment (MMTS) that did not allow for long cables between the sensor and the display inside a building. Furthermore, the technicians only had one day per station and as a consequence many of the MMTS systems were badly sited. Nowadays technology has advanced a lot and made it easier to find good sites for weather stations. This is maybe even easier now than it used to be for manual observations, because modern communication is digital and, if necessary, uses radio, making distance much less of a concern. The instruments can be powered by batteries, solar panels or wind, which frees them from the electricity grid. Some instruments store years of data and need just batteries.

In the analysis we thus need to consider whether the automatic sensors are placed in Stevenson screens and whether the automatic weather station is at the same location. Where the screen and the location did not change (Israel and Slovenia), the temperature jumps are small. Whether the automatic weather station reduces radiation errors by mechanical ventilation is likely also important. Because of these different categories, the number of datasets needed to get a good global estimate becomes larger. Up to now, these factors seem to be more important than the climate.

Precipitation

For most of these countries we also have parallel measurements for precipitation. The figure below was made by Petr Stepanek, who leads this part of the study.


Boxplots for the differences in monthly precipitation sums due to automation. Positive values mean that the manual observations record more precipitation. Countries are: Argentina (AG), Brazil (BR), the Czech Republic (CZ), Israel (IS), Kyrgyzstan (KG), Peru (PE), Sweden (SN), Spain (SP) and the USA (US). The width of the boxplots corresponds to the size of the given dataset.

For most countries the automatic weather stations record less precipitation. This is mainly due to smaller amounts of snow during the winter. Observers often put a snow cross in the gauge in winter to make it harder for snow to blow out of it again. Observers simply melt the snow gathered in a pot to measure precipitation, while early automatic weather stations did not work well with snow, and sticky snow piling up in the gauge may not be noticed. These problems can be solved by heating the gauge, but unfortunately the heater can also increase the amount of precipitation that evaporates before it can be registered. Such problems are known, and more modern rain gauges use different designs and likely have a smaller bias again.
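To illustrate how such a monthly comparison can be computed, here is a minimal sketch assuming a daily table with hypothetical columns date, precip_manual and precip_aws (in mm); it is not the code behind the figure above.

```python
# Minimal sketch: monthly precipitation sums for the manual gauge and the
# automatic weather station, and their difference (positive = manual records
# more, as in the box plots above). Column names are hypothetical.
import pandas as pd

def monthly_differences(daily):
    daily = daily.set_index(pd.to_datetime(daily["date"]))
    monthly = daily[["precip_manual", "precip_aws"]].resample("MS").sum()
    monthly["diff"] = monthly["precip_manual"] - monthly["precip_aws"]
    return monthly

# Seasonal check of the snow-related undercatch discussed above:
# monthly = monthly_differences(daily_data)
# print(monthly.groupby(monthly.index.month)["diff"].mean())
```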

Database with parallel data

The above results are very preliminary, but we wanted to show the promise of a global dataset with parallel data for studying biases in the climate record due to changes in observing practices. To proceed we need more datasets and better information on how the measurements were performed to make this study more solid.

In future we also want to look more at how the variability around the mean is changing. We expect that changes in monitoring practices have a strong influence on the tails of the distribution and thus on estimates of changes in extreme weather. Parallel data offer a unique opportunity to study this otherwise hard problem.

Most of the current data comes from Europe and South America. If you know of any parallel datasets especially from Africa or Asia, please let us know. Up to now, the main difficulty for this study is to find the persons who know where the data is. Fortunately, data policies do not seem to be a problem. Parallel data is mostly seen as experimental data. In some cases we “only” got a few years of data from a longer dataset, which would otherwise be seen as operational data.

We would like to publish the dataset after publishing our papers about it. Again, this does not seem to lead to larger problems. Sometimes people prefer to first publish an article themselves, which causes some delays, and sometimes we cannot publish the daily data itself, but “only” monthly averages and extreme-value indices. This makes the results less transparent, but these summary values contain most of the information.

Knowledge of the observing practices is very important in the analysis. Thus everyone who contributes data is invited to help in the analysis of the data and co-author our first paper(s). Our studies are focused on global results, but we will also provide everyone with results for their own dataset to gain a better insight into their data.

Most climate scientists would agree that it is important to understand the impact of automation on our records. So does the World Meteorological Organization. In case it helps you to convince your boss: the Parallel Observations Science Team is part of the International Surface Temperature Initiative (ISTI). It is endorsed by the Task Team on Homogenization (TT-HOM) of the World Meteorological Organization (WMO).

We expect that this endorsement and our efforts to raise awareness about our goals and their importance will help us to locate and study parallel observations from other parts of the world, especially Africa and Asia. We also expect to be able to get more data from Europe; the regional association for Europe of the WMO has designated the transition to automatic weather stations as one of its priorities and is helping us to get access to more data. We want datasets from all over the world to be able to assess whether the station settings (sensors, screens, data quality, etc.) have an impact, but also to understand whether different climates produce different biases.

If you would like to collaborate or have information, please contact me.



Related reading

The ISTI has made a series of brochures on POST in English, Spanish, French and German. If anyone is able to make further translations, that would be highly appreciated.

Parallel Observations Science Team of the International Surface Temperature Initiative.

Irrigation and paint as reasons for a cooling bias

Temperature trend biases due to urbanization and siting quality changes

Changes in screen design leading to temperature trend biases

Temperature bias from the village heat island

Tuesday, June 9, 2015

Comparing the United States COOP stations with the US Climate Reference Network

Last week the mitigation sceptics apparently expected climate data to be highly reliable and were complaining that an update led to small changes. Other weeks they expect climate data to be largely wrong, for example due to non-ideal micro-siting or urbanization. These concerns can be ruled out for the climate-quality US Climate Reference Network (USCRN). This is a guest post by Jared Rennie* introducing a recent study comparing USCRN stations with nearby stations of the historical network, to study the differences in the temperature and precipitation measurements.


Figure 1. These pictures show some of instruments from the observing systems in the study. The exterior of a COOP cotton region shelter housing a liquid-in-glass thermometer is pictured in the foreground of the top left panel, and a COOP standard 8-inch precipitation gauge is pictured in the top right. Three USCRN Met One fan-aspirated shields with platinum resistance thermometers are pictured in the middle. And, a USCRN well-shielded Geonor weighing precipitation gauge is pictured at the bottom.
In 2000 the United States started building a measurement network to monitor climate change, the so called United States Climate Reference Network (USCRN). These automatic stations have been installed in excellent locations and are expected not to show influences of changes in the direct surroundings for decades to come. To avoid loss of data the most important variables are measured by three high-quality instruments. A new paper by Leeper, Rennie, and Palecki now compares the measurements of twelve station pairs of this reference network with nearby stations of the historical US network. They find that the reference network records slightly cooler temperature and less precipitation and that there are almost no differences in the temperature variability and trend.

COOP and USCRN

The detection and attribution of climate signals often rely upon long, historically rich records. In the United States, the Cooperative Observer Program (COOP) has collected many decades of observations for thousands of stations, going as far back as the late 1800s. While the COOP network has become the backbone of the U.S. climatology dataset, non-climatic factors in the data have introduced systematic biases, which require homogenization corrections before they can be included in climatic assessments. Such factors include modernization of equipment, time-of-observation differences, changes in observing practices, and station moves over time. A subset of the COOP stations with long records is known as the US Historical Climatology Network (USHCN), which is the default dataset used to report on temperature changes in the USA.

Recognizing these challenges, the United States Climate Reference Network (USCRN) was initiated in 2000. Fifteen years after its inception, 132 stations have been installed across the United States, making sub-hourly observations of numerous weather elements using state-of-the-art instrumentation calibrated to traceable standards. To ensure high data quality, the temperature and precipitation sensors are well shielded, and for continuity each station has three independent sensors, so that hardly any data loss is incurred. Because of these advances, no homogenization correction is necessary.

Comparison

The purpose of this study is to compare observations of temperature and precipitation from closely spaced members of USCRN and COOP networks. While the pairs of stations are near to each other they are not adjacent. Determining the variations in data between the networks allows scientists to develop an improved understanding of the quality of weather and climate data, particularly over time as the periods of overlap between the two networks lengthen.

To ensure observational differences are the result of network discrepancies, comparisons were only evaluated for station pairs located within 500 meters. The twelve station pairs chosen were reasonably dispersed across the lower 48 states of the US. Images of the instruments used in both networks are provided in Figure 1.

The USCRN stations all have the same instrumentation: well-shielded rain gauges and mechanically ventilated temperature sensors. The COOP stations use two types of thermometers: a modern automatic electrical sensor known as the maximum-minimum temperature sensor (MMTS) and old-fashioned normal thermometers, which now have to be called liquid-in-glass (LiG) thermometers. Both COOP sensor types are naturally ventilated.

An important measurement problem for rain gauges is undercatchment: due to turbulence around the instrument, not all droplets land in the mouth of the gauge. This is especially important in the case of high winds and for snow, and it can be reduced by wind shields. The COOP rain gauges are unshielded, however, and have been known to underestimate precipitation in windy conditions. COOP gauges also include a funnel, which can be removed before snowfall events. The funnel reduces evaporation losses on hot days, but can also get clogged by snow.

Hourly temperature data from USCRN were averaged into 24-hour periods to match daily COOP measurements at the designated observation times, which vary by station. Precipitation data were aggregated into precipitation events and also matched with the respective COOP events.
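A rough sketch of this temporal matching is given below, under the simplifying assumption that the COOP observation hour is fixed for a given station; the names are made up for illustration and this is not the code used in the paper.

```python
# Minimal sketch: average hourly USCRN temperatures over 24-hour windows
# ending at the (station-specific) COOP observation hour, so that both
# networks describe the same period. Names are hypothetical.
import pandas as pd

def uscrn_daily_means(hourly, obs_hour):
    """hourly: Series of temperatures indexed by local timestamps;
    obs_hour: COOP observation hour, e.g. 7 for a 07:00 reading."""
    shifted = hourly.copy()
    # Shift the time axis so each 24-hour window ending at obs_hour
    # falls on a single calendar day, then take daily means.
    shifted.index = shifted.index - pd.Timedelta(hours=obs_hour)
    return shifted.resample("D").mean()

# daily_means = uscrn_daily_means(uscrn_hourly["temperature"], obs_hour=7)
```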

Observed differences and their reasons

Overall, COOP sensors in naturally ventilated shields reported warmer daily maximum temperatures (+0.48°C) and cooler daily minimum temperatures (-0.36°C) than USCRN sensors, which have better solar shielding and fans to ventilate the instrument. The magnitude of the temperature differences was on average larger for stations operating LiG systems than for those with the MMTS system. Part of the reduction in network biases with the MMTS system is likely due to the smaller-sized shielding, which requires less surface wind speed to be adequately ventilated.

While overall mean differences were in line with side-by-side comparisons of ventilated and non-ventilated sensors, there was considerable variability in the differences from station to station (see Figure 2). While all COOP stations observed warmer maximum temperatures, not all saw cooler minimum temperatures. This may be explained by differing meteorological conditions (surface wind speed, cloudiness), local siting (heat sources and sinks), and sensor and human errors (poor calibration, varying observation time, reporting error). While all are important to consider, only the meteorological conditions were examined further, by categorizing temperature differences by wind speed. The range in network differences for maximum and minimum temperatures seemed to decrease with increasing wind speed, although more so for maximum temperature, as the sensor shielding becomes better ventilated with increasing wind speed. Minimum temperatures are largely driven by local radiative and siting characteristics. Under calm conditions one might expect radiative imbalances between naturally and mechanically aspirated shields or between the differing COOP sensors (LiG vs MMTS). That, along with local vegetation and elevation differences, may help drive these minimum temperature differences.


Figure 2. USCRN minus COOP average minimum (blue) and maximum (red) temperature differences for collocated station pairs. COOP stations monitoring temperature with LiG technology are denoted with asterisks.

For precipitation, COOP stations reported slightly more precipitation overall (1.5%). As with temperature, this difference was not uniform across all station pairs. Comparing by season, COOP reported less precipitation than USCRN during the winter months and more precipitation in the summer months. The drier wintertime COOP observations are likely due to the lack of gauge shielding, but may also be affected by the added complexity of observing solid precipitation. An example is removing the gauge funnel before a snowfall event and then melting the snow to calculate the liquid equivalent of the snowfall.

Wetter COOP observations over the warmer months may have been associated with seasonal changes in gauge biases. For instance, observation errors related to gauge evaporation and wetting factor are more pronounced in warmer conditions. Because of its design, the USCRN rain gauge is more prone to wetting errors (some precipitation sticks to the wall and is thus not counted). In addition, USCRN does not use an evaporative suppressant to limit gauge evaporation during the summer, which is not an issue for the funnel-capped COOP gauge. The combination of elevated biases for USCRN through a larger wetting factor and enhanced evaporation could explain the wetter COOP observations. Another reason could be the spatial variability of convective activity. During the summer months, daytime convection can trigger unorganized thundershowers on a scale small enough that rain falls at one station but not at another. For example, in Gaylord, Michigan, the COOP observer reported 20.1 mm more than the USCRN gauge 133 meters away. Rain radar estimates showed nearby convection over the COOP station, but not over the USCRN station, confirming the COOP observation as valid.


Figure 3. Event (USCRN minus COOP) precipitation differences grouped by prevailing meteorological conditions during events observed at the USCRN station. (a) event mean temperature: warm (more than 5°C), near-freezing (between 0°C and 5°C), and freezing conditions (less than 0°C); (b) event mean surface wind speed: light (less than 1.5 m/s), moderate (between 1.5 m/s and 4.6 m/s), and strong (larger than 4.6 m/s); and (c) event precipitation rate: low (less than 1.5 mm/hr), moderate (between 1.5 mm/hr and 2.8 mm/hr), and intense (more than 2.8 mm/hr).

Investigating further, precipitation events were categorized by air temperature, wind speed, and precipitation intensity (Figure 3). Comparing by temperature, the results were consistent with the seasonal analysis, showing lower COOP values (higher USCRN) in freezing conditions and higher COOP values (lower USCRN) in near-freezing and warmer conditions. Stratifying by wind conditions is also consistent, indicating that the unshielded COOP gauges do not catch as much precipitation as they should, giving higher USCRN values. On the other hand, COOP reports much more precipitation in lighter wind conditions, due to the higher evaporation rate in the USCRN gauge. For precipitation intensity, USCRN observed less than COOP in all categories.
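The stratification itself is straightforward. Below is a minimal sketch, not the authors' code, that applies the thresholds from Figure 3 to a hypothetical table of per-event values.

```python
# Minimal sketch: classify each precipitation event by the USCRN mean
# temperature, mean wind speed and precipitation rate, then summarize the
# USCRN-minus-COOP differences per class. Column names are hypothetical.
import pandas as pd

def stratify_events(events):
    events = events.copy()
    events["diff"] = events["precip_uscrn"] - events["precip_coop"]
    inf = float("inf")
    events["temp_class"] = pd.cut(events["temp_mean"], [-inf, 0.0, 5.0, inf],
                                  labels=["freezing", "near-freezing", "warm"])
    events["wind_class"] = pd.cut(events["wind_mean"], [-inf, 1.5, 4.6, inf],
                                  labels=["light", "moderate", "strong"])
    events["rate_class"] = pd.cut(events["precip_rate"], [-inf, 1.5, 2.8, inf],
                                  labels=["low", "moderate", "intense"])
    return events

# events = stratify_events(event_data)
# print(events.groupby("temp_class")["diff"].median())
```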


Figure 4. National temperature anomalies for maximum (a) and minimum (b) temperature between homogenized COOP data from the United States Historical Climatology Network (USHCN) version 2.5 (red) and USCRN (blue).
Comparing the variability and trends between USCRN and homogenized COOP data from USHCN we see that they are very similar for both maximum and minimum national temperatures (Figure 4).

Conclusions

This study compared two observing networks that will be used in future climate and weather studies. Using very different approaches in measurement technologies, shielding, and operational procedures, the two networks provided contrasting perspectives of daily maximum and minimum temperatures and precipitation.

Temperature differences between stations in local pairings were partially attributed to local factors including siting (station exposure), ground cover, and geographical aspects (not fully explored in this study). These additional factors are thought to accentuate or minimize the anticipated radiative imbalances between the naturally and mechanically aspirated systems, which may have also resulted in seasonal trends. Additional analysis with more station pairs may be useful in evaluating the relative contribution of each local factor noted.

For precipitation, network differences also varied due to the seasonality of the respective gauge biases. Stratifying by temperature, wind speed, and precipitation intensity revealed these biases in more detail. COOP gauges recorded more precipitation in warmer conditions with light winds, where local summertime convection and evaporation in USCRN gauges may be a factor. On the other hand, COOP recorded less precipitation in colder, windier conditions, possibly due to observing error and lack of shielding, respectively.

It should be noted that all observing systems have observational challenges and advantages. The COOP network has many decades of observations from thousands of stations, but it lacks consistency in instrumentation type and observation time in addition to instrumentation biases. USCRN is very consistent in time and by sensor type, but as a new network it has a much shorter station record with sparsely located stations. While observational differences between these two separate networks are to be expected, it may be possible to leverage the observational advantages of both networks. The use of USCRN as a reference network (consistency check) with COOP, along with more parallel measurements, may prove to be particularly useful in daily homogenization efforts in addition to an improved understanding of weather and climate over time.




* Jared Rennie currently works at the Cooperative Institute for Climate and Satellites – North Carolina (CICS-NC), housed within the National Oceanic and Atmospheric Administration’s (NOAA’s) National Centers for Environmental Information (NCEI), formerly known as the National Climatic Data Center (NCDC). He received his master's and bachelor's degrees in Meteorology from Plymouth State University in New Hampshire, USA, and currently works on maintaining and analyzing global land surface datasets, including the Global Historical Climatology Network (GHCN) and the International Surface Temperature Initiative’s (ISTI) Databank.

Further reading

Ronald D. Leeper, Jared Rennie, and Michael A. Palecki, 2015: Observational Perspectives from U.S. Climate Reference Network (USCRN) and Cooperative Observer Program (COOP) Network: Temperature and Precipitation Comparison. Journal of Atmospheric and Oceanic Technology, 32, pp. 703–721, doi: 10.1175/JTECH-D-14-00172.1.

The informative homepage of the U.S. Climate Reference Network gives a nice overview.

A database with parallel climate measurements, which we are building to study the influence of instrumental changes on the probability distributions (extreme weather and weather variability changes).

The post, A database with daily climate data for more reliable studies of changes in extreme weather, provides a bit more background on this project.

Homogenization of monthly and annual data from surface stations. A short description of the causes of inhomogeneities in climate data (non-climatic variability) and how to remove it using the relative homogenization approach.

Previously I already had a look at trend differences between USCRN and USHCN: Is the US historical network temperature trend too strong?

Friday, April 24, 2015

I set a WMO standard and all I got was this lousy Hirsch index - measuring clouds and rain

Photo of lidar ceilometer in front of WMO building

This week we had the first meeting of the new Task Team on Homogenization of the Commission for Climatology. More on this later. This meeting was at the headquarters of the World Meteorological Organization (WMO) in Geneva, Switzerland. I naturally went by train (only 8 hours), so that I could write about scientists flying to meetings without having to justify my own behaviour.

The WMO naturally had to display meteorological instruments in front of the entrance. They are not exactly ideally sited, but before someone starts screaming: the real observations are made at the airport of Geneva.

What was fun for me to see was that they had tilted their ceilometer at a small angle. In the photo above, the ceilometer is the big white instrument at the front right of the lodge. A ceilometer works on the same principle as a radar, but it works with light and is used to measure the height of the cloud base. It sends out a short pulse of light and detects how long (short) it takes until light scattered by the cloud base returns to the instrument. The term radar stands for RAdio Detection And Ranging. A ceilometer is a simple type of lidar: LIght Detection And Ranging.
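The ranging arithmetic behind this is simple; the snippet below is only a back-of-the-envelope illustration (not vendor firmware) of how the cloud-base height follows from the round-trip time of the pulse, including the small correction for a tilted instrument.

```python
# Back-of-the-envelope illustration of lidar/ceilometer ranging: the range is
# half the round-trip time times the speed of light, and for a tilted
# instrument the vertical height is the range times the cosine of the tilt.
import math

C = 299_792_458.0  # speed of light in m/s

def cloud_base_height(round_trip_s, tilt_deg=0.0):
    slant_range = C * round_trip_s / 2.0  # one-way distance along the beam
    return slant_range * math.cos(math.radians(tilt_deg))

# A return after 10 microseconds corresponds to a cloud base near 1.5 km:
# print(cloud_base_height(10e-6))  # ~1499 m
```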

For my PhD and first postdoc I worked mostly on cloud measurements, and we used the same type of ceilometer, next to many other instruments. Clouds are very hard to measure and you need a range of instruments to get a reasonable idea of what a cloud looks like. The light pulse of the ceilometer is extinguished very quickly in a water cloud. Thus, just as we cannot see into a cloud with our eyes, the ceilometer cannot do much more than detect the cloud base.

We also used radars; the radiowaves transmitted by a radar are only weakly scattered by clouds. This means that the radio pulses can penetrate the cloud and you can measure the cloud top height. Large droplets, however, scatter radiowaves much more strongly than small ones. The small, freshly developed cloud droplets that are typically found at the cloud base are thus often not detected by the radar. Combining both radar and lidar, you can measure the cloud extent of the lowest cloud layer reasonably accurately.

You can also measure the radiowaves emitted by the atmosphere with a so-called radiometer. If you do so at multiple wavelengths, that gives you an idea of the total amount of cloud water in the atmosphere. It is hard to say at which height the clouds are, but we know that from the lidar and radar. If you combine radar, ceilometer and radiometer, you can measure the clouds quite accurately.

To measure very thin clouds, which the radiowave radiometer does not see well, you can add an infra-red (heat radiation) radiometer. Like the lidar, the infra-red radiometer cannot look into thick clouds, for which the radiowave radiometer is thus important. And so on.

Cheery tear drops illustrate the water cycle for kids. You may think that every drop of rain that falls from the sky, or each glass of water that you drink, is brand new, but it has always been here and is part of The Water Cycle.

Why is the lidar tilted? That is because of the rain. People who know rain from cartoons may think that a rain drop is elongated like a tear drop or like a drop running down a window. Free-falling rain drops are, however, actually wider than they are high. Small ones are still quite round due to the surface tension of the droplet, but larger ones deform more easily. Larger drops fall faster and thus experience more friction from the air. This friction is strongest in the middle and makes the droplet broader than high. If a rain drop gets really big, the drop base can become flat and even get a dip in the middle. The next step would be that the friction breaks up the big drop.

If a lidar is pointed vertically, it will measure the light reflected back by the flattened base of the rain drops. When their base is flat, drops will reflect almost like a mirror. If you point the lidar at an angle, the surface of the drop will be rounder and the drop will reflect the light in a larger range of directions. Thus the lidar will measure less reflected light coming back from rain drops when it is tilted. Because the aim of the ceilometer is to measure the base of the cloud, it helps not to see the rain too much. That improves the contrast.

I do not know whether anyone uses lidar to estimate the rain rate (there are better instruments for that), but even in that case the small tilt is likely beneficial. It makes the relationship between the rain rate and the amount of backscattered light more predictable, because it depends less on the drop size.

The large influence of the tilting angle of the lidar can be seen in the lidar measurement below. What you see is the height profile of the amount of scattered light for a period of about an hour. During this time, I changed the tilting angle of the lidar every few minutes to see whether this makes a difference. The angle away from the vertical in degrees is written near the bottom of the measurement. In the rain, below 1.8 km, you can see the effect of the tilting angle explained above.


The lidar backscatter (Vaisala CT-75K) in the rain as a function of the pointing angle (left). The angle in degrees is indicated by the big number at the bottom (zenith = 0). The right panel shows the profiles of the lidar backscatter, radar reflectivity (dBZ), and radar velocity (m/s) from the beginning (till 8.2 hrs) of the measurement. For more information see this conference contribution.

At the beginning of the above measurement (until 8.2 h), you can see a layer with only small reflections at 1.8 km. This is the melting layer, where snow and ice melt into rain drops. The small reflections you see between 2.5 and 2 km are thus the snow falling from the cloud, which is seen as a strong reflection at 2.5 km.

An even more dramatic example of a melting layer can be seen below at 2.2 km. The radar sees the melting layer as a strongly reflecting layer, whereas the melting layer is a dark band for the lidar.


Graph with radar reflection for 23rd April; click for bigger version.

Graph with lidar reflection for 23rd April; click for bigger version.

The snow reflects the light of the lidar more strongly than the melting particles. When snow or ice particles melt into rain drops, they become more transparent. Just watch a snowflake or hailstone melt in your hand. Snowflakes, furthermore, collapse and become smaller, and the number of particles per volume decreases because the melted particles fall faster. These effects reduce the reflectivity at the top of the melting layer, where the snow melts.

What is still not understood is why the reflectivity of the particles increases again below the melting layer. I was thinking of specular reflections by the flat bottoms of the rain drops, which develop when the particles are mostly melted and fall fast. However, you can also see this increase in reflections below the melting layer in the tilted lidar measurements. Thus specular reflections cannot explain it fully.

Another possible explanation: if the snowflake is very large, the drop it produces is too large to be stable and explodes into many small drops. This would increase the total surface of the drops a lot, and the amount of light that is scattered back depends mainly on the surface area. This probably does not happen as explosively in nature as in the laboratory example below, but maybe it contributes some.



To be honest, I am not sure whether we were the first ones to tilt the lidar to see the cloud base better. It is quite possible that the instrument is designed to be tilted for exactly this purpose. But if we were, and the custom spread all the way to the WMO headquarters, it would be one of the many ideas and tasks academics produce that do not lead to more citations or a better Hirsch index. These citations are unfortunately the main way in which managers and bureaucrats nowadays measure scientific output.

For my own publications, which I know best, I can clearly say that if I rank them for my own estimate of how important they are, you will get a fully different list than when you rank them for the number of citations. These two ranked lists are related, but only to a small degree.

The German Science Foundation (DFG) thus also rightly rejects in its guidelines on scientific ethics the assessment of individuals or small groups by their citation metrics (page 22). When you send a research proposal to the DFG you have to indicate that you read these guidelines. I am not sure whether all people involved with the DFG have read the guidelines, though.


Further information

A collection of beautiful remote sensing measurements.

On cloud structure. Essay on the fractal beauty of clouds and the limits of the fractal approximation.

Wired: Should We Change the Way NSF Funds Projects? Trust scientists more. Science is wasteful, if we knew the outcome in advance, it would not be science.

On consensus and dissent in science - consensus signals credibility.

Peer review helps fringe ideas gain credibility.

Are debatable scientific questions debatable?


* Cartoon of tear-shaped rain drop by USGS. The diagram of raindrop shapes is from NASA’s Precipitation Measurement Missions. Both can thus be considered to be in the U.S. public domain.