<p><i>Variable Variability, a blog by Victor Venema</i></p>

<h1>One more reason I dislike linking climate change and extinction</h1>
<p><i>2022-08-23</i></p>
<p>Life is full of suffering. Especially in its final stage. Sorry to bother you with this truism we spend so much effort keeping away. But visiting a hospital I found one more reason to dislike the talk about climate change and extinction.</p>
<p>Trump supporters using this argument, or its vaguer alternative that we should only care about climate change if it is catastrophic (CAGW, Catastrophic Anthropogenic Global Warming), are clearly arguing in bad faith. They would never ask themselves whether Trump should sign <a href="https://trumpwhitehouse.archives.gov/trump-administration-accomplishments/" rel="nofollow">his main legislative achievement</a>, a two-trillion-dollar giveaway <a href="https://www.cambridge.org/core/journals/review-of-international-studies/article/global-inequality-and-the-trump-administration/DC3768AC522B7ADF4C30E1F774017452">to corporations and the rich</a>, by asking whether <i>not</i> doing so would be catastrophic. Whether every baby, kid, adult and elder keeping their 6,000 dollars would lead to human extinction. (The money will have to be paid back some day, and without big changes it will not be the rich and the corporations doing so.) They do not argue like that about topics they care about. Doing it for climate change is typical nonsense from the US culture war.</p>
<p>Lately, people who care about climate change seem to have mirrored this talking point. It looks like people are thinking: if they say climate change is not catastrophic, I will say it is. (And without defining what you mean by that, you enter one of the more pointless twigs of the US culture war.) The term "catastrophic" does not convince people? Let's call it the end of civilization or the end of humanity. (While people are actually convinced: polls all around the world show majorities that see climate change as a problem. Insufficient action is due to incumbent power and politics.)</p>
<p>The end of humanity, or even "just" of civilization, is not in the IPCC reports. This simple fact makes me rather unpopular on the Reddit community on the collapse of civilization, where the cheerful people hang out. If you want to fight fire with fire and mimic Trump supporters in their lack of care about whether a claim is true, please note that it is also counterproductive to make people despair about solving climate change. Just like talking about inaction, rather than insufficient action. Saying we need <a href="https://www.un.org/sustainabledevelopment/climate-facts-and-figures/">to do 3 to 5 times more to stay below 1.5 °C or 2 °C warming</a> makes solving the problem sound much more doable than saying we are going extinct after decades of inaction.</p>
<p>Now for the argument that is new, at least to me. My reason to care about climate change is that it leads to more suffering: the more warming, the more suffering. Until extinction ends the suffering. So talking about extinction is kinda downplaying the problem.</p>
<p>Maybe I am strange that way. I am not a vegetarian. But I do care that the animals had a good life. That they are mostly outside enjoying the weather and having fun in each other's company. That they get real food, are healthy and are selected for their robustness, not maximum productivity. I wish those rules were much more strict.</p>
<p>Dying is naturally not nice for the ones you leave behind. And in case of our extinction, our cows, chickens and pigs would not like it, but most other species would thrive, and rejoice if they could. The suffering is the problem.</p>
<p>That we will not collapse due to climate change alone does not mean civilization will not collapse and may have to recover (which is another good reason to keep fossil fuels in the ground). My favorite animated science channel just made a video on this.</p>
<p><iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/W93XyXHI8Nw" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><br>
<i>In a Nutshell: Is Civilization on the Brink of Collapse?</i></p>
<h1>The 10th anniversary of the still unpublished Watts et al. (2012) manuscript</h1>
<p><i>2022-07-29</i></p>
<p>Anthony Watts:</p>
<blockquote>Something’s happened. From now until Sunday July 29th [2012], around Noon PST, WUWT will be suspending publishing. At that time, there will be a major announcement that I’m sure will attract a broad global interest due to its controversial and unprecedented nature.</blockquote>
<p>Watts suspended his holiday plans and put his blog on hold over the weekend to work on something really important. With this announcement, PR expert Watts created a nice buzz. Out came a deeply flawed manuscript on the influence of the immediate surroundings of weather stations (micro-siting) on temperature trends.</p>
<p>Even before reading it, the science internet was disappointed. David Appell responded: "<a href="https://davidappell.blogspot.com/2012/07/watts-et-al-clunk.html">Clunk. That, to me, seems to be the sound of the drama queen's preprint hitting the Internet.</a>" William Connolley: "<a href="https://scienceblogs.com/stoat/2012/07/29/watts-disappoints">Watts disappoints</a> ... its just a paper preprint. All over the world scientists produce draft papers and send them off for peer review. Only dramah queens pimp them up like this."</p>
<p>Roger Pielke Sr. burned another part of the scientific reputation built by his regional climate modelling work by <a href="https://pielkeclimatesci.wordpress.com/2012/07/29/comments-on-the-game-changer-new-paper-an-area-and-distance-weighted-analysis-of-the-impacts-of-station-exposure-on-the-u-s-historical-climatology-network-temperatures-and-temperature-trends-by-w/">writing a press release</a> about his godson's manuscript: </p>
<blockquote>"This paper is a game changer ... this type of analysis should have been performed by Tom Karl and Tom Peterson at NCDC, Jim Hansen at GISS and Phil Jones at the University of East Anglia (and Richard Muller). However, they apparently liked their answers and did not want to test the robustness of their findings.. ... Anthony’s new results also undermine the latest claims by Richard Muller of BEST ... His latest BEST claims are, in my view, an embarrassment."</blockquote>
<p>After all the obvious problems became clear (problems this eminent scientist somehow could not find himself), he wrote <a href="https://pielkeclimatesci.wordpress.com/2012/08/01/my-involvement-with-watts-et-al-2012-and-mcnider-et-al-2012-papers/">a new blog post</a>:</p>
<blockquote>"To be very specific, I did not play a role in their data analysis. He sent me the near final version of the discussion paper and I recommended added text and references. I am not a co-author on their paper. I am now working with them to provide suggestions as to how to examine the TOB question regarding its effect on the difference in the trends found in Watts et al 2012."</blockquote>
<p>The Watts et al. (2012) study is so <a href="http://www.skepticalscience.com/watts_new_paper_critique.html">fundamentally wrong in its basic design</a> and execution that it is still unpublished ten years later. Watts naturally keeps citing it anyway to claim one cannot trust observed temperature trends. This fits his new job at the Heartland Institute, an organization so immoral that it still works for Big Tobacco. </p><p>Below you can find some details on a recent study from Italy, which suggests that had Watts' study been done right, it would have found that micro-siting is a minor problem for climate trends.</p>
<p>The question of how micro-siting influences temperature observations is an interesting one. Expecting to see an influence on <i>trends</i> is another matter. I have no clue how that was supposed to work and Watts et al. (2012) also did not explain the extraordinary physics.</p>
<p>Even if such a thing existed, Watts et al. (2012) could not have found convincing evidence on <i>trends</i>. The most fundamental problem of the setup is that analysing trends requires information for at least two points in time, but the study only had siting information for one. Why this is a problem was <a href="https://scienceblogs.com/stoat/2012/07/29/watts-disappoints#comment-1774527">explained well at the time by Pete</a>:
</p><blockquote>Someone has a weather station in a parking lot. Noticing their error, they move the station to a field, creating a great big cooling-bias inhomogeneity. Watts comes along, and seeing the station correctly set up says: this station is sited correctly, and therefore the raw data will provide a reliable trend estimate.</blockquote>
<p>To see an influence of micro-siting you need something to compare with: either two points in time with information on micro-siting, or two or more points in space. Our Italian metrological colleagues of <a href="https://www.meteomet.org/">Meteomet</a> (metrology is the science of measurement, not to be confused with meteorology, the science of the weather) did the latter.</p>
<p><a href="https://doi.org/10.1002/joc.7044">Coppa et al. (2021)</a> installed a weather station only 1 meter from a road and, as comparison, a weather station 100 meters away from the road, perfectly sited in the middle of a grass field. More precisely, they installed seven stations at 1, 5, 10, 20, 30, 50 and 100 m from a two-lane asphalt road, raised half a meter above the grass, that leads to an airport near Turin, Italy.</p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9W4G-y1lZKS32vCgi9q3FHW2qRDev4IHRyAOJ7j_dqkqp7am1hU0odEBTekr15xz7YRWhbsf83OJmEF-kKJ8Cotg03rvtKjXvlKznSunAi2mcJNZC5q64slSgUs4DcVj5Gl1VC4bzGHc/s0/GHCNv3_raw_adjusted_difference.jpg" style="display: block; padding: 1em 0px; text-align: center;"><img alt="" border="0" data-original-height="374" data-original-width="561" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9W4G-y1lZKS32vCgi9q3FHW2qRDev4IHRyAOJ7j_dqkqp7am1hU0odEBTekr15xz7YRWhbsf83OJmEF-kKJ8Cotg03rvtKjXvlKznSunAi2mcJNZC5q64slSgUs4DcVj5Gl1VC4bzGHc/s0/GHCNv3_raw_adjusted_difference.jpg" /></a></div>
<p>Climatologically, the most important plot in the paper is the one below. Let me walk you through it. On the y-axis are the temperature differences in Celsius relative to the seventh station, the one 100 meters from the road. The plot shows six box plot triplets; these are the six temperature differences. The three colors are for the daily maximum temperature (white), the daily average temperature (red) and the daily minimum temperature (blue). Careful: it would be more common for red to denote the maximum temperature. The thick part of each box plot spans 50% of all observed temperature differences, and the horizontal bar inside it marks the mean temperature difference.</p>
<p>So the temperature difference between the station closest to the road and the well-sited station is ΔT₁, the triplet at the left. The maximum temperature close to the road is 0.12 °C warmer, the average temperature is about 0.2 °C warmer and the minimum temperature is 0.3 °C warmer. With increasing distance from the road these small effects gradually become smaller, which gives confidence that the differences, while small, are real. This is somewhat less true for the maximum temperature, which behaves more erratically.</p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQ4KClg0iCYOC9RNlNBGG07mUl3G6BmkEUD0hpeD5cMv1weSjs0A6wRSTUkIAHYBho_Td5yM0ltKRPMNatBdMWjHifDYgpSdA9TIwYpm_MTNR2u0p6sAvdxUNDJClalnhXVP1YlRbcL4Y/s1078/screen_shot_roads_results_climate.png" style="display: block; padding: 1em 0px; text-align: center;"><img alt="" border="0" data-original-height="627" data-original-width="1078" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQ4KClg0iCYOC9RNlNBGG07mUl3G6BmkEUD0hpeD5cMv1weSjs0A6wRSTUkIAHYBho_Td5yM0ltKRPMNatBdMWjHifDYgpSdA9TIwYpm_MTNR2u0p6sAvdxUNDJClalnhXVP1YlRbcL4Y/s600/screen_shot_roads_results_climate.png" width="600" /></a></div>
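<p>As a rough illustration of the statistic behind each box plot (my own sketch with synthetic numbers, not the authors' code or data), one can compute the daily temperature differences of a road-side station relative to the 100 m reference station:</p>

```python
import numpy as np

def delta_t_stats(station, reference):
    """Summarize daily temperature differences (station minus reference)."""
    diff = np.asarray(station) - np.asarray(reference)
    return {"mean": diff.mean(),              # horizontal bar in the box plot
            "q25": np.percentile(diff, 25),   # lower edge of the thick box
            "q75": np.percentile(diff, 75)}   # upper edge of the thick box

# Synthetic example: a station near the road reading a constant 0.3 degC
# warmer at night than the reference station (illustrative numbers only).
rng = np.random.default_rng(42)
t_ref = 10 + rng.normal(0, 3, 365)   # one year of daily minimum temperatures
t_road = t_ref + 0.3                 # constant warm bias near the road
stats = delta_t_stats(t_road, t_ref)
print(round(stats["mean"], 2))       # 0.3
```

<p>With real data the spread between the quartiles would of course be nonzero, and the triplet would repeat this calculation for the daily maximum, average and minimum series.</p>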
<p>This metrological study is important for climatology, even though it basically found a null effect. Understanding uncertainties in measurements helps us focus on the real problems. Unfortunately such studies are not cited much, and too often the importance of science is judged by the number of citations. This study clearly illustrates why that is a bad way to micro-manage science.</p>
<p>What does this mean for observed global warming trends? To make a worst case estimate, one could assume that all stations were perfectly sited on lush grasslands in the past and are now close to a road in a subtropical climate with harsh sunlight, giving a trend error of 0.2 °C in the mean temperature of land stations, which represent a third of the Earth's surface. Even with such unrealistic assumptions this would change the global temperature trend by much less than 10%.</p>
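<p>The back-of-the-envelope arithmetic behind that worst case can be written out explicitly. The 0.2 °C bias and the one-third land fraction are from the text above; the 1 °C of observed warming is my own round assumption for the comparison:</p>

```python
# Worst case: every land station drifted from perfect siting in the past
# to sitting next to a road today, adding the full 0.2 degC mean bias.
micrositing_bias = 0.2   # degC, mean-temperature effect from Coppa et al. (2021)
land_fraction = 1 / 3    # land share of the Earth's surface (as in the text)
observed_warming = 1.0   # degC, assumed observed global warming over the record

global_effect = micrositing_bias * land_fraction   # ~0.067 degC globally
relative_error = global_effect / observed_warming  # fraction of observed warming
print(f"{relative_error:.0%}")
```

<p>Even this deliberately unrealistic scenario changes the global trend by only a few percent, well below the 10% mentioned above.</p>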
<p>The opposite scenario might be more realistic. Climate stations often started close to buildings as the then expensive scientific instruments had to be read by observers. Nowadays it is easy to build an automatic climate station with autonomous power and radio communication far from buildings.</p>
<p>The upside of this being the 10th anniversary is that people could check the micro-siting of the stations again and thus have two points in time. It would likely give a null result, but that would be a valid result.</p>
<h2 style="text-align: left;">Related reading</h2>
<p><a href="https://variable-variability.blogspot.com/2012/07/blog-review-of-watts-et-al-2012.html">My quick review of the Watts et al. (2012) manuscript</a>.</p>
<div class="ref"><h3 style="text-align: left;">References</h3><div> <span class="author">Coppa, G</span>, <span class="author">Quarello, A</span>, <span class="author">Steeneveld, G-J</span>, <span class="author">Jandrić, N</span>, <span class="author">Merlone, A</span>, <span class="pubYear">2021</span>: <span class="articleTitle"><a href="https://iris.inrim.it/bitstream/11696/69030/4/joc.7044.pdf">Metrological evaluation of the effect of the presence of a road on near-surface air temperatures</a></span>. <i>International Journal of Climatology</i>. <span class="vol">41</span>: <span class="pageFirst">3705</span>–<span class="pageLast">3724</span>. <a class="linkBehavior" href="https://doi.org/10.1002/joc.7044">https://doi.org/10.1002/joc.7044</a></div>
<div> <span class="author">Ronald D. Leeper</span>, <span class="author">John Kochendorfer</span>, <span class="author">Timothy A. Henderson</span>, <span class="author">Michael A. Palecki</span>, <span class="pubYear">2019</span>: <span class="articleTitle">Impacts of Small-Scale Urban Encroachment on Air Temperature Observations</span>. <i>Journal of Applied Meteorology and Climatology</i>. <span class="vol">58</span>: <span class="pageFirst">1369</span>–<span class="pageLast">1380</span>. <a class="linkBehavior" href="https://doi.org/10.1175/JAMC-D-19-0002.1">https://doi.org/10.1175/JAMC-D-19-0002.1</a></div></div>
<h1>New German Sovereign Tech Fund will fund open source digital infrastructure to avert the next log4j</h1>
<p><i>2021-12-25</i></p>
<div style="float: right; margin-bottom: 10px; margin-left: 20px;"><a href="https://xkcd.com/2347/"><img alt="XKCD cartoon of an intricate tower made of blocks, all resting on a tiny block near the bottom, whose removal would topple the building. The top is called All modern digital infrastructure. The tiny block is marked as A project some random person in Nebraska has been thanklessly maintaining since 2003" border="0" data-original-height="978" data-original-width="770" src="https://blogger.googleusercontent.com/img/a/AVvXsEhttz9r2JCE6xN87_6dGeqoTRl2VAETePTtHXkFqKeLlDCAbr1inhBT7C_ntCp3cQFu0OJjIZQXuHJ-KSzwaIS6hDbn6xs8Zsbodz5kM7_u5X_10oOoDnC75AptGGf4vzICWe7Va4YRdsUgYBPDVN5IMkcMrqgrl8ZGwInOsegpN7LW-AT5o13UGMxY" width="335" /></a></div>
<p>The famous XKCD cartoon has resulted in an open source digital infrastructure fund. Thank you Randall.</p>
<p>Late in the afternoon, just before a national holiday, is not the best time to get attention. Which is probably the main reason that the press did not (yet) write about what Franziska Brantner (the new Green deputy minister for the economy) <a href="https://nitter.net/fbrantner/status/1474027542420070410">wrote on Twitter</a>:</p>
<blockquote>We will tackle the Sovereign Tech Fund! Log4j has shown that sustainably secured and reliable open source solutions are the basis for the innovative strength and digital sovereignty of the German economy. We will therefore promote open source enabling technologies from 2022 onwards.</blockquote>
<p><a href="https://en.wikipedia.org/wiki/Log4Shell">Log4j</a> is a 21-year-old, very widely used Java library; the recently discovered security vulnerability in it is easy to exploit and existed for almost a decade before being noticed. As Free and Open Source Software (FOSS) the library is used widely and produces a lot of value, despite there not being much funding for producing FOSS. In this way much of the digital economy depends on the dedication of unpaid hobbyists, as <a href="https://www.explainxkcd.com/wiki/index.php/2347:_Dependency">XKCD Explained</a> explains well. </p>
<p>The German Sovereign Tech Fund will step into this gap. We will have to see how the government will implement it, but the name comes from a feasibility study by the <a href="https://sovereigntechfund.de">Open Knowledge Foundation</a>, which proposed a fund to support "<i>the development, scaling and maintenance of digital and foundational technologies. The goal of the fund could be to sustainably strengthen the open source ecosystem, with a focus on security, resilience, technological diversity, and the people behind the code.</i>" </p>
<p>Such a fund had not explicitly made it into the coalition agreement of the new government, to the lament of the FOSS community, although it does fit the spirit of the agreement. </p><p>Deputy minister Franziska Brantner carbon copied Patrick Beuth, a journalist who <a href="https://www.spiegel.de/netzwelt/web/log4j-sicherheitsluecke-wie-loescht-man-ein-brennendes-internet-a-27729847-8e28-4187-b4a2-468a45137fb4">recently wrote about log4j</a> in the magazine Der Spiegel and mentioned the Sovereign Tech Fund as a solution. So log4j seems to have been the clincher.</p>
<p>This announcement adds to a period of hope for digital rights. For most of my life they have become worse: more privacy for the powerful, more vulnerability for us. Things which were protected in the analogue world (talking to each other, sending a letter) have been criminalized and subjected to surveillance. The fast creation of abusive monopolies is the official business model in Silicon Valley. Social media monopolies sprouted that do not care how much damage they do to society and our democracy, while Europe was increasingly becoming a digital colony. </p><p>However, lately, with the EU privacy law, the <a href="http://variable-variability.blogspot.com/2020/07/friendly-micro-blogging-Twitter-scientists-no-nasties-surveillance.html">rise of the Fediverse</a>, the upcoming EU <a href="https://en.alexandrageese.eu/digital-services-act-greens-efa-successes/">Digital Services Act</a> and a good <a href="https://netzpolitik.org/2021/koalitionsvertrag-das-plant-die-ampel-in-der-netzpolitik/">coalition agreement</a> in Germany, it is starting to look like it is actually possible for digital rights to improve.</p>
<p>This proposal is for a fund of 10 million Euro per year, which is a good start, especially if similar EU proposals also manage to get funded. There is also project funding for new software tools: <a href="https://prototypefund.de/en/">the Prototype Fund</a> in Germany, and the <a href="https://www.ngi.eu/about/">Next Generation Internet</a> (NGI) and <a href="https://www.ngi.eu/ngi-projects/ngi-zero/">NGI-zero</a> initiatives in Europe. </p><p>What I feel is still missing are stable public institutions where coders can jointly work on large tasks, such as maintaining Firefox or extending what is possible in the Fediverse. If we compare the situation in software to science, we now have funding for projects by the National Science Foundation and agencies, but there are no equivalents yet of the National Institutes of Health, research institutes or universities.<br /></p>
<p>More generally, we need a real solution to invest in goods and services with enormous societal and economic value that do not have much market value (research and development, security, (preventative) healthcare, weather services, justice, software, (digital) infrastructure, governance, media, ...). We are no longer in the 19th century. These kinds of goods are an increasingly large part of the future economy.</p>
<h2>Related reading</h2>
<p>Patrick Beuth (Der Spiegel): <a href="https://www.spiegel.de/netzwelt/web/log4j-sicherheitsluecke-wie-loescht-man-ein-brennendes-internet-a-27729847-8e28-4187-b4a2-468a45137fb4">Wie löscht man ein brennendes Internet?</a></p>
<p>XKCD Explained on the <a href="https://www.explainxkcd.com/wiki/index.php/2347:_Dependency">XKCD on software dependencies.</a></p><p><a href="https://en.alexandrageese.eu/wp-content/uploads/German-Government-Coalition-Agreement-Digital-Chapter.pdf">The digitization section of the coalition agreement in English.</a> <br /></p>
<p><a href="https://pretalx.c3voc.de/rc3-2021-cbase/talk/UQVKPQ/">Monday the 27th of December there is a session on the Sovereign Tech Fund at the remote Chaos Computer Congress.</a></p>
<p><a href="https://en.alexandrageese.eu/digital-services-act-greens-efa-successes/">Digital Services Act: Greens/EFA successes</a></p>
<p><a href="http://variable-variability.blogspot.com/2020/07/friendly-micro-blogging-Twitter-scientists-no-nasties-surveillance.html">Micro-blogging for scientists without nasties and surveillance </a></p>
<h1>We launched a new group to promote the translation of the scientific literature</h1>
<p><i>2021-05-06</i></p>
<p>
</p><p>Tell your story, tell your journey, they say. Climate Outreach advised: tell how you came to accept that climate change is a problem. Maybe I am too young, but although I am not yet 50, I already accepted as a kid that climate change was a risk we should care about.</p><p>Otherwise, too, I do not remember often suddenly changing my mind in a way I could talk about as a journey. Where the word "remember" may do a lot of the work. Is it useful not to remember such things, to make it easier on yourself to change your mind? Or do many people work with really narrow uncertainty intervals even when they do not have a clue yet?<br /></p><p>But when it comes to translations of scientific articles, I changed a lot. When I was doing cloud research I used to think that knowing English was just one of the skills a scientist needs. Just like logic, statistics, coding, knowing the literature, public speaking, and so on.</p><p>Working on historical climate data changed this. I regularly have to communicate with people from weather services all over the world, and many do not speak English (well), while they do work that is crucial for science. Given how hard we make it for them to participate, they do an amazing job; I guess the World Meteorological Organization translating all their reports into many languages helps.</p><p>The most "journey" moment was at the Data Management Workshop in Peru, where I was the only one not speaking Spanish. A colleague told me that she translated important scientific articles into Spanish and sent them by email to her colleagues. Just like Albert Einstein translated scientific articles into English for those who did not master the language of science at the time.</p><p>This got me thinking about a database where such translations could be made available: where you search for an article and can see which translations are available, or where you can search for translated articles on a specific topic.
Such a resource would make producing translations more worthwhile and would thus hopefully stimulate their production.<br /></p><p>After gathering literature and bookmarks on this topic and noticing who else was interested, I invited a group of people to see if we could collaborate. After a series of pandemic video calls, we decided to launch as a group, somewhat unimaginatively called "Translate Science". Please find below the part of our <a href="https://blog.translatescience.org/launch-of-translate-science/">launch blog post</a> about why translations are important.<br /></p><p>(To be fair to me, and I like being fair to me, for a fundamental science needing expensive instruments, such as cloud studies, it makes more sense to simply work in English. But for sciences that directly impact people (climate, health, agriculture), two-way communication within science, with the orbit around science and with society is much more important.</p><p>Even in the cloud sciences, though, I should probably have paid more attention to studies in other languages. One of our group members works on turbulence and droplets and found many worthwhile papers in Russian. I had never considered that and might have found some turbulent gems there as well.)</p><p><br /></p><blockquote><h2 style="text-align: left;">The importance of translated articles</h2><p>English as a common language has made global communication within science easier. However, it has made communication with non-English communities harder. For English speakers it is easy to overestimate how many people speak English, because we mostly deal with foreigners who do speak English. It is thought that about one billion people speak English. That means that seven billion people do not.
For example, at many weather services in the Global South only a few people master English, but they use the translated guidance reports of the World Meteorological Organization (WMO) a lot. For the WMO, as a membership organization of the weather services in which every weather service has one vote, translating all its guidance reports into many languages is a priority.</p>
<p>Non-English or multilingual speakers, in African and non-African countries alike, could participate in science on an equal footing given a reliable system in which scientific work written in a non-English language is accepted and translated into English (or any other language) and vice versa. <a data-id="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0238372" data-type="URL" href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0238372">Language barriers should not waste scientific talent.</a> </p>
<p>Translated scientific articles open science to regular people, science enthusiasts, activists, advisors, trainers, consultants, architects, doctors, journalists, planners, administrators, technicians and scientists. Such a lower barrier to participating in science is especially important on topics such as climate change, environment, agriculture and health. The eased knowledge transfer goes both ways: people benefit from scientific knowledge, and people have knowledge scientists should know about. Translations thus help both science and society. They aid innovation and help tackle the big global challenges in the fields of climate change, agriculture and health. </p>
<p>Translated scientific articles speed up scientific progress by tapping into more knowledge and avoiding double work. They thus improve the quality and efficiency of science. Translations can improve <a data-id="https://blogs.lse.ac.uk/covid19/2020/06/18/long-read-science-needs-to-inform-the-public-that-cant-be-done-solely-in-english/" data-type="URL" href="https://blogs.lse.ac.uk/covid19/2020/06/18/long-read-science-needs-to-inform-the-public-that-cant-be-done-solely-in-english/">public discourse, scientific engagement and science literacy</a>. The production of translated scientific articles also creates a training dataset to improve automatic translations, which for most languages is still lacking.</p></blockquote><p><a href="https://blog.translatescience.org/launch-of-translate-science/">The full post at the Translate Science blog explains more about who we are, what we would like to do to promote translations and how you can join.</a></p>
<h1>The confusing politics behind John Stossel asking Are We Doomed?</h1>
<p><i>2021-04-22</i></p>
<div class="separator" style="clear: both;"><a href="https://climatefeedback.org/evaluation/video-promoted-by-john-stossel-for-earth-day-relies-on-incorrect-and-misleading-claims-about-climate-change/" style="clear: left; display: block; float: left; padding: 1em 0px; text-align: center;"><img alt="" border="0" data-original-height="534" data-original-width="1024" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7ozdSNCopm66LN8rMB_cnTzatGR8XpktAfFw9ySRcsrLL7BtGhAGcSdGqbeKA_zOErwJBZ_MQMsdoyD7E3N2JBR32idTMakF83Kc9zYsYr9MbG4K48XiMZIQisv1Vhc82HYz4mLORlqk/s600/video-promoted-by-john-stossel-for-earth-day-relies-on-incorrect-and-misleading-claims-about-climate-change-1024x534.png" width="600" /></a></div><br clear="all" />
<p>As a member of Climate Feedback I just reviewed a YouTube video by John Stossel. In that review I could only respond to factual claims, which were the boring age-old denier evergreens. Thus, not surprisingly, the video got a solid <a href="https://climatefeedback.org/evaluation/video-promoted-by-john-stossel-for-earth-day-relies-on-incorrect-and-misleading-claims-about-climate-change/">"very low" scientific credibility</a> rating. But it got over 25 million views, so I guess responding was worth it.</p>
<p>The politics of the video were much more "interesting". As in: "May you live in interesting times". Other options would have been: crazy, confusing, weird.</p>
<p>That starts with the title of the video: "Are We Doomed?". Is John Stossel suggesting that damages are irrelevant if they are not world ending? I would be surprised if that were his general threshold for action. "Shall we build a road?" Well, "Are We Doomed?" "Should we fund the police?" Well, "Are We Doomed?" "Shall I eat an American taco?" Well, "Are We Doomed?"</p>
<p>Are we not to invest in a more prosperous future unless we are otherwise doomed? That does not seem to be the normal criterion for rational investments any sane person or corporation would use.</p>
<p>Then there is his stuff about sea level rise:</p>
<blockquote>"Are you telling me that people in Miami are so dumb that they are just going to sit there and drown?”</blockquote>
<p>That reminds me of a similarly dumb statement by public intellectual Ben Shapiro (I hope people hear the sarcasm; in the US you can never be sure) and the wonderful response to it by Hbomberguy:</p>
<iframe allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube-nocookie.com/embed/RLqXkYrdmjY?start=224" title="YouTube video player" width="560"></iframe>
<p>Bomber also concludes that this, this, ... whatever it is, has nothing to do with science:</p>
<blockquote>"How have things reached a point, where someone thinks they can get away with saying something this ridiculous in front of an audience of people? And how have things reached the point where some people in that audience won't recognize it for the obvious ignorant bullshit that it is? <br /></blockquote><blockquote>This led me down a particular hole of discovery. I realized that climate deniers aren't just wrong, they're obviously wrong. In very clear ways, and that makes the whole thing so much more interesting. How does this work if it's so paper thin?"</blockquote>
<p>Politically interesting is that in this video Stossel wants Floridians to get lost and Dutch people to pay an enormous price, while the next Stossel video Facebook suggests has the tagline: "Get off my property". And Wikipedia claims that Stossel is a "Libertarian pundit".</p>
<p>So do we have to accept whatever damages Stossel wants us to suffer? Do we have to leave our houses behind? Does Stossel get to destroy our communities and our family networks? Is Stossel selling authoritarianism, where he gets to decide who suffers? Or is Stossel selling markets with free voluntary transactions and property rights?</p>
<p>In America, lacking a diversity of parties, both ideologies live within the same (Republican) party, but they are two fundamentally different ideas. Either you are a Conservative and believe in property rights, or you are an Authoritarian and think you can destroy other people's property when you have the power.</p>
<p>You can reconcile these two ideas with the third ideological current in the Republican party: childish Libertarianism, where you get to pretend that the actions of person X never affect person Y. An ideology for teenagers and a lived reality for the donor class that funds US politics and media, who never suffer consequences for their terrible behavior.<br /></p>
<p>But in this video Stossel rejects this childish idea and accepts that Florida suffers damages:</p>
<blockquote>"Are you telling me that people in Miami are so dumb that they are just going to sit there and drown?” </blockquote>
<p>So, John Stossel, do you believe in property rights or don't you?</p>
Victor Venemahttp://www.blogger.com/profile/02842816166712285801noreply@blogger.com0tag:blogger.com,1999:blog-9093436161326155359.post-55134042204939900362021-04-16T14:09:00.007+01:002021-04-16T14:11:28.865+01:00Antigen rapid tests much less effective for screening than previously thought according to top German virologist Drosten <p>Hidden in a long German language podcast on the pandemic Prof. Dr. Christian Drosten talked about an observation that has serious policy implications.</p>
<p>At the moment this is not yet based on any peer-reviewed studies, but mostly on his observations and those of his colleagues running large diagnostic labs. So it is important to note that he is a top diagnostic virologist from Germany who specializes in emerging viruses and corona viruses and made the first SARS-CoV-2 PCR test.</p>
<p>In the Anglo-American news Drosten is often introduced as the German Fauci. This fits insofar as he is one of the most trusted national sources of information. But Drosten has much more relevant expertise: both corona viruses and diagnostic testing are his beat.</p>
<p>Tim Lohn wrote an article about this in Bloomberg, "<a href="https://www.bloomberg.com/news/articles/2021-04-14/rapid-covid-tests-are-missing-early-infections-virologist-says">Rapid Covid Tests Are Missing Early Infections, Virologist Says</a>", and found two experts making similar claims.</p>
<p>Let me give a longer and more technical explanation of what Prof. Dr. Christian Drosten claims than Tim Lohn does. Especially because there is no peer-reviewed study yet, I feel the explanation is important.</p>
<p>If you have COVID symptoms (day 0), sleep on it and test the next day, the antigen tests are very reliable. But on day zero itself, and especially on the one or two days before, when you were already infectious, they are not as reliable. So they are good for (self-)diagnosis, but less good for screening, for catching those first days of infectiousness. The PCR tests are sensitive enough for those pre-symptomatic cases, if only people would test with PCR that early and would immediately get the result.</p>
<div class="separator" style="clear: both;"><a href="https://arxiv.org/ftp/arxiv/papers/2103/2103.04979.pdf" style="display: block; padding: 1em 0; text-align: center; clear: left; float: left;"><img alt="" border="0" width="600" data-original-height="433" data-original-width="776" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJqFGrk0Aq70wqGu9bIu-WHRA7KxeKjHdPMqLYeKD6U8F09dRzoBYE_TpWz6sW_GOA4XIu1V41UjjiX0J_mDOHeyEjkGRZtAKstJ76dMu3o3IcJ3WoEsakG2l1HcbPPy7hQQIKGFPag0g/s600/Screenshot_2021-04-16+Compact+Copy+of+EJEP+Submission+of+Rapid+antigen+tests+their+sensitivity%252C+benefits+for+epidemic+contr%255B...%255D.png"/></a></div><br clear="all">
<p><i>Figure from Jitka Polechová et al.</i></p>
<p>In those pre-symptomatic days there is already a high viral load, but this is mostly active virus. The antigen test detects the presence of the capsid, the protective shell of the virus. The PCR test detects virus RNA. When infecting a cell, the capsid proteins are produced first, before the RNA is produced. So in that respect one might expect the rapid tests to be able to find the virus a few hours earlier.</p>
<p>But here we are talking about a difference of a few days. The antigen test can best detect capsids in a swab sample when epithelial cells die and mix with the mucus, which takes a few days. So the difference between the days before and after symptom onset is the amount of dead virus material, which the rapid tests need to give reliable results. That is why the antigen tests predict infectiousness well in the time after symptom onset, but possibly not in those early days.</p>
<p>This was not detected before because the samples used to study how well the tests work were mostly from symptomatic people; it is hard to get positive samples from people who are infectious but not yet symptomatic. Because you do not often have pre-symptomatic cases with both a PCR and an antigen test, Drosten's observations are also based on just a few cases. He strongly encouraged systematic studies to be made and published, but this will take a few months.</p>
<p>In the Bloomberg article Tim Lohn quotes Rebecca Smith who found something similar:</p>
<blockquote>In a <a href="https://www.medrxiv.org/content/10.1101/2021.03.19.21253964v2.full.pdf">paper</a> published in March -- not yet peer reviewed -- researchers led by Rebecca L. Smith at the University of Illinois at Urbana-Champaign found that, among other things, PCR tests were indeed better at detecting infections early on than a Quidel rapid antigen test. But the difference narrowed after a few days, along with when the different tests were repeatedly used on people.</blockquote>
<p>The article also quotes Jitka Polechová of the University of Vienna, who wrote a <a href="https://arxiv.org/ftp/arxiv/papers/2103/2103.04979.pdf">review</a> comparing PCR tests to antigen tests:</p>
<blockquote>“Given that PCR tests results are usually not returned within a day, both testing methods are similarly effective in preventing spread if used correctly and frequently.”</blockquote>
<p>This is a valid argument for comparing the tests when they are used for diagnostics or as an additional precaution for dangerous activities that have to take place.</p>
<p>However, at least in Germany, rapid tests are also used as part of opening up the economy. Here people can, for example, go to the theatre or a restaurant after having been tested. This is something one would not use a PCR test for, because it would not be fast enough. These people at theatres and restaurants may think they are nearly 100% safe, but actually 3 of the on average 8 infectious days would not be detected. If, in addition, people behave more dangerously because they think they are safe, opening a restaurant this way may not be much less dangerous than opening it without any testing.</p>
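<p>To make that screening arithmetic concrete, here is a tiny back-of-the-envelope sketch. The figures (on average 8 infectious days, the first 3 of them undetected by rapid tests) come from the paragraph above; the assumption that an infectious visitor is equally likely to show up on any of those days is mine.</p>

```python
# Assumed figures from the text: on average 8 infectious days per case,
# of which the first 3 (pre-/early-symptomatic) are missed by antigen tests.
infectious_days = 8
undetected_days = 3

# If an infectious visitor is equally likely to arrive on any of those days,
# this is the chance the entrance test waves them through anyway.
p_missed = undetected_days / infectious_days
print(f"{p_missed:.0%} of infectious visits would slip through screening")
```

<p>So under these assumptions, even perfect compliance with entrance testing still lets more than a third of infectious visits through.</p>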
<p>So we have to rethink this way of opening up indoor activities and rather try to meet people outside.</p>
<h2>Related reading</h2>
<p>Original source: Das Coronavirus-Update von NDR Info, edition 84: "<a href="https://www.ndr.de/nachrichten/info/podcast4684.html">(84) Nicht auf Tests und Impfungen verlassen</a>" (Do not rely on tests and vaccinations). Time stamp: "00:48:09 Diagnostik-Lücke bei Schnelltests" (diagnostic gap of rapid tests)</p>
<p>Northern German public media (NDR) article: '<a href="https://www.ndr.de/nachrichten/info/Drosten-Schnelltests-sind-wohl-weniger-zuverlaessig-als-gedacht,coronavirusupdate178.html">Drosten: "Schnelltests sind wohl weniger zuverlässig als gedacht."</a>' Translated: <a href="https://translate.google.com/translate?sl=auto&tl=en&u=https://www.ndr.de/nachrichten/info/Drosten-Schnelltests-sind-wohl-weniger-zuverlaessig-als-gedacht,coronavirusupdate178.html">Drosten: "Rapid tests are probably less reliable than expected"</a></p>
<p>Tim Lohn in Bloomberg: "<a href="https://www.bloomberg.com/news/articles/2021-04-14/rapid-covid-tests-are-missing-early-infections-virologist-says">Rapid Covid Tests Are Missing Early Infections, Virologist Says.</a>"</p>
<p>Jitka Polechová, Kory D. Johnson, Pavel Payne, Alex Crozier, Mathias Beiglböck, Pavel Plevka, Eva Schernhammer. <a href="https://arxiv.org/ftp/arxiv/papers/2103/2103.04979.pdf">Rapid antigen tests: their sensitivity, benefits for epidemic control, and use in Austrian schools.</a> Preprint, not peer reviewed.</p>
Victor Venemahttp://www.blogger.com/profile/02842816166712285801noreply@blogger.com0tag:blogger.com,1999:blog-9093436161326155359.post-53701563485608287962021-01-22T15:58:00.002+00:002021-02-06T23:25:08.994+00:00 New paper: Spanish and German climatologists on how to remove errors from observed climate trends<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNejnJLqNi6n5HjLRsJ5lM-hNPqvyj7aopSIK8gDOPCwy80fe1AHI5qal0kjBeMpqCJYYY1Y8-G-WjWfY2HZnfkg7fjrSMKG2CvhXQ8vrGzOJYKYpLa3hgqJDr9P0O2Bo_Z4913Ak7Tfw/s0/French_screen_Stevenson_screen_SCREEN.jpg" style="display: block; padding: 1em 0px; text-align: center;"><img alt="" border="0" data-original-height="355" data-original-width="472" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNejnJLqNi6n5HjLRsJ5lM-hNPqvyj7aopSIK8gDOPCwy80fe1AHI5qal0kjBeMpqCJYYY1Y8-G-WjWfY2HZnfkg7fjrSMKG2CvhXQ8vrGzOJYKYpLa3hgqJDr9P0O2Bo_Z4913Ak7Tfw/s0/French_screen_Stevenson_screen_SCREEN.jpg" /></a></div>
<i>This picture shows three meteorological shelters next to each other in Murcia (Spain). The rightmost shelter is a replica of the Montsouri (French) screen, in use in Spain and many European countries in the late 19th century and early 20th century. Leftmost, Stevenson screen equipped with conventional meteorological instruments, a set-up used globally for most of the 20th century. In the middle, Stevenson screen equipped with automatic sensors. The Montsouri screen is better ventilated, but because some solar radiation can get onto the thermometer <a href="https://variable-variability.blogspot.com/2015/02/temperature-trend-bias-radiation-errors-screen.design.html">it registers somewhat higher temperatures than a Stevenson screen</a>. Picture: Project SCREEN, Center for Climate Change, Universitat Rovira i Virgili, Spain.</i>
<p>The
instrumental climate record is human cultural heritage, the product
of the diligent work of many generations of people all over the
world. But changes in the way temperature was measured and in the
surrounding of weather stations can produce spurious trends. An
international team, with participation of the University Rovira i
Virgili (Spain), State Meteorological Agency (AEMET, Spain) and
University of Bonn (Germany), has made a great endeavour to provide
reliable tests for the methods used to computationally eliminate such
spurious trends. These so-called “homogenization methods“ are a
key step to turn the enormous effort of the observers into accurate
climate change data products. The results have been published in the
prestigious Journal of Climate of the American Meteorological
Society. The research was funded by the Spanish Ministry of Economy
and Competitiveness.</p>
<p>Climate
observations often go back more than a century, to times before we
had electricity or cars. Such long time spans make it virtually
impossible to keep the measurement conditions the same across time.
The best-known problem is the growth of cities around urban weather
stations. Cities tend to be warmer, for example due to reduced
evaporation by plants or because high buildings block cooling. This
can be seen comparing urban stations with surrounding rural stations.
It is less talked about, but there are similar problems due to the
spread of irrigation.</p>
<p>The
most common reason for jumps in the observed data is relocations of
weather stations. Volunteer observers tend to make observations near
their homes; when they retire and a new volunteer takes over the
tasks, this can produce temperature jumps. Even for professional
observations keeping the locations the same over centuries can be a
challenge either due to urban growth effects making sites unsuitable
or organizational changes leading to new premises. Climatologist from
Bonn, Dr. Victor Venema, one of the authors: “<i>a quite typical
organizational change is that weather offices that used to be in
cities were transferred to newly built airports needing observations
and predictions. The weather station in Bonn used to be on a field in
the village of Poppelsdorf, which is now a quarter of Bonn, and after several
relocations the station is currently at the airport Cologne-Bonn.</i>”</p>
<p>For
global trends, the most important changes are technological changes
of the same kinds and with similar effects all over the world. Now we
are, for instance, in a period with widespread automation of the
observational networks.</p>
<p>Appropriate
computer programs for the automatic homogenization of climatic time
series are the result of several years of development work. They work
by comparing nearby stations with each other and looking for changes
that only happen in one of them, as opposed to climatic changes that
influence all stations.</p>
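<p>The idea of such a relative comparison can be sketched in a few lines of toy code. This is only an illustration of the principle, not one of the homogenization methods tested in the study, and all numbers are invented: a candidate station and a neighbor share the same regional climate, the candidate gets an artificial jump, and the difference series reveals it.</p>

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy monthly temperature anomalies: candidate and neighbor station share
# the regional climate signal; the candidate gets a spurious +0.8 degC jump
# at month 120, e.g. from a relocation.
n = 240
climate = rng.normal(0.0, 0.5, n)              # shared regional signal
candidate = climate + rng.normal(0.0, 0.2, n)  # station-specific noise
neighbor = climate + rng.normal(0.0, 0.2, n)
candidate[120:] += 0.8                         # the inserted inhomogeneity

# Relative homogenization: differencing removes the shared climate signal,
# leaving only measurement noise plus the station-specific jump.
diff = candidate - neighbor

def find_break(x, min_len=12):
    """Return the split point with the largest t-like mean-shift statistic."""
    best_k, best_t = None, 0.0
    for k in range(min_len, len(x) - min_len):
        a, b = x[:k], x[k:]
        pooled = np.sqrt(x.var(ddof=1) * (1.0 / len(a) + 1.0 / len(b)))
        t = abs(a.mean() - b.mean()) / pooled
        if t > best_t:
            best_k, best_t = k, t
    return best_k, best_t

k, t = find_break(diff)
jump = diff[k:].mean() - diff[:k].mean()
print(f"break detected at month {k}, estimated jump {jump:+.2f} degC")
```

<p>With a 0.8 °C jump against a difference-series noise of about 0.3 °C, this toy detector lands close to month 120. In a sparse network the nearest neighbor is farther away, the shared climate signal cancels less well, and the same jump becomes much harder to see; that is the station-density effect the study quantifies.</p>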
<p>To
scrutinize these homogenization methods the research team created a
dataset that closely mimics observed climate datasets including the
mentioned spurious changes. In this way, the spurious changes are
known and one can study how well they are removed by homogenization.
Compared to previous studies, the testing datasets showed much more
diversity; real station networks also show a lot of diversity due to
differences in their management. The researchers especially took care
to produce networks with widely varying station densities; in a dense
network it is easier to see a small spurious change in a station. The
test dataset was larger than ever, containing 1900 station networks,
which allowed the scientists to accurately determine the differences
between the top automatic homogenization methods that have been
developed by research groups from Europe and the Americas. Because of
the large size of the testing dataset, only automatic homogenization
methods could be tested.</p>
<p>The
international author group found that it is much more difficult to
improve the network-mean average climate signals than to improve the
accuracy of station time series.</p>
<p>The
Spanish homogenization methods excelled. The method developed at the
Centre for Climate Change, Univ. Rovira i Virgili, Vila-seca, Spain,
by Hungarian climatologist Dr. Peter Domonkos was found to be the
best at homogenizing both individual station series and regional
network mean series. The method of the State Meteorological Agency
(AEMET), Unit of Islas Baleares, Palma, Spain, developed by Dr. José
A. Guijarro was a close second.</p>
<p>When
it comes to removing systematic trend errors from many networks, and
especially in networks where similar spurious changes happen in many
stations at similar dates, the homogenization method of the American
National Oceanic and Atmospheric Administration (NOAA) performed best. This
is a method that was designed to homogenize station datasets at the
global scale where the main concern is the reliable estimation of
global trends.</p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhvZCreaKJnEgXLjnEfEh5oSkcn5xzEacny2MVlkgrWguiyGNB6vn4KeORmkBQRKAGHfNa3YKPpkjuBNthmi-uDlxLIZBSN4PFQ1QX3kAisjSPi6mFfXi3w493Ldk9jHVROJLGUV-nvxQ/s0/open_shelter_uccle_belgium_p1020419.jpg" style="display: block; padding: 1em 0px; text-align: center;"><img alt="" border="0" data-original-height="387" data-original-width="516" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhvZCreaKJnEgXLjnEfEh5oSkcn5xzEacny2MVlkgrWguiyGNB6vn4KeORmkBQRKAGHfNa3YKPpkjuBNthmi-uDlxLIZBSN4PFQ1QX3kAisjSPi6mFfXi3w493Ldk9jHVROJLGUV-nvxQ/s0/open_shelter_uccle_belgium_p1020419.jpg" /></a></div>
<i><a href="http://variable-variability.blogspot.com/2015/10/extreme-temperatures-stevenson-screen-open-shelter-weather-variability.html">The open screen used earlier at station Uccle in Belgium</a>, with two modern closed Stevenson thermometer screens with double-louvred walls in the background.</i>
<h2>Quotes from participating researchers</h2>
<p>Dr.
Peter Domonkos, who earlier was a weather observer and now writes a
book about time series homogenization: “<i>This study has shown the
value of large testing datasets and demonstrates another reason why
automatic homogenization methods are important: they can be tested
much better, which aids their development.</i>”</p>
<p>Prof.
Dr. Manola Brunet, who is the director of the Centre for Climate
Change, Univ. Rovira i Virgili, Vila-seca, Spain, Visiting Fellow at
the Climatic Research Unit, University of East Anglia, Norwich, UK
and Vice-President of the World Meteorological Services Technical
Commission said: “<i>The study showed how important dense station
networks are to make homogenization methods powerful and thus to
compute accurate observed trends. Unfortunately, still a lot of
climate data needs to be digitized to contribute to an even better
homogenization and quality control.</i>”</p>
<p>Dr.
Javier Sigró from the Centre for Climate Change, Univ. Rovira i
Virgili, Vila-seca, Spain: “<i>Homogenization is often a first step
that allows us to go into the archives and find out what happened to
the observations that produced the spurious jumps. Better
homogenization methods mean that we can do this in a much more
targeted way.</i>”</p>
<p>Dr.
José A. Guijarro: “<i>Not only the results of the project may help
users to choose the method most suited to their needs; it also helped
developers to improve their software showing their strengths and
weaknesses, and will allow further improvements in the future.</i>”</p>
<p>Dr.
Victor Venema: “<i>In a previous similar study we found that
homogenization methods that were designed to handle difficult cases
where a station has multiple spurious jumps were clearly better.
Interestingly, this study did not find this. It may be that it is
more a matter of methods being carefully fine-tuned and tested.</i>”</p>
<p>Dr.
Peter Domonkos: “<i>The accuracy of homogenization methods will likely
improve further, however, we never should forget that the spatially
dense and high quality climate observations is the most important
pillar of our knowledge about climate change and climate
variability.</i>”</p><h3 style="text-align: left;">Press releases<br /></h3><p>Spanish weather service, AEMET: <a href="http://www.aemet.es/en/noticias/2021/01/Articulo_metodos_homogeneizacion">Un equipo internacional de climatólogos estudia cómo minimizar errores en las tendencias climáticas observadas</a> <br /><br />URV university in Tarragona, Catalonian: <a href="https://diaridigital.urv.cat/un-equip-internacional-de-climatolegs-estudia-com-es-poden-minimitzar-errades-en-les-tendencies-climatiques-observades/">Un equip internacional de climatòlegs estudia com es poden minimitzar errades en les tendències climàtiques observades</a> <br /><br />URV university, Spanish: <a href="https://diaridigital.urv.cat/es/un-equipo-internacional-de-climatologos-estudia-como-se-pueden-minimizar-errores-en-las-tendencias-climaticas-observadas/">Un equipo internacional de climatólogos estudia cómo se pueden minimizar errores en las tendencias climáticas observadas</a> <br /></p><p></p><p></p><p></p><p>URV university, English: <a href="https://diaridigital.urv.cat/en/an-international-team-of-climatologists-is-studying-how-to-minimise-errors-in-observed-climate-trends/">An international team of climatologists is studying how to minimise errors in observed climate trends</a></p><h3 style="text-align: left;">Articles <br /></h3><p>Tarragona 21: <a href="http://diaridigital.tarragona21.com/un-equip-internacional-de-climatolegs-amb-presencia-de-la-urv-estudia-com-es-poden-minimitzar-errades-en-les-tendencies-climatiques-observades/">Climatòlegs de la URV estudien com es poden minimitzar errades en les tendències climàtiques observades</a> <br /></p><p>Genius Science, French: <a href="https://genius-science.fr/2021/02/04/une-equipe-de-climatologues-etudie-comment-minimiser-les-erreurs-dans-la-tendance-climatique-observee/">Une équipe de climatologues étudie comment minimiser les erreurs dans la tendance climatique observée</a> <br /><br />Phys.org: <a 
href="https://phys.org/news/2021-02-team-climatologists-minimize-errors-climate.html">A team of climatologists is studying how to minimize errors in observed climate trend</a></p><p> </p>Victor Venemahttp://www.blogger.com/profile/02842816166712285801noreply@blogger.com0tag:blogger.com,1999:blog-9093436161326155359.post-62722646964455095752020-11-20T12:51:00.022+00:002020-11-22T14:16:31.855+00:00Yes, it makes sense not to have dinner parties while the schools are still open. Think of it as a Corona contact budget.<p></p><p style="text-align: right;"><i>Can the kids go to school in restaurants</i></p><p style="text-align: right;"><i><a href="https://twitter.com/winterjessica/status/1329145036693512192">Jessica Winter, editor New Yorker</a></i></p>
<p>Analogies can be enlightening. Bad-faith actors will always find something to nitpick, but for those interested in understanding, analogies can help open a toolbox of existing ideas and argumentative structures.<br /></p><p>I wondered whether it might be useful to talk about Corona contacts as a budget. </p><p>It would avoid arguments like "if churches can be open, why can't we have concerts under similar conditions?". "If you cannot meet indoors with more than 15 people, then why are schools open? Math!" </p><p>One would never argue "if we just bought this flat, why can't we buy a summer house?" Maybe you have the budget to buy a summer house, but buying a flat does not mean you can also afford the summer house.</p><p>Similarly in the political realm: "if we can have social security, why can't we have a basic income (social security for all)?" For me a basic income is freedom, fulfilment of human potential and prosperity, but you will have to find the money. "If we can spend 10% of our GDP on healthcare like an average OECD country, why can't we spend 20%?" You can, and America does, but countries with universal health care would still find it hard to fund the additional 10% if they wanted to destroy their system and adopt the American partial one. <br /></p>
<div style="float: right; margin-bottom: 10px; margin-left: 20px;"><a href="https://commons.wikimedia.org/wiki/File:Injured_Piggy_Bank_With_Crutches_(6093699369).jpg"><img alt="" border="0" data-original-height="480" data-original-width="573" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7pJXRjOHO0qz6cMLBvmrSqXDBjrZ0lrtdNNufD4Pz9Bw1cFAq1Q1I1SUMYwAZNAue9zZdw_-gokNyYFjONFADSagmD43xAAP0IJ56StU15fu-Ruf0anvp_ApoaQA37X996hEXFsvNu0c/s400/573px-Injured_Piggy_Bank_With_Crutches_%25286093699369%2529.jpg" width="300" /></a></div><p>When it comes to budgets it is immediately clear that you have to set priorities and invest wisely. <br /></p><p>The reproduction number of the SARS-CoV-2 virus is between two and three. Let's assume for this article that it is two, to get easier numbers. This means that one infected person <i>on average</i> infects two other people. If we reduce the number of infectious contacts by more than half, the virus would decline.<br /></p><p>The "on average" does a lot of work. How many people one person infects varies widely. As a rule of thumb for SARS2: four of five infected people infect only one other person or none, while one in five infects many people. It is only two on average.</p><p>And you have to average over a population that is in contact with each other. If in France no one has any contacts while in Germany life continues as normal, the virus will spread like wildfire in Germany. But if inside the city of Bonn half of the people disappear, the remaining people have fewer contacts than before. The remaining half should not especially seek each other out for the analogy to hold.</p><p>How does this analogy help? If we look at the budget of a country like Germany, it makes clear that we should look for reductions where we spend a lot. Work, school, free time. 
I am as annoyed by the anti-Corona protests as many complaining about them, but compared to 80 million inhabitants who see each other at work and school (indoors) every day, these protests, even if they were really big, are a completely insignificant number of contacts. And the right to protest is a foundation of our societies and should thus have a high priority. I think it is fine to mandate masks at protests, and if you do so you should uphold the rule of law.<br /></p><p>Less than 20% of Germany is younger than 20. So we could afford to spend our contacts there and ask the other 80% to do more. People often argue that children not going to school is disruptive for the economy. I would also argue that a pandemic lasting one year is a large part of their lives, while additionally young people mostly do this to protect others. There is naturally no need to squander our budget: we could require older kids to wear masks to reduce the effective number of contacts, install air filters or <a href="https://www.microbe.tv/twiv/twiv-666/">far UV-C lights</a> in classrooms, or reduce the number of days children go to school.</p><p>Some feel we should close the schools to protect teachers, but the main reason to care about avoiding contacts is, even now, not the people being infected today, but the spread of the virus and all the people who will die because of that. </p><p>If we live above our contact budget, most of the dying happens after several links in the chain of infection and no longer close to the school: the teacher or student infects 2 others, they infect 4 others, 8, 16, 32, 64, 128, ... Those 128 will reside all over the city or county, if not the state, and have many different professions. 
If we lived within our Corona budget and the level of infection were low and stayed low, the entire community, including teachers, would be safe.<br /></p><p>The exponential growth of a virus also nicely fits the exponential growth of money in your <a href="https://en.wikipedia.org/wiki/Savings_account">savings account</a>. I added a link for young people. A savings account used to be a place where you would keep your money and the bank would give you a percentage of the amount as a thank you, which they called "interest". People who are into money and budgets likely still remember this and how it was normal to "invest" money to have more money later. </p><p>When the press talks about exponential growth, I tend to worry they simply mean fast growth. Economic growth is much slower than the pandemic, but when it comes to money people get glowing eyes and talk enthusiastically about compound interest and putting something aside for later. </p><p>Similarly, when a society invests in fewer contacts, we can have more freedom
later. Even more so because once the number of infections is low enough
track and trace becomes much more efficient and you get double returns
on investment. Like an investment banker who has to pay less taxes
because ... reasons.</p><p>At least the financial press should know the famous example of exponential growth: <a href="https://www.npr.org/sections/krulwich/2012/09/15/160879929/that-old-rice-grains-on-the-chessboard-con-with-a-new-twist">the craftsman who "only" asks the king for rice as payment</a> for his chessboard: one grain on the first square, two on the second, four on the third square and so on. </p><p> <iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube-nocookie.com/embed/k1s-1Jg3C6Q" width="560"></iframe>
</p><br /><div>What is true for infections is also true for hospital beds and ICU beds. Once half of your patients are COVID-19 patients, it is only a matter of one more doubling time before the capacity is filled. Exponential growth is not just fast, it overwhelms linear systems like
hospitals where you cannot keep on doubling the number of beds. </div><div> </div><div>If we let it get this far we are forcing doctors to choose who lives. Who has been in the ICU too long and would likely stay there a long time while this capacity could be used for multiple new patients. Who is removed from the ICU to die. A healthy society does not put doctors in such a position.<br /></div><div><br /></div><div>With good care around 1 percent of infected people die in the West (in the young societies of Africa less). Supporters of the virus tend to use this number or even much lower fantasy numbers. However, if we let it get out of control like this, ignoring the exponential growth and the delay between infections and deaths, hospital care would collapse and a few percent would die.</div><div><br /></div><div>Many more people need to go to the hospital. <a href="https://www.rki.de/DE/Content/InfAZ/N/Neuartiges_Coronavirus/Steckbrief.html#doc13776792bodyText16">In Germany this is 17%.</a> A recent French study reported that <a href="https://www.journalofinfection.com/article/S0163-4453(20)30562-4/fulltext">after 110 days most patients were still tired and had trouble breathing</a>; many had not yet returned to work. </div><div> </div><div>At the latest when the hospitals collapse, people will reduce contacts, even if not mandated. It is much smarter to make the investment earlier, to reduce our number of infectious contacts earlier. </div><div> </div><div>A well-known American president said it is smart to go bankrupt. It is smarter to make money. <br /></div><div> </div><div>Investing early pays off even more because then more subtle measures are still possible, while in an emergency a much more invasive lockdown will be necessary and, for those who only care about money, more damage to the economy will be done.<br /></div><div><p>(As many of my readers are interested in climate change, let me add
that I find it weird that when it comes to protecting the climate people often talk about it as a cost and not as an investment that will pay good dividends in the future, just like any other investment. If it bothers you that our kids would thus have it better than we do, you can finance the investments with loans, like any business would.) <br /></p><h2>Related reading</h2>
<div style="text-align: left;"><a href="https://variable-variability.blogspot.com/2020/08/primer-herd-immunity-Social-Darwinists.html">A primer on herd immunity for Social Darwinists</a></div><div style="text-align: left;"> </div>
<div style="text-align: left;"><a href="https://variable-variability.blogspot.com/2020/04/opening-germany-randomized-controlled-trial-schools.html">Opening up Germany in a Randomized Controlled Trial</a></div><div style="text-align: left;"> </div>
<div style="text-align: left;">Translations of a popular scientific podcast: <a href="https://variable-variability.blogspot.com/2020/03/german-virologist-Christian-Drosten.html">Leading German virologist Prof. Dr. Christian Drosten goes viral (intro & part 18)</a></div><div style="text-align: left;"> </div>
<div style="text-align: left;"><i>* <a href="https://www.flickr.com/photos/teegardin/6093699369/">The photo of the injured piggy bank</a> by <a class="external text" href="https://www.flickr.com/people/26373139@N08" rel="nofollow">Ken Teegardin</a> is licensed under the <a class="extiw" href="https://en.wikipedia.org/wiki/en:Creative_Commons" title="w:en:Creative Commons">Creative Commons</a> <a class="external text" href="https://creativecommons.org/licenses/by-sa/2.0/deed.en" rel="nofollow">Attribution-Share Alike 2.0 Generic</a> license.
</i></div></div>
<p></p>Victor Venemahttp://www.blogger.com/profile/02842816166712285801noreply@blogger.com0tag:blogger.com,1999:blog-9093436161326155359.post-21293885366799492642020-11-09T18:28:00.001+00:002020-11-09T18:28:21.179+00:00Science Feedback on Steroids<p><a href="https://climatefeedback.org">Climate Feedback</a> is a group of climate scientists reviewing press articles on climate change. By networking this valuable work with science-interested citizens we could put this initiative on steroids.</p>
<p>Disclosure: I am a <a href="https://climatefeedback.org/community/">member</a> of Climate Feedback.</p><h2 style="text-align: left;">How Climate Feedback works <br /></h2>
<p>Climate Feedback works as follows. A science journalist monitors which stories on climate change are shared widely on social media and invites publishing climate scientists with relevant expertise to review the factual claims being made. The scientists write detailed reviews of specific claims, ideally using web annotations (see example below), sometimes by email.</p><p> </p>
<p><a href="https://via.hypothes.is/https://edition.cnn.com/2020/08/14/weather/greenland-ice-sheet/index.html"><img alt="" border="0" data-original-height="947" data-original-width="1591" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgyMyeXpd2UGRx3FT6Ja3BYbL5zItE4qN_S_vhM_pVx7HlYJkdTChy32BgEql39hx05OfMIW0lRwj7SJ-2ioCUkUzmDbutbFCHB0T0x_EknxuFZTi_QVxgEuIbNM2WL8Kqdgwjej_h_JPI/s600/Screenshot_2020-11-08+Greenland%2527s+ice+sheet+has+melted+to+a+point+of+no+return%252C+study+finds.png" width="600" /></a></p>
<p> </p><p>They also write a short summary of the article and grade its scientific credibility. These comments, summaries and grades are then summarized in a graphic and an article written by the science journalist. </p><p>Climate Feedback takes care of spreading the reviews to the public and to the publication that was reviewed. Climate Feedback is also part of a network of fact-checking organizations, which gives the reviews more credibility, and it adds metadata to the review pages that social media and search engines can show to their users.</p><p> </p>
<p><a href="https://climatefeedback.org/evaluation/article-by-cnn-exaggerates-studys-implications-for-future-greenland-ice-loss/"><img alt="" border="0" data-original-height="761" data-original-width="1170" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjL6f8_0YWhRxiy3krXdTrnEMs8VW7B61jppQ0louzzL8WKrmQYrxKJTVAQ73jXiT1TDjCM2eplfgkkE7t7nXww-JBeVfZorCRzD9pODop0IAl8KpA9nF2ev3jzOBZJLQGLtwEFit8LOJ0/s600/Screenshot_2020-11-08+Article+by+CNN+exaggerates+study%25E2%2580%2599s+implications+for+future+Greenland+ice+loss+with+point+of+no+return%255B...%255D.png" width="600" /></a></p>
<p> </p><p>For scientists this is a very efficient fact checking operation. The participants only have to respond to the claims they have expertise on. If there are many claims outside my expertise I can wait until my colleagues have added their web annotations before I write my summary and determine my grade. Especially compared to writing a blog post, Climate Feedback is very efficient.</p>
<p>The initiative recently branched out to reviewing health claims with a new <a href="https://healthfeedback.org">Health Feedback</a> group. The umbrella is now called <a href="https://sciencefeedback.co">Science Feedback</a>.</p>
<h2 style="text-align: left;">The impact <br /></h2><p>But there is only so much a group of scientists can do, and by the time the reviews are in and summarized, the article is mostly old news. Only a small fraction of readers would see any notifications that social media systems could put on posts spreading them.</p>
<p>This is still important information for people who closely follow the topic: it helps them see how such reviews are done, assess which publications are reliable and judge which groups are credible. </p><p>The reviews may be most important for the journalists and the publications involved. Journalists doing high-quality work can now demonstrate this to editors, who will mostly not be able to assess this themselves. Some journalists have even asked for reviews of important pieces to showcase the quality of their work. Conversely, editors can seek out good journalists and cut ties with journalists who regularly hurt their reputation. The latter naturally only helps publications that care about quality.</p>
<h2>The Steroids</h2>
<p>With a larger group we could review more articles and have results while people are still reading them. There are not enough (climate) scientists to do this. </p><p>For Climate Feedback I only review articles on topics where I have expertise. But I think I would still do a decent job outside of my expertise. It is hard to determine how good a good article is, but the ones that are clearly bad are easy to identify and this does not require much expertise. At least in the climate branch of the US culture war the same tropes are used over and over again, the same "thinking" errors are made over and over again. </p><p>Many people who are interested in climate change and in scientific detail, but are not scientists, would probably do a good job identifying these bad articles. Maybe even better. They say that magicians were better at debunking paranormal claims than scientists. We live in a bubble where most argue in good faith and science-interested normal citizens may well have a better BS detector.</p>
<p>However, how do we know who is good at this? Clearly not everyone, otherwise such a service would not be needed. We would have the data from Climate Feedback and Health Feedback to determine which citizen scientists' assessments predict the assessments of the scientists well. We could also ask people to classify the topic of the article. I would be best at observational climatology, decent in physical climatology and likely only average when it comes to many climate change impacts and economic questions. We could also ask people how confident they are in their assessments.</p>
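<p>A minimal sketch of how such a reliability score could be computed. This assumes the credibility grades are numeric (Climate Feedback uses a scale from -2 to +2) and that a lower mean absolute deviation from the published expert grade marks a more reliable reviewer; the reviewer and article names are purely hypothetical:</p>

```python
def reliability_scores(citizen_grades, expert_grades):
    """Mean absolute difference between each citizen reviewer's grades
    and the published expert grades; lower means more reliable.

    citizen_grades: {reviewer: {article: grade}}
    expert_grades:  {article: grade}
    """
    scores = {}
    for reviewer, grades in citizen_grades.items():
        # Only articles for which an official review exists can be scored.
        common = [a for a in grades if a in expert_grades]
        if common:
            scores[reviewer] = sum(abs(grades[a] - expert_grades[a])
                                   for a in common) / len(common)
    return scores

# Hypothetical example: reviewer "A" rated two officially reviewed
# articles, reviewer "B" only one.
scores = reliability_scores(
    {"A": {"article_x": 1, "article_y": -2}, "B": {"article_x": -1}},
    {"article_x": 1, "article_y": -1})
```

A real system would additionally weight by topic and by the self-reported confidence mentioned above, but the core idea is this comparison against the expert grades.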
<p>In the end it would be great to <b>ingest ratings</b> in a user-friendly way, either 1) with a browser add-on on the article homepage itself, or 2) by replying to posts mentioning the article on social media, just as replying to a tweet with the handle of the <a href="https://twitter.com/PubPeerBot">PubPeerBot</a> automatically <a href="https://pubpeer.com/static/faq#31">submits the tweet to PubPeer</a>.</p>
<p>A server would <b>compute the ratings</b> and, as soon as there is enough data, create a review homepage with the ratings as metadata to be used by search engines and social media sites. We will have to see if they are willing to use such a statistical product. An application programming interface (API) and ActivityPub could also be used to <b>spread the information</b> to interested parties. <br /></p><p>I would be happy to use this information on the <a href="https://variable-variability.blogspot.com/2020/07/friendly-micro-blogging-Twitter-scientists-no-nasties-surveillance.html">micro-blogging system for scientists</a> Frank Sonntag and I have set up. I presume more Open Social Media communities would be grateful for the information to make their place more reality-friendly. A browser add-on could also display the feedback on the article's homepage itself and on posts linking to it.</p><h2 style="text-align: left;">How to start? <br /></h2>
<p>Before creating such a huge system I would propose a much smaller feasibility study. Here people would be informed about articles Climate or Health Feedback are working on and they can return their assessments until the one by Climate Feedback is published. This could be a simple email distribution list to distribute the articles and a cloud-based spreadsheet or web form to return the results. </p><p>This system should be enough to study whether citizens can distinguish fact from fiction well enough (I expect so, but knowing for sure is valuable) and to develop statistical methods to estimate how well people are doing, how to compute an overall score and how many reviews are needed to do so.<br /></p>
<p>This set-up points to two complications the full system would have. Firstly, only citizens' assessments made before the official feedback can be used. This should not be too much of a problem as most readers will read the article before the official feedback is published. </p>
<p>Secondly, as the number of official feedbacks will be small, many volunteers will likely review none of these articles themselves, or just a few. Thus how accurate the predictions of person A for articles X, Y and Z are may have to be assessed by comparing their assessments with those of B, C and D, who reviewed X, Y or Z as well as one of the articles Climate Feedback reviewed. This makes the computation more complicated and uncertain, but if B, C and D are good enough, this should be doable. Alternatively, we would have to keep on informing our volunteers of the articles being reviewed by the scientists themselves.</p><p>This new system could be part of Science Feedback or an independent initiative. I feel it would at least be good to have a separate homepage, as the two systems are quite different and the public should not mix them up. A reason to keep it separate is that this system could also be used in combination with other fact checkers, but we could also make that organizational change when it comes to that.<br /></p><p>Another organizational question is whether we would like Google and Facebook to have access to this information or prefer a license that excludes them. Short term, it is naturally best when they also use it to inform as many people as possible. Long-term it would also be valuable to break the monopolies of Google and Facebook. Having alternative services that can deliver better quality due to our assessments could contribute to that. They have money, we have people.<br /></p>
<p>I asked on Twitter and Mastodon whether people would be interested in contributing to such a system. Fitting my prejudices, people on Twitter were more willing to review (I do more science on Twitter) and people on Mastodon were more willing to build software (Mastodon started with many coders).</p>
<p>What do you think? Could such a system work? Would enough people be willing to contribute? Is it technologically and statistically feasible? Any ideas to make the system or the feasibility study better?</p>
<h2>Related reading</h2><div style="text-align: left;"><a href="https://variable-variability.blogspot.com/2020/07/friendly-micro-blogging-Twitter-scientists-no-nasties-surveillance.html">Micro-blogging for scientists without nasties and surveillance</a> </div><div style="text-align: left;"> </div><div style="text-align: left;">Climate Feedback explainer from 2016: <span style="font-weight: normal;"><a href="http://variable-variability.blogspot.com/2016/04/climate-scientists-annotationClimateFeedback.html">Climate scientists are now grading climate journalism</a></span></div><div style="text-align: left;"><span style="font-weight: normal;"> </span></div><div style="text-align: left;"><span style="font-weight: normal;">Discussion of a controversial Climate Feedback and the grading system used:
<a href="http://variable-variability.blogspot.com/2017/07/nitpicking-climate-doomsday-warning-allowed.html">Is nitpicking a climate doomsday warning allowed?</a></span></div><div style="text-align: left;"><span style="font-weight: normal;"> </span> </div>
Victor Venemahttp://www.blogger.com/profile/02842816166712285801noreply@blogger.com0tag:blogger.com,1999:blog-9093436161326155359.post-5570117292800065612020-10-12T02:38:00.001+01:002020-10-16T23:02:18.001+01:00The deleted chapter of the WMO Guidance on the homogenisation of climate station data<p>The Task Team on Homogenization (TT-HOM) of the Open Panel of CCl Experts on Climate Monitoring and Assessment (OPACE-2) of the Commission on Climatology (CCl) of the World Meteorological Organization (WMO) has published their <a href="https://library.wmo.int/index.php?lvl=notice_display&id=21756">Guidance on the homogenisation of climate station data</a>.</p>
<p>The guidance report was a bit longish, so at the end we decided that the last chapter on "Future research & collaboration needs" was best deleted. As chair of the task team and as someone who likes to dream about what others could do in a comfy chair, I wrote most of this chapter and thus we decided to simply make it a blog post for this blog. Enjoy.</p>
<h2>Introduction</h2>
<p>
This guidance is based on our current best understanding of
inhomogeneities and homogenisation. However, writing it also made clear that there is a need
for a better understanding of the problems.</p>
<p>A better mathematical understanding of statistical homogenisation is
important because that is what most of our work is based on. A
stronger mathematical basis is a prerequisite for future
methodological improvements.</p>
<p>
A stronger focus on a (physical) understanding of inhomogeneities
would complement and strengthen the statistical work. This kind of
work is often performed at the station or network level, but also
needed at larger spatial scales. Much of this work is performed using
parallel measurements, but they are typically not internationally
shared.
</p>
<p>
In an observational science the strength of the outcomes depends on a
consilience of evidence. Thus having evidence on inhomogeneities from
both statistical homogenisation and physical studies strengthens the
science.</p>
<p>
This chapter will discuss the needs for future research on
homogenisation grouped in five kinds of problems. In the first
section we will discuss research on improving our physical
understanding and physics-based corrections. The next section is about break detection, especially about two fundamental problems
in statistical homogenisation: the inhomogeneous-reference problem
and the multiple-breakpoint problem.
</p>
<p>The section after that is about computing uncertainties in trends and
long-term variability estimates from homogenised data due to
remaining inhomogeneities. It may be possible to improve correction
methods by treating correction as a statistical model selection problem. The last section discusses whether
inhomogeneities are stochastic or deterministic and how that may
affect homogenisation and especially correction methods for the
variability around the long-term mean.</p>
<p>
For all the research ideas mentioned below, it is understood that in
future we should study more meteorological variables than
temperature. In addition, more studies on inhomogeneities across
variables could be helpful to understand the causes of
inhomogeneities and increase the signal to noise ratio.
Homogenisation by national offices has advantages because here all
climate elements from one station are stored together. This helps in
understanding and identifying breaks. It would help homogenisation
science and climate analysis to have a global database for all
climate elements, like iCOADS for marine data. A Copernicus project
has started working on this for land station data, which is an
encouraging development.
</p>
<h2>Physical understanding</h2>
<p>
It is a good scientific practice to perform parallel measurements in
order to manage unavoidable changes and to compare the results of
statistical homogenisation to the expectations given the cause of the
inhomogeneity according to the metadata. This information should also
be analysed on continental and global scales to get a better
understanding of when historical transitions took place and to guide
homogenisation of large-scale (global) datasets. This requires more
international sharing of parallel data and standards on the reporting
of the size of breaks confirmed by metadata.</p>
<p>
The Dutch weather service KNMI published a protocol on how to manage
possible future changes of the network, who decides what needs to be
done in which situation, what kind of studies should be made, where
the studies should be published and that the parallel data should be
stored in their central database as experimental data. A translation
of this report will soon be published by the WMO (Brandsma et al.,
2019) and will hopefully inspire other weather services to formalise
their network change management.</p>
<p>
Next to statistical homogenisation, making and studying parallel
measurements, and other physical estimates, can provide a second line
of evidence on the magnitude of inhomogeneities. Having multiple
lines of evidence provides robustness to observational sciences.
Parallel data is especially important for the large historical
transitions that are most likely to produce biases in network-wide to
global climate datasets. It can validate the results of statistical
homogenisation and be used to estimate possibly needed additional
adjustments. The Parallel Observations Science Team of the
International Surface Temperature Initiative (ISTI-POST) is working
on building such a global dataset with parallel measurements.</p>
<p>
Parallel data is especially suited to improve our physical understanding
of the causes of inhomogeneities by studying how the magnitude of the
inhomogeneity depends on the weather and on instrumental design
characteristics. This understanding is important for more accurate
corrections of the distribution, for realistic benchmarking datasets
to test our homogenisation methods and to determine which additional
parallel experiments are especially useful.
</p>
<p>
Detailed physical models of the measurement, for example, the flow
through the screens, radiative transfer and heat flows, can also help
gain a better understanding of the measurement and its error sources.
This aids in understanding historical instruments and in designing
better future instruments. Physical models will also be paramount for
understanding the impact of the surroundings on the measurement
(nearby obstacles and surfaces influencing error sources and air
flow) and of changes in the measurand itself, such as
urbanisation/deforestation or the introduction of irrigation.
Land-use changes, especially urbanisation, should be studied together
with relocations they may provoke.</p>
<h2>Break detection</h2>
<p>
Longer climate series typically contain more than one break. This
so-called multiple-breakpoint problem is currently an important
research topic. A complication of relative homogenisation is that
also the reference stations can have inhomogeneities. This so-called
inhomogeneous-reference problem is not optimally solved yet. It is
also not clear what temporal resolution is best for detection and
what the optimal way is to handle the seasonal cycle in the
statistical properties of climate data and of many inhomogeneities.</p>
<p>
For temperature time series about one break per 15 to 20 years is
typical and multiple breaks are thus common. Unfortunately, most
statistical detection methods have been developed for one break and
for the null hypothesis of white (sometimes red) noise. In case of
multiple breaks the statistical test should not only take the noise
variance into account, but also the break variance from breaks at
other positions. For low signal to noise ratios, the additional break
variance can lead to spurious detections and inaccuracies in the
break position (Lindau and Venema, 2018a).</p>
<p>
To apply single-breakpoint tests on series with multiple breaks, one
ad-hoc solution is to first split the series at the most significant
break (for example, the standard normalised homogeneity test, SNHT)
and investigate the subseries. Such a greedy algorithm does not
always find the optimal solution. Another solution is to detect
breaks on short windows. The window should be short enough to contain
only one break, which reduces the power of detection considerably. This
method is not used much nowadays.</p>
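<p>To make the greedy approach concrete, here is a small illustrative sketch (not any operational homogenisation package): it computes an SNHT-type statistic on a difference series, splits at the most significant break if the statistic exceeds a threshold, and recurses into the subseries. The threshold and minimum segment length are assumed values for illustration only; in practice the critical value depends on the series length.</p>

```python
import numpy as np

def snht_statistic(series):
    """Maximum SNHT-type statistic T(k) = k*z1**2 + (n-k)*z2**2 over all
    split points k, where z1 and z2 are the mean standardized anomalies
    before and after k. Returns (T_max, k)."""
    n = len(series)
    if series.std() == 0:          # perfectly homogeneous segment
        return 0.0, n // 2
    z = (series - series.mean()) / series.std()
    t = np.array([k * z[:k].mean() ** 2 + (n - k) * z[k:].mean() ** 2
                  for k in range(1, n)])
    return float(t.max()), int(t.argmax()) + 1

def greedy_detect(series, threshold=9.0, min_len=5, offset=0):
    """Greedy algorithm: split at the most significant break, recurse.

    As noted in the text, this does not always find the optimal
    solution when several breaks interact."""
    if len(series) < 2 * min_len:
        return []
    t_max, k = snht_statistic(series)
    if t_max < threshold:
        return []
    return (greedy_detect(series[:k], threshold, min_len, offset)
            + [offset + k]
            + greedy_detect(series[k:], threshold, min_len, offset + k))
```

On a synthetic difference series with a single step change, the recursion finds the break and stops; on homogeneous noise-free data it returns no breaks.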
<p>
Multiple breakpoint methods can find an optimal solution and are
nowadays numerically feasible. This can be done in a hypothesis
testing (MASH) or in a statistical model selection framework. For a
certain number of breaks these methods find the break combination
that minimizes the internal variance, that is the variance of the
homogeneous subperiods (or, equivalently, the break
combination that maximizes the variance of the breaks). To find the
optimal number of breaks, a penalty is added that increases with the
number of breaks. Examples of such methods are PRODIGE (Caussinus &
Mestre, 2004) or ACMANT (based on PRODIGE; Domonkos, 2011b). In a
similar line of research Lu et al. (2010) solved the multiple
breakpoint problem using a minimum description length (MDL) based
information criterion as penalty function.</p>
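<p>As an illustration of the model-selection idea (this is not the actual PRODIGE or ACMANT code, and it uses a simple fixed penalty per break rather than their actual penalty functions), the break combination minimizing internal variance plus penalty can be found exactly with dynamic programming:</p>

```python
import numpy as np

def segment_cost(series):
    """Internal variance (sum of squared deviations from the segment
    mean) for every subsegment [i, j), via cumulative sums."""
    n = len(series)
    c1 = np.concatenate([[0.0], np.cumsum(series)])
    c2 = np.concatenate([[0.0], np.cumsum(series ** 2)])
    cost = np.full((n, n + 1), np.inf)
    for i in range(n):
        for j in range(i + 1, n + 1):
            s, s2, m = c1[j] - c1[i], c2[j] - c2[i], j - i
            cost[i, j] = s2 - s * s / m
    return cost

def optimal_breaks(series, penalty):
    """Minimise internal variance + penalty * (number of breaks) over
    all break combinations by dynamic programming."""
    n = len(series)
    cost = segment_cost(series)
    best = np.full(n + 1, np.inf)   # best[j]: optimal score for series[:j]
    prev = np.zeros(n + 1, dtype=int)
    best[0] = -penalty              # the first segment carries no break penalty
    for j in range(1, n + 1):
        # Try every start i of the last segment ending at j.
        scores = best[:j] + cost[np.arange(j), j] + penalty
        prev[j] = int(np.argmin(scores))
        best[j] = scores[prev[j]]
    breaks = []                     # backtrack the segment starts
    j = n
    while prev[j] > 0:
        breaks.append(int(prev[j]))
        j = prev[j]
    return sorted(breaks)
```

With a step change large relative to the penalty the true break is recovered; on homogeneous data the penalty suppresses all breaks, which is exactly the trade-off the penalty function controls.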
<p>
The penalty function of PRODIGE was found to be suboptimal (Lindau
and Venema, 2013): the penalty should be a function
of the number of breaks, not fixed per break, and the relation
with the length of the series should be reversed. It is not clear yet
how sensitive homogenisation methods are to this, but increasing
the penalty per break in case of low SNR to reduce the number of
breaks does not make the estimated break signal more accurate (Lindau
and Venema, 2018a).</p>
<p>
Not only the candidate station but also the reference stations will have
inhomogeneities, which complicates homogenisation. Such
inhomogeneities can be climatologically especially important when
they are due to network-wide technological transitions. An example of
such a transition is the current replacement of temperature
observations using Stevenson screens by automatic weather stations.
Such transitions are important periods as they may cause biases in
the network and global average trends and they produce many breaks
over a short period.
</p>
<p>
A related problem is that sometimes all stations in a network have a
break at the same date, for example, when a weather service changes
the time of observation. Nationally such breaks are corrected using
metadata. If this change is unknown in global datasets one can still
detect and correct such inhomogeneities statistically by comparison
with other nearby networks. That would require an algorithm that
additionally knows which stations belong to which network and
prioritizes correcting breaks found between stations in different
networks. Such algorithms do not exist yet and information on which
station belongs to which network for which period is typically not
internationally shared.</p>
<p>
The influence of inhomogeneities in the reference can be reduced by
computing composite references over many stations, removing reference
stations with breaks and by performing homogenisation iteratively.</p>
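<p>A minimal sketch of the composite-reference idea (the equal weights here are a placeholder; real methods weight neighbours, e.g., by correlation, and this is not the implementation of any particular package):</p>

```python
import numpy as np

def difference_series(candidate, references, weights=None):
    """Candidate series minus a composite reference (weighted mean of
    neighbouring stations).

    Averaging over many references dilutes the influence of any single
    inhomogeneous neighbour on the difference series."""
    references = np.asarray(references, dtype=float)
    if weights is None:
        weights = np.ones(len(references))
    composite = np.average(references, axis=0, weights=weights)
    return np.asarray(candidate, dtype=float) - composite
```

Break detection is then applied to this difference series rather than to the raw candidate.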
<p>
A direct approach to solving this problem would be to simultaneously
homogenise multiple stations, also called joint detection. A step in
this direction is pairwise homogenisation, where breaks are
detected in the pairs. This requires an additional attribution step,
which attributes the breaks to a specific station. Currently this is
done by hand (for PRODIGE; Caussinus and Mestre, 2004; Rustemeier et
al., 2017) or with ad-hoc rules (by the Pairwise homogenisation
algorithm of NOAA; Menne and Williams, 2009).</p>
<p>
In the homogenisation method HOMER (Mestre et al., 2013) a first
attempt is made to homogenise all pairs simultaneously using a joint
detection method from bio-statistics. Feedback from first users
suggests that this method should not be used automatically. It
should be studied how well this method works and where the problems
come from.</p>
<p>
Multiple breakpoint methods are more accurate than single breakpoint
methods. This expected higher accuracy is founded on theory (Hawkins,
1972). In addition, in the HOME benchmarking study it was numerically
found that modern homogenisation methods, which take the multiple
breakpoint and the inhomogeneous reference problems into account, are
about a factor of two more accurate than traditional methods (Venema et
al., 2012).</p>
<p>
However, the current version of CLIMATOL applies single-breakpoint
detection tests, first SNHT detection on a window then splitting, to
achieve results comparable to modern multiple-breakpoint methods with
respect to break detection and homogeneity of the data (Killick,
2016). This suggests either that the multiple-breakpoint detection principle
may not be as important as previously thought, which warrants deeper
study, or that the accuracy of CLIMATOL is partly due to an unknown
unknown.
</p>
<p>
The signal to noise ratio is paramount for the reliable detection of
breaks. It would thus be valuable to develop statistical methods that
explain part of the variance of a difference time series and remove
this to see breaks more clearly. Data from (regional) reanalysis
could be useful predictors for this.</p>
<p>
First methods have been published to detect breaks for daily data
(Toreti et al., 2012; Rienzner and Gandolfi, 2013). It has not been
studied yet what the optimal resolution for breaks detection is
(daily, monthly, annual), nor what the optimal way is to handle the
seasonal cycle in the climate data and exploit the seasonal cycle of
inhomogeneities. In the daily temperature benchmarking study of
Killick (2016) most non-specialised detection methods performed
better than the daily detection method MAC-D (Rienzner and Gandolfi,
2013).</p>
<p>
The selection of appropriate reference stations is a necessary step
for accurate detection and correction. Many different methods and
metrics are used for the station selection, but studies on the
optimal method are missing. The knowledge of local climatologists
about which stations have a similar regional climate needs to be made
objective so that it can be applied automatically (at larger scales).
</p>
<p>
For detection a high signal to noise ratio is most important, while
for correction it is paramount that all stations are in the same
climatic region. Typically the same networks are used for both
detection and correction, but it should be investigated whether a
smaller network for correction would be beneficial. Also in general,
we need more research on understanding the performance of (monthly
and daily) correction methods.</p>
<h2>Computing uncertainties</h2>
<p>
Also after homogenisation, uncertainties remain in the data due to
various problems:</p>
<ul><li><p>
Not all breaks in the candidate station have been or can be detected.</p>
</li><li><p>
False alarms are an unavoidable trade-off for detecting many real
breaks.</p>
</li><li><p>
Uncertainty in the estimation of correction parameters due to
limited data.
</p>
</li><li><p>
Uncertainties in the corrections due to limited information on the
break positions.</p>
</li></ul>
<p>
From validation and benchmarking studies we have a reasonable idea
about the remaining uncertainties that one can expect in the
homogenised data, at least with respect to changes in the long-term
mean temperature. For many other variables and changes in the
distribution of (sub-)daily temperature data individual developers
have validated their methods, but systematic validation and
comparison studies are still missing.</p>
<p>
Furthermore, such studies only provide a general uncertainty level,
whereas more detailed information for every single station/region and
period would be valuable. The uncertainties will strongly depend on
the signal to noise ratios, on the statistical properties of the
inhomogeneities of the raw data and on the quality and
cross-correlations of the reference stations. All of which vary
strongly per station, region and period.</p>
<p>
Communicating such a complicated error structure, which is mainly
temporal, but also partially spatial, is a problem in itself.
Furthermore, not only the uncertainty in the means should be
considered, but, especially for daily data, uncertainties in the
complete probability density function need to be estimated and
communicated. This could be communicated with an ensemble of possible
realisations, similar to Brohan et al. (2006).</p>
<p>
An analytic understanding of the uncertainties is important, but is
often limited to idealised cases. Thus numerical validation
studies, such as the past HOME and upcoming ISTI studies, are also
important for an assessment of homogenisation algorithms under
realistic conditions.
</p>
<p>
Creating validation datasets also helps to see the limits of our
understanding of the statistical properties of the break signal. This
is especially the case for variables other than temperature and for
daily and (sub-)daily data. Information is needed on the real break
frequencies and size distributions, but also their auto-correlations
and cross-correlations, as well as, as explained in the next section, the
stochastic nature of breaks in the variability around the mean.
</p>
<p>
Validation studies focussed on difficult cases would be valuable for
a better understanding. For example, sparse networks, isolated island
networks, large spatial trend gradients and strong decadal
variability in the difference series of nearby stations (for example,
due to El Niño in complex mountainous regions).
</p>
<p>
The advantage of simulated data is that it can create a large number
of quite realistic complete networks. For daily data it will remain
hard for the years to come to determine how to generate a realistic
validation dataset. Thus even if using parallel measurements is
mostly limited to one break per test, it does provide the highest
degree of realism for this one break.
</p>
<h2>Deterministic or stochastic corrections?</h2>
<p>
Annual and monthly data is normally used to study trends and
variability in the mean state of the atmosphere. Consequently,
typically only the mean is adjusted by homogenisation. Daily data, on
the other hand, is used to study climatic changes in weather
variability, severe weather and extremes. Consequently, not only the
mean should be corrected, but the full probability distribution
describing the variability of the weather.</p>
<p>
The physics of the problem suggests that many inhomogeneities are
caused by stochastic processes. An example affecting many instruments
is a difference in the response time of instruments, which can lead
to differences determined by turbulence. A fast thermometer will on
average read higher maximum temperatures than a slow one, but this
difference will be variable and sometimes be much higher than the
average. In case of errors due to insolation, the radiation error
will be modulated by clouds. An insufficiently shielded thermometer
will need larger corrections on warm days, which are typically
sunnier, but some warm days will be cloudy and need little
correction, while other warm days are sunny and calm, with a dry, hot
surface, and need especially large corrections. The adjustment of
daily data for studies on changes in
the variability is thus a distribution problem and not only a
regression bias-correction problem. For data assimilation (numerical
weather prediction) accurate bias correction (with regression
methods) is probably the main concern.
</p>
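As a toy numerical sketch of this response-time effect (all numbers are invented; this is not any published instrument model), the slow thermometer can be represented as a first-order filter of the true temperature, and the daily maxima recorded by a fast and a slow sensor compared over many simulated days:

```python
import math
import random
import statistics

def daily_max(tau_steps, seed):
    """Daily maximum read by a sensor with response time tau_steps (minutes);
    tau_steps = 0 means an instantaneous sensor."""
    random.seed(seed)  # same simulated weather for every sensor
    alpha = 1.0 if tau_steps == 0 else 1.0 - math.exp(-1.0 / tau_steps)
    reading = 15.0
    maximum = -1e9
    for step in range(1440):  # one reading per minute
        # smooth diurnal cycle plus turbulent fluctuations
        truth = 15.0 + 8.0 * math.sin(math.pi * step / 1440) + random.gauss(0.0, 0.8)
        reading += alpha * (truth - reading)  # first-order sensor response
        maximum = max(maximum, reading)
    return maximum

# Fast (instantaneous) minus slow (5-minute response) daily maximum temperature
diffs = [daily_max(0, day) - daily_max(5, day) for day in range(200)]
print(statistics.mean(diffs), statistics.stdev(diffs))
```

The mean of the differences is the systematic bias a mean adjustment would remove, while the day-to-day spread is the stochastic part that only a correction of the full distribution can represent.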
<p>
Seen as a variability problem, the correction of daily data is
similar to statistical downscaling in many ways. Both methodologies
aim to produce bias-corrected data with the right variability, taking
into account the local climate and large-scale circulation. One
lesson from statistical downscaling is that increasing the variance
of a time series deterministically by multiplication with a constant
factor, called inflation, is the wrong approach and that the variance that
could not be explained by regression using predictors should be added
stochastically as noise instead (Von Storch, 1999). Maraun (2013)
demonstrated that the inflation problem also exists for the
deterministic Quantile Matching method, which is also used in daily
homogenisation. Current statistical correction methods
deterministically change the daily temperature distribution and do
not stochastically add noise.</p>
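The inflation problem can be illustrated with a small sketch (synthetic numbers; the simple linear regression stands in for any downscaling or correction model):

```python
import math
import random
import statistics

random.seed(0)

def corr(a, b):
    """Pearson correlation."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    return cov / (statistics.pstdev(a) * statistics.pstdev(b))

n = 10000
x = [random.gauss(0.0, 1.0) for _ in range(n)]     # large-scale predictor
y = [0.6 * v + random.gauss(0.0, 0.8) for v in x]  # local observation
yhat = [0.6 * v for v in x]                        # regression prediction

# "Inflation": rescale the prediction so its variance matches the observations.
factor = statistics.pstdev(y) / statistics.pstdev(yhat)
inflated = [factor * v for v in yhat]

# Randomisation: add the unexplained variance back as noise instead.
resid_sd = math.sqrt(max(statistics.pvariance(y) - statistics.pvariance(yhat), 0.0))
randomised = [v + random.gauss(0.0, resid_sd) for v in yhat]

print(round(corr(x, y), 2))           # the true predictability, about 0.6
print(round(corr(x, inflated), 2))    # 1.0: inflation pretends everything is predictable
print(round(corr(x, randomised), 2))  # close to the true 0.6 again
```

Inflation gets the variance right but exaggerates the predictable part of the signal; adding the unexplained variance as noise reproduces both the variance and the true degree of predictability.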
<p>
Transferring ideas from downscaling to daily homogenisation is likely
to be fruitful for developing such stochastic variability correction methods.
For example, predictor selection methods from downscaling could be
useful. Both fields require powerful and robust (time invariant)
predictors. Multi-site statistical downscaling techniques aim at
reproducing the auto- and cross-correlations between stations (Maraun
et al., 2010), which may be interesting for homogenisation as well.</p>
<p>
The daily temperature benchmarking study of Rachel Killick (2016)
suggests that current daily correction methods are not able to
improve the distribution much. There is a pressing need for more
research on this topic. However, these correction methods likely also
performed less well because they were paired with detection methods
that had a much lower hit rate than those of the comparison methods.</p>
<p>
Whether the deterministic correction methods lead to severe errors in
homogenisation still needs to be studied, but stochastic methods
that implement the corrections by adding noise would at least
theoretically fit the problem better. Such stochastic corrections
are not trivial and should have the right variability on all temporal
and spatial scales.</p>
<p>
It should be studied whether it may be better to only detect the
dates of break inhomogeneities and perform the analysis on the
homogeneous subperiods (HSPs), removing the need for corrections. The
disadvantage of this approach is that most of the trend variance is
in the differences between the means of the HSPs and only a small
part is in the trends within the HSPs. In case of trend analysis, this would be
similar to the work of the Berkeley Earth Surface Temperature group
on the mean temperature signal. Periods with gradual inhomogeneities,
e.g., due to urbanisation, would have to be detected and excluded
from such an analysis.</p>
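The variance argument can be quantified with a small sketch, assuming an idealised linear trend of 0.02 degrees per year over a century, cut into five 20-year homogeneous subperiods (the numbers are purely illustrative):

```python
import statistics

slope = 0.02  # idealised trend in degrees per year
trend = [slope * year for year in range(100)]

# Five 20-year homogeneous subperiods (HSPs)
segments = [trend[i:i + 20] for i in range(0, 100, 20)]

staircase = []  # the part of the trend captured by the HSP means
within = []     # the part left inside the HSPs
for seg in segments:
    m = statistics.mean(seg)
    staircase.extend([m] * len(seg))
    within.extend([v - m for v in seg])

total_var = statistics.pvariance(trend)
print(statistics.pvariance(staircase) / total_var)  # about 0.96
print(statistics.pvariance(within) / total_var)     # about 0.04
```

With these subperiod lengths roughly 96% of the trend variance sits in the differences between the subperiod means, which is exactly the part an analysis restricted to within-HSP trends would discard.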
<p>
An outstanding problem is that current variability correction methods
have only been developed for break inhomogeneities; methods for
gradual ones are still missing. In homogenisation of the mean of
annual and monthly data, gradual inhomogeneities are successfully
removed by implementing multiple small breaks in the same direction.
However, as daily data is used to study changes in the distribution,
this may not be appropriate for daily data as it could produce larger
deviations near the breaks. Furthermore, changing the variance in
data with a trend can be problematic (Von Storch, 1999).</p>
<p>
At the moment most daily correction methods correct the breaks one
after another. In monthly homogenisation it is found that correcting
all breaks simultaneously (Caussinus and Mestre, 2004) is more
accurate (Domonkos et al., 2013). It is thus likely worthwhile to
develop multiple breakpoint correction methods for daily data as
well.</p>
<p>
Finally, current daily correction methods rely on previously detected
breaks and assume that the homogeneous subperiods (HSPs), i.e., the
segments between breakpoints, are indeed homogeneous. However, these
HSPs are currently based on the detection of breaks in the mean only.
Breaks in higher moments may thus still be present in the
"homogeneous" subperiods and affect the corrections. If only for this
reason, we should also work on the detection of breaks in the distribution.</p>
<h2>
Correction as model selection problem</h2>
<p>
The number of degrees of freedom (DOF) of the various correction
methods varies widely. From just one degree of freedom for annual
corrections of the means, to 12 degrees of freedom for monthly
correction of the means, to 40 for decile corrections applied to
every season, to a large number of DOF for quantile or percentile
matching.</p>
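As a sketch of what decile corrections do (a simplified stand-in for the quantile-matching methods cited later in this section; the data and the known break are invented):

```python
import random
import statistics

random.seed(1)

# Toy candidate series: after the (known) break it reads 0.5 warmer
# and its variability is 20% too large.
before = [random.gauss(10.0, 2.0) for _ in range(3000)]
after = [10.5 + 1.2 * random.gauss(0.0, 2.0) for _ in range(3000)]

def decile_bins(data):
    """Split the sorted data into 10 equally filled bins."""
    s = sorted(data)
    return [s[len(s) * k // 10:len(s) * (k + 1) // 10] for k in range(10)]

# One additive correction per decile: 10 degrees of freedom
# (40 when estimated separately for each season, as in the text).
bins_before = decile_bins(before)
bins_after = decile_bins(after)
corrections = [statistics.mean(b) - statistics.mean(a)
               for a, b in zip(bins_after, bins_before)]
edges = [b[0] for b in bins_after]  # lower edge of each decile bin

def correct(value):
    """Apply the correction of the decile bin the value falls into."""
    k = sum(1 for edge in edges[1:] if value >= edge)
    return value + corrections[k]

adjusted = [correct(v) for v in after]
print(statistics.mean(adjusted) - statistics.mean(before))      # near zero
print(statistics.pstdev(adjusted) / statistics.pstdev(before))  # near one
```

Each value is shifted by the correction of the decile bin it falls into, so both the mean and most of the distortion of the distribution are removed, at the cost of ten parameters per break.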
<p>
A study using PRODIGE on the HOME benchmark suggested that for
typical European networks monthly adjustments are best for
temperature; annual corrections are probably less accurate because
they fail to account for changes in the seasonal cycle due to
inhomogeneities. For precipitation annual corrections were most
accurate; monthly corrections were likely less accurate because the
data was too noisy to estimate the 12 correction constants/degrees of
freedom.</p>
<p>
Which correction method is best depends on the characteristics of
the inhomogeneity. For a calibration problem, adjusting just the
annual mean could be sufficient; for a serious exposure problem
(e.g., insolation of the instrument) a seasonal cycle in the monthly
corrections may be expected and the full distribution of the daily
temperatures may need to be adjusted. The best correction method also
depends on the reference. Whether the parameters of a certain
correction model can be reliably estimated depends on how well
correlated the neighbouring reference stations are.</p>
<p>
An entire regional network is typically homogenised with the same
correction method, while the optimal correction method will depend on
the characteristics of each individual break and on the quality of
the reference. These will vary from station to station, from break to
break and from period to period. Work on correction methods that
objectively select the optimal correction method, e.g., using an
information criterion, would be valuable.
</p>
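A minimal sketch of such an objective selection, using AIC as one possible information criterion (the difference series and the two candidate models are invented for illustration):

```python
import math
import random
import statistics

random.seed(2)

# Monthly difference series (candidate minus reference) after a break; here
# the toy inhomogeneity is an exposure problem with a clear seasonal cycle.
months = 60
diff = [0.8 + 1.0 * math.cos(2 * math.pi * m / 12) + random.gauss(0.0, 0.3)
        for m in range(months)]

def aic(residuals, n_par):
    """Gaussian AIC up to a constant: n log(RSS/n) + 2 k."""
    n = len(residuals)
    rss = sum(r * r for r in residuals)
    return n * math.log(rss / n) + 2 * n_par

# Candidate model 1: a single annual correction constant (1 DOF).
annual = statistics.mean(diff)
res_annual = [d - annual for d in diff]

# Candidate model 2: one correction constant per calendar month (12 DOF).
monthly = [statistics.mean(diff[m::12]) for m in range(12)]
res_monthly = [d - monthly[i % 12] for i, d in enumerate(diff)]

best = "annual" if aic(res_annual, 1) < aic(res_monthly, 12) else "monthly"
print(best)  # the seasonal cycle is strong, so the 12-DOF model wins here
```

With a strong seasonal cycle in the inhomogeneity the 12-parameter model wins despite its larger complexity penalty; for a pure calibration offset the 1-parameter model would typically be selected instead.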
<p>
In case of (sub-)daily data, the options to select from become even
larger. Daily data can be corrected just for inhomogeneities in the
mean (e.g., Vincent et al., 2002, where daily temperatures are
corrected by incorporating a linear interpolation scheme that
preserves the previously defined monthly corrections) or also for the
variability around the mean. In between are methods that adjust for
the distribution including the seasonal cycle, which dominates the
variability and is thus effectively similar to mean adjustments with
a seasonal cycle. Correction methods of intermediate complexity, with
more than one but fewer than 10 degrees of freedom, would fill a gap
and allow for more flexibility in selecting the optimal correction
model.
</p>
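One conceivable intermediate model, sketched here as a hypothetical example rather than a published method, is a mean plus one annual harmonic, i.e., three degrees of freedom:

```python
import math
import random
import statistics

random.seed(3)

months = 120
# Difference series with a smooth seasonal cycle in the inhomogeneity
diff = [0.5 + 0.8 * math.cos(2 * math.pi * m / 12 - 1.0) + random.gauss(0.0, 0.3)
        for m in range(months)]

# 3 degrees of freedom: mean plus one annual harmonic. The coefficients follow
# from projection, as the basis functions are orthogonal over whole years.
a0 = statistics.mean(diff)
a_cos = sum(d * math.cos(2 * math.pi * m / 12) for m, d in enumerate(diff)) * 2 / months
a_sin = sum(d * math.sin(2 * math.pi * m / 12) for m, d in enumerate(diff)) * 2 / months
fit = [a0 + a_cos * math.cos(2 * math.pi * m / 12) + a_sin * math.sin(2 * math.pi * m / 12)
       for m in range(months)]
rss3 = sum((d - f) ** 2 for d, f in zip(diff, fit))

# 12 degrees of freedom: one constant per calendar month
monthly = [statistics.mean(diff[m::12]) for m in range(12)]
rss12 = sum((d - monthly[m % 12]) ** 2 for m, d in enumerate(diff))

print(rss3 / months, rss12 / months)  # both close to the noise variance of 0.09
```

For a smooth seasonal cycle the 3-parameter fit leaves almost the same residual variance as the 12 monthly constants, at a quarter of the degrees of freedom, which matters when the reference is noisy.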
<p>
When applying these methods (Della-Marta and Wanner, 2006; Wang et
al., 2010; Mestre et al., 2011; Trewin, 2013) the number of quantile
bins (categories) needs to be selected as well as whether to use
physical weather-dependent predictors and the functional form in
which they are used (Auchmann and Brönnimann, 2012). Objective optimal methods
for these selections would be valuable.</p><h2 style="text-align: left;">Related information</h2><p>WMO <a href="https://library.wmo.int/index.php?lvl=notice_display&id=21756">Guidelines on Homogenization</a> (English, French, Spanish) </p><p>WMO guidance report: <a href="https://library.wmo.int/doc_num.php?explnum_id=4217">Challenges in the Transition from Conventional to Automatic Meteorological Observing Networks for Long-term Climate Records</a></p>
<p></p>Victor Venemahttp://www.blogger.com/profile/02842816166712285801noreply@blogger.com0tag:blogger.com,1999:blog-9093436161326155359.post-23865257807272617792020-08-30T17:58:00.006+01:002020-11-01T14:12:30.051+00:00A primer on herd immunity for Social Darwinists<div>Herd immunity has been proposed as a way to deal with the new Corona virus. In the best case, it is a call to slow down the spread of the SARS-CoV-2 virus
enough so that the hospitals can just handle the flood of patients, in the worst case it is a fancy term for just doing nothing and letting everyone get sick. </div><div><br /></div><div>Trump often talked about letting the pandemic wash over America. <a href="https://www.washingtonpost.com/politics/trump-task-forces-coronavirus-pandemic/2020/04/11/5cc5a30c-7a77-11ea-a130-df573469f094_story.html">Insider reports confirm he was indeed talking about herd immunity.</a> When I first heard claims that Boris Johnson was pursuing herd immunity, I assumed his political opponents were smearing him and trying to get him to act, but <a href="https://www.theguardian.com/commentisfree/2020/apr/29/the-guardian-view-on-herd-immunity-yes-it-was-part-of-the-plan">it seems as if this was really his plan</a>. Of all world leaders Jair Bolsonaro may be most in denial about the pandemic, which he calls a little flu. <a href="https://brazilian.report/coronavirus-brazil-live-blog/2020/07/02/bolsonaro-chases-a-herd-immunity-that-might-never-come/">He also advocated herd immunity.</a> All these leaders have downplayed the threat, which by itself helps spread the disease, and advocated policies that promote infection, leading to more infected, sick and dead people.</div><div><br /></div><div> America and Brazil lead the COVID-19 death rankings unchallenged with respectively <a href="https://www.worldometers.info/coronavirus/#countries">187 and 120 thousand total deaths</a> and around one thousand people dying every day over the last month. The UK is the country with the most COVID-19 deaths in Europe, even though it was lucky to get the virus late. <br /></div><div><br /></div><div>This "strategy" has a certain popularity among Trump-like politicians. I do not think they know what they are doing. Scientific advice tends to come from a humanist perspective where every life is valued. Such advice is naturally rejected by Social Darwinists, who in the best case do not care about most people.
While these politicians naturally see themselves as more valuable than us, they tend not to excel in academics. So let me explain why herd immunity is also a bad policy from their perspective, even if up to now people from groups they hate had a higher risk of dying. <br /></div><div><br /></div><div></div><div style="text-align: left;"><h2>Herd immunity </h2></div><div>If we do not take any preventative measures one SARS-CoV-2 infected person infects two or three further people. This may not sound like much, but this is an example of exponential growth. It is the same situation of <a href="https://www.npr.org/sections/krulwich/2012/09/15/160879929/that-old-rice-grains-on-the-chessboard-con-with-a-new-twist">the craftsman who "only" asks the king for rice as payment</a> for his chessboard: one grain on the first square, two on the second, four on the third square and so on. <br /></div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://en.wikipedia.org/wiki/Wheat_and_chessboard_problem" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="476" data-original-width="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhl2Y5gvv89o2rvjQ6dCshFpkXpvyCZMU-f4JNUwCKSTtH12J9Y60Q-EowsUMxhoQW3STI1tVupA85MhIbGkoya8AF7OZVP6p9hUQVPYvEX8X0HcNPW76y1vCB5bfQHrjW8r9gbkexgtY4/d/640px-Wheat_and_chessboard_problem.jpg" /></a></div><div><br /></div><div><br /></div><div>If we assume that one person only infects two other people, that is that the base reproduction number is two, then the sequence is: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, ...</div><div>1024, </div><div>2048, </div><div>4096, </div><div>8192, </div><div>16,384, </div><div>32,768, </div><div>65,536, </div><div>131,072, </div><div>262,144, </div><div>524,288, </div><div>1,048,576, </div><div>2,097,152, </div><div>4,194,304, </div><div>8,388,608, </div><div>16,777,216, </div><div>33,554,432, </div><div>67,108,864, 
</div><div>134,217,728, </div><div>268,435,456, </div><div>536,870,912, ...</div><div><br /></div><div>Those are just 30 steps to get to half a billion and the steps take about 5 days in case of SARS-CoV-2. So that is half a year. With good care 1 to 2 percent of people die, that would be at this point 5 to 10 million people. Be highly optimistic and it is still 2 million people.</div><div><br /></div><div>Many more people need to go to the hospital. <a href="https://www.rki.de/DE/Content/InfAZ/N/Neuartiges_Coronavirus/Steckbrief.html#doc13776792bodyText16">In Germany this is 17%.</a> A recent French study reported that <a href="https://www.journalofinfection.com/article/S0163-4453(20)30562-4/fulltext">after 110 days most patients are still tired and have trouble breathing</a>, many did not yet work again. That would be around 200 million people with long-lasting health problems.<br /></div><div><br /></div><div>This will naturally not happen in reality. People will take action to reduce the reproduction number, whether the government mandates it or not. And at a certain moment an infected person will not infect as many people because many are already immune. If the base reproduction number is two and half the population is immune, the infected person will only infect one other person, that is the effective reproduction number is one.</div><div><br /></div><div>The actual base reproduction number is most likely larger than two and reality is more complicated, so experts estimate that the actual herd immunity level is not 50%, but between 60 and 70%. A further complication is that it is possible that people are sufficiently immune to avoid getting ill again, but the immunity may not prevent people from getting infected and transmitting the virus.
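The arithmetic behind the doubling list and the classical threshold figures is simple enough to write down (idealised, perfectly mixed population):

```python
# Effective reproduction number in a perfectly mixed population and the
# classical herd immunity threshold that follows from it
def effective_r(r0, immune_fraction):
    return r0 * (1.0 - immune_fraction)

def herd_immunity_threshold(r0):
    """Immune fraction at which the effective reproduction number drops to one."""
    return 1.0 - 1.0 / r0

print(2 ** 29)                       # 536870912: the last number in the list above
print(effective_r(2.0, 0.5))         # 1.0: each case now infects one other person
print(herd_immunity_threshold(2.0))  # 0.5 for a base reproduction number of two
print(herd_immunity_threshold(3.0))  # about 0.67, matching the 60-70% estimates
```

A larger base reproduction number pushes the threshold up quickly, which is one reason the expert estimates sit above the naive 50%.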
There is a well-documented case of a 33-year-old man from Hong Kong, <a href="https://arstechnica.com/science/2020/08/first-confirmed-case-of-sars-cov-2-reinfection-reported-in-hong-kong/">who got infected twice, but did not get ill</a>. If this were typical, <a href="https://www.nbcnews.com/science/science-news/can-herd-immunity-help-stop-coronavirus-experts-warn-it-s-n1207351?cid=eml_mrd_20200515&utm_source=Sailthru&utm_medium=email&utm_campaign=Morning%20Rundown%20May%2015%2C%202020&utm_term=Morning%20Rundown">herd immunity would not exist</a>.<br /></div><div><br /></div><div>You may have heard experts say that once this immunity level has been reached, the pandemic is over. But this does not mean that the virus is gone. Europe needed several months of an effective reproduction number well below one to get to low infection numbers (and the virus is still not gone). This was after a drastic decrease in the effective reproduction number (R) due to public health measures; in case of herd immunity it would initially be around one, and only very slowly go below one.</div><div><br /></div><div>Say we reach R = 1 when one million people are infected; then one step (5 days) later another one million people are infected. One million people is only a tiny fraction of the roughly 30% of the world population that is still susceptible, so R will still be almost one. In other words, it would take several years for the virus to go away even in the best case. In the worst case, the virus mutates, people lose some immunity and new babies are immunologically naive. So most likely SARS-CoV-2 would stay with us forever.</div><div><br /></div><div>Reaching herd immunity will not help Trump. He will still be bunkered down in the White House surrounded by staff that is tested every day so as not to infect him, while he calls on others to go out, without a good testing system, and die for him. 
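The claim that the virus would linger for years can be checked with the same back-of-the-envelope arithmetic (illustrative numbers: a 5-day generation time and R just below one):

```python
# How long infections take to die out once R is only slightly below one,
# with a generation time of about 5 days (illustrative numbers)
infected = 1_000_000
r = 0.99
days = 0
while infected > 1000:
    infected *= r  # one generation of transmission
    days += 5

print(days / 365)  # about 9.4 years
```

Even a thousandfold reduction takes close to a decade at R = 0.99, which is why herd immunity without further measures does not make the virus go away quickly.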
Trump's billionaire buddies will still need to lock themselves up in their mansions or high-sea yachts, counting how much richer they got from Trump's COVID-19 bailout. The millionaire hosts at Fox News will stay at home telling others to go out to work even if their work place is not safe and to accelerate the pandemic by sending kids to schools even if the schools are not safe. They will still need to wait until there is a vaccine. The herd immunity strategy only ensures that up to that time the largest number of people have died.</div><div><br /></div><div>When the virus is everywhere, good luck trying to keep it out of elderly homes. <a href="https://www.pewresearch.org/politics/2018/08/09/an-examination-of-the-2016-electorate-based-on-validated-voters/">In 2016 Trump won in the age groups above 50</a>. The UK Conservatives had a <a href="https://www.ipsos.com/ipsos-mori/en-uk/how-britain-voted-2019-election">47 point lead among those aged 65+</a>. In Brazil <a href="https://sxpolitics.org/brazilian-2018-presidential-elections-in-figures/19183">Bolsonaro had a 16 percentage point lead</a> for people older than 60. They will be the ones dying and seeing their friends die. This is not helpful for the popular support of far right politicians. <br /></div><div><br /></div><div>The elite may think that it will be the poorest 70% that get infected. Far right Republicans may hope that it will affect Democrats and people of color more. It is true that at the moment poor people are more affected as they cannot afford to stay at home even if their place of work is not safe. It is true that initially mostly blue states and cities were affected in America.</div><div><br /></div><div>Let's take the theoretical case where the poorest 70% are infected or immune and the richest 30% still immunologically naive. 
As soon as one of these 30% is infected, the virus will spread like wildfire, as rich people tend to hang out with rich people, so the virus would easily find two or three rich people to infect next. <br /></div><div><br /></div><div>That is one reason why it is too simple to equate a base reproduction number of two with a herd immunity level of 50%. This would be the case if the population were perfectly mixed. But any network where the immunity level is not yet 50% is up for grabs. In the end everyone will get it, rich or poor, red or blue.</div><div><br /></div><div>The only Social Darwinists for whom this pays are billionaires who have their own private hospital, with their own nurses and doctors at their mansion. They would have a 1 to 2 percent chance of dying. If, however, they manage to convince the people to go for herd immunity without even staying below the carrying capacity of the hospitals, around 5% of the population would die. That is a 3 to 4 percentage point survival difference. Not sure that is worth getting all your politicians kicked out of office.</div><div><br /></div><div>It naturally also helps the high frequency traders. <a href="https://www.desmogblog.com/robert-mercer">Like the Mercer family who funded Trump in 2016 when no one thought he was a good investment.</a> They have made so much money from the chaos Trump produces. Up or down the high frequency trader wins. Down goes faster. They live their lives on chaos, suffering and destruction. I presume they have a private hospital; they have the money.<br /></div><div><br /></div><div>But for the average Joe Social Darwinist there are nearly no gains and it is bad politics. It hurts your country compared to more social democratic countries and at home it helps lefties get into power and implement disgusting policies that help everyone. 
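The thought experiment can be put in numbers with a toy two-group calculation; the 80% within-group contact share is an invented illustrative figure, not an empirical one:

```python
def group_r(r0, within_share, susceptible_own, susceptible_rest):
    """Effective R for a case in one social group whose contacts are split
    between the own group and the rest of the population (toy model)."""
    return r0 * (within_share * susceptible_own
                 + (1.0 - within_share) * susceptible_rest)

r0 = 2.0
# Random mixing: a member of the susceptible 30% mostly meets the 70% who
# are already immune.
print(group_r(r0, 0.3, 1.0, 0.0))  # 0.6: the outbreak would die out

# Assortative mixing: 80% of contacts stay within the fully susceptible group.
print(group_r(r0, 0.8, 1.0, 0.0))  # 1.6: within that group it still spreads
```

Population-wide immunity of 70% only protects a group that actually mixes with the immune majority; an unmixed susceptible network keeps an effective R well above one.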
</div><div><br /></div><div><h2 style="text-align: left;">Related reading</h2></div><div></div><div><a href="https://www.washingtonpost.com/politics/trump-coronavirus-scott-atlas-herd-immunity/2020/08/30/925e68fe-e93b-11ea-970a-64c73a1c2392_story.html">New Trump pandemic adviser pushes controversial ‘herd immunity’ strategy, worrying public health officials</a></div><div><br /></div><div>Nature: <a href="https://www.nature.com/articles/d41586-020-02948-4">The false promise of herd immunity for COVID-19.</a> Why proposals to largely let the virus run its course — embraced by Donald Trump’s administration and others — could bring “untold death and suffering”. <br /></div>Victor Venemahttp://www.blogger.com/profile/02842816166712285801noreply@blogger.com0tag:blogger.com,1999:blog-9093436161326155359.post-28200644879121559242020-07-26T17:03:00.009+01:002022-05-15T19:21:00.390+01:00Micro-blogging for scientists without nasties and surveillance<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1ke_KCBrZ5xPBp4t0TdL13uD1Anktn7nC2joSj193QeCzbBVfHf_XYnsBJ3IFsEJcnASjJWTI26DZoKlZSY5RPBD4Am1xdK1NoQodw2DdVjjc9nSz-HBFDgkLbAy7JwrJ_GqZVYbdm5Q/s1201/mastodon_preview.jpg" style="margin-left: 1em; margin-right: 1em;"><img alt="Start screen picture of Mastodon: A Mastodon playing with paper airplanes." border="0" data-original-height="630" data-original-width="1201" height="336" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1ke_KCBrZ5xPBp4t0TdL13uD1Anktn7nC2joSj193QeCzbBVfHf_XYnsBJ3IFsEJcnASjJWTI26DZoKlZSY5RPBD4Am1xdK1NoQodw2DdVjjc9nSz-HBFDgkLbAy7JwrJ_GqZVYbdm5Q/w640-h336/mastodon_preview.jpg" width="640" /></a></div><div><br /></div><div><br /></div><div>Two years ago I joined Mastodon to get to know a more diverse group of people here in Bonn. 
Almost two thousand messages later, I can say I really like it there and social networks like Mastodon are much healthier for society as well. Together with Frank Sonntag, I have recently set up a Mastodon server for publishing scientists. Let me explain how it works and why this system is better for the users and society.<br /></div><div><br /></div><div>Mastodon looks a lot like Twitter, i.e. it is a micro-blogging system, but many tweaks make it a much friendlier place where you can have meaningful conversations. One exemplary difference is that there are no quote tweets. Quoting rather than simply replying is often used by large accounts to bully small ones by pulling many people into the "conversation" who disagree. I do miss quote tweets; they can also be used for good, to highlight what is interesting about a tweet or to explain something that the writer assumed their readers know, but your readers may not know.
But quote tweets make the atmosphere more adversarial, less about understanding and talking with each other. Conflict leads to more engagement and more time on the social network, so Twitter and Facebook like it, but pitting groups against each other is not the public debate that makes humanity better. <br /></div><div><br /></div><div>The main difference under the hood is that the system is not controlled by one corporation. There is not one server, but many servers that seamlessly talk with each other, just like the email system. The communication protocol (<a href="https://www.w3.org/TR/activitypub/">ActivityPub</a>) is a standard of the World Wide Web Consortium, just like HTML and HTTPS, which powers the web. <br /></div><div><br /></div><div>This means that you can choose the server and interface you like and still talk to others, while people on Twitter, Facebook, Instagram, WordPress and Tumblr can only talk to other people in their silo. As they say, the modern internet is a group of five websites, each consisting of screenshots from the other four. It is hard to leave these silos, as it would cut you off from your friends. This is also why the system naturally evolves into a few major players. Their service is as bad as one would expect with the monopoly power this network effect gives them. 
<br /></div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://rame.altervista.org/mastostart/" style="margin-left: 1em; margin-right: 1em;"><img alt="The Fediverse and its social networks as icons" border="0" data-original-height="690" data-original-width="1228" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhIfqOqpl4MBusTeeXdDX86StREoNQ2s5dDPDIGcCzoiY5out9z-0UG1jfzFtACzQSDRinqWc-Jds8nk1sMz2SakhLbRlgdjY0LGDj7eURgyyCmuScW6PfqhFo_ubwVf3EtHVRoAbPjd1g/w640-h360/Fediverse-Galaxy.jpg" width="640" /></a></div><div><br /></div><div>ActivityPub is not only used by Mastodon, but also by other micro-blogging social networks such as <a href="https://pleroma.social">Pleroma</a>, blogging networks such as <a href="http://Write.as">Write.as</a>, podcasting services such as <a href="https://funkwhale.audio">FunkWhale</a> and file hosting such as <a href="https://nextcloud.com/blog/nextcloud-introduces-social-features-joins-the-fediverse/">NextCloud</a>. There is a version of Instagram (<a href="https://pixelfed.org">PixelFed</a>) and of YouTube (<a href="https://joinpeertube.org">PeerTube</a>). With ActivityPub all these social networks can talk to each other. Where they do different things, the system is designed to degrade gracefully. PixelFed shows photos more beautifully, has collections and filters, but Mastodon gracefully shows the recent photos as a photo below a message. PeerTube shows one large video on a page, while Mastodon, just like Twitter, shows the newest videos as small previews below a message in the news feed. The full network is called the fediverse, a portmanteau of federation and universe.</div><div><br /></div><div>Currently all these services are ad-free and tracking-free. The coding of the open source software is largely a labor of love, even if some coders are supported by micro-funding, for example Patreon or Liberapay. 
Most servers are maintained by people as a hobby, some (like for email) by organizations for their members, some larger ones again use Patreon or Liberapay, some are even coops.</div><div><br /></div><div>This means that technology enthusiasts from the middle class are mostly behind these networks. That is better than a few large ad corporations, but still not as democratic as one would like for such an important part of our society. <br /></div><div><br /></div><div><h2 style="text-align: left;">Moderation</h2></div><div>Not only can these networks talk to each other, they also themselves consist of many different servers each maintained by another group, just as in the email system. This means that moderation of the content is much better than on Twitter or Facebook. The owners of the servers want to create a functional community, while these communities are relatively small. So they can invest much more time per moderation decision than a commercial silo would. Also if the moderation fails, people will go somewhere else. <br /></div><div><br /></div><div>Individual moderation decisions only pertain to one server and are thus less impactful and can consequently be more forceful. If you do not like the moderation, you can move to another server that fits your values better. If you are kicked off a server, you can go to another one and still talk to your friends. Facebook kicking someone off Facebook or Twitter kicking someone off Twitter is somewhat of a big deal and is thus only done in extreme cases, when someone has already created a lot of damage to the social fabric, while others make the atmosphere toxic staying below the radar.<br /></div><div><br /></div><div>If someone is really annoying they may naturally be removed from many servers. Then it does become a problem for this person, but that only happens when many server administrators agree you are not welcome. 
So maybe that person is really not an enrichment for humanity.<br /></div><div><br /></div><div>The extreme example would be Nazis. Some Nazis were too extreme for Twitter and started their own micro-blogging network. Probably most Nazis know the name already, but I think it is a good policy not to help bad actors with PR. As this network was used to coordinate their violent and inhumane actions, Google and Apple have removed their apps from their app stores. I may like that outcome, but these corporations should not have that power. Next this network started using ActivityPub, so that they can use ActivityPub apps. The main Activity network does not like Nazis, so they all blocked this network.</div><div><br /></div><div>I feel this is a good solution for society, everyone has their freedom of speech, but Nazis cannot harass decent people. They can tell each other pretty lies, where being responsible for killing more than 138 thousand Americans is patriotism, but 4 is treason, where the state brutalizing people expressing their 1st amendment rights is freedom, but wearing a mask not to risk the lives of others is tyranny. At least we do not have to listen to the insanity. (The police should naturally listen to stop crime.)</div><div><br /></div>
<iframe allowfullscreen="" frameborder="0" height="315" sandbox="allow-same-origin allow-scripts allow-popups" src="https://conf.tube/videos/embed/d8c8ed69-79f0-4987-bafe-84c01f38f966" width="560"></iframe>
<div><br /></div>
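Under the hood, a post travels between servers as an ActivityStreams JSON document. Here is a schematic, heavily abridged example of the "Create" activity wrapping a toot (the account URL is illustrative; real servers add ids, timestamps, addressing details and HTTP signatures):

```python
import json

# Schematic ActivityStreams "Create" activity wrapping a "Note" (a toot).
# Heavily abridged; the actor URL is an invented example.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://fediscience.org/users/VictorVenema",
    "object": {
        "type": "Note",
        "content": "Hello, fediverse!",
        "attributedTo": "https://fediscience.org/users/VictorVenema",
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    },
}

print(json.dumps(activity, indent=2))
```

Because every server speaks this same vocabulary, a Mastodon server can show a PeerTube video or a PixelFed photo without either side knowing anything about the other's software.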
<div>Many of the societal problems of Facebook and Co. would be much reduced if we legislated that such large networks open up to competition by implementing open communication protocols like ActivityPub. Then they would be forced to deliver a good product to keep their customers. If they do not change, many will flee the repulsive violent conspiracy surveillance hell they were only still part of to be able to talk to grandma.<br /></div><div><br /></div><div>Because there are nearly no Nazis and other unfriendly characters, the fediverse is very popular with groups they would otherwise harass and bully into silence. It is a colorful bunch. This illustrates that extending the right to free speech to the right to be amplified by others does not optimize the freedom of speech, but in reality excludes many voices.</div><div><br /></div><div>A short encore: the coders of the ActivityPub apps also do not like Nazis. So they hard coded Nazi blocks into their apps. It is open source software, so the Nazis can remove this, but Google and Apple will not accept their apps. The latter is the societal problem; the coders are perfectly in their right not to want their work to be used to destroy civilization.</div><div><br /></div><div><h2 style="text-align: left;">Open Science<br /></h2></div><div>The fediverse looks a lot like the Open Science tool universe I am dreaming of. Many independent groups and servers that seamlessly communicate with each other. The Grassroots post-publication peer review system I am working on should be able to gather reviews from all the other review and endorsement systems. They and repositories should be able to display grassroots reviews.</div><div><br /></div><div>The reviews could be aided by displaying information on retractions from the Retraction Watch database. I hope someone will build a service that also warns when a cited article is retracted. 
The reviews could show or link to open citations of the article and statistics checks, as well as plagiarism and figure tampering checks.</div><div><br /></div><div>We could have systems that warn authors of new articles and manuscripts they may find interesting given their publication history and warn editors of manuscripts that fit their journal. I recently made a longer list of <a href="https://zenodo.org/record/3923961">useful integrations and services</a> and put it on Zenodo. <br /></div><div><br /></div><div>These could all be independent services that work together via ActivityPub and APIs, but the legacy publishers are working on collaborative science pipelines that create network effects, to ensure you are forced to use the largest service where your colleagues are and cannot leave, just like Facebook, Google and Twitter.<br /></div><div><br /></div><div><h2 style="text-align: left;">FediScience</h2></div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgabfYHRyQiyuqAHQ1v6EMhpQ7cfQgzDr5DG_dOgP4R0yrW81CGaw-p8r19QhVi8QvuzYnC2Pg0f8fImFYaTWNqtDostnszvyRXHC5ex3suAHJys0gGsH7WMll1UKSGqM-qDMqHGvifvT0/s253/elephant_ui_plane.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="A mastodon with a paperplane in its trunk." border="0" data-original-height="194" data-original-width="253" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgabfYHRyQiyuqAHQ1v6EMhpQ7cfQgzDr5DG_dOgP4R0yrW81CGaw-p8r19QhVi8QvuzYnC2Pg0f8fImFYaTWNqtDostnszvyRXHC5ex3suAHJys0gGsH7WMll1UKSGqM-qDMqHGvifvT0/d/elephant_ui_plane.png" /></a></div>I am explaining all this to illustrate that such a federated social network is much better for society and its users. I really like the atmosphere on Mastodon. You can have real conversations with interesting people, without lunatics jumping in between or groups being pitted against each other. 
If people hear less and less of me on Twitter, that is one of the reasons.</div><div><br /></div><div>So I hope that this kind of network is the future, and to help get there we have started a Mastodon server for publishing scientists. "We" is me and former meteorologist Frank Sonntag, who leads a small digital services company, <a href="https://akm.services">AKM-services</a>. So for him setting up a Mastodon server was easy.</div><div><br /></div><div>Two years ago he had to drag me to Mastodon a bit, when we tried to set up a server just for the Earth Sciences. That did not work out. By now I have learned to love Mastodon, it has gotten a lot bigger, and more people are aware of the societal problems due to social media. So it is time for another try with a larger target audience: all scientists. We have called it: <a href="https://fediscience.org"><b>FediScience</b></a>.<br /></div><div><br /></div><div>Mastodon is still quite small with about half a million active users; Twitter is 100 times bigger. My impression is that many climate scientists, at least, are on Twitter for science communication. For many, leaving Twitter is not yet a realistic option, but FediScience could be a friendly place to talk to colleagues and nerd out about detailed science, while staying on Twitter for more comprehensible Tweets on the main findings.</div><div><br /></div><div>Once we have a nice group together, we can decide together on the local rules. How we would like to moderate, who will do the moderation, with whom our server federates, who is welcome, how long the messages are, whether we want equations, ...
In the end I hope the server will be run by an association with the users as members.<br /></div><div><br /></div><div><h2 style="text-align: left;">My network empire</h2></div><div>My solution to Mastodon still being small was to stay on Twitter to talk about climate science, the political problems leading to the climate branch of the American culture war and anything that comes up on this blog: <a href="https://twitter.com/VariabilityBlog">Variable Variability</a>. As the goal of my <a href="https://bonn.social/@VictorVenema">Mastodon account in Bonn</a> is to build a local network for a digital non-profit, there I talk more about the open web and data privacy, often write in German and only occasionally write about climate. I aim to use my new account at <a href="https://fediscience.org/@VictorVenema">FediScience</a> to talk about (open) science and to finally enjoy a captive audience that understands the statistics of variability. As administrator I will try to help people find their way in the fediverse.</div><div><br /></div><div>Next to this, the grassroots open review journals are on <a href="https://fediscience.org/@GrassrootsReview">Mastodon</a>, <a href="https://twitter.com/Grassr_Journals">Twitter</a> and <a href="https://old.reddit.com/r/GrassrootsJournals/">Reddit</a>. And I have inherited the Open Science Feed from Jon Tennant, which is on <a href="https://fediscience.org/@OpenScienceFeed">Mastodon</a>, <a href="https://twitter.com/OpenScienceR">Twitter</a> and <a href="https://old.reddit.com/r/Open_Science/">Reddit</a>. Both deserve to get an <a href="https://indieweb.org">IndieWeb homepage</a> and a newsletter, but all newsletters I know are full of trackers; suggestions for ethical ones are welcome. For even more fun, I also created a Twitter feed for the <a href="https://twitter.com/TaminoClimate">climate statistics blog Tamino</a> and scientific skeptic <a href="https://twitter.com/Potholer54T">Potholer54's YouTube channel</a>.
I should probably put them on Mastodon as well. That makes this blog my 12th social media channel. Pro-tip: with Firefox "containers" you can be logged in to multiple Mastodon, Twitter or Reddit accounts. <br /></div><div><br /></div><div>Every member of FediScience can invite their colleagues to join the network. Please do. If you share the link in public, please make it time-limited.</div><div><br /></div><div>Please let other scientists know about FediScience, whether by mail or via one of the social media silos. These are good Tweets to spread: <br /></div><div><br /></div><div>My own Tweet: <a href="https://twitter.com/VariabilityBlog/status/1287738860726870017">The post below explains why this #Mastodon system is better for us & for society.</a></div><div>Ruth Mottram: <a href="https://twitter.com/ruth_mottram/status/1284865843873173504">come and join us, the water is lovely...</a></div><div>Alex Holcombe: <a href="https://twitter.com/ceptional/status/1287918756849545217">many tweaks make it a much friendlier place where you can have meaningful conversations.</a></div><div> </div><div><a href="https://fediscience.org">Sign-up to FediScience.</a> <br /></div>
<div><h2 style="text-align: left;">Glossary </h2><div>When you join Mastodon, the following glossary is helpful.<br /></div><div><br /></div></div>
<style>
th, td {
padding-left: 10px;
}
</style>
<table border="0">
<tbody><tr><td>The Bird Site</td> <td>Twitter</td></tr>
<tr><td>Fediverse</td> <td>All federated social media sites together</td></tr>
<tr><td>Instance</td> <td>Server running Mastodon</td></tr>
<tr><td>Toot</td> <td>Tweet</td></tr>
<tr><td>Boost</td> <td>Retweet</td></tr>
<tr><td>ActivityPub (AP)</td> <td>The main communication protocol in the fediverse</td></tr>
<tr><td>Content Warning (CW)</td><td>A convenient way to give a heads up</td></tr>
<tr><td><a href="https://nitter.net/VariabilityBlog">Nitter.net</a></td> <td>A mirror site of Twitter without tracking, popular for linking to from Mastodon<br /></td></tr>
</tbody></table>
<p> </p>
Victor Venemahttp://www.blogger.com/profile/02842816166712285801noreply@blogger.com2tag:blogger.com,1999:blog-9093436161326155359.post-37875907369549620012020-05-29T15:25:00.006+01:002020-07-13T22:50:43.674+01:00What does statistical homogenization tell us about the underestimated global warming over land?<div>Climate station data contains inhomogeneities, which are <a href="http://variable-variability.blogspot.com/2012/08/statistical-homogenisation-for-dummies.html">detected and corrected</a> by comparing a candidate station to its neighbouring reference stations. The most important inhomogeneities are the ones that lead to errors in the station network-wide trends and in global trend estimates. <br /></div><div><br /></div><div>An earlier post in this series argued that statistical homogenization will <a href="https://variable-variability.blogspot.com/2020/05/statistical-homogenization-under-correction-trends.html">tend to under-correct errors in the network-wide trends in the raw data</a>. Simply put: some of the trend error will remain. The catalyst for this series is the new finding that <a href="https://variable-variability.blogspot.com/2020/04/break-detection-deceptive-noise-break-signal-variance.html">when the signal to noise ratio is too low, homogenization methods will have large errors in the positions of the jumps/breaks</a>. For much of the earlier data and for networks in poorer countries this probably means that any trend errors will be seriously under-corrected, if they are corrected at all. <br /></div><div><br /></div><div>The questions for this post are: 1) What do the corrections in global temperature datasets do to the global trend and 2) What can we learn from these adjustments for global warming estimates? <br /></div><div><br /></div><div><h2 style="text-align: left;">
The global warming trend estimate </h2></div><div></div><div>In the global temperature station datasets statistical homogenization leads to larger warming estimates. So as we tend to underestimate how much correction is needed, this suggests that the Earth warmed up more than current estimates indicate. <br /></div><div><br /></div><div>Below is the warming estimate in NOAA’s Global Historical Climate Network (Versions 3 and 4) from Menne et al. (2018). You see the warming in the “raw data” (before homogenization; dashed lines) and in the homogenized data (solid lines). The new version 4 is drawn in black, the previous version 3 in red. For both versions homogenization makes the estimated warming larger. <br /></div><div><br /></div><div>After homogenization the warming estimates of the two versions are quite similar. The difference is in the raw data. Version 4 is based on the raw data of the International Surface Temperature Initiative and has many more stations. Version 3 had many stations that report automatically; these are typically professional stations and a considerable part of them are at airports. One reason the raw data may show less warming in Version 3 is that <a href="http://variable-variability.blogspot.com/2017/01/some-programing-skills-compute-global-temperatures.html">many stations at airports were in cities before</a>. Taking them out of the urban heat island, and often also improving the local siting of the station, may have produced a systematic artificial cooling in the raw observations. <br /></div><div><br /></div><div>
Version 4 has more stations and thus a higher signal to noise ratio. One might therefore expect it to show more warming. That this is not the case is a first hint that the situation is not that simple, as explained at the end of this post. <br /></div><div><br /></div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjek3QxtKw19b6Ig2KTMq5WStY_-74gvSqwPuGQ0vN0VSXEYgVfaK1r_cc8Y7Jj1-TSCl6E8RuJMnoOgnTC2aFu3irIYphEwyERLSi2guiA8c_Ljybx_aw9duHFPA_RuXHn0wPRmJouqks/" style="margin-left: 1em; margin-right: 1em;"><img alt="Figure from Menne et al. with warming estimates from 1880. See caption below." border="0" data-original-height="540" data-original-width="827" height="418" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjek3QxtKw19b6Ig2KTMq5WStY_-74gvSqwPuGQ0vN0VSXEYgVfaK1r_cc8Y7Jj1-TSCl6E8RuJMnoOgnTC2aFu3irIYphEwyERLSi2guiA8c_Ljybx_aw9duHFPA_RuXHn0wPRmJouqks/w640-h418/GHCNv3v4_warming_estimates.png" width="640" /></a></div><div></div><div><i>The global land warming estimates based on the Global Historical Climate Network dataset of NOAA. The red lines are for version 3, the black lines for the new version 4. The dashed lines are before homogenization and the solid lines after homogenization. Figure from Menne et al. (2018). <br /></i></div><div><i><br /></i></div><div>The difference due to homogenization in the global warming estimates is shown in the figure below, also from Menne et al. (2018). The study also added an estimate for the data of the Berkeley Earth initiative. <br /></div><div><br /></div><div><i>(Background information. Berkeley Earth started as a US Culture War initiative where non-climatologists computed the observed global warming. Before the results were in, climate “sceptics” claimed their methods were the best and they would accept any outcome.
The moment the results turned out to be scientifically correct, but not politically correct, <a href="https://www.nytimes.com/2011/04/04/opinion/04krugman.html?_r=1">the climate “sceptics” dropped them like a hot potato.</a>)</i> <br /></div><div></div><div><br /></div><div>We can read from the figure that in GHCNv3 over the full period homogenization increases warming estimates by about 0.3 °C per century, while this is 0.2 °C in GHCNv4 and 0.1 °C in the Berkeley Earth dataset. GHCNv3 has more than 7000 stations (Lawrimore et al., 2011). GHCNv4 is based on the ISTI dataset (Thorne et al., 2011), which has about 32,000 stations, but GHCN only uses stations with at least 10 years of data and thus contains about 26,000 stations (Menne et al. 2018). Berkeley Earth is based on 35,000 stations (Rohde et al., 2013).</div><div> <br /></div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-FBEdcDVikDqpAjf0z4pxrBuCDtkvdH-tOWDRinIhbG-Y-aFAkzWipfBs98QJ2N98fi0SOdIjZW03xsD6uH6PdvvaiqR9FGqs7l7KAD1ISYMNE-nXJtwwN-Hz3OpyMJCHyBNmauj_qGM/" style="margin-left: 1em; margin-right: 1em;"><img alt="Figure from Menne et al. (2018) showing how much adjustments were made." border="0" data-original-height="569" data-original-width="903" height="404" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-FBEdcDVikDqpAjf0z4pxrBuCDtkvdH-tOWDRinIhbG-Y-aFAkzWipfBs98QJ2N98fi0SOdIjZW03xsD6uH6PdvvaiqR9FGqs7l7KAD1ISYMNE-nXJtwwN-Hz3OpyMJCHyBNmauj_qGM/w640-h404/GHCNv4_BEST_homogenization_adjustments.png" width="640" /></a></div><div></div><div><i>The difference due to homogenization in the global warming estimates (Menne et al., 2018). The red line is for the smaller GHCNv3 dataset, the black line for GHCNv4 and the blue line for Berkeley Earth.</i> <br /></div><div><br /></div><div><h2 style="text-align: left;">What does this mean for global warming estimates?
</h2></div><div>So, what can we learn from these adjustments for global warming estimates? At the moment, I am afraid, not yet a whole lot. However, the sign is quite likely right. If we could do a perfect homogenization, I expect that this would make the warming estimates larger. But it is difficult to estimate how large the correction should have been based on the corrections that were actually made in the above datasets. <br /></div><div><br /></div><div>In the beginning, I was thinking: if the signal to noise ratio in some network is too low, we may be able to estimate that in such a case we under-correct, say, 50% and then make the adjustments unbiased by making them, say, twice as large. <br /></div><div><br /></div><div>
However, especially doing this globally is a huge leap of faith. <br /></div><div><br /></div><div>The first assumption this would make is that the trend bias in <b>data sparse regions and periods</b> is the same as that of data rich regions and periods. However, the regions with high station density are in the <a href="https://en.wikipedia.org/wiki/Middle_latitudes">mid-latitudes</a>, where atmospheric measurements are relatively easy. The data sparse periods are also the periods in which large changes in the instrumentation were made as we were still learning how to make good meteorological observations. So we cannot reliably extrapolate from data rich regions and periods to data sparse regions and periods. <br /></div><div><br /></div><div>Furthermore, there will not be one correction factor to account for under-correction because <b>the signal to noise ratio is different everywhere</b>. Maybe America is only under-corrected by 10% and needs just a little nudge to make the trend correction unbiased. However, homogenization adjustments in data sparse regions may only be able to correct such a small part of the trend bias that correcting for the under-correction becomes adventurous or will even make trend estimates more uncertain. So we would at least need to make such computations for many regions and periods. <br /></div><div><br /></div><div>Finally, another reason not to take such an estimate too seriously is the <b>spatial and temporal characteristics of the bias</b>. The signal to noise ratio is not the only problem. One would expect that it also matters how the network-wide trend bias is distributed over the network. In case of <a href="http://variable-variability.blogspot.com/2017/01/some-programing-skills-compute-global-temperatures.html">relocations of city stations to airports</a>, a small number of stations will have a large jump. Such a large jump is relatively easy to detect, especially as its neighbouring stations will mostly be unaffected.
<br /></div><div><br /></div><div>A harder case is the <a href="http://variable-variability.blogspot.com/2012/08/a-short-introduction-to-time-of.html">time of observation bias in America</a>, where a large part of the stations has experienced a cooling shift from afternoon to morning measurements over many decades. Here, in most cases the neighbouring stations were not affected around the same time, but the smaller shift makes it harder to detect these breaks. <br /></div><div><br /></div><div><i>(NOAA has a special correction for this problem, but when it is turned off statistical homogenization still finds the same network-wide trend. So for this kind of bias the network density in America is apparently sufficient.)</i> <br /></div><div><br /></div><div>Among the hardest cases are changes in the instrumentation. For example, <a href="http://variable-variability.blogspot.com/2016/01/transition-automatic-weather-stations-parallel-measurements-ISTI-POST.html">the introduction of Automatic Weather Stations</a> in the last decades or the <a href="http://variable-variability.blogspot.com/2015/02/temperature-trend-bias-radiation-errors-screen.design.html">introduction of the Stevenson screen</a> a <a href="http://variable-variability.blogspot.com/2016/02/early-global-warming-transition-Stevenson-screens.html">century ago</a>. These relatively small breaks often happen over a period of only a few decades, if not years, which means that the neighbouring stations are also affected. That makes it hard to detect them in a difference time series. <br /></div><div><br /></div><div>Studying from the data how the biases are distributed is hard. One could study this by homogenizing the data and studying the breaks, but the ones which are difficult to detect will then be under-represented. This is a tough problem; please leave suggestions in the comments.
<br /></div><div><br /></div><div>Because of how the biases are distributed it is perfectly possible that the trend biases corrected in GHCN and Berkeley Earth are due to the easy-to-correct problems, such as the relocations to airports, while the hard ones, such as the transition to Stevenson screens, are hardly corrected. In this case, the corrections that could be made do not provide information on the ones that could not be made. They have different causes and different difficulties. <br /></div><div><br /></div><div>So if we had a network where the signal to noise ratio is around one, we could not say that the under-correction is, say, 50%. One would have to specify for which kind of distribution of the bias this is valid. <br /></div><div><br /></div><div><h2 style="text-align: left;">GHCNv3, GHCNv4 and Berkeley Earth </h2></div><div>Coming back to the trend estimates of GHCN version 3 and version 4: one may have expected that version 4 is able to better correct trend biases, having more stations, and should thus show a larger trend than version 3. This would hold even more for Berkeley Earth. But the final trend estimates are quite similar. Similarly, in the most data rich period, after the second world war, the smallest corrections are made. <br /></div><div><br /></div><div>The datasets with the largest number of stations showing the strongest trend would have been a reasonable expectation if the trend estimates of the raw data had been similar. But these raw data trends are the reason for the differences in the size of the corrections, while the trend estimates based on the homogenized data are quite similar. <br /></div>
<div><br /></div><div>Many additional stations will be in regions and periods where we already had many stations and where the station density was no problem. On the other hand, adding some stations to data sparse regions may not be sufficient to fix the low signal to noise ratio. So the largest improvements would be expected for the moderate cases where the signal to noise ratio is around one. Until we have global estimates of the signal to noise ratio for these datasets, we do not know for which percentage of stations this is relevant, but this could be relatively small. <br /></div>
<div><br /></div><div>The arguments of the previous section are also applicable here; the relationship between station density and adjustments may not be that easy. Especially that the corrections in the period after the second world war are so small is suspicious; we know quite a lot happened to the measurement networks. Maybe these effects all average out, but that would be quite a coincidence. Another possibility is that these changes in observational methods were made over relatively short periods to entire networks, making it hard to correct them. <br /></div>
<div><br /></div><div>A reason for the similar outcomes for the homogenized data could be that all datasets successfully correct for trend biases due to problems like the transition to airports, while for every dataset the signal to noise ratio is not enough to correct problems like the transition to Stevenson screens. GHCNv4 and Berkeley Earth, using as many stations as they could find, could well have more stations which are currently badly sited than GHCNv3, which was more selective. In that case the smaller effective corrections of these two datasets would be due to compensating errors. <br /></div><div><br /></div><div>Finally, a small disclaimer: the main change from version 3 to 4 was the number of stations, but there were other small changes, so it is not just a comparison of two datasets where only the signal to noise ratio is different. Such a pure comparison still needs to be made. The homogenization methods of GHCN and Berkeley Earth are even more different. <br /></div><div><br /></div>
<div style="text-align: left;">My apologies for all the maybe's and could be's, but this is more complicated than it may look, and I would not be surprised if it turns out to be impossible to estimate how much correction is needed based on the corrections that are made by homogenization algorithms. The only thing I am confident about is that homogenization improves trend estimates, but I am not confident about how much it improves.</div><div style="text-align: left;"><br /></div>
<div style="text-align: left;"><h2>Parallel measurements </h2></div><div>Another way to study these biases in the warming estimates is to go into the books and study station histories in 200-plus countries. This is basically how sea surface temperature records are homogenized. To do this for land stations is a much larger project due to the large number of countries and languages. <br /></div><div><br /></div><div>Still, there are such experiments, which give a first estimate for some of the biases when it comes to the global mean temperature (do not expect regional detail). In the next post I will try to estimate the missing warming this way. We do not have much data from such experiments yet, but I expect that this will be the future. <br /></div><div><br /></div><h2>Other posts in this series</h2><div>
Part 5: <a href="https://variable-variability.blogspot.com/2020/05/statistical-homogenization-under-correction-trends.html">Statistical homogenization under-corrects any station network-wide trend biases</a> <br /></div><div><br /></div><div>Part 4: <a href="https://variable-variability.blogspot.com/2020/04/break-detection-deceptive-noise-break-signal-variance.html">Break detection is deceptive when the noise is larger than the break signal</a> <br /></div><div><br /></div><div>
Part 3: <a href="https://variable-variability.blogspot.com/2020/04/correcting-inhomogeneities-perfect-breaks.html">Correcting inhomogeneities when all breaks are perfectly known</a> <br /></div><div><br /></div><div>Part 2: <a href="https://variable-variability.blogspot.com/2020/03/trend-errors-raw-temperature-station-inhomogeneities.html">Trend errors in raw temperature station data due to inhomogeneities</a> <br /></div><div><br /></div><div>Part 1: <a href="https://variable-variability.blogspot.com/2020/02/estimating-statistical-properties-inhomogeneities-homogenization.html">Estimating the statistical properties of inhomogeneities without homogenization</a></div><div><br /></div>
<h2>References</h2><div>
Chimani, Barbara, Victor Venema, Annermarie Lexer, Konrad Andre, Ingeborg Auer and Johanna Nemec, 2018: Inter-comparison of methods to homogenize daily relative humidity. <i>International Journal Climatology</i>, <b>38</b>, pp. 3106–3122. <a href="https://doi.org/10.1002/joc.5488">https://doi.org/10.1002/joc.5488</a><br /></div><div><br /></div><div>Gubler, Stefanie, Stefan Hunziker, Michael Begert, Mischa Croci-Maspoli, Thomas Konzelmann, Stefan Brönnimann, Cornelia Schwierz, Clara Oria and Gabriela Rosas, 2017: The influence of station density on climate data homogenization. <i>International Journal of Climatology</i>, <b>37</b>, pp. 4670–4683. <a href="https://doi.org/10.1002/joc.5114">https://doi.org/10.1002/joc.5114</a> <br /></div><div><br /></div><div>Lawrimore, Jay H., Matthew J. Menne, Byron E. Gleason, Claude N. Williams, David B. Wuertz, Russel S. Vose and Jared Rennie, 2011: An overview of the Global Historical Climatology Network monthly mean temperature data set, version 3. <i>Journal of Geophysical Research</i>, <b>116</b>, D19121. <a href="https://doi.org/10.1029/2011JD016187">https://doi.org/10.1029/2011JD016187</a> <br /></div><div><br /></div><div>
Lindau, Ralf and Victor Venema, 2018: On the reduction of trend errors by the ANOVA joint correction scheme used in homogenization of climate station records. <i>International Journal of Climatology</i>, <b>38</b>, pp. 5255– 5271. Manuscript: <a href="https://eartharxiv.org/r57vf/">https://eartharxiv.org/r57vf/</a> Article: <a href="https://doi.org/10.1002/joc.5728">https://doi.org/10.1002/joc.5728</a> <br /></div><div><br /></div><div>Rohde, Robert, Richard A. Muller, Robert Jacobsen, Elizabeth Muller, Saul Perlmutter, Arthur Rosenfeld, Jonathan Wurtele, Donald Groom and Charlotte Wickham, 2013: A New Estimate of the Average Earth Surface Land Temperature Spanning 1753 to 2011. <i>Geoinformatics & Geostatistics: An Overview</i>, <b>1</b>, no.1. <a href="https://doi.org/10.4172/2327-4581.1000101">https://doi.org/10.4172/2327-4581.1000101</a> <br /></div><div><br /></div><div>Sutton, Rowan, Buwen Dong and Jonathan Gregory, 2007: Land/sea warming ratio in response to climate change: IPCC AR4 model results and comparison with observations. <i>Geophysical Research Letters</i>, <b>34</b>, L02701. <a href="https://doi.org/10.1029/2006GL028164">https://doi.org/10.1029/2006GL028164</a> <br /></div><div><br /></div><div>Thorne, Peter W., Kate M. Willett, Rob J. Allan, Stephan Bojinski, John R. Christy, Nigel Fox, Simon Gilbert, Ian Jolliffe, John J. Kennedy, Elizabeth Kent, Albert Klein Tank, Jay Lawrimore, David E. Parker, Nick Rayner, Adrian Simmons, Lianchun Song, Peter A. Stott and Blair Trewin, 2011: Guiding the creation of a comprehensive surface temperature resource for twenty-first century climate science. <i>Bulletin American Meteorological Society</i>, <b>92</b>, ES40–ES47. <a href="https://doi.org/10.1175/2011BAMS3124.1">https://doi.org/10.1175/2011BAMS3124.1</a> <br /></div><div><br /></div><div>Wallace, Craig and Manoj Joshi, 2018: Comparison of land–ocean warming ratios in updated observed records and CMIP5 climate models. 
<i>Environmental Research Letters</i>, <b>13</b>, no. 114011. <a href="https://doi.org/10.1088/1748-9326/aae46f">https://doi.org/10.1088/1748-9326/aae46f</a> <br /></div><div><br /></div><div>Williams, Claude, Matthew Menne and Peter Thorne, 2012: Benchmarking the performance of pairwise homogenization of surface temperatures in the United States. <i>Journal Geophysical Research</i>, <b>117</b>, D05116. <a href="https://doi.org/10.1029/2011JD016761">https://doi.org/10.1029/2011JD016761</a></div><div><br /></div><div><br /></div>Victor Venemahttp://www.blogger.com/profile/02842816166712285801noreply@blogger.com0tag:blogger.com,1999:blog-9093436161326155359.post-43909735899935669412020-05-01T15:15:00.000+01:002020-05-12T19:32:59.982+01:00Statistical homogenization under-corrects any station network-wide trend biases<a href="https://www.ncdc.noaa.gov/crn/"><img alt="Photo of a station of the US Climate Reference Network with a prominent wind shield for the rain gauges." data-original-height="660" data-original-width="1491" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdgwEj-pL-Tx7_U0_7QrMJ5KmL6WZ8_f9Eii2LL5TKGUzzOMj3tjreI6Xa2yncwufbZUfblXQ20s-CZB1oCpvUyNd00wAFv3KrjjsZkHJ6TVYhHU-gtaCBgBgD2h8FbE0wT3ZfP6-T3bw/s660/USCRN_station_wy_moose_0.jpg" border="0"></a><br />
<i>A station of the US Climate Reference Network.</i><br clear="all"><br />
<br />
In the last blog post I made the argument that the <a href="https://variable-variability.blogspot.com/2020/04/break-detection-deceptive-noise-break-signal-variance.html">statistical detection of breaks</a> in climate station data has problems when the noise is larger than the break signal. The post before argued that the best homogenization correction method we have can remove network-wide trend biases <a href="https://variable-variability.blogspot.com/2020/04/correcting-inhomogeneities-perfect-breaks.html">perfectly if all breaks are known</a>. In the light of the last post, we naturally would like to know how well this correction method can remove such biases in the more realistic case when the breaks are imperfectly estimated. That should still be studied much better, but it is interesting to discuss a number of other studies on the removal of network-wide trend biases from the perspective of this new understanding.<br />
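To make the detection step concrete: relative homogenization works on the difference between a candidate and a reference series, because subtracting the reference removes the regional climate signal the two stations share, leaving only the breaks and the weather-difference noise. The sketch below is my own toy illustration using a plain cumulative-sum statistic; it is not SNHT or any operational algorithm, and all numbers in it are made up for the example.

```python
import numpy as np

def detect_break(candidate, reference):
    """Locate the most likely break in a candidate series via a
    cumulative-sum statistic on the candidate-minus-reference
    difference series (a toy stand-in for tests such as SNHT)."""
    diff = candidate - reference           # shared climate signal cancels
    diff = diff - diff.mean()
    csum = np.cumsum(diff)
    # the cumulative sum is most extreme just before the break
    return int(np.argmax(np.abs(csum[:-1]))) + 1

rng = np.random.default_rng(0)
n = 100
climate = np.linspace(0.0, 1.0, n)            # regional warming signal
reference = climate + rng.normal(0.0, 0.2, n)
candidate = climate + rng.normal(0.0, 0.2, n)
candidate[60:] += 0.8                          # inserted break in year 60
print(detect_break(candidate, reference))      # close to 60
```

With a break (0.8 °C) well above the noise of the difference series the estimated position lands near the true year; shrink the break or inflate the noise and the estimated position starts to wander, which is exactly the low signal to noise problem this series is about.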
<br />
So this post will argue that it theoretically makes sense that (unavoidable) inaccuracies in break detection lead to network-wide trend biases being only partially corrected by statistical homogenization. <br />
<br />
1) We have seen this in our study of the correction method in response to small errors in the break positions (<a href="https://eartharxiv.org/r57vf/">Lindau and Venema, 2018</a>).<br />
<br />
2) The benchmarking study of NOAA’s homogenization algorithm shows that if the breaks are big and easy they are largely removed, while in the scenario where breaks are plentiful and small half of the trend bias remains (<a href="ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/williams-etal2012.pdf">Williams et al., 2012</a>).<br />
<br />
3) Another benchmarking study shows that with the network density of Switzerland homogenization can find and remove clear trend biases, while if you thin this network to be similar to Peru the bias cannot be removed (<a href="https://doi.org/10.1002/joc.5114">Gubler et al., 2017</a>).<br />
<br />
4) Finally, a benchmarking study of relative humidity station observations in Austria could not remove much of the trend bias, likely because relative humidity does not correlate well from station to station (<a href="https://doi.org/10.1002/joc.5488">Chimani et al., 2018</a>).<br />
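Points 3) and 4) have the same back-of-the-envelope explanation. If two stations have weather noise with standard deviation σ and inter-station correlation ρ, the difference series used in relative homogenization has variance 2σ²(1 − ρ). Thinning a network (Switzerland versus Peru) or switching to a poorly correlated variable (relative humidity) lowers ρ and inflates the noise the break signal must compete with. The numbers below are purely illustrative, not taken from the cited studies.

```python
import numpy as np

def diff_noise_sd(sigma, rho):
    """Sd of a candidate-minus-reference difference series:
    var(x - y) = var(x) + var(y) - 2*cov(x, y) = 2*sigma**2*(1 - rho)."""
    return sigma * np.sqrt(2.0 * (1.0 - rho))

sigma = 0.5  # illustrative yearly noise of a single station (°C)
print(diff_noise_sd(sigma, 0.95))  # dense network, nearby neighbour: ~0.16
print(diff_noise_sd(sigma, 0.50))  # sparse network: 0.5, as large as a typical break
```

So halving the correlation with the best neighbour can triple the noise level a break of fixed size has to stand out against.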
<br />
Statistical homogenization on a global scale makes warming estimates larger (Lawrimore et al., 2011; Menne et al., 2018). Thus if it can only remove part of any trend bias, this would mean that quite likely the actual warming was larger.<br />
<br />
<div style="float: right; margin-left:20px; margin-bottom:10px;width:347px;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5KQlUF50s30qRdEKnm-amxQ3I6UDK0vPf39r4Rj2-psfvsdGskbbKn0ot13o4HW-1T_6zcYPgDnXBQW5FDRXcTh4JNGXdOfYRSBNafeL7BzPLDHOnr4U_dPzemfeLquT5vUdGLgQpHw0/s1600/undercorrection_ANOVA.2.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5KQlUF50s30qRdEKnm-amxQ3I6UDK0vPf39r4Rj2-psfvsdGskbbKn0ot13o4HW-1T_6zcYPgDnXBQW5FDRXcTh4JNGXdOfYRSBNafeL7BzPLDHOnr4U_dPzemfeLquT5vUdGLgQpHw0/s1600/undercorrection_ANOVA.2.png" data-original-width="695" data-original-height="1350" width="347" /></a><br clear="all"><i>Figure 1: The inserted versus remaining network-mean trend error. Upper panel for perfect breaks. Lower panel for a small perturbation of the break position. The time series are 100 annual values and have 5 breaks. Figure 10 in Lindau and Venema (2018).</i></div><h2>Joint correction method</h2>First, what did our study on the correction method (Lindau and Venema, 2018) say about the importance of errors in the break position? As the paper was mostly about perfect breaks, we assumed that all breaks were known, but that they had a small error in their position. In the example to the right, we perturbed the break position by a normally distributed random number with standard deviation one (lower panel), while for comparison the breaks are perfect (upper panel).<br />
<br />
In both cases we inserted a large network-wide trend bias of 0.873 °C over the length of the century-long time series. The inserted errors for 1000 simulations are on the x-axis; the average inserted trend bias is denoted by x̅. The remaining error after homogenization is on the y-axis. Its average, denoted by y̅, is basically zero when the breaks are perfect (top panel). With the small perturbation (lower panel) the average remaining error is 0.093 °C, which is 11 % of the inserted trend bias. That is considerable under-correction for quite a small perturbation: 38 % of the positions are not changed at all. <br />
<br />
If the standard deviation of the position perturbation is increased to 2, the remaining trend bias is 21 % of the inserted bias. <br />
<br />
In the upper panel, there is basically no correlation between the inserted and the remaining error. That is, the remaining error does not depend on the break signal, but only on the noise. In the lower panel with the position errors, there is a correlation between the inserted and remaining trend error. So in this more realistic case, it does matter how large the trend bias due to the inhomogeneities is.<br />
<br />
This is naturally an idealized case, position errors will be more complicated in reality and there would be spurious and missing breaks. But this idealized case fitted best to the aim of the paper of studying the correction algorithm in isolation. <br />
<br />
It helps to understand where the problem lies. The correction algorithm is basically a regression that aims to explain the inserted break signal (and the regional climate signal). Errors in the predictors will lead to an explained variance that is less than 100 %. One should therefore expect that the estimated break signal is smaller than the actual break signal, and that the trend change produced by the estimated break signal is smaller than the actual trend change due to the inhomogeneities.<br />
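For readers who like code, a toy version of this experiment can be sketched in a few lines of Python. This is a single-series simplification, not the joint network-wide correction of the paper; the five breaks and the rounded N(0, 1) perturbation of the positions follow the setup above, while the break-size distribution (mean 0.17 °C, so the step function carries a net trend bias) is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_breaks, n_sims = 100, 5, 2000
t = np.arange(n_years)

def full_trend(y):
    # linear trend expressed as the change over the whole series
    return np.polyfit(t, y, 1)[0] * (n_years - 1)

def step_fit(y, positions):
    # best-fitting step function for *given* break positions: segment means
    fit = np.empty_like(y)
    for seg in np.split(t, positions):
        fit[seg] = y[seg].mean()
    return fit

inserted, remaining = [], []
for _ in range(n_sims):
    pos = np.sort(rng.choice(np.arange(5, n_years - 4), n_breaks, replace=False))
    sizes = rng.normal(0.17, 0.5, n_breaks)   # positive mean jump -> trend bias
    signal = np.zeros(n_years)
    for p, s in zip(pos, sizes):
        signal[p:] += s

    # perturb the known break positions by a rounded N(0, 1) offset
    perturbed = np.unique(np.clip(
        pos + np.rint(rng.normal(0, 1, n_breaks)).astype(int), 1, n_years - 1))

    inserted.append(full_trend(signal))
    remaining.append(full_trend(signal - step_fit(signal, perturbed)))

print(np.mean(remaining) / np.mean(inserted))  # fraction of the trend bias left
```

With the exact positions the residual is exactly zero, because the signal is constant within every segment; with the slightly perturbed positions a fraction of the inserted trend bias survives the correction, which is the under-correction discussed above.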
<br />
<h2>NOAA’s benchmark</h2>That statistical homogenization under-corrects when the going gets tough is also found by the benchmarking study of NOAA’s Pairwise Homogenization Algorithm in <a href="ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/williams-etal2012.pdf">Williams et al. (2012)</a>. They simulated temperature networks like the American USHCN network and added inhomogeneities according to a range of scenarios. (Also with various climate change signals.) Some scenarios were relatively easy, had few and large breaks, while others were hard and contained many small breaks. The easy cases were corrected nearly perfectly with respect to the network-wide trend, while in the hard cases only half of the inserted network-wide trend error was removed. <br />
<br />
The results of this benchmarking for the three scenarios with a network-wide trend bias are shown below. The three panels are for the three scenarios. Each panel has results (the crosses, ignore the box plots) for three periods over which the trend error was computed. The main message is that the homogenized data (orange crosses) lies between the inhomogeneous data (red crosses) and the homogeneous data (green crosses). Put differently, green is how much the climate actually changed, red is how much the estimate is wrong due to inhomogeneities, orange shows that homogenization moves the estimate towards the truth, but never fully gets there.<br />
<br />
If we use the number of breaks and their average size as a proxy for the difficulty of the scenario, the one on the left has 6.4 breaks with an average size of 0.8 °C, the one in the middle 8.4 breaks (size 0.4 °C) and the one on the right 10 breaks (size 0.4 °C). This suggests a clear dose-effect relationship, although the difficulty surely depends on more than the number of breaks alone.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMLm8sMPmWXoUFva_mbW1ub2AsKeYS3DuCvW2AQ21PjwdTJfK7agKUjuxvZywo0BIoHhQWC7QEcL9sapBaZi2-i10fEs7TWWAEJ3qCx3c3ewPhLTCxyp8vASuoHRkiOQmwj4GwFpCc8vE/s1600/USHCN_combined2.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMLm8sMPmWXoUFva_mbW1ub2AsKeYS3DuCvW2AQ21PjwdTJfK7agKUjuxvZywo0BIoHhQWC7QEcL9sapBaZi2-i10fEs7TWWAEJ3qCx3c3ewPhLTCxyp8vASuoHRkiOQmwj4GwFpCc8vE/s1600/USHCN_combined2.png" data-original-width="1000" data-original-height="599" width="500" /></a><br />
<i>Figures from <a href="ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/williams-etal2012.pdf">Williams et al. (2012)</a> showing the results for three scenarios. This is a figure I created from parts of Figure 7 (left), Figure 5 (middle) and Figure 10 (right; their numbers). </i><br />
<br />
When this study appeared in 2012, I found the scenario with the many small breaks much too pessimistic. However, our recent <a href="https://variable-variability.blogspot.com/2020/02/estimating-statistical-properties-inhomogeneities-homogenization.html">study estimating the properties of the inhomogeneities of the American network</a> found a surprisingly large number of breaks, more than 17 per century, and they were also bigger: 0.5 °C on average. So purely based on the number of breaks even the hardest scenario is optimistic, although size matters as well. <br />
<br />
I would not yet claim that even in a dense network like the American one there is a large remaining trend bias and that the actual warming was much larger. There is more to the difficulty of inhomogeneities than their number and size. But it is certainly worth studying. <br />
<br />
<h2>Alpine benchmarks</h2>The other two examples from the literature that I know of show under-correction in the sense of basically no correction, because the problem is simply too hard. <a href="https://rmets.onlinelibrary.wiley.com/doi/full/10.1002/joc.5114">Gubler et al. (2017)</a> shows that the raw data of the Swiss temperature network has a clear trend bias, which can be corrected with homogenization of its dense network (together with metadata), but when they thin the network to a density similar to that of Peru, they are unable to correct this trend bias. For more details see <a href="https://homogenisation.grassroots.is/assessments/the-influence-of-station-density-on-climate-data-homogenization/">my review of this article in the Grassroots Review Journal on Homogenization</a>. <br />
<br />
Finally, <a href="https://doi.org/10.1002/joc.5488">Chimani et al. (2018)</a> study the homogenization of daily relative humidity observations in Austria. I made a beautiful daily benchmark dataset, which was a lot of fun: on a daily scale you have autocorrelations and a distribution with upper and lower limits, which both the homogeneous and the inhomogeneous data need to respect. But already the normal homogenization of the monthly averages was much too hard. <br />
<br />
Austria has quite a dense network, but relative humidity is much influenced by very local circumstances and does not correlate well from station to station. My co-authors from the Austrian weather service wanted to write about the improvements: "an improvement of the data by homogenization was non‐ideal for all methods used". For me the interesting finding was: nearly no improvement was possible. That was unexpected. Had we expected that, we could have generated a much simpler monthly or annual benchmark to show that no real improvement was possible for humidity data, and saved ourselves a lot of (fun) work.<br />
<br />
<h2>What does this mean for global warming estimates?</h2>When statistical homogenization only partially removes large-scale trend biases, what does this mean for global warming estimates? In the global temperature datasets statistical homogenization leads to larger warming estimates. So if we tend to underestimate how much correction is needed, this would mean that the Earth most likely warmed more than current estimates indicate. How much exactly is hard to tell at the moment and thus needs a nuanced discussion. Let me give you my considerations in the next post.<br />
<br />
<br />
<h2>Other posts in this series</h2>Part 5: <i>Statistical homogenization under-corrects any station network-wide trend biases</i><br />
<br />
Part 4: <a href="https://variable-variability.blogspot.com/2020/04/break-detection-deceptive-noise-break-signal-variance.html">Break detection is deceptive when the noise is larger than the break signal</a><br />
<br />
Part 3: <a href="https://variable-variability.blogspot.com/2020/04/correcting-inhomogeneities-perfect-breaks.html">Correcting inhomogeneities when all breaks are perfectly known</a><br />
<br />
Part 2: <a href="https://variable-variability.blogspot.com/2020/03/trend-errors-raw-temperature-station-inhomogeneities.html">Trend errors in raw temperature station data due to inhomogeneities</a><br />
<br />
Part 1: <a href="https://variable-variability.blogspot.com/2020/02/estimating-statistical-properties-inhomogeneities-homogenization.html">Estimating the statistical properties of inhomogeneities without homogenization</a><br />
<br />
<h2>References</h2>Chimani Barbara, Victor Venema, Annermarie Lexer, Konrad Andre, Ingeborg Auer and Johanna Nemec, 2018: Inter-comparison of methods to homogenize daily relative humidity. <i>International Journal Climatology</i>, <b>38</b>, pp. 3106–3122. <a href="https://doi.org/10.1002/joc.5488">https://doi.org/10.1002/joc.5488</a>. <br />
<br />
Gubler, Stefanie, Stefan Hunziker, Michael Begert, Mischa Croci-Maspoli, Thomas Konzelmann, Stefan Brönnimann, Cornelia Schwierz, Clara Oria and Gabriela Rosas, 2017: The influence of station density on climate data homogenization. <i>International Journal of Climatology</i>, <b>37</b>, pp. 4670–4683. <a href="https://doi.org/10.1002/joc.5114">https://doi.org/10.1002/joc.5114</a> <br />
<br />
Lawrimore, Jay H., Matthew J. Menne, Byron E. Gleason, Claude N. Williams, David B. Wuertz, Russell S. Vose and Jared Rennie, 2011: An overview of the Global Historical Climatology Network monthly mean temperature data set, version 3. <i>Journal Geophysical Research</i>, <b>116</b>, D19121. <a href="https://doi.org/10.1029/2011JD016187">https://doi.org/10.1029/2011JD016187</a> <br />
<br />
Lindau, Ralf and Victor Venema, 2018: On the reduction of trend errors by the ANOVA joint correction scheme used in homogenization of climate station records. <i>International Journal of Climatology</i>, <b>38</b>, pp. 5255– 5271. Manuscript: <a href="https://eartharxiv.org/r57vf/">https://eartharxiv.org/r57vf/</a>, paywalled article: <a href="https://doi.org/10.1002/joc.5728">https://doi.org/10.1002/joc.5728</a><br />
<br />
Menne, Matthew J., Claude N. Williams, Byron E. Gleason, Jared J. Rennie and Jay H. Lawrimore, 2018: The Global Historical Climatology Network Monthly Temperature Dataset, Version 4. <i>Journal of Climate</i>, <b>31</b>, 9835–9854. <br />
<a href="https://doi.org/10.1175/JCLI-D-18-0094.1">https://doi.org/10.1175/JCLI-D-18-0094.1</a><br />
<br />
Williams, Claude, Matthew Menne and Peter Thorne, 2012: Benchmarking the performance of pairwise homogenization of surface temperatures in the United States. <i>Journal Geophysical Research</i>, <b>117</b>, D05116. <a href="https://doi.org/10.1029/2011JD016761">https://doi.org/10.1029/2011JD016761</a> <br />
<br />
Victor Venemahttp://www.blogger.com/profile/02842816166712285801noreply@blogger.com0tag:blogger.com,1999:blog-9093436161326155359.post-36161070089587034942020-04-27T17:49:00.000+01:002020-05-03T02:57:59.832+01:00Break detection is deceptive when the noise is larger than the break signalI am disappointed in science. It should not have taken this long for us to discover that break detection has serious problems when the signal to noise ratio is low. However, as far as we can judge this was new science, and it certainly was not common knowledge, even though it should have been, given its large consequences.<br />
<br />
This post describes a paper by Ralf Lindau and me about how break detection depends on the signal to noise ratio (Lindau and Venema, 2018). The signal in this case are the breaks we would like to detect. These breaks could be from a change in instrument or location of the station. We detect breaks by comparing a candidate station to a reference. This reference can be one other neighbouring station or an average of neighbouring stations. The candidate and reference should be sufficiently close so that they have the same regional climate signal, which is then removed by subtracting the reference from the candidate. The difference time series that is left contains breaks and noise because of measurement uncertainties and differences in local weather. The noise thus depends on the quality of the measurements, on the density of the measurement network and on how variable the weather is spatially.<br />
<br />
The signal to noise ratio (SNR) is simply defined as the standard deviation of the time series containing only the breaks divided by the standard deviation of time series containing only the noise. For short I will denote these as the break signal and the noise signal, which have a break variance and a noise variance. When generating data to test homogenization algorithms, you know exactly how strong the break signal and the noise signal is. In case of real data, you can estimate it, for example with the methods I described in <a href="https://variable-variability.blogspot.com/2020/02/estimating-statistical-properties-inhomogeneities-homogenization.html ">a previous blog post</a>. In that study, we found a signal to noise ratio for annual temperature averages observed in Germany of 3 to 4 and in America of about 5.<br />
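In code, the definition is straightforward. Here is a minimal sketch with synthetic numbers; the five breaks of size N(0, 1) and the unit-variance noise are assumptions for illustration, not estimates for any real network:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 100

# Break signal: a step function with five breaks of size ~N(0, 1)
positions = np.sort(rng.choice(np.arange(1, n_years), 5, replace=False))
break_signal = np.zeros(n_years)
for p, size in zip(positions, rng.normal(0, 1, 5)):
    break_signal[p:] += size

# Noise signal: measurement uncertainties and differences in local weather
noise_signal = rng.normal(0, 1, n_years)

# The difference series (candidate minus reference) contains both
difference_series = break_signal + noise_signal

snr = break_signal.std() / noise_signal.std()
```

In test data the two components are known, as here; for real observations they have to be estimated, for example with the methods linked above.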
<br />
Temperature is studied a lot and much of the work on homogenization takes place in Europe and America. Here this signal to noise ratio is high enough. That may be one reason why climatologists did not find this problem sooner. Many other sciences use similar methods, we are all supported by a considerable statistical literature. I have no idea what their excuses are.<br />
<br />
<a href="https://commons.wikimedia.org/wiki/File:Royal_Air_Force-_Italy,_the_Balkans_and_South-east_Europe,_1942-1945._CNA1969.jpg" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjg-btvQrsz_fteSTNpiVTDFK0ngRgro4fSeUGNdmpwoBvuDQ0up_60xpQ-tj-MGMR_4p5Z_PT8mP9v0La8XdRTbjTA7f0srsjwLTA7td_ggLZaKu6aW_4QLrj6oe2VYvHaUryoZyU4FUI/s500/Royal_Air_Force-_Italy%252C_the_Balkans_and_South-east_Europe%252C_1942-1945._CNA1969.jpg" data-original-width="800" data-original-height="548" width="500"/></a><br />
<br />
<h2>Why a low SNR is a problem</h2>As scientific papers go, the discussion is quite mathematical, but the basic problem is relatively easy to explain in words. In statistical homogenization we do not know in advance where the break or breaks will be. So we basically try many break positions and search for the break positions that result in the largest breaks (or, for the algorithm we studied, that explain the most variance). <br />
<br />
If you do this for a time series that contains only noise, this will also produce (small) breaks. For example, in case you are looking for one break, due to pure chance there will be a difference between the averages of the first and the last segment. This difference is larger than it would be for a predetermined break position, as we try all possible break positions and then select the one with the largest difference. To determine whether the breaks we found are real, we require that they are so large that it is unlikely that they are due to chance, while there are actually no breaks in the series. So we study how large breaks are in series that only contains noise to determine how large such random breaks are. Statisticians would talk about the breaks being statistically significant with white noise as the null hypothesis.<br />
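This selection effect is easy to demonstrate numerically. The sketch below uses pure white noise with no break at all (excluding the outermost candidate positions is an assumption to keep the segment means stable) and compares the difference of means at a fixed, predetermined position with the same statistic maximized over all candidate positions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_years, n_sims = 100, 2000

fixed, searched = [], []
for _ in range(n_sims):
    y = rng.normal(0, 1, n_years)  # pure noise, no break inserted
    # difference of segment means at a predetermined position (the middle)
    fixed.append(abs(y[:50].mean() - y[50:].mean()))
    # the same statistic, but maximized over all candidate break positions
    searched.append(max(abs(y[:k].mean() - y[k:].mean())
                        for k in range(10, n_years - 10)))

print(np.mean(fixed), np.mean(searched))  # the searched "break" is much larger
```

The maximized statistic is considerably larger on average, which is exactly why the significance threshold has to be computed from such pure-noise series.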
<br />
When the breaks are really large compared to the noise, one can see by eye where the breaks are, and this method is a convenient way to make that determination automatically for many stations. When the breaks are “just” large, it is a great method to objectively determine the number of breaks and the optimal break positions. <br />
<br />
The problem comes when the noise is larger than the break signal. Not that it is fundamentally impossible to detect such breaks. If you have a 100-year time series with a break in the middle, you would be averaging over 50 noise values on either side and the difference in their averages would be much smaller than the noise itself. Even if noise and signal are about the same size the noise effect is thus expected to be smaller than the size of such a break. To put it in another way, the noise is not correlated in time, while the break signal is the same for many years; that fundamental difference is what the break detection exploits.<br />
<br />
However, and this brings us to the fundamental problem, it becomes hard to determine the positions of the breaks. Imagine the theoretical case where the break positions are fully determined by the noise, not by the breaks. From the perspective of the break signal, these break positions are random. The problem is that random breaks also explain a part of the break signal. So one would have a combination with a maximum contribution of the noise plus a part of the break signal. Because of this additional contribution by the break signal, this combination may have larger breaks than expected in a pure noise signal. In other words, the result can be statistically significant, while we have no idea where the positions of the breaks are.<br />
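That even random break positions explain part of the break signal can be checked directly. A sketch, using noise-free step functions with five breaks of size N(0, 1) as a purely illustrative setup:

```python
import numpy as np

rng = np.random.default_rng(7)
n_years, n_breaks, n_sims = 100, 5, 2000
t = np.arange(n_years)

explained = []
for _ in range(n_sims):
    # true break signal: a step function with five breaks of size ~N(0, 1)
    pos = np.sort(rng.choice(np.arange(1, n_years), n_breaks, replace=False))
    signal = np.zeros(n_years)
    for p, s in zip(pos, rng.normal(0, 1, n_breaks)):
        signal[p:] += s

    # fit a step function at five *random* positions, unrelated to the truth
    rand_pos = np.sort(rng.choice(np.arange(1, n_years), n_breaks, replace=False))
    fit = np.empty(n_years)
    for seg in np.split(t, rand_pos):
        fit[seg] = signal[seg].mean()

    explained.append(1 - np.var(signal - fit) / np.var(signal))

print(np.mean(explained))  # a sizable fraction, despite the random positions
```

A step function fitted at random positions still tracks the slowly varying level of the true break signal, which is why it explains a substantial part of the break variance.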
<br />
In a real case the breaks look even more statistically significant because the positions of the breaks are determined by both the noise and the break signal. <br />
<br />
That is the fundamental problem, the test for the homogeneity of the series rightly detects that the series contains inhomogeneities, but if the signal to noise ratio is low we should not jump to conclusions and expect that the set of break positions that gives us the largest breaks has much to do with the break positions in the data. Only if the signal to noise ratio is high, this relationship is close enough.<br />
<br />
<h2>Some numbers</h2>This is a general problem, which I expect all statistical homogenization algorithms to have, but to put some numbers on this, we need to specify an algorithm. We have chosen to study the multiple breakpoint method that is implemented in PRODIGE (Caussinus and Mestre, 2004), HOMER (Mestre et al., 2013) and ACMANT (Domonkos and Coll, 2017), these are among the best, if not the best, methods we currently have. We applied it by comparing pairs of stations, like PRODIGE and HOMER do.<br />
<br />
For a certain number of breaks this method effectively computes the combination of breaks that has the highest break variance. If you add more breaks, you will increase the break variance those breaks explain, even if it were purely due to noise, so there is additionally a penalty function that depends on the number of breaks. The algorithm selects that option where the break variance minus such a penalty is highest. A statistician would call this a model selection problem and the job of the penalty is to keep the statistical model (the step function explaining the breaks) reasonably simple. <br />
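As a rough illustration of such penalized model selection, here is a toy greedy binary-segmentation scheme with a simple per-break penalty. This is a stand-in of my own, not the multiple-breakpoint algorithm of PRODIGE nor its actual penalty criterion; the BIC-like penalty value in the usage line is likewise just an assumption:

```python
import numpy as np

def best_single_split(y):
    # split position minimizing the residual sum of squares
    best = (np.inf, None)
    for k in range(2, len(y) - 1):
        rss = (((y[:k] - y[:k].mean()) ** 2).sum()
               + ((y[k:] - y[k:].mean()) ** 2).sum())
        best = min(best, (rss, k))
    return best

def segment(y, penalty):
    # greedy segmentation: keep adding the break that reduces the residual
    # sum of squares most, as long as the gain exceeds the penalty
    breaks = []
    while True:
        gains = []
        for seg in np.split(np.arange(len(y)), sorted(breaks)):
            if len(seg) < 4:
                continue
            rss0 = ((y[seg] - y[seg].mean()) ** 2).sum()
            rss1, k = best_single_split(y[seg])
            gains.append((rss0 - rss1, seg[0] + k))
        if not gains:
            break
        gain, pos = max(gains)
        if gain < penalty:
            break
        breaks.append(pos)
    return sorted(breaks)

rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0.0, 1.0, 50),
                         rng.normal(3.0, 1.0, 50)])  # one clear break at year 50
print(segment(series, penalty=2 * np.log(len(series))))
```

The penalty keeps the step function simple: without it, every extra break would "explain" a bit more noise variance, which is the model-selection problem described above.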
<br />
In the end, if the signal to noise ratio is one half, the break positions that explain the most break variance are just as “good” at explaining the actual break signal in the data as breaks at random positions.<br />
<br />
With this detection method, we derived the plot below; let me talk you through it. On the x-axis is the SNR; at the right edge the break signal is twice as strong as the noise signal. On the y-axis is how well the step function belonging to the detected breaks fits the step function of the breaks we actually inserted. The lower curve, with the plus symbols, is for the detection algorithm as I described it above. You can see that for a high SNR it finds a solution that closely matches what we put in and the difference is almost zero. The upper curve, with the ellipse symbols, is for the solution you find if you put in random breaks. You can see that for a high SNR the random breaks have a difference of 0.5. As the variance of the break signal is one, this means that half the variance of the break signal is explained by random breaks.<br />
<br />
<a href="https://www.adv-stat-clim-meteorol-oceanogr.net/4/1/2018/" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhY1KkBgwixTr2bBubXR1KNorSqlEJVDyFgOhM458HZ-kkXgYIGhJOsTaRUP_FTKMrb5MlSZWaKDFzipIjpP7kmMFnW5Ls_4zxwaPvD95kBJbEB0XiCYg52wL_zqy5UnmcLmByVX8tzmpc/s630/SNR_mean_squared_deviations.png" data-original-width="840" data-original-height="788" width="630"/></a><br />
<i>Figure 13b from Lindau and Venema (2018).</i><br />
<br />
When the SNR is about 0.5, the random breaks are about as good as the breaks proposed by the algorithm described above.<br />
<br />
One may be tempted to think that if the data is too noisy, the detection algorithm should detect fewer breaks, that is, the penalty function should be bigger. However, the problem is not detecting whether there are breaks in the data, but where the breaks are. A larger penalty thus does not solve the problem and even makes the results slightly worse. Not in the paper, but later I wondered whether setting more breaks is such a bad thing, so we also tried lowering the threshold; this again made the results worse.<br />
<br />
<h2>So what?</h2>The next question is naturally: is this bad? One reason to investigate correction methods in more detail, <a href="https://variable-variability.blogspot.com/2020/04/correcting-inhomogeneities-perfect-breaks.html">as described in my last blog post</a>, was the hope that maybe accurate break positions are not that important. It could have been that the correction method still produces good results even with random break positions. This is unfortunately not the case: already quite small errors in the break positions deteriorate the outcome considerably. That will be the topic of the next post.<br />
<br />
Not homogenizing the data is also not a solution. As I described in a previous blog post, the breaks in Germany are small and infrequent, but they still have a considerable influence on the trends of stations. The figure below shows the trend differences between many pairs of nearby stations in Germany; these differences are mostly due to inhomogeneities. The standard deviation of 0.628 °C per century for the pairs translates to an average trend error for individual stations of 0.4 °C per century.<br />
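The translation from pair differences to single-station errors presumably divides by √2, assuming the trend errors of the two stations in a pair are independent. A quick check:

```python
import numpy as np

rng = np.random.default_rng(5)

# If each station trend has an independent error of standard deviation sigma,
# the difference of two such trends has standard deviation sigma * sqrt(2).
sigma_station = 0.628 / np.sqrt(2)   # °C per century
print(round(sigma_station, 2))       # -> 0.44, quoted above as roughly 0.4

# verify the sqrt(2) relation by simulation
errors = rng.normal(0.0, sigma_station, size=(100_000, 2))
pair_diffs = errors[:, 0] - errors[:, 1]
print(round(pair_diffs.std(), 3))
```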
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwftLtFmcD548O8xdIEoengB_4av8_9euQsKbWK9RJB0lgcQgUmWPDQ8m-TRaNE40wzkzbH9p3Kg-S-rDbDiCnJUv-fZ2g916Ob36WQ2HRbKBxIFQ5GVM57r20j3xWjoHmWNmSI1AhNHI/s1600/SNR_Figure2_station_trend_errors.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwftLtFmcD548O8xdIEoengB_4av8_9euQsKbWK9RJB0lgcQgUmWPDQ8m-TRaNE40wzkzbH9p3Kg-S-rDbDiCnJUv-fZ2g916Ob36WQ2HRbKBxIFQ5GVM57r20j3xWjoHmWNmSI1AhNHI/s700/SNR_Figure2_station_trend_errors.png" data-original-width="872" data-original-height="838" width="700"/></a><br />
<i>The trend differences (y-axis) of pairs of stations (x-axis) in the German temperature network. The trends were computed from 316 nearby pairs over 1950 to 2000. Figure 2 from Lindau and Venema (2018).</i><br />
<br />
This finding makes it more important to work on methods to estimate the signal to noise ratio of a dataset before we try to homogenize it. This is easier said than done. The method introduced in Lindau and Venema (2018) gives results for every pair of stations, but needs some human checks to ensure the fits are good. Furthermore, it assumes the break levels behave like noise, while in Lindau and Venema (2019) we found that <a href="https://variable-variability.blogspot.com/2020/02/estimating-statistical-properties-inhomogeneities-homogenization.html">the break signal in the USA behaves like a random walk</a>. This 2019 method needs a lot of data: even the results for Germany are already quite noisy, and if you apply it to data-sparse regions you have to select entire continents. Doing so, however, biases the results towards the subregions where there are many stations and would thus give too high SNR estimates. So computing the SNR worldwide is not just a matter of a blog post, but requires a careful study and likely the development of a new method to estimate the break and noise variance. <br />
<br />
Both methods compute the SNR for one difference time series, but in a real case multiple difference time series are used. We will need to study how to do this in an elegant way. How many difference series are used depends on the homogenization method, which would make the SNR estimate method-dependent. It would also be useful to have an estimation method that is more universal and can be used to compare networks with each other.<br />
<br />
This estimation method should then be applied to global datasets and for various periods to study which regions and periods have a problem. Temperature (as well as pressure) is a variable that is well correlated from station to station. Much more problematic variables, which should thus be studied as well, are precipitation, wind and humidity. In the case of precipitation, there tend to be more stations, which will compensate somewhat, but for the other variables there may even be fewer stations.<br />
<br />
We have some ideas how to overcome this problem, from ways to increase the SNR to completely different ways to estimate the influence of inhomogeneities on the data. But they are too preliminary to already blog about. Do subscribe to the blog with any of the options below the tag cloud near the end of the page. ;-)<br />
<br />
When we digitize climate data that is currently only available on paper, we tend to prioritize data from regions and periods where we do not have much information yet. However, if after that digitization the SNR would still be low, it may be more worthwhile to digitize data from regions/periods where we already have more data and get that region/period to a SNR above one.<br />
<br />
The next post will be about how this low SNR problem changes our estimates of how much the Earth has been warming. Spoiler: the climate “sceptics” will not like that post.<br />
<br />
<br />
<h2>Other posts in this series</h2>Part 5: <a href="https://variable-variability.blogspot.com/2020/05/statistical-homogenization-under-correction-trends.html">Statistical homogenization under-corrects any station network-wide trend biases</a><br />
<br />
Part 4: <i>Break detection is deceptive when the noise is larger than the break signal</i><br />
<br />
Part 3: <a href="https://variable-variability.blogspot.com/2020/04/correcting-inhomogeneities-perfect-breaks.html">Correcting inhomogeneities when all breaks are perfectly known</a><br />
<br />
Part 2: <a href="https://variable-variability.blogspot.com/2020/03/trend-errors-raw-temperature-station-inhomogeneities.html">Trend errors in raw temperature station data due to inhomogeneities</a><br />
<br />
Part 1: <a href="https://variable-variability.blogspot.com/2020/02/estimating-statistical-properties-inhomogeneities-homogenization.html">Estimating the statistical properties of inhomogeneities without homogenization</a><br />
<br />
<h2>References</h2>Caussinus, Henri and Olivier Mestre, 2004: Detection and correction of artificial shifts in climate series. <i>The Journal of the Royal Statistical Society, Series C (Applied Statistics)</i>, <b>53</b>, pp. 405-425. <a href="https://doi.org/10.1111/j.1467-9876.2004.05155.x">https://doi.org/10.1111/j.1467-9876.2004.05155.x</a> <br />
<br />
Domonkos, Peter and John Coll, 2017: Homogenisation of temperature and precipitation time series with ACMANT3: method description and efficiency tests. <i>International Journal of Climatology</i>, <b>37</b>, pp. 1910-1921. <a href="https://doi.org/10.1002/joc.4822">https://doi.org/10.1002/joc.4822</a> <br />
<br />
Lindau, Ralf and Victor Venema, 2018: The joint influence of break and noise variance on the break detection capability in time series homogenization. <i>Advances in Statistical Climatology, Meteorology and Oceanography</i>, <b>4</b>, p. 1–18. <a href="https://doi.org/10.5194/ascmo-4-1-2018">https://doi.org/10.5194/ascmo-4-1-2018</a><br />
<br />
Lindau, R, Venema, V., 2019: A new method to study inhomogeneities in climate records: Brownian motion or random deviations? <i>International Journal Climatology</i>, <b>39</b>: p. 4769– 4783. Manuscript: <a href="https://eartharxiv.org/vjnbd/">https://eartharxiv.org/vjnbd/</a> Article: <a href="https://doi.org/10.1002/joc.6105">https://doi.org/10.1002/joc.6105</a><br />
<br />
Mestre, Olivier, Peter Domonkos, Franck Picard, Ingeborg Auer, Stephane Robin, Émilie Lebarbier, Reinhard Boehm, Enric Aguilar, Jose Guijarro, Gregor Vertachnik, Matija Klancar, Brigitte Dubuisson, Petr Stepanek, 2013: HOMER: a homogenization software - methods and applications. <i>IDOJARAS, Quarterly Journal of the Hungarian Meteorological Society</i>, <b>117</b>, no. 1, pp. 47–67.<br />
<br />
Victor Venemahttp://www.blogger.com/profile/02842816166712285801noreply@blogger.com0tag:blogger.com,1999:blog-9093436161326155359.post-1232400880003534122020-04-23T22:43:00.000+01:002020-05-03T02:20:25.075+01:00Correcting inhomogeneities when all breaks are perfectly known<div style="float: right; margin-bottom: 10px; margin-left: 20px;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhvZCreaKJnEgXLjnEfEh5oSkcn5xzEacny2MVlkgrWguiyGNB6vn4KeORmkBQRKAGHfNa3YKPpkjuBNthmi-uDlxLIZBSN4PFQ1QX3kAisjSPi6mFfXi3w493Ldk9jHVROJLGUV-nvxQ/s1600/open_shelter_uccle_belgium_p1020419.jpg" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhvZCreaKJnEgXLjnEfEh5oSkcn5xzEacny2MVlkgrWguiyGNB6vn4KeORmkBQRKAGHfNa3YKPpkjuBNthmi-uDlxLIZBSN4PFQ1QX3kAisjSPi6mFfXi3w493Ldk9jHVROJLGUV-nvxQ/s400/open_shelter_uccle_belgium_p1020419.jpg" data-original-width="516" data-original-height="387" width="400" /></a></div>Much of the scientific literature on the statistical homogenization of climate data is about the detection of breaks, especially the literature before 2012. Much of the more recent literature studies complete homogenization algorithms. That leaves a gap for the study of correction methods. <br />
<br />
<b>Spoiler:</b> if we know all the breaks perfectly, the correction method removes trend biases from a climate network perfectly. I found the most surprising outcome that in this case the size of the breaks is irrelevant for how well the correction method works, what matters is the noise.<br />
<br />
This post is about a study filling this gap by Ralf Lindau and me. The post assumes you are familiar with statistical homogenization, if not <a href="https://variable-variability.blogspot.com/p/homogenization.html">you can find more information here</a>. For correction you naturally need information on the breaks. To study correction in isolation as much as possible, we have assumed that all breaks are known. That is naturally quite theoretical, but it makes it possible to study the correction method in detail.<br />
<br />
The correction method we have studied is a so-called joint correction method, that means that the corrections for all stations in a network are computed in one go. The somewhat unfortunate name ANOVA is typically used for this correction method. The equations are the same as those of the ANOVA test, but the application is quite different, so I find this name confusing.<br />
<br />
This correction method makes three assumptions. 1) That all stations have the same regional climate signal. 2) That every station has its own break signal, which is a step function with the positions of the steps given by the known breaks. 3) That every station also has its own measurement and weather noise. The algorithm computes the values of the regional climate signal and the levels of the step functions by minimizing this noise. So in principle the method is a simple least squares regression, but with many more coefficients than when you use it to compute a linear trend. <br />
<br />
<h2>Three steps</h2>In this study we compute the errors after correction in three ways, one after another. To illustrate this, let's start simple and simulate 1000 networks of 10 stations with 100 years/values each. In the first examples below these stations have exactly five breaks, whose sizes are drawn from a normal distribution with variance one. The noise, simulating measurement uncertainties and differences in local weather, is white noise, also with a variance of one. This is quite noisy for annual temperature averages in Europe, but such noise levels do occur earlier in the climate record and in other regions. Also, to keep it simple, there is no net trend bias yet.<br />
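A single station series of this kind could be simulated along the following lines. This is a sketch under my reading of the setup, not the paper's code; in particular, whether "break size" refers to the jump magnitudes (as here) or to the levels of the step function differs between papers.

```python
import numpy as np

def simulate_station(n_years=100, n_breaks=5, break_var=1.0, noise_var=1.0,
                     rng=None):
    """Simulate one station: a step function with n_breaks breaks at
    random positions, jump sizes drawn from a normal distribution with
    variance break_var, plus white measurement/weather noise with
    variance noise_var. Returns the series and the break positions."""
    if rng is None:
        rng = np.random.default_rng()
    positions = np.sort(rng.choice(np.arange(1, n_years), size=n_breaks,
                                   replace=False))
    jumps = rng.normal(0.0, np.sqrt(break_var), size=n_breaks)
    signal = np.zeros(n_years)
    for pos, jump in zip(positions, jumps):
        signal[pos:] += jump  # everything after a break shifts by the jump
    noise = rng.normal(0.0, np.sqrt(noise_var), size=n_years)
    return signal + noise, positions
```

A network is then simply 10 such stations sharing a common (here zero) regional climate signal.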
<br />
<div style="float: right; margin-bottom: 10px; margin-left: 20px;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhUewarMqB38cEMND3FMBikI_rl9KEeQYtva3-JgAZZRsjO6PEhyphenhyphenHfjy-sacAzNgxzYh5aBBmSlSzo-aLf9eln_JEQg2Va7RGOjSdZ2mIQK0xLixe6QAjsu4FVK8zhNTc2EOKYduU6nrvY/s1600/anova_fig2.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhUewarMqB38cEMND3FMBikI_rl9KEeQYtva3-JgAZZRsjO6PEhyphenhyphenHfjy-sacAzNgxzYh5aBBmSlSzo-aLf9eln_JEQg2Va7RGOjSdZ2mIQK0xLixe6QAjsu4FVK8zhNTc2EOKYduU6nrvY/s300/anova_fig2.png" data-original-width="639" width="300" data-original-height="612" /></a></div>The figure to the right is a scatterplot with theoretically 1000*10*100=1 million yearly temperature averages as they were simulated (on the x-axis) and after correction (y-axis). <br />
<br />
Within the plots we show some statistics. In the top left these are: 1) the mean of x, i.e. the mean of the inserted inhomogeneities; 2) the variance of the inserted inhomogeneities x; 3) the mean of the computed corrections y; and 4) the variance of the corrections.<br />
<br />
In the lower right, 1) the correlation (r) is shown and 2) the number of values (n). For technical reasons, we only show a sample of the 1 million points in the scatterplot, but these statistics are based on all values.<br />
<br />
The results look encouraging: they show a high correlation, 0.96, and the points scatter nicely around the x=y line. <br />
<br />
<div style="float: right; margin-bottom: 10px; margin-left: 20px;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi72TsN37hrn5TxFFzFSCZ5JAnWbmc0iRcJKs-cCOzN-7C7o5H-EVBdcOiIsc2vGIJZm_dWdH8UmIf4driI-7Q2XA2TnI-amwHBdWOfhhKtLnRXGXqQAo0TBDLm5sHD6p6gH7c1RWGGnlc/s1600/anova_fig3.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi72TsN37hrn5TxFFzFSCZ5JAnWbmc0iRcJKs-cCOzN-7C7o5H-EVBdcOiIsc2vGIJZm_dWdH8UmIf4driI-7Q2XA2TnI-amwHBdWOfhhKtLnRXGXqQAo0TBDLm5sHD6p6gH7c1RWGGnlc/s300/anova_fig3.png" data-original-width="671" data-original-height="634" width="300" /></a></div>The second step is to look at the trends of the stations. There is one trend per station, so we have 1000*10=10,000 of them; see the figure to the right. The trend is computed in the standard way using least squares linear regression. Trends would normally have the unit °C per year or century. Here we multiplied the trend by the period, so the values are the total change due to the trend and have the unit °C.<br />
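The conversion from a fitted slope to a total change over the record can be written as a one-line helper. This is my own illustration; I take the period to be the record length minus one year, and the paper may define the period slightly differently.

```python
import numpy as np

def total_trend(series):
    """Least-squares linear trend expressed as the total change over the
    record (slope times the period), in the series' own units."""
    years = np.arange(len(series))
    slope = np.polyfit(years, series, 1)[0]  # degree-1 fit, slope first
    return slope * (len(series) - 1)
```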
<br />
The values again scatter beautifully around x=y and the correlation is as high as before: 0.95. <br />
<br />
The final step is to compute the 1000 network trends. The result is shown below. The averaging over 10 stations reduces the noise in the scatterplot and the values again scatter beautifully around the x=y line; the correlation is smaller now, but still decent: 0.81. Remember that we started with quite noisy data, where the noise was as large as the break signal.<br />
<br />
<div style="float: left; margin-bottom: 10px; margin-left: 20px;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6UJXB-NHcJVpOqWDtjQfSIDR-cuYi0mliy9_ZPsMM7axSu7TPsxWtufTWTmMNZ-nCKfBTz1nom-0dH6qLsMB2qMg2gVQ0UZ8tfeHNfkDdmph5Vn_6A2VzDrpP9LfRX2fSDe4XflzzLDo/s1600/anova_fig4.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6UJXB-NHcJVpOqWDtjQfSIDR-cuYi0mliy9_ZPsMM7axSu7TPsxWtufTWTmMNZ-nCKfBTz1nom-0dH6qLsMB2qMg2gVQ0UZ8tfeHNfkDdmph5Vn_6A2VzDrpP9LfRX2fSDe4XflzzLDo/s300/anova_fig4.png" data-original-width="664" data-original-height="634" width="300" /></a></div><br clear="all"><br />
<h2>The remaining error</h2>In the next step, rather than plotting the network trend after correction on the y-axis, we plot the difference between this trend and the inserted network mean trend, which is the trend error remaining after correction. This is shown in the left panel below. For this case the variance of the trend error after correction is half the variance before correction. Uncertainties are typically expressed as standard deviations; then the remaining trend error is 71% (the square root of one half). Furthermore, the averages are basically zero, so no bias is introduced.<br />
<br />
With a signal having as much break variance as noise variance from the measurement and weather differences between the stations, the correction algorithm naturally cannot reconstruct the original inhomogeneities perfectly, but it does so decently and its errors have nice statistical properties.<br />
<br />
Now if we increase the variance of the break signal by a factor of two we get the result shown in the right panel. Comparing the two panels, it is striking that the trend error after correction is the same: it does not depend on the break signal, only the noise determines how accurate the trends are. In case of large break signals this is nice, but if the break signal is small, it also means that the correction can increase the random trend error. That could be the case in regions where the networks are sparse and the difference time series between two neighboring stations are consequently quite noisy. <br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnMjDQpQeimlQVZcqo7peI7dqW9zdN_T9yT4WFZ-QfZF-H8VoCahjzQtkM3kygIx4F9in9_E6S1Ipb44BikHOJCqvUlfI5vt8Wu4QnRCDnJDrPzCMV664U1reK53iABIvFOisaYm1kRzI/s1600/anova_fig78.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnMjDQpQeimlQVZcqo7peI7dqW9zdN_T9yT4WFZ-QfZF-H8VoCahjzQtkM3kygIx4F9in9_E6S1Ipb44BikHOJCqvUlfI5vt8Wu4QnRCDnJDrPzCMV664U1reK53iABIvFOisaYm1kRzI/s1600/anova_fig78.png" data-original-width="690" data-original-height="324" /></a><br />
<br />
<h2>Large-scale trend biases</h2>This was all quite theoretical, as the networks did not have a bias in their trends. They did have a random trend error due to the inserted inhomogeneities, but averaged over many such networks of 10 stations the trend error would tend to zero. If that were the case in reality, not many people would work on statistical homogenization. The main aim is to reduce the uncertainties in large-scale trends due to (possible) large-scale biases in those trends.<br />
<br />
<div style="float: right; margin-bottom: 10px; margin-left: 20px;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpt4TJNlafG00vNR7yTU3zsX_-enioWEvgFyna3Te0dFcoapNMidhMi7PsBLgKilfdm-ad10wHihwOtUgUKpOcmcx1Wxz3M_IHB6s1NEGhl3EcVrC9Kg_L9ZSZmr35PgwMfwhF0ZvyOsI/s1600/anova_fig14.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpt4TJNlafG00vNR7yTU3zsX_-enioWEvgFyna3Te0dFcoapNMidhMi7PsBLgKilfdm-ad10wHihwOtUgUKpOcmcx1Wxz3M_IHB6s1NEGhl3EcVrC9Kg_L9ZSZmr35PgwMfwhF0ZvyOsI/s300/anova_fig14.png" data-original-width="683" data-original-height="647" width="300" /></a></div>Such large-scale trend biases can be caused by changes in the thermometer screens used, the transition from manual to automatic observations, urbanization around the stations or relocations of stations to better sites.<br />
<br />
If we add a trend bias to the inserted inhomogeneities and correct the data with the joint correction method, we find the result to the right. We inserted a large trend bias of 0.9 °C into all networks, and after correction it was completely removed. This again does not depend on the size of the bias or the variance of the break signal. <br />
<br />
However, all this is only true if all breaks are known. Before I write a post about the more realistic case where the breaks are not perfectly known, I will first have to write a post about how well we can detect breaks. That will be the next homogenization post.<br />
<br />
<h2>Some equations</h2>Next to these beautiful scatterplots, the article has equations for each of the above-mentioned three steps: 1) from the inserted breaks and noise to what this means for the station data, 2) how this affects the station trend errors, and 3) how this results in network trends. <br />
<br />
<div style="float: right; margin-bottom: 10px; margin-left: 20px;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi81OK8xqabRHByhL8_MD_FTkQjs0rr09QCoaJGYy9IG1LQaKpE1Bd-1QoolGvgYOTNYN_pS3ZLQcLoyxzjdJ9m_S4jcVsZlmI5iWPsbxPBAMpKCba4JK2obBbMowwnKckOzvdvzWvIiyU/s1600/anova_fig9.png" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi81OK8xqabRHByhL8_MD_FTkQjs0rr09QCoaJGYy9IG1LQaKpE1Bd-1QoolGvgYOTNYN_pS3ZLQcLoyxzjdJ9m_S4jcVsZlmI5iWPsbxPBAMpKCba4JK2obBbMowwnKckOzvdvzWvIiyU/s450/anova_fig9.png" data-original-width="683" data-original-height="489" width="450" /></a></div>With equations for the influence of the size of the break signal (the standard deviation of the breaks) and the noise of the difference time series (the standard deviation of the noise) one can then compute how the trend errors before and after correction depend on the signal to noise ratio (SNR), which is the standard deviation of the breaks divided by the standard deviation of the noise. There is also a clear dependence on the number of breaks. <br />
<br />
Whether the random network trend error increases or decreases due to the correction method is determined by a quite simple expression: 6 times the SNR divided by the number of breaks. So if the SNR is one, as in the initial example of this post, and there are fewer than 6 breaks, the correction improves the trend error, while with more than 6 breaks the correction adds a random trend error. This simple expression ignores a weak dependence of the results on the number of stations in the networks.<br />
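The rule of thumb can be written down directly. The function name is my own and the threshold of one is my reading of the stated criterion; the paper gives the exact expressions.

```python
def correction_improves_trend(snr, n_breaks):
    """Rule of thumb from the post: the joint correction reduces the
    random network trend error when 6 * SNR / n_breaks exceeds one,
    where SNR is the break standard deviation divided by the noise
    standard deviation of the difference series."""
    return 6.0 * snr / n_breaks > 1.0
```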
<br />
<h2>Further research</h2>I started by saying that correction methods were a research gap, but homogenization algorithms have many more steps beyond detection and correction, which should also be studied in isolation where possible to gain a better understanding. <br />
<br />
For example, the computation of a composite reference. The selection of reference stations. The combination of statistical homogenization with metadata on documented changes in the measurement setup. And so on. The last chapter of the <a href="https://eartharxiv.org/8qzrf/">draft guidance on homogenization</a> describes research needs, including research on homogenization methods. There are still a lot of interesting and important questions.<br />
<br />
<br />
<h2>Other posts in this series</h2>Part 5: <a href="https://variable-variability.blogspot.com/2020/05/statistical-homogenization-under-correction-trends.html">Statistical homogenization under-corrects any station network-wide trend biases</a><br />
<br />
Part 4: <a href="https://variable-variability.blogspot.com/2020/04/break-detection-deceptive-noise-break-signal-variance.html">Break detection is deceptive when the noise is larger than the break signal</a><br />
<br />
Part 3: <i>Correcting inhomogeneities when all breaks are perfectly known</i><br />
<br />
Part 2: <a href="https://variable-variability.blogspot.com/2020/03/trend-errors-raw-temperature-station-inhomogeneities.html">Trend errors in raw temperature station data due to inhomogeneities</a><br />
<br />
Part 1: <a href="https://variable-variability.blogspot.com/2020/02/estimating-statistical-properties-inhomogeneities-homogenization.html">Estimating the statistical properties of inhomogeneities without homogenization</a><br />
<br />
<h2>References</h2>Lindau, R. and V. Venema, 2018: On the reduction of trend errors by the ANOVA joint correction scheme used in homogenization of climate station records. <i>International Journal of Climatology</i>, <b>38</b>, pp. 5255–5271. Manuscript: <a href="https://eartharxiv.org/r57vf/">https://eartharxiv.org/r57vf/</a> Article: <a href="https://doi.org/10.1002/joc.5728">https://doi.org/10.1002/joc.5728</a><br />
<br />
<b>Corona Virus Update: the German situation improved, but if we relax measures so much that the virus comes back it will not be just local outbreaks (part 32)</b> (Victor Venema, 2020-04-20)<br />The state of the epidemic has improved in Germany: about half as many people are getting ill now as at the peak a month ago. This has resulted in calls to relax the social distancing measures. The German states have decided that mostly nothing will change in the next two weeks, but small shops (and some other shops) will open again, and in some states some school classes may resume (although I am curious whether that will actually happen; German Twitter is not amused).<br />
<br />
<a href="https://www.rki.de/DE/Content/InfAZ/N/Neuartiges_Coronavirus/Situationsberichte/2020-04-19-de.pdf?__blob=publicationFile" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVS8HDOl8IulLd3OTKHWVxkjkB2dPYdUyprrDo37CqZsyrtu-aSNuBqYqH6XD2IvYwHldBJNVhm2_KbtQQosV48hpzQg9cis8dMB1ybrncpWEaWSOyoqGtd7Ef5Ity2wX944TD1WqPuaM/s600/Screenshot-2020-4-15+2020-04-15-de+pdf.png" data-original-width="967" data-original-height="616" width="600"/></a><br />
<i>An estimate of the number of new cases by the date these people became ill. Similar graphs tend to show new cases by the date they became known to the health departments; by looking at the date people became ill, which is often much earlier, you can see a faster response to changes in social distancing. In dark blue are the cases where the date someone became ill is known. In grey where it was estimated, because only the case is known but not when the person became ill. In light blue is an estimate of how many cases will still come in.</i><br />
<br />
So in the last episode of the Corona Virus Update, science journalist Korinna Hennig tried to get the opinion of Christian Drosten on these political measures. He does not like giving political advice, but he did venture that some politicians seem to wrongly think measures can be relaxed without the virus coming back. The two weeks that the lockdown continues should be used to prepare other measures that can replace the lockdown-type measures, such as a track-and-trace CoronaApp and the public wearing everyday masks.<br />
<br />
Another reason it may be possible to relax measures somewhat would be that the virus may spread less efficiently in summer. It is not expected to go away, but the number of people who are infected by one infected person may go down a bit.<br />
<br />
When the virus comes back, either because we relaxed social distancing too much too early or because of the winter, it will look different from this first wave. This first wave was characterized by local outbreaks. A second wave would be everywhere, as the virus (and its various mutations) is spreading evenly geographically.<br />
<br />
Korinna Hennig asks Drosten to explain why it is easier for him to call COVID-19 a pandemic than it is for the World Health Organization. This question was inspired by Trump complaining that the WHO called the pandemic too late. Drosten notes that it has political consequences when the WHO calls the situation a pandemic, but that such a declaration does not change the situation in your own country, nor what Trump could have done.<br />
<br />
Really interesting was the part at the end on some possible (not guaranteed) positive surprises. <br />
<br />
<div style="float: right; margin-bottom: 10px; margin-left: 20px; width:297px; "><a href="https://virologie-ccm.charite.de/metas/person/person/address_detail/drosten/"><img border="0" data-original-height="337" data-original-width="297" height="337" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOdNzlursTpq66xOJLNlrq8YTmLqfI26a0qEMCOjaHivkc_iwfkqEpC4ASQPbscXG07ySbm4QHJnJSqdcTfTN86CcaSmJA0kaZyXg0NiSAeW1ukRseHneHJP1rhB8WpZ8B5sLZ_gKW2Ac/s1600/drosten-christian-institut-fuer-virologie-charite_297x337.jpg" width="297" /></a><br />
<i>Prof. Dr. Christian Drosten, expert for emerging viruses and developer of the WHO SARS-CoV-2 virus test, which was used in 150 countries.</i></div><h3>The situation and measures in Germany</h3><b>Korinna Hennig:</b><br />
<blockquote>What's your assessment, how long would [the reproductive rate] have to stay below one for it to have a really long-term effect and we're not going to say that at some point we have to close all the schools again.</blockquote><b>Christian Drosten:</b><br />
<blockquote>I believe there is talk of months [in <a href="https://www.helmholtz.de/index.php?id=6551">a report by the Helmholtz Association</a>]. I can well believe that this is the case. However, this is not the path that has been chosen in essence [by the German government], but rather - I believe - the idea has arisen that the intention is to keep it within the current range, perhaps by taking additional measures to reduce the pressure a little more. <br />
<br />
That is an important point of view, and one that needs to be understood. It is not primarily a question of saying that we have now achieved a great deal, that the measures have already had a considerable impact. And now we are simply letting them go a tad, because we no longer want to. Then at some point we will have to take a look and then we will have to consider how to proceed, that is one view. <br />
<br />
The other is that everything will work out fine. Sometimes you can hear that between the lines. I have the feeling, particularly among the general public, that many people, even in politics, are speculating that it will not come back at all, that it will not pick up any momentum. Unfortunately, that is not what the epidemiological modelers are saying, but it is generally assumed that, if nothing is offered as a counter-offer to this relaxation of measures, it will really get out of hand. <br />
<br />
And the idea is, of course - and this is a very real idea in Germany - that people say that they are now relaxing these measures to a small extent, but to a really small extent. It is rather the case that corrections are being made in places where we think we can perhaps get away with it without the efficient reduction of transmission suffering in the first place. And in the time that has been gained by the decision, preparations are being made to let other measures come into force. And this of course includes the great promise of automated case tracking.<br />
<br />
The cell phone tracking ... doesn't have to do the job completely, but you can combine it. You could say that there is a human manual case tracking system, but it gets help from such electronic measures, while you introduce these electronic measures. After all, this is not something that is introduced overnight; there must be some transition. I believe that the few weeks of time that have now been gained once again can be used to introduce such measures, and that is where a great deal of faith comes from at the moment.<br />
<br />
Of course, there are other things to hope for as additional effects, such as, for example, a recommendation on the wearing of masks by the public. That could have an additional effect. Of course there will also be a small additional effect on seasonality. We have already discussed this, and there are studies which say that, unfortunately, there is probably not a large effect on seasonality, but there is a small effect on seasonality. <br />
<br />
That is where things are coming together, so that we hope that the speed of propagation will perhaps slow down again overall and that we will at least be able to enter a region over the summer and into the autumn, where we will unfortunately see the effect of winter coming again, a possible winter wave, but where we will then have the first pharmaceutical interventions. Perhaps a first drug, with which certain risk patients could be treated in an early stage. Maybe the first use studies, i.e. efficacy studies of the first vaccines. This is the overall concept, which one hopes will work.</blockquote>Currently one infected person infects 0.7 or 0.8 other persons (R, the reproduction number). That is behind the decline in the number of new cases. Theoretically you could thus allow for 25% more contacts while still being in a stable situation. I would be surprised if the small relaxations decided for the next two weeks would do that. I do worry that these relaxations make people take the problem less seriously, and that can quickly lead to 25% more contacts.<br />
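The 25% follows from assuming the reproduction number scales linearly with the number of contacts: contacts may grow by a factor of 1/R before R reaches one. A minimal sketch under that assumption (the function name is mine):

```python
def allowed_contact_increase(r_current, r_limit=1.0):
    """Fractional increase in contacts that keeps the reproduction
    number at or below r_limit, assuming R scales linearly with the
    number of contacts (a simplification)."""
    return r_limit / r_current - 1.0
```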
<br />
I would personally prefer this decline to continue until we get to a level where containment by manual tracking infected people and their contacts becomes an effective way to fight the epidemic; <a href="https://www.youtube.com/watch?v=3z0gnXgK8Do">Mailab explains it well in German</a>.<br />
<br />
If we get <a href="https://variable-variability.blogspot.com/2020/04/privacy-track-trace-app-corona.html">the tracking of infected people with a CoronaApp working</a>, it would matter much less at which level of contagion we start, but I do not expect that the CoronaApp will be able to do all the work; it will likely need to be complemented by manual tracking. With the current plans, according to rumours in the media, placing less emphasis on the privacy of the users, I worry that too few will participate to make any kind of dent. An app where we can only hope, and need to trust, that the government keeps its side of the bargain and does not abuse the data would also be less useful in large parts of the world where you can definitely not trust the government.<br />
<br />
That some states are already starting to open some classes is in principle a good thing. But it goes too fast, the schools are not prepared yet, and I see quite some backlash coming. If done well, by opening a few school classes first we could have learned how to do this before doing more, and <a href="https://variable-variability.blogspot.com/2020/04/opening-germany-randomized-controlled-trial-schools.html">we could study how much this contributes to a higher reproduction number</a> R. If we are lucky maybe hardly at all; see the last section on possible positive surprises.<br />
<br />
<h3>Summertime</h3>The flu normally goes away in summer. This is not expected for SARS-2, but the reproduction number could be 0.5 lower; that is, one infected person would infect half a person fewer. Without measures the reproduction number is expected to be between 2 and 3, and we have to keep it below 1 to avoid the situation getting out of hand again. The summer may thus help a bit, which could mean less stringent restrictions.<br />
<br />
It is not well understood what exactly makes the summer harder for the flu, and even less so for SARS-2. One aspect is likely that people are outside more and ventilate buildings more, which dilutes and dries the virus. Also, when it comes to schools, it may be an option to hold classes outside, where the distancing rules could be less strict than indoors.<br />
<br />
Museums could create large sculpture gardens outside for the summer. As the conference centres are empty and unused, they could be used as social-distancing museums. The empty hotels could be used to quarantine people who might otherwise infect others in their households. We have to support the hotels anyway so they survive until the pandemic is over.<br />
<br />
I have often dreamed of conferences held while walking outside in nature. You could transmit the voice of the speaker with headsets. The PowerPoint slides with Comic Sans would be missing. This may be the year to start this as an alternative to video conferences. (Although there would still be transport.)<br />
<br />
<h3>World Health Organization and Trump</h3><b>Korinna Hennig:</b><br />
<blockquote>Could you briefly explain again what the difference is when you say here in the podcast for example: Yes, we have a pandemic in an early phase. And the WHO is still hesitating for a very long time. What is the crucial difference when the WHO makes such an assessment?</blockquote><b>Christian Drosten:</b><br />
<blockquote>So I am only an individual and can give my opinion, which you can follow or not. You can take me for someone who knows what he's doing. Or you can say: He's just a fool and he says things here. <br />
<br />
Of course, this has different consequences with the WHO. In the case of a UN organisation, this has certain consequences, not only when it comes to saying that this is a pandemic, but also, and especially, when it comes to saying that this is a PHEIC, i.e. a Public Health Emergency of International Concern. That is a term used in the context of the international health regulations. This then also has consequences for intergovernmental organisations. This scope has certainly also led to delays in all these decisions by the WHO. <br />
<br />
Of course there are advisory bodies. After all, the WHO is not a person, but an opinion-forming and opinion-collecting organisation. Experts are called together, committees that have to vote at some point and where there is sometimes disagreement. And then they say that we will meet again next week and until then we will observe the situation again. This then leads to decisions that are perceived as a delay by some countries. This is an ex post evaluation of the WHO's behaviour. <br />
<br />
At the moment this is again all about politics. And it is about a decision by Donald Trump, who has now said that he is suspending the WHO payments, the contributions, because the WHO did not say certain things early on. <br />
<br />
It was, of course, known relatively early on from individual case reports that cases had already been introduced in the USA. And now to say that it is a pandemic that is taking place in all other countries ... So the statement that this is a pandemic acknowledges the situation, that it is widespread. This has nothing to do with the assessment for your own country. Since you know it is in your own country, you have to ask yourself: do I act or not?</blockquote><b>Korinna Hennig:</b><br />
<blockquote>And there are of course financial liabilities between countries that are linked to the WHO.</blockquote><br />
<h3>Local outbreaks in wave 1, everywhere in wave 2</h3>If there is a second wave, it will not look like this first wave.<br />
<b>Christian Drosten:</b><br />
<blockquote>What happened in the case of the Spanish flu was this: We also had a first wave there in some major US cities - that is very, very well documented - that caught our attention. However, it did not occur in all places, but was distributed extremely unevenly locally. It was conspicuous here and there, and elsewhere people did not even notice that this disease existed at all. <br />
<br />
Even there, even at that time, people were already working with curfews and similar things. This was also happening in spring, by the way. Then it went into the summer and apparently there was a strong seasonal effect. And you didn't even notice the disease anymore. And under the cover of this seasonal effect - we can perhaps now think of it as under the cover of the social distancing measures that are currently in force - this illness nevertheless spread, unnoticed, much more evenly geographically. <br />
<br />
And then, when the Spanish flu hit a winter wave, the situation was suddenly quite different. Then chains of infection started at the same time in all places because the virus had spread unnoticed everywhere and no one had paid any attention to it. This is of course an effect that will also occur in Germany, because we do not have a complete ban on leaving and travelling here, and of course we do not have zero transmission either, but we have an R, i.e. a reproduction number that is around or sometimes perhaps even slightly below one. But that does not mean that no more is being transmitted. <br />
So you can look at our homepage, for example, at the Institute of Virology at the Charité - we have now published a whole set of [virus] sequences from Germany. You can see that the viruses in Germany are already very much intermixed, that the local clustering is slowly disintegrating and that all viruses can be found in all places. So let me put it very simply: it is slowly but surely becoming very intermixed. ...<br />
<br />
We'll be in a different situation when winter sets in. ... Suddenly you'd be surprised that the virus starts everywhere at once. Of course it is a completely different impact that such a wave of infection would have.</blockquote>What I find interesting is that there is nearly no difference in virus activity between cities and rural regions in Germany anymore. If anything, just looking at the map below, I have the impression that rural regions have more virus activity. On the other hand, in the beginning, I feel there was more activity in the cities.<br />
<br />
<a href="https://www.rki.de/DE/Content/InfAZ/N/Neuartiges_Coronavirus/Situationsberichte/2020-04-19-de.pdf?__blob=publicationFile" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYNGj9Tg15cmZmmGVpI-P5wQHmTfFBzRpKY3Z6cfcsBgF2_oM7tEaUnCFbaPRSKJL_L_PQE5xvqncK3TLbpYupPqiIxj73vipSkrCSB6YC4fEkdpvagj4K65ttemQW8nVis-mXHzRr8h0/s1600/Screenshot-2020-4-20+T%25C3%25A4glicher+Lagebericht+des+RKI+zur+Coronavirus-Krankheit-2019+%2528COVID-19%2529+-+2020-04-19-de+pdf%25281%2529.png" data-original-width="507" data-original-height="684" width="507" /></a><br />
<i>Yesterday's map of the RKI, the German CDC, of the number of new cases over the last week per 100,000 inhabitants. The larger cities are denoted by a small red dot, the location of the smaller cities can sometimes be seen as a smaller region in a different colour. The darkest region is an outbreak, which was likely due to a strong beer feast.</i><br />
<br />
<h3>Positive surprises</h3><b>Christian Drosten:</b><br />
<blockquote>It is also quite possible that there will be positive surprises. For example, we still know nothing about children. It is even the case that in studies that are very systematically designed, this effect is often still left out. We know from other coronavirus diseases, especially MERS, that not only are children hardly affected, but they are hardly ever infected. Now the question is, of course, whether this is also the case with this disease: that not only do they not get any symptoms and are therefore not so conspicuous in the statistics, but that they are somehow resistant in a certain way and do not even have to be counted in the population to be infected. So what is 70% of the population? Can the 20 percent who are children be considered already done, because they do not get infected at all, so that in reality only 50 percent of the population needs to be infected? This is a big gap, which can also be interpreted as a great hope. <br />
<br />
And there is something else - we are anticipating that, epidemiological modellers are doing that, and they are taking that into account: That there may be an unnoticed background immunity from the common cold corona viruses, because they are already related in some way to the SARS-2 virus. It could happen, however, that certain people, because they have had a cold from such a corona virus in the last year or two, are protected in a previously unnoticed way. <br />
<br />
All I want to say is that we are currently observing more and more - and a major study has just come out of China in the preprint realm - that in well-observed household situations, the secondary attack rate, that is to say the rate of household contacts who become infected when there is an index case, an infected person, in the household, is quite low. It is in the range of 12, 13, 14 percent. Depending on the correction, you can also say that it is perhaps 15, 16, 17 percent. But it does not lie at 50 or 60 percent or higher, where you would then say that the remaining cases are probably just random effects: the one who didn't get infected wasn't at home during the infectious period, or something like that. <br />
<br />
How is it possible that so many people who were supposed to be in the household are not infected? Is there some sort of background immunity involved? <br />
<br />
And there are these residual uncertainties. But at this stage, even if you include all these residual uncertainties in these models, you still get the picture that the medical system and the intensive care unit capacity would be overloaded. That is why it is certainly right at the moment to have taken these measures. We must now carry out intensive research work as quickly as possible to clarify issues such as: What is really going on with the children? Do they not get seriously ill, but are they in fact infected, giving off the virus and carrying it into the family? Or are they resistant in some way? The other question that we absolutely must also answer is: why do relatively few, perhaps even, cautiously put, unexpectedly few, get infected in the household? This is a realisation that is now slowly maturing. <br />
<br />
As I said, a new preprint has just appeared from China, and a few other studies suggest that this is the case. The Munich case tracking study, for example, has already hinted at this a bit. You have to take a closer look at that. Is there perhaps a hitherto unnoticed background immunity, even if only a partial immunity? <br />
<br />
That would not mean that we were wrong at this point in time, or that what we have done now was wrong. At the moment, even if you factor in these effects, you get the impression that it is right to stop this, so that we do not get into a runaway situation that we can no longer control. But for the estimation of how long the whole thing will last, new information could arise from this. It could then be - and I would like to say this now, perhaps as a message of hope - that in a few weeks or months, new information will come out of science that says that the infection activity will probably stop earlier than we thought because of this special effect. <br />
<br />
But I don't want to say that I can announce something now. These are not hints from me, or data that have been available for a long time, but that I wouldn't want to say in public or anything. Rather, they are simply fundamental considerations that we simply know too little about this disease at the moment. And that the knowledge, which is actually growing from week to week, will also influence the current projections.</blockquote><br />
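For the record, the arithmetic behind these percentages is simple. The 70% figure corresponds to the classical herd immunity threshold 1 - 1/R0 for an R0 of about 3.3. This little sketch, with my own illustrative numbers rather than anything from the podcast, shows how a resistant group would shrink the share of the population that still needs to be infected:

```python
# Back-of-the-envelope herd immunity arithmetic (my own illustration):
# the classical threshold for stopping spread is 1 - 1/R0.
def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

r0 = 3.3  # assumed basic reproduction number, which gives roughly 70%
threshold = herd_immunity_threshold(r0)

# If a fraction of the population (say 20% children) turned out to be
# resistant from the start, they would count towards the threshold:
resistant_fraction = 0.20
still_to_infect = threshold - resistant_fraction

print(f"threshold: {threshold:.0%}, left to infect: {still_to_infect:.0%}")
```

With these assumed numbers the threshold is about 70% and the share still to be infected about 50%, matching the gap described in the quote.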
<br />
<h2>Other podcasts</h2>Part 31: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-reinfected-cured-patients.html">Corona Virus Update: Don't take stories about reinfected cured patients too seriously.</a><br />
<br />
Part 28: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-exit-strategy-masks-loss-smell-taste.html">Corona Virus Update: exit strategy, masks, aerosols, loss of smell and taste.</a><br />
<br />
Part 27: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-tracking-infections.html">Corona Virus Update: tracking infections by App and do go outside</a><br />
<br />
Part 23: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-funding-publishing-arrival-endemic.html">Corona Virus Update: need for speed in funding and publication, virus arrival, from pandemic to endemic</a><br />
<br />
Part 22: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-scientific-studies-cures-covid-19-Remdesivir-Chloroquin-Favipiravir-camostat.html">Corona Virus Update: scientific studies on cures for COVID-19.</a> <br />
<br />
Part 21: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-tests-tests-tests.html">Corona Virus Update: tests, tests, tests and how they work.</a><br />
<br />
Part 20: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-Case-tracking-teams-infections-Germany-Infectiousness.html">Corona Virus Update: Case-tracking teams, slowdown in Germany, infectiousness.</a><br />
<br />
Part 19: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-christian-drosten-outside-face-masks-children.html">Corona Virus Update with Christian Drosten: going outside, face masks, children and media troubles.</a><br />
<br />
Part 18: <a href="https://variable-variability.blogspot.com/2020/03/german-virologist-Christian-Drosten.html">Leading German virologist Prof. Dr. Christian Drosten goes viral</a>, topics: Air pollution, data quality, sequencing, immunity, seasonality & curfews.<br />
<br />
<h2>Related reading</h2><a href="https://www.ndr.de/nachrichten/info/coronaskript178.pdf">This Corona Virus Update podcast and its German transcript.</a> Part 32.<br />
<br />
<a href="https://www.ndr.de/nachrichten/info/Coronavirus-Update-Die-Podcast-Folgen-als-Skript,podcastcoronavirus102.html">All podcasts and German transcripts of the Corona Virus Update.</a><br />
<br />
Victor Venemahttp://www.blogger.com/profile/02842816166712285801noreply@blogger.com1tag:blogger.com,1999:blog-9093436161326155359.post-40984724040378239672020-04-16T07:33:00.000+01:002020-04-16T18:19:03.585+01:00Corona Virus Update: Don't take stories about reinfected cured patients too seriously (part 31)<div style="float: right; margin-bottom: 10px; margin-left: 20px;"><a href="https://virologie-ccm.charite.de/metas/person/person/address_detail/drosten/"><img border="0" data-original-height="337" data-original-width="297" height="337" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOdNzlursTpq66xOJLNlrq8YTmLqfI26a0qEMCOjaHivkc_iwfkqEpC4ASQPbscXG07ySbm4QHJnJSqdcTfTN86CcaSmJA0kaZyXg0NiSAeW1ukRseHneHJP1rhB8WpZ8B5sLZ_gKW2Ac/s1600/drosten-christian-institut-fuer-virologie-charite_297x337.jpg" width="297" /></a><br />
<i>Prof. Dr. Christian Drosten</i></div>The last Corona Virus Update podcast with <a href="https://variable-variability.blogspot.com/2020/03/german-virologist-Christian-Drosten.html">specialist for emerging viruses Prof. Dr. Christian Drosten</a> had two main topics. The internationally most important one is about press reports claiming that cured patients were reinfected, or even that people may not become immune after recovering from the disease. ThEN WHat AbOuT hErD iMmUNiTy? <br />
<br />
I have seen people who are normally careful and well informed talk about these "reinfections". However, it is very likely just a problem with measurement accuracy when in the final stages of the disease the amount of virus becomes very low and hard to detect, especially in samples taken from the throat. <br />
<br />
The other half of the podcast was about a study on the spread of SARS-CoV-2 in the German municipality Heinsberg, a region not too far from Bonn where there was a big early outbreak after a Carnival party. At a press conference some preliminary results were presented without any detail on the methods, on how these results were computed. The numbers suggested fewer people may die and more may be infected without knowing it. <br />
<br />
There was first a wave of publicity praising the results and discussing the political implications. Then, after consulting scientists, there was a wave of publicity claiming the study was rubbish, while all the scientists had said was that they did not have information on the methods and thus could not comment. Sometimes they explained the kind of information they would need, and that was spun into claims that the study had done these things wrong, which was never claimed. On social media people started attacking the Heinsberg scientists, or those asking for more information, which can only have been based on whether they liked the numbers politically, because they knew even less about the methods. For a day Germany looked like the US culture war. Social media has a mob problem that needs addressing. <br />
<br />
It was not a glorious hour for science reporting by (probably mostly) political journalists. Anyway, because this is much ado about nothing until we have a manuscript describing the methods, and a purely German affair, I have skipped this part. I was nodding a lot: yes, those are the kinds of problems you have interpreting measurements; yes, you really need to know the measurement process well to assess the results. There are so many similarities between sciences. <br />
<br />
It may still be fun for the real virology science nerd to learn the kind of details that matter to interpret a study. <a href="https://www.ndr.de/nachrichten/info/coronaskript176.pdf">They can read the German transcript.</a> <br />
<br />
<h3>The basic problem determining whether someone is ill</h3><b>Korinna Hennig:</b><br />
<blockquote>Over the weekend there have been several reports from China and South Korea about patients who were considered to have recovered or were discharged from hospital and have now tested positive again. So this is not about antibodies, but about the actual virus detection in the throat swab, for example, or from the lungs. Is it conceivable that the virus is reactivated? You also examined the course of the PCR tests on the Munich patients.</blockquote><b>Christian Drosten:</b><br />
<blockquote>This phenomenon can be described as follows: A patient is discharged from the hospital, verified as corona negative and as cured. And a moment later - it could be days, three or four days, or even up to seven or eight days - the patient is tested again. And suddenly he is positive for the virus in the PCR. It is said that the patient may have become newly infected, or in reality he was not immune at all, although he survived the disease. Or the virus has come back again, and you know certain infectious diseases, herpes viruses are the prime example, which can always come back. <br />
<br />
One asks the question: is this perhaps the case with this new virus? Unfortunately, there are still very few precise descriptions in the scientific literature of how the virus is excreted in patients in different types of samples, for example in swabs taken from the throat or in lung secretion, also known as sputum, or in stool samples - these are all the types of samples we know that the virus is detectable. Only a few studies have so far described how this behaves over time in relation to excretion. <br />
<br />
We have made and published one of them: an overview picture of this excretion over time in nine patients from Munich. ... This shows the detection limit of the polymerase chain reaction. And you can see clearly, especially towards the end of the disease process, when the patients recover, that there is still virus present. It is sometimes detectable, sometimes for a few days in a row, then again for a few days in a row it is not detectable. It keeps jumping above and below the detection limit. <br />
<br />
These are simply statistical phenomena that occur. A PCR can only test a certain sample volume for virus. There are statistical distribution phenomena which mean that the virus has in principle been there the whole time, but the test cannot always detect it. You have to picture it like this, I often explain it to students like this: you have a swimming pool full of water and goldfish are swimming in it. And there is no doubt that they are there. But now you take a sample from this swimming pool with a bucket, blindfolded. And then you may have a goldfish in your bucket and sometimes not. Still, one would not deny that there are goldfish in the swimming pool. ...</blockquote><br />
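The goldfish picture can be made quantitative. Assuming virus copies are randomly distributed in the sampled material, the number of copies in a fixed sample volume follows a Poisson distribution, so the chance of a positive test is 1 - exp(-expected copies). The concentrations in this little Python sketch are made up for illustration, not from any study:

```python
import math
import random

def detection_probability(copies_per_ml: float, sample_ml: float) -> float:
    # With virus copies randomly distributed in the material, the copy
    # count in the sampled volume is Poisson distributed; the test is
    # positive whenever at least one copy ends up in the sample.
    expected_copies = copies_per_ml * sample_ml
    return 1.0 - math.exp(-expected_copies)

# Near the detection limit (about one expected copy per sample) repeated
# tests on the same patient jump between positive and negative at random.
random.seed(0)
p = detection_probability(copies_per_ml=5.0, sample_ml=0.2)
repeats = "".join("+" if random.random() < p else "-" for _ in range(10))
print(f"P(positive) = {p:.2f}, ten repeated tests: {repeats}")
```

With one expected copy per sample the test is positive only about 63% of the time, even though the virus is there the whole time, which is exactly the jumping above and below the detection limit described in the quote.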
<h3>Reporting of the results</h3><blockquote>And now the question is simply how to deal with it. I can tell you that here in Germany something like this would not happen, because we have a culture here, where results like this are questioned relatively quickly and rules are always seen with the possibility of an exception. In other words, a German health authority would practically say: well, okay, that's obvious, that's what happened now. <br />
<br />
But in the Asian culture of public health there is a much greater strictness in dealing with such rules. That is not so bad. I don't want to criticize it now. It is simply a cultural difference that when such a rule is established, it is adhered to. And when it is agreed that a patient who has been PCR negative twice in a row is defined as cured, he is discharged. ...<br />
<br />
It is a thoroughness to say: No, this rule will not be questioned now, this is no exception, but we just enter it into the table. The patient was tested negative twice and now he is positive again. And now we test a few hundred of such discharge courses and enter all this in the table and discuss it only after we have the table completely. Then we write this together and write a scientific publication about it. This is exactly what happened, several times. <br />
<br />
These scientific publications are now in a public resource and readable, and this discussion process is starting: people are reading such publications who perhaps do not know the details and say: What is this? It looks like a reinfection. What is going on with this virus? And it is being spread again through even more discussion channels. This creates excitement and uncertainty.</blockquote>As a scientist, I would prefer the "Asian" process, that is the cleaner data, where you know exactly what happened. You have to understand the measurement process, but the scientific literature is for scientists. <br />
<br />
I like the movement to open science, which makes it easier for people to participate in science and also for scientists to do science, but the scientific literature is not written for normal people and it will lead to problems when people with half-knowledge start reading the scientific literature. In this case it was probably innocent, in many cases bad actors abuse this to mislead the people.<br />
<br />
<h3>Study one</h3>How the samples were taken for one of the studies was not fully clear, as can happen with preprints.<br />
<blockquote>So it may well be that at one point when the patient was discharged, they simply took swabs from the throat, and at another time they may have looked in the lung secretion that someone coughed up. Such things can happen, these are two different types of samples. <br />
<br />
And we know well that the lung secretion stays positive much longer after discharge. And we also believe that it is not infectious for others. We tried this using cell culture virus isolation studies, which we also did in our publication. We believe it is no longer infectious: we have never been able to isolate an infectious virus. ...</blockquote><br />
<h3>Study two</h3><blockquote>In the other study it is actually more interesting, it is a bit more explicit. They examined 172 patients beyond the point of discharge. In 25 of them, the test was positive again, on average after 5.23 days after discharge. There it is also clearly stated, the discharge criterion was two negative throat swabs in a row. <br />
<br />
So: The patient had to have a negative throat-swab twice, then he was discharged as cured. But we know exactly that the throat-swab is the sample that becomes negative earliest in patients. So in the second week of illness, many patients no longer have a positive throat-swab on most of the days that one tests, while stool and sputum are still reliably almost always positive. <br />
<br />
And then it is said that of these 25 patients, 24 patients had severe disease courses. For me, this indicates that if someone has a severe course, he will of course be discharged later. Then he will be treated in hospital for a longer time. And especially with these patients we know that the virus in their throat is almost always completely gone. So the virus in the throat has had time to be eliminated. So in severe cases, the throat swab is no longer positive after this long time.</blockquote>Let me pause here to let this sink in. If this were really a matter of people being reinfected because they did not acquire immunity, it would mean that the patients who got most ill were the ones who failed to acquire immunity. If it really were about immunity, the opposite would be more logical.<br />
<blockquote>Then it is said that 25 patients have been diagnosed as positive. But in 14 of them, the laboratory test was positive again after they had been discharged from the stool, i.e. not from the throat-swab, and this tells me that we have exactly this mix-up here. For we know that the stool samples in particular remain positive for the virus for a long time, and I have to say that here too, by the way, we have not found any infectious virus in them. This is probably again only dead, excreted virus. <br />
<br />
And with others it was throat swabs, which then tested positive again. But then we have to say again, a throat swab can also contain naturally coughed up lung mucus. You cough up the stuff and it sticks to the back of your throat. <br />
<br />
You can see from the methodology, from the samples in which the virus was found, and also from the type of patients, namely patients who have been seriously ill for a long time, that there is a risk of falling into this trap, into this confusion. I would even suspect that the authors themselves simply know that this "mistake" could be present here. ...</blockquote><br />
<br />
<h2>Other podcasts</h2>Part 28: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-exit-strategy-masks-loss-smell-taste.html">Corona Virus Update: exit strategy, masks, aerosols, loss of smell and taste.</a><br />
<br />
Part 27: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-tracking-infections.html">Corona Virus Update: tracking infections by App and do go outside</a><br />
<br />
Part 23: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-funding-publishing-arrival-endemic.html">Corona Virus Update: need for speed in funding and publication, virus arrival, from pandemic to endemic</a><br />
<br />
Part 22: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-scientific-studies-cures-covid-19-Remdesivir-Chloroquin-Favipiravir-camostat.html">Corona Virus Update: scientific studies on cures for COVID-19.</a> <br />
<br />
Part 21: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-tests-tests-tests.html">Corona Virus Update: tests, tests, tests and how they work.</a><br />
<br />
Part 20: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-Case-tracking-teams-infections-Germany-Infectiousness.html">Corona Virus Update: Case-tracking teams, slowdown in Germany, infectiousness.</a><br />
<br />
Part 19: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-christian-drosten-outside-face-masks-children.html">Corona Virus Update with Christian Drosten: going outside, face masks, children and media troubles.</a><br />
<br />
Part 18: <a href="https://variable-variability.blogspot.com/2020/03/german-virologist-Christian-Drosten.html">Leading German virologist Prof. Dr. Christian Drosten goes viral</a>, topics: Air pollution, data quality, sequencing, immunity, seasonality & curfews.<br />
<br />
<h2>Related reading</h2><a href="https://www.ndr.de/nachrichten/info/coronaskript176.pdf">This Corona Virus Update podcast and its German transcript.</a> Part 31.<br />
<br />
<a href="https://www.ndr.de/nachrichten/info/Coronavirus-Update-Die-Podcast-Folgen-als-Skript,podcastcoronavirus102.html">All podcasts and German transcripts of the Corona Virus Update.</a><br />
<br />
Roman Wölfel, Victor M. Corman, Wolfgang Guggemos, Michael Seilmaier, Sabine Zange, Marcel A. Müller, Daniela Niemeyer, Terry C. Jones, Patrick Vollmar, Camilla Rothe, Michael Hoelscher, Tobias Bleicker, Sebastian Brünink, Julia Schneider, Rosina Ehmann, Katrin Zwirglmaier, Christian Drosten & Clemens Wendtner, 2020: Virological assessment of hospitalized patients with COVID-2019. <i>Nature</i>. <a href="https://doi.org/10.1038/s41586-020-2196-x">https://doi.org/10.1038/s41586-020-2196-x</a><br />
<br />
Ye, G., Pan, Z., Pan, Y., Deng, Q., Chen, L., Li, J., Li, Y., & Wang, X., 2020: <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7102560/">Clinical characteristics of severe acute respiratory syndrome coronavirus 2 reactivation.</a> <i>The Journal of infection</i>, <b>80</b>(5), e14–e17. Advance online publication. <a href="https://doi.org/10.1016/j.jinf.2020.03.001">https://doi.org/10.1016/j.jinf.2020.03.001</a><br />
<br />
Jing Yuan, MD, Shanglong Kou, PhD, Yanhua Liang, MS, JianFeng Zeng, MS, Yanchao Pan, PhD, Lei Liu, MD, 2020: <a href="https://academic.oup.com/cid/advance-article/doi/10.1093/cid/ciaa398/5817588?searchresult=1">PCR Assays Turned Positive in 25 Discharged COVID-19 Patients</a>. <i>Clinical Infectious Diseases</i>, ciaa398. <a href="https://doi.org/10.1093/cid/ciaa398">https://doi.org/10.1093/cid/ciaa398</a><br />
<br />
Victor Venemahttp://www.blogger.com/profile/02842816166712285801noreply@blogger.com0tag:blogger.com,1999:blog-9093436161326155359.post-32270798087582010132020-04-14T07:13:00.000+01:002020-04-14T07:13:00.429+01:00Opening up Germany in a Randomized Controlled TrialIt is now clear that for now Germany has managed to avoid spiralling into a situation where the new Coronavirus overburdens the healthcare system. In fact, I think we can say that <i>the number of cases is declining</i>. <br />
<br />
<i>So in Germany the discussion has started about slowly opening up society again.</i> On Wednesday the 15th of April the government wants to decide what to do next week. While the number of confirmed infections is going down, I feel it would be good to basically keep the current measures in place for two more weeks. This would lower the numbers to where the <a href="https://variable-variability.blogspot.com/2020/04/privacy-track-trace-app-corona.html">tracking and tracing of infected people becomes an effective way to keep infections down</a>, which means fewer restrictions long-term.<br />
<br />
But <i>we could use these two weeks for an experiment that will help us make better decisions</i>. The best experiments are [[<a href="https://en.wikipedia.org/wiki/Randomized_controlled_trial">randomized controlled trials</a>]], where you have two conditions and randomly assign one of them. This is typically how new medicines are tested: one randomly assigns a pill or a placebo to patients.<br />
<br />
In the case of COVID-19 measures, the two conditions could be relaxing the measures or not. Because this is about the spread of a virus in a community, you cannot randomly select people; you will have to randomly select regions. As Germany is a federal state, a logical selection would be randomly assigning states, but you could also do it for municipalities. That would be better scientifically, but harder to implement.<br />
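As an illustration, randomly assigning the 16 federal states to the two conditions takes only a few lines. The even 8-versus-8 split and the fixed seed are my own arbitrary choices for the sketch:

```python
import random

# The 16 German federal states; the names are just labels here.
states = [
    "Baden-Württemberg", "Bayern", "Berlin", "Brandenburg", "Bremen",
    "Hamburg", "Hessen", "Mecklenburg-Vorpommern", "Niedersachsen",
    "Nordrhein-Westfalen", "Rheinland-Pfalz", "Saarland", "Sachsen",
    "Sachsen-Anhalt", "Schleswig-Holstein", "Thüringen",
]

random.seed(2020)  # fixed seed so the assignment is reproducible
shuffled = random.sample(states, k=len(states))
relax, control = shuffled[:8], shuffled[8:]  # 8 states per condition

print("relax measures:", sorted(relax))
print("keep measures: ", sorted(control))
```

In practice one would additionally want to balance the two groups on things like population density and current case numbers, which is one reason randomizing municipalities would be better scientifically.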
<br />
Without mitigation measures one infected person infects 2 or 3 others. We have to bring this number below 1 to stop the epidemic. <a href="https://variable-variability.blogspot.com/2020/04/privacy-track-trace-app-corona.html">About half of the infections are transmitted by people with symptoms</a> and half by people before they have symptoms. Some are transmitted by people who will never get symptoms and some via the environment, without direct contact. So quarantining people (with symptoms) is important, but not enough; we also <i>need to reduce the number of physical contacts between people without symptoms, which means basically all of us. But the number does not have to go to zero</i>, which is why essential people are still working and supermarkets are open.<br />
<br />
So we have to decide which physical contacts to allow until we have a cure or a vaccine and which ones we do not. <i>This is a compromise between how important the contact is and how dangerous it is.</i> Keeping supermarkets open is clearly important, people have to eat. Most dangerous are close contacts, with many people, over a longer time, inside buildings. Parties with thousands of people are clearly dangerous and, while nice, less important.<br />
<br />
Those two decisions are easy, supermarkets open, parties closed. <br />
<br />
<i>The most difficult decision I see is about whether to open or close school.</i><br />
<br />
On the one hand, this would be important. We cannot have our kids locked in at home; children need to move. We cannot have them miss school for one and a half years, all the more so as this sacrifice does not help them, since school children do not get ill. Children not going to school also prevents many parents doing essential work from going to work, or from working from home efficiently.<br />
<br />
On the other hand, going to school would be dangerous. With many children, this means an enormous number of contacts. And it will be hard to change the behaviour of kids at school to reduce the contacts. (Within one class I am not even sure whether we should try.)<br />
<br />
What makes the decision even harder is the uncertainty in how infectious children are. We know they can be infected, but as they do not have many symptoms, they may be less efficient in spreading it than adults.<br />
<br />
So studying the influence of opening schools would be a good use of a randomized controlled trial. <i>You could do this carefully by only having one or two years go back to school.</i> Rather than switching from compulsory schooling, to closing schools, back to compulsory schooling, we could also <i>make it voluntary</i>. Parents who are in a health risk group could then opt to keep their children at home, while parents who most urgently need to work could opt to send their children to school.<br />
<br />
Whatever we decide, I think it would be a good use of our time to use it for an experiment that helps us make better decisions about a disease we do not know much about yet.<br />
<br />
<h2>
Related reading</h2>
The German National Academy of Sciences, Leopoldina, released their recommendation this Monday, they recommend opening the schools stepwise: <a href="https://www.leopoldina.org/uploads/tx_leopublication/2020_04_13_Coronavirus-Pandemie-Die_Krise_nachhaltig_%C3%BCberwinden_final.pdf">Dritte Ad-hoc-Stellungnahme: Coronavirus-Pandemie – Die Krise nachhaltig überwinden</a><br />
<br />
<a href="https://variable-variability.blogspot.com/2020/04/privacy-track-trace-app-corona.html">A privacy respecting track and trace app to fight Corona is possible and effective</a><br />
<br />
German public radio channel NDR Info makes a daily podcast with virologist Christian Drosten, on my blog you can find <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-exit-strategy-masks-loss-smell-taste.html">translations of parts of these interviews</a>.<br />
<br />
The German CDC, the RKI makes <a href="https://www.rki.de/DE/Content/InfAZ/N/Neuartiges_Coronavirus/Situationsberichte/Gesamt.html">wonderful informative daily situation reports</a>, in German and English.<br />
<br />Victor Venemahttp://www.blogger.com/profile/02842816166712285801noreply@blogger.com10tag:blogger.com,1999:blog-9093436161326155359.post-59079915732205205472020-04-13T07:39:00.000+01:002020-04-13T07:39:07.828+01:00A privacy respecting track and trace app to fight Corona is possible and effective<b>An app to track and trace infections seems to be a promising way out of the lockdowns. </b>Tracking the contacts of infected people is a main strategy to fight this epidemic as long as we do not have a cure or vaccine. It is the main strategy used in South Korea and they are able to keep the number of new infections <a href="https://www.worldometers.info/coronavirus/country/south-korea/">below 100 per day with it</a>.<br />
<br />
When the virus spreads more widely, like in many countries that did not take the virus seriously enough soon enough, it becomes difficult for the health departments to track and trace so many people. In addition, part of the contacts will not be known to the infected person and will thus not be tracked; for example, someone sitting next to you on public transport or in a restaurant. <br />
<br />
<h2>How the app works</h2>For the latter case South Korea uses GPS information from mobile phones. I am not comfortable with the state having all that location data, but fortunately there is a better alternative. There is a great cartoon <a href="https://ncase.me/contact-tracing/">explaining how contact tracing can be done fully respecting privacy</a>. The short version is below.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRcbI2ihGQCnV0QeXcr2scs5rpREoB9aFRfD1XYxpAemLzz4HzQ4Fv-R4QzQ4C2zGUER9ZeIBH6NSX2GPK87Nl2uDc4qlMDz2Djm_UA58lan9oMqt1zBq3ggXs3RraLuxbcAfM4WiFAX8/s1600/onepage.png" imageanchor="1"><img border="0" data-original-height="1600" data-original-width="501" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRcbI2ihGQCnV0QeXcr2scs5rpREoB9aFRfD1XYxpAemLzz4HzQ4Fv-R4QzQ4C2zGUER9ZeIBH6NSX2GPK87Nl2uDc4qlMDz2Djm_UA58lan9oMqt1zBq3ggXs3RraLuxbcAfM4WiFAX8/s1600/onepage.png" width="501" /></a><br />
<br />
<b>The Chaos Computer Club (CCC), Germany's most reliable technology activists, <a href="https://ccc.de/en/updates/2020/contact-tracing-requirements">explain the conditions to make this work</a></b> and have promised to warn about bad apps. I am happy to use such an app and will listen to the CCC for advice. The CCC is comparable to America's [[<a href="https://en.wikipedia.org/wiki/Electronic_Frontier_Foundation">Electronic Frontier Foundation</a>]].<br />
<br />
Let's hope data brokers Google and Apple getting involved does not mess this up. At least one group of scientists who started this approach are <a href="https://www.covid-watch.org/press_releases/google_apple_press_release">hopeful Google and Apple will help</a>. We need many people participating, so we need something everyone can embrace. <br />
<br />
<h2>How effective it is</h2>A study recently published in Science claims that such <a href="https://science.sciencemag.org/content/early/2020/04/09/science.abb6936">fast contact tracing could be as effective as a lockdown</a> if 60% participate & 60% heed its warnings. <br />
<br />
To compute this they first estimate how the virus spreads; this paragraph can be skipped if you are not interested in the scientific basis. They estimated how long the incubation time is (5.5 days). On average it takes 5.0 days between one infected person showing symptoms and the next one showing symptoms. So by the moment someone gets ill, the people they have infected have already started infecting other people. They estimate that on average 1 person infects 2 others. (This is a low value; other studies tend to find between 2 and 3.) The direct transmission from a symptomatic individual to someone else ("symptomatic transmission") explains 0.8 of those 2 infections. So even if we could theoretically remove this fully, the number of infections would still grow exponentially. Infected people infect 0.9 people before they show symptoms ("pre-symptomatic transmission"). People without symptoms infect 0.1 people ("asymptomatic transmission"), while "environmental transmission", infections where people did not meet, accounts for 0.2 infections.<br />
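To make the arithmetic explicit, here is the same decomposition as a small Python sketch; the numbers are the ones quoted above, the variable names are my own:

```python
# Estimated new infections caused per case, split by transmission route,
# as quoted from the Science paper in the text above:
routes = {
    "symptomatic": 0.8,      # transmission after symptom onset
    "pre-symptomatic": 0.9,  # transmission before symptom onset
    "asymptomatic": 0.1,     # from people who never develop symptoms
    "environmental": 0.2,    # infections without a direct meeting
}

r_effective = sum(routes.values())
without_symptomatic = r_effective - routes["symptomatic"]

print(f"R = {r_effective:.1f}")
# Even perfectly isolating everyone at symptom onset only removes the
# symptomatic route; R stays above 1, so case numbers keep growing
# exponentially without further measures.
print(f"R without symptomatic transmission = {without_symptomatic:.1f}")
```

This is why the paper argues the pre-symptomatic contacts must be reached too, and reached quickly.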
<br />
So it is important to be fast. This is the advantage of the app over health departments trying to reach people by phone and email. Still <b>it is worthwhile to do both manual and app tracing</b>. A person from the health department calling you, telling you your friend or colleague is ill and explaining how quarantine works, is likely more effective than a notification on your phone. For this manual work to be effective we need to get the number of new infections down.<br />
<br />
<b>The speed of the testing is an important part of this strategy. </b>It will thus work better in countries like Germany with a strong testing program than in America, where much less testing is done, which in the short term makes the numbers look better, but does not make the situation better. The paper also studies how effective it would be if people with symptoms could warn people before being tested. This is naturally faster, but hostile actors could abuse such a system by triggering false warnings. To avoid this one can tie the app to test results: health care providers give the app user a code in case of a positive test.<br />
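To make this less abstract, here is a toy sketch of how a decentralised warning scheme with such verification codes could work. All class and function names are my own invention for illustration; real protocols (such as DP-3T) use cryptographic rotating identifiers and are far more careful than this:

```python
import secrets

class Phone:
    """A phone broadcasting random Bluetooth beacons and recording those it hears."""

    def __init__(self):
        self.my_beacons = []        # beacons this phone has broadcast
        self.heard_beacons = set()  # beacons received from nearby phones

    def broadcast(self):
        beacon = secrets.token_hex(16)  # random; rotates regularly in real apps
        self.my_beacons.append(beacon)
        return beacon

    def receive(self, beacon):
        self.heard_beacons.add(beacon)

class HealthAuthority:
    """Publishes beacons of confirmed cases; issues one-time codes for positive tests."""

    def __init__(self):
        self.valid_codes = set()
        self.published = set()

    def issue_code(self):
        """Handed out by the health care provider only with a positive test."""
        code = secrets.token_hex(8)
        self.valid_codes.add(code)
        return code

    def upload(self, code, beacons):
        """Accept an upload only with a valid, unused code: blocks false warnings."""
        if code not in self.valid_codes:
            return False
        self.valid_codes.discard(code)  # one-time use
        self.published.update(beacons)
        return True

def exposed(phone, authority):
    """Matching happens locally on the phone; no contact list leaves the device."""
    return bool(phone.heard_beacons & authority.published)

authority = HealthAuthority()
alice, bob = Phone(), Phone()
alice.receive(bob.broadcast())            # Alice and Bob were in proximity
code = authority.issue_code()             # Bob tests positive
authority.upload(code, bob.my_beacons)    # Bob shares his own beacons
print(exposed(alice, authority))          # True: Alice is warned
print(authority.upload(code, ["fake"]))   # False: the code was one-time
```

Note the design choice: only the infected person's own random beacons are uploaded, never the list of people they met, which is why such a scheme can be privacy-friendly.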
<br />
When the app warns someone that they have been in contact with an infected person, this person will have to go into quarantine. This will work better in a country with <b>paid sick leave</b> and when the government gives the warning of the app the same status as "sick certificates" from the doctor.<br />
<br />
The proximity detection by Bluetooth is far from perfect, so there will be false positives, but I would argue that that is still better than assuming everyone had contact with an infected person and putting all of society on lockdown.<br />
<br />
<b>Enough people will have to participate.</b> Fortunately it does not have to be everyone; apparently 60% is already enough. The privacy-invading app of Singapore only has <a href="https://www.lightbluetouchpaper.org/2020/04/12/contact-tracing-in-the-real-world/">an uptake of 10 to 15%</a>. I would personally not use such an app, I'd rather take a small risk of dying than give a government really dangerous powers, while I would be happy to use the one described above. So I would expect the adoption of a decent app to be higher.<br />
<br />
<b>One would install the app to help others</b>, so this may work less well in countries where the ruling class has pitted groups against each other to solidify their power. <a href="https://www.lightbluetouchpaper.org/2020/04/12/contact-tracing-in-the-real-world/">Ross Anderson from the UK is pessimistic</a> about the adoption of such an app. I am quite optimistic. But we will have to do the experiment. Do note when reading Anderson's second opinion that I feel he does not accurately describe how the app would work; part of my text above responds to such misunderstandings, which others may share.<br />
<br />
Prof. Dr. Christian Drosten, one of the main virologists in Germany who specializes in emerging viruses, thinks the app could work to reduce infections. In a recent podcast of the public radio channel NDR Info he <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-tracking-infections.html">talked about the app</a>:<br />
<blockquote>This is a study from the group of Christophe Fraser, certainly one of the best epidemiological modelers. It's a very interesting study, I think. It's published in Science. ... <br />
<br />
The main outcome of the study is that you are too late with a simple [manual] identification of cases and contact tracing, because the whole thing depends on identifying symptomatic patients. So it really comes down to the last day. ...<br />
<br />
And you can say in a nutshell, if the epidemic ran at the same speed as in the beginning in Wuhan ... then you could already lower R0 below one. This is amazing.<br />
<br />
There are a few caveats on that. It is then said that in reality the speed of propagation in Europe is already faster than it was at the beginning in Wuhan. There are certainly several reasons for this. Population density, behaviour of the populations, but also how far the infection has already progressed. This of course makes it even more difficult again, so that a higher degree of cooperation among the population is actually needed. ...<br />
<br />
You could combine such an app with other general factors that reduce the transmission of the infection, such as wearing masks. ...<br />
<br />
[The study models a situation where] there is no general lockdown. Companies can work, schools can teach, everything can work, but not for everyone at all times. There will come a time when you have this message on your mobile phone: "Please go into home quarantine." If you could then show this and your employer would say: Well, that's how it is, home quarantine this week. Then, I find, that is at least a very interesting model that one should not refuse to think about.</blockquote><br />
Drosten can naturally only judge the effectiveness. The Chaos Computer Club (CCC), Germany's most reliable technology activists, <a href="https://ccc.de/en/updates/2020/contact-tracing-requirements">support the technical concept</a>. It naturally depends on implementation details and while they will not recommend an app, they have promised to warn about bad apps. <b>I will listen to the advice of the CCC.</b><br />
<br />
The Electronic Frontier Foundation makes clear that a trace and track app can only be part of a package of measures and rightly emphasises <a href="https://www.eff.org/deeplinks/2020/04/challenge-proximity-apps-covid-19-contact-tracing">the importance of consent</a>.<br />
<blockquote><b>Informed, voluntary, and opt-in consent is the fundamental requirement </b>for any application that tracks a user’s interactions with others in the physical world. Moreover, people who choose to use the app and then learn they are ill must also have the choice of whether to share a log of their contacts. Governments must not require the use of any proximity application. Nor should there be informal pressure to use the app in exchange for access to government services. Similarly, private parties must not require the app’s use in order to access physical spaces or obtain other benefits.<br />
<br />
Individuals should also have the opportunity to turn off the proximity tracing app. Users who consent to some proximity tracking might not consent to other proximity tracking, for example, when they engage in particularly sensitive activities like visiting a medical provider, or engaging in political organizing.</blockquote>A German conservative politician wanted to force people to use the app. He did not have a good day on social media. Well deserved. Forcing people is the most effective way to destroy trust, and in times of Corona we need high compliance and thus solutions that have broad support.<br />
<br />
The German National Academy of Sciences, Leopoldina, recommends three measures to replace the lockdowns. 1) Such an app. 2) Massive testing. 3) Everyone wearing simple masks in public. (<a href="https://www.leopoldina.org/presse-1/nachrichten/ad-hoc-stellungnahme-coronavirus-pandemie/">In German.</a>)<br />
<br />
In the Netherlands, Arjen Lubach asks many questions on how such an app would be used. (<a href="https://www.youtube.com/watch?v=S2g0GiCHyJE">video in Dutch.</a>) Would your boss be allowed to force you to use such an app? Would this be a condition to use public transport? Would a restaurant be allowed to require customers to use an app? Would you be forced to share your random numbers when you find out that you are infected? Could you turn off the app? Could you ignore the warning of the app? <br />
<br />
I had not considered many of these questions because I considered it natural to opt each time for <b>the most free option and expect that that leads to many more people participating and thus to the largest effect</b>. Any force to use the app would only make sense on a societal level. A boss or a restaurant has no advantage from such a measure, just like the users themselves only help society, not themselves.<br />
<br />
My impression is that a main reason Germany got through this pandemic relatively unscathed is that the population was well informed, understood the danger, knew what to do and was very cooperative. Working from home is relatively easy in science, but I saw a huge share of people doing so well before there were any rules requiring it. Meetings were cancelled well before the limits for the maximum number of participants went down to that level.<br />
<br />
The alternative to so much compliance would be quite draconian rules and a lot of surveillance and enforcement, leading to many more violations of freedom and more economic damage. Thus I would expect that <b>the best way to make the app a success is to respect the privacy of the citizens and respect their autonomy to make the right decisions.</b> In countries where this is not possible, I am sceptical of the app helping much, unless they go full China, and most countries do not have the enormous repressive system that would require. <br />
<br />
Where I do agree with Arjen Lubach is that <b>we have to have this discussion now</b>. It is not a matter of using the app or not, but of how we want to use it. We should have that discussion before we introduce it. Just like we should discuss all other measures and whether and when they can be relaxed. Even if we do not know exactly when yet, we can already discuss what has priority: opening schools, shops, restaurants or car factories?<br />
<br />
<br />
<b>Disclaimer.</b> I am just a simple climate scientist, not a virologist, nor an epidemiologist or encryption specialist. I had wanted to stay out of this topic and not pretend to be an instant Corona specialist, but the dumb people do not show such restraint and only a few actual experts speak up. Those that do report that <a href="https://wissenschaftkommuniziert.wordpress.com/2020/04/06/ein-offener-brief-prof-drosten-halten-sie-bitte-kurs/">they find it unpleasant</a>. As a climate scientist I am unfortunately used to well-funded hate mobs <a href="https://www.youtube.com/watch?v=JFnhTo6Wd80&list=PL029130BFDC78FA33&index=14&t=0s">trying to bully others into silence</a> and will not let myself be intimidated. Plus a large part of this post is about societal issues, where everyone should participate, not just experts.<br />
<br />
<h2>Related reading</h2>The position paper of the EFF is long, but worthwhile: <a href="https://www.eff.org/deeplinks/2020/04/challenge-proximity-apps-covid-19-contact-tracing">The Challenge of Proximity Apps For COVID-19 Contact Tracing.</a><br />
<br />
If anyone would like to get involved, there is <a href="https://github.com/shankari/covid-19-tracing-projects">a list with COVID-19 contact tracing projects</a> around the world. <br />
<br />
German public radio channel NDR Info makes a daily podcast with virologist Christian Drosten, on my blog you can find <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-exit-strategy-masks-loss-smell-taste.html">translations of parts of these interviews</a>.<br />
<br />
The German CDC, the RKI, makes <a href="https://www.rki.de/DE/Content/InfAZ/N/Neuartiges_Coronavirus/Situationsberichte/Gesamt.html">wonderful informative daily situation reports</a>, in German and English.<br />
<br />
<h2>Corona Virus Update: exit strategy, masks, aerosols, loss of smell and taste (part 28)</h2><div style="float: right; margin-bottom: 10px; margin-left: 20px;"><a href="https://virologie-ccm.charite.de/metas/person/person/address_detail/drosten/"><img border="0" data-original-height="337" data-original-width="297" height="337" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOdNzlursTpq66xOJLNlrq8YTmLqfI26a0qEMCOjaHivkc_iwfkqEpC4ASQPbscXG07ySbm4QHJnJSqdcTfTN86CcaSmJA0kaZyXg0NiSAeW1ukRseHneHJP1rhB8WpZ8B5sLZ_gKW2Ac/s1600/drosten-christian-institut-fuer-virologie-charite_297x337.jpg" width="297" /></a><br />
<i>Prof. Dr. Christian Drosten</i></div>Today's podcast had a wide range of topics, from the proposal for an exit from the lockdown by the German National Science Academy, to face masks (which is one of their proposals), to transfer of the SARS-CoV-2 virus by droplets and by tiny airborne particles (aerosols), how long a patient is contagious and a new study on the loss of smell and taste as a symptom of COVID-19.<br />
<br />
The Corona Virus Update Podcast is an initiative of the German public radio channel NDR Info. Today science journalist Anja Martini does the interview with Prof. Dr. Christian Drosten. He is an expert on emerging viruses at the [[<a href="https://en.wikipedia.org/wiki/Charit%C3%A9">research hospital Charité</a>]]. Fittingly, the hospital was founded outside the city walls of Berlin three centuries ago to help fight an outbreak of the bubonic plague, which had already depopulated large parts of East Prussia. <br />
<br />
<h3>An exit strategy</h3>In the <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-tracking-infections.html">previous podcast</a> Drosten talked about <a href="https://science.sciencemag.org/content/early/2020/04/09/science.abb6936">a study</a>, which suggested that a mobile phone app, which can help trace back contacts of infected people, would be quite effective in reducing the spread of the virus. About as powerful as a lockdown.<br />
<br />
Three days later the German National Academy of Sciences, Leopoldina, recommended three measures, which could become an alternative to a lockdown. 1) This app, 2) more testing, 3) wearing simple masks in public. <br />
<br />
[EDIT: It is going viral in America that <a href="https://www.theverge.com/2020/4/10/21216715/apple-google-coronavirus-covid-19-contact-tracing-app-details-use">Apple and Google</a> will somehow help with such apps. That most of the work has already been done by governments does not get much coverage, while it is not that clear to me what Apple and Google will contribute. They say first an API. Maybe that helps to make different apps interoperable? In a second phase they want to integrate it in the OS. If that means that the data (also) goes to Apple and Google, that would be an efficient way to kill the project.] <br />
<br />
Leopoldina presents a model which suggests this would be enough to keep new infections per day close to zero in May, although they also show data from South Korea, which has a similar strategy, where there is still a decent amount of new infections going on. So the model does not capture reality fully.<br />
<br />
<b>Anja Martini:</b><br />
<blockquote>The Leopoldina, the National Academy of Science, <a href="https://www.leopoldina.org/presse-1/nachrichten/ad-hoc-stellungnahme-coronavirus-pandemie/">issued a second statement from its working group on the virus at the end of last week</a>. You are also part of this working group. ... It recommends - over and above the measures that we have already taken so far, in other words keeping our distance - hygiene and quarantine in the event of suspicion, isolation: Consistent wearing of masks, including in local public transport and at school, more tests, including random tests, and the use of cell phone data, which we have already discussed here. If this is done, the number of people infected by an infected person could, according to the calculations, be reduced to less than one by the middle or end of May. Even if, after Easter, more public life were to be gradually allowed again. That is cause for optimism for the time being, isn't it? Please explain this prognosis to us!</blockquote><b>Christian Drosten:</b><br />
<blockquote>Of course, one looks for ways to get out of the current measures. And an organisation like the Leopoldina, which is made up of scientists, also looks at the latest scientific data. Just last week we discussed <a href="https://science.sciencemag.org/content/early/2020/04/09/science.abb6936">a study published in "Science"</a> about the effects that can be expected from such mobile apps, i.e. mobile phone apps that allow much more detailed and faster case tracking. <br />
<br />
We can simply track a certain number of infected people at the local health departments. At some point, the capacity runs out. You can't make an infinite number of phone calls and contact an infinite number of contacts and tell them to stay home and so on. It just runs out at some point. A mobile app is not exhausted that quickly and it also follows up much faster. That's the one provision there.</blockquote><div style="float: right; margin-bottom: 10px; margin-left: 20px;"><a href="https://commons.wikimedia.org/wiki/File:Co-op_Cafeteria_detail,_Colleges_and_Universities_-_University_of_California_-_University_of_California,_Berkeley,_California._Open_air_barber_shop_during_influenza_epidemic_-_NARA_-_26428662_(cropped).jpg"><img border="0" data-original-height="480" data-original-width="619" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-ST1g9fwS7Mm6qJzk1iiWvqGzVGtqagzYc3HOm_oV9oFRb4MrcJxAp8nrBUBgfT6w3lsNlUWtT89LpbB1jGUk7h6E2sEABrLiMZlNoAVXzF2NrtmhCogFjz7RIhEAxVqDDJVJaxJJ8gU/s1600/Co-op_Cafeteria_detail%252C_Colleges_and_Universities_-_University_of_California_-_University_of_California%252C_Berkeley%252C_California._Open_air_barber_shop_during_influenza_epidemic_-_NARA_-_26428662_%2528cropped%2529.jpg" width="320" /></a></div>If <a href="https://science.sciencemag.org/content/early/2020/03/30/science.abb6936">the modelling study on the impact of this app</a> is right, this should do most of the work.<br />
<br />
So one can wonder why masks are additionally proposed by Leopoldina. My impression from previous podcasts is that Drosten is quite sceptical of masks. While there is evidence that they reduce the amount of viral material an infected person produces, there is not much evidence on how much they would contain the spread of the virus.<br />
<br />
Maybe that lack of strong evidence is why Drosten wonders whether people can be persuaded to wear the masks. I do not see much of a problem, but maybe I am too optimistic. Wearing a mask is a much smaller limitation than staying at home. And I recently came across this beautiful photo from California during the Spanish flu, where people are wearing masks at an outdoor barber shop. Another culture not used to masks that was willing to wear them when needed.<br />
<blockquote>You can achieve considerable increases if you add some general effects [additional measures] to this very special tracking via mobile apps. A general effect can be the wearing of masks if everyone does it. In our society, we certainly do not have the best starting conditions to let everyone wear masks. There will quickly be people who say they don't want to, they don't see the point or they can't do it. </blockquote><blockquote>We have currently, of course, an additional argument in public, namely: you cannot buy any masks at all, because there are none. That is why it is of course not very promising at first to consider what would happen if a general obligation to wear masks were to be imposed ad hoc? <br />
<br />
This is a relatively complicated phenomenon, ... to impose such a thing in a society where the whole thing is not culturally anchored and not trained. That is the one difficulty. It is of course taken into consideration in a forum like the Leopoldina, where social scientists, psychologists and so on are also represented. This is precisely why the totality of the expertise is represented; not only life scientists are in it, but also sociologists.</blockquote><br />
<h3>Types of masks</h3><b>Christian Drosten:</b><br />
<blockquote>We have hardly any scientific evidence that says that self-protection through simple masks works. Of course, there are much more complicated, elaborate masks for special wearers, i.e. for certain occupational groups, which also provide self-protection. <br />
<br />
But these masks have actually never been available in large numbers. They are not so easy to produce so fast, as far as I know. By the way, they are also not easy to wear for everyone. You have to imagine that here in medicine there are preliminary occupational medical examinations for employees who have to wear these very safe self-protection masks in their professional life. Not everyone is able to do this, for example, if there is any doubt, the medical profession must carry out lung function tests. And something like that cannot be recommended for the normal population.</blockquote>I am not sure I understand his claim on the simple masks: <br />
<blockquote>With these [simple] masks it is the case that there is no scientific evidence of a benefit for self-protection. There is, however, starting evidence, so far not very virus-specific, for the protection of others. But this of course presupposes that really everyone, everyone, everyone in society, in public life, must wear these masks. </blockquote>I would expect that when half of all people do it you get half of the effect. But maybe Drosten means that for this to help for your own protection everyone would have to do it. Also if only half do it, to help the others, one could expect the participation to drop. That is a kind of [[<a href="https://en.wikipedia.org/wiki/Public_goods_game">public goods game</a>]]. It could also be that he does not expect much of an effect and that half would thus really not be worth it.<br />
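My "half the people, half the effect" expectation amounts to assuming that source control scales linearly with participation. A back-of-the-envelope sketch of that assumption, where the mask efficacy value is purely illustrative and not from any study:

```python
def r_with_masks(r0, participation, source_control):
    """Effective reproduction number if a fraction `participation` of people
    wear masks that block a fraction `source_control` of what an infected
    wearer emits (source control only, no self-protection)."""
    return r0 * (1 - participation * source_control)

R0 = 2.0              # reproduction number without masks, as used above
SOURCE_CONTROL = 0.5  # assumed fraction of outgoing transmission a mask blocks

reduction_all = R0 - r_with_masks(R0, 1.0, SOURCE_CONTROL)   # everyone wears one
reduction_half = R0 - r_with_masks(R0, 0.5, SOURCE_CONTROL)  # half do
print(reduction_half / reduction_all)  # 0.5: half the wearers, half the effect
```

If masks also protected the wearer, the benefit would grow faster than linearly with participation, which may be what Drosten has in mind.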
<br />
<h3>Droplets and aerosols</h3>A large part of the podcast was about the difference between droplets with virus and aerosols. Droplets would be defined as being large enough to drop to the ground by gravity within a minute, while aerosols can stay in the air for hours. It was a long and nuanced discussion about evidence on how these particles are produced and removed, how infectious they are and how important they are.<br />
<br />
People are worried about the aerosols, about "airborne virus" because it means that you could be infected without having noticed someone coughing. But in the end the droplets are most important: "we are pretty sure that the vast majority of the viruses that are released in these diseases of the upper respiratory tract ... are these larger droplets - and they fall to the ground". So to focus on what is important, I only translated a small part:<br />
<br />
<b>Christian Drosten:</b><br />
<blockquote>These large droplets over five microns (and they can be much bigger, they can also be 100 micrometers, i.e. a tenth of a millimeter, so that you can really see them with the naked eye) - these are the droplets that we are talking about in a droplet infection. In other words, what you give off - which is part of a moist speech [when people spatter when they talk], for example, but also comes out when you cough or sneeze - and which falls to the ground within a radius of one and a half to two meters. <br />
<br />
In this research into the common cold, we are pretty sure that the vast majority of the viruses that are released in these diseases of the upper respiratory tract (i.e. the diseases that mainly occur in the throat and nose) are these larger droplets - and they fall to the ground. Much of our precautions and infection prevention considerations are based on this insight. <br />
<br />
Then there is something else, namely aerosols, whose particle size is less than five micrometers. For the experts it must be said that this is of course not a sharply defined size, and for an aerosol that really floats in the air and stays in the air longer, the actual droplets are much smaller, less than one micrometer in size. ...<br />
<br />
If I release such a droplet and it floats in the air in front of me, then it starts to dry and then it becomes smaller. The smaller it gets, the more likely it is that it will remain in the air for a long time. But at the same time there is another effect, namely when this droplet gets smaller and smaller, it will eventually be too small for the virus, and the virus will dry out and will no longer be infectious.</blockquote><br />
So on the one hand aerosols are potentially more problematic by staying in the air longer, on the other hand they are likely less contagious, while many studies only analyse whether virus is present, not whether the material is infectious. Reading such studies one should pay attention to this difference. <br />
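The divide between droplets that fall to the ground within a minute and aerosols that float for hours follows from simple physics. A rough sketch using Stokes' settling velocity for a sphere in still air; it ignores evaporation and air movement, so take it as an order-of-magnitude illustration only:

```python
def settling_time(diameter_m, fall_height_m=1.5):
    """Seconds a water droplet needs to fall through still air,
    using Stokes' settling velocity v = rho * g * d**2 / (18 * mu)."""
    rho = 1000.0   # density of water, kg/m^3
    g = 9.81       # gravitational acceleration, m/s^2
    mu = 1.8e-5    # dynamic viscosity of air, Pa*s
    v = rho * g * diameter_m ** 2 / (18 * mu)
    return fall_height_m / v

# A 100 micrometre droplet falls out of breathing height within seconds ...
print(round(settling_time(100e-6)))      # 5 seconds
# ... while a 5 micrometre particle needs about half an hour even in still air.
print(round(settling_time(5e-6) / 60))   # 33 minutes
```

Because settling time scales with one over the diameter squared, the sub-micrometre dried-out particles Drosten mentions would stay up for days, if they were still infectious.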
<br />
<h3>How long is someone contagious</h3>An <a href="https://www.medrxiv.org/content/10.1101/2020.03.29.20046557v1">interesting preprint studied</a> how much virus could be found in hospital rooms of COVID-19 patients.<br />
<br />
<b>Christian Drosten:</b><br />
<blockquote>Wipe samples were taken - in 30 different hospital rooms, from 30 different patients, all of whom had the disease, in a hospital in Singapore, from all kinds of surfaces - and tested for virus. <br />
<br />
By the way, I have to add here, in all these studies, especially the last study that we discussed first, and this one too, it is always only a viral detection of RNA and not of infectivity in the cell culture.</blockquote><b>Anja Martini:</b><br />
<blockquote>In other words, a virus that can be detected, but which possibly no longer infects anyone. </blockquote><b>Christian Drosten:</b><br />
<blockquote>Right, exactly. A desiccated virus, it still has the same amount of RNA and you can still detect it. None of this means anything directly about infectivity right now, it just means that virus has got there. <br />
<br />
And here it is the case that a lot of deposited RNA has already been found in these samples. In the floor samples, for example, more than half of the wipe samples were virus-positive, i.e. viral RNA could be detected - which suggests that the virus is deposited to a considerable extent, which favours the concept of coarser droplets. <br />
<br />
But then, something else and very important, I think: With these 30 patients the virus swab samples were only positive in the first week of symptoms. In the second week, when the patients were still definitely sick, the wipe samples were no longer positive. So no more virus settles on the surfaces, and accordingly there was no longer a significant virus concentration in the room air. </blockquote>Note, this is just one study. Decisions should be based on all available evidence and an uncertainty estimate. <br />
<br />
<h3>Infection via surfaces</h3>If I understand it right, when someone coughs in their hands and then shakes hands, that is seen as droplet transmission and not as transmission via a surface. This route is important and a reason for the advice not to cough in your hands and to wash them regularly.<br />
<br />
<b>Anja Martini:</b><br />
<blockquote>The insight we have from this is that we are infected via the air we breathe, via coughing, via aerosols, and not, as is a question much asked by listeners here, what is actually the case with infection via surfaces?</blockquote><b>Christian Drosten:</b><br />
<blockquote>Infection via surfaces themselves has been modelled, for example, in <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-tracking-infections.html">the study by Christophe Fraser that we discussed last week</a>. He comes to the conclusion that perhaps ten percent of all transmissions could function via surfaces at all.<br />
<br />
Many people I talk to don't really believe in the relevance of surface transmission. ...<br />
<br />
We do not currently assume that this virus is significantly transmitted via surfaces. The current measures to prevent transmission are aimed at preventing both droplet and airborne transmission, especially - to say it again - droplet transmission. And the studies that have now been discussed here, that have now been published, do not suggest - even if small-droplet aerosols have been detected - that this mechanism would be the main focus.</blockquote><b>Anja Martini:</b><br />
<blockquote>This means, once again asking from the consumer's point of view, ... can we actually neglect surface disinfectants in our private lives?</blockquote><b>Christian Drosten:</b><br />
<blockquote>I am almost sure that it is not worthwhile to pay a lot of attention in the household to treating all kinds of surfaces with disinfectant. In a hospital, of course, this may be different. ...</blockquote>My impression is that this was a scientific "may", which mostly means "will". We sometimes talk in a somewhat weird way.<br />
<blockquote>Images from television, for example, in China, where tanker lorries are driving through the streets with disinfectants, I think that has more of a psychological effect on the population than a real effect in curbing the transmission of infection.</blockquote>I love those videos of teams with disinfectant sprayers walking through the streets as if they could be eye to eye with a terrorist any second. <br />
<br />
<h3>Loss of smell and taste</h3><b>Anja Martini:</b><br />
<blockquote>What does a possible disease or even an infection with the virus actually do to the sense of taste and smell? This was already something of an observation in the press. There was also a Belgian study. Now there is one from Iran based on an online questionnaire.</blockquote><b>Christian Drosten:</b><br />
<blockquote>Yes, I think it's a very interesting study. There are already clear indications. In the Munich patient observation we have already seen a loss of the sense of taste and smell in almost half of the cases. So this has already been published. <br />
<br />
There is now even a functional study that has just been published - and it says that it is a very specific type of cells in the olfactory system, in the nose, in the olfactory bulb, that is actually infected and affected by this virus. <br />
<br />
But that is not what we want to discuss here. Interestingly, it is a study from Iran. I think it is simply great to see that this kind of useful research also comes from a country that is highly affected and where we all know that the data situation is unclear. The science there has to work in a difficult system and also has difficulties, for example, to get certain reagents. But here comes a very interesting study, from the preprint realm, to the public. <br />
<br />
Iranian scientists conducted a survey - also supported by apps and the Internet - and reached 15,000 people with this survey. Of these, 10,000 actually had a loss or impairment of the sense of smell. In fact, 76 percent of these 10,000 patients - an impressively large number - had a sudden loss. <br />
<br />
You can distinguish between saying: suddenly I could not smell anything anymore, and saying: well, I just had a cold. And 75 percent, a similarly high rate, actually had influenza-like symptoms. So not only a runny nose, but also a noticeable fever and so on. This was clarified by questionnaires. <br />
<br />
83 percent also had a loss of taste, which was also described, also in the Munich patients, so that a loss of taste is also involved. They could no longer taste or smell anything. ... <br />
<br />
And if I suddenly couldn't smell anything anymore in my everyday life, I would stay at home and try to clarify what is going on with me at the moment, in the current situation.</blockquote><br />
<br />
<h2>Other podcasts</h2>Part 27: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-tracking-infections.html">Corona Virus Update: tracking infections by App and do go outside</a><br />
<br />
Part 23: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-funding-publishing-arrival-endemic.html">Corona Virus Update: need for speed in funding and publication, virus arrival, from pandemic to endemic</a><br />
<br />
Part 22: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-scientific-studies-cures-covid-19-Remdesivir-Chloroquin-Favipiravir-camostat.html">Corona Virus Update: scientific studies on cures for COVID-19.</a> <br />
<br />
Part 21: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-tests-tests-tests.html">Corona Virus Update: tests, tests, tests and how they work.</a><br />
<br />
Part 20: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-Case-tracking-teams-infections-Germany-Infectiousness.html">Corona Virus Update: Case-tracking teams, slowdown in Germany, infectiousness.</a><br />
<br />
Part 19: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-christian-drosten-outside-face-masks-children.html">Corona Virus Update with Christian Drosten: going outside, face masks, children and media troubles.</a><br />
<br />
Part 18: <a href="https://variable-variability.blogspot.com/2020/03/german-virologist-Christian-Drosten.html">Leading German virologist Prof. Dr. Christian Drosten goes viral</a>, topics: Air pollution, data quality, sequencing, immunity, seasonality & curfews.<br />
<br />
<h2>Related reading</h2><a href="https://www.ndr.de/nachrichten/info/coronaskript162.pdf">This Corona Virus Update podcast and its German transcript.</a> Part 28.<br />
<br />
<a href="https://www.ndr.de/nachrichten/info/Coronavirus-Update-Die-Podcast-Folgen-als-Skript,podcastcoronavirus102.html">All podcasts and German transcripts of the Corona Virus Update.</a><br />
<br />
<a href="https://www.nature.com/articles/s41591-020-0843-2">Respiratory virus shedding in exhaled breath and efficacy of face masks</a><br />
<br />
<a href="https://www.medrxiv.org/content/10.1101/2020.03.29.20046557v1">Detection of Air and Surface Contamination by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) in Hospital Rooms of Infected Patients</a><br />
<br />
<a href="https://www.medrxiv.org/content/10.1101/2020.03.23.20041889v1">Coincidence of COVID-19 epidemic and olfactory dysfunction outbreak</a><br />
<br />
<a href="https://www.nejm.org/doi/full/10.1056/NEJMc2004973">Aerosol and Surface Stability of SARS-CoV-2 as Compared with SARS-CoV-1</a><br />
<br />
A paper from 2004 that shows that even while normally breathing out some people produce tiny droplets: <a href="https://www.pnas.org/content/101/50/17383">Inhaling to mitigate exhaled bioaerosols</a><br />
<br />
<a href="https://www.nap.edu/read/25769/chapter/1">A letter from the American Academy of Sciences on droplets and aerosols.</a><br />
<br />
News article in Science Magazine on the relevance of small droplets: <a href="https://www.sciencemag.org/news/2020/04/you-may-be-able-spread-coronavirus-just-breathing-new-report-finds">You may be able to spread coronavirus just by breathing, new report finds</a><br />
<br />
Victor Venemahttp://www.blogger.com/profile/02842816166712285801noreply@blogger.com1tag:blogger.com,1999:blog-9093436161326155359.post-220900818952633562020-04-07T07:31:00.000+01:002020-04-11T03:18:59.242+01:00Corona Virus Update: tracking infections by App and do go outside (part 27)<div style="float: right; margin-bottom: 10px; margin-left: 20px;"><a href="https://virologie-ccm.charite.de/metas/person/person/address_detail/drosten/"><img border="0" data-original-height="337" data-original-width="297" height="337" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOdNzlursTpq66xOJLNlrq8YTmLqfI26a0qEMCOjaHivkc_iwfkqEpC4ASQPbscXG07ySbm4QHJnJSqdcTfTN86CcaSmJA0kaZyXg0NiSAeW1ukRseHneHJP1rhB8WpZ8B5sLZ_gKW2Ac/s1600/drosten-christian-institut-fuer-virologie-charite_297x337.jpg" width="297" /></a><br />
<i>Prof. Dr. Christian Drosten</i></div>This edition of the Corona Virus Update Podcast was about how effective an App could be that monitors with whom we have recently been in close contact and warns those contacts if we develop symptoms or test positive.<br />
<br />
Even if the App is voluntary and only a part of the population uses it, with again only a part heeding its warning to stay inside, it would have a drastic effect, comparable to a lockdown.<br />
<br />
In addition, informatics researchers think this tracking can be done in a way that respects the users' privacy. Only who was close to you for a longer time is stored (determined via the short-range Bluetooth transmitter of your mobile phone). No locations are stored; if all data were on a national server, the contact information would allow very accurate estimates of one's position, so the contacts are stored on your own phone instead. Only when someone reports being ill does this information go to a national server, where all other users can anonymously check this encrypted information. This seems to be a powerful way to fight the virus and would allow other restrictions to be released. If they can deliver on the privacy claims, if the software is open source so we can check, and if the use of the App is voluntary, I would be in.<br />
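The privacy design described above can be illustrated with a toy model (my own illustration, not the protocol of any actual App; the class and names are made up): phones exchange random pseudonyms over Bluetooth and keep them locally; only an infected user's own pseudonyms are published, and the matching happens on each user's phone.<br />
<br />

```python
import secrets

class Phone:
    """Toy model of the decentralized scheme: contacts stay on the
    device; only an infected user's own broadcast IDs are published."""

    def __init__(self):
        self.own_ids = []    # rotating pseudonyms this phone broadcast
        self.heard_ids = []  # pseudonyms heard nearby via Bluetooth

    def broadcast(self):
        # A fresh random pseudonym; no name or location is attached.
        pseudonym = secrets.token_hex(16)
        self.own_ids.append(pseudonym)
        return pseudonym

    def hear(self, pseudonym):
        # Stored only locally, on this phone.
        self.heard_ids.append(pseudonym)

    def report_infection(self):
        # Only the infected person's own pseudonyms go to the server.
        return self.own_ids

    def check_exposure(self, published_ids):
        # Matching happens on the user's own phone, not on the server.
        return any(p in self.heard_ids for p in published_ids)

alice, bob, carol = Phone(), Phone(), Phone()
bob.hear(alice.broadcast())           # Bob was near Alice
server = alice.report_infection()     # Alice tests positive
assert bob.check_exposure(server)         # Bob is warned
assert not carol.check_exposure(server)   # Carol never met Alice
```

The server never learns who met whom; it only relays the pseudonyms of people who reported being ill.<br />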
<br />
[EDIT: It is possible to deliver on the privacy promises, <a href="https://www.ccc.de/en/updates/2020/contact-tracing-requirements">the CCC writes</a>: "With the help of these technologies, it is possible to unfold the epidemiological potential of contact tracing without creating a privacy disaster." The CCC are Germany's prime technology experts and privacy activists. I will follow their advice.<br />
<br />
This <i>contact tracing</i> App should not be confused with the <i>Corona Data Donation</i> App ("<a href="https://corona-datenspende.de/">Corona-Datenspende-App</a>") of the Robert Koch-Institut, which was launched today for Android and iOS. This voluntary App uploads data from your fitness tracker to estimate statistically, from temperature or pulse, how many people are ill. This is a lot of private information, but these people have already uploaded all of it to a corporation. It is only pseudonymous, and it only needs to be used by a much smaller part of the population to be useful for monitoring the situation and for scientific research. As long as participants <a href="https://www.bfdi.bund.de/DE/Home/Kurzmeldungen/2020/09_Statement-Datenspende-App-RKI.html?nn=5216976">give informed consent and data can be deleted again</a> this is fine.]<br />
<br />
To compute how effective the App would be, one needs estimates of how the virus is transmitted. This study computed that without intervention one person infects two others. (Other studies seem to be in the range of two to three.) About half of these infections happen after symptoms appear and about half before. Because of this it is not enough to only isolate people with symptoms.<br />
<br />
The other two fractions are much smaller: on average one infected person infects 0.2 others via contact infections (environmental transmission), and people who never show any symptoms are responsible for 0.1 infections (asymptomatic transmission). <br />
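As a toy sanity check (my own arithmetic, not from the study), the component values Drosten quotes later in the interview can simply be added up. Note that with the components rounded to one decimal, the pre-symptomatic share comes out at 45 rather than the 46 percent the study reports from unrounded values.<br />
<br />

```python
# Contributions to the basic reproduction number R0, as quoted
# in the interview (rounded values from the Fraser group's study).
pre_symptomatic = 0.9  # infections before the infector shows symptoms
symptomatic     = 0.8  # infections after symptoms appear
environmental   = 0.2  # contact/surface (environmental) transmission
asymptomatic    = 0.1  # infectors who never develop symptoms

r0 = pre_symptomatic + symptomatic + environmental + asymptomatic
print(round(r0, 1))  # 2.0: one person infects two others on average

# Share of transmission that happens before symptoms: about 45
# percent here; the study reports 46 percent from unrounded values.
share = pre_symptomatic / r0
print(round(share * 100))  # 45
```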
<br />
A part of the interview I skipped was about Trump's favorite drug, <a href="https://theconversation.com/a-small-trial-finds-that-hydroxychloroquine-is-not-effective-for-treating-coronavirus-135484">hydroxychloroquine</a>, and in particular about a new medRxiv preprint on this drug. There is no clear evidence at the moment, so this is mainly interesting in the light of Trump pushing it so hard, less so from a science point of view.<br />
<br />
At the end is a question on what people can do themselves to strengthen their immune system: go out and do sports. Going out while keeping your distance is not an infection danger.<br />
<br />
Prof. Dr. Christian Drosten specialises in emerging viruses and developed the WHO test; <a href="https://variable-variability.blogspot.com/2020/03/german-virologist-Christian-Drosten.html">more on his background</a>. In this episode science journalist Anja Martini asks the questions.<br />
<br />
<h3>The epidemiological model behind the App</h3><br />
<h3></h3><b>Anja Martini:</b><br />
<blockquote>93 percent of Germans are in favour of the restrictions, i.e. the social distancing rules and staying at home. This is the result of a [high quality] survey. When it comes to setting up a mobile phone App, Germans are divided. <br />
<br />
Mr. Drosten, I remember at the beginning of the podcast [series] we talked briefly about apps in China and South Korea that analyse the movement data of mobile phone users in order to find possible infected persons. At that time you said that this was probably difficult in Germany, and I agreed with you.<br />
<br />
The situation has now changed. In other words, we are now talking about apps that work anonymously via Bluetooth and that work on a voluntary basis. There is already a first study from Oxford, involving scientists across Europe. What do you make of it?</blockquote><b>Christian Drosten:</b><br />
<blockquote>Yes, this is <a href="https://science.sciencemag.org/content/early/2020/03/30/science.abb6936">a study from the group of Christophe Fraser</a>, certainly one of the best epidemiological modelers. It's a very interesting study, I think. It's published in Science. It is about first of all calculating a much better, more accurate epidemiological model, which is simply much more fine-grained, where more information goes into it than was known until recently. The fact is that the scientific literature provides more and more data that can be evaluated and then fed into such models. <br />
<br />
The beginning of this study is actually made from the observation that there are now more and more descriptions of transmission pairs in the literature and therefore the [[<a href="https://en.wikipedia.org/wiki/Serial_interval">serial interval</a>]] of this infection can actually be better determined. So how long does it take from symptom to symptom or from infection to infection. With symptom to symptom one speaks of the "clinical onset serial interval", with the other - from infection to infection - of the serial interval. And what you actually need is the serial interval itself. But that is all relatively difficult to determine exactly. That is why one can at least make a good approximation via the "clinical onset serial interval". <br />
<br />
This can then be derived again, also from literature reports, and that is how the study starts. 40 pairs of transmission from the literature are evaluated, they feed an already existing mathematical model to derive certain parameters and certain proportions of the overall transmission activity. <br />
<br />
The [[<a href="https://en.wikipedia.org/wiki/Basic_reproduction_number">basic reproduction number R0</a>]], has been recalculated here as two. That is a relatively low value, if you look at what other analyses have found before. In some cases it was more like two and a half.</blockquote><br />
<b>Anja Martini:</b><br />
<blockquote>So [the study computes that] one person infects two others.</blockquote><br />
<b>Christian Drosten:</b><br />
<blockquote>Right. Now we have the option of decomposing these transfers into parts. ... The asymptomatic [part] means a carrier that never shows symptoms. And pre-symptomatic [part] of course means that it is transmitted before the carrier has symptoms. But you can find this carrier later, because he gets symptoms then. So of course you can still identify the contact patients later. This is a consideration that will be discussed later in the publication. <br />
<br />
Let's first give the values that are derived: pre-symptomatic 0.9, i.e. a part of 0.9 of the R0 value of two, symptomatic transmission has a part of 0.8, and then environmental transmission 0.2, asymptomatic transmission 0.1. If you add these four values together, you get two again.<br />
<br />
If you now look at the figures, you will see that the overall pre-symptomatic transmission share is 46 percent of the total transmission activity. It is a similar figure to what we discussed a few days ago from another working group, from another paper.<br />
<br />
The value R0 of two is apparently good news. Because when we have an R0 of two [rather than the higher other literature estimates], then there are fewer [transmissions] that must be prevented to reduce the R0 below one and thus bring the epidemic to a standstill. <br />
<br />
However, if you now realize that 46 percent of all this transmission activity takes place before the symptoms, it will of course then again be very difficult to reduce these transmissions. Because you can only isolate symptomatic patients. These considerations are now being fed into an interesting calculation that wants to find out: What can actually be done with certain interventions to detect an infected person? <br />
<br />
How long does it take to detect it? And how many have been infected by the infected person in this time, because 46 percent of the transmission happens before the symptoms start? And because it also takes some time before a diagnosis is made after the onset of symptoms and then for the contacts to be identified. <br />
<br />
A very important number plays a role in this, namely the serial interval of the infection, which has been recalculated here, which actually tells us: Even if one isolates immediately at the beginning of the symptoms, i.e. immediately removes a symptomatic person from the transmission situation, then not only has he already infected people, but these people who are subsequently infected are themselves also already contagious at the time when the first patients show the symptoms. <br />
<br />
We have actually already observed something like this in the Munich case tracking study and were surprised by it. But now there is, in principle, quantitative evidence, which really backs up the whole thing with numbers and rates, that this is actually happening. </blockquote><br />
<h3>Difference between manual and automatic tracking</h3><b>Christian Drosten:</b><br />
<blockquote>The main outcome of the study is that you are too late with a simple [manual] identification of cases and contact tracing, because the whole thing depends on identifying symptomatic patients. So it really comes down to the last day. ...<br />
<br />
In other words, here it is calculated in a formally very correct way and very robustly on the very latest figures, that from a certain point in time of the epidemic, targeted diagnostics plus case tracking plus isolation of contacts cannot stop this epidemic. This is no longer possible. <br />
<br />
What you can do to stop such an epidemic is to simply do a lockdown. Then you don't have to track cases, everyone will be at home. You can of course do a combination of measures where you say there is a lockdown, which is a bit milder. Like the contact ban. ...<br />
<br />
So here [the study] conceives a hypothetical App. This App can record the symptoms at the onset of the symptoms - so you just type it into your mobile phone: I have symptoms now. Then the App says: Okay, I've already sent the data you sent to the lab. That means the App can already do the registration for laboratory diagnostics. In principle, you can be diagnosed immediately - the App itself triggers the diagnostic process. <br />
<br />
Then the information about this diagnosis, if it is positive, will be included. And at that moment, the App can start to trace back which other mobile phones were in your vicinity. Of course you can also tell how long the contact should be and so on. ... And these holders of the other mobile phones are then informed. "You were in contact with a patient during the infectious period of that patient".<br />
<br />
And you can say in a nutshell: if the epidemic ran at the same speed as in the beginning in Wuhan, and if 60 percent of the case identifications via the App were successful (which means, you have to realize, that 60 percent of the population would install such an App and then again about 60 percent of those who are informed that they should stay at home actually stay at home), then you could already lower R0 below one. This is amazing. <br />
<br />
There are a few caveats on that. It is then said that in reality the speed of propagation in Europe is already faster than it was at the beginning in Wuhan. There are certainly several reasons for this. Population density, behaviour of the populations, but also how far the infection has already progressed. This of course makes it even more difficult again, so that a higher degree of cooperation among the population is actually needed. ...<br />
<br />
But it is achievable, it is an achievable goal to use such apps to bridge the inevitable time delays in reporting activity. To communicate the essential information "You have been in contact with an infected person, you should get tested now", and the time you gain there, that would actually do as much or almost as much as a real lockdown - under this mathematical model. <br />
<br />
Then there are a few follow-up effects and a few possible options. One possibility, for example, is that in a "high incidence situation" - a place where there's a very serious epidemic going on, or at a time when there's a wave of infections - you could bring even more speed into the whole system by saying: we're going to leave out this testing stuff. We're reprogramming this App now. If I check this box now, I have symptoms, the App doesn't tell me: "Okay, I've already signed you up with the lab for testing," but the App says: "Okay, we see you as positive now."</blockquote><br />
<b>Anja Martini:</b><br />
<blockquote>Then I'll stay home.</blockquote><br />
<b>Christian Drosten:</b><br />
<blockquote>Right. Anything symptomatic is now defined as positive, without testing. This is, of course, an intervention measure: the criterion is tightened.</blockquote>
<h3>Combining the App with other measures</h3><b>Christian Drosten:</b><br />
<blockquote>Of course you have to say that you could combine such an App with other general measures that reduce the transmission of the infection, such as wearing masks. This is of course not included here, because we do not know exactly how much wearing masks could reduce the overall transmission activity if everyone wore a mask; there are no estimates of the numbers. But it is conceivable that this combination with masks, if everyone in society wears them and if they have an effect, would add to such a finely controlled App. <br />
<br />
And that is a real prospect. In this public discussion, which is of course already going on at the moment with some desperation: How do we exit these measures? And what do we do next? <br />
<br />
I'm fascinated by the thought that such an App, especially if many would participate, would provide us with an instrument to achieve a completely different subtlety of control and to be able to say that normal life can go on. <br />
<br />
There is no general lockdown. Companies can work, schools can teach, everything can work, but not for everyone at all times. There will come a time when you have this message on your mobile phone: "Please go into home quarantine." If you could then show this, your employer would say: well, that's how it is, home quarantine this week. Then, I find, that is at least a very interesting model one should not refuse to think about.</blockquote>At the end of the interview Drosten comes back to the App and emphasises another advantage:<br />
<blockquote>And to think about such smarter measures that are really feasible and which, by the way, can even be implemented in poor countries, where lockdown does not work the same way, but where everyone still has a mobile phone in their pocket. Of course, we must think about this and set an example.</blockquote><br />
<h3>What can you do for your immune system?</h3><b>Anja Martini:</b><br />
<blockquote>Quite a lot of people also ask themselves again and again: Can we do more? For example, can we do something for our immune system and build it up? Maybe vitamin C, vitamin D. You got any ideas? Can you do that? Running?</blockquote><b>Christian Drosten:</b><br />
<blockquote>So of course it is always good to have a good immune system. And of course, it's always good to be fit as a fiddle. Surely it's not the case that you will immediately be infected by running in the park, just because you encounter other people. That's certainly not something to worry about, going outside and running. This, I think, one can recommend. <br />
<br />
But that's where it stops.<br />
<br />
What you can say is to stay away from people who might be infected. Right now that would just be anybody you meet, for example when you go shopping or something. <br />
<br />
There is this rule in the USA that says: six feet, six seconds. So six feet apart and six seconds of contact, you should take that as a rule. So that you keep this minimum distance and don't stay close to somebody for so long. That's probably a good rule of thumb when moving around in public places.</blockquote><br />
<br />
<h2>Other podcasts</h2>Part 28: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-exit-strategy-masks-loss-smell-taste.html">Corona Virus Update: exit strategy, masks, aerosols, loss of smell and taste.</a><br />
<br />
Part 26: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-on-vaccines.html">Corona Virus Update on Vaccines: clinical trials, various types, for whom and when.</a><br />
<br />
Part 23: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-funding-publishing-arrival-endemic.html">Corona Virus Update: need for speed in funding and publication, virus arrival, from pandemic to endemic</a><br />
<br />
Part 22: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-scientific-studies-cures-covid-19-Remdesivir-Chloroquin-Favipiravir-camostat.html">Corona Virus Update: scientific studies on cures for COVID-19.</a> <br />
<br />
Part 21: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-tests-tests-tests.html">Corona Virus Update: tests, tests, tests and how they work.</a><br />
<br />
Part 20: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-Case-tracking-teams-infections-Germany-Infectiousness.html">Corona Virus Update: Case-tracking teams, slowdown in Germany, infectiousness.</a><br />
<br />
Part 19: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-christian-drosten-outside-face-masks-children.html">Corona Virus Update with Christian Drosten: going outside, face masks, children and media troubles.</a><br />
<br />
Part 18: <a href="https://variable-variability.blogspot.com/2020/03/german-virologist-Christian-Drosten.html">Leading German virologist Prof. Dr. Christian Drosten goes viral</a>, topics: Air pollution, data quality, sequencing, immunity, seasonality & curfews.<br />
<br />
<h2>Related reading</h2><a href="https://www.ndr.de/nachrichten/info/coronaskript160.pdf">This Corona Virus Update podcast and its German transcript.</a> Part 27.<br />
<br />
<a href="https://www.ndr.de/nachrichten/info/Coronavirus-Update-Die-Podcast-Folgen-als-Skript,podcastcoronavirus102.html">All podcasts and German transcripts of the Corona Virus Update.</a><br />
<br />
The study in Science Magazine on the impact an App could have: <a href="https://science.sciencemag.org/content/early/2020/03/30/science.abb6936">Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing</a><br />
<br />
Victor Venemahttp://www.blogger.com/profile/02842816166712285801noreply@blogger.com0tag:blogger.com,1999:blog-9093436161326155359.post-43202066578014646422020-04-06T07:07:00.000+01:002020-04-11T03:22:11.628+01:00Corona Virus Update on Vaccines: clinical trials, various types, for whom and when (part 26)<div style="float: right; margin-left:20px; margin-bottom:10px;"><a href="https://virologie-ccm.charite.de/metas/person/person/address_detail/drosten/" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOdNzlursTpq66xOJLNlrq8YTmLqfI26a0qEMCOjaHivkc_iwfkqEpC4ASQPbscXG07ySbm4QHJnJSqdcTfTN86CcaSmJA0kaZyXg0NiSAeW1ukRseHneHJP1rhB8WpZ8B5sLZ_gKW2Ac/s1600/drosten-christian-institut-fuer-virologie-charite_297x337.jpg" data-original-width="297" data-original-height="337" width="297" height="337"/></a><br clear="all"><i>Prof. Dr. Christian Drosten</i></div>This edition of the Corona Virus Update Podcast with leading German virologist Christian Drosten was all about vaccines. How can we speed up the development of a vaccine, how do the various types of vaccines work and how fast can they be produced, who would get the first doses available and when will vaccines be available?<br />
<br />
The development of vaccines is a race against time. In an interview with Trevor Noah <a href="https://www.youtube.com/watch?v=iyFT8qXcOrM">Bill Gates explained that the USA is building 7 manufacturing plants for 7 possible vaccines</a>, knowing that somewhere along the way they will focus on 2 of those 7 candidates and that thus 5 plants will never be used.<br />
<br />
Large parts of this interview were about fundamentals. This was really interesting, and I would almost go to the library to get a textbook on vaccines, but I did not understand much of it well enough to translate it with confidence. So while probably still quite nerdy, this post is mostly about more practical matters.<br />
<br />
<h3>Shortcut: use existing backbone vaccine system</h3><b>Korinna Hennig:</b><br />
<blockquote>Today we want to tackle the big issue of vaccines, which is a complicated and convoluted one. We actually have a strange situation. The development of vaccines has never been as fast as it is at present. The USA have already reported the first tests on volunteers. And yet all of this is still too slow - in terms of the virus - because several longer phases of clinical testing are prescribed. <br />
<br />
Two weeks ago, you said here in the podcast that we need shortcuts for vaccine approval. Before we get into the big issue of "What is happening? What are the vaccine candidates aiming at?" I would still like to ask, in a very abstract way: At what point in the long process is such a shortcut even conceivable?</blockquote><b>Christian Drosten:</b><br />
<blockquote>This shortcut is not only conceivable, but has already been envisaged for some time. For example, what you can do is to use so-called vectors, vaccine vectors that we already know. ... We sometimes speak of the backbone of the vaccine. ... for example one that works well, which is MVA, which is [[<a href="https://en.wikipedia.org/wiki/Modified_vaccinia_Ankara">Modified Vaccinia Ankara</a>]]. This is a variant of the vaccinia virus, which was used for smallpox vaccination in the past, and is an extremely well tolerated vaccine carrier. Proteins or antigens from the new coronavirus can now be integrated into this system, which can then be applied to humans, and one gets an immune response to these proteins of the new coronavirus. <br />
<br />
But for this carrier system, and this also applies to some other carrier systems, a great deal of safety data is known from other diseases for whose vaccines these carrier systems have also been used. In other words, we know exactly and do not necessarily have to repeat everything in this emergency situation, such as how laboratory animals react to it. For example, how the basic solution of the vaccine is tolerated and so on. Many things, including pharmacokinetic issues. For example, how is this distributed in the muscle when the vaccine is injected into the muscle? <br />
<br />
All these things have already been resolved. It is absolutely not to be expected that this marginal change of such a known carrier system, due to the adaptation to this other virus, will lead to relevant differences in important places. Because in this case one has experience with MVA for the MERS virus, and with other carrier vehicles, i.e. with other vectors, one also has experience for other vaccination targets, for other diseases. These are then only very minor adjustments.</blockquote><br />
<h3>How to infect human volunteers</h3>As helpful background information, I have added <a href="https://en.wikipedia.org/wiki/Clinical_trial">the phases of the clinical trials for drugs</a> in the table below, which I "borrowed" from our friends at Wikipedia. In the case of vaccines you do not only need to test the drug, but also expose the volunteers to a potentially dangerous virus. How to do this in a realistic and safe way is not trivial. <br />
<br />
(As an aside, it is interesting that Wikipedia uses Roman numerals for the phases and then includes an Arabic <a href="https://en.wikipedia.org/wiki/Roman_numerals#Zero">zero</a>.)<br />
<br />
<table border="1"><tbody>
<tr> <th>Phase</th> <th>Aim</th> <th>Notes<br />
</th></tr>
<tr> <td>0</td> <td><a href="https://en.wikipedia.org/wiki/Pharmacodynamics" title="Pharmacodynamics">Pharmacodynamics</a> and <a href="https://en.wikipedia.org/wiki/Pharmacokinetics" title="Pharmacokinetics">pharmacokinetics</a> in humans</td> <td>Phase 0 trials are optional first-in-human trials. Single subtherapeutic doses of the study drug or treatment are given to a small number of subjects (typically 10 to 15) to gather preliminary data on the agent's pharmacodynamics (what the drug does to the body) and pharmacokinetics (what the body does to the drugs). For a test drug, the trial documents the absorption, distribution, metabolization, and removal (excretion) of the drug, and the drug's interactions within the body, to confirm that these appear to be as expected.<br />
</td></tr>
<tr> <td>I</td> <td>Screening for safety</td> <td>Often are first-in-person trials. Testing within a small group of people (typically 20–80) to evaluate safety, determine safe dosage ranges, and identify <a href="https://en.wikipedia.org//wiki/Side_effect" title="Side effect">side effects</a>.<br />
</td></tr>
<tr> <td>II</td> <td>Establishing the preliminary efficacy of the drug, usually against a placebo</td> <td>Testing with a larger group of people (typically 100–300) to determine efficacy and to further evaluate its safety.<br />
</td></tr>
<tr> <td>III</td> <td>Final confirmation of safety and efficacy</td> <td>Testing with large groups of people (typically 1,000–3,000) to confirm its efficacy, evaluate its effectiveness, monitor side effects, compare it to commonly used treatments, and collect information that will allow it to be used safely.<br />
</td></tr>
<tr> <td>IV</td> <td>Safety studies during sales</td> <td>Postmarketing studies delineate risks, benefits, and optimal use. As such, they are ongoing during the drug's lifetime of active medical use.</td></tr>
</tbody></table><br />
<b>Korinna Hennig:</b><br />
<blockquote>We have already talked in this podcast about how important clinical testing is, that you first test for safety and intolerance, that you test it in animal experiments, that you then go to small groups and only in phase three do you do large cohort studies with many volunteers. Is it possible to run any of these processes in parallel?</blockquote><b>Christian Drosten:</b><br />
<blockquote>Yeah, it is already such that the preclinical evaluation can be shortened considerably because it is already known that these vaccines are very well tolerated. And that we will then carry out a safety study in a group of volunteers, in humans. If the vaccines are well tolerated, then it is possible to expand the vaccine relatively quickly, i.e. after an initial efficacy study, the trials can be expanded relatively quickly. <br />
<br />
Then, of course, there is always the question - and this is also being discussed to some extent at the moment, as there are many commentaries on it in the medical literature - of how to deal with a situation where people would say, for example: "There is a crisis group of volunteers, they are all healthy and they would be willing to help. In principle, they would roll up their sleeves and say: 'Vaccinate me and then give me the real virus in my throat so that I can get infected, so that the vaccine can then prove that it has protected me.'" <br />
<br />
This simple consideration, the heroic volunteer - how to deal with it is not that simple. Such a person, who above all means well and would like to see an approved vaccine quickly, is not in a position to judge this for themselves. So there is a person in charge of the experiment, a doctor and a scientist, who has many things to consider. <br />
<br />
For example, you cannot simply put a laboratory virus in somebody's throat to make them get infected. The question is: how much virus is there in the natural infection? These exposure infections, which are known in animal experiments, where you give laboratory animals a defined dose of a laboratory virus and then see whether the vaccination you have previously given protects them, cannot simply be transferred to humans. <br />
<br />
We do not know how much virus a patient would naturally be exposed to. This means that in such studies, where one would like to shorten many things, one again needs parallel exposure experiments in a good animal model. ...<br />
<br />
Then, apart from that, there is a completely different line of reasoning. In the current situation, with a lot of infection taking place outside, broader studies of a vaccine's effect in humans do not necessarily have to say, "You will infect the vaccinated persons after the vaccination". Instead you simply vaccinate persons and measure whether they develop antibodies, for example. Or you can measure whether the person's immune cells are activated and react against the virus. So you take blood from people after vaccination, extract immune cells from it, and measure in the test tube whether these immune cells have become sensitive to the virus. ...<br />
<br />
Here we get, almost without intending it (and of course we also plan for it), information about the actual protective effect. The virus will continue to circulate until then, and of course we will record which of the vaccinated persons later become infected. This will then be compared with the population in which the whole thing takes place.</blockquote><br />
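What Drosten describes here, comparing infections among vaccinated trial participants with the surrounding population, is the standard way field efficacy is estimated: one minus the ratio of attack rates. A minimal sketch; the formula is the textbook attack-rate comparison, and all numbers below are invented for illustration:

```python
def vaccine_efficacy(cases_vaccinated, n_vaccinated, cases_control, n_control):
    """Field efficacy: 1 - (attack rate among vaccinated / attack rate among controls)."""
    attack_rate_vaccinated = cases_vaccinated / n_vaccinated
    attack_rate_control = cases_control / n_control
    return 1.0 - attack_rate_vaccinated / attack_rate_control

# Invented example: 5 infections among 1,000 vaccinated people versus
# 50 infections among 1,000 comparable unvaccinated people.
print(round(vaccine_efficacy(5, 1000, 50, 1000), 2))  # -> 0.9, i.e. 90% efficacy
```

Real trials add confidence intervals and careful matching of the comparison population, which this toy calculation leaves out.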
<h3>Most promising type of vaccine</h3>I skipped a large part of the interview describing the two arms of the immune system (the cellular and humoral response), how a vaccine triggers them, and an example of how a vaccine can in the worst case backfire, and thus why one has to be careful before exposing volunteers. You had better read an independent text than rely on my possibly inaccurate translation.<br />
<br />
There are many ways to make a vaccine and Korinna Hennig asked about the most promising route. <br />
<br />
<b>Christian Drosten:</b><br />
<blockquote>The natural infection [response] is a mixture of cellular and humoral activity of the immune system. Humoral means antibody formation. Cellular means immune cell activation. Now we can say in one approach that we make particularly good antibodies. In another approach, however, we can also say that we make particularly good immune cell stimulation through a carrier vector of a vaccine, which stimulates the immune cells better than the natural virus would do. This means that we pick out the strengths of the immune system and stimulate them in a very special way. ...<br />
<br />
At the moment it cannot be said that one way is already the more promising. One can certainly say that with the very simple route of the ordinary inactivated vaccine, you have to look very carefully and be very cautious because of the dangers. And what I have just described, this antibody-mediated exacerbation, is only one of the dangers, the nasty surprises that can be experienced with such simple vaccines.<br />
<br />
That's why it's right to focus on the more technically advanced vaccines. Here one can already say a little about the direction: namely vaccines that aim to produce particularly high levels of neutralising antibodies and that often use just a simple protein as the vaccine substance. <br />
<br />
Such a protein can be produced by the biotechnology industry in a shorter time than the very expensive modified live vaccines, i.e. vector vaccines, which mainly aim at stimulating the cellular response in a particularly effective way. The production of these vector vaccines is often quantitatively not so simple, because you have to keep a lot of production material in motion, i.e. many cell cultures in fermenters, to achieve a high yield. <br />
<br />
The production of such proteins, by contrast, is biotechnologically more straightforward; you know exactly how it works. There are fewer parameters to optimise in the pharmaceutical industry and the purification processes are often simpler.</blockquote><br />
<h3>Who gets the vaccine first?</h3><b>Christian Drosten:</b><br />
<blockquote>Clinical staff, where we have people who are basically healthy and basically able to make a good immune response. ... This could be one of those preferred groups to be vaccinated. <br />
<br />
And, of course, people will immediately think, no matter whether it's these vaccines or another ... of course we have to give it to the risk groups immediately. This consideration is perhaps a bit too simple in parts, because at the beginning, when the first vaccines are available, we may have to try to achieve a high impact in the population with a small amount of vaccine. <br />
<br />
So, vaccinating medical staff has the greatest effect if you prevent all of them from dropping out. Clearly that is important; everyone understands that immediately. When vaccinating elderly people, on the other hand, there is in many cases a big problem with the vaccine dose: they need more vaccine for the same immune response.<br />
<br />
And when the dose is limited, when the production of the vaccine is limited and you know that there is a group of patients who need five times more vaccine than the normal patient - then you will soon come to the point where you say that it is practically impossible to produce five times more vaccine. So you have to think, do you want to make five times more vaccine and vaccinate the people who are at risk? Or do we want to make five times more vaccine and thus vaccinate five times more normal patients, thereby significantly increasing the protection of the population with the vaccination and thus stopping the pandemic earlier? These are all considerations that have to be made individually for each specific vaccine. </blockquote><br />
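The trade-off Drosten sketches can be made concrete with a toy calculation. The five-fold dose is his own example; the supply figure below is purely hypothetical:

```python
def people_covered(total_dose_units, units_per_person):
    """How many people a fixed vaccine supply can cover at a given per-person dose."""
    return total_dose_units // units_per_person

supply = 1_000_000   # hypothetical number of dose units available
standard_dose = 1    # one unit for a typical healthy adult
elderly_dose = 5     # five units per person, Drosten's elderly example

print(people_covered(supply, standard_dose))  # 1000000 healthy adults covered
print(people_covered(supply, elderly_dose))   # 200000 elderly people covered
```

Under these assumed numbers, every risk-group vaccination costs five standard vaccinations, which is exactly the dilemma described above.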
<h3>When will we have a vaccine?</h3><b>Korinna Hennig:</b><br />
<blockquote>When we have talked about these biotechnological variants, biotechnologically produced protein: Does that include the 12 to 18 months it takes to get that far? Or is there still time to be gained through this very process?</blockquote><b>Christian Drosten:</b><br />
<blockquote>Right, you hear 12 to 18 months now. This is the time range that has always been given: if everything really goes well, if it goes very quickly, then, depending on the vaccine concept, you can expect to have an approved vaccine within one to one and a half years. In other words: this time next year, or next year in the summer. I can assure you that everyone is really trying extremely hard and that everyone is sitting down together and discussing how we can still gain time - because it is clear that the real relief in this situation will come from a vaccine. ...<br />
<br />
We will certainly have a staggered process. We will certainly have a situation where small amounts of a very first vaccine are already available. There will also be grey areas, where the vaccine has not yet been approved at all, where it is still part of the approval procedure, still part of a clinical trial, in other words an efficacy study. But so many patients will already be involved that they will benefit from the vaccine. These things will naturally happen. <br />
<br />
But if we ask when we will probably have a vaccine for the general population - in other words: a vaccine is available, in sufficient quantity, the whole logistics are in place, it is filled in ampoules and is being administered by doctors - then we will have to say that this starts next year at this time at the earliest, and for the broader public by summer 2021.</blockquote><br />
<br />
<h2>Other podcasts</h2>Part 28: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-exit-strategy-masks-loss-smell-taste.html">Corona Virus Update: exit strategy, masks, aerosols, loss of smell and taste.</a><br />
<br />
Part 27: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-tracking-infections.html">Corona Virus Update: tracking infections by App and do go outside.</a><br />
<br />
Part 23: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-funding-publishing-arrival-endemic.html">Corona Virus Update: need for speed in funding and publication, virus arrival, from pandemic to endemic</a><br />
<br />
Part 22: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-scientific-studies-cures-covid-19-Remdesivir-Chloroquin-Favipiravir-camostat.html">Corona Virus Update: scientific studies on cures for COVID-19.</a> <br />
<br />
Part 21: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-tests-tests-tests.html">Corona Virus Update: tests, tests, tests and how they work.</a><br />
<br />
Part 20: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-Case-tracking-teams-infections-Germany-Infectiousness.html">Corona Virus Update: Case-tracking teams, slowdown in Germany, infectiousness.</a><br />
<br />
Part 19: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-christian-drosten-outside-face-masks-children.html">Corona Virus Update with Christian Drosten: going outside, face masks, children and media troubles.</a><br />
<br />
Part 18: <a href="https://variable-variability.blogspot.com/2020/03/german-virologist-Christian-Drosten.html">Leading German virologist Prof. Dr. Christian Drosten goes viral</a>, topics: Air pollution, data quality, sequencing, immunity, seasonality & curfews.<br />
<br />
<h2>Related reading</h2><a href="https://www.ndr.de/nachrichten/info/coronaskript158.pdf">This Corona Virus Update podcast and its German transcript.</a> Part 26.<br />
<br />
<a href="https://www.ndr.de/nachrichten/info/Coronavirus-Update-Die-Podcast-Folgen-als-Skript,podcastcoronavirus102.html">All podcasts and German transcripts of the Corona Virus Update.</a><br />
<br />
America is a somewhat weird country where comedians often produce better news coverage than the normal news on TV. Trevor Noah of The Daily Show asks Bill Gates thoughtful questions: <a href="https://www.youtube.com/watch?v=iyFT8qXcOrM">Bill Gates on Fighting Coronavirus.</a><br />
<br />
Victor Venemahttp://www.blogger.com/profile/02842816166712285801noreply@blogger.com0tag:blogger.com,1999:blog-9093436161326155359.post-54871351438977436152020-04-01T23:47:00.000+01:002020-04-11T03:22:55.566+01:00Corona Virus Update: need for speed in funding and publication, virus arrival, from pandemic to endemic (part 23)<div style="float: right; margin-left:20px; margin-bottom:10px;"><a href="https://virologie-ccm.charite.de/metas/person/person/address_detail/drosten/" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOdNzlursTpq66xOJLNlrq8YTmLqfI26a0qEMCOjaHivkc_iwfkqEpC4ASQPbscXG07ySbm4QHJnJSqdcTfTN86CcaSmJA0kaZyXg0NiSAeW1ukRseHneHJP1rhB8WpZ8B5sLZ_gKW2Ac/s1600/drosten-christian-institut-fuer-virologie-charite_297x337.jpg" data-original-width="297" data-original-height="337" width="297" height="337"/></a><br clear="all"><i>Prof. Dr. Christian Drosten</i></div>For me an interesting part of the Corona Virus Update Podcast was a critique of the scientific funding and scientific publishing system by Christian Drosten. Working on a better (and potentially faster) post-publication peer review system, this is of interest to me. The rest can skip to a discussion on the evidence for when the virus arrived in Europe, in the light of many anecdotal claims the virus arrived much earlier. And we finish off with why Drosten expects the SARS-CoV-2 virus to become endemic, that is stay forever.<br />
<br />
A large part of the podcast was about a new research network of university hospitals to coordinate research on COVID-19 in Germany. I presumed this was not too interesting for people outside of Germany, but readers interested in clinical research can try to read an automatic translation of the beginning of <a href="https://www.ndr.de/nachrichten/info/coronaskript152.pdf">the German transcript</a>. <br />
<br />
In episode 23, science journalist Korinna Hennig of the German public radio station NDR Info interviews virologist Prof. Dr. Christian Drosten of the Charité research hospital in Berlin. He was intimately involved in the research on the first SARS virus and produced the WHO test for the SARS-CoV-2 virus. The podcast became one of the most listened-to podcasts in Germany within just a few episodes and is my main source of nerdy info on Corona.<br />
<br />
<h3>Question the whole system</h3>From talking about the new German network for clinical COVID-19 research, Drosten moves to the problems the normal funding system has, given the speed that is necessary to fight the pandemic. In the past universities and research institutes had their own resources, but nowadays most of the funding flows via research projects (third-party funding). This means that research proposals have to be written, reviewed and assessed. Only a small fraction of these proposals is funded, and writing them thus binds much time that is no longer spent on science. Politicians like this system because it looks like a free market to them, while in reality there is no market: science is a global public good.<br />
<br />
Also the publishing system is not able to provide information fast enough. Normally one would write a solid manuscript (because only the best ones are accepted), it would be reviewed, which takes a few months, updated, reviewed again, and so on. There is no time for that now. So manuscripts are simply uploaded to a manuscript server on the internet, where any scientist can download them before they are reviewed by peers. These manuscripts are called preprints, as if they were going to be printed in a journal (and as if most journals were still printed on paper), but many will likely never pass review.<br />
<br />
That was my take. Here is Christian Drosten:<br />
<br />
<b>Christian Drosten:</b><br />
<blockquote>Our big problem in research, in the implementation of actual directly necessary scientific investigations - I am now not speaking about long-term basic research projects, I am talking about very specific questions: This new drug, is it helping or not? Will we know in a month? It would be good to know in a month. <br />
<br />
In this situation, we can absolutely no longer afford to launch complicated applications, where we are competing fiercely for pots of money that may be wrongly dimensioned, and where it is no longer possible to organise the review of these applications. The reviewers are themselves scientists. But they are then themselves involved in these outbreaks. <br />
<br />
In research funding, the more international and the more grandiose the whole thing becomes, the more the qualification for obtaining such research funds is no longer necessarily that one is really working on the problem. It can lead to a situation where those who have specialised in obtaining research funds, and not in treating these patients, actually get the research funds.</blockquote><b>Korinna Hennig:</b><br />
<blockquote>That is anyway a big problem in scientific work: third-party funding has become more and more important and eats up an increasing proportion of researchers' actual everyday working time.</blockquote><b>Christian Drosten:</b><br />
<blockquote>We see in the current scientific activity on the epidemic that raising third-party funds is no longer possible within the required time frame. We urgently need other mechanisms for directing money to where it is really needed and where it can really be used - and where time is not stolen from those who treat and research patients. <br />
<br />
We have exactly the same in the publishing market. There, too, we see that important information is difficult to communicate through the classic publication system. This entire information market is changing at the moment. We always discuss the preprints here in the podcast, and I always say that they are preprints. We can do this here because I know my way around quite well, because I have been working on exactly this topic for many years and usually understand immediately, or at least relatively quickly, whether a study is really solid and provides really new information, or whether what is written in the headline or in the abstract sounds strong but is in fact dead in the water. <br />
<br />
This is something that is normally achieved through an elaborate and drawn-out peer review process. But what we are seeing at the moment is that the epidemic is moving much faster than the publication system can process the information. It is already difficult enough to collect the information while doing clinical research on patients. On top of that, results submitted to a journal are held up by reviewers who sometimes ask good questions, but sometimes ask them too late because they themselves are completely underwater and have no time to review. And some reviewers - let's say with a competitive motive - delay work; we know that happens everywhere. That is one of the weaknesses of the peer review system. <br />
<br />
Then at some point we get into a situation where we have to question the whole system, where we really have to say: Can we actually afford such a system in such a situation? And we are currently seeing a huge flood of important publications appearing in these preprint servers, and they are coming from China. The colleagues there in China who have carried out clinical research and described their patients are only now able to subsequently evaluate what they have observed. <br />
<br />
And the place where we see this first is in these preprint servers. You have to be very damn careful. Because in addition to many high-quality publications, which I also highlight from time to time in this podcast, there is a lot of dead wood.</blockquote><br />
<h3>When did the virus arrive in Germany and Europe?</h3><b>Korinna Hennig:</b><br />
<blockquote>More and more listeners are now emailing us with the question: Is it really impossible that the virus has not been around in Europe and Germany for some time? </blockquote><b>Christian Drosten:</b><br />
<blockquote>We've been getting a lot of requests lately from people saying, "I had this condition in December." ... I've had contacts where people have said: "I work for a large automotive supplier, and not the one that is known [in Starnberg]. ... "We had exactly the same thing. We also had visitors from China. And we also had a wave of infection afterwards and whole families got sick. Shouldn't we send samples?" So I always said: "Yeah, sure, send serum samples. We'll test it." <br />
<br />
And in none of these cases have we ever found any evidence - with all these anecdotal investigations that we have conducted so far in Germany. <br />
<br />
Also looking at the viral sequences, it does not really look like the virus was in Europe before mid-January. I remain open to this possibility, I'd like to add; I don't want to rule it out. But we, and others we know of, have not found any evidence yet.</blockquote><br />
<h3>From pandemic to endemic</h3><b>Korinna Hennig:</b><br />
<blockquote>You said that you assume that SARS-CoV-2 will become endemic, i.e. that it will remain here permanently as a respiratory virus and not disappear completely at some point. Why are you so sure about that?</blockquote><b>Christian Drosten:</b><br />
<blockquote>Well, because it is simply spreading so widely. And because we can assume that the population will become thoroughly infected. In other words, we must assume that 60 or 70 percent of the population will be infected before the pandemic spread stops. <br />
<br />
Then, of course, the rest will be infected later, so infections will continue after the pandemic. And then we will have the same starting conditions [for the new corona virus] as for the other endemic corona viruses. They also manage to keep open and exploit population niches, reinfecting the children born after the pandemic in order to remain in the population. Nobody can say for sure at the moment whether this virus will stay in the end or not, but everything very much looks like it will.</blockquote><br />
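The 60 or 70 percent figure Drosten mentions is not mentioned as such in the podcast's derivation, but it matches the classic herd-immunity threshold, 1 - 1/R0, where R0 is the basic reproduction number. With R0 around 2.5 to 3, as widely estimated for SARS-CoV-2 at the time, the threshold lands in exactly that range. A minimal sketch:

```python
def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune before spread stops: 1 - 1/R0."""
    return 1.0 - 1.0 / r0

for r0 in (2.5, 3.0):
    print(f"R0 = {r0}: threshold {herd_immunity_threshold(r0):.0%}")
# R0 = 2.5 gives 60%, R0 = 3.0 gives 67%
```

This is the simplest homogeneous-mixing estimate; real populations mix heterogeneously, which can shift the threshold in either direction.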
<h2>Other podcasts</h2>Part 28: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-exit-strategy-masks-loss-smell-taste.html">Corona Virus Update: exit strategy, masks, aerosols, loss of smell and taste.</a><br />
<br />
Part 27: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-tracking-infections.html">Corona Virus Update: tracking infections by App and do go outside</a><br />
<br />
Part 26: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-on-vaccines.html">Corona Virus Update on Vaccines: clinical trials, various types, for whom and when.</a><br />
<br />
Part 22: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-scientific-studies-cures-covid-19-Remdesivir-Chloroquin-Favipiravir-camostat.html">Corona Virus Update: scientific studies on cures for COVID-19.</a> <br />
<br />
Part 21: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-tests-tests-tests.html">Corona Virus Update: tests, tests, tests and how they work.</a><br />
<br />
Part 20: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-Case-tracking-teams-infections-Germany-Infectiousness.html">Corona Virus Update: Case-tracking teams, slowdown in Germany, infectiousness.</a><br />
<br />
Part 19: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-christian-drosten-outside-face-masks-children.html">Corona Virus Update with Christian Drosten: going outside, face masks, children and media troubles.</a><br />
<br />
Part 18: <a href="https://variable-variability.blogspot.com/2020/03/german-virologist-Christian-Drosten.html">Leading German virologist Prof. Dr. Christian Drosten goes viral</a>, topics: Air pollution, data quality, sequencing, immunity, seasonality & curfews.<br />
<br />
<h2>Related reading</h2><a href="https://www.ndr.de/nachrichten/info/coronaskript152.pdf">The Corona Virus Update podcast and its German transcript.</a> Part 23.<br />
<br />
Victor Venemahttp://www.blogger.com/profile/02842816166712285801noreply@blogger.com0tag:blogger.com,1999:blog-9093436161326155359.post-18921506840092910242020-03-31T06:44:00.000+01:002020-04-11T03:23:13.604+01:00Corona Virus Update: scientific studies on cures for COVID-19 (part 22)This edition of the Corona Virus Update Podcast focusses on the scientific work on cures for COVID-19. What are the various substances and drugs currently being tested, how do they work and how promising are they? But it starts with a clarification of yesterday's podcast on lateral-flow antibody tests.<br />
<br />
This is <a href="https://www.ndr.de/nachrichten/info/coronaskript146.pdf">part 22 of the Podcast</a>, recorded on Wednesday the 26th of March 2020. Science reporter Anja Martini of German public radio (NDR Info) talks to Professor Christian Drosten, the head of virology at a top German research hospital, the Charité in Berlin. He developed the first test for the virus, which was sent to 150 countries by the WHO.<br />
<br />
<h3>Antibody tests for the public</h3>Anja Martini comes back to <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-tests-tests-tests.html">yesterday's topic of antibody tests</a> because she received many questions:<br />
<blockquote>Offers are accumulating in doctor's offices: Namely 50 tests at a unit price of 22 euros, please pay one hundred percent in advance. What do you say? Should you be more careful with these things, or what kind of thought comes to mind?</blockquote><b>Christian Drosten:</b> <br />
<blockquote>Well, yeah, sure. Careful, definitely. It has to be said, these are lateral-flow tests that can be manufactured in large quantities. It's a good thing that it's technically possible. It's just that the currently available lateral-flow tests ... have not yet been validated. So we do not know whether these antibody tests work as well as a real laboratory-based test, i.e. an ELISA test for antibodies, for one thing. <br />
<br />
And on the other hand, there is something we already know for sure, namely that antibody tests come too late for acute diagnostics. These antibody tests only become positive after about ten days of the disease. There are a few patients who have antibodies after only seven days. But in today's situation, when someone wants to be tested for the new virus, the question is almost always: Did I get infected? Did my symptoms perhaps come from this virus? In this situation an antibody test is not useful.</blockquote>These tests are mostly produced by Chinese companies. Racists abuse this situation to attack China by accusing it of producing bad tests to attack the West. This is Trumpian ignorance from people who should be attacking Xi for being an authoritarian like Trump. They should be attacking China for its concentration camps, for its lack of political freedom. But then you do not get to complain that you are a gullible uninformed fool. <br />
<br />
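Why does validation, i.e. knowing a test's sensitivity and specificity, matter so much? Because when few people actually carry antibodies, even a decent test produces mostly false positives. This is plain Bayes' rule; the sensitivity, specificity and prevalence figures below are invented for illustration:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive antibody test is a true positive (Bayes' rule)."""
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# Invented figures: 90% sensitivity, 95% specificity, and only 2% of
# the population actually has antibodies.
print(f"{positive_predictive_value(0.90, 0.95, 0.02):.0%}")  # roughly 27%
```

Under these assumed numbers, roughly three out of four positive results would be false alarms, which is why an unvalidated test with unknown sensitivity and specificity is so hard to interpret.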
<b>Anja Martini:</b><br />
<blockquote>This antibody test, which might become available for the general public, namely the self-test, how should I imagine it technically? ... You put a prick in your finger and then you can put it on a piece of paper and see if you have antibodies or not? <br />
</blockquote><b>Christian Drosten:</b> <br />
<blockquote>Yeah, that's pretty much how these tests work. There are several devices that extract a drop of blood from a fingertip. The test strip takes it up, and the blood then runs along the strip as a front from one side to the other, just like urine in a pregnancy test. At the end there is one stripe or two stripes, and if you see two stripes, the test is positive ... But as I said, all this has not yet been technically validated. It will work somehow, maybe better or worse. But of course the normal laboratory-based test will also become widely available.</blockquote><br />
<h3>Polymerase Chain Reaction test</h3>Polymerase Chain Reaction (PCR) tests for the virus itself were also discussed in <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-tests-tests-tests.html">yesterday's podcast</a>. There are cases where the patient clearly has COVID-19, which you can see in a lung x-ray, but the test does not detect it. For example, patients who stayed at home as long as possible and only arrive at the hospital in the second week of their illness. By then the virus has sometimes disappeared from the throat and is only present in the lungs. By sampling sputum the patient coughs up, or with a suction catheter, a doctor can take a sample from the lungs and detect the virus that way.<br />
<br />
<b>Christian Drosten:</b> <br />
<blockquote>This also caused great concern in China, in Wuhan. So much so that, from the combined impression of this apparent unreliability of PCR from the throat and of the laboratories being so overburdened that practically no PCR capacity was available any more, they switched to a diagnosis based on the CT image at the peak of the epidemic in Wuhan. On average, patients were seen relatively late; they stayed at home for a long time, they did not want to go to the hospital. And then they changed the diagnosis.</blockquote><br />
<h3>Remdesivir</h3><b>Anja Martini:</b><br />
<blockquote>We want to look at drugs today, because there are now several drugs that are tested in hospitals. ... Remdesivir, for example, is a drug that was originally developed for Ebola and is now being tested in two studies on corona patients in Germany. What do you currently know about these studies and how they are going?</blockquote><b>Christian Drosten:</b> <br />
<blockquote>In the case of Remdesivir, we have, for the time being, a substance with a plausible and known mechanism. It's an inhibitor of the viral RNA polymerase, the virus' replication enzyme. And we have had this substance in the literature for quite some time; it is clear that it is effective against corona viruses in cell culture and also in animal models. That's good. We do not have such convincing initial evidence for every substance that is currently undergoing clinical trials. But for Remdesivir the initial evidence is very good. There's a real mechanism.</blockquote>The above paragraph may be a bit too much in the weeds, but I included it to show how a scientist assesses a situation and the likelihood that something will work before the evidence is conclusive. Given two options that test tube experiments show work similarly against viruses, one would first go for the option where one understands why. Even if the test tube showed somewhat weaker results, I would still go for the one we understand. This is one way to protect yourself against the problems of purely empirical evidence, which has produced reproducibility problems.<br />
<blockquote>And now the company that distributes Remdesivir, Gilead, has been allowing [[<a href="https://en.wikipedia.org/wiki/Expanded_access">compassionate use</a>]] protocols for quite some time. This means that in certain constellations, the drug is released for a single patient. This is a phase of the disease where the patient already needs oxygen but does not yet need catecholamines, i.e. drugs that support the circulation. This is already a critical phase in the course of the disease. This is the transition where they say soon the patient may have to go into intensive care. It's a critical time when you want to influence [the condition of] the patient. <br />
<br />
But the problem is that this is a direct antiviral substance, so we would like to administer it earlier. The virus attacks the respiratory tract in the first week of the disease. In the second week, when the condition deteriorates, we already have a combination of immune and viral effects acting in the lungs. This suggests that in this later phase you cannot achieve as much by specifically doing something against the virus; you have to consider also doing something against an excessive immune reaction. There are clinical studies on this as well. And this is true for Remdesivir as well as for other substances assumed to have a direct effect on the virus. <br />
</blockquote>The rest of this section is background information on how RNA viruses work and how Remdesivir interferes, which you can skip if you just care about your health, but I find it fascinating.<br />
<br />
<b>Anja Martini:</b><br />
<blockquote>How does Remdesivir work in this virus? What does it do?</blockquote><b>Christian Drosten:</b><br />
<blockquote>The virus is an RNA virus. And RNA viruses can't use the replication enzymes in the cell nucleus. Our cell nucleus has DNA. And when cells divide, the DNA has to be replicated. <br />
<br />
And some viruses, DNA viruses often, they can use these multiplication enzymes for themselves. So they abuse the duplication enzymes of the cell nucleus for their own genetic material. But RNA viruses cannot do this because our cells do not need to duplicate RNA. Our cells do possess RNA. This RNA is only copied from DNA and is actually the template for proteins. This is the so-called messenger RNA, in the simplest approximation. There are of course other complicated subforms of RNA and so on. But let us now talk about the main case. This messenger RNA is not being replicated. It is simply copied once. But for viral replication, we need proper duplication. And in that process we need to have a step where RNA is copied from RNA. The viral genome consists of RNA, and the product consists again of RNA, we say the replicative intermediate, and from that again RNA has to be copied back again. After all, we have plus and minus and then again a positive sense of the genetic information in this multiplication.<br />
<br />
All this leads us to the conclusion that the virus itself must bring along an RNA polymerase, an enzyme that carries out this multiplication, this transcription. There are different ways in which RNA viruses do this. Some RNA viruses have a functioning RNA polymerase in the virus particle. A polymerase is an enzyme that generates a polymer, that is, it transcribes. It takes a template, which is the genome of the virus, and makes a copy of it, a mirror image copy in the reading sense, and then takes this mirror image copy again and makes the next generation of genomes from it, which is then packaged. Some viruses bring this along as a functioning enzyme, as a protein in the virus particle. <br />
<br />
Other viruses simply encode this; they carry the enzyme as genetic information. This is then converted into protein in the cell by ribosomes. The protein that is produced there can then duplicate the viral RNA. Coronaviruses do it in the latter way. Coronaviruses bring genetic information with them so that an RNA polymerase is made in the cell, by the cell itself, and it is this enzyme that is inhibited by Remdesivir. ...<br />
<br />
We could also go back into detail here, because it is not quite so clear how things work exactly. We do not know whether the RNA polymerase itself is inhibited in its processivity, or whether the assembly of essential building blocks of the resulting RNA is inhibited, or whether the RNA polymerase continues to work, but makes so many copying errors that the viruses that come out of it are dead.</blockquote><br />
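The plus/minus copying scheme Drosten describes can be illustrated with a toy script. Everything here is an assumption for illustration: the sequence is made up and real replication is done by the viral polymerase, not by string manipulation. The point is only that a mirror-image copy of a mirror-image copy restores the original reading sense:

```python
# Toy illustration of RNA replication via a minus-strand intermediate.
# Standard RNA base pairing: A-U and G-C.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def replicate(strand: str) -> str:
    """Copy an RNA strand into its antiparallel, complementary mirror image."""
    return "".join(PAIR[base] for base in reversed(strand))

genome = "AUGGCUAGC"        # hypothetical plus-sense genome fragment
minus = replicate(genome)   # the replicative intermediate (minus sense)
progeny = replicate(minus)  # next-generation plus-sense genome, to be packaged

assert progeny == genome    # two complementation steps restore the original
print(genome, "->", minus, "->", progeny)
```

This is why the text speaks of "plus and minus and then again a positive sense": the information only reads correctly again after the second copying step.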
<h3>Chloroquine</h3><b>Anja Martini:</b><br />
<blockquote>Chloroquine, we should still mention, is an antimalarial drug that is not completely free of side effects, but it has also been successful against the old SARS virus, at least in cell cultures, right?</blockquote><b>Christian Drosten:</b> <br />
<blockquote>Right, exactly. In cell cultures and against all kinds of viruses. ...<br />
<br />
A lot of people who know about it, including myself, are very sceptical about chloroquine, whether it is really helpful in the end. But I also cannot say what it will look like in the end if a very large study is carried out with a large number of patients and the clinical fate of these patients is analysed: what would be the outcome for the patients? So there might be a very small effect. <br />
<br />
This effect does not necessarily have to be directly related to the virus, because chloroquine also has a strong influence on inflammatory processes in general. These also play a role in lung damage, so that it is not possible to say exactly what to expect. However, one thing can be said: a resounding effect that really decides the fate of the clinical outcome can hardly be expected with chloroquine. ... Let's put it this way: if there were one, it would be very easy to observe. Then there would be no such contradictory clinical studies. If the effect is quite clear, it is also quite easy to prove the clinical effect.</blockquote><br />
<h3>Favipiravir</h3><b>Christian Drosten:</b> <br />
<blockquote>There is another substance called favipiravir. ... It is approved for use against influenza in several countries. So you can buy it in pharmacies against influenza, for example in Japan. ... This substance is also available in China against influenza. A first study has now been published, so perhaps we can clarify where we stand. In the case of favipiravir, we know exactly what the mechanism is. <br />
<br />
But I have to say that when the idea came up to give favipiravir against the new virus, I was surprised, because years ago, when this substance was still in the experimental phase, we did not call it favipiravir but T-705, which was then the short name for the chemical substance. And it did not work well in cell culture. We didn't pursue it further. ...<br />
<br />
Favipiravir is now being used in China after all. And a first study has just come out. ...<br />
<br />
And in contrast to the French study, which we discussed for chloroquine last week, here they really looked at a clinical outcome criterion. They simply asked: how much has the clinical picture improved seven days after starting the administration of the drug? The clinical picture means, for example, respiratory rate, fever and other general symptoms of the disease. ... <br />
<br />
Most of the cases here are quite normal initial cases, not intensive care cases. For example, there were only 18 severe cases with pneumonia among a total of 116 people who were treated, so the overwhelming majority were not severe cases. And of course they were accordingly included at an earlier stage. <br />
<br />
So now we can say that the difference obtained in this rather optimal situation is that clinical symptoms improve in 56 percent of the cases where no treatment is given, and in 72 percent of the cases where treatment is given. That is a statistically significant difference. <br />
<br />
And that's amazing to me. I have to say that, in view of the fact that we never actually saw a good effect of this substance in cell culture, I am still skeptical whether this is real or whether there's some kind of flaw in the clinical study. Now we have to see what other studies indicate. It's certainly not enough to take one study, even more so one that has not even been formally reviewed yet.</blockquote><br />
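The significance claim can be sanity-checked with a standard two-proportion z-test. This is a hedged sketch, not a reproduction of the study's actual analysis: the podcast gives the 72 versus 56 percent improvement rates and 116 treated patients, but not the exact sizes of the two arms, so equal arms of 116 are an assumption made here purely for illustration:

```python
import math

# Illustrative two-proportion z-test for the reported improvement rates.
# Arm sizes are an assumption: the podcast only mentions 116 treated
# patients, so the comparison arm is assumed to be of similar size.
n_treat, p_treat = 116, 0.72  # favipiravir arm: 72% improved at day 7
n_ctrl, p_ctrl = 116, 0.56    # comparison arm: 56% improved (assumed size)

# Pooled proportion under the null hypothesis of no difference
pooled = (p_treat * n_treat + p_ctrl * n_ctrl) / (n_treat + n_ctrl)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_treat + 1 / n_ctrl))
z = (p_treat - p_ctrl) / se

# Two-sided p-value from the standard normal distribution
p_value = math.erfc(abs(z) / math.sqrt(2))
print(f"z = {z:.2f}, p = {p_value:.3f}")
```

Under these assumed arm sizes the difference does come out statistically significant at the usual 5 percent level, consistent with what Drosten reports; with a much smaller comparison arm the result would be less clear-cut.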
<h3>Camostat</h3><b>Anja Martini:</b><br />
<blockquote>I believe that you yourself are also working with Göttingen researchers on a drug at the moment. How does it work?</blockquote><b>Christian Drosten:</b> <br />
<blockquote>That's right. There are studies that we have done together with Stefan Pöhlmann's group in Göttingen, an absolute specialist in virus entry. Stefan has seen that it is possible to reduce virus entry with a substance called camostat. ...<br />
<br />
So it is the case that this virus, this new SARS-2 virus, uses a certain transmembrane protease more strongly than the old, known SARS virus did. And that is, as the name suggests, a protein-cleaving enzyme, but this time it is not an enzyme from the virus but an enzyme from the cell. So the cell itself has this protein on its outer membrane. And with this protein, the cell involuntarily helps the virus enter, during its passage through the membrane. This works in such a way that the surface protein of the virus is cut at one point, is clipped, and this clipping of the surface protein is the first step for the virus to pass through the cell membrane. The virus uses this cellular protein for this purpose. <br />
<br />
There is a drug that inhibits this cellular protein and this drug is called camostat. I am deliberately saying drug and not substance, because this substance is approved as a drug for chronic pancreatitis. And it is only approved in Japan. So in Japan you can buy it in the pharmacy. This much we know. We know it works in cell culture, and we know the drug is available in Japan. That's all we know. <br />
<br />
But on this basis we can now of course do something that cannot be done with other substances. Namely, we can say that we do not have time for large-scale animal experiments, but we have an approved substance here. In certain cases, we can now test this in controlled clinical trials to see whether patients benefit from getting this substance. This is a typical off-label use study. And we are going to start something like this now.</blockquote>That sounds promising. Do note that Drosten is here talking about his own research. It is always harder to be just as sceptical about your own work, as any scientist will attest.<br />
<br />
<br />
<h2>Other podcasts</h2>Part 28: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-exit-strategy-masks-loss-smell-taste.html">Corona Virus Update: exit strategy, masks, aerosols, loss of smell and taste.</a><br />
<br />
Part 27: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-tracking-infections.html">Corona Virus Update: tracking infections by App and do go outside</a><br />
<br />
Part 26: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-on-vaccines.html">Corona Virus Update on Vaccines: clinical trials, various types, for whom and when.</a><br />
<br />
Part 23: <a href="https://variable-variability.blogspot.com/2020/04/corona-virus-update-funding-publishing-arrival-endemic.html">Corona Virus Update: need for speed in funding and publication, virus arrival, from pandemic to endemic</a><br />
<br />
Part 21: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-tests-tests-tests.html">Corona Virus Update: tests, tests, tests and how they work.</a><br />
<br />
Part 20: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-Case-tracking-teams-infections-Germany-Infectiousness.html">Corona Virus Update: Case-tracking teams, slowdown in Germany, infectiousness.</a><br />
<br />
Part 19: <a href="https://variable-variability.blogspot.com/2020/03/corona-virus-update-christian-drosten-outside-face-masks-children.html">Corona Virus Update with Christian Drosten: going outside, face masks, children and media troubles.</a><br />
<br />
Part 18: <a href="https://variable-variability.blogspot.com/2020/03/german-virologist-Christian-Drosten.html">Leading German virologist Prof. Dr. Christian Drosten goes viral</a>, topics: Air pollution, data quality, sequencing, immunity, seasonality & curfews.<br />
<br />
<br />
<h2>Related reading</h2><a href="https://www.ndr.de/nachrichten/info/coronaskript146.pdf">The Corona Virus Update podcast and its German transcript.</a> Part 22.<br />
<br />
Nature Magazine on the various possibilities where the virus comes from: <a href="https://www.nature.com/articles/s41591-020-0820-9">The proximal origin of SARS-CoV-2.</a> "This is strong evidence that SARS-CoV-2 is not the product of purposeful manipulation."<br />
<br />
European Medicines Agency: <a href="https://www.ema.europa.eu/en/news/covid-19-chloroquine-hydroxychloroquine-only-be-used-clinical-trials-emergency-use-programmes">COVID-19: chloroquine and hydroxychloroquine only to be used in clinical trials or emergency use programmes.</a> "The European Medicines Agency (EMA) is a decentralised agency of the European Union (EU) responsible for the scientific evaluation, supervision and safety monitoring of medicines in the EU."<br />
<br />