Sunday, 29 March 2020

Corona Virus Update: Case-tracking teams, slowdown in Germany, infectiousness

The Corona Virus Update of Thursday the 26th of March was about three questions: whether the South Korean strategy of case-tracking teams would also work for Germany, what the data on the number of confirmed infections tells us about the spread of the virus in Germany, and what a new study on the infectiousness of the virus tells us about how to contain it.

This podcast is produced by the German public radio broadcaster NDR. Science journalist Anja Martini interviews Professor Christian Drosten; there is a new episode every workday. He is the head of the Virology Department at the Berlin Charité, one of the main research hospitals in Germany. As a scientist he can speak more freely than, for example, the director of the RKI, the German counterpart of the CDC. He specializes in emerging viruses and developed the WHO test for the new Corona virus.

Case-tracking teams in South Korea

Can we learn anything from how South Korea handled the situation? Too long, didn't read: South Korea is very strong in case tracking. They test a lot and have many people working to trace the contacts of known infected people. This produced good results in fighting one big outbreak, but there are still new infections and we will have to see how well this strategy works in the future.

Christian Drosten:
There are case-tracking teams that can follow every infected person and look: Who has there been contact with? Where are the contacts now? The contacts are isolated and monitored and so on. I think that is simply not feasible here, if only for personnel reasons. That's why the question of whether you can learn anything from it is a bit futile.

But it's also true that one shouldn't be fooled. For a very long time, there was the impression in Korea that the outbreak was actually under control. But what is often left unsaid is that a big part of the initial outbreak in Korea was a single event. ... And of course you could follow that very well. Of course, you have a list of participants and can say: Okay, they were all there, and we're really going after them now.

But this effect is now over in Korea. This transmission event is now so far in the past that it has been fully captured. And that has done a lot to bring down the curve in Korea right now. But what I am now hearing from Korea is that individual transmission chains are starting up all over the country, because of course other cases have been registered in parallel via multiple channels (consider the proximity to China). And just now the new infections in Korea are clearly increasing, because it is no longer this very focused measure, but suddenly you have to be everywhere. And I imagine that this will no longer be manageable in Korea either, being everywhere at the same time. But in general, they do case tracking very carefully. And I have the feeling they can do this better than we in Germany, simply because of staffing levels.

Spread in Italy and Germany

Anja Martini:
Italy is now going into the third week of quarantine. The first experts are now breathing a sigh of relief because the number of deaths has not increased any further. How much can we already tell from these figures and this development? How much can one learn from this, and what can we trust?
Christian Drosten:
So the number of cases [infections] is apparently not so easy to count in Italy, probably because not much diagnostic testing is done; those would be the absolute numbers of infections. But what certainly has to be counted are the deceased. And on average about three weeks pass between the onset of symptoms and death. Or, to say it correctly, between the infection and death. And that's the effect you see in the statistics. A curfew and other quarantine and isolation measures were put in place three weeks ago. And now you can see the effect, even in the deceased. And that is unfortunately almost a natural constant that can be observed. It just takes three weeks.

And you see it more quickly with [infection] cases in countries that can detect the cases very reliably. ... But in this short period of maybe ten days at the most, we want to see whether the increase in new infections is already decreasing, at least in a system, a country where the diagnostics are already close to reality - which we hope is the case in Germany. ... I would be very pleased if this were confirmed in the next few days. But one must also say that we will have to wait a little longer. This effect has to last a few days before you can really see something.
The next paragraph required quite some interpretation of Drosten's science-speak. I hope I got it right. He seems to be saying that already in this early phase the number of cases is lower than model computations based on the past would predict. That means that the spread of the virus has become less efficient, likely due to all the policy measures taken. Even shorter: the policies seem to work.
"[The number of infections are] perhaps already now in this early phase, the expected values [from models] are changing compared to the observed values. So there seems to be a difference, which is good. [Because this suggests the model parameters about how the virus spreads are improving] And if it stays like that for the next few days, then you will look at it for a while. And then, for example, in this difference you have a new basis for readjusting models. [Estimating how the policies have affected the spread of the virus] And then mathematicians and modelers in Germany will actually be called upon to take and evaluate this data and then to prepare it for policymakers, for example."
This will also affect the number of deaths, but in the coming weeks these will still rise.
We will of course see changes in the number of deceased with a two or three week delay. Incidentally, we also have to remind ourselves - and I would perhaps like to say this again now - that despite the measures we already have at the moment, the number of deaths will of course continue to rise, because this effect will continue. And this, too, will be reflected in model calculations. That will of course also be important, because it gives an insight into the seriousness of the cases. And this severity of cases must be taken into account in terms of hospital capacity.

So this kind of epidemiological modelling that is needed here at the moment is not just a pure description of the situation of the cases, but must also take into account when we reach the capacity limit of the medical system. So completely different figures have to be included, such as the number of beds or the number of ventilation places.

And in the very near future, the question will be posed to the scientific community: Where do we stand now? How can we now readjust? Must we leave the current measures as they are? Or can we relax the brakes a little in some places, because it is not just a pure, naked scientific consideration, but also because scientists are well aware that the current measures are of course causing great social and economic damage. And these things have to be weighed against each other.
Around Easter we should have a better assessment of the situation.

Infectiousness study

Anja Martini asks Drosten about a new study from Hong Kong. He first explains how scientific publishing has changed due to the time pressures of the epidemic. If you are not interested in that you can skip the next long quote.

(The quote provides anecdotal evidence that without peer review scientists would focus much more on studies from well-known groups and that peer review thus helps outsiders gain the credibility they need to have people invest time in their work. I have a blog post on that.)

Christian Drosten:
This is a study that has been published on a preprint server. At the moment we have this very fast situation in scientific publication activity. Normally the review process of a scientific contribution takes weeks or even months. A manuscript goes from a scientist to a journal. Sometimes the journal doesn't even send it out for review. Then you send it to another journal and they do send it out. The reviewers need two months, then the comments come back. And then the journal says: "Fix it, please." And then another month goes by.

And you can't afford that right now with epidemiological research. That's why at the moment scientific articles are actually placed in online resources, the so-called preprint servers, as they are written. There are two very big ones, called bioRxiv and medRxiv.

I always go through them. I have to sort out a lot, because they are not peer-reviewed scientific articles. That means there is also a lot of dead wood. There are many things that you will not see officially appear in this form later on, because they will not survive the review process. So what I always do in my free minutes is look at the things that newly appear. And things that I think are of such high quality that they will survive any review process, that are really well done, I sometimes discuss here. So then I say: this is interesting data. And so it is with this study.

It comes from Hong Kong, from the very well-known epidemiological modeling group of Gabriel Leung.
The important question is when a patient becomes infectious.
[The study was about] when does this disease actually become infectious? Already before the symptoms, with the symptoms, or after the symptoms? And this is very important, because with the old SARS corona virus, to sum it up briefly: it was so easy to contain because the average patient only became truly infectious well after the symptoms started.
Drosten himself also had a small study on this topic, currently still a preprint (not peer reviewed).
This study has already shown that the virus replicates in the throat in the early phase of infection and that the virus is clearly detectable even in the very earliest swabs, to such an extent that it is already on the descending branch on day one and two. So the longer one waits - if one takes swabs from a patient every day - the less and less virus one finds, right from the beginning.
The new study found the same result, but with many more patients.
These authors found exactly the same thing in a group of 94 cases in Guangdong, i.e. in southern China near Hong Kong. ... And they saw that from day one the virus was on the decline. That means the peak of the virus must be before the first day.
The new study also quantified how long it takes for one patient to infect the next.
Then they did something very interesting, something purely epidemiological: they also looked at transmission cases in the same context, namely 77 pairs of patients where it is known that one person infected the other, and they looked closely: How long did it actually take?
I did not understand the explanation of how this works, but the key word is "[[Serial Interval]]": the time between the onset of symptoms in the person who infects and in the person who gets infected. The result was:
The median is 5.2 days, the mean is 5.8 days, so this is a somewhat skewed distribution, but still with very close averages. So 5.2 to 5.8 days, you can say, is the serial interval.
So the time from one case to the next is about the same as the incubation time.
They also calculated the incubation time from their own earlier, very well done study: 5.2 days mean incubation time. This is of course interesting, because we have here a phenomenon where the serial interval is practically as long as the incubation time. This tells us that the average patient waits as long for the symptoms after infection as it takes to transmit the infection between two patients. And if you look at it that way, it means that we not only have a mean onset of transmission on the day the symptoms start, but probably before that as well. So the average patient basically transmits the virus on the day the symptoms start - but this is just the average patient. Some patients only transmit after the start of their symptoms and unfortunately some transmit before the start of their symptoms. ...

it can be said that infectivity starts two and a half days before the onset of symptoms, on average. And the so-called area under the curve, i.e. the part of this probability curve that lies before the onset of symptoms, is 44 percent. In other words, it can be assumed that 44 percent of all infection events occur before the infecting person is even ill. ...

This also means, of course, that if you lock yourself up at home as soon as the symptoms begin, you will already have infected people if you allowed a normal social life to continue until then. So with the normal rules of infection protection - noticing an illness and isolating the sufferer - you cannot contain this disease. There has to be targeted social distancing, where the aim is to change everyone's behaviour, and not just to identify symptoms and isolate the sufferers. That simply will not work with this disease.
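To make Drosten's numbers concrete, here is a toy calculation in Python. The gamma-shaped infectiousness profile and its parameters are my own assumptions, picked only so that infectivity starts 2.5 days before symptom onset and roughly 44 percent of the area lies before onset; this is not the fitted curve from the Hong Kong study.

```python
import numpy as np
from scipy.stats import gamma

# Toy infectiousness profile over time relative to symptom onset (t = 0).
# Assumption: gamma-shaped curve starting 2.5 days before onset; shape and
# scale are illustrative choices, not values from the paper.
start = -2.5
shape, scale = 2.0, 1.67
t = np.linspace(start, 15, 2000)
profile = gamma.pdf(t - start, a=shape, scale=scale)

# Share of the area under the curve before symptom onset (uniform grid,
# so a simple sum works as a Riemann approximation).
pre_onset = profile[t < 0].sum() / profile.sum()
print(f"Share of transmission before symptom onset: {pre_onset:.0%}")  # ~44%
```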

Other podcasts

Part 19: Corona Virus Update with Christian Drosten: going outside, face masks, children and media troubles

Part 18: Leading German virologist Prof. Dr. Christian Drosten goes viral, topics: Air pollution, data quality, sequencing, immunity, seasonality & curfews.

Related reading

Corona Virus Update with Christian Drosten podcasts and transcripts (one day later).

Tuesday, 24 March 2020

Corona Virus Update with Christian Drosten (19): going outside, face masks, children and media troubles

Much of the information on the new Corona virus in Germany, at least for science nerds, is spread by the Corona Virus Update podcast with Prof. Dr. Christian Drosten. He is the head of the virology department at one of the main university hospitals in Germany and specializes in emerging viruses.

It is a podcast of the German public radio broadcaster NDR. The half-hour interview number 19, on Monday the 23rd of March, was conducted by science journalist Korinna Hennig. Topics were going outside, face masks, children, and the media causing trouble.

Walking and jogging

Korinna Hennig:
Meanwhile, many people have written to us who are worried when they walk in the forest and joggers pass very close to them and breathe on them. Safety distance is one of the words of the hour. Is the assumption that the duration of contact plays an important role still tenable despite everything?
Christian Drosten:
Of course, it is very difficult to say anything really tenable about this now. But in principle, when you're outside, what you breathe out naturally dilutes, and the virus also dilutes. Besides, you almost always have a little bit of wind.

And so you have to concentrate more on the situation in closed rooms if you are thinking about such transmission processes.

Face masks

Too Long, Didn't Read: Face masks are scarce in Europe. Medical professionals in close contact with infected people need them the most. For normal people, wearing a mask outside the house provides hardly any protection; the main thing it does is protect others in case you are infectious (without knowing it). However, at the moment we do not have enough masks to use that as a containment method.
First of all, of course, it must be said that there is a shortage of these masks in all countries, not only in Germany, but in the whole of Europe and practically all over the world. If we look at Europe in particular, it really is a situation across the board in which no country has any stocks or anything like that.

I know that the German Ministry of Health began weeks ago to secure stocks and place orders. So I would think that we in Germany are very well prepared. But things like that take time. Orders take time. Production takes time. And right now in Germany, as in all other countries, we have a shortage of these masks in the market.

And in fact, hospitals are still being supplied. But it's not like they are in unlimited supply. That is why purchasing departments at large hospitals are justifiably concerned about the public accessing the same stocks. You have to imagine that at some point there will be market competition. And supply and demand will then drive up prices. If people think that they can protect themselves from infection by wearing a mask, then of course at some point there will be people who pay astronomical prices for something like that, even though it has little or no effect. ...

There must be no market competition. Because in the medical sector, in professions that work close to patients - that's not just the doctor and the nurse, but also other areas, nursing homes and so on - very close contact is unavoidable. And different rules apply in this close contact area. And there is definitely data showing that the masks reduce the transmission of such respiratory tract diseases. ...

For the public, there are two considerations. One is self-protection: I wear a mask to keep from getting sick. The other is protecting others: I am sick and wear a mask so that someone else does not get sick, so that the virus is not transmitted further.

And for the latter there are, let's say, good mechanical reasons. It's easy for anyone to imagine: when I sneeze, I give off droplets. And when I have a piece of cloth in front of my mouth - it can be a cellulose mask I bought, or of course a scarf or something - these big droplets are then caught. ...

The consideration is: the further away you are from the source, the more you are dealing with a fine aerosol. And a fine aerosol is also inhaled sideways past a mask. Whether you inhale it through the front of the mask or suck it in from the side then simply no longer makes a difference. That's why: the closer to the source, the better. The mask has to be at the source and not at the receiver.

And that is certainly a perfectly plausible consideration. What is not so plausible is self-protection - the idea that I can protect myself in public with a mask. This is maybe a little bit difficult to convey. But there is simply either no evidence in the literature or - depending on how you want to interpret it - almost no evidence that this could help.

Children

Do children get infected and ill?
So I think we can say that children do not get severe symptoms. There are simply no known descriptions. Well, there are of course individual descriptions of severe cases, unfortunately even of a few deceased children, but in view of the mass of cases it hardly seems to occur. Where the word hardly means: only in a very, very small percentage.

It is, of course, an important consideration, because it can have two explanations: One may be that the children are not infected at all. That would mean that they are completely excluded [from the infection process]. The other may be that the children are infected, become immune and at some point belong to the circle of those in society who have already been infected and become immune, and then do their part to stop the epidemic. ...

Children, school children, have a particular network function in society because they interact relatively intensively with several age groups, while other age groups are more in contact with their own age group. Therefore children have a very important function there. And we now want to find out in the next few weeks - by means of antibody tests, also in children - what the background infection rate is, in other words the silent infections. To ask the question: Have children perhaps, without realizing it, already contracted the infection? And might they already have become immune unnoticed?
Drosten discusses a preprint (a scientific article that is not yet peer reviewed) about the early stage of the virus in Wuhan. In this stage all patients went to the hospital for isolation, including children, because it was not yet known that they hardly become ill.
From this one can deduce that thousands to tens of thousands of unrecognized child cases occurred in this early phase in Wuhan, the authors say. And that, of course, gives hope in a certain way. On the one hand, if the effect is that large, it will be possible to correct the actual infection mortality [downward]. And what is even more important: if we know that the children are actually very actively infected, then this means that they also contribute to the [infected part of the population], in other words to the development of herd immunity. That is good news in principle, this [preprint]. What we need now is confirmation of this phenomenon through antibody tests, in children, but not only in children.

About the person

Korinna Hennig:
Mr. Drosten, in conclusion: we have already mentioned here in the podcast that you are often accosted, exposed to hate mail and insults. Now we have just seen the opposite effect. Suddenly, all sorts of newspapers have begun to focus on you. You are very much in the spotlight and a hype has developed which has somehow taken on a life of its own. How do you feel about that? Are you coping with it?
Christian Drosten:
I have to admit that it makes me uneasy and I don't like it. I already have the feeling that a legend is being created. ... But of course that has very little to do with reality.

It especially worries me when I see that this comes along with the shortening of statements. For example, just this weekend there was a relatively nuanced interview in a large magazine, where two or three questions were asked about how things can continue now. So what do we do now? Now these measures are all in force, what does our future look like? How can we get out of them?

And then I said, for example: Well, if you compare whether you fill football stadiums with people or go to school, then going to school is more important. That's why I believe that we won't have full football stadiums any time soon, but that we have to concentrate relatively soon on getting data to decide whether we can perhaps reopen whole schools or just a few school years. Because that is really important. I was interested in this distinction: what is entertainment and what is essential for society? What can you focus on if you want to get out of these contact measures again?

And then it was shortened, and that was done by the magazine itself on the Internet, of course to attract attention to the article. Basically, all they said was: "Drosten: No more football for a year." And then they wrote - which was not even mentioned in the interview - that this could probably be extended to holding football matches without spectators. So even that: it is not true that I advised against that. That was not the content. It wasn't directly described that way, but from the context it sounded that way. Then there is the fact that this article is behind a paywall. That means, if you follow this internet teaser and want to read the interview, you have to pay. And that annoys me, because I invested a whole afternoon of my time there. ...

It's just bad when media still try to make money out of this situation with such exaggerations and such incentives. I think the media must stop that now. Otherwise we as scientists can no longer do the kind of thing I am doing here. Some of my colleagues are much more cautious. That is of course the main reason why not many other scientists communicate in public: because these things happen all the time. It's just not bearable anymore.

This also scares me as a person, because I naturally notice when something like this is published. It went out on the servers on a Sunday afternoon; I noticed it because aggressive comments suddenly appeared in my e-mail inbox that really attacked me. And I notice there are people I don't know, who don't know me, but who have found out my email address and are now attacking me. And that is, let's just say, the most harmless consequence. But I also find the misunderstandings that arise very serious. And we have to be clear for what purpose all this happens: ultimately, only for [newspaper] circulation.
And this while the German-language press is wonderful compared to the English-language press.

Other podcasts

Part 18: Leading German virologist Prof. Dr. Christian Drosten goes viral, topics: Air pollution, data quality, sequencing, immunity, seasonality & curfews.

Related reading

Corona Virus Update with Christian Drosten podcasts and transcripts (one day later).

The Robert Koch-Institut (RKI) in Germany is comparable to the US Centers for Disease Control and Prevention (CDC). The RKI publishes a daily summary of the situation: the situation reports. First in German, a little later also in English. It has statistics on the number of infections and the mortality, and on measures taken to fight the problem. Something I especially like is that they list cases by the date the people became ill (if known), not just raw numbers for one day, which are totals based on an unknown number of previous days. This figure (number 3) suggests that the measures work and have slowed down the spread of the virus.

This weekend the RKI was looking for staff to help track infected people. They got 10,000 applications and already stopped accepting new applications. Thanks, humanity.

Monday, 23 March 2020

Leading German virologist Prof. Dr. Christian Drosten goes viral

Every workday German public radio has an interview with [[Prof. Dr. Christian Drosten]]. In the 50s every family would have sat around their radio receiver and listened with red ears. At least my impression is that nearly everyone listens.

Germany's newspaper for academics called him "Germany's de facto explainer for the current outbreak". Christian Drosten is the head of the virology department of one of the most prestigious hospitals in the world, Berlin's Charité university hospital. He developed the first diagnostic test, which is called the WHO test in Anglo-America and was shipped to 150 countries.

I guess our university press office would like me to mention that before he went to Berlin Drosten headed the Institute of Virology here at the University Hospital in Bonn.

Because these podcasts are appreciated so much, I thought it would be valuable to translate them into English. The virology is universal; the epidemiology and countermeasures depend on the local circumstances. The latter can be interesting for foreigners in Germany and as inspiration for other countries, so I will also translate such parts.

In one podcast Drosten discussed an article on the influence of closing schools on the Spanish Flu in American cities, where it worked. Some German newspaper made this into the story "Drosten recommends closing schools", while he had also talked about the societal differences between Germany now and America a century ago. As a climate scientist, I will not pretend to know the science well enough to contribute, but I do understand it better than political journalists, and I can promise not to leave out such important context.

If you know German you can find all podcasts with transcripts on the webpage of NDR. The topics of a podcast are all over the map, whatever is current. In this podcast, science journalist Korinna Hennig asked the questions.

Air pollution

The first interesting question was about the role of air pollution. Drosten answers:
"Yes, there is certainly some speculation about it. But what is perhaps even more important, if you want to talk about something like that, is of course smoking. And we don't even know what the reason for this surplus of male patients is. What is clear, however, is that in China it is mainly men who smoke. And of course it is also clear that in the generation of patients who are now particularly at risk, it is above all men who have smoked a lot throughout their lives. And of course, risk factors for cardiovascular disease are also more prevalent among men in this age group. And I think that all of this plays a role in this pattern. ... It is certainly always a good time to quit smoking, but now is probably a particularly good time."
That sounds like an interesting idea worth pursuing in more detail.

Data quality, comparability

Another interesting question was why "have there been so many fewer deaths from SARS-CoV-2 in Germany than in other countries?"

The main explanation is that other countries record far fewer mild cases because less testing is done, and this simply distorts the statistics. "We actually test much more than other countries." He does not expect these differences between countries to disappear, even when later more people die in Germany. One reason is that soon the epidemic will spread so much that testing will no longer be able to keep up, even in Germany. Then reporting will change from confirmed cases to suspected cases.

Also data on hospital admissions is hard to compare between countries and even regions:
"There are still hospital admissions because of a diagnosis, with the intention to be rather safe and to admit patients to the hospital in order to isolate them. And in other regions there are already many cases and there one will be rather hesitant about admitting patients who are otherwise healthy."

Sequencing

Drosten and his team work on sequencing the genome of the virus. So far this was used to study how the virus spreads. The main route by which the virus traveled to Germany used to be holidaymakers returning from Italy, but this is now changing and soon everything will be very mixed. The next task is to study whether the virus is changing:
"The real issue then is whether the virus remains stable. And for that you simply have to continue sequencing viruses at a certain frequency. And always look, is the genome still complete? Have mutations crept in at important places? And do these mutations have any significance? In other words, then one has to switch over to the targeted examination of these viruses in the laboratory."

Immunity

If someone was ill and acquired immunity, Drosten estimates that they can no longer carry the disease to others. To be sure, large clinical studies are needed, which will be done later. To infect others the virus would have to actively replicate in the throat.
"we know from a monkey experiment that once an infection has been overcome, one million infectious viruses can be introduced directly into the trachea of these monkeys and nothing happens. And that is already a very high level of challenge infection, as we call it in such a study. Now, of course, you have to say that these are not people, these are monkeys. Humans can be slightly different in detail. But there are other indications that suggest that we should have a very good immune response. For example, we know that over a long period of time, even in patients who say they have hardly noticed their infection, the virus not only replicates a little bit in the throat, but to a considerable extent in the lungs. And we should then be able to assume that a strong immune response is triggered."

Summer time

Korinna Hennig asked what the biological explanation is for the expectation that higher summer temperatures cannot contain the virus.

Christian Drosten:
"There will certainly be a small effect. A biological explanation, that is, it is just that one can see how endemic viruses decrease in frequency through the temperature effect. By endemic I mean those viruses that occur widely in the population. And these viruses have two problems when it gets warm. First, they have a permanent problem, namely there is population immunity. Then on top of that comes the second problem, let's say of the summer, in other words all the effects that this brings with it. Social distance outside and UV light, heat, dryness, so these things are not good for virus transmission, not conducive. And when that comes together with population immunity, then there is a stop to virus transmission in viruses like influenza. And now you can just look at influenza, for example, an endemic virus, to what extent is that stopped? And then you can compare a pandemic virus with it. To what extent will it be stopped? And it will not be stopped very much, but it will be stopped a bit. This comparative calculation can also be made for coronaviruses, and a study to which I referred before made this comparison. That's what was done there. And the estimate is that there will be a slight slowdown. The estimate is that half a unit of [the basic reproduction value] R0 can be subtracted. ... But at the same time, unfortunately, the estimate that the R0 value won't go below one due to this summer effect alone, that you have to do other things as well."

Curfew

The podcast ended with a more political question about the effectiveness of curfews. This question is hard to answer because we do not have data on this yet and you cannot study it in isolation: 
"It's all relatively difficult to say, because the curfew itself is one of several measures that are applied in addition to the non-pharmaceutical interventions. There is also something like closing schools, tracing of infected persons and isolation of infected persons at home. Then there is the quarantine of the environment, in the simplest case, for example, the family at home for 14 days. But also the identification of contacts and their isolation at home for 14 days. All these measures come together. And now it is relatively difficult to say, if you add something on top of this, such as a curfew, what difference does it make? There is no data at all for this, either in Germany or elsewhere in other studies, in modelling studies."
Speaking more as a private person, he later adds:
"I am not necessarily someone who says that we need an immediate curfew. Especially under the impression that I have that a great many people are now taking this more and more seriously and are also thinking about it, and are staying at home of their own accord. I do think that perhaps we should allow a little more time."
On the Sunday after this interview, which was held on the Friday, the German federal government and the state governments agreed on strong limitations on the freedom of movement.

I hope this English summary is useful. If people enjoy it I am happy to do this for future podcasts as well.

Other podcasts

Part 19: Corona Virus Update with Christian Drosten: going outside, face masks, children and media troubles

Related reading

Translated interview published today (the 21st of March): "We Have To Bring Down the Number of Cases Now. Otherwise We Won't Be Able To Handle It", published in Die Zeit, the German newspaper for academics. (I had some trouble reading it during my first months in Germany due to their posh language, but the information is high quality. In German the verb comes at the end of a sentence; in a "well written" sentence, that is the moment you understand the sentence. In the "best" Die Zeit articles they do the same for paragraphs and articles: only after reading the last sentence do you understand the paragraph, and only after reading the last paragraph do you understand the article.)

LA Times: Germany’s extensive medical network apparently helped in early stage of coronavirus. The article is actually about all the policy differences with respect to healthcare and the economy that make handling the situation much easier in Germany. Not mentioned is that the number of infections in Germany is one of the highest in the world: Germany is big, Germans like to holiday in Italy, and, while people still complain, Germany did a lot of testing compared to other countries.

Sunday, 1 March 2020

Trend errors in raw temperature station data due to inhomogeneities

Another serious title, to signal that this is again one for the nerds. How large is the uncertainty in the temperature trends of raw station data due to inhomogeneities? Too Long, Didn’t Read: the errors are big, and larger in America than in Germany.

We came up with two very different methods (Lindau and Venema, 2020) to estimate this and we got lucky: the estimates match as well as one could expect.

Direct method

Let’s start simple: take pairs of stations and compute their difference time series, that is, subtract one raw temperature time series from the other. For each of these difference series you compute the trend, and from all these trends you compute the variance.

If you compute these variances for pairs of stations in a range of distance classes, you get the thick curves in the figure below for the United States of America (marked U) and Germany (marked G). This was computed from data for the period 1901 to 2000 from the International Surface Temperature Initiative (ISTI).
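For readers who think in code, here is a minimal sketch of this direct method; it is my illustration, not the code used for the paper, and the station series, coordinates and distance bins are assumed inputs.

```python
import numpy as np
from itertools import combinations

def trend_per_century(series, years):
    """OLS trend of an annual series, in °C per century."""
    return np.polyfit(years, series, 1)[0] * 100

def trend_diff_variance_by_distance(stations, coords, years, bin_edges_km):
    """Variance of pairwise trend differences, grouped by station distance.

    stations: dict name -> annual temperature series (numpy array, °C)
    coords: dict name -> (x, y) position in km
    """
    trends = {i: [] for i in range(len(bin_edges_km) - 1)}
    for a, b in combinations(stations, 2):
        dx, dy = np.subtract(coords[a], coords[b])
        dist = np.hypot(dx, dy)
        diff_series = stations[a] - stations[b]  # removes the shared climate signal
        for i in range(len(bin_edges_km) - 1):
            if bin_edges_km[i] <= dist < bin_edges_km[i + 1]:
                trends[i].append(trend_per_century(diff_series, years))
    return {i: np.var(v) for i, v in trends.items() if len(v) > 1}
```

Extrapolating the resulting curve to zero distance then isolates the variance due to inhomogeneities, as described next.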


Figure 1. The variance of the trend differences computed from temperature data for the United States of America (U) and for Germany (G). Only the thick curves are relevant for this blog post; they are used to compute the trend uncertainty. The paper uses Kelvin [K]; we could also have used degrees Celsius [°C], as we do in this post.

When the distance between the pairs is small (near zero on the x-axis), the trend differences are mostly due to inhomogeneities, whereas at larger distances real climate trend differences also increase the variance of the trend differences. So we need to extrapolate the thick curves to a distance of zero to get the variance due to inhomogeneities alone.

For the USA this extrapolation gives about 1 °C² per century². This is the variance due to two stations. If the inhomogeneities are assumed to be independent, each station contributes half, so the trend variance of one station is 0.5 °C² per century². To get an uncertainty measure that is easier to interpret, you can take the square root, which gives the standard deviation of the trends: 0.71 °C per century.

That is a decent size compared to the total warming over the last century of about 1.5 °C over land; see estimates below. This alone is a good reason to homogenize climate data to reduce such errors.


Figure 2. Warming estimates of the land surface air temperature from four different institutions. Figure taken from the last IPCC report.

The same exercise for Germany estimates the variance to be 0.5 °C² per century²; the square root of half this variance gives a trend uncertainty of 0.5 °C per century.
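The arithmetic of the last two paragraphs in three lines, with the pair variances read off from Figure 1:

```python
for country, pair_variance in [("USA", 1.0), ("Germany", 0.5)]:  # °C² per century²
    station_sd = (pair_variance / 2) ** 0.5  # each station contributes half the variance
    print(country, round(station_sd, 2), "°C per century")  # 0.71 and 0.5
```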

In Germany the maximum distance for which we have a sufficient number of pairs (900 km) is naturally smaller than for America (over 1200 km). Interestingly, for America real trend differences also matter: you can see the trend variance increasing for pairs of stations that are further apart. In Germany this does not seem to happen, even at the largest distances we could compute.

An important reason to homogenize climate data is to remove network-wide trend biases. When these trend biases are due to changes that affected all stations, they will hardly be visible in the difference time series. A good example is a change of the instruments affecting an entire observational network. It is also possible to have such large-scale trend biases due to rare, but big events, such as stations relocating from cities to airports, leading to a greatly reduced urban heat island effect. In such a case, the trend difference would be visible in the difference series and would thus be noticed by the above direct method.

Indirect method

The indirect method to estimate the station trend errors starts with an estimate of the statistical properties of the inhomogeneities and derives a relationship between these properties and the trend error.

Inhomogeneities

How the statistical properties of the inhomogeneities are estimated is described in Lindau and Venema (2019) and my previous blog post. To summarize, we had two statistical models for inhomogeneities. In one, the level of the inhomogeneity between two breaks is given by a random number; we called this Random Deviations (RD) from a baseline. In the second model the breaks behave like Brownian Motion (BM): here the jump sizes are determined by random numbers. So the difference is whether the levels or the jumps are the random numbers.

We found RD breaks in both countries, with a typical jump size of about 0.5 °C. But the frequency was quite different: while in Germany we had one break every 24 years, in America it was one every 5.8 years.

Furthermore, in America there are also breaks that behave like Brownian Motion. For these breaks we only know the variance of the breaks multiplied by their frequency, which is 0.45 °C² per century². We do not know whether this value is due to many small breaks or one big one.
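As a sketch, this is how one could generate a synthetic break signal with roughly these properties. The split of the BM component into 5 breaks per century with jump variance 0.09 °C² is an arbitrary choice on my part, since only the product of 0.45 °C² per century² is constrained; the RD level standard deviation of 0.35 °C corresponds to jumps of about 0.5 °C.

```python
import numpy as np

def synthetic_break_signal(n_years, rng, rd_rate=1 / 5.8, rd_level_sd=0.35,
                           bm_rate=0.05, bm_jump_sd=0.3):
    """US-like break signal: Random Deviations plus Brownian Motion."""
    t = np.arange(n_years)
    # RD: an independent level for each segment between breaks
    rd_breaks = t[rng.random(n_years) < rd_rate]
    levels = rng.normal(0.0, rd_level_sd, size=rd_breaks.size + 1)
    rd = levels[np.searchsorted(rd_breaks, t, side="right")]
    # BM: every break adds an independent jump to the rest of the series
    bm = np.zeros(n_years)
    for pos in t[rng.random(n_years) < bm_rate]:
        bm[pos:] += rng.normal(0.0, bm_jump_sd)
    return rd + bm

signal = synthetic_break_signal(100, np.random.default_rng(0))
```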

Relationship to trend errors

The next step is to relate these properties of the inhomogeneities to trend errors.

For the Random Deviation case, the dependence on the size of the breaks is trivial - the trend variance is simply proportional to it - but the dependence on the number of breaks is quite interesting. The numerical relationship is shown with +-symbols in the graph below.

Clearly, when there are no breaks, there is also no trend error. At the other extreme of a large number of breaks, the error is due to a large number of independent random numbers, which to a large part cancel each other out. The largest trend errors are thus found for a moderate number of breaks.

To understand the result we start with the case without any variation in the break frequency. That is, if the break frequency is 5 breaks per century, every single 100-year time series has exactly 5 breaks. For this case we can derive an equation, shown below as the thick curve. As expected it starts at zero, and the maximum trend error occurs for 2 breaks per time series.

More realistic is the case where there is a mean break frequency over all stations, but the number of breaks varies randomly from station to station. If the breaks are independent of each other, one would expect the number of breaks to follow a Poisson distribution. The thin line in the graph below takes this scatter into account by computing a weighted average over the equation using Poisson weights; see the sketch below. This smoothing reduces the height of the maximum and shifts it to a larger average break frequency, about 3 breaks per time series. Especially for more than 5 breaks, the numerical and analytical solutions fit very well.
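In code, the smoothing step looks like this; the function f below is a toy stand-in, since the paper's actual equation for the fixed-break-number curve is not reproduced in this post.

```python
import numpy as np
from scipy.stats import poisson

def poisson_smoothed(f, mean_breaks, k_max=60):
    """Average the fixed-break-number curve f(k) over a Poisson distribution."""
    k = np.arange(k_max + 1)
    return np.sum(poisson.pmf(k, mean_breaks) * f(k))

f = lambda k: k * np.exp(-k / 2.0)  # toy placeholder, NOT the equation from the paper
thin_curve_value = poisson_smoothed(f, mean_breaks=3.0)
```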


Figure 3. The relationship between the variance of the trend due to inhomogeneities and the frequency of breaks, expressed as breaks per century. The plus-symbols are the results of numerical simulations for 100-year time series. The thick line is the equation we found for a fixed break frequency, while the thin line takes random variations in the break frequency into account.

The next graph, shown below, also includes the case of Brownian Motion (BM), as well as the Random Deviation (RD) case. To make the BM and RD cases comparable, they both have jumps following a normal distribution with a variance of 1 °C². Clearly the Brownian Motion case (with O-symbols) produces much larger trend errors than the Random Deviations case (+-symbols).


Figure 4. The variance of the trend as a function of the frequency of breaks for the two statistical models. The O-symbols are for Brownian Motion, the +-symbols for Random Deviations. The variance of the jump sizes was 1 °C² in both cases.

That the variance of the trends due to inhomogeneities is a linear function of the number of breaks can be understood by considering that, to a first approximation, the trend error for Brownian Motion is given by a line connecting the first and the last segment of the break signal. If k is the number of breaks, the level of the last segment is the sum of k random jumps. Thus if the variance of one jump is σβ², the variance of the level of the last segment is kσβ², and thus a linear function of the number of breaks.

Alternatively you can do a lot of math and in the end find that the problem simplifies like a high-school problem: the actual trend error variance is 6/5 times this simple approximation.
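You can check the 6/5 factor with a quick Monte Carlo experiment (my sketch, not the code from the paper): simulate pure Brownian Motion break signals over one century and compare the variance of the fitted trends with 6/5 times the number of breaks times the jump variance.

```python
import numpy as np

rng = np.random.default_rng(42)
n, k, sigma_b, reps = 100, 10, 1.0, 5000  # annual values, breaks, jump sd (°C)
t = np.linspace(0.0, 1.0, n)              # time in centuries
trends = np.empty(reps)
for i in range(reps):
    signal = np.zeros(n)
    for pos in rng.integers(1, n, size=k):
        signal[pos:] += rng.normal(0.0, sigma_b)  # one Brownian Motion jump
    trends[i] = np.polyfit(t, signal, 1)[0]       # OLS trend, °C per century
print(trends.var(), 6 / 5 * k * sigma_b**2)       # both come out near 12
```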

Give me numbers

The variance of the trend error due to BM inhomogeneities in America is thus 6/5 times 0.45 °C² per century², which equals 0.54 °C² per century².

This BM trend error is a lot bigger than the trend error due to the RD inhomogeneities, which for 17.1 breaks per century and a break size distribution with variance 0.12 °C² is 0.13 °C² per century².

One can add these two variances to get 0.67 °C² per century². The standard deviation of this trend error is thus quite big: 0.82 °C per century, and mostly due to the BM component.

In Germany, we found only RD breaks, with a frequency of 4.1 breaks per century. Their size is 0.13 °C². If we put this into the equation, the variance of the trends due to inhomogeneities is 0.34 °C² per century². Although the size of the RD breaks is about the same as in America, their influence on the station trend errors is larger. This is somewhat counter-intuitive because their number is lower, but it matches Figure 3: with many breaks the random deviations largely average out, while Germany's roughly 4 breaks per series is close to the maximum of the curve. The standard deviation of the trend due to inhomogeneities is thus 0.58 °C per century in Germany.
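The bookkeeping of this section as a few lines of Python; the RD variances of 0.13 and 0.34 °C² per century² come from the paper's equation, which is not reproduced here.

```python
var_bm_usa = 6 / 5 * 0.45                # °C² per century², the BM component (= 0.54)
var_rd_usa = 0.13                        # °C² per century², from the RD equation
print((var_bm_usa + var_rd_usa) ** 0.5)  # ≈ 0.82 °C per century for the USA
print(0.34 ** 0.5)                       # ≈ 0.58 °C per century for Germany
```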

Comparing the two estimates

Finally, we can compare the direct and indirect estimates of the trend errors. For America the direct (empirical) method found a trend error of 0.71 °C per century and the indirect (analytical) method 0.82 °C per century. For Germany the direct method found 0.5 °C per century and the indirect method 0.58 °C per century.

The indirect method thus found slightly larger uncertainties. Our estimates were based on the assumption of random station trend errors, which do not produce a bias in the trend. A different sensitivity of the two methods to biasing inhomogeneities in the observational data would be a reasonable explanation for these small differences. Missing data may also play a role.

Inhomogeneities can be very complicated: the break frequency does not have to be constant, and the break sizes could depend on the year. Random Deviations and Brownian Motion are idealizations. In that light, it is encouraging that the direct and indirect estimates fit this well. These approximations seem to be sufficiently realistic, at least for the computation of station trend errors.


Related post

Estimating the statistical properties of inhomogeneities without homogenization

References

Lindau, R. and Venema, V., 2020: Random trend errors in climate station data due to inhomogeneities. International Journal of Climatology, 40, pp. 2393-2402. Open access. https://doi.org/10.1002/joc.6340

Lindau, R. and Venema, V., 2019: A new method to study inhomogeneities in climate records: Brownian motion or random deviations? International Journal of Climatology, 39, pp. 4769-4783. Manuscript: https://eartharxiv.org/vjnbd/ https://doi.org/10.1002/joc.6105

Monday, 24 February 2020

Estimating the statistical properties of inhomogeneities without homogenization

One way to study inhomogeneities is to homogenize a dataset and study the corrections made. However, that way you only study the inhomogeneities that have been detected. Furthermore, it is always nice to have independent lines of evidence in an observational science. So in a recently published study (Lindau and Venema, 2019) we set out to estimate the statistical properties of inhomogeneities directly from the raw data.

Break frequency and break size

The description of inhomogeneities can be quite complicated.

Observational data contain both break inhomogeneities (jumps due to, for example, a change of instrument or location) and gradual inhomogeneities (for example, due to degradation of the sensor or the instrument screen, growing vegetation, or urbanization). The first simplification we make is to only consider break inhomogeneities. Gradual inhomogeneities are typically homogenized as multiple breaks anyway, and in noisy data they are quite hard to distinguish from actual multiple breaks.

When it comes to the year and month of a break, we assume every date has the same probability of containing a break. It could be that when there is a break, another break becomes more likely, or less likely.* It could be that some periods have a higher probability of having a break, or the beginning of a series could have a different probability, or when there is a break in station X there could be a larger chance of a break in station Y. However, while some of these possibilities make intuitive sense, we do not know of studies on them, so we assume the simplest case of independent breaks. The frequency of these breaks is a parameter our method will estimate.

* When you study the statistical properties of breaks detected by homogenization methods, you can see that around a break it is less likely for another break to be found. One reason for this is that some homogenization methods explicitly exclude the possibility of two nearby breaks. The methods that do allow nearby breaks will still often prefer the simpler solution of one big break over two smaller ones.


When it comes to the sizes of the breaks, we are reasonably confident that they follow a normal distribution. Our colleagues Menne and Williams (2005) computed the break sizes for all dates where the station history suggested something happened to the measurement that could affect its homogeneity.** They found the break size distribution plotted below. The graph compares the histogram to a normal distribution with a mean of zero. Apart from the actual distribution not having a mean of zero (which leads to trend biases), it seems a decent match, and our method will assume that break sizes have a normal distribution.


Figure 1. Histogram of break sizes for breaks known from station histories (metadata).


** When you study the statistical properties of breaks detected by homogenization methods, the distribution looks different; the graph plotted below is a typical example. You will not see many small breaks; the middle of the normal distribution is missing. This is because small breaks are not statistically significant in a noisy time series. Furthermore, you often see some really large breaks. These are likely multiple breaks detected as one big one. Using breaks known from the metadata, as Menne and Williams (2005) did, avoids or reduces these problems and thus gives a better estimate of the distribution of the actual breaks in climate data. Although you can always worry that the breaks not known in the metadata are different. Science never ends.



Figure 2. Histogram of detected break sizes for the lower USA.

Temporal behavior

The break frequency and size are still not a complete description of the break signal; there is also the temporal dependence of the inhomogeneities. In the HOME benchmark I had assumed that every period between two breaks has a shift up or down determined by a random number, what we call "Random Deviations from a baseline" in the new article. To be honest, "assumed" means I had not really thought about it when generating the data. In the same year, NOAA published a benchmark study where they assumed that the jumps up and down (and not the levels) are given by random numbers, that is, they assumed the break signal is a random walk. So we have to distinguish between levels and jumps.

This makes quite a difference for the trend errors. In the case of Random Deviations, if the first jump goes up, it is more likely that the next jump goes down, especially if the first jump goes up a lot. In the case of a random walk, or Brownian Motion, when the first jump goes up, this does not influence the next jump: it still has a 50% probability of also going up. Brownian Motion hence has a tendency to run away; when you insert more breaks, the variance of the break signal keeps going up on average, while Random Deviations are bounded. The little simulation below illustrates this.
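A small numerical experiment makes this visible; with k jumps of unit variance, the mean variance within a Brownian Motion signal grows roughly like k/6, while for Random Deviations it stays near 1. This is my illustration, not code from either paper.

```python
import numpy as np

rng = np.random.default_rng(1)
for k in (5, 10, 20, 40):                # number of breaks per series
    jumps = rng.normal(size=(20000, k))
    bm = jumps.cumsum(axis=1)            # Brownian Motion: levels are summed jumps
    rd = rng.normal(size=(20000, k))     # Random Deviations: levels are iid draws
    print(k, bm.var(axis=1).mean().round(2), rd.var(axis=1).mean().round(2))
```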

The figure from another new paper (Lindau and Venema, 2020), shown below, quantifies the big difference this makes for the trend error of a typical 100-year time series. On the x-axis you see the frequency of the breaks (in breaks per century) and on the y-axis the variance of the trends these breaks produce (in Kelvin², or equivalently Celsius², per century²).

The plus-symbols are for the case of Random Deviations from a baseline. If every time series had exactly two breaks, this would give the largest trend error. However, because the number of breaks varies between series, an average break frequency of about three breaks per series gives the largest trend error. This makes sense, as no breaks give no trend error, while with more and more breaks you average over more and more independent numbers and the trend error becomes smaller and smaller.

The circle-symbols are for Brownian Motion. Here the variance of the trends increases linearly with the number of breaks. For a typical number of breaks of more than five, Brownian Motion produces a much larger trend error than Random Deviations.


Figure 3. Figure from Lindau and Venema (2020) quantifying the trend errors due to break inhomogeneities. The variance of the jump sizes is the same in both cases: 1 °C².

One of our colleagues, Peter Domonkos, also sometimes uses Brownian Motion, but puts a limit on how far it can run away. Furthermore, he is known for the concept of platform-like inhomogeneity pairs, where if the first break goes up, the next one is more likely to go down (or the other way around) thus building a platform.

All of these statistical models can make physical sense. When a measurement error causes the observations to go up (or down), once the problem is discovered they will go down (or up) again, thus creating a platform inhomogeneity pair. When the first break goes up (or down) because of a relocation, this perturbation remains when the sensor is changed, and both remain when the screen is changed, thus creating a random walk. Relocations are a frequent reason for inhomogeneities. When the station Bonn is relocated, the operator will want to keep it in the region, thus searching in a random direction around Bonn, rather than around the previous location. That would create Random Deviations.

In the benchmarking study HOME we looked at the signs of consecutive detected breaks (Venema et al., 2012). In the case of Random Deviations, which HOME used for its simulated breaks, you would expect platform break pairs (first break up and the second down, or the reverse) in 4 of 6 cases (67%); see the quick simulation below. We detected them in 63% of the cases, a bit less, probably showing that platform pairs are a bit harder to detect than two breaks going in the same direction. In the case of Brownian Motion you would expect 50% platform break pairs. For the real data in the HOME benchmark the percentage of platforms was 59%. So this does not fit Brownian Motion, but it is also lower than you would expect from pure Random Deviations. Reality seems to be somewhere in the middle.
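Those 67% and 50% expectations are easy to verify with a few lines of Python (my sketch): draw consecutive breaks under each model and count how often the signs differ.

```python
import numpy as np

rng = np.random.default_rng(0)
levels = rng.normal(size=(100_000, 3))    # RD: three iid segment levels
rd_jumps = np.diff(levels, axis=1)        # two consecutive breaks per series
bm_jumps = rng.normal(size=(100_000, 2))  # BM: the jumps themselves are iid

def platform_fraction(jumps):
    """Fraction of series where the two breaks have opposite signs."""
    return (np.sign(jumps[:, 0]) != np.sign(jumps[:, 1])).mean()

print(platform_fraction(rd_jumps), platform_fraction(bm_jumps))  # ≈ 0.67, ≈ 0.50
```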

So for our new study estimating the statistical properties of inhomogeneities we opted for a statistical model where the breaks are described by a Random Deviations (RD) signal added to a Brownian Motion (BM) signal and estimate their parameters to see how large these two components are.

The observations

To estimate the properties of the inhomogeneities we have monthly temperature data from a large number of stations. This data contains a regional climate signal, observational and weather noise, and inhomogeneities. To separate the noise from the inhomogeneities we can use the fact that they are very different with respect to their temporal correlations. The noise is mostly independent in time, or weakly correlated insofar as measurement errors depend on the weather. The inhomogeneities, on the other hand, are correlated over many years.

However, the regional climate signal is also correlated over many years and is comparable in size to the break signal. So we have opted to work with difference time series, that is, we subtract the time series of a neighboring station from that of a candidate station. This mostly removes the complicated climate signal; what remains is two times the inhomogeneities and two times the noise. The map below shows the 1459 station pairs we used for the USA.


Figure 4. Map of the lower USA with all the pairs of stations we used in this study.

For estimating the inhomogeneities, the climate signal is noise. By removing it we reduce the noise level and avoid having to make assumptions about the regional climate signal. There are also disadvantages to working with difference series: inhomogeneities that are in both the candidate and the reference series will be (partially) removed. For example, when there is a jump because of a change in the way the daily temperature is computed, this leads to a change in the entire network***. Such a jump would be mostly invisible in a difference series. Although not fully invisible, because the jump size will be different at every station.


*** In the past the temperature was read multiple times a day, or a minimum and maximum temperature thermometer was used. With labor-saving automatic weather stations we can now sample the temperature many times a day, and changing from one definition to another will give a jump.
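
For readers who like code, the effect of the difference series can be seen in a small Python sketch. The shared climate signal is identical at both stations by construction, which is only approximately true for real neighbors; all numbers are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    n_years = 100
    years = np.arange(n_years)

    # Two nearby stations share the same regional climate signal.
    climate = 0.01 * years + rng.normal(0.0, 0.3, n_years)

    noise_a = rng.normal(0.0, 0.2, n_years)
    noise_b = rng.normal(0.0, 0.2, n_years)
    break_a = np.where(years >= 40, 0.5, 0.0)   # candidate station jumps in year 40
    break_b = np.where(years >= 70, -0.3, 0.0)  # neighboring station jumps in year 70

    candidate = climate + noise_a + break_a
    neighbor = climate + noise_b + break_b

    # In the difference series the climate signal cancels; the breaks and
    # the noise of both stations remain.
    difference = candidate - neighbor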

Spatiotemporal differences

As test statistic we have chosen the variance of the spatiotemporal differences. The “spatio” part of the differences I already explained: we use the difference between two stations. Temporal differences mean we subtract two values separated by a time lag. For all pairs of stations and all possible pairs of values with a certain lag, we compute the variance of all these difference values, and we do this for lags of zero to 80 years.

In the paper we do all the math to show how the three components (noise, Random Deviations and Brownian Motion) depend on the lag. The noise does not depend on the lag; it is constant. Brownian Motion produces a linear increase of the variance as a function of lag, while Random Deviations produce a saturating exponential function. How fast the function saturates depends on the number of breaks per century.
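
In code, the test statistic and the three model components could look roughly as follows. This is a sketch of the functional forms described above, not the exact parameterization of the paper; the noise and RD values are the ones quoted for the USA below, while the BM slope is invented.

    import numpy as np

    def lag_variance(difference, max_lag):
        """Variance of the temporal differences of one spatial difference
        series for lags 1..max_lag; the study pools this over all pairs."""
        return np.array([np.var(difference[lag:] - difference[:-lag])
                         for lag in range(1, max_lag + 1)])

    lags = np.arange(1, 81)                   # lag in years

    noise = np.full(lags.shape, 0.62)         # constant in lag
    bm = 0.005 * lags                         # linear in lag (slope invented)
    rd = 0.47 * (1.0 - np.exp(-lags / 5.8))   # saturating exponential,
                                              # one RD break every 5.8 years

    model = noise + bm + rd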

The variance of the spatiotemporal differences for America is shown below. The O-symbols are the variances computed from the data. The other lines are the fits for the various parts of the statistical model. The variance of the noise is about 0.62 K² (or °C²) and is shown as a horizontal line, as it does not depend on the lag. The component of the Brownian Motion is the line indicated by BM, while the Random Deviations (RD) component is the curve starting at the origin and growing to about 0.47 K². From how fast this curve grows we estimate that the American data has one RD break every 5.8 years.

The curve for Brownian Motion being a line already suggests that it is not possible to estimate how many BM breaks the time series contains: we only know the total variance, not whether it comes from many small breaks or one big one.



Figure 5. The variance of the spatiotemporal differences as a function of the time lag for the lower USA.
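
This can be checked with a quick simulation (again a sketch with invented numbers): two Brownian Motion break signals, one with a few big breaks and one with many small ones, tuned to the same total variance, produce the same lag variance.

    import numpy as np

    rng = np.random.default_rng(3)

    def bm_lag_variance(n_breaks, total_var, n_years=100, n_series=5000, lag=30):
        # Simulate many BM break signals whose step variances sum to
        # total_var, split over n_breaks breaks at random years.
        steps = np.zeros((n_series, n_years))
        for row in steps:
            break_years = rng.choice(n_years, n_breaks, replace=False)
            row[break_years] = rng.normal(0.0, np.sqrt(total_var / n_breaks), n_breaks)
        signals = np.cumsum(steps, axis=1)
        return np.var(signals[:, lag:] - signals[:, :-lag])

    print(bm_lag_variance(n_breaks=2, total_var=1.0))   # a few big breaks
    print(bm_lag_variance(n_breaks=20, total_var=1.0))  # many small breaks
    # Both are close to total_var * lag / n_years = 0.3; the lag variance
    # reveals the total BM variance, but not the number of breaks.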

The situation for Germany is a bit different; see the figure below. Here we do not see the continual linear increase in the variance we had above for America. Apparently the break signal in Germany does not have a significant Brownian Motion component and only contains Random Deviations breaks. The number of breaks is also much smaller: the German data has only one break every 24 years. The German weather service seems to give undisturbed climate observations a high priority.

For both countries the size of the RD breaks is about the same and quite small; expressed as a typical jump size it would be about 0.5°C.



Figure 6. The variance of the spatiotemporal differences as a function of the time lag for Germany.

The number of detected breaks

The number of breaks we found for America is a lot larger than the number of breaks detected by statistical homogenization. Typical numbers for detected breaks are one per 15 years for America and one per 20 years for Europe, although it also depends considerably on the homogenization method applied.

I was surprised by the large difference between actual breaks and detected breaks; I thought we would maybe miss 20 to 25% of the breaks. If you look at the histograms of the detected breaks, such as Figure 2 reprinted below, where the middle is missing, it looks as if only about 20% is missing in a country with a dense observational network.

But these histograms are not a good way to determine what is missing. Next to the influence of chance, small breaks may be detected because they have a good reference station and other breaks are far away, while relatively big breaks may go undetected because of other nearby breaks. So there is no clear cut-off, and you would have to go far from the middle to find reliably detected breaks, which is where you get into the region where there are too many large breaks, because detection algorithms combine two or more breaks into one. In other words, it is hard to estimate how many breaks are missing by fitting a normal distribution to the histogram of the detected breaks.

If you do the math, as we do in Section 6 of the article, it is perfectly possible to miss half of the breaks, even for a dense observational network.


Figure 2. Histogram of detected break sizes for the lower USA.
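
The full computation is in the paper, but a toy sketch already shows how easily such large missing fractions arise. Assume a crude detection rule: a break is found when the jump between the segment means on either side exceeds twice its sampling uncertainty. All numbers besides the break and noise sizes are invented.

    import numpy as np

    rng = np.random.default_rng(1)

    sigma_break = 0.5    # typical break size in K, about the RD size found above
    sigma_noise = 0.79   # noise level of a difference series, sqrt of 0.62 K^2
    m = 20               # invented number of usable values per homogeneous segment

    sizes = rng.normal(0.0, sigma_break, 1_000_000)

    # The jump is estimated as the difference of two segment means; the
    # standard deviation of that estimate is sigma_noise * sqrt(2 / m).
    threshold = 2.0 * sigma_noise * np.sqrt(2.0 / m)
    print("fraction detected:", np.mean(np.abs(sizes) > threshold))

With these numbers only about a third of the breaks pass the threshold, and the detected fraction depends strongly on the segment length and the noise level.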

Final thoughts

This is a new methodology; let's see how it holds up when others look at it, with new methods, other assumptions about the nature of inhomogeneities, and other datasets. Separating Random Deviations and Brownian Motion requires long series. We do not have that many long series, and you can already see in the figures above that the variance of the spatiotemporal differences for Germany is quite noisy. The method thus requires too much data to apply it to networks all over the world.

In Lindau and Venema (2018) we introduced a method to estimate the break variance and the number of breaks for a single pair of stations (but not BM versus RD). This needed some human inspection to ensure the fits were right, but it does suggest that there may be a middle ground: a new method that can estimate these parameters from smaller amounts of data and can be applied worldwide.

The next blog post will be about the trend errors due to these inhomogeneities. If you have any questions about our work, do leave a comment below.


Related post

Trend errors in raw temperature station data due to inhomogeneities


References

Lindau, R. and Venema, V., 2020: Random trend errors in climate station data due to inhomogeneities. International Journal of Climatology, 40, 2393–2402. Open Access. https://doi.org/10.1002/joc.6340

Lindau, R. and Venema, V., 2019: A new method to study inhomogeneities in climate records: Brownian motion or random deviations? International Journal of Climatology, 39, 4769–4783. Manuscript: https://eartharxiv.org/vjnbd/ https://doi.org/10.1002/joc.6105

Lindau, R. and Venema, V.K.C., 2018: The joint influence of break and noise variance on the break detection capability in time series homogenization. Advances in Statistical Climatology, Meteorology and Oceanography, 4, 1–18. https://doi.org/10.5194/ascmo-4-1-2018

Menne, M.J. and Williams, C.N., 2005: Detection of undocumented changepoints using multiple test statistics and composite reference series. Journal of Climate, 18, 4271–4286. https://doi.org/10.1175/JCLI3524.1

Menne, M.J., Williams, C.N. and Vose, R.S., 2009: The U.S. Historical Climatology Network monthly temperature data, version 2. Bulletin of the American Meteorological Society, 90, 993–1008. https://doi.org/10.1175/2008BAMS2613.1

Venema, V., O. Mestre, E. Aguilar, I. Auer, J.A. Guijarro, P. Domonkos, G. Vertacnik, T. Szentimrey, P. Stepanek, P. Zahradnicek, J. Viarre, G. Müller-Westermeier, M. Lakatos, C.N. Williams, M.J. Menne, R. Lindau, D. Rasol, E. Rustemeier, K. Kolokythas, T. Marinova, L. Andresen, F. Acquaotta, S. Fratianni, S. Cheval, M. Klancar, M. Brunetti, Ch. Gruber, M. Prohom Duran, T. Likso, P. Esteban, Th. Brandsma, 2012: Benchmarking homogenization algorithms for monthly data. Climate of the Past, 8, 89–115. https://doi.org/10.5194/cp-8-89-2012

Tuesday, 11 February 2020

Bernie Sanders is more electable than Joe Biden and will win

Bernie Sanders will become the 46th US president.

After Iowa and so many good New Hampshire polls for Sanders, it is about time to present my prediction for the 2020 presidential election before it is no longer an interesting take. I try to write about such matters only when I think the mainstream opinion is wrong, and the published opinion is wrong about Sanders' electability.

Full disclosure: I hope Sanders or Warren wins; the biggest problem America faces is crony capitalism. Systemic corruption is the foundation of nearly all US problems, which spill over into the world, including insufficient climate action. Given this bias I will try to quantify as much as possible and give my sources.

Unfortunately it is not guaranteed Sanders will win, and it is hard to quantify, but to go on the record with a clear prediction, let me state that he has a 54% chance of winning. This is based on a 60% chance of winning the primary and then a 90% chance of winning the general. This makes it a probabilistic prediction, just like "there is a 70% probability it will rain tomorrow", which needs multiple predictions to validate. For validation, you could combine it with my previous political predictions going against the mainstream:

1. I already gave my warning of clear and present danger before the 2016 election: "there is now a real possibility Trump could become president". In the post you will find the reasons why the terrible pundits in the US media were wrong.

2. Another prediction was that the UK election in 2017 would be a lot closer than poll whisperer Nate Silver predicted, because he ignored comrade trend. (He may be an incompetent establishment pundit, but he is really good with numbers, so this was an interesting prediction.)

I am confident that Sanders would win an election against Trump (90%), but even if it looks good now, winning the primary (60%) is harder, because TV news keeps repeating that Sanders cannot win the general election, as far as I have seen mostly without arguments and sometimes with very cherry-picked or hacky evidence.

The power is shifting from corporate media to social media, independent media and membership-supported media. The media and candidates can no longer be sure to get away with misinformation without risking their reputation. Although sometimes they slip into old patterns and claim that they said X in 1976, and I am shouting at my monitor that everyone has seen the video of them saying Y.

The power is also shifting from big donors to crowdfunding. Even in the face of rising inequality, technology has made small donations so easy as to be competitive. Fortunately, to spread the truth you also need less money than to spread lies, and presidential candidates get a lot of free media.

As a moving target it is hard to say how much difference this power shift makes in 2020; we can be sure the donor class and the media will throw the kitchen sink at Sanders. They hurt themselves doing so, but they despise him from their corporate core to their high-dollar hosts and guests. So I am not as confident about my primary prediction, not knowing how this will play out.

Sanders Beats Trump

The media are sure Sanders cannot win because Republicans would call him a socialist. One often has the impression that they and the Democratic leadership think you are not allowed to reply when Republicans say something. At every primary debate Sanders thus gets his socialism question and gives a strong answer, which the journalists have apparently forgotten again by the next debate. Maybe they are trying to train us into thinking that replying, like resistance, is futile.

Democratic leadership would like Sanders to cower, just like them, to be weak, to defend himself against the unfair accusation of being a socialist with some soft-spoken words. But if you are defending, you are losing. It is much stronger to accept the label and fill it with content.

Is there a better campaign than replying and telling the American people about the high quality of living in social democratic countries, about the higher salaries for workers, about their vibrant market economies, about their high ranking in global indices for entrepreneurship and freedom, about their well-trained competitive work forces, about being treated with respect, about a government that works for all and not just for the donors? Even Danish politicians have started helping:



So what is the quantitative evidence on whether Sanders or Biden is the stronger candidate?

Policies

1. The policies of Sanders are the most popular ones. This is already clear from most presidential candidates adopting, or claiming to adopt, the most popular Sanders policies. To be fair, the difference with Biden, averaged over all policies, is just one percent, but it does not go in the direction the pundits would like you to think:
"Senator Bernie Sanders of Vermont edges out his Democratic opponents on health care, immigration, the environment and the economy, according to a Reuters/Ipsos poll. ... For health care, arguably Sanders' staple issue, the Senator claims 27.1 percent support, eclipsing Biden and Warren by 9 percent and 14.6 percent, respectively. On the environment, Sanders again edges out Biden by 9.7 percent and Warren by 8.2 percent. He also comes out ahead on the economy and jobs."
This week's Quinnipiac University poll asked Democratic and Democratic-leaning voters: "Regardless of how you intend to vote in the Democratic primary for president, which candidate do you think - has the best policy ideas?" Sanders was the choice of 27%, Warren of 16% and Biden of 14%. The voting intentions from the same poll are 25%, 14% and 17%, respectively, which are higher for Biden than his policy support and lower for Sanders. This suggests that many people unfortunately plan on voting for a candidate they agree with less because they believe the media on electability.


Money and enthusiasm

2. Biden is losing the Money Primary. In the fourth quarter of 2019 he was third with respect to donations. (In the third quarter he was only fourth.)

In the fourth quarter Sanders had 1.8 million individual donors, while Biden had only half a million. This is a sign of enthusiasm, just like the 10 million calls to voters made by Sanders volunteers.


The number of donors. Sanders is leading in 46 states. Graphic: The New York Times.


Electability according to the markets

3. The betting market PredictIt finds it most likely that Sanders will win the primary. The graph below shows the price of shares for Sanders winning, which is equal to the predicted probability, in percent, that he will win. Sanders has a 45% probability of winning the primary, and another betting market gives him a 29% probability of winning the presidency.


The betting market PredictIt for the Democratic primary over the last 90 days. The price of a share in cents is the percentage chance a candidate will win the primary.


Following the rules of conditional probability, the probability of winning the presidency, P(presidency), is the probability of winning the primary, P(primary), times the probability of winning the presidency after having won the primary, P(presidency|primary). As an equation this reads:

P(presidency) = P(primary) x P(presidency|primary)

From this it follows that:

P(presidency|primary) = P(presidency) / P(primary)

The numbers for Sanders are:

P(presidency) / P(primary) = 29% / 45% = 64%

The probability of Sanders winning the presidency if he is the nominee is thus 64%. The same numbers for Biden are:

P(presidency|primary) = P(presidency) / P(primary) = 5% / 12% = 42%

So people willing to put money on their political assessment do not agree with the pundit class: they see Sanders as about 50% more electable than Biden.
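
For transparency, here is the same arithmetic as a few lines of Python, using the market prices quoted above (which of course change daily):

    # Implied electability from the betting-market prices:
    # P(presidency | primary) = P(presidency) / P(primary)
    markets = {
        "Sanders": {"presidency": 0.29, "primary": 0.45},
        "Biden":   {"presidency": 0.05, "primary": 0.12},
    }

    for name, p in markets.items():
        print(name, round(100 * p["presidency"] / p["primary"]), "%")
    # Sanders: 64 %, Biden: 42 %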

To be fair, like the pundits, I also disagree with the betting market. It gives Trump a 54% chance of winning. That is preposterous for a historically unpopular president, but betting against Big Money is a losing strategy in the short term. One would have to hold the bet until election day to win, and the chance of Trump winning is unfortunately not zero.

National head to head polling 2020

4. There is the simple polling of head-to-head races. According to a recent Survey USA poll, this is evidence favoring Sanders.
The poll found that 52 percent of voters would choose Sanders and 43 percent Trump, giving the veteran senator a nine-point lead. Next was former vice president Joe Biden at 50 percent to Trump's 43 percent, a seven-point lead.
Looking back at older similar polls, the situation can also be reversed. On average I see no difference between the two candidates.

I personally do not like these head-to-head polls at this stage. Some candidates do quite poorly in head-to-head polls against Trump. If you look in detail, you will find that Trump gets about the same percentage against all candidates. What varies is how many people prefer the Democratic candidate or are undecided. My impression is that this mostly measures name recognition.

National head to head polling 2016

5. It is hard to imagine having the choice between candidate X and Trump when the election is almost a year out. During the primary, part of the supporters of candidate Y will say they do not know or would vote Trump, but in the end they vote for their party.

However, for 2016 we have similar polling closer to the date of the election. Biden is naturally not Clinton, but in May 2016 PolitiFact found that Sanders beat Trump more easily, by 3 to 12 percentage points more than Clinton.

Just a few days before the election a Gravis poll showed that Clinton would beat Trump by 2%, while Sanders would beat Trump by 10%. Caveat: the questions seem neutral, but the poll was commissioned by a politician who endorsed Sanders.

While both Trump and Clinton had net negative favorability ratings, Sanders' net favorability grew during the campaign as people became more familiar with his ideas, and ended at plus 17%.

Michael Bloomberg acknowledged these facts right after the 2016 election: “Bernie Sanders would have beaten Donald Trump. Polls show he would have walked away with it. But Hillary Clinton got the nomination.”

Head to head polling swing states

6. Swing states show another picture than the national polls. Which states are swing states will depend on the candidate, but to avoid cherry-picking, let's take the ones from the Cook Report. Their toss-ups for the Electoral College are: Arizona, Florida, North Carolina, Pennsylvania and Wisconsin.

Unfortunately, the only head-to-head state polling we have are the averages of Real Clear Politics, which do not take the quality of the polls into account like 538 usually does. This makes manipulating public opinion with bad polls easier.

State             Biden vs Trump              Sanders vs Trump
                  Trump   Biden   Net         Trump   Sanders   Net
Arizona           47.0    47.3    +0.3        48.5    43.5      -5.0
Florida           45.3    48.0    +2.7        47.0    47.0      Tie
North Carolina    44.8    48.2    +3.4        46.0    47.0      +1.0
Pennsylvania      43.3    50.3    +7.0        44.3    48.0      +3.7
Wisconsin         43.3    47.0    +3.7        44.7    46.7      +2.0
Average           44.7    48.2    +3.5        46.1    46.4      +0.3

Here Biden has a small advantage. Sanders would also win most swing states, but with less of a margin, according to these polls.

Personality

7. Sanders is personally very popular with Democrats and Americans. For example, asked "which candidate do you think - cares the most about people like you?", 24% reply Sanders and 19% Biden in a Quinnipiac University poll.

Asked which candidate is more honest in the same poll, 25% reply Sanders and 14% Biden. Thus Americans do not agree with political insiders and TV pundits, who clearly dislike Sanders. Their dislike has a good reason: he would upend their corrupt self-dealing system.

Summary of the evidence

These are the more or less objective pieces of data we have; the rest is more political judgement. So let's summarize the evidence.

When it comes to policies, Sanders is more popular. The money primary shows the money and enthusiasm are with Sanders. Looking at what betting markets expect to happen, Sanders is more electable. And Americans see Sanders as someone who cares about them and is honest. Biden also has good numbers, but not as good.

The mixed evidence comes from head-to-head polling. In the swing states the Real Clear Politics averages give Biden an advantage; nationally, the polling suggests that Sanders would beat Trump in 2020 and would have obliterated Trump in 2016.

Hope and change

On to the more subjective political assessment.

All the polling indicates that Americans are not happy and want change. Obama successfully campaigned on hope and change. That was not how he governed, but it was how he won elections as a skilful campaigner.

Biden runs on "nothing will fundamentally change", like Clinton ran on "America is already great". In 2016 Clinton won the voters who thought their candidate "Cares about people like me", "Has the right experience" or "Has good judgment", but Trump won "Can bring needed change" with 83%, according to exit polling.

NYT and Trump endorsements

Intriguingly, Biden did not even get the NYT endorsement; in fact, he was not even in their top four, although they are his people. The NYT endorsement went to Warren and Klobuchar.

In public Trump may ignore Sanders, so much so that I have the impression he deeply fears Sanders. But in private, in a secret recording by his Ukrainian partner in crime, Lev Parnas, Trump admits that he fears Sanders the most. He may be an incompetent, lazy fool, but he does know marketing.

Socialism

In the introduction I already argued that the same old empty attack of Republicans calling Sanders a socialist is welcome. There is also polling on this question. Data For Progress polled people on whether they preferred Trump or Sanders with three different formulations:
  • No information: “If the 2020 U.S. Presidential election was held today, who would you vote for if the candidates were Bernie Sanders and Donald Trump?”
  • Partisan cues: “If the 2020 U.S. Presidential election was held today, who would you vote for if the candidates were Democrat Bernie Sanders and Republican Donald Trump?”
  • Socialists and billionaires: “If the 2020 U.S. Presidential election was held today, who would you vote for if the candidates were Democrat Bernie Sanders, who wants to tax the billionaire class to help the working class and Republican Donald Trump, who says Sanders is a socialist who supports a government takeover of healthcare and open borders?”
Calling Sanders a socialist did not hurt him. The only thing that, ironically, hurts a little is being called a Democrat.



Political record and campaign


A debate between Biden and Trump would look like the fight between Konstantin Chernenko and Ronald Reagan in the Two Tribes video below. Biden runs on his record. He is thus vulnerable to what Trump does best and enjoys most in life: denigrating other people in the media.


Frankie Goes To Hollywood - Two Tribes

Sanders runs on a policy platform and is thus less vulnerable to personal attacks. It is a platform with many policies Trump ran on in 2016 but did not execute, because he campaigned as a populist but governs as an establishment Republican plus hatred.

In times where people identify as Republican because they hate Democrats, and identify as Democrat because they hate Republicans, it is difficult to win elections by advocating the policies of the other side. There are naturally policies that appeal to large majorities and may thus also convince people from the other side.

That such policies are not implemented yet is because of the corrupting influence of money in politics and media. A politician who is free from such influences can make a highly attractive policy platform. A politician who floated up due to their support for the donor class and corporations is restricted. Corporations are not charities; they expect a return on investment. The donor class has different interests and world views than the rest of us. A policy package designed for them will be less attractive to voters.

The upside is the money, which clearly helps the campaign, as we can see in billionaire Bloomberg buying a preposterous vote share. In the past, voters may have naively expected that the money did not have much influence, and it also took time for the political class to become corrupted by it. But the distance between Washington DC and America has grown together with the length of the list of popular policies that have no chance of passing Congress.

Even if Biden promised the same policies in the primary as Sanders, people by now expect a general election pivot and a cabinet full of people from the short lists of the donors. Consequently, there is now a much larger bonus for a reputation of honesty and consistency. Thus a people-powered campaign needs less money in 2020.

Imagine Trump legalizing marijuana and removing American troops from Iraq. That would sink a Biden general election campaign. Biden not only voted for the Iraq war; already five years before it, in 1998, he was making the case for a ground war.

There is a lot in Biden's record that can be used by the Trump campaign to suppress Democratic turnout using targeted social media ads. Workers will get ads about Biden's position on Permanent Normal Trade Relations with China and on NAFTA. Poor and old people will get ads about Biden trying to reduce Social Security, Medicare and Medicaid.

Joe Biden lied about participating in the Civil Rights movement and admitted as much in 1987, but in this campaign he has again started lying about it. Trump will not care about the hypocrisy of pointing to such problems given his own abysmal record. His authoritarian followers will not see the targeted ads and would also not really care.

Sanders can hammer Trump on the promises Trump made and broke. Trump's budgets reduced Social Security, Medicaid and Medicare, which he had promised to protect. Trump's trade deals are almost the same as the old ones and were negotiated with corporations at the table. Trump promised that his healthcare plan would cover everyone and be cheaper, while millions lost their health insurance.

The Democratic establishment's project fear likes to name-drop candidates like George McGovern, but somehow does not mention Hillary Clinton, John Kerry or Al Gore. They especially do not mention Franklin D. Roosevelt, who won the presidency four times and whose New Deal has much in common with Sanders' platform. On the Republican side it would likewise be hard to make the case that Republicans who agreed with Democrats won, while those who fought Democrats lost. Quite the opposite.

Winning the primary election


Polling aggregator 538 converts the polling information into a probability of winning the nomination by winning a majority of the delegates. The methodology seems sound and is likely the best estimate we have.



Nate Silver of 538 seemed a bit dismayed at how much the prediction changed after Iowa. The model gives a bonus for winning Iowa, which traditionally helps candidates in future races. Silver wondered whether the bonus was too large, given that Sanders and Buttigieg are tied on one winning metric (the state delegate equivalents). The bonus is meant to take the positive media coverage into account, but the media put much emphasis on the tie and less on Sanders winning the popular vote (in the first and final round), while Silver's model gave all three metrics equal weight.

My impression is that the jump was mostly so large because Biden lost so enormously and is on track to also lose the next two primaries. At the same time, the competitors of Biden do not have much chance of winning. Buttigieg may do well today in New Hampshire, but hardly has any staff in subsequent states and nearly no support among non-white voters. Amy Klobuchar is rising, but still polling badly nationally.

In the betting market, billionaire Bloomberg is the runner-up after Sanders. He has spent $200 million on ads and bought himself 12% in national polling. This is still rising, and it thus makes sense that a market would give him a bonus over polling. But as soon as he becomes a serious candidate, people will bring up his atrocious record; today #BloombergIsARacist is trending as an appetiser. The media will be nice to him, as Bloomberg is expected to spend a billion on ads and every media outlet wants to get some of that. But I expect that social media will keep him small.

Thus I do not see Buttigieg, Klobuchar or Bloomberg winning, but they all have a chance of succeeding Biden and will likely stay in the race a long time, splitting and wasting establishment votes.

Biden's campaign runs on money, which he only gets while he looks likely to win; the donors want a return on investment. So he may be forced out of the race, although I see him as the only serious competitor to Sanders. Warren might see it coming that she will stay below the 15% threshold in most primaries and choose to combine her campaign with Sanders'. However, she could also wait for Biden to drop out and may then have a chance.

[ Update after the New Hampshire primary. 538 now has "no one" as the most likely winner of the primary, but with Sanders a close second.

Sanders is the clear frontrunner in the current crowded field, but tends to get only a quarter of the vote. So it remains interesting what will happen when the field winnows. Some pundits simply add up all the other "moderate" candidates; that is not how it works.

A recent YouGov/Yahoo head-to-head poll of the main primary candidates suggests that Sanders would also win in that situation. Sanders would beat Klobuchar by 21 points, Bloomberg by 15 points, but also Biden by 4 points and Warren by 2 points. Warren would also beat all the other candidates. Life-long Republican Bloomberg would lose against all other candidates. Biden took some hits, but of the "moderates" he is still the strongest competition for Sanders. ]

Nate Silver gives Sanders a 46% chance of winning. Silver's model has an additional 27% chance that no one will directly win a majority. If Sanders has a clear plurality, nominating someone else would be handing the presidency to Trump. So also in the case of a contested convention Sanders has a good chance of winning. Furthermore, polling for Sanders tends to go up in the weeks before primaries; that is the moment people start paying attention and talking to each other. So I feel my prediction of a 60% chance of Sanders winning the primary is reasonable.

If we combine that with a 90% chance of winning the general election, the chance of stopping the class warfare against us is 54%. Let's hope for the best.

Related reading

Sunrise Movement endorses Bernie Sanders for President: "Senator Sanders has made it clear throughout his political career and in this campaign that he grasps the scale of the climate crisis, the urgency with which we must act to address it, and the opportunity we have in coming together to do so."

USA Today: Moderate Democrats have a duty to consider Sanders. He has a clear path to beating Trump. "This senator isn’t even my favorite senator running for the nomination. Yet one reason I have to seriously consider Sanders is that he has the clearest path to uniting the Democratic Party and ousting the evil clown in the Oval Office."

538: You’ll Never Know Which Candidate Is Electable

MostElectable.com

Bernie Sanders leads Donald Trump in polls, even when you remind people he’s a socialist. Socialism is unpopular, but America’s leading socialist isn’t.

Shaun King: 2 truths and 31 lies Joe Biden has told about his work in the Civil Rights Movement

Leftism Isn’t Very Appealing to Nonvoters. But Bernie Sanders Is.

Take the Money and Run. The 2020 Democratic primary has been as much about how candidates raise money as what they want to do once in office.

538: What Fourth-Quarter Fundraising Can Tell Us About 2020

Because it does not fit the stereotype: Bernie Sanders Leads Trump in Donations From Active-Duty Troops

If you want a counter-view there is this terrible piece by Jonathan Chait, be warned that it is filled to the brim with misinformation: Running Bernie Sanders Against Trump Would Be an Act of Insanity. He is also the author of "Liberals Should Support a Trump Republican Nomination". Countering all misinformation would be another blog post, luckily Jacobin did a part: "Jonathan Chait Is Wrong About Everything, Including Sanders' Electability."

USA Today: Trump loses almost every matchup with top 2020 Democrats in Florida, Wisconsin and Michigan, polls find