
Tuesday, 4 March 2014

Falsifiable and falsification in science

"You keep using that word. I do not think it means what you think it means."

In a recent post, Interesting what the interesting Judith Curry finds interesting, I stated that "it is very easy to falsify the theory of global warming by greenhouse gases." The ensuing discussions suggest that it could be interesting to write a little more about the role of falsifiable hypotheses and falsification in science. The main problem is that people confuse falsifiable and falsification, and often do not even seem to notice there is a difference, even though the two have very different roles in science.

The power of science and falsification are beautifully illustrated in this video by asking normal people on the street to discover the rule behind a number sequence (h/t U Know I Speak Sense).



Falsifiable

Karl Popper asked himself just one question: what distinguishes a scientific hypothesis from an ordinary idea?
Popper's beautiful thesis was that you can distinguish between a scientific and a non-scientific statement by asking yourself whether it can be falsified. If it cannot, it is not science. Thus the worst one can say about an idea that is supposed to be scientific is that it is not even wrong.

Important side remark: please note that non-scientific ideas can also be valuable. Popper's philosophy itself is not science, just like most philosophy, political ideas, literature and religion.

And please note that wrong hypotheses are also scientific statements; that they are wrong automatically shows that they can be falsified. Even falsified hypotheses are still scientific hypotheses and can still be useful. A good example is classical mechanics. This illustrates that Popper did not think about whether hypotheses were right or wrong (falsified), or useful or not, but about whether a statement is scientific or not.

To be falsifiable, falsification only needs to be possible in principle. Whether falsification would be hard or easy does not matter for the question whether something is science. This is because the main value of the criterion is that it forces you to write down very clearly and very precisely what you are thinking. That allows other scientists to repeat your work, test the idea and build upon it. It is not about falsification, but about clarity.

That also implies that the daily job of a scientist is not to falsify hypotheses, especially not solid and well-validated ones. Scientists are also not writing down new falsifiable hypotheses most of the time; in fact, they rarely do so. Those are the rare Eureka moments.

The terms scientist and science are clearly much broader and also much harder to capture. The ambitious William M. Connolley set out to define science and what a scientist does in a recent post. Definitely worth reading, especially if you are not that familiar with science. Disclaimer: not surprisingly, the aim was not completely achieved.

Psycho-analysis

A classical example for Popper of a non-scientific hypothesis is Freud's psycho-analysis. The relationship between the current psychological problems of a patient and what happened long ago in the patient's childhood is too flexible and not sufficiently well defined to be science. That does not mean that what happens to a child is not important; many modern findings point in that direction (Joachim Bauer, 2010). If someone else succeeded in making Freud's ideas more specific and falsifiable, that would even be a valuable contribution to science. It also does not mean that psycho-analysis does not help patients. Finally, it also does not mean that it is wrong; rather, it is not even wrong. It is too vague.

Morphic fields

Another example is Rupert Sheldrake's idea of morphic fields. Sheldrake claims that once an idea has been invented, it becomes easier to reinvent it. He has a large number of suggestive examples where this seems to be the case. Thus there is a lot of information to validate his idea.

The problem is that it is impossible to falsify the idea. It is, again, too vague: if you do not find the effect in an experiment, you can always claim that the effect is smaller, that the experiment was not sensitive enough or that it was not well executed.

When I was studying physics at Groningen University, Sheldrake gave a talk and afterwards naturally got the question whether his ideas were falsifiable. He dodged the question and started talking about Thomas Kuhn's philosophy of science and paradigm changes, which shows that in practice it can be hard to determine whether an idea is falsified. However, whether an idea is falsifiable is clearly a different question from how falsification works, which will be discussed below. Then Sheldrake started fueling tribal sentiments by complaining that only physicists are allowed to have hypotheses with fields; why not biologists? Discrimination! As the climate "debate" illustrates, adding some tribal conflict is an effective way to reduce critical thinking.

This does not mean that Sheldrake's ideas may not turn out to be valuable. The list of examples that validate his ideas is intriguing. This may well be a first step towards a scientific hypothesis. That is also part of the work of a scientist: to translate a creative, fresh idea you got during a hike into a solid, testable scientific idea. Morphic fields are, however, not yet science.

Anthropogenic global warming

The hypothesis that the man-made increase in the concentration of greenhouse gases leads to an increase in the global mean temperature can be falsified and is thus a scientific hypothesis. There is no need to go into details here, because Hans Custers just wrote an interesting post, "Is climate science falsifiable?", which lists ten ways to falsify the "AGW hypothesis". One would have been sufficient.

A clear example: if the average world temperature dropped one degree, back to values before 1900, and stayed there for a long time without other reasons for the decrease (e.g. volcanoes, the sun, aerosols), the theory would be falsified. To get to ten ways, Custers has to come up with rather adventurous problems that are extremely unlikely because so much basic science and so many experiments would need to be wrong.

Seen in this light, the climate ostriches are almost right: it is highly unlikely that the theory of man-made global warming will be refuted. That would require highly surprising new findings, and in most cases it would require basic physics, used in many sciences, to be wrong. However, the fact that falsification is highly unlikely in practice, because there are so many independent lines of evidence and the hypothesis is well nested in a network of scientific ideas, does not make it theoretically impossible. Thus AGW is falsifiable.

Falsification

"It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong." (Richard P. Feynman)

This quote is a favorite of the climate ostriches. Unfortunately, falsification is a little more complex in practice.

While two reasonable persons can likely agree on whether a hypothesis is falsifiable, falsification itself is a much more complicated matter. The basic problem is that you never test just one hypothesis, but always a cluster of them. Such a cluster is what Thomas Kuhn calls a paradigm in his book, The structure of scientific revolutions. Even if you came up with a smart experiment that only tested one hypothesis, you would still rely on the definitions of the terms in the hypothesis, on traditions of how to measure the variables in question, and so on.

Such a cluster can go into a lot of detail. For example, the researchers at CERN who thought they might have measured neutrinos going a little faster than light were also testing the hypothesis that they had connected a coaxial cable correctly. This detail turned out to be the problem, and not Einstein's theory of relativity. That is a good example, from Richard Feynman's own field, that falsification is not trivial, as I am sure Feynman himself realized.

Flappy bird

In sciences dealing with the world outside the laboratory, even larger sets of hypotheses are tested simultaneously. Bart Verheggen gave a clear example in his post A quick ‘n dirty guide to falsifying AGW. He wonders whether a flying bird can be used to falsify the theory of gravity. Doesn't the theory state that objects with mass fall down?

However, a bird in the sky is not just a point mass x meters above the Earth in a vacuum. There are more forces at play. You can see that as an ad-hoc fix of the theory of gravity and a reason to develop a better theory. Most people prefer to see the bird as an exception, to study how the exception can be explained, and to add an additional theory about aerodynamic forces to the explanation. More generally, if you notice something that seems to falsify your hypothesis, that is a reason to study why and improve your understanding of the problem; it is not a reason to immediately reject the hypothesis.

This is an example we know very well, which makes it strange to see a bird as falsifying gravity, but in practice it can be hard to judge whether something is an ad-hoc fix or a legitimate additional hypothesis. This is especially hard during so-called paradigm changes, periods in which important hypotheses are called into question and sometimes replaced by better ones.

Climate "debate"

What the climate ostriches see as falsification is typically similar to the flying bird. Sometimes it is an indication that reality is a bit more complicated. Sometimes it is not even that, and the ostriches feel that something is a contradiction when it is not. An example of the latter is the feeling that the CO2 concentration is too small to matter.



Whereas 280 parts per million can have quite an influence.


An example of reality being a bit more complicated are the claims that increases in Antarctic sea ice refute the AGW hypothesis. Maybe people think so because it suggests that the temperature in the Antarctic is not increasing. However, the AGW hypothesis only claims that the average global temperature is increasing; local variations are not excluded. Furthermore, the Antarctic temperature is actually increasing, according to the Berkeley Earth Surface Temperature project; see below.

Trying to understand the increase in sea ice, we may learn more about the climate system and our observations. It may be related to the water becoming fresher due to more melt water from the land ice; fresh water freezes more easily than salty water. It may be related to changes in the circulation. It may also be due to inaccurate observations: recently a non-climatic change was found, due to a change in the satellites used, that changed the trend considerably. Whatever the reason turns out to be, given the observed Antarctic temperature increase, I do not expect that resolving this issue will refute the AGW hypothesis.



Another favorite "falsification" is the apparent slowdown in the trend of the global mean temperature since the strong El Nino year 1998. I must say, I do not even know whether it is right to talk about a slowdown. The period is so short that the uncertainties in the estimated trend are large. That there was no temperature increase cannot be excluded statistically, but a continuation of the previous trend cannot be excluded either.
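To get a feel for how large the trend uncertainty is over such a short period, here is a minimal sketch with synthetic data. The trend, the noise level and the periods are round numbers I assume for illustration, and the simple regression error ignores autocorrelation, so the real uncertainty is even larger than what this gives.

import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(42)
years = np.arange(1970, 2014)
assumed_trend = 0.017                            # K per year, illustrative value
noise = rng.normal(0.0, 0.1, size=years.size)    # interannual variability, K
temperature = assumed_trend * (years - years[0]) + noise

for start in (1970, 1998):                       # long period versus short period
    sel = years >= start
    fit = linregress(years[sel], temperature[sel])
    print(f"{start}-2013: {10 * fit.slope:+.2f} "
          f"+/- {20 * fit.stderr:.2f} K per decade (2 sigma)")

The estimated uncertainty for the short period comes out several times larger than for the full period, and the real uncertainty is larger still once the autocorrelation of the data is taken into account.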

And the "slowdown" in air temperature would only be a sign that the AGW hypothesis is wrong if it were a sign that the heating of the climate system had stopped as well. However, the heating of the ocean is continuing, as are the sea level rise and the melting of the Arctic and total sea ice. In fact, a back-of-the-envelope computation shows that this "slowdown" of the air temperature represents at most a deviation of about 1 in a thousand of the total anthropogenic heating of the climate system. Not the stuff refutations are made of.
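For those who want to check the order of magnitude, here is one way to spell that back-of-the-envelope computation out. The numbers are round values I assume for illustration; only the order of magnitude matters.

ATMOSPHERE_MASS = 5.1e18   # kg, total mass of the atmosphere
CP_AIR = 1.0e3             # J/(kg K), specific heat of air, rough value
MISSING_WARMING = 0.05     # K, assumed shortfall of the surface air temperature

heat_missing_in_air = ATMOSPHERE_MASS * CP_AIR * MISSING_WARMING   # ~2.6e20 J

TOTAL_HEATING = 2.5e23     # J, rough accumulated anthropogenic heating of the
                           # climate system over recent decades, mostly the ocean

print(heat_missing_in_air / TOTAL_HEATING)       # of the order of 1 in a thousand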

It is an interesting example for better understanding the fluctuations of the global temperature on decadal time scales. It seems that the slowdown can be explained by more heat going into the ocean, especially into the Pacific due to a special wind pattern. And there are many other smaller contributions (volcanoes, the sun, the lack of Arctic observations). So many, actually, that Matthew England wonders why the temperature did not decrease more (also here in German).

Paradigm changes

A famous example of a paradigm change is the transition from a geocentric worldview (the Earth at the center of the universe) to a heliocentric one (Copernicus). The Copernican model was simpler, but, if I remember correctly, initially also less accurate than the geocentric model. That made the choice of the optimal model subjective. Adding up simplicity and accuracy is like adding apples and oranges.

With Galileo Galilei's observations of the moons around Jupiter and the phases of Venus, the advantages of the heliocentric worldview became clearer. The computations also became more accurate, especially when the circular orbits of Copernicus were replaced by the elliptical ones of Kepler. And with classical mechanics and gravity we can now also understand the orbits. Thus by now it is clear which theory is best.

This route is probably typical. In the beginning, during the paradigm change, there is real reason for debate; deciding which hypothesis is best is partly a matter of comparing apples and oranges. However, after some time the evidence accumulates and it becomes clear which hypothesis is best, to the point that a normal person would say the idea is right. A scientist should avoid such formulations: we now know that the sun is not the center of the universe and that classical mechanics has its limitations.

The structure of scientific revolutions gives many more such examples. Thomas Kuhn talks about paradigm changes in the context of a few large revolutions in science, and he calls the work in the periods between the revolutions normal science and puzzle solving. There is some truth to that: a lot of scientific work is figuring out what the consequences of the existing hypotheses are, which you can call puzzle solving. Although in the case of a puzzle you know there is a solution, and in science part of the job is finding interesting, solvable puzzles.

Furthermore, I would argue that "paradigm changes" also happen at smaller "disruptions" of the scientific network of ideas, down to single articles that make an interesting contribution, a little above “run-of-the-mill”.

For example, when I started using the surrogate data approach to generate 3D cloud fields, I met with quite some opposition in the beginning. People wondered why I did not use the traditional approaches: fractal clouds and clouds from dynamical cloud models (LES). In the end, people realized that surrogate clouds are very well suited for empirical studies because you can easily make clouds that are similar to the ones observed. It likely also helped that I found an easier algorithm to generate the surrogate clouds. My clouds are not something that will make the history books, but within the field of 3D radiative transfer and clouds it was a minor revolution.
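For readers curious what a surrogate data method looks like, below is a minimal one-dimensional sketch of iterative amplitude adjusted Fourier transform (IAAFT) surrogates in Python. The cloud-field versions work on 2D and 3D fields and differ in detail, so take this only as an illustration of the idea: a surrogate series with (approximately) the same value distribution and power spectrum as the original, but otherwise randomized.

import numpy as np

def iaaft_surrogate(x, n_iter=100, seed=0):
    """One-dimensional IAAFT surrogate: same value distribution and
    (approximately) the same power spectrum as x, otherwise random."""
    rng = np.random.default_rng(seed)
    x_sorted = np.sort(x)
    target_amplitudes = np.abs(np.fft.rfft(x))
    s = rng.permutation(x)                     # start from a random shuffle
    for _ in range(n_iter):
        # impose the power spectrum of x, keeping the current phases
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(target_amplitudes * np.exp(1j * phases), n=x.size)
        # impose the value distribution of x by rank ordering
        ranks = np.argsort(np.argsort(s))
        s = x_sorted[ranks]
    return s

# toy example: a smooth, correlated signal and its surrogate
x = np.cumsum(np.random.default_rng(1).normal(size=256))
surrogate = iaaft_surrogate(x)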

I am thus not sure whether Kuhn's distinction between normal science (puzzle solving within the paradigm) and paradigm changes really exists. With sufficient domain expertise one probably also sees small "paradigm changes" in normal interesting scientific articles. That would also explain why scientists have often shown quite good intuition for which theory would in the end prove to be best.

During a paradigm change, it may not be clear whether a theory is falsified. Scientists consequently have more criteria to guide them through such rough times. They may have a preference for theories that are easy to falsify, in the sense that with few assumptions they make bold and broad predictions; theories that are very specific and easy to falsify are more likely to be right if they have not yet been falsified. Scientists also have a preference for elegant, beautiful theories, even if that is poorly defined and subjective. And later, when the dust settles, falsification is not that important, because it is typically quite clear which theory helps most in understanding the world.

The truth and the falsity of assertions

Many people, and surprisingly many scientists, do not like the idea of falsification. It sounds negative to claim that we can only prove an idea wrong. Once, I wrote a research proposal with falsification in the title. A colleague advised me to change this to validation. That was probably good advice, even if the proposal was still rejected.

I feel the issue is not so much one of proving ideas wrong, but, as I have repeated so often in this post, one of making precise statements.

Another reason is that people like the idea of something being right, of something being solid and eternal; that gives some hold in a complex world. Never Ending Audit calls it a PR problem that science can only lose. That is an aspect of science that is hard to explain; long before Popper it was clear that we can never be sure and that science progresses by continually trying to find errors in our current understanding. On the other hand, one should not exaggerate: as a rule of thumb, the longer a hypothesis has been part of the general understanding of a scientific field (the consensus), the less likely it is that you will find an error in it, especially if you are not a Feynman.

For all of science's talk about uncertainties, one should also not forget that science still provides an understanding of the world that is more solid than anything else. How can we explain this paradox?

First of all, I would argue that right and wrong are not very useful categories in science. It is important that scientific hypotheses and methods are precise (falsifiable), appropriate and produce new ideas. A flat Earth is wrong, but often a good assumption (to study how a car drives); a spherical Earth is wrong (but used in climate models*); even an elliptical Earth is wrong, and the true shape will be measured with more and more accuracy in the future. When tackling a problem, the question is thus not so much whether a hypothesis, method or dataset is right, but whether it is fit for a specific purpose.

Another good example is the 1-dimensional model of the greenhouse effect. This model is wrong: it should take the geographical variation in surface temperature, humidity and clouds into account, it should model the radiative transfer line by line in frequency, and it even assumes in the solar part that the Earth is flat. However, it is because of these simplifications that it is useful and helps us to understand the problem better. Theoretically, you could also make a model that is just as complex as the Earth, but that would not help much in understanding the greenhouse effect. The idea of a model is that it only contains the key processes needed to understand a certain question.
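To illustrate how useful such a "wrong" model can be, here is an even simpler sketch: a zero-dimensional energy balance with a single, fully absorbing atmospheric layer. The numbers are standard round values, and this toy is far cruder than the one-dimensional radiative-convective models meant above, but it already shows why an atmosphere that absorbs infrared radiation warms the surface.

SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.3       # planetary albedo

absorbed = S0 * (1.0 - ALBEDO) / 4.0            # mean absorbed solar flux
t_no_atmosphere = (absorbed / SIGMA) ** 0.25    # about 255 K
t_one_layer = 2.0 ** 0.25 * t_no_atmosphere     # about 303 K with one opaque layer

print(t_no_atmosphere, t_one_layer)
# The observed global mean (about 288 K) lies in between, because the real
# atmosphere is neither transparent nor a single fully opaque layer.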

Thus I would argue we should reduce the importance we put on being right or wrong and emphasize being useful, interesting, precise and similar characteristics.

Secondly, even if a hypothesis is found to be wrong, this typically does not change our understanding of everyday phenomena much. When classical mechanics was found wrong, no building or bridge collapsed and no artillery shell landed less precisely. If quantum mechanics is found wrong, your smart phone will still work and the internet will keep on buzzing. If we find out that the radiative transfer of heat radiation works differently, heat-seeking missiles will still work and the greenhouse effect will still exist (maybe some details and values would change).

If a hypothesis is found wrong, this will normally expand our understanding, making it more general and more precise. Things which are well understood today and which have been studied from a large range of angles will not suddenly change drastically, even if something big like a falsification happens. Fresh snow will still be white.

Philosophy of science

I hope that this post makes it clearer to outsiders how science works. For myself, as a scientist, I have the feeling that thinking a little about science in general is helpful when converting an idea into a work of science, especially when you do something that is relatively new or radical. And those are the best things in science, those are the moments for which one becomes a scientist.

And when you do something new, you typically cannot copy the methods and article structure of a previous similar study and slightly modify them; you will often have to do something very different from what exists. In that case knowing a bit of philosophy of science is very helpful; it helps you navigate in the dark. But stop before a philosopher starts talking about right and wrong.

* The Earth is elliptical because it spins and the centrifugal forces are strongest at the equator. If climate models assumed an elliptical Earth, they would also have to model these centrifugal forces. It turns out that these two factors compensate each other, and it is a good approximation to assume that the Earth is a sphere and ignore the centrifugal forces.
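A small numerical aside, just to get a feel for the sizes involved; the constants are standard values and the comparison is only meant to show the orders of magnitude.

OMEGA = 7.292e-5     # Earth's rotation rate, rad/s
R_EQUATOR = 6.378e6  # equatorial radius, m
G_SURFACE = 9.81     # surface gravity, m/s^2

centrifugal = OMEGA**2 * R_EQUATOR    # about 0.034 m/s^2 at the equator
print(centrifugal / G_SURFACE)        # about 0.3 percent of gravity

flattening = 1.0 / 298.26             # observed flattening of the Earth
print(flattening)                     # also a few tenths of a percent

Both effects are a few tenths of a percent, which is consistent with the footnote's point that neglecting both together is a good approximation.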

Related reading

William M. Connolley tries to explain what science is. One of his interesting attempts is seeing science as tinkering to improve the scientific literature, putting the literature at the center rather than the scientist (my rather liberal translation of his post). The post is a nice contrast to WUWT "science", which shows plots with obvious truths and does not embed these "new findings" in what is already known. And a contrast to climate ostriches who link directly to plots without the text that explains how the plot was computed and could explain how it changes the understanding in the scientific literature.

A quick ‘n dirty guide to falsifying AGW, where Bart Verheggen argues that not any deviation from common sense is a falsification.

On mismatches between models and observations. "Discrepancies between models and observations [are] ... more subtle than most people realise. Indeed, such discrepancies are the classic way we learn something new."

For the funsies: Newtongate: the final nail in the coffin of Renaissance and Enlightenment ‘thinking’.

Another post by Bart Verheggen fits well with this post. It gives some hints on how to determine which ideas are credible: Who to believe.

A Richard Feynman Primer For Deniers by Ingenious Pursuits. Recommended. And fitting to that a video of Richard Feynman lecturing on PseudoScience.

Recommended books

Joachim Bauer. Das Gedächtnis des Körpers. Wie Beziehungen und Lebensstile unsere Gene steuern. (The memory of the body. How relationships and lifestyle influence our genes.) ISBN: 978-3-492-24179-3, Piper, München, Germany, 2010.

Paul Feyerabend. Against Method: Outline of an Anarchistic Theory of Knowledge, 1975, ISBN 0-391-00381-X.

Thomas Kuhn. The structure of scientific revolutions. Chicago: University of Chicago Press, 1962. ISBN 0-226-45808-3

Bruno Latour. Science in Action. How to Follow Scientists and Engineers Through Society, Harvard University Press, Cambridge Mass., USA, 1987.

Karl Popper. Conjectures and Refutations: The Growth of Scientific Knowledge, 1963, ISBN 0-415-04318-2.

21 comments:

  1. Unfortunately this, as such navel gazers are prone to, in the end just muddles on. It is inherently difficult, if not impossible, to come up with statements of how complex systems can be falsified.

    What does happen often (see Newton vs. Einstein) is that new limits are established for well established ideas. That does not mean that Newtonian physics has been falsified.

    The attack on climate science falls back on the idiocy that one brick missing means the building falls. Steve McIntyre, Carrick and such characters are responsible. Eli would bet they are mathematicians by training.

  2. During a paradigm change it may be impossible to convince someone that his favorite statement is falsified. When the dust settles, things are typically quite clear, at least to most. It is probably unavoidable to have a few percent contrarians. Everyone has a tendency to look for confirmation of their ideas, like the people in the video above, and those few percent are likely to a large part additionally motivated by their political views.

    I would argue that Newtonian physics is falsified, but still useful. We now understand the problem better, including the limitations of Newtonian mechanics.

    The attack on climate science is by people determined not to understand it. This seems to be better correlated with conservative and libertarian political views than with education. (Although I am surprised how many Democrats in the USA are still not convinced.) If there is a relationship with education, I have the feeling it is more economists and engineers that are the problem. That again most likely correlates with conservative and libertarian views. Hard to say what comes first.

  3. I think I'm kind of with Eli here. I also don't quite understand how one can define the falsifiability of a complex model.

  4. I would acknowledge that this problem is especially severe for climate models. Ironically, that is because the atmospheric sciences have a tendency to build complicated models, because they value a close correspondence with reality: good weather forecasts or climate hindcasts.

    Had the climate models been developed by physicists (oceanographers or LES modellers) and not by meteorologists, they would probably be much simpler and easier to understand. (Large Eddy Simulation (LES) models typically have quite primitive cloud and radiation computations, very simple surface modules and often periodic boundary conditions. This makes these models cleaner, but the lack of realism of the clouds is then again frustrating. It is a trade-off.)

    The complexity makes it harder to find out why there is a discrepancy between models and observations. However, for the question whether a model is science, it only needs to be falsifiable in principle. And this criterion is there to make sure that scientific statements are clear. A computer code gives very clear instructions for how the computation should be performed. People regularly find errors in the code, which demonstrates that falsification and improvement are possible.

  5. But doesn't what you're suggesting rather trivialise falsification? I guess one could hypothesise that one's model will match reality (within some uncertainty interval) and, if it doesn't, claim it's falsified. I'm just not sure what that really tells you in a fundamental sense. It could be that some assumptions were wrong, some parameters were wrong, etc., but it doesn't immediately tell you anything specific.

    I guess I've always distinguished between a complex model potentially being wrong, and a hypothesis being falsified.

  6. It does not trivialise falsification as something we should relentlessly try.

    It does trivialise falsification as an important act in itself. Finding a new discrepancy is nice and important, but it is only the beginning; understanding the discrepancies is what brings the better understanding.

    Without the second step, falsification is nearly useless.

    At your place, "Curious" George pointed out that latent heat does not have a temperature dependence in climate models. Like I wrote above, climate models assume that the Earth is a sphere. There is no tractor driving through the model fields at harvest time reducing the leaf area index. Many models assume that a year is 12 times 30 days. That is all wrong, that all "falsifies" the models, if you will, but it does not lead to a better understanding, because we already knew these simplifications were made and expect (and I hope have studied) them to be irrelevant.

    What do you see as the important distinction between a complex model and a hypothesis? A hypothesis will typically be simpler and thus easier to understand and falsify. I would consequently typically prefer an analytical result over a model result. (Having both can be even better, especially if the analytical result needs simplifications.) However, I would not see either of them as less scientific.

  7. "More generally, if you notice something that seems to falsify your hypothesis, that is a reason to study why and improve your understanding of the problem, it is not a reason to immediately reject the hypothesis."

    True that.
    I think that all science-denying movements (whether about climate change, evolution, vaccines or AIDS) have in common that they take small pieces of evidence that challenge some small part of a theory and try to present them as a falsification of the whole field.
    In vaccine-denial land, the fact that some vaccinated people still get sick is taken as evidence that vaccines do not work, instead of evidence that they are not 100% effective and of the complexity of the immune system and our interaction with disease.
    In evolution-denial land, the "sudden" appearance of new species or complex structures is seen as a failure of evolution, instead of as a need to amend our understanding of some specific processes (such as punctuated equilibrium or the inherent biases in the fossil record).

    In climate-denial land... well, we are all familiar with the sort of tactics.

    I think that this can be explained very easily using the Taylor expansion of a function as an analogy. Our understanding of any natural process is always an approximation of reality. So let F be the function that represents the REAL process and f represent our theories and models; then

    f = F0 + F'*dx + (1/2)*F''*(dx)^2 + (1/6)*F'''*(dx)^3 + ...

    Basic ideas such as Newtonian mechanics may be represented as just the first-order term of the series. It is a good approximation of F, but it could be better and has a limited scope. Further, more detailed models (such as Einstein's relativity) may add more terms to our "Taylor expansion of nature", but they won't really change the previously understood ones.
    So when climate scientists are arguing about the surface air temperature record of the last decade, they are arguing about 3rd- or 4th-order estimates that do not change the basic facts.


    Re: falsification, I think it is important to understand that, despite Popperian claims to the contrary, falsifiability is not the be-all and end-all. In fact, I (and many philosophers) would argue that scientific theories are not falsifiable, not even in principle, since you can always blame inconsistencies between theory and measurements on auxiliary hypotheses or make ad-hoc changes to your theory.
    Also, it bears noting that falsifiability is not even a good demarcation between science and non-science, since there are plenty of examples of classic pseudosciences that not only are falsifiable but have been falsified (astrology, homeopathy, reiki...).

  8. I (and many philosophers) would argue that scientific theories are not falsifiable, not even in principle, since you can always blame inconsistencies between theory and measurements on auxiliary hypotheses or make ad-hoc changes to your theory.

    I would say that is the point Thomas Kuhn was making: that falsification is difficult in practice. And also the point that Hans Custers was making: that "the AGW hypothesis" is not a good formulation, as it is a combination of several hypotheses.

    Irrespective of the difficulty of falsification and of how stubbornly someone may refrain from admitting that his pet theory is wrong, I think that reasonable people can agree on whether a hypothesis is falsifiable in some abstract sense, thinking away the psychology and sociology of the process.

    Also, it bears noting that falsifiability is not even a good demarcation between science and non-science, since there are plenty of examples of classic pseudosciences that not only are falsifiable but have been falsified (astrology, homeopathy, reiki...).

    That is a good point. I mixed the terms statement and hypothesis quite freely in this post. I should only have made the claim for scientific hypotheses and not for just any statement, for any sequence of words.

    One of the reasons to write this post was Judith Curry's estimate of the fraction of the temperature increase in the last century that was due to CO2. She took what she wrongly thought was the uncertainty range used by the IPCC (50 to 95%) and then, without justification, shifted it to centre on 50% (the PR optimum), to claim that CO2 was responsible for 27.5 to 72.5% of the warming.

    The claim that "CO2 was responsible for 27.5 to 72.5% of the warming" is precise and thus falsifiable, but without any justification, it is not a hypothesis, just a random statement and thus not science.

    Thus we would indeed need additional criteria to define what a scientific hypothesis is.

    I would personally expect that a good criterion would be that a scientific hypothesis should have a foundation in the existing body of knowledge and be woven into its tapestry. That could also mean showing which parts of the existing body of science would need to be wrong. (That would make it a problem to define the beginning, but then the people who started it were philosophers, astrologers and alchemists.) And it is probably also an important criterion that a hypothesis lends itself to further exploration.

    That would exclude Curry and astrology, I guess; they have no foundation for their statements.

    I would argue that this also excludes classical homeopathy. Their claim that one should use a substance that in high concentrations provokes the same symptoms comes out of nothing, as far as I know. If not, I have no problem with calling it a falsified scientific hypothesis. Much of traditional medical practice has the same problem, however.

  9. Victor, you might appreciate the work of Imre Lakatos, who many see as a useful alternative between Popper and Kuhn.

    A fairly recent, and these days the dominant, form of pseudoscience is 'counter-science' -that is, claims to scientific knowledge which arise specifically to combat some kind of established scientific knowledge. Ironically, these people love to use Popper, as Eli and Daniel point out.

    I think the problem comes from Popperian falsification being appropriate for the study of only certain kinds of objects. If the object of study has fairly mechanical properties, or can be isolated via experiment, we can make statements about it that could reasonably be falsified via a single 'black swan'.

    But if the object of study is a complex system, which manifests its empirical data statistically, then of course a crude falsificationist approach makes no sense. We can't get access to deterministic, 100% certainty, so using a lack of it as grounds for falsification is either ignorant or dishonest.

    Scientific knowledge itself manifests statistically. It is a constellation of elements -data, theories, methods- in which each element is in a kind of reticular relation of support with all the others. Consilience of these elements makes the reticular connections more dense and stable, so that there is a convergence towards a stable core of knowledge -what I interpret to be Kuhn's 'normal' science.

    Most of the everyday work is done outside this core, where less well supported elements sit at the periphery, either because they involve new information, or perhaps information that might upset or contradict the core.

    In this way, the whole constellation is a kind of Bayesian model. The deeply supported elements are more probable, the peripheral elements less so. This is why:
    1. paradigm shifts are (and should be) very rare and difficult to create, because you cannot overturn core elements, without replacing the network of supporting information, of which they are the hubs.
    2. Individual experiments, theories or models are likely to be falsifiable, but to falsify the entire paradigm cannot be done with any 'magic bullet'. This requires a new paradigm with greater explanatory power (ie: a more deeply consilient constellation of theories and evidence).
    3. Most scientific work depends on the established core ideas, which is why peripheral work, as in Victor's example of CERN, is unlikely to overturn core theory. This is of course always possible -just as it is possible to disprove the greenhouse effect- but the burden of proof is enormous, and we are never surprised that most self proclaimed Galileos' attempts to overturn centuries of science fall flat.

    What all this boils down to, is that we have to accept that certain objects of study can only be understood probabilistically. The rhetorical call for scientific certainty simply misunderstands the thing we study -and is certainly no excuse to go putting highly reliable science alongside counter-science, as though they deserve equal consideration.

    Sorry for the long post (I've basically just summarised my PhD!)

  10. Mark Ryan, thanks for dropping by. Nice to have so many real philosophers here. This could be the post from which I have learned most. And the post where I notice most that I should have formulated things more carefully. :)

    Yes, Imre Lakatos is missing from the reading list; could you recommend a book?

    I wonder whether there is really so much conflict between Karl Popper and Thomas Kuhn, though. They worked on different topics. Once, when I was more into philosophy, I read the proceedings of a conference where both Popper and Kuhn were speaking; they were both quite happy about the work of the other and saw it as complementary. Maybe I missed some subtle stabs in the back?

    I notice that I should not only have distinguished between hypotheses and normal statements, but also between models and hypotheses. Models are simplifications by definition and thus wrong by definition. Hopefully they are fit for purpose.

    However, even when it comes to hypotheses or theories, I still wonder whether it is helpful to think in terms of right or wrong, or an x percent chance of being right. I would personally give every theory around zero percent probability of being right. It is just a matter of time until we find the problems. (Maybe except for thermodynamics/statistical physics, because that is almost math and also limits its domain of validity itself.)

    Even theories such as quantum mechanics and relativity are wrong. At least we know that they do not match for black holes; thus at least one of them is wrong, probably both. Still, they are fit for purpose in the domain where they are applicable. The more we study and test and prod, the more confidence we have about where this domain lies.

  11. What do you see as the important distinction between a complex model and a hypothesis? A hypothesis will typically be simpler and thus easier to understand and falsify. I would consequently typically prefer an analytical result over a model result. (Having both can be even better, especially if the analytical result needs simplifications.) However, I would not see either of them as less scientific.

    Yes, I agree. My distinction is probably just terminology then (and philosophically unsound). Because it's harder to know what's been falsified when dealing with a complex model, I typically just think "it's wrong. Why?" However, one could still think in terms of falsification; it's just not as simple when dealing with complex models as with simpler hypotheses.

  12. Some reading on Lakatos...here's one in which he takes issue with the Popperian idea that any theory has zero probability of being right..! ;)

    http://crl.ucsd.edu/~ahorowit/lakatos.pdf

    I think it is often useful to separate "Popper" and "Kuhn" from "Popperian" or "Kuhnian" ideas. "Popperian" arguments too easily see falsification in the sciences of complex and open systems, and "Kuhnian" arguments tend to overemphasise the social factors involved in producing scientific knowledge. It would be fair to say the original theorists were more subtle than the influences they left to us ( this is common in philosophy; Marx, for example, once declared "all I know is I am no Marxist").

    For me, stating that no science is ever truly right is a truism, just as saying that science is created in a social setting is also stating the obvious. I like to think of the creation of scientific knowledge as a process that is shaped by material and normative forcings; the important question is whether the balance of forcings in a research community will create tendencies towards them reflecting the nature of the real, material objects, and thereby become more 'true'(I mean object here in the sense of something we aim at, or focus on -in most natural sciences, the object is a system, or complex of systems).

    Richard Feynman elegantly argued that the uncertainty principle was not a shortcoming of physics, but a material necessity, the absence of which would be a paradox that would undermine everything else we know. We should work with an 'uncertainty principle' about knowledge, too. Scientific knowledge is produced in a real, material world -we won't get the luxury of an ideal world in which we can know everything.

    So the real question is how do we know what is the most reliable, most true, knowledge? Lakatos introduced the idea of 'research programmes'; these were progressive when they moved towards greater consilience and explanatory power,either through supporting empirical evidence, or through theoretical advances. When a research program fails to account for new empirical anomalies, or introduces contradictions, it becomes degenerate. Lakatos' 1978 book, "The Methodology of Scientific Research Programmes" is on Scribd, for those who subscribe; this is a good account, I think: http://www.loyno.edu/~folse/Lakatos.html

    Lakatos did not express this idea so much in probabilistic terms, but I think we can look at progressive constellations of knowledge as increasing their probability over time, due to the fact that scientific communities recognise increasing consilience of multiple lines of evidence. Being "right" (although knowledge has zero chance of this in the idealised sense) is the end point to which it moves asymptotically.

    I think the possible certainty of a given domain of science is determined in the end by the kind of object it studies. The atmospheric sciences encounter their own domain-specific uncertainty, which is different -and more manageable- than that of the psychological or social sciences. No doubt one of the reasons so many engineers dispute climate science, is that their objects of study are highly deterministic, so they don't really get how to understand such noisy systems as climate.






  13. And Then Theres Physics, I think you are right and we should distinguish between a model and a hypothesis. Like I just wrote in my answer to Mark Ryan: "Models are simplifications by definition and thus wrong by definition. Hopefully they are fit for purpose."

    Thus models are always wrong; finding that they are wrong is thus not very informative. One always needs the additional question: does it matter for the question I am studying? For a hypothesis this additional question is not necessary, and one can immediately try to understand where the discrepancy comes from.

  14. Mark Ryan, thank you for that interesting paper. I have the feeling that Imre Lakatos makes a bit of a caricature of Karl Popper's philosophy. Or, as we say in the less friendly climate debate, he is fighting a strawman.

    The text acts as if falsifiability were a criterion to select a certain hypothesis. I would argue that it is rather a minimal condition; depending on the problem at hand, the scientist will select the most appropriate hypothesis within the set of all scientific hypotheses.

    A fine example is classical mechanics. It is wrong, but we understand why and when it is a good approximation. Using it is still science and not pseudo-science. Starting with relativity when that is not necessary would be madness and make it harder to understand the problem at hand.

    Lakatos is right that there are more criteria to select which methods, hypotheses and tools you want to work with. Maybe philosophy can help science by trying to understand what those criteria could be. Just calling one research program progressive and another regressive is not much help; that is only a description after the fact and thus not very informative as a criterion during the fight with reality. That may also be a part where Bayesian probability can be helpful: just because a certain research program looks regressive does not mean that everyone should immediately abandon it. There is still some likelihood that someone will find a trick and manage to make such a program progressive again, make it contribute to our scientific understanding. Being stubborn as an individual scientist is not always bad.

    You write:
    1) "he takes issue with the Popperian idea that any theory has zero probability of being right"
    2) "For me, stating that no science is ever truly right is a truism".
    To me that sounds contradictory, and doesn't 2) preclude using a probabilistic approach to the question whether a hypothesis is true?

    Your ideas about why engineers have problems with climate science probably explain part of the problem. An irony is that having a little noise and having to put more work into defining the right concepts was the reason for me to switch from physics to the atmospheric sciences. I find the atmosphere as a complex system much more interesting, exactly because it is a bit messier, but still a physical system where you can measure what you need.

    P.S. Long comments are fine. :) The post was also long; I guess it is a topic where you need some more elaboration to explain one's position.

  15. it seems to me that 'knowledge' is always cultural.
    behind 'knowledge' is some logic (a paradigm)
    'knowledge' is only an abstract concept when not related to behavior

    have a look at
    http://paradigm-shift-21st-century.nl/kuhn-thomas-biography.html

  16. Interesting blog and thread here, but I disagree with both the logic and the sentiments of the argument as used in this discussion. On logic, you are welcome to read my free work on skydrive http://1drv.ms/1tnKM6f which is full of sentiment, being an expose of the errors in logic used by scientific theories. The sentiment only comes in due to my incredulity, as a lawyer and amateur "scientist", at the errors. Sheer incredulity that you can assess for yourself, as the bases for it are set out in the logic and facts presented, which is the main part of the work. Don't be upset at the sentiments; first consider the logic and facts and then decide if you agree that the errors of science are quite outrageous. You can also download from my site http://thehumandesign.net (not religious intelligent design, just a pattern within the laws of nature).

    As to the specifics of this thread, they are covered in detail in the work, but my response is simply to say that a whole theory (which is what we seek, despite Gödel's rigid skepticism) includes both falsifiable and un-falsifiable aspects. We try to minimize the un-falsifiables, but they are still fundamental. As part of a whole theory, they will enable falsifiable aspects to fall into place, supportive rather than merely consistent. They have extreme parsimony and are not frivolous. For example, Heisenberg says "you cannot measure motion and/or direction at one position and instant", thus we can only close the gap to the limit of a wavelength. This is obvious. By definition you cannot measure motion or direction at one position and instant, which is a freeze frame for measurement. All measurement is by action-reaction freeze frames, and we measure the intervals between them, closed down to a limit of a wavelength. We measure either side of momentum itself, at the ends of the wavelength, and cannot close measurement further. Motion is defined by two positions and instants, for motion between them. There is no motion at a freeze frame, and likewise direction is defined by two positions and instants, for a direction between them, just like motion between them. Momentum is un-falsifiable by definition, and we cannot close that gap, just measure an interval between two positions and instants, with a wavelength between them. Clear enough? A whole theory will fundamentally include un-falsifiables, as momentum itself, a most fundamental event, is un-falsifiable. We cannot know what is happening in the gap and conjecture wildly about stuff appearing and disappearing, and cats alive and dead, all complete nonsense arising from a simple limitation to measurement and therefore to knowledge.

  17. My blog is under moderation. That is why you did not see your comment immediately. There are many crazies and spammers in this world.

    I would argue that the falsification of theories does not require perfect prediction, whether the uncertainty in the prediction is due to the theoretical quantum mechanical limit or due to practical limits to the accuracy of measurements. Many theories have been "falsified", or at least replaced, although every measurement and every prediction has an uncertainty.

    Not directly related: the importance of prediction is overrated. The support for the heliocentric worldview came much more from the phases of Venus and the satellites around other planets than from the prediction of the movement of the planets. These predictions were even initially worse than those of the previous system.

  18. Victor
    >I would argue that the falsification of theories does not require perfect prediction, whether the uncertainty in the prediction is due to the theoretical quantum mechanical limit or due to practical limits to the accuracy of measurements. Many theories have been "falsified", or at least replaced, although every measurement and every prediction has an uncertainty.<

    That avoids the point in my view, and is shallow. The entire point of science is to remove uncertainties. Certainly they will remain in a general sense because people make mistakes with measurement in various ways, but in my post above I am talking about the very basis for any measurement itself. It is not sufficient to just say, oh well, things are difficult to measure and we have some uncertainties; instead you must root out the reason why there are ANY uncertainties by defining the limits to measurement as I have done. I have defined the problem for you above. I have defined the absolute logical nonsense of Heisenberg above, but you seem to somehow have overlooked that in your acceptance of "uncertainties" as a reality. Those uncertainties, as defined by Heisenberg, have a ridiculous basis. Motion and direction for momentum are NEVER at one spatial position and instant, by definition. Sometimes a simple statement like that is ignored because it is so simple, but you can believe the logical definition (actually, it doesn't take belief, just reasoning in fact). Logic says Heisenberg is offending the very definitions of motion and direction, which BOTH require two spatial positions and instants, by definition, for motion or direction between them. So instead of acknowledging that fact and then moving on, as I have done in my free work, to exploring FURTHER what that means, science stops as you have done and says, oh well, there are uncertainties, so what? Define them and expose the errant logic of Heisenberg. That's how to make progress.

    >My blog is under moderation. That is why you did not see your comment immediately. There are many crazies and spammers in this world.<

    I read lots of blogs as I pass through them investigating them, and whenever they get stumped by my points, they reach for that description for my own logic! I hope that wasn't the point of making that rather obvious and prejudicial comment. It is prejudicial because, if you read my work, science itself is really very much a disgrace regarding logic and fact. You can dispute that of course, if you read the work, or back off from it if you choose. Personally I like "crazies", as you call them, as long as they provide an attempt at progress, however misguided. The real problem is flattening such people rather than tolerating and refuting them, or indeed finding a little bit of inspiration as an offshoot of what they write. It is better than being mired in repetitive mantras that go nowhere towards solving heavy problems. Usually the "mantra people" do not have the slightest idea there is a problem, despite me setting out opposing logic clearly above. Interesting case study in this simple exchange.

  19. No, Marcus Morgan, I did not overlook your arguments. I just responded to the part where I can respond sensibly and to the part that is relevant for the topic of this post.

    If you want to get qualified feedback on QM, I would suggest contacting someone working on that. One of the things your new theory will have to explain is why the diffraction pattern of a two-slit experiment disappears when you try to detect through which of the two slits the photon/electron went.

    When I filter comments to get rid of the "crazies" and I let your comment pass, what does that make you? A longer discussion on QM would, however, be off-topic below a post on falsifiability.

  20. Hello, I have to write an essay for my course in Philosophy of Science about falsification and the instances in which falsification is used in components of International Studies. I was wondering whether it would be possible to apply falsification to the discipline of Philosophy of Science itself? And whether the over-use of falsification might actually prove to be counterproductive for the discipline itself? Any help would be greatly appreciated!

  21. I would personally say that philosophy is not a science and that it is not possible to falsify Popper's demarcation criterion.

    I am not sure if there is an overuse of falsification in science. Scientists are quite comfortable using falsified theories, such as classical mechanics. There is an overuse of falsification by the political activists against science.


Comments are welcome, but comments without arguments may be deleted. Please try to remain on topic. (See also moderation page.)

I read every comment before publishing it. Spam comments are useless.
