About two years ago, I was diagnosed with asthma. After some changes to my lifestyle, the last whole-body plethysmography measurement showed that my lungs are fine again, even though I no longer use any medication. The asthma is gone!
I would like to share these changes with you, hoping they may benefit you as well, and I would also like to hear what benefited others and what did not. My personal experiment is rather small (n=1), so it may well be that some improvements happened by accident and not because of a lifestyle change. I should say that I only had very mild asthma. I never had an asthmatic attack, but I regularly wheezed lightly when exhaling at night, my voice was not as strong anymore, and my lungs produced too much mucus (leading to some coughing and a coated tongue). Another sign was that reliever medication (bronchodilators) made jogging a lot easier.
Asthma has been on the rise in the West for the last 50 years. This alone points to lifestyle factors being important. Not much seems to be known about which factors these are. It has been noted that children growing up on farms are less affected by asthma than urban children. Based on this, it has been theorized that childhood contact with microbes is beneficial, but I guess there are quite a few other differences between life in the countryside and in cities.
The main changes I made are that I started with intermittent fasting, nowadays sleep on a firm surface, and do much more walking and hiking. It may also have been important that I no longer eat any grains, do less jogging and more sprinting, and regularly tan for more vitamin D.
Monday, 5 December 2011
Plan B for our planet
The Kyoto protocol is running out, and the climate conference in Durban will likely end without a new, stronger protocol to reduce (the growth of) greenhouse gas emissions. Maybe we need a plan B.
Under the Kyoto protocol, a cap is set on the greenhouse gas emissions of the participating industrialized countries. Within this group, emission rights can be traded, so that emissions are cut in the most efficient way. With a similar aim, emissions can also be reduced by financing emission reductions in emerging economies and developing countries.
The problem with the Kyoto protocol is that a cap on greenhouse gas emissions only makes sense if everyone participates, or at least will participate as soon as they are rich enough. It is possible to achieve this: the Montreal protocol to curb emissions of chlorofluorocarbons (CFCs) to protect the ozone layer works well. In the case of the Montreal protocol, however, only the producers of fridges, air conditioners and spray cans were affected. Greenhouse gases are emitted by the energy, agricultural and building sectors. These are very powerful agents with an interest in the status quo. With propaganda and by encouraging conflicts, they can make sure that there are always some large greenhouse gas producers not participating.
Maybe a global cap is not needed. Maybe we can see the problem as a dynamic one: how can we develop cost-efficient technologies to reduce greenhouse gas emissions? These technologies will be developed if there is a clear price signal; emitting CO2 and wasting energy should be costly. Furthermore, economies of scale will be important to reduce prices and increase competition.
For this, all we need are higher prices for greenhouse gas emissions (for non-renewable energy, etc.) in a large part of the world, but not necessarily all of it. This part of the world should be allowed to protect itself against imports produced with cheap energy. That is all the world would need to agree upon.
I expect that Europe would be the region to start working this way. Due to the import levies, the playing field would be levelled and Europe's industry would be able to compete in the here and now with industries from outside. In the long run, Europe's industry would become more efficient and would be a world leader in green technologies; technologies that will be needed everywhere once the prices of energy, concrete and fertilizer start to rise due to shortages.
This is quite an attractive position: no disadvantage now, due to the levy, and likely advantages later, due to a technological leading role. It might well be that many countries would like to cooperate, join this green region to be able to export without levies, and be part of the future.
An agreement that climate import levies are allowed may be easier to achieve than a global cap on greenhouse gas emissions.
Further reading
More posts on economics.
Thursday, 1 December 2011
Where good ideas come from
Steven Johnson wrote a book on creativity and innovation, "Where good ideas come from - The seven patterns of innovation". The book is very well written and captivating. In fact, I was reading another book (written by a scientist), which I put on hold to be able to read Good Ideas.
Afterwards, I was somewhat disappointed by the book. It was captivating, the examples were interesting and many references are worth diving into, but I did not learn much about innovation and creativity. I expect that many people who were already interested in the topic will feel the same.
In the last chapter, Johnson tries to do some science himself. He takes a long list of innovations and divides them up along two axes: individual versus team work (networked) and market versus non-market. Most of the innovations fall into the category networked and non-market. He finds this interesting, as markets are supposed to be so great and market incentives should lead people to make more inventions, but that does not seem to be the case.
If you look at the innovations themselves, you quickly notice that the market innovations include some services but are mainly products and their components: airplane, steel, induction motor, contact lenses, etc. The non-market innovations do include some consumer products, but are mainly scientific ideas, theories, discoveries and instruments: Braille, the periodic table, RNA splicing, chloroform, the EKG, cosmic background radiation, etc.
Depending on how many consumer products and how many scientific ideas you put on the list, you can get any ratio of market to non-market innovations. Furthermore, if Western societies did not give most of their funding for basic research to universities, the list would also have looked very different. Thus it is not possible to conclude from this list whether market or non-market forces are best at innovating.
I like Johnson's conclusion, and I think he is right based on an understanding of what motivates people to innovate, but you cannot reach this conclusion from the analysis in the last chapter.
Want to read something good by Steven Johnson? Read his book Emergence or go to his blog, with many interesting ideas on innovation on the internet.
Monday, 14 November 2011
Freedom to learn
On Psychology Today, Peter Gray writes an intriguing blog on education called Freedom to Learn. In it he argues that forcing children to learn stifles their innate motivation to teach themselves and is thus counterproductive.
The post titled Seven Sins of forced education gives as the fourth sin: "Interference with the development of personal responsibility and self-direction."
Children are biologically predisposed to take responsibility for their own education... The implicit and sometimes explicit message of our forced schooling system is: "If you do what you are told to do in school, everything will work out well for you." Children who buy into that may stop taking responsibility for their own education.
When I look back on my time as a student, what surprises me most is that I never had the idea to explore a topic myself. I just read the books the school or university told me to read, and that was it. Only as a researcher did I start to take responsibility for my own education.
Another sin that fits well with the theme of this blog, variability, is the "Reduction in diversity of skills, knowledge, and ways of thinking." In a highly diversified economy, it is detrimental to teach everyone the same topics.
A highly recommended blog, Freedom to learn.
Further reading
More posts on paleo (diet and lifestyle topics inspired by evolutionary thinking).
- Paleo and fruitarian lifestyles have a lot in common
- A comparison of the main ideas and recommendations of these two lifestyles.
- Natural cures for asthma?
- Some ideas for natural approaches that helped me cure or reduce asthma.
- Sleep and diversity
- Differences in sleeping times, from early bird to night owls, may provide security advantages.
- Is obesity bias evolutionary?
- A critical comment on an article, which states that humans have an intrinsic propensity to eat too much.
- Freedom to learn
- Forcing children to learn stifles their innate motivation to teach themselves and may thus be counterproductive.
- The paleo culture
- After the Ancestral Health Symposium 2012, a discussion started about the sometimes self-centred culture of the paleo community.
Friday, 4 November 2011
Darwinian or Smithian competition
EconTalk recently held an interview with Robert Frank, author of the book "The Darwin Economy".
The most interesting part starts with the statement:
"I start with a prediction that I won't live to see whether it comes true or not: I predict that if we were to poll professional economists a century from now about who is the intellectual founder of the discipline, I say we'd get a majority responding by naming Charles Darwin, not Adam Smith. Smith, of course, would be the name out of 99% of economists if you asked the same question today. My claim behind that prediction is that in time, not next year, we'll recognize that Darwin's vision of the competitive process was just a lot more accurate and descriptive than Smith's was."
I think he is right, but am hopeful we do not have to wait an entire century.
The competition of Adam Smith is the competition between lion and gazelle: it makes both fast and strong. You could say it also makes the group better off, on evolutionary time scales. Charles Darwin was aware that this is not always the case: competition between males can lead to aggressive and overly strong males, and competition between trees for sunlight makes them tall (inefficient) and fragile (storm damage). In these cases, collaboration between members of a species would make everyone better off.
The irony for economics, the study of how humans allocate their resources, is that humans are one of the most cooperative species on earth (we are strong reciprocators). You can see this everywhere, if you have an eye for it. You can see it in its most distilled form in economic games performed in laboratories, such as the ultimatum game and the public goods game.
Thursday, 18 August 2011
Productivity and context
I just listened to an EconTalk episode with Bob Lucas, Nobel laureate and professor of economics at the University of Chicago, on economic growth. Naturally, they also touched on productivity. It was almost funny how they avoided one conclusion; it was on the tip of my tongue as the ending of many of their sentences.
They talked about a person raising chickens in Indonesia, producing about 50 eggs per chicken per year and having to collect them himself. They asked themselves why he did not use modern technology to produce 300 eggs a year. Lucas' answer was the lower cost of labor in Indonesia, and he stated that this cost is determined by people working in factories in the cities. Lucas: "Economic growth is always associated by a move out of agriculture and into the city environment."
Then the host, Russ Roberts, asked Bob Lucas why an unskilled immigrant makes so much more money as soon as he (illegally) crosses the US border; in other words, why he suddenly becomes more productive while his human capital did not change. Lucas said that it is about cooperation with other people: how productive you are, e.g. as a busboy, depends on the quality of the waiters and of the cooks. Furthermore, the richer people in America are willing to pay more for a dinner.
Roberts then notes that it is a beautiful and interesting problem that he is probably not more intelligent than his dad, yet his standard of living is much higher. Lucas avoids answering, but states: "The other thing is: How many people are competing with you at your level?" "Don't drop out of high school!"
The answers hint at the following: context is very important in determining productivity and wages. In the two examples, the farmer and the immigrant, context determines their wage; not their skill, not their education, not their human capital. For the farmer, it matters that other people get better wages in the city; for the immigrant, it matters where he works.
Productivity thus cannot be determined by looking at a person; it is not defined at the level of a person. Scientifically put, it is a nonlinear computation, not a linear one; only in the latter case could you isolate one element, one person. Strictly speaking, productivity is only defined at the global level. It is still reasonably well defined for nations and probably for large companies. (Even the productivity of companies is determined strongly by their network of partners and by institutional factors.) Thus it is also not possible to compute how much a worker should earn. It is up for negotiation. And you certainly cannot claim that someone deserves to earn a certain amount.
Of course, the employee does need to work and often needs skills. Let's not start a nature-versus-nurture-type debate. Productivity is determined (almost) 100% by context and 100% by effort, just as an organism is determined 100% by nature (genes) and 100% by nurture (environment; context), and you can only split these two factors for a given variability in the environment.
It is thus very well possible that top managers get better salaries because they have a better bargaining position, not because they are more productive, as is often stated in the media. Doubling the salary of all workers would be a problem for a company. However, a company does not go bankrupt because its CEO gets a double salary; there is only one CEO, and there are basically no market forces to reduce his salary until it becomes extremely excessive. That may be an important reason for the differences in salaries, rather than skills.
Further reading
More posts on economics.
Sunday, 12 June 2011
Sleep and diversity
In an interesting article on sleep in traditional societies at Science News (which is not the news section of Science), Bruce Bower states that people used to sleep in groups and that differences in sleeping times were an advantage, as they provided better protection for the sleeping group.
... sleep typically unfolds in shared spaces that feature constant background noise emanating from other sleepers, various domestic animals, fires maintained for warmth and protection from predators, and other people's nearby nighttime activities.
In traditional settings, however, highly variable sleep schedules among individuals and age groups prove invaluable, since they allow for someone to be awake or easily roused at all times should danger arise, Worthman holds.
Thus a diversity of sleep schedules may well be adaptive and may have been selected for by evolution, as it provides better protection. It would be great if modern society allowed people to follow these natural needs, instead of forcing a fixed schedule on early birds and night owls alike.
By the way, the circadian sleep-wake cycle itself is an example of variability in the activity level. Science still wonders why we sleep. It may well be related to this variability, which allows one to be more active during the day, while at night the reserves are refilled and repairs are performed. A remaining question would still be why you need to close your eyes for this, which is dangerous.
Sleeping without mattress
I found the article linked on the page The Ergonomics of Sleep: Why a Hard Surface can Provide Sweet Dreams. This page argues that sleeping on a mattress is a recent invention and may not be ideal for everyone. To me it makes sense to use traditional behavior as the null hypothesis; modern ideas are often useful (for instance, hygiene and vaccinations), but should be tested. Thus, one week ago, I removed my mattress to see if this works for me and slept on the wooden panel below, though I did add an exercise mat and a comforter (German: Bettdecke) as cushioning. It is definitely not comfortable, but I sleep well, wake up much more alert and find it easy to get out of bed.
What makes it uncomfortable is that your weight is mainly borne by your bones. If you sleep on your back, your weight rests on your heels, pelvis, shoulder blades and skull; on your side, on your ankle, pelvis and shoulder; on your front, on the tops of the feet, the knees, pelvis, ribs (in men) and collar bones. Might there be a reason why exactly these places have no padding, neither fat nor muscle? The reduced pressure on the weak body parts could be beneficial for the circulation. Lying in a dimple in a mattress could change your posture (rounded back) and consequently your breathing.
I am curious what other people experienced and know about this topic; please leave a comment.
Further reading
More posts on paleo (diet and lifestyle topics inspired by evolutionary thinking).
- Paleo and fruitarian lifestyles have a lot in common
- A comparison of the main ideas and recommendations of these two lifestyles.
- Natural cures for asthma?
- Some ideas for natural approaches that helped me cure or reduce asthma.
- Is obesity bias evolutionary?
- A critical comment on an article, which states that humans have an intrinsic propensity to eat too much.
- Freedom to learn
- Forcing children to learn stifles their innate motivation to teach themselves and may thus be counterproductive.
- The paleo culture
- After the Ancestral Health Symposium 2012, a discussion started about the sometimes self-centred culture of the paleo community.
Saturday, 28 May 2011
Good ideas, motivation and economics
Steven Johnson recently wrote the book Where Good Ideas Come From: The Natural History Of Innovation. In this book he argues that good ideas mostly do not come as a sudden spark out of the blue, an epiphany, but rather grow slowly over time by combining ideas. His recommendation for finding good ideas is to build a diverse network of people with very different interests, to increase the chance of seeing a new combination. The latter is similar to the recommendation in creativity books to read broadly. Personally, I also like to read about a broad range of topics; reading just meteorology papers, it is highly unlikely that I will get an idea a colleague did not have yet. On the other hand, you do need to know your field to know what contribution is needed. Johnson also advises writing up your ideas and the ideas of others, and discussing them freely. Another piece of advice from Johnson to organizations is to give employees the freedom to explore wild ideas in part of their time.
In the last chapter of the book and in an article for the New York Times, he answers the question why most good ideas come from amateurs and academics rather than solo entrepreneurs or private corporations. His answer is that the commercial guys are handicapped by keeping their ideas secret.
That may be part of the answer. Another part is likely that people are not that strongly motivated by money when it comes to such complex cognitive tasks. As Daniel Pink explains in a talk at TED and in a beautifully made animation, offering people more money will increase their productivity for simple manual tasks. However, for tasks needing even a little cognitive skill, people often actually perform worse if they are given a large monetary reward. Daniel Pink's equation: Motivation = autonomy + mastery + purpose.
Another indication that people are not solely motivated by money, but also by concepts such as fairness, comes from academic economics, specifically experimental micro-economics. The deviation from mainstream economics (the neoclassical synthesis), with its concept of a homo economicus who only cares about monetary gain, is beautifully distilled in a classic, simple economic game: the ultimatum game.
In the ultimatum game, person A gets to divide a sum of money. Person B has to agree with this division; if B does not, no one gets any money. When I first read about this game, I wondered why it was interesting; people would simply split 50/50, wouldn't they? However, if B were a good homo economicus, he would accept any offer, because it is better to receive something than to get nothing. Person A knows this and would offer only the smallest possible amount. As expected, reality looks very different. If person A does not offer at least 30 to 40 percent, it is quite likely that B rejects the offer. Typically, A offers 50 percent. This also happens if A and B do not know each other and if the game is played only once, and the result is similar in every culture or group. This game and many similar ones have led to the conclusion that humans have an innate sense of fairness.
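To make the logic concrete, here is a minimal sketch in Python (my illustration, not code from any actual experiment). It contrasts a homo economicus responder, who accepts any positive offer, with a fairness-minded responder who rejects low offers; the 35% threshold is an illustrative value based on the 30 to 40 percent mentioned above.

```python
# Minimal ultimatum game: proposer A divides a pot, responder B accepts
# or rejects; on rejection both get nothing.
def play(pot, offer_to_b, accepts):
    """Return the (A, B) payoffs given B's acceptance rule."""
    if accepts(offer_to_b, pot):
        return pot - offer_to_b, offer_to_b
    return 0, 0

# Homo economicus: anything is better than nothing.
homo_economicus = lambda offer, pot: offer > 0

# Fairness-minded responder: rejects offers below ~35% of the pot (assumed threshold).
fair_minded = lambda offer, pot: offer >= 0.35 * pot

pot = 100
for offer in (1, 20, 50):
    print(f"offer {offer}: economicus {play(pot, offer, homo_economicus)}, "
          f"fair-minded {play(pot, offer, fair_minded)}")
```

Against the fairness-minded responder, the tiny offer that is optimal on paper earns A nothing, which is exactly the deviation from homo economicus that the experiments show.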
This is a combination of three ideas. It is not yet sufficient to derive a new economic theory, but it might be a start. Do you have any ideas that together may make this network of ideas more fruitful?
Further reading
More posts on economics.
More posts on creativity.
Is obesity bias evolutionary?
On the Huffington Post and on his own blog, David Katz, MD, asks the important question why so many people are obese nowadays. His answer does not sound convincing: he argues that a bias toward obesity is an evolutionary advantage.
This might have been possible, but a simple calculation shows that it is unlikely. A man who is 100 kg too heavy at 50 years old has, to a first approximation, eaten 2 kg per year too much. Two kilograms of fat is about 14,000 Cal and thus comparable to what one eats in 4 days to one week, depending on height, weight and level of activity. In other words, this man ate only about one or two percent more than he should have. You may increase this number somewhat to account for the fact that a larger body also needs more energy. Still, the additional amount eaten by an overweight person is small and will be hardly noticeable in daily life.
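The back-of-the-envelope calculation can be written out explicitly; a minimal sketch, where the energy content of body fat (about 7,000 Cal per kilogram) and the 2,000 Cal daily intake are round-number assumptions of mine, not figures from Katz:

```python
# Back-of-the-envelope check: how much overeating does 100 kg of
# excess weight accumulated over 50 years actually imply?
CAL_PER_KG_FAT = 7000.0   # assumed energy content of body fat (Cal/kg)
DAILY_INTAKE = 2000.0     # assumed average daily intake (Cal/day)

excess_kg, years = 100.0, 50.0

kg_per_year = excess_kg / years                        # 2 kg per year
cal_per_year = kg_per_year * CAL_PER_KG_FAT            # ~14,000 Cal per year
days_of_food = cal_per_year / DAILY_INTAKE             # ~7 days of eating
excess_fraction = cal_per_year / (DAILY_INTAKE * 365)  # ~2 percent

print(f"{kg_per_year:.0f} kg/year = {cal_per_year:.0f} Cal/year "
      f"= {days_of_food:.0f} days of food = {excess_fraction:.1%} of intake")
```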
In the times when humans were hunter-gatherers, it should have been easily possible to eat a few percent more than usual. The year-to-year variability in the availability of resources is large; on the negative side, think for instance of floods, droughts and grasshopper plagues. A human who is able to survive a bad year should easily be able to hunt or gather double his need in good years. One would expect the maximum amount of food a person can find to be much more than a few percent above the average needed; otherwise, a human with a bit of bad luck would soon starve to death and be removed from the gene pool.
Sunday, 1 May 2011
Against review - Against anonymous peer review of scientific articles
Being a scientist and a friend of science, I hope that the abolishment of the anonymous peer review system will end power abuse, reduce conservative tendencies and, in general, make science more productive, creative and fun. This essay has two parts. First, I will discuss and illustrate the disadvantages of the peer review system with a number of examples, and then I will give some ideas for reforms within the review system.
The quality of reviews
The peer review system is there to guarantee a high quality standard for scientific papers. If all scientists were rational, selfless and fully objective beings with unlimited amounts of time, this system would be a good way to find errors before publishing. However, if you put real scientists into the equation, scientists who are normal humans, you get a low quality of review and a system that is prone to power abuse.
Honorary authorship of scientific articles
In their statement on scientific dishonesty, the German science foundation (DFG) explicitly writes that honorary authors are unwanted. More specifically, they write that a co-author should have done more than just provide a desk, write and lead a project, or deliver data. In their view, every author should be 100% accountable for the entire content of the article. In this way, they hope that the co-authors check each other's work more carefully.
The library network - from book archiving to knowledge facilitator
The amount of knowledge has doubled every 10 to 15 years for the last two centuries (Price, 1953); or at least the annual number of published articles has. The number of books is also increasing rapidly. In Newton's time it was hard to get the right books; nowadays it is hard to select the right books. This changes the role of the scientific library. At the same time, information technology - powerful databases and the internet - opens up new opportunities to find relevant information.
This document describes three ideas that will hopefully make university book collections much more valuable. These ideas can be linked to the fundamental thoughts behind the success of Amazon, Napster and Google.
On the philosophy of language
A philosophy of language (Martelaere, 1996) is a science that is built on reason alone. In the natural sciences, it is generally accepted that one can only do fruitful research by combining reason and experiments, as your theory determines how you see your experiment, and experimenting without theory normally leads to experiments that are not informative. In the same way, philosophy will be most useful if it starts with the current societal consensus and improves upon it by making the premises clearer and the reasoning more logical.
The language philosophers see language as a limiting factor, limiting our thinking and limiting our perception of reality. Here, I would like to argue that languages are flexible enough, and have evolved to allow for creative statements about our complex reality, so that language is not an important limitation on our understanding of the world and ourselves.
Saturday, 30 April 2011
An idea to combat bloat in genetic programming
Introduction
Genetic programming (Banzhaf et al., 1998) uses an evolutionary algorithm (Eiben, 2003) to search for programs that solve a well-defined problem whose solutions can be evaluated with one number, the fitness of the program. Evolutionary algorithms are inspired by the evolution of species by natural selection. The main difference of this search paradigm, compared to more traditional methods, is that it works with a group of candidate solutions (a population), instead of just one solution. This makes the method more robust, i.e. less likely to get trapped in a local minimum. These algorithms code their solutions as genes. New generations of solutions are formed using sexual (and asexual) reproduction. Sexual reproduction (crossover of genes) is emphasized, as it allows for the combination of multiple partial solutions into one, thus using the parallel computations made by the entire population and making the use of a population less inefficient than it would be otherwise.
Bloat and evolution
One of the fundamental problems of genetic programming (GP) is bloat. After some generations (typically fewer than one hundred), the search for better programs halts as the programs become too large. The main cause of bloat is generally thought to be the proliferation of introns. Introns are parts of the program that do not contribute to the calculation that is made, e.g. a large calculation that is multiplied by zero in the end, or a section that starts with: if false then. Additionally, bloat can be caused by inefficient code, e.g. two times x=x+1 instead of x=x+2.
In genetic programming, the linear genes are typically converted into trees or lists with lines of code. Some other coding methods are also used, see Banzhaf et al. (1998). Performing a simple crossover on the genes would almost certainly result in an illegal program. Therefore, crossover is performed on a higher level, e.g. two sub-trees are exchanged between the sets of genes, or sections of program lines are exchanged. This way the child program is legal code, but still, it is often bad code, and its results are not very similar to those of its parents. As a consequence, evolution favors bloat at the moment it becomes hard to find better solutions. In this case, it is better for the parents to get offspring that is at least as fit as they are than to get less fit children. The probability of getting equally fit offspring is higher in large programs (of which only a small part is useful code) than in small ones (most of which is useful code), as in large programs it is more likely that the crossover operator does not destroy anything. The problem of bloat is thus intimately linked to the destructive power of crossover.
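A toy example may make the link between introns and crossover concrete. The sketch below (my illustration, not standard GP library code) evaluates small expression trees; the subtree multiplied by zero is an intron, so replacing its contents, as a crossover would, leaves the program's behaviour, and hence its fitness, unchanged:

```python
# Toy illustration of an intron in genetic programming. Expressions are
# nested tuples ('op', left, right) evaluated at a single variable x.
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def evaluate(expr, x):
    """Evaluate an expression tree at x."""
    if expr == 'x':
        return x
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

# Effective program: x + 1.
small = ('+', 'x', 1)

# Bloated program: x + 1 plus an intron, a subtree multiplied by zero.
bloated = ('+', ('+', 'x', 1), ('*', 0, ('-', ('*', 'x', 'x'), 42)))

# A crossover that lands inside the intron changes nothing observable.
child = ('+', ('+', 'x', 1), ('*', 0, ('+', 'x', 'x')))

for x in (0, 3, 10):
    assert evaluate(small, x) == evaluate(bloated, x) == evaluate(child, x)
print("intron shields the offspring: all three programs compute x + 1")
```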
Advection of the disturbance fields in stochastic parameterisations
A relatively new method to estimate the uncertainties of a weather prediction is the use of ensembles. In this case, the numerical weather prediction (NWP) model is run multiple times with slightly varying settings. A popular choice is to vary the initial conditions (prognostic variables) of the model within the range of their uncertainty. It seems that this method does not bring enough variability: the ensemble members are still relatively similar to each other, and the observation is often still not in the ensemble.
As a consequence, meteorologists are starting to look at uncertainties in the model as an additional source of variability for the ensemble. This can be accomplished by utilising multiple models to create the ensemble (a multi-model ensemble), by using multiple parameterisation schemes, or by varying the parameterisations of one model (stochastic parameterisations). The latter case is discussed in this essay.
Stochastic parameterisations
In parameterisations, the effects of subscale processes are estimated from the resolved prognostic fields. For example, the cloud fraction is estimated from the relative humidity at the model resolution. As the humidity also varies at scales below the model resolution (sub-grid-scale variability), it is possible to have clouds even though the average relative humidity is well below saturation. Such functional relations can be estimated from a large set of representative measurements or by modelling with a more detailed model. In deterministic parameterisations, the best estimate of, for example, the cloud fraction is used. However, there is normally considerable spread around this mean cloud fraction for a given relative humidity. It is thus physically reasonable to consider the parameterised quantity as a stochastic parameter, with a certain probability density function (PDF).
Ideally, one would like to use stochastic parameterisations that were specially developed for this application. Such a parameterisation could also take into account the relation between the PDF and the prognostic model fields. Developing parameterisations is a major task, normally performed by specialists in a certain process. Thus, to get a first idea of the importance of stochastic parameterisations, NWP modellers started with more ad-hoc approaches. One can, for example, disturb the tendencies calculated by the parameterisations by introducing noise.
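As a minimal sketch of such an ad-hoc approach (my illustration; the relaxation toward 290 K is an invented stand-in for a real parameterisation, and all numbers are illustrative), each ensemble member below multiplies the deterministic tendency by a random factor:

```python
# Sketch of an ad-hoc stochastic parameterisation: every ensemble member
# integrates the same toy model, but the parameterised tendency is
# disturbed by multiplicative Gaussian noise at each time step.
import numpy as np

rng = np.random.default_rng(0)

def tendency(T):
    """Deterministic 'best estimate' tendency: relaxation toward 290 K."""
    return -0.1 * (T - 290.0)

def integrate(T0=300.0, n_steps=20, dt=1.0, noise_std=0.3):
    """One ensemble member with perturbed tendencies."""
    T = T0
    for _ in range(n_steps):
        eps = rng.normal(0.0, noise_std)
        T = T + dt * tendency(T) * (1.0 + eps)
    return T

ensemble = [integrate() for _ in range(10)]
print("ensemble mean:", round(np.mean(ensemble), 2),
      "spread:", round(np.std(ensemble), 3))
```

The spread of the final states comes entirely from the noise on the tendency, which is the extra source of ensemble variability the text describes.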
Online generation of temporal and spatial fractal red noise
It is relatively easy to generate fields with fractal noise using Fourier, wavelet or cascade algorithms (see also my page on cloud generators). However, these noise fields have to be calculated fully before their first use. This can be problematic when a 2D or 3D spatial field is needed as input to a dynamical model and should also have a temporal fractal structure. Often the number of time steps is so large that the noise field becomes impractically large. Therefore, this essay introduces an online method to generate fractal red noise fields, where the new field is calculated from the previous field without the need to know all previous fields; only the current and the new field have to be stored in memory.
The algorithm involves two steps. The spatially correlated red noise is calculated from a white noise field using Fourier filtering. The white noise field evolves in a temporally correlated way by the addition of Gaussian noise; in other words, every pixel of the white noise field represents a fractal Brownian noise time series. Fortunately, the spatially correlated noise field retains the fractal nature of the temporal structure of the white noise field.
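A minimal NumPy sketch of these two steps (the grid size, noise amplitude and spectral slope beta are illustrative choices of mine, not values from the essay): the white field takes a Brownian step in time, and Fourier filtering with a power-law spectrum imposes the spatial correlation, so only the current white field needs to be kept in memory.

```python
import numpy as np

def fourier_filter(white, beta=2.0):
    """Impose an isotropic k^(-beta) power spectrum on a 2D white-noise field."""
    n = white.shape[0]
    freqs = np.fft.fftfreq(n)
    k = np.sqrt(freqs[:, None]**2 + freqs[None, :]**2)
    k[0, 0] = 1.0                                 # avoid division by zero at k=0
    spectrum = np.fft.fft2(white) * k**(-beta / 2.0)
    return np.real(np.fft.ifft2(spectrum))

rng = np.random.default_rng(42)
n = 64
white = rng.normal(size=(n, n))                   # the only state kept in memory

for step in range(10):                            # one iteration per model time step
    white += rng.normal(scale=0.1, size=(n, n))   # temporally fractal Brownian step
    red = fourier_filter(white)                   # spatially fractal red-noise field
    # 'red' would be handed to the dynamical model at this time step
print(red.shape, round(red.std(), 3))
```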
On cloud structure
I have tried to keep this text understandable for a broad scientific audience. Those who already know something about fractals may find the first section on fractals trivial and had better start with the second section, on why clouds are fractal.
Clouds are fractal
Fractal measures provide an elegant mathematical description of cloud structure. Fractals have the same structure at all scales. That may sound exotic, but fractals are actually very common in nature. An instructive example is a photo of a rock, where you cannot tell whether the rock is 10 cm large or 10 m, without someone or some tool next to it. Other examples of fractals are commodity prices, the branch structure of plants, mountains, coast lines, your lungs and arteries, and of course rain and clouds.
The fractal structure of a measurement (time series) of Liquid Water Content (LWC) can be seen by zooming in on the time series. If you zoom in by a factor x, the total variance of this smaller part of the time series will be reduced by a factor y. Each time you again zoom in by a factor x, you will find a variance reduction by a factor y, at least on average. This fractal behaviour leads to a power law: the total variance is proportional to the scale (the total length of the time series) to the power of a constant; to be precise, this constant is log(y)/log(x). Such power laws can also be seen in measurements of cloud top height, column-integrated cloud liquid water (Liquid Water Path, LWP), the sizes of cumulus clouds, the perimeters of cumulus clouds or showers, and in satellite images of clouds and other radiative cloud properties.
If you plot such a power law in a graph with logarithmic axes, the power law looks like a straight line. Thus, a paper on fractals typically shows a lot of so-called log-log plots and linear fits. To identify scaling you need at least 3 orders of magnitude, and thus large data sets with little noise.
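For illustration, the following toy example (synthetic data with an arbitrary exponent, not cloud measurements) shows how the scaling constant is recovered from a linear fit in log-log space:

```python
# A power law variance ~ scale^c is a straight line on logarithmic axes;
# a linear fit in log-log space recovers the exponent c = log(y)/log(x).
import numpy as np

rng = np.random.default_rng(1)
c_true = 0.8                                  # arbitrary scaling exponent
scales = np.logspace(0, 3, 20)                # 3 orders of magnitude, as required
variance = scales**c_true * np.exp(rng.normal(scale=0.05, size=scales.size))

slope, intercept = np.polyfit(np.log10(scales), np.log10(variance), 1)
print(f"fitted exponent: {slope:.2f} (true value {c_true})")
```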