Monday 31 July 2017

Will we be wiped out by machine overlords?



The American Public Broadcasting Service (PBS) just aired a piece on artificial intelligence that seems to be quite typical of American published opinion. The segment, titled "Will we be wiped out by machine overlords? Maybe we need a game plan now", could not have been more wrong in my non-expert opinion.

They gave examples of the progress of machine intelligence. The example that impressed me the most is computers beating humans at Go, a game that starts from an empty board and has a huge number of possible progressions. From these examples of very specific tasks they extrapolate to machines soon having more general intelligence than humans. I think they are wildly optimistic (pessimistic?) about that. This will still take a long time.

A hint of how difficult it is to handle reality and other "intelligent" beings is the world soccer tournament for robots. And that is still just a game with well-defined surroundings and rules.



Notice that when the robots score a goal, they do not take off their shirts and jump on each other to celebrate.

But there will likely be a time when computers are smarter than we are. So what? They have long been better at mental arithmetic, and now they are better at Go. That does not make them overlords. Machines are also faster than us, stronger than us, dive deeper than us and have explored more of the solar system than we have. So what?

Just because machines are intelligent does not mean they want to rule, and even less that they would be evil. It will be hard enough to program them to survive and not jump off a cliff. Making them want to survive will be even harder.

Just because they would be intelligent does not mean they are like us. That is likely the main thinking error people make: we are intelligent, thus an intelligent entity must be like us. We evolved to want to survive and reproduce, mostly by collaborating with each other and with nature, if necessary also in conflict. Intelligence is just a sideshow of this that was apparently advantageous in our niche.

It is possible to make computers solve problems with methods that mimic evolution. Rather than telling the computer in detail what to do, with these methods you only tell it what problem you would like it to solve. That has to be a concrete aim, so that the computer can determine whether it is getting better at solving the problem. Even if you were somehow able to make the computer solve the problem "general intelligence", the computer would just be intelligent.
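To make that concrete, here is a minimal toy sketch of such an evolutionary method (my own illustration in Python, not anything from the PBS segment): the programmer supplies only a fitness function that says how good a candidate solution is, and the program blindly mutates and selects candidates until the aim is reached.

```python
import random

TARGET = "machine overlords"            # the aim we give the computer
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    """How good is this candidate? Number of characters matching the aim."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    """Blind variation: change one random character."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

# Start from random nonsense; nowhere do we say HOW to reach the aim.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]

for generation in range(2000):
    # Selection: keep the better half, refill with mutated copies of survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    if fitness(population[0]) == len(TARGET):
        break

print(generation, population[0])
```

The point of the toy: the code only says what counts as better, never how to get there, and the aim is fixed by the programmer, not chosen by the machine.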

Being a human is so much more than being intelligent. There is currently a bonus on the labour market for smart people, but you need many more capabilities, and drive, to make something of your life.

If being intelligent were so great, evolution would have made us much more intelligent already. It probably helps if a tribe has a few intelligent people, but a tribe of philosophers would quickly go extinct. Getting the variability right is as important as getting the mean right, and bigger is normally not better; there are trade-offs.

One wonders where this fear of intelligence comes from. There are so many people more dangerous than a nerd with stick arms. There are also such machine-overlord stories in Europe, but my impression is that they are more common in America, and I wonder whether this is anti-intellectualism being in vogue. A country where the government thinks scientists are the enemy and need to be defeated. Sad.



Or where the Trump-voter whisperers on the left blame kids who are interested in learning for all the societal ills of America and absolve the rest as innocent victims who cannot be expected to engage with society. This ignores that most of the elite were born into their wealth and have nice diplomas because of the wealth of their parents rather than their yearning for learning.


While I do not see an evil machine overlord ruling over humanity or destroying us, machine intelligence could be a game changer in several ways. Many, at least in newspapers, worry about its influence on the labour market and the creation of mass unemployment. This is possible, but I worry about it a lot less: it is just another step towards more automation, and so far additional efficiency has only made us more affluent. As far as I can see, we do not understand where unemployment comes from (apart from a small part of it due to changing jobs; [[frictional unemployment]]), so I am surprised that people are confident in making unemployment predictions, especially predictions into the far future.

One would expect that people worrying about mass unemployment would advocate shifting the tax burden away from labour. Making labour cheaper should increase demand for it. An alternative would be to tax pollution instead. A reduction in environmental damage and better health would be additional economic benefits, next to less unemployment or better wages.

Machine intelligence can change the balance of power. It is most worthwhile to invest in automating large professions that will serve needs for decades to come. These are the professions everyone knows, which helps fuel the media scare. It will be a long time before someone invests money to make [[bell founders]] redundant. These kinds of jobs are not well known, but combined they make up a decent part of the economy, in future likely even more. Collective bargaining is harder for these kinds of jobs, so labour may lose out to capital, but these are also jobs where it is hard to find replacements and where trust and good relations are important, so it could also be that labour wins over capital.

A recent survey of experts in machine intelligence predicted that in 2049 (pardon the accuracy) bestsellers will be written by computers, and that 11 years from now a computer will create a song that makes the US Top 40. I do not believe this one bit. I would be happy to buy a book on coding in FORTRAN written by a computer, but when it comes to novels or a book on politics, I want to hear from a human. The computational methods I use to generate climate time series can also be used to generate pleasing music; a toy sketch of the idea is below. That could have been a career option, but I would have hidden that the music was composed by a computer. Otherwise no one would have listened to it more than once. It may still provide cheap background music in a supermarket.
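As a rough illustration of the idea (a toy Python sketch I made up for this post, not my actual climate methods): generate a correlated random series, here a simple first-order autoregressive process standing in for a stochastic time-series generator, and map its values onto the notes of a musical scale.

```python
import numpy as np

def ar1_series(n=64, phi=0.8, seed=0):
    """Toy stand-in for a stochastic time-series generator:
    an AR(1) process, x[t] = phi * x[t-1] + noise."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def to_notes(series, scale=("C", "D", "E", "G", "A")):
    """Map the series values onto a pentatonic scale to get a 'melody'."""
    edges = np.linspace(series.min(), series.max(), len(scale) + 1)[1:-1]
    idx = np.digitize(series, edges)   # indices 0 .. len(scale) - 1
    return [scale[i] for i in idx]

print(" ".join(to_notes(ar1_series())))
```

The autocorrelation (phi) is what keeps successive notes close to each other rather than jumping around at random, which is what makes such a series sound vaguely like a melody rather than noise.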



Many jobs also need a lot more than just intelligence: sales people, doctors and teachers. At least for fast food workers, it would have been easy to automate their jobs decades ago, but people prefer food made by humans and handed to them by humans. Even simple restaurants now often have an open kitchen to show that the food is cooked by humans and is not just nuked factory food.

If intelligence becomes a commodity that you can buy, the current bonus on the labour market for smart people may be gone. That bonus is anyway a recent invention. It would be interesting to see how that changes science; intelligence is an important skill for a scientist, but there are many more important ones. Even now, a smarter colleague is often happy to do some complicated specialised task.

When worrying about overlords, a more sensible option would be to worry about humans aided by machine intelligence. Looking at ISIS and their "Christian" counterparts, it seems that evil people are not particularly intelligent or creative. It could be dangerous if such people could buy their missing intelligence at Amazon. On the other hand, maybe there is a reason for the anti-correlation: more intelligent humans may be less sure of themselves, and fundamentalism may disappear.

Initially likely only the elite can afford to buy more intelligence, but we would probably move quite quickly into a regime where everyone has such an add-on and intelligence just becomes normal and nearly worthless.



The main robots to worry about are the amoral machines we invented to create money. Corporations evolved with the aim of gaining money and power. They die, merge, split up and need to survive to make money. As long as they were small and made money by efficiently producing better goods and services within the bounds of the law, they did a wonderful job; now they have grown large and have started looking for political power. Corrupting the political system is an efficient way to gain money and power. When amoral robots do so, this may not end well for humans, who are already being squeezed like lemons.

[UPDATE. I did not have to write this post; it has all been said before. I just listened to an EconTalk episode in which Russ Roberts interviews machine learning expert Pedro Domingos. Good to hear that AI researchers seem to agree with me that AI wiping us out is mainly Hollywood.

Russ Roberts: I love when you wrote--here's another quote from the book:
People worry that computers will get too smart and take over the world. But the real problem is they are too stupid, and they've already taken over the world.
Explain what you mean by that, and why you're not worried about some of the issues we've raised on this program before, with Nicholas Bostrom and others, that AI (artificial intelligence) is perhaps the greatest threat to humanity; machine learning could destroy the world; etc.

Pedro Domingos: Well, exactly. I think those worries are misguided, and frankly, I don't know too many, actually, AI researchers who take them too seriously. They are based on this confusion between AIs and people. Because humans are the only intelligent creatures on earth, when we think about intelligence we tend to confuse it with being human. But, being intelligent and being human are very different things. In Hollywood movies, the AIs and the robots are always humans in disguise. But the real AIs and robots are very different from humans, notably because they don't have goals of their own. People have this model of there will be a different set of agents who are competing with us for control of the planet. They are not going to be competing with us for anything, because we set their goals. Their intelligence is only being applied to solve the problems that we set them to solve, like cure cancer. And there, the more intelligent they are, the better.
]



Related reading

PBS: "Will we be wiped out by machine overlords? Maybe we need a game plan now"

BBC: "The automation resistant skills we should nurture"

Big Think: "Here's When Machines Will Take Your Job, as Predicted by AI Gurus"

The survey itself: "When Will AI Exceed Human Performance? Evidence from AI Experts"

Motherboard: "How Garry Kasparov Learned to Stop Worrying and Love AI"


* Photo Corpo Automi Robot by Bruno Cordioli used under a Creative Commons Attribution 2.0 Generic (CC BY 2.0) licence.

5 comments:

  1. Initially likely only the elite can afford to buy more intelligence, but we would probably move quite quickly into a regime where everyone has such an add-on and intelligence just becomes normal and nearly worthless.

    I wouldn't go *that* far. Extra intelligence may not give you a leg up as compared to other people, but it will still make us more productive, both individually and collectively.

  2. Interesting post.

    My understanding of the AlphaGo machine is that it is not strictly computational -- that is, given a certain board position, it does not just look up what move it should do next -- there are too many possible board positions. Instead it analyzes the position and decides, on its own, what move is best, and it does this by using "knowledge" already acquired from earlier games. An article I recently read said experts considered some of its moves unexpected and "beautiful."

    Victor wrote:
    "Just because machines are intelligent, does not mean they want to rule and even less that they would be evil."

    But is the first trait -- wanting to "rule" -- a sign of *all* intelligent life? Isn't it really the only rule of evolution? Don't almost all (or all?) "intelligent species" want to dominate their environment, in order to give the best chance for their offspring to survive and flourish?

    Evil is a nebulous term and, I think, very subjective. So I'm going to pass on it.

    If man uses his "intelligence" to dominate others, as do chimps and dogs and ants and trees and viruses, why wouldn't a "more intelligent" "species" (AI) "think" about doing the same? Especially as we put them in more crucial positions, like managing electrical grids and the Internet and even military weaponry. Will it too want to survive, above all?

    In any case, a thoughtful post, Victor.

  3. An intelligent machine is not an intelligent species, not intelligent life. It did not evolve to survive and reproduce. Sometimes conflict will be unavoidable, but I am not sure everyone wants to rule.

  4. Victor: Maybe. But I basically see people and other living organisms as machines made out of meat. Does it matter if the machine is made of silicon?

    For meat machines, reproduction is a more basic need than is domination of their environment. Or they'd cease to exist. So perhaps this is a need that is very, very deep down in our physiology and psychology, in the most rudimentary part of our brain/existence.

    Even viruses seek to reproduce. In fact, that's about ALL that viruses seek to do. And they have no brain. So where do their actions come from? What drives them? Just chemistry.

    So I don't have a real problem calling advanced AI a "species." It's not clear to me that "intelligence" at that level won't feel the need to reproduce, just as bacteria do.

  5. Does it matter that the machine is made out of steel? I have not seen any tendency for the Thames Barrier to team up with the Maeslantkering storm surge barrier to rule the world.

    Also AlphaGo did not team up with a chess program to rule the world. Is that just because they are not intelligent enough? Or because they do not want to?

    In Switzerland they built a huge computer to simulate a few neurons. If they someday have enough computing power to model a complete human brain, that brain would likely not like people turning the computer off. Then we are talking. But a computer that is just intelligent could not care less whether we turn it off.

