Sunday, 22 September 2013

A Tip For The Survival Of Humanity

Let's assume that at some point in the future we'll be able to build computer systems capable of behaving just like a human brain. There are three obvious ways to get there. One way is to build a system that emulates an actual human brain, bootstrapped by copying the brain of an existing person. Another way is to emulate a human brain, but bootstrap via a learning process as children do. The third way is to build a system that doesn't work like a human brain but reimplements the functionality in whatever way seems most suitable to the hardware we have built. I believe that copying the brain of an existing person is by far the best path; the other options are much greater threats to the survival of humanity, and indeed intelligent life.

Fully reimplemented intelligence would be very hard to predict and control. Reliably engineering for virtue will be extremely difficult. In its early stages the system would likely be narrowly focused on specific goals (perhaps military or corporate), and there is great potential for catastrophic bugs, such as runaway goal seeking that accidentally devastates or exterminates organic humanity. Even if you think a complete transition from organic to machine intelligence would be a good thing, we could easily by accident end up at a dead end for intelligent life, for example if humans die off before all aspects of machine self-reproduction are automated.

On the other hand, if we copy an existing brain, we will know roughly what we're going to get: a disembodied human mind in a machine. We can even choose particularly virtuous people to copy. The process will no doubt be lossy, but people with severe disabilities cope with sensory and motor deprivation without going mad, and our emulated minds probably will too. We can be confident that a benevolent, thoughtful person who cares for the welfare of humanity will still do so after transcription.

The emulation-from-infancy approach falls somewhere in between in terms of risk. The result would probably be more like a human than fully reimplemented intelligence, but it's hard to predict what kind of person you would get, partly because they would grow up in an environment very unlike what we would consider a favourable environment for children.

I'm optimistic that an emulation approach is more likely than a full reimplementation approach to succeed technically at producing a self-aware, general-purpose intelligence. Futurists tend to imagine that mere aggregation of computing power will bring on intelligence, but I think they're quite wrong. A great truth of computer science is that hardware scales but software remains hard. Porting existing software is expedient. However, copying an adult brain may be a lot harder than bootstrapping from infancy.

I'm not sure that any of these things will happen in my lifetime, or ever. There is great potential for God or man to frustrate our technological advancement.


  1. "Fully reimplemented intelligence would be very hard to predict and control. Reliably engineering for virtue will be extremely difficult. "

    I wholeheartedly and completely agree with those statements.

    I also agree with your statement "Futurists tend to imagine that mere aggregation of computing power will bring on intelligence, but I think they're quite wrong." Sufficient computing power is a necessary but not sufficient condition.

    That said, I think how much computing power is required matters a great deal. If, using the best available technology and very good algorithms, it still requires a megawatt of electricity to be as intelligent as a human, then catastrophic consequences of bugs are harder to imagine: just pull the power plug (or destroy the electricity transmission lines). On the other hand, if with the best available technology and very good algorithms it requires only 20 watts, then catastrophic consequences are easy to imagine, because pulling the plug might be very hard (solar panels, gasoline engines, or lead-acid batteries can provide that much power for a long time).
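    The contrast above is easy to make concrete with back-of-envelope arithmetic (the battery figures below are illustrative assumptions, not from the original comment): an ordinary car battery could sustain a 20-watt mind for over a day, but a megawatt mind for under a second.

    ```python
    # Rough sketch: how long a stored-energy source lasts at a constant draw.
    # Assumed figures: a ~12 V, 60 Ah lead-acid car battery holds about 720 Wh.

    def runtime_hours(capacity_wh: float, draw_watts: float) -> float:
        """Hours a stored-energy source lasts at a constant power draw."""
        return capacity_wh / draw_watts

    car_battery_wh = 12 * 60  # ~720 Wh

    print(runtime_hours(car_battery_wh, 20))         # 20 W mind: 36.0 hours
    print(runtime_hours(car_battery_wh, 1_000_000))  # 1 MW mind: ~0.0007 hours (~2.6 s)
    ```

    At 20 watts, even modest off-grid sources keep the system alive indefinitely; at a megawatt, "pull the plug" remains a practical safeguard.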

    Another key variable, as you mentioned, is when "all aspects of machine self-reproduction are automated." A rational, sufficiently knowledgeable, and wise machine intelligence that cared about its own survival would not destroy humanity until that occurred. A more foolish machine intelligence (or even a virus, which is definitely subintelligent) could cause a lot of harm right now, and the amount of harm possible has vastly increased in the last decade (missile-carrying drones, lots of cellphones with cameras and microphones) and will likely continue to increase in the next decade.

    As for whether emulation or reimplementation occurs first, I don't know, but it is important to consider that an algorithm designed for the hardware will run faster than emulating the hardware for a different algorithm. Human neurons operate very differently from digital logic, and emulating neurons with digital logic will cause a massive slowdown. No human or group of humans on Earth could emulate even a Commodore 64 in real time, let alone a current computer.

    1. Yes, it's true emulated intelligence won't perform as well.

  2. Sounds like an essay coming out of Battlestar Galactica / Caprica universe :)

    1. It does sound like sci-fi, but we need to take this stuff seriously. Mostly-autonomous killer robots are being deployed now.

    2. It happens to be in this universe.