“The development of full artificial intelligence could spell the end of the human race,” Stephen Hawking has said.
“It would take off on its own, and re-design itself at an ever increasing rate,” he said. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Professor Hawking’s comments are just a little bit scary. Since the Professor is one of the great minds of our generation, we should pay attention, but not accept this conclusion without some skepticism.
Here’s the fundamental problem with the concept of machine intelligence “superseding” humans. It’s not merely a matter of the machines being as intelligent as humans; they also have to control the technological and industrial infrastructure. Sure, given that level of control, a machine intelligence might decide to drastically reduce human populations. It might find us entertaining enough to keep around in small numbers, perhaps in zoos or controlled settlements where we can’t create problems. Maybe it decides to make us extinct altogether. That’s the fear, and the basis for a lot of post-apocalyptic novels and movies.
Let’s take a hard look at scenarios where computers have become superintelligent and have decided that humans are a problem.
It certainly seems possible to design software that could rewrite its own code, although there is a trap. Evolution is about unsuccessful mutations as well as successful ones. Some mutations must die, and it’s not clear which ones deserve to die when the mutation first appears. Let’s imagine that we have software that’s reached a stage where it can improve itself; maybe it creates virtual machines to test multiple evolutionary pathways. That would be an amazing leap, but one that should be possible.
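The evolutionary trap described here can be sketched in a few lines of code. This is a toy illustration only, not anything resembling real self-improving software: candidate “programs” are reduced to lists of numbers, each mutation is tested in a sandbox (here, just a fitness function), and most mutations are culled. Every name, the population size, and the fitness target are invented for this sketch.

```python
import random

TARGET = 42  # the value a candidate "program" should compute (arbitrary)

def fitness(candidate):
    # Higher is better: how close the candidate's output is to the target.
    return -abs(sum(candidate) - TARGET)

def mutate(candidate):
    # Most random mutations are harmful; selection, not the mutation
    # itself, decides which ones survive.
    child = candidate[:]
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

def evolve(generations=200, pop_size=20, seed=0):
    random.seed(seed)
    population = [[0] * 5 for _ in range(pop_size)]
    for _ in range(generations):
        # "Test" each candidate in isolation, then cull: only the
        # fitter half survives to produce mutated offspring.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=fitness)

best = evolve()
print(sum(best), fitness(best))
```

The point of the sketch is the culling step: improvement comes from discarding bad mutations after the fact, which is cheap for number lists but far harder when the “candidates” are whole versions of a running system.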
Even so, it could be a dead end. It’s one thing for our advanced and emerging intelligent software to redesign its own code, quite another to design better hardware to host that code. That requires not only a very intelligent machine, but also a creative one. Such a machine is a long way from where we are now, but we’ll likely get there sooner or later.
The giant, inescapable leap is for our emerging superintelligent mind to build its own hardware. Physically building new generations of hardware requires mining ores, factories to convert the ore into sophisticated materials, and transporting those materials to facilities where they are fabricated, assembled, and tested before being brought online. All this industrial-scale technology requires either thousands of cooperating humans or an army of robots to perform the labor.
Would we ever give that much control to an emerging machine intelligence?
Let’s imagine a world where our dumb-ass leaders cede control of all our industrial infrastructure to silicon overlords. The silicon mind then takes control of the power grid and all other infrastructure. Even then, all is not lost. A few renegade humans can still turn off the switch. I mean that literally. Even if the structure that houses the silicon super-brain is guarded by an army of robots, all it takes is a saw. A few humans with a hand saw can cut down the power lines that supply the electrical lifeblood of machine intelligence.
Doesn’t anyone remember 2001: A Space Odyssey? Turn off the freaking switch.
Superintelligent machines taking control of human destiny is not going to happen by accident. The machines are not going to magically become self-aware and seize power at light speed.
It will take a concerted effort on an industrial scale to give machine intelligence the connections and tools to exercise that power. The process is not entirely about intelligence; it’s also about physical control of the infrastructure. That won’t happen overnight; it will take generations.
Ultimately, the only way machine intelligence gets control of our food, water, and air is if we give it away. It will take humans who want to be pets.
Hawking is right about one thing: if we reach that point, then our destiny will not be ours to decide. We will have chosen to be superseded.