[The following story is based in part on a report posted to the CNN web site on February 18, 1998. CNI News thanks nanotechnology pioneer K. Eric Drexler, author of "Engines of Creation," for bringing the CNN text to our attention.]
Among the many scenarios of "alien" contact currently under discussion, one that is particularly attractive to mainstream space scientists involves first contact with a machine intelligence of alien making. This machine, capable of withstanding both the rigors and the long duration of interstellar travel, could be so advanced that it would represent a genuine alien intelligence in its own right, probably a reflection of the intelligence that made it.
Far-fetched? Like so much else about possible human contact with aliens, this possibility can now be judged against the reality of recent human accomplishments and plans.
For example, if we assume that human travel to another star system is forever blocked by the laws of the universe and the limits of technology, then -- according to mainstream scientific thought -- alien visitation from another star system is similarly blocked and can be summarily ruled out. For decades, such thinking provided an easy "scientific" rationale for knee-jerk rejection of UFO claims. But today, as legitimate physicists and propulsion engineers speak openly of researching faster-than-light travel, this "easy out" begins to evaporate. In fact, a growing number of scientists now consider travel to other stars at least worthy of contemplation, if not outright likely. To the same degree, the possibility of visitors from another star increases.
And it increases all the more if that star visitor is not flesh and bone, but a highly advanced machine. How advanced? Perhaps advanced enough to identify and navigate to a life-bearing world after decades in the deep-freeze of space; then advanced enough to elude detection as it monitors and evaluates vast amounts of information about that world and its lifeforms; then advanced enough to decide how and when to interact with that world's intelligent inhabitants.
At that point, what this machine visitor does will be especially reflective of the demeanor and priorities of its makers. Hopefully, they will have been peace-minded and non-aggressive. If not, Earth might one day face a serious problem.
But, according to experts in the field of artificial intelligence, that problem might be much closer than any visitor from deep space. It could be evolving in human laboratories right now. It could be little more than a decade or two away. And it could be very, very worrisome.
Most people in 1998 America easily associate the word "Terminator" with the film character created by Arnold Schwarzenegger. A Terminator, the story goes, is the result of a computer system that "got smart" and then turned on its human makers, creating incredibly powerful and super-intelligent killing machines to dispose of the human race.
Could such a thing happen? Perhaps yes.
Some Swiss scientists say such a threat may be closer than we think, according to CNN. Their doom-and-gloom talk was prompted by one of their own creations: an autonomous robot that learns from its environment. Within a few minutes, the microprocessor-based robot can learn not to bump into a barrier. No one programs the robot's actions, and its creators aren't exactly sure how it will behave in any given situation.
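The article doesn't describe the Swiss team's algorithm, but the behavior it reports -- a robot learning not to bump into a barrier without anyone programming its actions -- can be sketched with a tiny reinforcement-learning loop. Everything below (the two sensor states, the reward values, the learning parameters) is an illustrative assumption, not the original system:

```python
import random

# Minimal sketch of learned obstacle avoidance via tabular Q-learning.
# States: 0 = barrier far, 1 = barrier near (from a distance sensor).
# Actions: 0 = drive forward, 1 = turn away.
ALPHA, EPSILON, EPISODES = 0.5, 0.2, 500
q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}

def reward(state, action):
    # Driving forward with the barrier near counts as a collision;
    # any other move earns a small reward for making progress.
    if state == 1 and action == 0:
        return -1.0
    return 0.1

random.seed(0)
for _ in range(EPISODES):
    state = random.choice((0, 1))              # robot wanders; sensor fires
    if random.random() < EPSILON:              # explore occasionally
        action = random.choice((0, 1))
    else:                                      # otherwise act greedily
        action = max((0, 1), key=lambda a: q[(state, a)])
    # One-step update: nudge the value estimate toward the observed reward.
    q[(state, action)] += ALPHA * (reward(state, action) - q[(state, action)])

# The learned policy: which action each sensor state now prefers.
policy = {s: max((0, 1), key=lambda a: q[(s, a)]) for s in (0, 1)}
print(policy)  # drives forward when clear, turns away when the barrier is near
```

Nothing in the loop tells the robot to avoid the barrier; the avoidance rule emerges from penalized collisions, which is also why, as the article notes, the creators can't fully predict the behavior in advance.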
These scientists predict that within 10 years, similar but more advanced machines, equipped with artificial intelligence, will be as clever as humans. Soon after, they say, the man-made machines could become more intelligent than their creators -- and capable of taking over.
"Next century's global politics will be dominated by the question, should humanity build ultra-intelligent machines or not?" says Hugo de Garis. He's been thinking about the problem for quite a while, even as he's participated in building some of the smartest machines on earth. His 1989 paper, "Moral Dilemmas Concerning the Ultra Intelligent Machine" spells out his growing concerns in detail. [Available at http://www.hip.atr.co.jp/~degaris/Artilect-phil.html]
"I'm going so far as saying there will be warfare between these two major groups, one saying building machines is the destiny of the human species, something people should do, and the other group saying it's too dangerous," de Garis says.
Kevin Warwick, a professor of cybernetics, agrees that thinking robots could be dangerous. "I can't see any reason why machines will not be more intelligent than humans in the next 20 to 30 years, and that is an enormous threat," Warwick said.
De Garis speculates that the robots might soon tire of their human creators. "We could never be sure these artilects, as we call them -- artificial intellects -- wouldn't decide that humanity is a pest and try to exterminate us, and they'd be so intelligent they could do it easily," de Garis told CNN.
Warwick goes further. "We're talking in the future the end of the human race as we know it," he said.
De Garis and Warwick aren't alone. Equally gloomy speculations are offered by Vernor Vinge, a mathematician and computer scientist at San Diego State University, whose 1993 paper "The Coming Technological Singularity" argues strongly that machines with super-human intelligence are likely within a few decades, and that their arrival may signal the end of human civilization -- a "singularity."
"If the technological Singularity can happen, it will," says Vinge. "Even if all the governments of the world were to understand the 'threat' and be in deadly fear of it, progress toward the goal would continue. The competitive advantage -- economic, military, even artistic -- of every advance in automation is so compelling that forbidding such things merely assures that someone else will get them first."
A critical moment will arrive, Vinge says, when super-intelligence simply is. It may be a complete surprise to its human designers. But that moment will signal "a throwing-away of all the human rules, perhaps in the blink of an eye -- an exponential runaway beyond any hope of control."
From that moment forward, Vinge believes, we will be in a "Posthuman era." He compares the advent of super-intelligence to the first arrival of self-conscious human intelligence on earth. It represents not only a new species of life, but a different order of existence.
"Just how bad could the Posthuman era be?" he asks. "Well... pretty bad. The physical extinction of the human race is one possibility."
Of course, many artificial intelligence experts reject such musings as unjustified paranoia. Even among those who see super-intelligence coming, there is guarded optimism that it can be contained and controlled. Yet as Vinge himself notes, the perceived advantages of each new step will tend to outweigh the fear of negative consequences.
But can machines that are far more rugged than humans, physically quicker and stronger, impervious to harsh environments, disdainful of fatigue or pain -- and vastly smarter as well -- really be contained? According to de Garis, Warwick and Vinge, it's a dangerous bet.