When humans are obsolete
Copyright © 1999 Nando Media
Copyright © 1999 Los Angeles Times Syndicate
By GREG MILLER
(March 21, 1999 9:32 a.m. EST http://www.nandotimes.com) - Three decades after Stanley Kubrick and Arthur C. Clarke gave us the indelible image of HAL, the archetype of the "thinking computer," in the film version of Clarke's "2001: A Space Odyssey," there's been a resurgence of the idea of a machine with intelligence superior to that of human beings.
This idea is represented in three new books, all released within the last few months: "The Age of Spiritual Machines: When Computers Exceed Human Intelligence," by Ray Kurzweil; "When Things Start to Think," by Neil A. Gershenfeld, who heads the Physics and Media group at MIT's Media Lab; and "Robot: Mere Machine to Transcendent Mind," by Hans P. Moravec, a professor of robotics at Carnegie Mellon.
The simultaneous appearance of three books on the same theme prompted a March 6 conference at Indiana University, the "Symposium on Intelligent Machines: The End of Humanity?" The event was organized by Rob Kling, a former UC Irvine professor.
The question posed by this conference title may seem prejudicial, but it was reasonable given the arguments of at least two of the authors, Kurzweil and Moravec. They contend in their books that computer processing power is advancing so dramatically that sometime in the next century, machines will become vastly more intelligent than human beings. Because of this, they say, we will confront the problem of being superseded as a species. Computers of the future may begin to regard human beings as little more than an evolutionary dead end, a kind of pet to be kept around only for amusement or by virtue of the machines' benevolence.
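The arithmetic behind that prediction is simple compound doubling. Here is a rough sketch in Python; the doubling period, the 1999 starting point and the brain-equivalent figure (Moravec's ballpark of roughly 100 million MIPS) are illustrative assumptions, not figures the authors commit to:

```python
# Illustrative sketch of the compound-doubling arithmetic behind the
# Kurzweil/Moravec prediction. All three constants are assumptions
# chosen for illustration.

DOUBLING_YEARS = 1.5           # assumed Moore's-law doubling period
PC_1999_MIPS = 1_000           # assumed desktop machine, circa 1999
BRAIN_MIPS = 100_000_000       # Moravec's rough estimate for the brain

years = 0.0
mips = PC_1999_MIPS
while mips < BRAIN_MIPS:       # double until hardware reaches brain scale
    mips *= 2
    years += DOUBLING_YEARS

print(f"Brain-scale hardware around the year {1999 + years:.0f}")
```

With these toy numbers the crossover lands in the mid-2020s; nudge any assumption and the date moves by decades, which is why the books commit only to "sometime in the next century."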
This thesis was strong enough to pack an Indiana University auditorium with more than 300 spectators, who listened to panels of experts assess these ideas.
The concept of artificial intelligence, or AI, is controversial to say the least. Some regard the phrase as an oxymoron, whereas others view it as the Holy Grail of computer science.
AI has gone through an interesting metamorphosis in the last 20 years. In the 1980s, it was a hot field, funded lavishly by the Department of Defense, which was keen on "smart" weapons and "battle management systems." This research spun off an industry that attracted a lot of venture capital and public investment, spawning Silicon Valley firms such as Teknowledge and IntelliCorp. Nearly all these companies are dead and gone, as the promises of AI failed to materialize or the research was absorbed by other, more conventional approaches to programming.
In the 1980s there was a joke circulating about the field: "If it works, it's not AI." To critics, this meant that the field's hype was so outlandish that its claims could never be realized. To supporters, the joke meant that AI was constantly on the cutting edge, exploring new domains of knowledge, with its practical applications quickly ceded to other fields, like database programming or pattern-matching algorithms.
But there has always been a cadre of "strong-AI" proponents, people who believe that the brain and computers are theoretically equivalent kinds of information-processing devices and that it's only a matter of time before computers not only catch up to brains but surpass them. This position was summed up by MIT professor Marvin Minsky, who once called the human brain a "meat machine."
The three authors mentioned above take this position for granted. They see a near-total congruity between the way computers work and the way the brain works, so that eventually we'll be able not only to reproduce human cognition in machines but also to "download" our thoughts and memories into computers, giving us a far more resilient vessel for consciousness than the fragile and short-lived human body.
Moravec calls such future machines our "mind children," going so far as to suggest that the entire purpose of human beings is to serve as an interim stage of evolution, a "carbon-based" life form whose main task is to build its successor species, a silicon-based life form.
Kurzweil seconds this, saying in his book that mortality will be a thing of the past by the middle of the next century as we migrate to machine consciousness.
Many people have an almost instinctive revulsion when confronted with such claims. The strong-AI advocates understand this but dismiss it as "species chauvinism."
They point out that for the human race to fulfill its destiny, which they commonly see as exploring the universe, we'll need a corporeal form with a far greater life span than that of the human body. And they typically say that computer power is growing so fast that there's no way we could prevent the emergence of super-intelligent machines even if we wanted to.

There are several critiques of these views, the most devastating being the counter-argument of philosopher John Searle of UC Berkeley. Searle asserts that although brains and computers might both be symbol processors, brains are semantic processors while computers are merely syntactic ones: in brains, meaning is all-important; in computers, instructions and their sequence are all that matter. None of the recent books grapples with this critique, possibly because it is so difficult to refute.

The authors also fail to see that the computer industry is headed in a different direction. Kling said they are "extremely good at circumventing the industrial and economic ramifications of their predictions."
"Where do super-intelligent machines come from?" asks Kling. "Do they grow, do
they spring up or are they built by companies?"
He means that both robots and artificial intelligence programs are today shaped by economic requirements, and the demands of industry are for machines and programs that do very specific and limited kinds of tasks. That is, there's very little demand for general-purpose super-intelligence in machines. Manufacturers want robotic arms and hands, not HAL or something like C-3PO in "Star Wars." Expert systems are useful, but usually only in limited applications, such as medical information or the maintenance of complex machinery.
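To make the contrast concrete, here is a minimal sketch of what such a narrow system amounts to: a hand-coded rule base for a single diagnostic task. The rules and symptoms are invented for illustration, but commercial expert systems of the era followed the same pattern at larger scale:

```python
# Minimal sketch of a narrow, task-specific expert system: a
# hand-written rule base for one maintenance task. The rules and
# symptoms here are invented for illustration.

RULES = [
    # (symptoms that must all be present, recommended action)
    ({"no_power", "breaker_tripped"}, "reset breaker and check wiring"),
    ({"no_power"}, "check the power supply"),
    ({"overheating", "fan_stopped"}, "replace the cooling fan"),
]

def diagnose(symptoms):
    # Fire the first rule whose conditions are all present. There is
    # no learning and no general reasoning, just fixed pattern matching.
    for conditions, action in RULES:
        if conditions <= symptoms:
            return action
    return "no matching rule; escalate to a human"

print(diagnose({"no_power", "breaker_tripped"}))
```

Everything such a system "knows" was typed in by a human expert, which is precisely why it is useful in a narrow application and useless outside it.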
Kling also noted, "These books skip a lot of the issues surrounding our dependence on 'dumb' systems, for which the year 2000 problem is the poster child." Most people have problems not with machines that are too intelligent, but with computers that don't seem intelligent enough. No one has figured out, for example, how to make computers simply easier to use or how to make large, complex programs bug-free.
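The year 2000 problem itself shows how "dumb" those dependencies are. As a toy example (the function and dates are invented), legacy code that stored years as two digits and assumed a 19xx century breaks the moment the century rolls over:

```python
# Toy illustration of the two-digit-year shortcut behind the year 2000
# problem. Legacy systems stored only the last two digits of the year
# and assumed the century was 19xx.

def age_in_year(current_yy, birth_yy):
    # Two-digit arithmetic: works only while both dates fall in 19xx.
    return current_yy - birth_yy

print(age_in_year(99, 65))  # 34: correct in 1999
print(age_in_year(0, 65))   # -65: nonsense once the century rolls over
```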
AI has made a lot of progress in the last 10 years, said Benjamin Kuipers, chairman of the computer science department at the University of Texas at Austin and a leader in the field. "But most of it is obscure to the public, because this progress has been absorbed into other fields of programming.
"I personally think that AI is one of the greatest half-dozen scientific
challenges we have today," added Kuipers. "It's important for us to
investigate
the fundamental problem of what the mind is. We have an imperative to
investigate this subject, as human beings."
But Kuipers noted that just because we know how to split atoms doesn't mean we should build a global nuclear arsenal that could destroy life.
"Perhaps we should know about (the issues these authors address) in the same
way we should know not to touch a hot stove," he said.