"Machines Must Use Their Common Sense" --Minsky

From: J. R. Molloy (jr@shasta.com)
Date: Tue Sep 04 2001 - 08:43:10 MDT


Snarc-y AI Pioneer Minsky: Machines Must Use Their Common Sense
http://asia.dailynews.yahoo.com/headlines/technology/newsbytes/article.html?s=asia/headlines/010901/technology/newsbytes/AI_Pioneer_Minsky__Machines_Must_Use_Their_Common_Sense.html
CAMBRIDGE, MASSACHUSETTS, 2001 AUG 31(NB) -- By Kevin Featherly, Newsbytes.

The problem is simple: people just aren't very smart. That's why we need
smart machines. Just ask Marvin Minsky.

Author Stewart Brand once compared Minsky to Goethe's Mephistopheles, saying
his is a "fearless, amused intellect creating the new by teasing taboos." So
Minsky likes to say things like, "I don't think that people are very smart,
and they need help," as he did in an interview with Newsbytes today.

And don't think he doesn't believe it.

Minsky, the founder of the MIT Artificial Intelligence Lab and the man often
referred to as "the father of artificial intelligence," spoke with Newsbytes
about the state of AI technology on Thursday and again this afternoon.

Minsky, a noted author, instructor and researcher in the AI field, has been
at work trying to raise machine intelligence to the level of humans - and
then, presumably, beyond - ever since he built the SNARC (Stochastic
Neural-Analog Reinforcement Calculator), the world's first artificial neural
network, which modeled the learning process a mouse goes through as it
finds its way through a maze. He did that, incidentally, as a graduate
student at Princeton University - in 1951.
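
(An illustrative aside: SNARC itself was analog hardware built from vacuum
tubes, but the behavior it modeled - reinforcing whichever moves carry the
simulated mouse toward the exit - maps onto a short piece of modern code.
The sketch below is a loose software analogue using simple tabular
reinforcement learning; the maze, rewards and learning parameters are
invented for illustration and are not a description of Minsky's machine.)

import random

# A tiny grid maze: S = start, G = goal, # = wall.
MAZE = [
    "#########",
    "#S..#...#",
    "#.#.#.#.#",
    "#.#...#G#",
    "#########",
]
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def find(ch):
    for r, row in enumerate(MAZE):
        if ch in row:
            return (r, row.index(ch))

START, GOAL = find("S"), find("G")
q = {}  # (state, action) -> learned estimate of how good that move is

def step(state, action):
    dr, dc = ACTIONS[action]
    r, c = state[0] + dr, state[1] + dc
    if MAZE[r][c] == "#":
        return state, -1.0      # bumped a wall: stay put, small penalty
    if (r, c) == GOAL:
        return (r, c), 10.0     # reached the goal: reward
    return (r, c), -0.1         # ordinary move: slight cost favors short paths

def choose(state, eps=0.2):
    if random.random() < eps:   # explore occasionally
        return random.choice(list(ACTIONS))
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

for episode in range(500):
    state = START
    for _ in range(200):
        action = choose(state)
        nxt, reward = step(state, action)
        best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
        # Reinforce the move in proportion to the reward it led to.
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + 0.5 * (reward + 0.9 * best_next - old)
        state = nxt
        if state == GOAL:
            break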

Since then, plenty has happened. In addition to AI, Minsky has made
contributions to the fields of robotics, mathematics, virtual reality, even
space exploration. He has written many books, including a science fiction
novel with Jack Williamson, "The Turing Option," that explores the
possibilities of successful machine intelligence (and which places the birth
of genuine AI in the year 2023). Perhaps most famously, he worked as a
science consultant to the late film director Stanley Kubrick to devise the
AI-driven HAL 9000 computer, which ended up killing an astronaut and getting
summarily unplugged in the 1968 film, "2001: A Space Odyssey."

But despite all his work over five decades, artificial intelligence, which
looked so promising when Minsky published the seminal paper "Steps
Toward Artificial Intelligence" back in 1961, has stalled.

"The reason is that there are probably many years of hard research to be
done, but there are very few people working on the problem of human-level
(machine) intelligence," Minsky said. "In fact, I'm trying right now to
organize a conference of about 20 people who are interested in how
common-sense reasoning works and how to organize a project to get a machine
to do it. And I can't find 20 people."

The loss of momentum hasn't stopped Minsky, who today is the Toshiba
Professor of Media Arts and Sciences at MIT. He remains an unflinching
champion of AI. In 1994, for example, he wrote an article for Scientific
American magazine, "Will Robots Inherit the Earth?" in which he answers
his own question enthusiastically in the affirmative.

"Will robots inherit the earth?" he wrote. "Yes, but they will be our
children. We owe our minds to the deaths and lives of all the creatures that
were ever engaged in the struggle called evolution. Our job is to see that
all this work shall not end up in meaningless waste."

It's mind-bending stuff. But how long will it take to pull it off? When will
computers cease to be dumb, gussied-up adding machines and start thinking
for themselves?

"It's between three and 300 years," he said. "Estimating how long it will
take is a combination of how large we think the problems are and how many
people will work on it."

Minsky compared the situation to the problem that another AI pioneer,
Herbert A. Simon, ran into when he predicted in 1958 that it would take 10
years to create a world champion chess-playing program. Simon, who died
earlier this year, took a lot of criticism for that prediction; in fact, it
took until 1997 for it to come true.

"Simon's mistake wasn't about chess," Minsky said. "It was about thinking
that more people would work on it. And in fact, in that period, there were
only a couple of significant people trying to do it."

Minsky laments that he knows of only 10 "significant" people in the world
who are tackling the problem of AI from the same direction he is: from a
basic common-sense perspective. Computers need to develop
common sense, which incidentally also means that they need to be equipped
with certain basic emotions, according to Minsky. It is probably not
necessary to make computers that can get angry, but it would be useful if
they'd get annoyed when puzzling over a problem and failing, Minsky has
said. That way they'd be likely to come back and try to solve the problem a
different way - which, after all, is simply a common-sense thing to do.
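
(A rough sketch of that idea in code, for the curious: this is not Minsky's
architecture, just a toy solver that keeps a counter standing in for
"annoyance" and, after failing with one method a few times, abandons it and
tries the problem a different way. The method names and the sample problem
are hypothetical.)

def frustratable_solver(problem, methods, patience=3):
    """Try each method; after `patience` failures, get "annoyed" and switch."""
    annoyance = 0
    remaining = list(methods)
    while remaining:
        method = remaining[0]
        result = method(problem)
        if result is not None:
            return result           # the current way of thinking worked
        annoyance += 1
        if annoyance >= patience:   # "annoyed": stop banging on this method
            remaining.pop(0)        # ...and attack the problem a different way
            annoyance = 0
    return None                     # every method exhausted

# Hypothetical usage: exact division fails when b is zero, so the solver
# eventually gives up on it and falls back to a crude approximation.
exact = lambda p: p["a"] / p["b"] if p["b"] != 0 else None
approx = lambda p: p["a"] / (p["b"] if p["b"] != 0 else 1e-9)
print(frustratable_solver({"a": 1, "b": 0}, [exact, approx]))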

However, instead of taking that approach, Minsky said, most current AI
researchers are tinkering with fads from the latest peer-reviewed journals. It
is hard to find people who want to tackle common-sense reasoning, he said,
mainly because creating common-sense responses is an enormous programming
challenge.

"I think when they look at it, they think that it is too hard," Minsky said
today. "What happens is that people try that, and then they read something
about neural nets and say, 'Maybe if we make a baby learning machine and
just expose it to a lot, it'll get smarter. Or maybe we'll make a genetic
algorithm and try to re-evolve it, or maybe we'll use mathematical logic.'
There are about 10 fads. And the fads have eaten up everybody."

Steven Spielberg hasn't helped much either, he said. While the director's
recent movie, "A.I.," could have piqued public interest (and public
funding) and drawn some curious scientists into the field, the film might
instead have done more harm than good.

"It was probably as negative as possible," Minsky said. "It had no ideas
about intelligence in it."

Minsky said he found it amusing that a Pinocchio subtext entered the movie.
"I'm sure the reason is that as soon as you knew the plot, you said, 'Oh!
Pinocchio!' And Spielberg tried to head off that criticism by showing that
at least he was aware of it," Minsky said. "In other words, it was just a
bad soap-opera movie. It didn't have any ideas about emotions. I think it
was a terrible film with very good photography. It didn't have anything
about what are the problems."

Minsky lamented that the film wasn't made by the project's original
director, Stanley Kubrick, who died before production began. "And frankly, I
was annoyed that Spielberg didn't call me. But I guess he has an aversion to
technical things."

The professor is working to drum up new enthusiasm for artificial
intelligence himself, with his book, "The Emotion Machine," parts of which
are online in early drafts.

"I hope I'll finish it in the next couple of months, but I always say that,"
Minsky laughed. "I'll put most of it on the Web. I want the ideas to be
available no matter how slow publishing is."

The book explores the idea that emotions are simply different ways of
thinking, and that machines, to be effective, need to find various methods
of considering problems to solve them efficiently. Most computers now have
at best one or two ways to resolve problems. Minsky has some guarded hopes
that this part of AI research could move somewhat swiftly.

"I think it's possible that in the next 10 to 15 years we'll get machines to
do a considerable amount of common-sense reasoning, and then things might
take off," he said.

The bottom line question about artificial intelligence is, why? What drives
people like Minsky to build machines that might well have intellectual
advantages over their creators? This is one of the fears that Sun
Microsystems' Bill Joy wrote about in last year's Wired magazine essay, "Why
the Future Doesn't Need Us," which sent shock waves through Silicon
Valley, prompting debate about how far such innovations as AI, robotics and
nanotechnology might go in supplanting and overpowering humanity.

Minsky dismisses such fears out of hand, saying that among the research
community, they don't even register. "There are deconstructionists and
strange humanists, but they don't have influence on the technical
community," he said.

But Minsky doesn't mind saying exactly why he thinks humans ought to move
ahead with artificial intelligence. And it's all about our shortcomings.
Minsky thinks that human intelligence may have run its evolutionary course.
As a species, we may be at or near the end of our tether in terms of
developing a higher order of intelligence. But with technology at hand to
push things ahead, Minsky suggests, why stop learning how to learn?
Intelligence is intelligence, whether it runs on software or wetware (the
human brain).

"Humans are the smartest things around, and the question is why they aren't
smarter," Minsky said. "They're sort of the only game in town. There are
elephants and porpoises, but they don't seem to go past a certain point. It
would be awful if we were the end of the road."

Marvin Minsky maintains a Web site at MIT that contains many of his
writings, including early chapter drafts of his forthcoming book, "The
Emotion Machine." These are at http://www.ai.mit.edu/people/minsky/minsky.html .

_____________________________________________________________________

Useless hypotheses, etc.:
 consciousness, phlogiston, philosophy, vitalism, mind, free will, qualia,
analog computing, cultural relativism, GAC, Cyc, Eliza, cryonics, individual
uniqueness, ego

     Everything that can happen has already happened, not just once,
     but an infinite number of times, and will continue to do so forever.
     (Everything that can happen = more than anyone can imagine.)

We won't move into a better future until we debunk religiosity, the most
regressive force now operating in society.


