Is Software Generality Overrated?

From: Robin Hanson (hanson@hss.caltech.edu)
Date: Wed May 14 1997 - 15:34:59 MDT


A computer has beaten the best human at chess. Once again we have
passed another long-foreseen milestone toward AI. And again it was
accomplished by a relatively focused and non-general system, and by
more standard software engineering (rather than some "AI"ish approach
intended to implement more general agents).

Now clearly it will one day be possible to create software whose
reasoning abilities and breadth of skills are as broad and general as
humans', if not more so. But I'm wondering: how long might this day be
put off by tradeoffs between the costs and benefits of generality?

On the benefit side: we already have humans who can't use much of
their generality. People can learn to do a remarkably wide variety of
things. But a standard lament is of people who must spend all day
doing some narrow task, but long to express all the other sides of
themselves. People like me leave promising careers in one field
because we're bored and want to do something new for a while. People
fight to move up hierarchies where they can take a broader view.

On the cost side: It can be very expensive to make software able to do
a wide variety of different tasks. It costs in speed of execution,
and in time to write, test, and maintain the code. People have a huge
"common sense" knowledge base to draw on, and writing it all down for
computers is a daunting task.

If the most general reasoning in combined human-computer systems
continues to be done by people, and computers take the more routine
specific tasks, then the generality boundary between computer and
human reasoning may move slowly. Yes, computers get faster, but people
also learn more about how to do general tasks well, including how to
program and use computers. Eventually we will learn how to speed up
and further modify human brains, and if this day comes soon enough,
computer AI may never displace humans at the general tasks.

Robin D. Hanson hanson@hss.caltech.edu http://hss.caltech.edu/~hanson/

P.S. I seem to recall a "Lighthill" panel in the UK that came to a similar
conclusion about AI research in the early 70s. Any sources on this?


