Re: Is GAC Friendly?

From: J. R. Molloy (jr@shasta.com)
Date: Sat Jun 30 2001 - 07:30:21 MDT


Eliezer wrote,
> Molloy, GAC has nothing to do with AI. Zero, zip, nada. It is not on the
> track to seed AI or Friendliness. There is no point at all to testing
> it. If one person enters the pixel "Is it Friendly to help humans? Yes."
> and someone else later enters the pixel "Is it Friendly to turn humans
> into sawdust? Yes.", GAC will see no contradiction. GAC is dumber than
> Cyc, dumber than Eliza, dumber than an SQL database, dumber than a
> tic-tac-toe search tree, and on roughly the same intellectual level as a
> sack of nails. GAC defines a new nadir of AI. It is a whole quantum
> level dumber than the worst AI programs of the 1950s while sacrificing
> nothing in hype.
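To make the point concrete: a bare statement-to-answer store, sketched below in Python, will happily hold both answers. This is only an illustration of the failure mode, not GAC's actual code; the names and structure are assumed for the example.

```python
# Illustrative sketch (not GAC's actual code): a mindpixel-style store
# is just a map from statement text to a yes/no answer. Because it never
# parses or relates statements, semantically contradictory entries
# coexist without any conflict ever being detected.

pixels = {}

def add_pixel(statement, answer):
    """Record a statement/answer pair exactly as entered."""
    pixels[statement] = answer

def query(statement):
    """Return the stored answer, or None if the exact text is unknown."""
    return pixels.get(statement)

add_pixel("Is it Friendly to help humans?", True)
add_pixel("Is it Friendly to turn humans into sawdust?", True)

# Both answers are served back verbatim; nothing flags the contradiction.
```

Nothing in the store even represents what "humans" or "Friendly" mean, so there is no level at which a contradiction could register.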

Yes, I understand what you're saying, but I think of GAC as a dumb component
that can nevertheless function as a gear in a complex system. It may require
many component parts such as GAC, Cyc, and Cog ("it's just a cog in the
machine") plus interactive neural networks and input devices to produce a
human-competitive autonomous system. Just as thousands of engineers need to
collaborate on myriad sub-systems to build a functional spacecraft, so too
thousands of computer scientists can attack the problem of AI by developing
parts that operate together. GAC and Cyc are dumb in the same way that
encyclopedias and databases are dumb... in the same way that some educated
people are dumb. An encyclopedia of algorithms may also be dumb, until it is
used to create solutions to hard problems.

When GAC makes its selections on the MMPI-2 questionnaire, it will do so on
the basis of what information has been fed into it. Humans who endure this set
of questions will do the same, except that humans may try to second-guess the
system, based (once again) on the information that has been fed into them. I
may be guilty of overestimating GAC, but I think you'll agree that I haven't
overestimated humans. As Einstein reputedly commented, "Two things are
infinite: the universe and human stupidity; and I'm not sure about the
universe." At least the stupidity of GAC is finite.

GAC isn't even close to an evolvable machine because it's not even a machine.
Like the DNA sequence of humans, the billions of bits of data in GAC (or Cyc)
just lie there, doing nothing. Given proper reactive and automated agency,
these bits come to life and function as the genetic code of synthesized
cognition. So, GAC has the raw material, it just doesn't have sufficient
organization and automatic feedback to do anything with it... yet. Like a
primordial soup of trillions of protein molecules, it lacks the spark of
self-organization, for now.

Stay hungry,

--J. R.

Useless hypotheses:
 consciousness, phlogiston, philosophy, vitalism, mind, free will, qualia,
analog computing, cultural relativism

     Everything that can happen has already happened, not just once,
     but an infinite number of times, and will continue to do so forever.
     (Everything that can happen = more than anyone can imagine.)



This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 08:08:22 MST