From: Robin Hanson (rhanson@gmu.edu)
Date: Tue Oct 10 2000 - 07:38:17 MDT
Samantha Atkins wrote:
> > I find it striking that people seem to think that the speed and nature of
> > learning is one area where AIs will be very different from humans, even
> > though they seem to think AIs will be similar in so many other ways, ...
>
>A good and interesting point. One thing that seems less than certain to
>me concerning future AIs and learning is the notion that they will be
>almost totally able to share learned knowledge with one another
>relatively fully and instantaneously. But if the AI has a lot of its
>mentality modeled along the lines of neural nets ... once
>learned it may be no easier to extract the learned knowledge cleanly
>than in the case of a human because that net of information is quite
>entangled with other things that have little to do with the subject
>except to that particular individual. ...
>Perhaps this is an argument for not using things that resemble such
>networks for general knowledge acquisition and association.
Your hypothesis seems to be that the architecture of the human mind is poor
at communicating with other minds, and that some other architecture might do
much better. You might be right, but I suspect that what you are seeing is
mainly the intrinsic difficulty of communication, which any architecture
will have to deal with. AI researchers will be very happy to have an
architecture that even comes close to being as good as the human mind, and
there's no obvious reason to expect them to do much better any time soon.
Also, a plausible theory is that primate brains evolved primarily to deal
with the social world around them. So one cannot argue that our brains are
bad at communication because they evolved primarily for other purposes;
communication has been near the center of recent evolutionary pressures on
our brains. This suggests even more strongly that it will be hard to do
much better soon.
Symbolic-style AI, i.e., something based on a system like CYC, seems to me
the best hope of fulfilling your vision, as its primary forms of
representation are very close to human language. Of course, symbolic AI
gets a lot of criticism around here, though not from me.
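The contrast at issue can be made concrete with a toy sketch (hypothetical
code, not CYC or any real system): in a symbolic store, a learned fact is a
discrete, addressable entry that one agent can hand another verbatim, while
in a connectionist model the same fact is smeared across every weight by
training, so there is no clean sub-part to copy out.

```python
# Symbolic: each fact is a discrete entry -- sharing one piece of
# knowledge is just copying that entry into the other agent's store.
kb_a = {("capital_of", "France"): "Paris"}
kb_b = {}
kb_b[("capital_of", "France")] = kb_a[("capital_of", "France")]

# Connectionist: "knowledge" is a weight vector shaped jointly by all
# training examples; no single weight corresponds to a single fact.
def train(weights, examples, lr=0.1):
    # one pass of a toy delta rule: every example nudges every weight
    for x, target in examples:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = target - pred
        weights = [w + lr * err * xi for w, xi in zip(weights, x)]
    return weights

w = train([0.0, 0.0], [([1, 0], 1.0), ([1, 1], 0.0)])
# Both training examples altered both weights; extracting just one
# learned association would require retraining, not copying.
```

On this picture, Samantha's worry is that the second kind of system cannot
share learning "relatively fully and instantaneously," whatever its other
merits.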
Robin Hanson rhanson@gmu.edu http://hanson.gmu.edu
Asst. Prof. Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326 FAX: 703-993-2323
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:31:31 MST