-----Original Message-----
From: Billy Brown <bbrown@conemsco.com>
To: extropians@extropy.com <extropians@extropy.com>
Date: 06 January 1999 22:54
Subject: RE: Paths to Uploading
>Samael wrote:
>> Why do people assume that an AI will arrive fully formed and
>> ready to take on the world? Looking at the human mind, it takes
>> us years, if not decades, to understand the world around us and
>> become adept at manipulating it in complex ways.
>
>Because they don't share our bandwidth limitations. Pre-human AI will
>probably require laborious spoon-feeding of knowledge, but that is standard
>for current-day AI projects. Nobody is going to recognize it as
>human-equivalent until it is smart enough to learn by reading a book.
Humans, when they are first created, need spoon-feeding. They need very high repetition of information to spot the basic patterns in it and learn to recognise the things around them. Later on they can learn in leaps and bounds, but at the beginning they learn slowly. It seems likely (to me, anyway) that an AI would start off learning slowly and then pick up speed as it went along. Certainly, early AIs would learn slowly enough for this to be spotted.
>At that point the advantages of computers over organic systems come into
>play. It should be able to learn as fast as it can process data - and if it
>can deal with real-world events the way we can, that makes it orders of
>magnitude faster than a human.
We also learn as fast as we can process data. Admittedly we don't force ourselves to learn as fast as we possibly could (most people aren't that enthusiastic), but we still spend months, if not years, training the first levels of our neural network to recognise basic objects and the relationships between them, and it's only years later that we become able to spot the fine detail of those relationships and make accurate theories about them.
Admittedly, you could make your AIs backwards compatible, so that the learning an early AI does can be uploaded into a later AI, but backwards compatibility always slows invention - and neural nets may be impossible to make backwards compatible (at least if they are general enough).
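As a rough illustration (my own sketch, in Python with numpy - none of it comes from the posts above), here is what the naive version of that "uploading" looks like: copying an early net's learned weight matrices into the corner of a bigger, later net. The sizes and names are all made up for the example; the point is just that the copied block only keeps its meaning if the later net happens to assign the same roles to those inputs and hidden units, which nothing guarantees.

    # Hypothetical sketch: reusing an "early AI" net's weights in a
    # larger "later AI" net by copying the overlapping block.
    import numpy as np

    rng = np.random.default_rng(0)

    # Early net: 4 inputs, 8 hidden units, 2 outputs (pretend these
    # weights were learned over a long training period).
    early_W1 = rng.normal(size=(4, 8))
    early_W2 = rng.normal(size=(8, 2))

    # Later net: 6 inputs, 16 hidden units, 2 outputs, freshly
    # initialised with small random weights.
    later_W1 = rng.normal(size=(6, 16)) * 0.01
    later_W2 = rng.normal(size=(16, 2)) * 0.01

    # "Backwards compatibility", naive version: paste the old weights
    # into the top-left corner of the new matrices. Everything outside
    # that corner is still untrained, and the pasted weights only help
    # if the new net's hidden units 0-7 end up playing the same roles
    # they did in the old net.
    later_W1[:4, :8] = early_W1
    later_W2[:8, :] = early_W2

    print(later_W1.shape, later_W2.shape)  # (6, 16) (16, 2)

A sufficiently general net spreads what it has learned across all its units, which is why I suspect this kind of cut-and-paste reuse wouldn't work for it.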
Samael