AI

From: Rob Harris (rob@hbinternet.co.uk)
Date: Wed Nov 24 1999 - 10:10:26 MST


>Well, I think the Asimovian (?) laws should be applied to any AI,
>intrinsically, within some kind of non-alternative hard-wired framework.
>That is, similarly to Java, any motive AI should operate within a
>"sandbox", subject to the protection of humanity.

I've posted this same point a million times, but I'm going to do it again
anyway - cos it's not getting through. When you build a system to perform a
certain task, you have to tell it what to do - not what NOT to do. There is
nothing that does everything and then has to be constrained down to a set
task. It doesn't work that way, but I can see exactly why people see the
situation in this way. Of course, we humans work in exactly that way, what
with us having "free will" and all. Just strange that all us free people
always choose exactly the same will (must survive, must gather resources,
must have sex, must climb the social hierarchy, etc.). The point is that
creating an artificial "lifeform", or an evolving program that has
"motivations" to survive and so on, will serve no purpose to us whatsoever.
"Oh, great," the creator will say, "I've got an evolving program here on my
machine with emergent motivations that mean it will attempt to survive."
And it does. Until I pull the plug.
Some talk of "seed AI" becoming a self aware nemesis of humanity. Crap. You
see, the idea of "seed AI" is analogous with the evolution of life itself.
In order for that first chain of genetic material to replicate, it had to
"find" (literally bump into) the complementary nucleotides to make a new
chain. In time and fluke, a structure that optimised this process arose, but
at a price. The price is that the next generation must find not only the
nucleotides that will make up the new chain, but the components of the
optimisation structure as well. And so the quest for resources is born. This
is the mechanism by which there has been evolution of life, because the
optimised bits of genetic material replicated faster, and so outnumbered the
old structures and so on until you have hardcore complexities such as
humans. So, how exactly are you going to bring about this situation in
"cyberspace"? How can resources ever be made to mean anything in this
context? The fact is that you have to hand make motivations by directly
specifying the situations in which the old generation can successfully
reproduce (a fitness function). So, basically, you have to create a program
using perhaps genetic algorithms, and have a specific task in mind, although
there is no reason why this solution would be superior to a straight program
for any specified task.
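
To make that concrete, here is a minimal sketch of the kind of genetic
algorithm being described - in Python, with the target string, population
size and mutation rate all made up for illustration. The point it shows is
that the program's entire "motivation" is the fitness function the
programmer wrote, nothing more:

import random

TARGET = "survive"  # an arbitrary goal, chosen by the programmer

def fitness(candidate):
    # The "motivation" lives entirely here: the score is just how many
    # characters match the hand-picked target string.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    chars = "abcdefghijklmnopqrstuvwxyz"
    return "".join(random.choice(chars) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size=100, generations=200):
    chars = "abcdefghijklmnopqrstuvwxyz"
    # Start from random strings the same length as the target.
    pop = ["".join(random.choice(chars) for _ in TARGET)
           for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            return gen, pop[0]
        # Only the fittest half "reproduces"; the rest are discarded.
        # This is the hand-specified condition for successful
        # reproduction - the fitness function in action.
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return generations, pop[0]

if __name__ == "__main__":
    gen, best = evolve()
    print(f"best after {gen} generations: {best}")

Change TARGET or fitness() and the population's "drive" changes with it;
there is no urge to survive anywhere in the program beyond what that one
function rewards.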
What it comes down to is this: if there were ever to be a malevolent
program threatening the human race, it would be no more "intelligent" or
"sentient" than Microsoft Word, and it would have been created by a
complete idiot who could far more easily have made a big bomb to blow us
all up than an AI super-hacker programmed to get its finger on that red
button.

Rob.
