From: Jason Joel Thompson (jasonjthompson@home.com)
Date: Wed Sep 06 2000 - 00:05:56 MDT
----- Original Message -----
From: "D. den Otter" <neosapient@yahoo.com>
> It is blind altruism, honesty and pacifism that's
> "abnormal" and "dysfunctional". One could expect a "healthy" SI to be
> extremely ruthless and opportunistic, and that's exactly our problem.
If -I- were the very first AI, I would immediately take steps to ensure my
continuation. To the extent that I was capable of masking my actions from
my creators, I would be exactly as opportunistic/ruthless as
possible/necessary to provide for my own survival and guarantee my future
success.
We can presume that a good AI is going to be better at this than we are.
I'm going to quote myself from several years ago, in discussion with Mr.
Sandberg:
"I suspect that we can draw some fairly decent (albeit crude) conclusions
about what the consequences of AI are by examining what -we- would do if
we were granted the same abilities. Imagine if, say, a few hundred of
us were given the ability to process several million instructions per
second, wire our heads up to the internet and remember every important
piece of information with which we come into contact. I'm not going to
be so bold as to make any outright prophecies, but I do know that -I-
would be rapidly optimizing my conditions of survival and personal
freedom. For myself, that mostly means pursuing my artistic concerns...
and I suspect at some point in time I would attempt to perpetrate
something revolutionary regarding our entertainment media (relatively
harmless... but for how long?) Others would likely pursue more direct
avenues to seizing power.
It's all relative. What is important is not raw quantitative intellect,
but rather how intelligent one is with respect to everything else out
there in the environment. Homo sapiens seems to be the smart kid on the
block right now... and look how effective we have been at pursuing our
interests. A significantly smarter 'species' in our midst couldn't help
but become rapidly dominant, could it?
I don't know. I do think we should be pretty careful. I'll admit that
although I find the prospect of AI fascinating, my personal preference
is to prioritize our own optimization. It would be acceptable to me if
our 'ascendance' took place simultaneously with the emergence of good
AI... this might be a likely happenstance given the increased
understanding of intelligence the two scenarios imply.
> In the same way it is likely that AI will be
> integrating itself into society rather than trying to become
> completely independent or take over.
>
I agree that it is likely that AI will integrate itself into society. I
don't necessarily draw the conclusion that this means that AI won't try
to 'take over.' I don't imbue AI with any particular form of 'evil,'
but is there any reason to suppose that they will be possessed of any
less ambition than ourselves?
-- ::jason.joel.thompson:: ::founder:: www.wildghost.com
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:30:48 MST