From: Michael Roy Ames (michaelroyames@hotmail.com)
Date: Sun Jun 09 2002 - 16:52:24 MDT
Eugen Leitl <eugen@leitl.org> wrote:
>
> My 0.02 euro: experiments involving hard-edged positive feedback
> autoenhancement loops are potentially lethal for slowtime flesh people
> who use a vulnerable natural ecology at the bottom of this gravity well
> for life support.
>
> From this it follows: while we're passing the window of high vulnerability,
> 1) we should reduce the rate of such experiments, focusing on the most
> dangerous ones, and 2) we should push for technologies making people less
> vulnerable.
>
Not only will this path not succeed, it is one of the surest routes to
extinction.

The suggestion that we should "reduce the rate of such experiments", while
undoubtedly well-meant, is totally impractical. There are millions of
computers distributed world-wide on which such experiments could easily be
run; it is practically impossible to curtail experimentation in this area.

The advice to "push for technologies making people less vulnerable" seems
like a good idea. However, nothing reduces vulnerability like power - and I
don't mean just the power to destroy, but also the power to create.
Humankind has traditionally gained power through tool use and social
organization. Both modern tools and modern social organization are
currently stymied by intelligence bottlenecks. That is: tools will become
better tools as intelligence is embedded within them, and social
organization will become more powerful and effective as the intelligence of
the communications system is improved, and as the intelligence of humans is
improved. Therefore, empowering people much further than they already have
been will require creating *intelligence* to facilitate that empowerment.

The more powerful we are, the more capability we have to create - or
destroy. You can't have one kind of power without the other kind too. AI
is just another example, the latest in a long, long line of double-edged
swords for humanity. If the good guys flinch away from wielding the AI
sword, then the bad guys will surely grasp it. Come to think of it, the bad
guys will probably grasp it anyway... and if they do, I want to be sure we
have an effective defense and response ready.

Needless to say, I think 'Relinquishment' = 'Suicide'.
>
> Unfortunately, any successful experiment has a very strong probability
> for blighting this place, so I'd wish people would stop trying.
>
This statement is quite correct, but it leaves more unsaid than it says.
What is missing is: "any successful general AI experiment has a very strong
probability of becoming humanity's most powerful tool, friend and ally".
What is also missing is: "how an AI turns out depends on how we build it".
There is lots more to say on this subject, but you get my drift --> saying
that the future could become less pleasant than the past is a trivial
observation, and a somewhat misleading one.

Oh, by the way, I'm not going to stop trying.

Michael Roy Ames