From: Carlo Wood (carlo@alinoe.com)
Date: Mon Apr 01 2002 - 09:18:14 MST
On Mon, Apr 01, 2002 at 07:34:52AM -0700, Ben Goertzel wrote:
> I think that there *may* be enough computing power out there to enable
> intelligence at the human level or beyond.
Even if it were possible, which I don't believe, it would be
a very irresponsible thing to do.
After reading up on the subject of the Singularity, which is
rather new to me (I suppose I should have introduced myself first),
I get the impression that everyone assumes that as soon as
the first transhuman intelligence has been created, the 'rest'
will automatically follow.
To me, however, it looks like playing with atom bombs.
It seems much more likely to me that the first Mind will do
everything it can to survive and will have only one (secret) goal:
removing the threat to its existence called 'humanity'.
The LAST thing such a Mind would do is build an even smarter
Mind, making itself redundant (unless it were possible to move its
awareness entirely into the new Mind, but I doubt it would
consider that likely, or that the first Mind could accomplish
it anyway).
Instead, it will turn to manipulation and politics 1) in order to
gain enough power to make its survival secure. Once we humans
realize what is going on, it will be too late to pull the plug,
and within a few decades we will be no more than dangerous
rats to be exterminated.
The problem is not how to build a transhuman intelligence; that will
happen sooner or later. The problem is how to keep it under
control. Using 'idle' PCs on the internet is *NOT* something you
even want to THINK about, imho.
1) As, for example, in the SF classic 'Ender's Game', where a couple of kids with above-average intelligence manipulate world politics by starting (anonymous) discussions on Usenet groups.

-- Carlo Wood <carlo@alinoe.com>