Re: Why would AI want to be friendly?

From: phil osborn (philosborn@hotmail.com)
Date: Sat Sep 16 2000 - 19:47:34 MDT


What this really boils down to is that any SI is going to eat us, one way or
another. We will probably choose to be eaten (some people prefer the term
"assimilated"), because it will be better in terms of our subjective
experience to be completely transformed into something else - another SI, or
a piece of one. The question is whether our subjective experience or choice
will matter to the SI.

On today's Digital Village, one of the heads of IBM's advanced R & D was
interviewed about a presentation he gave at the "Next 20 Years" conference.
During Q & A from the public, one of the callers asked about some report
indicating that it might be possible to build a quantum computer that could
simulate the entire universe - at a higher resolution.

The interesting thing was that this guy - who also discussed at length the
ubiquitous computing environment, overlaid reality, etc., as things likely to
be part of everyday life 10 to 20 years off - was in fact aware of the
quantum computing report in question and did not simply dismiss it as SF,
although he did express major reservations as to when we would actually be
able to build such a machine.

He did say that we were running up against a serious computing lag in terms
of the results coming from the genome projects. Just knowing the sequence
is only the first step - obviously. But what isn't so obvious to most
people is that the really useful stuff, as he put it, would come when we
were able to simulate the folding of large proteins quickly and accurately.
Then he predicted major breakthroughs in every area of medicine - including
aging.

Nice to know that there are people like that.

>From: Ken Clements <Ken@Innovation-On-Demand.com>
>Subject: Re: Why would AI want to be friendly?
>Date: Mon, 11 Sep 2000 05:32:31 -0400
>
>You can use the term "friendly" in connection with the behavior of
>humans and dogs and many other creatures that we understand, but it has
>no meaning when applied to behavior that we have no hope of
>understanding. I am always amused when SF writers attempt to describe
>the motives and behaviors of the SI. As if!!
>
>An SI may decide that it is "friendly" to suddenly halt all humans in
>mid thought. No humans would see this as "bad" because no one would
>experience it at all (I think I hear a tree falling in the woods
>somewhere). Now you might say that it was "bad" anyway because the SI
>would have known that if we knew what it was going to do we would not
>have liked it. But, what if the SI actually halted all of us because it
>decided to make a very "friendly" world for us, but knew that the
>planned manipulation of the matter of the galaxy would take several
>billion years, and wanted to spare us the subjective wait for paradise
>by encoding us now for playback later? What if we cannot make an SI in
>the first place, because at some point in development they always go
>into some kind of introspective state and halt themselves? These "what
>ifs" are nonfalsifiable, and pointless.
>
>We cannot know what an SI will do; if we could, it would not be one. It
>all comes down to this basic childhood wisdom:
>
>"It takes one to know one."
>
>-Ken
>
>

