From: Dan Clemmensen (Dan@Clemmensen.ShireNet.com)
Date: Mon Jul 20 1998 - 20:56:51 MDT
Robin Hanson wrote:
>
> Dan C. wrote:
> >> >Since the SI will be vastly more intelligent than humans, IMO we may not
> >> >be able to comprehend its motivations, much less predict them. The SI will
> >> >be so smart that its actions are constrained only by the laws of physics,
> >> >and it will choose a course of action based on its motivations.
> >>
> >> Why do you assume such a strong association between intelligence and
> >> motivations? It seems to me that intelligence doesn't change one's
> >> primary purposes much at all, though it may change one's tactics as one
> >> better learns the connection between actions and consequences.
> >
> >Human motivation is less complex than the motivations of ants?
>
> You lost me here.
Oops, I lost myself too. I meant to say:
Human motivation is the same as the motivations of ants?
I intended this rhetorical question to convey that human motivations are a
lot more complicated than the motivations of ants, and that the motivations
of an SI will be proportionately more complex.
>
> >Robin, the reason I produced the list of motivations and actions was
> >to attempt to provide specific examples. Can you recommend a way for
> >me, or another human or group of humans or construct of humans (short of
> >an SI) to reliably assign probabilities to that list?
>
> Dan had written:
> >...
> >M: SI wants to maximize its power long-term
> >A: SI sends replicator probes in all directions.
> >
> >M: SI wants to die.
> >A: SI terminates itself. ...
>
> It seems to me that the motivations of future entities can be predicted
> as a combination of
> 1) Selection effects. What motivations would tend to be selected for
> in a given situation?
I don't see this. Selection applies to populations, and requires
replication and mutation. (See the toy sketch at the end of this message.)
It may be that there is a population of SIs. I don't think so, but I could
be wrong. However, it's irrelevant, since only one SI is important to us:
the one generated in our singularity.
> 2) Legacy motivations. Descendants of current creatures will likely
> retain much of their motivations, translated to a new context.
>
> Wanting to die isn't favored by selection or legacy, except for certain
> translation possibilities. Spatial colonization will be selected for,
> and has lots of legacy pushing for it as well.
>
> Increases in intelligence allow creatures to anticipate selection effects,
> accelerating those effects for creatures who prefer to be selected.
> Increases in intelligence also raise the possibilities of creatures
> attempting to integrate their otherwise disparate legacy preferences
> under simpler unifying principles. Otherwise, I don't see how increases
> in intelligence should lead us to expect much change in motivations.
>
This whole line of reasoning neglects the fact that the SI has control of
its own motivations. The SI may choose to (i.e., be motivated to) retain
legacy motivations, but this is by no means certain, or even likely.
By my own rules, I can assign no probability to it either way :-)
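
To make the selection point concrete, here is a toy sketch (my own
construction, in Python; the trait, the fitness rule, and all numbers are
arbitrary). A trait that aids replication spreads through a replicating
population, but a lone entity with no offspring just drifts:

import random

# Toy model: each agent is one number, its disposition toward (say)
# spatial colonization. Fitness is the disposition itself, so selection
# should push the population average up -- but only if there is a
# population that replicates with variation.
def evolve(pop_size, generations, mutation=0.1):
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness-proportional replication plus Gaussian mutation.
        weights = [f + 1e-9 for f in pop]   # avoid all-zero weights
        pop = [max(0.0, random.choices(pop, weights=weights)[0]
                   + random.gauss(0, mutation))
               for _ in range(pop_size)]
    return sum(pop) / pop_size

print(evolve(pop_size=100, generations=50))  # mean rises well above its ~0.5 start
print(evolve(pop_size=1,   generations=50))  # a lone "SI": pure drift, no selection

The second run is the point: a single SI with no replicating descendants
has nothing for selection to act on, whatever its initial motivations.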