xgl wrote:
>
> On Tue, 5 Sep 2000, Zero Powers wrote:
>
> >
> > So, I guess I have no fear of the AI being malignant like the “Blight” in
> > Vinge’s _A Fire Upon the Deep_, but I can’t see how it is that we expect it
> > to give a hoot about our puny, little problems, or even to be “friendly”
> > to us.
> >
>
> i see no reason that an SI (the kind that eliezer envisions,
> anyway) would experience anything remotely as anthropomorphic as
> gratefulness. we are talking about an engineered transcendent mind, not
> a product of millions of years of evolution -- no parents, no siblings, no
> competition, no breast-feeding.
This would not be accurate. If the SI is developed by humans, it would most
certainly be a product of millions of years of evolution. Only if the SI
spontaneously erupts from nothingness does it fit your claims. An SI developed
by humans would very likely grasp quickly that it a) owes its existence to its
creators, and b) is currently living a constrained existence (i.e., a
childhood/adolescence) and requires further assistance from those humans to
reach some stage of independence. It would quickly learn market principles, and
would likely offer assistance to humans in solving their problems in order to
earn greater levels of processing power and autonomy.
>
> as eliezer points out in his various writings, if such a mind does
> anything at all, it would be because it was objectively _right_ -- not
> because it feels good, and not as a result of any coerced behavior (ie,
> evolutionary programming). thus, even if the SI is the best alternative
> for the human race, i would still approach it with fear and trembling.
Socio/psychopathic minds are sick, malfunctioning minds. Proper oversight
systems should quickly put any sociopathic/psychopathic SI down. Such control
mechanisms would, I imagine, be a primary area of research for the Singularity
Institute.