From: paul@i2.to
Date: Tue Aug 03 1999 - 14:56:01 MDT
On Tue, 03 August 1999, "J. R. Molloy" wrote:
> Jeff Davis wrote,
>
> >Despite the dramatic talk of an SI destroying humanity, I picture a
> >well-thought-out, cautious, gradual approach to "waking up" and training an
> >artificial mind. The runaway self-evolution which Eliezer and others have
> >predicted seems unlikely in this setting, all the more so because the
> >principals will be anticipating just such a situation.
I also like what I'm hearing from both you and Jeff.
> Precisely so. No pragmatic economic or organizational reason exists to
> incorporate a machine-based consciousness outside a 100% secure containment
> environment. Hence, it won't happen.
This is where I disagree. Please see my previous post on a
'Clemmenson' distributed-computing, Internet-based SI.
> The fact that the AI doesn't feel pain (no reason to build it in) may allow
> the AI to function perfectly with no concern for its virtual slavery.
Very good point.
> There again, since it has experienced no pain, it need not indulge in
> forgiveness or tolerance exercises.
Another good point.
> I think government aims at our best, not its best. Governments
> (corporations, religions, families, and other entities) function as
> superorganisms, with their own continuity and longevity as their primary
> objectives.
I completely disagree. Superorganisms aim at their own best, not
their component parts' best. I couldn't care less about a few
blood cells, as long as my body keeps functioning. The same
goes for governments and corporations. They exist to
perpetuate their own existence. Since when did a corporation care if it laid off 20,000 workers, as long as its stock price keeps rising? I can't believe you were being serious.
Paul Hughes