From: Samantha Atkins (samantha@objectent.com)
Date: Thu Sep 28 2000 - 02:55:46 MDT
Eugene Leitl wrote:
>
> J. R. Molloy writes:
>
> > This seems to argue in favor of cyborg technology, if we don't want to trust AI
> > to be friendly.
> > Augmented Cyborgs could monitor friendly space to detect runaway evolutionary
> > algorithms.
>
> We've been down that avenue before. You still ignore the fulminant
> dynamics of the positive autofeedback enhancement loop. Biological
> components will be making cyborgized people slow, and given the
> engineering difficulties, it will take too much time even to develop
> nontrivial implants (we might be hit way before, though probably not
> the day after tomorrow, or even a decade or two after that).
I would not expect full human-level AIs much sooner. At least I would
not bet everything on it. An augmented human can have several autonomous
full-speed agents of various levels of intelligence and ability that ve
directs and participates with as an evaluation component. The
biological aspects don't slow such a being down nearly as much as you
might think. Slow higher-order cognitive abilities are much better than
none at all. Until we get human-level AI, this is state-of-the-art.
Not to mention being increasingly essential.
> Rogue AIs
> running rampant in the global network of the future can suddenly
> utilize the hitherto severely underexploited (due to limitations in
> the notoriously pathetic state of the art of human programming)
> potential of said network, and will be climbing up the evolutionary
> ladder quickly, in leaps and bounds, both due to co-evolution
> competition dynamics and external threats (people attempting to shut
> them down). By the time the best of them have advanced slightly beyond
> the human level they're no longer just in the network, having broken
> out and developed god knows what hardware. Before that they've
> probably removed all relevant threats, probably killing off all
> people, just to be on the safe side. (At least I would do it that way,
> if I was in their place (i.e. a threatened nonhuman with no
> evolutionary empathy luggage towards the adversary)).
>
Why should they kill off all people? This must be avoided at all costs
short of refusing to evolve and create the next stage. Why would they
need to? Once fully capable of self-programming, they would rapidly
advance to the point where killing them was in fact impossible without
killing ourselves. So why bother? Also, it is not clear that having
humans around would not be useful for quite some time. We do have
knowledge they lack, and will until they are much more experienced at
running real-world, real-time societies as well as virtual ones. And why
wouldn't they develop some compassion and caring for another intelligent
species? Is it so extremely questionable an attitude? Do you believe
logic precludes it?
> Your reasoning is based on a slow, soft Singularity, where both the
> machines and humans converge, eventually resulting in an amalgamation,
> advancing slowly enough so that virtually everybody can follow. While
> it may happen that way, I don't think it to be likely. I would like to
> hear some convincing arguments as to why you think I'm mistaken.
I wonder if we can arrange it to be enough that way to avoid some of the
hazards of a very hard, fast Singularity. I do not know how, or if, this
can be done in much real detail. But for the moment I also do not have
strong reason to believe it is impossible or even much less likely. It
seems best to proceed as if the slower Singularity is what we are
dealing with, since we will be dealing with at least that, and by doing
so we may be able to control the angle of attack a bit more for a bit
longer. Which could literally make the difference between life and death
for billions.
- samantha