From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Mar 14 2000 - 00:26:34 MST
sayke wrote:
>
> do terms like "dumb" kinda lose meaning in the absence of personal
> control? i think so.
Oh, bull. You have no personal control over your quarks, your neurons,
or your environment. There is not one tool you can use which has a 100%
chance of working. You are at the mercy of the random factors and the
hidden variables. "Maintaining control" consists of using the tool with
the highest probability of working.
> how kind of the sysop. theocracy might sound nifty, but i don't think it
> would be stable, let alone doable, from a monkey point of view.
How fortunate that the Sysop is not a monkey.
> an omniscient ai is pretty much inscrutable, right? i don't know how we
> can evaluate the inscrutable's chances of becoming what we would call
> "corrupt". i think the least inscrutable thing about an omniscient
> intelligence would be its need for resources. other than that... i dunno.
Yes, its need for resources in order to make humans happy. Munching on
the humans to get the resources to make the humans happy is not valid
logic even for SHRDLU. Inscrutability is one thing, stupidity another.
> i fail to see how it could not get tangled up... even in a case like "in
> order to maximize greenness, the resources over there should be used in this
> manner" (which has no self-subject implied) a distinction must be made
> between resources more directly controlled (what i would call "my stuff")
> and resources more indirectly controlled (what i would call "other stuff"),
> etc... and as soon as that distinction is made, degrees of
> ownership/beingness/whatever are implied, and from there promptly get mixed
> up in the goal system...
Wrong.
What else can I say? You, as a human, have whole symphonies of
emotional tones that automatically bind to a cognitive structure with
implications of ownership. Seeds don't. End of story.
> necessary? in the sense that such an arrangement will increase my odds of
> survival, etc? i doubt it, if only because the odds against my survival
> must be dire indeed (understatement) to justify the massive amount of work
> that would be required to make a sysop; effort that could rather be
> invested towards, say, getting off this planet, which would be a better
> stopgap anyway.
Getting off the planet will protect you from China. It will not protect
you from me. And you can't get off the planet before I get access to a
nanocomputer, anyway.
> unless, of course, you come up with a well thought out essay on the order
> of "coding a transhuman ai" discussing the creation of a specialized sysop
> ai.
If the problem is solvable, it should be comparatively trivial.
Extremely hard, you understand, but not within an order of magnitude of
the problem of intelligence itself.
> i trend towards advocating a very dumb sysop, if it can be called that...
> a "simple" upload manager...
Probably not technologically possible. Even a mind as relatively
"simple" as Eurisko was held together mostly by the fact of self-modification.
> >You and a thousand other Mind-wannabes wish to
> >ensure your safety and survival. One course of action is to upload,
> >grow on independent hardware, and then fight it out in space.
>
> or just run the fuck away, and hopefully not fight it out for a very, very
> long time, if ever. dibs on alpha centauri... ;)
One of the things Otter and I agree on is that you can't run away from a
Power. Nano, yes. Not a Power. Andromeda wouldn't be far enough. The
only defense against a malevolent Power is to be a Power yourself.
Otter got that part. The part Otter doesn't seem to get is that if a
thousand people want to be Powers, then synchronization is probably
physically impossible and fighting it out means your chance of winning
is 0.1%; the only solution with a non-negligible probability of working
is creating a trusted Sysop Mind. Maybe it only has a 30% chance of
working, but that's better than 0.1%.
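(A toy back-of-the-envelope check of that comparison, not in the original post: the symmetric thousand-way contest and the 30% figure are just the assumptions stated above.)

    # Rough sketch: survival odds under "fight it out" vs. a trusted Sysop,
    # assuming a roughly symmetric thousand-way contest and taking the 30%
    # Sysop figure from the paragraph above at face value.
    n_contestants = 1000
    p_win_fight = 1.0 / n_contestants   # ~0.001, i.e. 0.1%
    p_sysop_works = 0.30                # assumed figure from the post

    print(f"Fight it out:  {p_win_fight:.1%} chance of coming out on top")
    print(f"Trusted Sysop: {p_sysop_works:.0%} chance of working")
    print(f"The Sysop route is ~{p_sysop_works / p_win_fight:.0f}x better")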
Of course, if you're totally attached to your carnevale instincts and
you insist on regarding the Mind as a competing "agent" instead of an
awesomely powerful tool with a 30% chance of working - an essentially
subjective distinction - then you might refuse to hand things over to
Big Brother; this does, however, amount to sacrificing yourself and
your whole planet to satisfy a factually incorrect instinct.
> or we all will go Elsewhere... or we will all stalemate... or we will all
> borgify... or we will all decide to commit suicide... or (insert
> possibilities here that only a Power could think of).
Great. In that case, the Sysop can set you free with a clear conscience.
> >No. You cannot have a thousand times as much fun with a thousand times
> >as much mass.
>
> i don't see how we can know that. what if, just for example, we need the
> entire solar system to make a very special kind of black hole? geez...
Then we'll all cooperate.
> mutually assured destruction seems more clever than a sysop.
It won't work for nano and it sure won't work for Minds.
> what if the objective goal is to attain as much "individuality" (whatever
Then we'll all do it together, inevitably. No problem.
> what if i want to *be* said Pact?
I don't trust you. I can't see your source code, and if I could, I
almost certainly wouldn't trust it. den Otter doesn't trust you either.
You're an agent, not a tool.
--
sentience@pobox.com              Eliezer S. Yudkowsky
http://pobox.com/~sentience/beyond.html
Member, Extropy Institute
Senior Associate, Foresight Institute