Re: SOC: Tractatus Theopoliticus (was: Is vs. Ought)

From: Delvieron@aol.com
Date: Fri Nov 26 1999 - 09:31:28 MST


<< Unless the motives of an SI are subject to arbitrary configuration or
 insanity, which would theoretically allow a sufficiently stupid person
 to create an eternal hell-world, then I am sure beyond the point where
 caveats are necessary. In practice, I'm sure enough that you can rip my
 arm off if I'm wrong. (David Gerrold's Criterion.)>>

Well, that sounds pretty damn sure<g>.
 
<<I'm not going to second-guess the SI. I believe that the SI will do
 whatever I would do if I was only intelligent enough to see the
 necessity. I don't know what the motives of SI may be, but I identify
 with SIs, as a child might identify with vis adult self.>>

I think there are three things an SI would need in order to make good
decisions (from the human perspective): Intelligence, Empathy, and Sympathy.

Sympathy is number one - the SI needs to wish to act in our best interests.
Empathy is number two - the SI needs to understand our perspective before it
could know what is in our best interests.
Intelligence is number three - the SI would need to be able to come up with
better methods to act in our best interests (and maybe even determine our
best interests) than we could on our own.

I find it interesting that you see yourself as a child to the SI's adult. If
this were the true situation, then why wouldn't the SI want to see you grow
up? Assuming the SI can do some of the things you suggest, why not simply
show you how to gain SI-level intelligence yourself?
 
<< Maybe. There are limits to how much time I'm willing to spend worrying
 about the ways twentieth-century American culture can misinterpret me.>>

I worry this is a maladaptive response. It would be a pity if society sought
to stop you not for what you were doing, but only for what they thought you
were doing. Misinterpretations have gotten people killed, and I don't think
we've gotten beyond that level quite yet.
  
<<Again, I think that once again we have the basic divide between "life
 goes on" and "walking into the unknowable". I am not advocating a
 social policy. I am not advocating anything human, and the whole
 concept of "social policy" is a very human thing.>>

True, but it will also be a very SI thing if they choose to interact with us.
  
 <<Yes, but there wasn't even the theoretical possibility, at any point, of
 rule by anyone other than Cro-Magnons. And Cro-Magnons are,
 universally, evolved to contain special-purpose cognitive rules for
 reasoning about politics, with little hidden traps that cause every
 person to believe ve is the superior philosopher-king and fail to be
 one. It's all entirely explicable in terms of evolutionary psychology.
 And within that Integrated Causal Model, within that greater model of
 the Universe that contains and explains fifty thousand years of failure,
 we have every reason to believe that AIs will be on the outside.
 
 Unless the initial programming makes a difference, in which case all
 bets are off.>>
 
I'm guessing that initial programming does make a difference.

<< I can, but never mind. Why is perfection necessary? I'll take 99.99%
 accuracy any day.>>

You're right, I would take 99.99% any day. But I'd want some recourse should
the 0.01% errors go against me. Also, when the SI has that much power, how
do we know that it IS that accurate? It might just be altering our
perceptions and memories to make us think it is right that often (maybe for
"benevolent" reasons - like it doesn't want us to be emotionally upset about
how much it is messing up).
 
 <<Fine. Let's say the SI is programmed to simulate an enlightened,
 honest, fair, kindhearted, and absolutely informed agora. Wanna bet it
 couldn't beat the pants off of the modern American government?>>

Okay, you've got something there. It probably would. Of course, whose
definitions of enlightened, fair, and kindhearted are we using?
 
<< Why should errors accumulate instead of cancelling out?>>

I would say both might happen, cancellation and accumulation. Of course, as
long as the SI has a rigorous enough error check, and the error check isn't
itself one of the functions in which the errors are accumulating, then it
should be able to make corrections.
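To put the cancellation-versus-accumulation point in concrete terms, here is
a toy sketch (my own illustration with made-up numbers, not anything from
your post): zero-mean errors largely wash out, a small systematic bias piles
up, and a check against an outside reference only helps if the reference
doesn't share the bias.

    import random

    random.seed(0)
    N = 10_000

    # Independent, zero-mean errors: the running total only grows ~ sqrt(N).
    independent_total = sum(random.gauss(0.0, 1.0) for _ in range(N))

    # The same noise plus a small systematic bias: the total grows ~ 0.1 * N.
    biased_total = sum(random.gauss(0.1, 1.0) for _ in range(N))

    print(f"zero-mean errors net to {independent_total:+.1f} after {N} steps")
    print(f"biased errors net to    {biased_total:+.1f} after {N} steps")

    # A crude error check: estimate the per-step bias against an outside
    # reference. This only works if the reference isn't subject to the
    # same bias, which is exactly the caveat above.
    reference_total = 0.0
    estimated_bias = (biased_total - reference_total) / N
    print(f"estimated per-step bias: {estimated_bias:+.3f}")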
 
<<Once an error becomes large enough for humans to perceive, it is large
 enough for any SI remotely worthy of the phrase to notice and correct.>>

Agreed. Unless it is something that is only obvious from our perspective.
 
<<I mean, why doesn't this whole argument prove that thermostats will
 never match a free democracy that votes whether or not to turn the
 heater on?>>

Depends on what criteria are important for the decision whether to turn the
heater on or off. Thermostats, however, make no initial decision as to what
the temperature should be; so far that is still a human function. Once that
decision is made, a thermostat would do it better than a committee.
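As a toy illustration of that division of labor (my own sketch, with made-up
numbers, not something from your argument): the setpoint is a value
judgement supplied by humans, and the thermostat only executes it.

    # A human (or a committee) picks the goal; the thermostat only executes it.
    SETPOINT_C = 21.0      # the value judgement: still a human decision
    HYSTERESIS_C = 0.5     # switch on only when clearly below the setpoint

    def heater_should_run(current_temp_c: float) -> bool:
        """Pure regulation: no opinion about what the temperature should be."""
        return current_temp_c < SETPOINT_C - HYSTERESIS_C

    for temp in (19.0, 20.4, 20.8, 21.5):
        state = "ON" if heater_should_run(temp) else "OFF"
        print(f"{temp:4.1f} C -> heater {state}")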
 
<<Even supposing this to be true, an SI could easily attain a high enough
 resolution to prevent any humanly noticeable unhappiness from
 developing. I mean, let's say you upload the whole human race. Now it
 runs on the mass of a basketball. If the SI can use the rest of the
 Earth-mass to think about that one basketball, I guarantee you it can
 attain a perceptual granularity a lot better than that of the humans
 running on it.>>

But would this be the right thing to do? If you can upload humanity, then
can't you also alter humanity so that they can use all that Earth-mass for
their own cognition? If the human race were uploaded and an SI kept them
simple enough for it to perfectly comprehend, then I would not consider this
in the best interests of humanity. Humanity's best interests would be served
by promoting our own growth.

One variant on this idea which might be considered benevolent would be if
this were a dynamic instead of a static process. Let's say the SI is caught
up in a Singularity-type scenario, constantly growing and improving itself.
Let us further say that the process doesn't allow much leaping ahead, so
that you more or less need to go through the previous levels of SI to get to
the more advanced ones (perhaps at least if you want to arrive with your
personality intact). It would be immoral to ask the SI to stunt its own
growth just so we could catch up. But as long as it gave us what aid it
could in our own growth, it would be morally acceptable for it to retain its
lead in ability, and wise of the up-and-coming former humanity to accept the
SI's leadership, since it is in a unique position to know better what to do
in most situations. Under these circumstances, I would have very little
problem following the lead of the SI (though I would still not take its
rightness for granted).
 
 <<Complete control over the processors running humanity isn't enough power
 for you?>>

No, to really entrust ourselves to an SI you would also want it to have
complete control over the environment in which those processors, and its
own, operate. Shit still happens, even to SIs.
 
<<What complexity? Just because it looks complex to you doesn't mean that
it's complex to the SI. I occasionally get flashes of the level on
which human minds are made up of understandable parts, and believe me,
society is a lot less complex than that. Less communications bandwidth
between the elements.>>

I agree; the human mind is still more complex than human society in general.
But remember, human society is made up of all those complex human minds.
 
 <<Is this a true argument?
 Would it be obvious to a sufficiently intelligent being?
 Then what will happen, in your visualization, is that I create an SI,
 and it adopts Greg Burch's personal philosophy and does exactly what you
 want it to for exactly the reasons you cited.>>

Then wouldn't the SI really be an extension of Greg Burch's personality? If
the SI doesn't diverge from the opinions, goals, and desires of Greg Burch,
then how is it separate from Greg Burch? Furthermore, if it can't come to a
different conclusion as to what should be done, won't this handicap it in
providing a better world? For though I greatly respect Greg's judgement, in
my experience no living human is ever completely right all the time (though
things could be a lot worse than having the Greg Burch-SI running the
show<g>). Why not just upgrade Greg Burch to SI level (and hey, what about
me too while you're at it<g>)?

Glen Finney
 
 


