From: Paul Hughes (planetp@aci.net)
Date: Sun Dec 06 1998 - 14:29:01 MST
"Eliezer S. Yudkowsky" wrote:
> But I never proposed to personally exterminate humanity; I only said that I am willing to
> accept that extermination may be the ethically correct thing to do. In
> essence, I stated that I was willing to default to the opinions of
> superintelligence and that I would not change this decision even if I knew it
> would result in our death. But then, I would hardly go all disillusioned if
> we could slowly and unhurriedly grow into Powers. Either way, *I'm* not
> making the decision that the ends justify the means. If at all possible, such
> decisions should be the sole province of superintelligence, given our dismal
> record with such reasoning.
OK, I'm beginning to follow you here. I think this entire line of reasoning is based on a belief that
there are greater and lesser objective degrees of ethical correctness. Granted, I too have my own set of
ethical criteria which I compare to others', putting somebody like Martin Luther King above me and
Chairman Mao below me. I'm also willing to concede that my entire set of ethical criteria is arbitrarily
determined by evolutionary and environmental factors, and that in an infinite potential space of complex
computational systems (with human society and the Terran biosphere being only one arbitrary set of them),
there could be alien ethical systems of a more technologically advanced species that completely
contradict my own - the Borg being a prime case in point.
One can hardly deny that the Borg are further along the Singularity curve than we are. They possess much
greater technological sophistication - extremely complex and fully integrated computational systems
composed of a combination of neurological and nanotechnological components. The Borg's basic MO consists
of roaming the universe with the sole purpose of accelerating their evolution towards the Singularity
(which they call "Perfection") by assimilating as much computational componentry (biological,
technological, or otherwise) as possible. So the hypothetical question to you, Eliezer: if the Borg
arrived today (Dec 6, 1998) and you had the choice of continuing on the path you're on now or becoming
assimilated by the Collective, which would you choose, and why?
> You are absolutely correct in that I assign a higher probability to my own
> self-awareness than to my Singularity reasoning. I don't see how this implies
> a higher moral value, however. (Should(X), Predict(X), and Is(X)) mean three
> different things. You can phrase all of them as probabilistic statements, but
> they are statements with different content, even though they use the same
> logic. For that matter, I am more certain that WWII occurred, than I am that
> superintelligence is possible. But we all know which we prefer.
Morality? Along my intended line of reasoning, I'm not sure what that has to do with anything. Since we
both assign a higher probability to our own self-awareness than to *any* Singularity reasoning, I find it
illogical that you prefer an admittedly unknowable Singularity over yourself. In other words, you're
placing your cogitation above that which allows your cogitation to take place in the first place - your
brain's self-awareness.
Playing this out in Star Trek terms, I would resist Borg assimilation for the potential of achieving
higher intelligence on my own terms - such as that of an Organian or 'Q' - rather than allow myself to be
consumed by the Collective, a potentially less 'moral' or 'ethical' species-complex. To opt out of such
'desires', as you call them, is to sell myself and what I may potentially become (an Organian) short.
Both the Borg and the Organian styles of intelligence could be argued to have potentially infinite
payoff. Since you and I both agree that the Singularity has a lower assigned probability than our own
awareness, I choose to err on the side of my self-awareness (higher probability) with potentially
infinite payoff (the Organian path) over losing that self-awareness (the higher probability) for a
potentially infinite payoff (the Borg path). This has more to do with choosing the path of highest
probable payoff (my continuing self-awareness) than with so-called 'desire' or certainty, as you say:
> I think you're confusing certainty and desire. I'd say that there's a
> 30%-80% chance of a Singularity, but even if it was only 30%, I'd still want
> it. (Incidentally, I don't assign more than a 95% probability to my own
> qualia. I still believe in "I think therefore I am", but the reasoning has
> gotten too complicated for me to be at all certain of it.)
>
> Let me ask it another way: I have 98% probability that the Singularity is a
> good thing, but only 40% probability that human awareness is a good thing.
I disagree. One could argue that an atomic bomb is a more powerful thing than a firecracker, but is it
necessarily a good thing? 'Good' is way too vague a word to bring into this 'Shangri-La' of precision
we're all grasping for.
Paul Hughes