Re: Artilects & stuff

From: den Otter (neosapient@geocities.com)
Date: Wed Sep 15 1999 - 16:22:19 MDT


> den Otter wrote:
> >
> > Responsible and ethical people would probably use Asimov's
> > robotics laws to control the AI, which may or may not work
> > (probably not). How the AI evolves may very well be "fundamentally"
> > beyond our control. So...I'd join the *uploading* team if you
> > have serious future plans. Load me up, Scotty!
>
> Dammit, Otter, if an entity that started out as uploaded Otter managed
> to keep *any* of your motivations through Transcendence, selfish or
> otherwise, you could use the same pattern to create a reliably
> benevolent Power. I mean, let's look at the logic here:
>
> OTTER and ELIEZER, speaking in unison: "The mind is simply the result
> of evolution, and our actions are the causal result of evolution. All
> emotions exist only as adaptations to a hunter-gatherer environment, and
> thus, to any Power, are fundamentally disposable."
>
> ELIEZER: "If there's one set of behaviors that isn't arbitrary, it's
> logic. When we say that two and two make four, it's copied from the
> laws of physics and whatever created the laws of physics, which could
> turn out to be meaningful. All the other stuff is just survival and
> reproduction; we know how that works and it isn't very interesting."
>
> OTTER: "All the emotions are arbitrary evolved adaptations, except
> selfishness, which alone is meaningful."
>
> This just says "Thud". To use a Hofstadterian analogy, it's like:
>
> abc->abd::xyz->?

[etc.]

Ok, let me explain...again.

Emotions may be "arbitrary evolved adaptations", but they're
also *the* meaning of life (or: give life meaning, but that's
essentially the same thing). That's how we humans work.

The only reason logic matters is that it can be a useful
tool for achieving "random" emotional goals. Rationality is
practical... in an emotional context. It has no inherent meaning
or value. NOTHING has. Even the most superb SI would (by
definition) be _meaningless_ if there weren't someone or
something to *appreciate* it. Value is observer-dependent.
Subjective.

At least, that's what I suspect. I could be wrong, but I seriously
doubt it. So what's the logical thing to do? Risk my life because
of some emotional whim ("must find the truth")? No, obviously not.

The rational thing to do is to stay alive indefinitely, sticking
to the default meaning of life (pleasant emotions) until, if
ever[*], you find something better. So maybe you'll just lead a
"meaningful", happy life for all eternity. How horrible!

[*] Actually, as I've pointed out before, uncertainty is
"eternal"; you can never, for example, be 100% sure that
killing yourself is the "right" thing to do, even if you're
an SI^9. Likewise, it will never be conclusively proven that
(a) God doesn't exist, or that our "reality" isn't some
superior entity's pet simulation, etc. You can be "fairly
sure", but never *completely* sure. This isn't "defeatism",
but pure, hard logic. But that aside.

So stay alive, evolve and be happy. Don't get your ass killed
because of some silly chimera. Once you've abolished suffering
and gained immortality, you have forever to find out whether
there is such a thing as an "external(ist)" meaning of life.
There's no rush.

Or:

#1
Your goal of creating superhuman AI and causing a
Singularity is worth 10 emotional points. You get killed
by the Singularity, so you end up with 10 points (plus any
previously earned points, obviously) in total. A finite amount.

#2
You upload, transcend and live forever. You gain an infinite
number of points.

Who wins?
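
To make that arithmetic explicit, here's a minimal sketch in
Python. The specific numbers (the 10-point payoff, the
1-point-per-year accrual rate) are illustrative assumptions
added for this sketch, not part of the argument itself:

    # Toy payoff comparison. Assumes, as above, that staying alive
    # keeps accruing positive "emotional points" indefinitely.
    # All numbers are illustrative placeholders.

    def payoff_option_1():
        """#1: finite payoff -- killed after earning 10 points."""
        return 10.0

    def payoff_option_2(years):
        """#2: keep living; payoff grows without bound over time."""
        points_per_year = 1.0  # any positive rate will do
        return points_per_year * years

    for horizon in (10, 1000, 1000000):
        print(horizon, payoff_option_1(), payoff_option_2(horizon))
    # For any positive accrual rate, option #2 eventually exceeds any
    # finite payoff from option #1 -- which is the point of the comparison.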


