Re: Socialism, Intelligence, and Posthumanity

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Jan 11 1999 - 20:23:32 MST


"Keith M. Elis" wrote:
>
> "Eliezer S. Yudkowsky" wrote:
> >
> > Welcome, Keith M. Elis to the very small group of people whom I guess,
> > from personal observation, to be relatively sane.
>
> How small would you say?

Me, you, Greg Egan, Mitchell Porter, Billy Brown, Eric Watt Forste and
Lee Daniel Crocker; possibly Anders Sandberg, Max More, Walter John
Williams, Lawrence Watt-Evans, Carl Feynman and Socrates; and quite a
few people I can't think of offhand. But almost certainly no more than
twenty all told.

No offense intended to those omitted; sanity is a highly specialized
mental discipline, comparable to Zen meditation or lucid dreaming.

> It's good terminology. I remember your original post to this list a year
> or two ago on 'Neutral Terminology for the Future', or something like
> that. I saved it somewhere. Has Anders put it in the >H vocabulary list?

Dunno, although 'Algernon' is there. I was also grateful for the
adoption of "neurohack", which I think is a sufficiently euphonious
buzzword to spawn a dozen startups five years from now, plus an article
in _Wired_.

>[snip]
> This is intuitively right to nearly everyone. And that should cause us
> to raise an eyebrow. Read the above and replace 'happiness' with
> 'sadness'. The resulting sentence seems ridiculous, but it's no more
> arbitrary than the original. Is there a difference between the two
> sentences other than whimsy? Maybe. If there is a difference, then what
> causes this difference? I don't think it's wrong to say that we cause
> the difference. We just like happiness better. I'm sorry but
> that's no standard for a rational person to live by. Or if it is, then
> the logic of thinking so ought to be replicable by an equally rational
> person.

Exactly. True statements aren't easily falsified; they drag all their
causal precedents down with them. If the Sun starts going around the
Earth, you lose General Relativity and the atmosphere floats away. If
you switch two gods in a creation myth, nothing happens. There isn't
any logic behind blind hedonism; it's just a blatant assertion. Anyone
with a memetic immune system sophisticated enough to walk upright should
scream and run away, but apparently there's been some evolutionary
sabotage of the hardware.

> > I have a story on the boiler where a bunch of immortal uploaded (but not
> > mentally upgraded) humans burn out their minds like twigs in a flame
> > over the course of a mere thousand years of happiness, for the very
> > simple reason that human minds are no more designed to be immortal and
> > happy than human bodies are designed to live to age three hundred and
> > withstand nuclear weapons.
>
> When you're finished, I'd like to read it. I've been working on a
> first-person novella illustrating the Darwin-informed absurdity of human
> behavior (I know Camus did something similar, but he suspended
> judgment). I'm not really sure I'm the one that ought to be writing it
> because it could be decent literature if done correctly. I'm in no
> hurry, though. The opening lines are these (approx.):
>
> "It takes a very clever person to be a good animal. To be a human? That
> must take some kind of god."

Good hook. My story is on hold until I learn enough cognitive science
and authorship to figure out a personality for the transhuman main
character that preserves plot tension. Stories are about pain
(particularly emotional pain) and hard choices. How do you reconcile
that, in a reader-sympathizable way, with a transhuman?

> There's a lot to do within ourselves. I'm
> more interested in man growing inward at this point. We monkeys had best
> attempt to get the human thing right before fancying ourselves something
> greater. If we're going to spill foolishness across the galaxy, we may
> as well stay here.

There are two main pieces of advice I can give you:
1. Catch yourself lying to yourself. If you can just once perceive the
sensation of knowing the truth but refusing to acknowledge it, you can
apply the lesson and clean out your entire mind.
2. Study evolutionary psychology, starting with Robert Wright's "The Moral
Animal".

> So then the ultimate good must be something along the lines of
> 'correctly assigning rational probabilities to candidates for the
> ultimate good.'

That's the Interim Meaning of Life, what you do if you don't know the
External Meaning of Life. The External Meaning, for all we know, could
be some kind of exotic knot of energy.

> Intelligent people, when confronted with (for lack of a better term) a
> 'philosophical' debate, usually start wrangling over definitions and
> terms. I see this all the time, and I do it myself. In some sense, we
> are searching for the perfect language of philosophy, the perfect
> symbols for our meanings, with an eye toward a self-consistent and
> logical body of axioms. For some reason, it's always fruitless. Maybe
> you've discovered why. We're arguing about how we think, not what or
> why.

I think it's because most people, and certainly almost all philosophers,
can't keep track of what's real when they're operating at that level of
abstraction. I can always keep track of the concrete consequences of
any philosophical proposition; for me the words are only descriptions of
my visualization and not the visualization itself. I can concretely
define any term whatsoever, as it is actually used - in terms of our
cognitive handling of the word, if necessary. And that's where
virtually my entire facility with philosophy comes from. When I talk
about "objective morality", I am talking about an amorphous
(unknown-to-us) physical object which possesses an approximate
correspondence to our cognitive goals and which motivates
superintelligences, not the phrase "objective morality" itself.

> > There are certain circumstances under which SIs might be illogical, and
> > it is enough to make my head spin, and these are they: If the logical
> > thing for SIs to do is wipe out the mortal race that gave them birth,
> > and the mortals can perceive that logic, they will not give birth to the
> > SI.
>
> If it is logical for the SI to wipe out the gaussians, it is logical for
> the gaussians to wipe out the gaussians.

The "mortals", not the "gaussians". I'm a Specialist, not a gaussian; I
possess some slight variance relative to the unmodified human race, but
it is not significant on a galactic or supragalactic scale. I am well
within the mortal side of any mortal/Power duality. If it comes down to
Us or Them, I'm with Them, but it would be megalomaniacal to expect this
to result in any sort of preferential treatment.

> If so, and given that we
> haven't been wiped out, there is something interfering between this
> logical conclusion and actually doing the deed. One candidate is a
> strong genetic imperative to survive, and all of the attendant memes
> associated therewith. Then again, maybe we have been wiped out
> (forcefully uploaded) and are running as a sim in the elsewhere.

Either we can't see the logic, or we aren't obeying it. And the SIs
aren't here yet.

> > It would therefore be to the SI's advantage not to be visibly bound
> > by that logic, to be illogical. However, once the SI is created, it can
> > do whatever it wants; it cannot influence the mortal reasoning used, so
> > how can it be bound by it? However, the mortals know that, so whatever
> > illogic the mortals think the SI can use would have to transcend it.
>
> Ugh. This is very tricky to follow. The problem here is that what is
> logical cannot be illogical, and what is illogical cannot be logical. Or
> more to the point, if we know what logic is, we can deduce what illogic
> is. So both logical and illogical actions are easily perceptible,
> although the set of both together is infinite in scope. You're talking
> about something that is neither logic nor illogic. That is, logic and
> illogic are not flip sides of a coin, but are actually together on one
> side, and there is something else on the other side. There's nothing
> very helpful about this except insofar as it requires one to admit that
> all bets are off.

Take a look at "Fun Problems for Singularitarians".

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.

