Re: Socialism, Intelligence, and Posthumanity

From: Keith M. Elis (hagbard@ix.netcom.com)
Date: Mon Jan 11 1999 - 16:39:43 MST


"Eliezer S. Yudkowsky" wrote:
>
> Welcome, Keith M. Elis to the very small group of people whom I guess,
> from personal observation, to be relatively sane.

How small would you say?

>
> "Keith M. Elis" wrote:
>
> > [...] The advent of AI/neurohacks will surely speed this
> > up as the ratio of augments to gaussians increases. [...]
>
> Thanks for using the terminology!

It's good terminology. I remember your original post to this list a year
or two ago on 'Neutral Terminology for the Future', or something like
that. I saved it somewhere. Has Anders put it in the >H vocabulary list?

> A point that far too few appreciate. Happiness is a state that
> theoretically occurs when all problems are solved;

And coupled with an evolutionary analysis, we see that the solved
problems sufficient to cause happiness are likely very few. We are
happy when our bellies are full, we are in no pain, and we are just
about to mate. Then happiness gives way to physical pleasure. What
better way to keep us procreating? Our current way of life tends to
complicate the 'about to mate' part. Our mating rituals are intricate
and costly, and no results are guaranteed. One might say we are the
most dejected bunch of animals ever to walk the planet. Consciousness
is a burden.
  
> happiness is not
> necessarily the best way to solve problems.

Any knucklehead can follow an evolutionary analysis from statements that
are nearly universally agreed upon (such as 'happiness is good') to
'people probably can't help thinking so.' A hypothesis: the more people
agree on something, the more likely it is to be a product of evolution
rather than reason.

>
> From my personal experience
> I can only say that there is intelligence in sorrow, frustration, and
> despair; but presumably other "negative" emotions have their uses as well.
>

They all had or have at least one use, the use to which they are
probably best suited: causing us to make affirmative efforts toward
seeing our genes into the next generation. In many cases, though, it is
suffering that gets one to self-reflect. The adage is that 'ignorance is
bliss.' I don't know about that, but I do think that bliss can breed
ignorance.

I spent some time reading Bentham and Mill, and they both nod in the
direction of Francis Hutcheson, who once wrote (I forget where, offhand)
"That action is best which procures the greatest happiness for the
greatest number."

This is intuitively right to nearly everyone. And that should cause us
to raise an eyebrow. Read the above and replace 'happiness' with
'sadness'. The resulting sentence seems ridiculous, but it's no more
arbitrary than the original. Is there a difference between the two
sentences other than whimsy? Maybe. If there is a difference, then what
causes this difference? I don't think it's wrong to say that we cause
the difference. We just like happiness better. I'm sorry, but that's
no standard for a rational person to live by. Or if it is, then
the logic of thinking so ought to be replicable by an equally rational
person.
 
> I have a story on the boiler where a bunch of immortal uploaded (but not
> mentally upgraded) humans burn out their minds like twigs in a flame
> over the course of a mere thousand years of happiness, for the very
> simple reason that human minds are no more designed to be immortal and
> happy than human bodies are designed to live to age three hundred and
> withstand nuclear weapons.

When you're finished, I'd like to read it. I've been working on a
first-person novella illustrating the Darwin-informed absurdity of human
behavior (I know Camus did something similar, but he suspended
judgment). I'm not really sure I'm the one who ought to be writing it
because it could be decent literature if done correctly. I'm in no
hurry, though. The opening lines are these (approx.):

"It takes a very clever person to be a good animal. To be a human? That
must take some kind of god."

I envision an implicit transhumanist theme. In defining what man was and
what man is, we define what man was not, is not, and is not yet. This
brings me to a point about transhumanism. Among transhumanists, a group
of people with models that approximate reality better than most, IMO, I
see too many animals eschewing limits and not enough humans. Biological
limits keep us from growing outward, indeed, but they also keep us from
growing inward. My consciousness, surrounded by the universe, seems to
be on the fringes of this deep, dank recess within myself, where all
sorts of stupidity lurk. Some call it the reptile brain, or animal
nature. I just call it unconsciousness. At this point, we don't have the
tech to beat a path across the galaxy, or live forever, or compile
matter, or even interface our brains with a simple calculator. With a
few possible exceptions (you know who you are), the cheerleaders here
are not going to be the ones to make it happen. But there is a lot to
do, even by amateurs like myself. There's a lot to do within ourselves.
I'm more interested in man growing inward at this point. We monkeys had
best attempt to get the human thing right before fancying ourselves
something
greater. If we're going to spill foolishness across the galaxy, we may
as well stay here.

> Another point, grasped so easily by this one, impossibly elusive to
> almost everyone else. Yes, the qualia of pleasure are the most
> plausible candidate for the ultimate good that I know of - but you have
> to assign a rational probability to that, not assume it because it feels
> good.

So then the ultimate good must be something along the lines of
'correctly assigning rational probabilities to candidates for the
ultimate good.'

> Happiness may be good, or it may be irrelevant; and the
> possibility that it is good does not mean it is the only possible good.

Logically true.

> > It makes my head spin.
>
> Cease thy disorientation and consider this: That the logic we know is
> only an approximation to the truth, as the hard rationality of Euclidean
> space was only a superb but untrue physical theory.

A great and important point. We may regard the set of that which
entails reality as a complete description of reality, but we have no
basis to assert that the set of all logical truths is a complete
description of the same.

> I think that our
> evolved logic holds only as a high-level description of most of our own
> space, but not for anything deeper, such as the laws of physics or
> consciousness or the meaning of life.

It's no wonder that logic implies weird paradoxical conclusions when
applied to things that had no perceptible effect on our evolution. We
evolved in this world of 'macro-sized' objects, where a thing is itself,
and not something else. Some have said that evolution has little
philosophical import, but I can't imagine that it had nothing to do with
the way we reason.

> That's what I mean when I talk
> about non-Turing-computability, and I believe it because logic
> disintegrates into self-referential observer-dependent definitions when
> you try to reduce it to basic units. In other words, I wound up with
> basic elements of cognition rather than mathematics.

I don't think I've gone this far, and I'm not sure I would know a basic
element of cognition if it thumped me on the noggin, but the idea
resonates. I recall Wittgenstein's attempt to map, principally, the
limits of linguistic ability to express intelligible meaning, and thus
map the limits of conceptual thought. His idea was that if we can't
represent it, we cannot cogitate it. His conclusions basically threw
metaphysics and ethics and aesthetics out the window as principally
meaningless from a logical standpoint, which (through some
misunderstanding, it seems) spawned the Vienna Circle and eventually
logical positivism.

Intelligent people, when confronted with (for lack of a better term) a
'philosophical' debate, usually start wrangling over definitions and
terms. I see this all the time, and I do it myself. In some sense, we
are searching for the perfect language of philosophy, the perfect
symbols for our meanings, with an eye toward a self-consistent and
logical body of axioms. For some reason, it's always fruitless. Maybe
you've discovered why. We're arguing about how we think, not what or
why.

>
> There are certain circumstances under which SIs might be illogical, and
> it is enough to make my head spin, and these are they: If the logical
> thing for SIs to do is wipe out the mortal race that gave them birth,
> and the mortals can perceive that logic, they will not give birth to the
> SI.

If it is logical for the SI to wipe out the gaussians, it is logical for
the gaussians to wipe out the gaussians. If so, and given that we
haven't been wiped out, there is something interfering between this
logical conclusion and actually doing the deed. One candidate is a
strong genetic imperative to survive, and all of the attendant memes
associated therewith. Then again, maybe we have been wiped out
(forcefully uploaded) and are running as a sim in the elsewhere.

Then again, even if it is logical for an SI to wipe us out, *and* we can
perceive the logic, at least you and I (and presumably others) would
still give birth to the SI simply because logic has no claim to being
the source of objectively true answers. It may or may not be, but
that's what the SI is for.

> It would therefore be to the SI's advantage not to be visibly bound
> by that logic, to be illogical. However, once the SI is created, it can
> do whatever it wants; it cannot influence the mortal reasoning used, so
> how can it be bound by it? However, the mortals know that, so whatever
> illogic the mortals think the SI can use would have to transcend it.

Ugh. This is very tricky to follow. The problem here is that what is
logical cannot be illogical, and what is illogical cannot be logical. Or
more to the point, if we know what logic is, we can deduce what illogic
is. So both logical and illogical actions are easily perceptible,
although the set of both together is infinite in scope. You're talking
about something that is neither logic nor illogic. That is, logic and
illogic are not flip sides of a coin, but are actually together on one
side, and there is something else on the other side. There's nothing
very helpful about this except insofar as it requires one to admit that
all bets are off.

_____________________________________________
Keith M. Elis
mailto:hagbard@ix.netcom.com


