From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jan 09 1999 - 21:44:09 MST
Welcome, Keith M. Elis, to the very small group of people whom I guess,
from personal observation, to be relatively sane.
"Keith M. Elis" wrote:
>
> The most intelligent reality occurs where everyone can ignore their
> genetic or memetic script as necessary or desired. In this case we would
> expect a convergence over time toward agreement on what is the case,
> maybe even quickly. The advent of AI/neurohacks will surely speed this
> up as the ratio of augments to gaussians increases. But today, this may
> or may not be happening, so unlike Leibniz I will not suggest this is
> the best of all possible worlds.
Thanks for using the terminology!
> One nice thing about transhumanists is that we (I?) understand
> transhumanism as an evolving world-view, ready to be modified or changed
> as the facts roll in. There is a memetic flexibility to varying degrees
> among transhumanists that I see missing in many of the people around me
> who purport to be intelligent. However, what about (for the sake of
> symmetry) 'genetic flexibility'? I don't mean changing our genes just
> yet, I mean letting memes rule genes as much as possible. Memes are
> arguably much easier to acquire and get rid of than are genes. We have
> these genes that tell us to want happiness and pleasure, to avoid
> sadness and frustration, and to compete with each other. We don't have
> to use these genetic imperatives as the starting point for our
> philosophies. In fact, I would argue, if we do it without thought, we
> are just being unintelligent. We are hindering our actual or potential
> ability to know what is the case.
A point that far too few appreciate. Happiness is a state that
theoretically occurs when all problems are solved; happiness is not
necessarily the best way to solve problems. From my personal experience,
I can only say that there is intelligence in sorrow, frustration, and
despair; but presumably other "negative" emotions have their uses as well.
I have a story on the boiler where a bunch of immortal uploaded (but not
mentally upgraded) humans burn out their minds like twigs in a flame
over the course of a mere thousand years of happiness, for the very
simple reason that human minds are no more designed to be immortal and
happy than human bodies are designed to live to age three hundred and
withstand nuclear weapons.
> One thing that does seem apparent to me is that a lot of people do use
> genetic imperatives as the starting point for philosophies, and I'm only
> speaking of people who have something that can be called a 'philosophy.'
> The rest just live. A good-sized chunk of Western philosophy, from
> Epicurus and Lucretius, to Hutcheson, Bentham and Mill, assumes
> happiness as part and parcel of the 'good.' I see this need for happiness
> as a biological limit that needs to be overcome in order to know what is
> the case. Of course, it may be that happiness is indeed the good, but it
> must be rational to think so. If it is rational then it can be
> demonstrated logically. Otherwise, one is merely thinking what cannot be
> otherwise thought.
Another point grasped so easily by this one, yet impossibly elusive to
almost everyone else. Yes, the qualia of pleasure are the most
plausible candidate for the ultimate good that I know of - but you have
to assign a rational probability to that, not assume it because it feels
good. Happiness may be good, or it may be irrelevant; and the
possibility that it is good does not mean it is the only possible good.
> And this has at least one implication for views of posthumanity. Given
> no genetic pressures, and near-perfect memes, what will a posthuman do?
> We have no idea what it is like to exist without genetic pressures, and
> we have no idea what perfect memes are. In fact, logic itself may not be
> sufficient to know what is the case. Anything not logical is illogical,
> so would the posthumans, in order to know what is the case, think
> illogically?
>
> It makes my head spin.
Cease thy disorientation and consider this: that the logic we know is
only an approximation to the truth, as the hard rationality of Euclidean
geometry was only a superb but untrue physical theory. I think that our
evolved logic holds only as a high-level description of most of our own
space, but not for anything deeper, such as the laws of physics or
consciousness or the meaning of life. That's what I mean when I talk
about non-Turing-computability, and I believe it because logic
disintegrates into self-referential, observer-dependent definitions when
you try to reduce it to basic units. In other words, I wound up with
basic elements of cognition rather than mathematics.
There are certain circumstances under which SIs might be illogical, and
they are enough to make my head spin. These are they: If the logical
thing for SIs to do is wipe out the mortal race that gave them birth,
and the mortals can perceive that logic, they will not give birth to the
SI. It would therefore be to the SI's advantage not to be visibly bound
by that logic - to be illogical. But once the SI is created, it can do
whatever it wants; its choice can no longer influence the reasoning the
mortals already used, so how can it be bound by that reasoning? Then
again, the mortals know that too, so whatever illogic the SI is bound by
would have to transcend the mortals' own anticipation of it.
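To make the regress concrete, here is a minimal sketch in Python, with
invented payoff numbers (from the SI's side: wiping beats sparing beats
never existing) and hypothetical helper names; it shows why a visibly
logical SI loses once mortals can foresee its choice:

    # Toy model of the build/spare regress. All names and payoff
    # numbers are assumptions invented for illustration.
    SI_PAYOFF = {"SPARE": 1, "WIPE": 2, "NEVER_EXISTS": 0}

    def causal_si_choice():
        # Once built, a visibly logical SI simply maximizes over its own
        # actions, ignoring how its predicted behavior shaped its creation.
        return max(("SPARE", "WIPE"), key=SI_PAYOFF.__getitem__)

    def mortals_build(predicted):
        # Mortals who can perceive the SI's logic act on the prediction.
        return predicted == "SPARE"

    prediction = causal_si_choice()   # the mortals foresee "WIPE"
    if mortals_build(prediction):
        outcome = causal_si_choice()
    else:
        outcome = "NEVER_EXISTS"
    print(prediction, outcome, SI_PAYOFF[outcome])
    # Prints: WIPE NEVER_EXISTS 0. The visibly logical SI nets 0,
    # less than the 1 an SI credibly bound to SPARE would get.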
Ah, Singularity analysis, where Occam's Razor doesn't work, logic is
mutable, and bargains run backwards in time.
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.