From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Oct 21 1999 - 15:00:29 MDT
Xiaoguang Li wrote:
>
> here's my understanding of Eliezer's argument for actively
> pursuing the creation of a power:
>
> destruction is easier than construction: given humanity's
> lackluster record with past technological advances, nanowar (read:
> armageddon) is virtually certain.
> evolution is messy: even if humanity survives the advent of
> nanotech and gives rise to transhuman powers, these powers would have no
> more chance of being benevolent than powers designed de novo. the
> dichotomy -- if transhuman powers retained evolutionary baggage such as
> emotional attachments or moral inhibitions, they would be prone to
> inconsistencies and self-contradictions and therefore untrustworthy; if
> they did not, then initial conditions would be insignificant, and there
> would be no functional difference between transhuman powers and powers
> that arise otherwise. the caveat -- AI is easier than IA, so this
> scenario would require active suppression of technology, the ills of
> which are well-known.
> power before nanowar: creation of a power designed de novo before
> nanowar will result in a singularity -- that is, all bets are off (read:
> no more house advantage against the survival of humanity).
So far, so good.
> now for my doubts. does the creation of a power really increase
> the chances of our survival? it seems that the odds of humanity
> surviving a singularity are significantly lower than a coin flip. given
> the enormous difference between the creation and its creator, it seems
> most likely (> 99%) that humanity would not be significant to a power.
> that is, perhaps less significant than virion particles to a human
> being.
I honestly don't know. I really don't. I certainly wouldn't put the
probability at lower than 10%, or higher than 60%... but, ya know, I
could be missing something either way. I think we're missing a
fundamental part of the question. There isn't *any* outcome I can see
for the Singularity, not one of your three possibilities, that could
conceivably have happened a few hundred thousand times previously in the
Universe... which is what I would regard as the absolute minimum number
of intelligent species to expect, given that some galaxies are already
so old as to be burned out.
What I am fairly certain of, and what my intuitions agree with, is this:
1) If benevolent Powers can be created, it will be because there are no
forces or tendencies, internal or external, that would interfere with
their creation - not because we mounted a heroic effort and overcame the
resistance. Humans and AIs are both part of the continuum of "mortal
minds"; they are made of fundamentally the same stuff, and there isn't any
magical reason why humanborn Powers can be benevolent and designed
Powers can't.
2) We can't come up with an evidence-consistent account of what happens
after the Singularity, or what Powers do, because we're not taking some
basic factor into account. The twentieth century just isn't late enough
for our worldview to be capable of phrasing the question properly.
3) There is a massive momentum, backed by profit-motive, idealism, and
all the forces of generalized technological progress, towards creating
and using ultratechnologies like nanotech and AI. Neither can be
suppressed indefinitely, because both will just keep getting easier and
easier, until eventually one person can do it. We can't slow down. We
can't pick and choose. We can only steer for one direction or the
other. Something huge is going to happen on this planet; it's just a
question of what.
4) Uploading is extremely advanced nanotechnology *and* cognitive
science. Any technology capable of uploading someone will have been
capable of frying the planet *or* building an AI for at least five years
of Internet time.
5) This *will* be settled, one way or another, fifty thousand years
from now, no matter what happens in the meantime. One generation has to
be the last, so it would be irresponsible to refuse to deal with it.
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way