From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Aug 05 1999 - 15:34:46 MDT
(WAS: Re: IA vs. AI was: longevity vs singularity)
den Otter wrote:
>
> ----------
> > From: Eliezer S. Yudkowsky <sentience@pobox.com>
>
> > I think we have a serious technological disagreement on the costs and
> > sophistication of uploading. My uploading-in-2040 estimate is based on
> > the document "Large Scale Analysis of Neural Structures" by Ralph Merkle
> > (http://www.merkle.com/merkleDir/brainAnalysis.html) which says
> > "Manhattan Project, one person, 2040" - and, I believe, that's for
> > destructive uploading.
>
> Then somewhere else you wrote:
>
> > I would
> > expect virtually everything Drexler ever wrote about to be developed
> > within a year of the first assembler, after which, if the planet is
> > still around, we'll start to get the really *interesting* technologies.
> > Nanotechnology is ten times as powerful and versatile as electricity.
> What we foresee is only the tip of the iceberg, the immediate
> > possibilities. Don't be fooled by their awesome raw power into
> > categorizing drextechs as "high" nanotechnology. Drextechs are the
> > obvious stuff. Like I said, I would expect almost all of it to follow
> > almost immediately.
>
> So you've said yourself that nanotech is to be expected around 2015,
> and that even today's most advanced Drextech designs would soon be
> obsolete. How does this jibe with a 25(!!!) year gap between the first
> assemblers and uploading? I bet that if nanotech were as potent as
> you assume, full uploading could be feasible in 2020, about the same
> time as your AI. If time is no longer a factor, uploading becomes more
> than ever the superior option.
The key phrase is CRNS - "Current Rate, No Singularity". As you point
out, Merkle's article does not include nanotech. In the event that
nanotechnology is developed, we get a new measure of time: NRNS,
"Nanotech Rate, No Singularity". I would expect military attack goo at
about 3m NRNS, spaceship technology at 6m NRNS, active shields against
gray goo at 1y NRNS, a working lunar colony at 2y NRNS, and uploading
at 4y NRNS. There will never be shields against military goo - see
later post.
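To make that guesswork concrete, here's a toy Python sketch converting
those NRNS offsets into calendar dates, taking your 2015 assembler date
as the assumption. Every number in it is just the guess from the
paragraph above, nothing more:

    # Toy arithmetic: NRNS offsets ("Nanotech Rate, No Singularity")
    # turned into calendar dates, assuming a first assembler in 2015
    # (den Otter's date, not an established fact).

    ASSEMBLER_YEAR = 2015  # assumed first-assembler date

    nrns_offsets_years = {
        "military attack goo":        0.25,  # 3 months NRNS
        "spaceship technology":       0.5,   # 6 months NRNS
        "active shields (gray goo)":  1.0,   # 1 year NRNS
        "working lunar colony":       2.0,   # 2 years NRNS
        "uploading":                  4.0,   # 4 years NRNS
    }

    for tech, offset in sorted(nrns_offsets_years.items(),
                               key=lambda kv: kv[1]):
        print(f"{tech:28s} ~{ASSEMBLER_YEAR + offset:.2f}")

Same arithmetic gives off-the-shelf uploading (6y NRNS, below) around
2021 on that assumption - versus 2060 CRNS.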
> Merkle's article is a conservative extrapolation of current technology,
> *it does not include nanotech*. I didn't see that quote "Manhattan
> Project, one person, 2040" either. He simply concludes: "If we
> use the [conventional] technology that will be available in 10 to
> 20 years, if we increase the budget to about one billion dollars,
> and if we use specially designed special purpose hardware --
> then we can determine the structure of an organ that has long
> been of the greatest interest to all humanity, the human brain".
You're right. Interesting. Maybe I'm using the wrong source, but
that's the only one I could find, and I really thought that was the one.
Looking into it now, I see that what's being proposed isn't actually
uploading, it's just getting a detailed wiring diagram from a series of
brain sections over the course of three years. I don't think that would
preserve sentience or even basic computational ability - it would just
be a useful hint for AIers, which is what the article is about. I
really seem to recall a technological estimate for actual non-nanotech
uploading that said 2040.
> > You're talking about off-the-shelf uploading.
> > You're talking about nondestructive-uploading kiosks at the local
> > supermarket.
>
> No, though these could very well be feasible with 2020 nanotech.
Yeah, about 6y NRNS.
> > That's 2060 CRNS
>
> Yeah, right, 45 years after the first functional assembler. Wow,
> progress will actually be *slowing down* in the next century.
> Would this result in an anti-singularity or something?
[snip]
> > Then they'd damage uploading more. It's a lot harder to run an
> > uploading project using PGP.
>
> Why would this be harder for IA than for AI?
Because one could at least try to run an AI on a network of home
computers using secure IP, while IA neurosurgery requires a complete
hospital, neuroimaging equipment, and a lab capable of adapting
(admittedly off-the-shelf) brain-stimulation technology originally
developed for treating epilepsy.
Uploading requires either a huge lab or nanotechnology. You can't hide
the lab, and nanotechnology is a whole 'nother issue.
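To make the contrast concrete, "secure IP" means something like the toy
Python sketch below - a coordinator farming work units out to volunteer
machines over TLS. The hostnames, port, and work-unit format are all
invented for illustration; the point is that this side of the race needs
nothing but commodity hardware and encryption:

    # Toy illustration of "an AI on a network of home computers using
    # secure IP": hand out work units over TLS-encrypted connections.
    # Hosts, port, and the work itself are made up for the example.
    import json
    import socket
    import ssl

    VOLUNTEER_HOSTS = ["volunteer1.example.org",   # hypothetical
                       "volunteer2.example.org"]
    PORT = 8443

    # Default context verifies certificates and encrypts all traffic,
    # so the work in flight is opaque to anyone watching the wire.
    context = ssl.create_default_context()

    def send_work_unit(host: str, unit: dict) -> dict:
        """Ship one work unit to a volunteer machine, return its result."""
        with socket.create_connection((host, PORT)) as raw:
            with context.wrap_socket(raw, server_hostname=host) as conn:
                conn.sendall(json.dumps(unit).encode() + b"\n")
                return json.loads(conn.recv(65536).decode())

    # Contrast: an uploading project can't be distributed like this -
    # it needs a physical lab, and you can't hide the lab.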
> > Do I think nanotechnology is going to blow up the world? Yes.
>
> .......but probably not before we (most likely in a substantially
> augmented form) can escape to space.
Well, that's where I disagree. In the event that nanotechnology is
developed, say by Zyvex, either the U.S. government will confiscate all
of it, or Zyvex will have to keep it secret until it can fight the
government and win - and it'll have to develop all the applications on
its own, which makes that possibility fairly unlikely, especially since
the phrase "military applications" doesn't seem to have passed through
their mind.
So the US confiscates it, and either conquers the world, or keeps it a
secret, or the secret gets out and somebody tries to launch a preemptive
nuclear strike. I'm not really sure how this would play out, but it
seems to me to end either in a dictatorship or in a nanowar. I don't
see where the lunar colonies would come from.
> > Do I
> > lift the smallest finger against it? No.
>
> Unless nanotech is absolutely crucial to build your AI, this statement
> doesn't make any sense. This race is way too important to worry
> about good sportsmanship.
Maybe it's too important *not* to worry about good sportsmanship. If I
were to assassinate my own role model, not only would I tick off every
single person who could conceivably have helped me, and not only would I
kill the only person with enough wariness and influence to keep things
from getting worse if something *does* go wrong, but I'd also be
inviting retaliatory strikes on AI researchers. I really don't think
anyone's chances of survival would be helped by a deathmatch among
transhumanists.
I can't keep nanotechnology from being developed - only keep it from
being developed by the people I can influence, who are probably the
best of all evils.
> > I really don't see that much of a difference between vaporizing the
> > Earth and just toasting it to charcoal. Considered as weapons, AIs and
> > nanotech have equal destructive power; the difference is that an AI can
> > have a conscience.
>
> It has a *will*, an intelligence (but not necessarily a conscience in
> the sense that it feels "guilt"). An ASI is an infinitely more
> formidable weapon than nanotech because it can come up with new ways to
> crush your defences and kill you at a truly astronomical speed.
> Like the Borg, who adjust their shielding after you've shot a couple
> of them, only *a lot* more efficient. Nanotech is just stupid goo
> that will try to disassemble anything it comes into contact with
> (unless it's another goo nanite -- hey, you could base your defenses
> on that). So...avoid contact. Unless the goo is controlled by a
> SI (not that it would bother with such hideously primitive
> technology), it can be tricked, avoided and destroyed. Try that
> with a SI...
It's not goo I'm worried about, it's deliberately developed weapons.
Yudkowsky's Fourth Threat: "Technologies with military potential are
*always* used." I don't think there's much more of a defense against
nanotechnological weaponry than there is against SIs. I mean, yes, an
SI is unimaginably large overkill while a nanowar is just a little
overkill, but, like I said, what's the difference between vaporizing the
planet and toasting it to charcoal? Is there that much of a difference
between being shot and being nuked? Either way, you're dead.
> These possibilities are non-trivial, certainly when the military
> start fooling around with AI, in which case they're likely to
> be the first to have one up and running. Hell, those guys might
> even try to stuff the thing into a cruise missile. So no, I don't
> see why I should trust an AI more than myself.
I do worry about the possibilities of "AI abuse". Maybe I don't worry
enough. Maybe I'm dissing Zyvex for being naive about military
applications while being just as stupid myself. Maybe *any* technology
capable of saving the world will be capable of blowing it up two years
earlier. One of the things that makes navigating the future so
interesting is that the problems are not guaranteed solvable. Sometimes
you're just doomed. But...
I guesstimate that, in self-enhancing AI, there's a sharp snap from
prehuman to posthuman intelligence. With any luck, none of the seed
stages will be intelligent enough to be a major threat, none of the end
stages will be dumb enough to be controllable, and none of the
intervening stages will last long enough to be a problem.
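A back-of-the-envelope way to see the "sharp snap": model each round of
self-enhancement as buying ability that grows with current intelligence.
The growth law and the constants in this Python sketch are invented
purely to show the shape of the curve, not to predict anything:

    # Toy model of a hard takeoff: increments scale with the square of
    # current ability, so growth crawls for ~20 steps, then explodes.
    intelligence = 1.0   # arbitrary "prehuman" starting level
    threshold = 100.0    # arbitrary "posthuman" level
    step = 0
    while intelligence < threshold:
        intelligence += 0.05 * intelligence ** 2
        step += 1
        print(f"step {step:3d}: intelligence = {intelligence:10.2f}")
    # About 24 steps: negligible increments for most of them, then a
    # sharp snap - no long-lived intermediate stages.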
--
sentience@pobox.com Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS Typing in Dvorak Programming with Patterns
Voting for Libertarians Heading for Singularity There Is A Better Way