From: Anders Sandberg (asa@nada.kth.se)
Date: Thu Jan 07 1999 - 11:40:16 MST
"Eliezer S. Yudkowsky" <sentience@pobox.com> writes:
> Anders Sandberg wrote:
> >
> > OK, this is the standard SI apotheosis scenario. But note that it is
> > based on a lot of unsaid assumptions: that it is just hardware
> > resources that distinguish a human level AI from an SI (i.e, the
> > software development is fairly trivial for the AI and can be done very
>
> That's not what Billy Brown is assuming. He's assuming software
> development is _harder_ than hardware development, so that by the time
> we have a working seed AI the available processing power is considerably
> larger than human-equivalent. I happen to agree with this, by the way.
Yes, that is not unlikely. But note that this is a problem for the
scenario, since if you assume that software development is hard, then
the AI will have a hard problem to solve (obviously on the order of
many man-years of work, requiring a diverse skill set and diverse
outlooks).
> > fast, and adding more processor power will make the AI *smarter*),
>
> I'll speak for this one. If we could figure out how to add neurons to
> the human brain and integrate them, people would get smarter.
<ROFL!> Sorry, but I disagree as a neuroscientist. The number of
neurons obviously places a ceiling on the information content of our
brains, but it is the connection structure that makes us
smart. "Integrating" them is the problem, since I assume what you
mean is equivalent to "connecting them in the *right* way" rather
than just connecting them randomly (the latter is quite possible;
brain transplants are being researched, and there is a gene in the
mouse that causes enormous brain growth - I wonder if they called it
Algernon? :-)
> (I deduce
> this from the existence of Specialists; adding cognitive resources does
> produce an improvement.)
Does it? In what tasks? In what ways? What resources? This is not so simple.
> Similarly, I expect that substantial amounts
> of the AI will be working on sufficiently guided search trees that
> additional processing power can produce results of substantially higher
> quality, without requiring exponential amounts of power.
What about combinatorial explosions?
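A rough, purely illustrative Python sketch of the arithmetic I have in
mind here (the branching factor, depth and speedup are made-up
numbers): even a huge increase in raw processing power buys a search
only a few extra plies of depth unless the pruning is very aggressive.

  from math import log

  def nodes_unguided(branching, depth):
      # An unpruned search tree grows as branching**depth.
      return branching ** depth

  def extra_depth(branching, speedup):
      # Extra plies of depth bought by a 'speedup'-times faster machine.
      return log(speedup) / log(branching)

  print(nodes_unguided(10, 8))    # 100,000,000 nodes already at depth 8
  print(extra_depth(10, 1000))    # a 1000x faster machine buys only ~3 more plies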
> > that this process has a time constant shorter than days (why just this
> > figure? why not milliseconds or centuries?),
>
> Actually, I think milliseconds. I just say "hours or days" because
> otherwise my argument gets tagged as hyperbole by anyone who lives on
> the human timescale.
Actually, I doubt milliseconds because of the hardware limitations
even on nanocomputers, unless it turns out that self-enhancement is
well suited for massive parallelism without too much internal
communication.
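To make that caveat concrete, here is a small Python sketch of the
standard Amdahl-style bound (the fractions are made-up numbers): even
a modest amount of unavoidable serial work or internal communication
caps the speedup, no matter how much hardware you throw at it.

  def amdahl_speedup(parallel_fraction, n_processors):
      # Speedup is limited by the serial/communication fraction of the work.
      serial = 1.0 - parallel_fraction
      return 1.0 / (serial + parallel_fraction / n_processors)

  print(amdahl_speedup(0.95, 10))      # about 6.9x on 10 processors
  print(amdahl_speedup(0.95, 10**6))   # never much above 20x, however many processors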
> Centuries is equally plausible. It's called a "bottleneck".
Hmm, it took mankind hundreds of thousands of years to go from "Oook!"
to Linux, and he calls a puny century a bottleneck... :-)
> > that there will be no
> > systems able to interfere with it - note that one of your original
> > assumptions was the existence of human-level AI; if AI can get faster
> > (not even smarter) by adding more processor power, "tame" AI could
> > keep the growing AI under control
>
> 1) This is an Asimov Law. That trick never works.
> 2) I bet your tame AI has to be smarter than whatever it's keeping
> under control. Halting problem...
As the other response indicates, the watchdogs do not need to be
asimovs to do their job. You can have loyal, barking and biting
programs defending your system without having to worry about them
starting to discuss abolition.
And yes, an intelligent invader will still have a problem with the
security measures: it will likely not have enough information about
them (if they are designed well), they have the advantage of playing a
home game, and they can protect their system by cutting the net
connection if things turn bad.
> > - that this SI is able to invent
> > anything it needs to (where does it get the skills?)
>
> If you dropped back a hundred thousand years, how long would it take you
> to out-invent hunters who had been using spears all their life? Skill
> is a poor substitute for smartness.
I wonder how well I could build all those waterwheels, metal smelting,
steam engines and Volta cells. Have you ever tried to recreate a
technology? And the interesting thing in this example is that in the
end it hinges not on me being a super-genius, but on me already
knowing things (and then needing to somehow implement them, which is
the hard part!).
It would be interesting to drop you off on an isolated island together
with a randomly selected but stupid survivalist. Would your superior
intellect bring you more food?
I would rather say smartness is a poor substitute for skill, which is
why we tend to rely on learned skills rather than problem solving for
most tasks we do.
> And if it's stumped, it can get the answers off the Internet, just like
> I do.
OK, here is a question: how do I design a good robot body? Assume no
prior knowledge beyond a common sense database about the world,
including no experience with being physical, no scientific education,
no engineering education.
> > why are you
> > assuming the AI is able to hack any system,
>
> There are bloody _humans_ who can hack any system.
*Any* system?
> > especially given the
> > presence of other AI?).
>
> In a supersaturated solution, the first crystal wins.
But in a solution filled with small crystals, none can grow.
> If the intelligence is roughly human-equivalent, then there will be
> specialties at which it excels and gaping blind spots. If the
> intelligence is far transhuman, it will still have specialties and blind
> spots, but not that we can perceive.
Why assume we can't see them? There doesn't seem to be any reason to
think so, one way or the other.
> So yes, I make that assumption:
> An SI will be able to outwit any human in all respects.
Why do you make this assumption? What evidence or arguments do you have?
> > As you can tell, I don't quite buy this scenario. To me, it sounds
> > more like a Hollywood meme.
>
> Not really. Hollywood is assuming that conflicts occur on a humanly
> understandable level - not to mention that the hero always wins.
You mean like in "The Lawnmower Man"?
> The forces involved in a Singularity are Vast. I don't know how the
> forces will work out, but I do predict that the result will be extreme
> (from our perspective), and not much influenced by our actions or by
> initial conditions. There won't be the kind of balance we evolved in.
> Too much positive feedback, not enough negative feedback.
I agree with the word Vast. But remember, positive feedback
*amplifies* small differences, which means that the result is
influenced by our every action - the problem is whether the process is
chaotic or just exponential. In the first case, we cannot predict
anything. In the latter case, we can try to aim in a direction and
hope things go right.
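A toy Python sketch of that distinction (the starting values and the
choice of the logistic map are mine, purely for illustration): under
plain exponential amplification two nearby starting points keep their
relative relationship, while under a chaotic map they quickly become
uncorrelated.

  def exponential(x, steps, rate=2.0):
      # Exponential amplification: differences grow but stay proportional.
      for _ in range(steps):
          x *= rate
      return x

  def logistic(x, steps, r=4.0):
      # Logistic map at r=4: a standard example of chaotic dynamics.
      for _ in range(steps):
          x = r * x * (1.0 - x)
      return x

  a, b = 0.100000, 0.100001                      # almost identical initial conditions
  print(exponential(a, 30), exponential(b, 30))  # same ratio as at the start
  print(logistic(a, 30), logistic(b, 30))        # effectively unrelated values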
Personally I prefer setting up the feedback loops instead. Why bet on
the outcome when you can try to write the rules?
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y