RE: The Dazzle Effect: Staring into the Singularity

From: Lee Corbin (lcorbin@tsoft.com)
Date: Sat Aug 18 2001 - 10:47:41 MDT


Eliezer wrote

>> It stands to reason that if either you or Eugene could accomplish this,
>> then so could many lesser mortals (although it might take us longer).
>> Could you provide a general outline of what you'd do, exactly, to
>> cause the computronium to "wake up"? I'm skeptical you see---it would
>> seem to me to require an incredible balancing act to keep the entity
>> on track.
>
> I didn't say it would stay on track. I said it'd wake up. Friendliness
> is *not* part of the spec on this one.

How could you be so certain that by "on track" I was referring to
friendliness? I certainly was not. By "on track", I was actually
referring to the difficulty of its processes remaining coherent. Is
this part of a general pattern of instantly disagreeing with a post
as a preface for further remarks, I wonder? It is *one* way, I suppose,
of establishing an authoritative tone. I was just asking a question.

> Leaving both morality and Friendliness aside,

Good! They're irrelevant here.

> and considering it purely from a technical standpoint, all that would
> really be needed is a partitioning into million-brainpower computing
> partitions, a genetic code that fills up those partitions with neural-
> network elements, and an arbitrarily complex competitive game in which
> moves are signalled by the outputs of those elements... Take an
> idiot-simple neural network the size of a planet as the starting point,
> mutate and recombine the genetic code randomly, and start playing with a
> trillion-sized population of megabrainpower entities (megabrainpower does
> *not* imply megamindpower, it refers to the number of computing elements)...
> I'd expect at least one superintelligence to be born before a million
> generations had passed - five thousand seconds, less than two hours.

Perhaps. This, together with your original remark that "If the
Moon were made of computronium... [it might wake up just] due to
self-organization of any noise in the circuitry", is a very strong
statement of the thesis that intelligent activity is an attractor.
Stephen Jay Gould is the great opponent of such contentions, practically
saying that progress is impossible, while Kauffman (and others, like
you) contend that it's practically inevitable. Doesn't the same
reasoning you are employing here also imply that galaxies wake up
sooner or later too?
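
Still, just to be sure I'm picturing the recipe correctly, here is the
sort of toy-scale loop I take you to be proposing. The tiny network
encoding, the Gaussian mutation, and the matching-pennies "game" below
are stand-ins of my own invention, nothing you actually specified, but
they put the four pieces (network spec, genetic spec, evolutionary
operator, game spec) in one place:

# Toy-scale stand-in for the scheme above: a population of genomes, each
# decoded into a tiny neural network, scored against another network in a
# competitive game, then selected, recombined, and mutated.  All sizes,
# rates, and the game itself are illustrative guesses, not your spec.
import math
import random

NET_INPUTS, NET_HIDDEN = 4, 8                  # "network spec" (tiny stand-in)
GENOME_LEN = NET_INPUTS * NET_HIDDEN + NET_HIDDEN
POP_SIZE, MUTATION_RATE = 200, 0.05            # nothing like a trillion, obviously

def random_genome():
    # "Genetic spec": a flat list of real-valued connection weights.
    return [random.gauss(0.0, 1.0) for _ in range(GENOME_LEN)]

def decode(genome, inputs):
    # Fill a partition with neural-network elements and produce one move.
    hidden = []
    for h in range(NET_HIDDEN):
        s = sum(genome[h * NET_INPUTS + i] * inputs[i] for i in range(NET_INPUTS))
        hidden.append(math.tanh(s))
    out = sum(genome[NET_INPUTS * NET_HIDDEN + h] * hidden[h] for h in range(NET_HIDDEN))
    return 1 if out > 0 else 0

def play_game(g1, g2, rounds=16):
    # "Game spec": a throwaway matching-pennies game in which each network's
    # move is signalled by its output, given the opponent's recent history.
    score1 = score2 = 0
    hist1, hist2 = [0, 1, 0, 1], [1, 0, 1, 0]
    for _ in range(rounds):
        m1 = decode(g1, hist2[-NET_INPUTS:])
        m2 = decode(g2, hist1[-NET_INPUTS:])
        score1 += (m1 == m2)                   # matcher wins on a match
        score2 += (m1 != m2)                   # mismatcher wins otherwise
        hist1.append(m1)
        hist2.append(m2)
    return score1, score2

def mutate(genome):
    # "Evolutionary operator", part 1: pointwise Gaussian mutation.
    return [w + random.gauss(0.0, 0.5) if random.random() < MUTATION_RATE else w
            for w in genome]

def crossover(g1, g2):
    # "Evolutionary operator", part 2: one-point recombination.
    cut = random.randrange(GENOME_LEN)
    return g1[:cut] + g2[cut:]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(100):                  # a far cry from a million
    random.shuffle(population)
    scored = []
    for a, b in zip(population[0::2], population[1::2]):
        sa, sb = play_game(a, b)
        scored += [(sa, a), (sb, b)]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    parents = [g for _, g in scored[:POP_SIZE // 2]]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

Scaled up to a trillion-sized population of megabrainpower partitions,
is that, in outline, all you have in mind?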

> The programming time would consist of writing the neural net spec, the
> genetic spec, the evolutionary operator, and the game spec. Efficiency
> would not be an issue and the only requirement for any degree of initial
> complexity would lie in the game. The network spec, and the genetic spec,
> and the evolutionary algorithm could all be extremely simple as long as
> the genetic spec and network spec were Turing complete.

Just want to make sure here: a sufficient condition for the architecture
to be "Turing complete", as you are using it, is that it can emulate a
Universal Turing Machine, i.e., that any function which can be
calculated by a Turing Machine could be programmed on it. This is
certainly the default assumption about all typical computers.
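
To illustrate the sense I intend: any architecture that can run the
little interpreter below over an arbitrary rule table (and an
effectively unbounded tape) is "Turing complete" as I'm using the term.
The bit-flipping rule table is just a throwaway example of mine.

# A minimal Turing-machine interpreter.  An architecture that can execute
# this loop for an arbitrary rule table, with an effectively unbounded
# tape, can compute anything a Turing Machine can.  The "flip" rule table
# is a made-up example: it inverts bits until it reads a blank.
from collections import defaultdict

def run_tm(rules, tape, state="start", blank="_", max_steps=10000):
    cells = defaultdict(lambda: blank, enumerate(tape))   # unbounded tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        new_state, write, move = rules[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
        state = new_state
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1)).strip(blank)

flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
print(run_tm(flip, "01101"))    # prints "10010"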

> Getting a non-lunar-sized piece of computronium to wake up is more
> complicated.

Why would that be, if you weren't in any particular hurry? Suppose we
set the deadline at a billion years---or any other limit small enough
that we are not simply waiting for thermal or quantum fluctuations
to accidentally create an SI---from what you've said, a few million CPUs
with high-bandwidth links would suffice. Do you then believe that with a
few weeks' work, you could get the software to implement a "genetic code
that fills up the partitions with neural-network elements, and an
arbitrarily complex competitive game in which moves are signalled by the
outputs of those elements", and that the whole thing would wake up
within a billion years?

Lee


