From: Billy Brown (bbrown@conemsco.com)
Date: Fri Mar 26 1999 - 06:49:49 MST
Eliezer S. Yudkowsky wrote:
> I guess the question everyone else has to ask is whether the possibility
> that late-term Powers are sensitive to the initial conditions is
> outweighed by the possibility of some first-stage transhuman running
> amuck. It's the latter possibility that concerns me with den
> Otter and Bryan Moss, or for that matter with the question of whether the
> seed Power should be a human or an AI.
So, remind me again, why exactly are we so worried about a human upload?
The last time I looked, our best theory of the human brain described it as a
huge mass of interconnected neural nets, with (possibly) some more
procedural software running in an emulation layer. If that's the case, a
lone uploaded human isn't likely to be capable of making any vast
improvements to his own mind. By the time he finishes his first primitive
neurohack, he'll have lots of uploaded company.
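To make the opacity point concrete, here's a toy sketch (my own
illustration, in Python; the "brain" is just random numbers, nothing
anatomical). Even in a trivially small net, the behavior is smeared across
the whole weight soup, so hand-editing any one synapse shifts the output in
no humanly interpretable direction:

import random

random.seed(0)

# Toy 'brain': one hidden layer of random weights standing in for a
# mass of interconnected neural nets. (Purely illustrative -- sizes
# and values are made up.)
W1 = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]  # 4 inputs -> 8 hidden
W2 = [random.gauss(0, 1) for _ in range(8)]                      # 8 hidden -> 1 output

def forward(x):
    # ReLU hidden layer followed by a linear readout.
    hidden = [max(0.0, sum(w * xi for w, xi in zip(col, x)))
              for col in zip(*W1)]
    return sum(w * h for w, h in zip(W2, hidden))

x = [1.0, -0.5, 0.25, 2.0]
print("before neurohack:", forward(x))

W1[2][5] += 0.3   # hand-edit a single 'synapse'
print("after neurohack: ", forward(x))
# The output moves, but there is no module boundary telling you what
# you just changed -- every weight participates in every behavior.

There's nothing in there to point a debugger at, which is the whole problem.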
I think a seed AI closely modeled on the human brain would face similar
problems. What gives Ellison-type architectures their potential for growth
is the presence of a coding domdule (a domain module specialized for
programming), coupled with the fact that the software has a rational
architecture that can be understood with a reasonable amount of thought.
Any system without that kind of internal simplicity is going to have a much
flatter enhancement curve (albeit still an exponential one).
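For what it's worth, the shape of those two curves falls out of even the
crudest toy model. Here's one (mine, in Python; the growth rates are
arbitrary): if the architecture is legible enough that getting smarter
speeds up the search for further improvements, gains compound on gains; if
each improvement has to be ground out of an opaque blob at a roughly fixed
rate, you get plain exponential growth with a small base.

def grow(gain, steps, feedback=False):
    # Toy model of recursive self-enhancement. 'gain' is the fractional
    # improvement found per unit of effort. With feedback=True, smarter
    # systems find improvements proportionally faster (the legible,
    # Ellison-style case); otherwise the rate stays fixed (the opaque
    # neural-blob case). All numbers are made up.
    level, history = 1.0, [1.0]
    for _ in range(steps):
        rate = gain * (level if feedback else 1.0)
        level *= 1.0 + rate
        history.append(level)
    return history

opaque  = grow(0.05, 20)                 # flat-ish exponential
legible = grow(0.05, 20, feedback=True)  # gains compound on gains
for t in (0, 5, 10, 15, 20):
    print(t, round(opaque[t], 2), round(legible[t], 2))

Both curves head upward, but after twenty steps the opaque system has
roughly 2.7x its starting capability while the legible one is near 10x and
accelerating. That's the difference I mean by "flatter."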
Billy Brown, MCSE+I
bbrown@conemsco.com