---"Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:
>
> Robin Hanson wrote:
> >
> > Eliezer S. Yudkowsky writes:
>
> > >You can't draw conclusions from one system to the other. The
> > >genes give rise to an algorithm that optimizes itself and then
> > >programs the brain according to genetically determined
> > >architectures ...
> >
> > But where *do* you draw your conclusions from, if not by analogy
> > with other intelligence growth processes? Saying that
> > "superintelligence is nothing like anything we've ever known, so my
> > superfast growth estimates are as well founded as any other" would
> > be a very weak argument. Do you have any stronger argument?
>
> Basically, "I designed the thing and this is how I think it will
> work and this is why." There aren't any self-enhancing intelligences
> in Nature, and the behavior produced by self-enhancement is
> qualitatively distinct. In short, this is not a time for analogic
> reasoning.
Excuse me for carrying on (I hope I'm not being a pain, pointless, or boring...), but I would say that, IMHO, the first Artificial Intelligence we create will mostly have the same characteristics as us: the same flaws, the same strengths, no magic wand. And self-enhancement is no exception; I don't see why it would produce anything other than geniuses with whatever add-ons can be imagined.
This could mean that a singularity doesn't have to happen; we might just get the usual exponential knowledge growth. And if we ever stumble upon a genuinely new architecture for intelligence (really new, not just adding or improving parts), that change might or might not be of great importance. But how can we know?
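(As an aside, here is the distinction I have in mind, as a rough numerical sketch of my own; the constants K0 and r are made up, and none of this comes from the thread. Plain exponential growth stays finite at every time, while a self-reinforcing hyperbolic law blows up at a definite finite time, which is the kind of curve people seem to mean by a "singularity".)

# Sketch: "ordinary" exponential knowledge growth vs. a true singularity.
# Exponential:  dK/dt = r * K     =>  K(t) = K0 * exp(r * t)   (finite for all t)
# Hyperbolic:   dK/dt = r * K**2  =>  K(t) = K0 / (1 - r * K0 * t),
#               which diverges at the finite time t* = 1 / (r * K0).
import math

K0 = 1.0   # made-up initial "knowledge" level
r = 0.5    # made-up growth-rate constant

def exponential(t):
    # Grows very large, but is never infinite at any finite t.
    return K0 * math.exp(r * t)

def hyperbolic(t):
    # Self-reinforcing growth: blows up at t* = 1 / (r * K0).
    blowup = 1.0 / (r * K0)
    return float("inf") if t >= blowup else K0 / (1.0 - r * K0 * t)

print("hyperbolic blow-up time t* =", 1.0 / (r * K0))
for t in (0.0, 1.0, 1.9, 1.99):
    print("t=%.2f  exponential=%.2f  hyperbolic=%.2f" % (t, exponential(t), hyperbolic(t)))

With these made-up numbers the exponential curve is still only around 2.7 at t=1.99, while the hyperbolic one has already passed 200 on its way to infinity at t=2. My point above is just that nothing forces real knowledge growth onto the second kind of curve.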
Manu.