RE: Maximizing results of efforts Re: Mainstreaming

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Apr 28 2001 - 22:25:34 MDT


> From this speculation, it would seem, then, that we need to hedge our
> bets and build pre-Singularity-level, but still very intelligent,
> computers to effect technological advances which would conceivably
> provide new solutions for energy and resource needs.

It may well wind up that we want to build computers with a kind of
"intelligence ceiling": computers that ~don't~ become vastly
superintelligent, precisely because they're more useful to us when their
intelligence is at a level where our problems are still interesting to them.

This is reminiscent of the situation in "A Fire Upon the Deep", in which not
all civilizations choose to transcend and become vastly superintelligent...

Or one can imagine "bodhisattva AIs" that become superintelligent and then
stupidify themselves so they can help us better.

When I was younger and even more foolish, I used to have a theory that I was
an all-knowing, all-powerful God who had intentionally blotted out most of
his mind and made himself into a mere mortal, just because he'd gotten tired
of being so damn powerful ;>

ben
