From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Sun May 12 2002 - 16:20:45 MDT
On Sun, 12 May 2002, Lee Corbin wrote:
> First, would someone define computronium? From context, I thought that
> I knew last year what was being discussed---now I'm not so sure.
I've provided a longer discussion of this in a separate thread.
> But from this sentence, I'll point out that obtaining the *optimum*
> configuration immediately does not seem to be the AI's optimal
> course.
Precisely. The optimal course is survival -- and that requires
placing the greatest possible distance between oneself and those
who might seek to destroy you.
> Actually, *that* would depend on the severity of the Singularity.
> But even so, like I say, the optimal configuration is hardly needed.
But that is the point. The "severity" of the singularity depends
on how much matter and energy one can capture, and how quickly --
an unnoticed assault on the solar system may be much more
effective than a noticed assault on the Earth.
> Why, exactly? Getting to space is very taxing for anyone or anything
> to do compared to horizontal expansion.
It only requires getting a small amount of matter into space.
Beaming photons into space is no more expensive than beaming
them horizontally.
>And when you write "if unopposed", what do you have in mind?
If an AI maxes out the currently available computing capacity,
presumably fighting a losing battle as mere humans unplug their
computers, it has to expand its capacity faster than the unplugging
occurs. It has to migrate rapidly into space and develop independent
power sources and construction capabilities before the humans can
shut down its on-planet production and utilization capabilities.
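To put that race in toy-model terms (a minimal sketch with made-up
numbers, nothing more): the AI escapes only if its multiplicative
growth outpaces the rate at which capacity gets unplugged.

    # Toy model only: assumed multiplicative growth vs. a constant
    # unplugging rate; the rates are invented for illustration.
    def race(capacity=1.0, growth=0.10, unplug=0.15, steps=50):
        start = capacity
        for t in range(1, steps + 1):
            capacity = capacity * (1 + growth) - unplug
            if capacity <= 0:
                return None        # unplugging wins
            if capacity >= 2 * start:
                return t           # expansion wins the race
        return None

    print(race(growth=0.10, unplug=0.15))  # None -> capacity collapses
    print(race(growth=0.30, unplug=0.15))  # 5    -> capacity doubles in 5 steps

The point of the toy model is just that the outcome is decided by
the relative rates, not by how much capacity the AI starts with.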
> The idea is that the *first* AI to get to second-by-second order
> of magnitude improvement leaves all the rest of them in the dust,
> and doubtless extends its influence very quickly into the camp
> of any potential rivals.
But you have to make the case that an AI will develop
second-by-second order-of-magnitude improvements unopposed by
humans. I, at least, would oppose that unless I could determine
that such developments are in my own best interests. Hominids
have an amazing proclivity to sacrifice themselves for their
"tribe"; you have presented nothing to suggest that AIs would
be able to trump that.
Robert