From: Chris Capel (pdf23ds@gmail.com)
Date: Fri Feb 03 2006 - 14:55:18 MST
On 2/3/06, Jeff Herrlich <jeff_herrlich@yahoo.com> wrote:
>
> As a fallback strategy, the first *apparently* friendly AGI should be
> duplicated as quickly as possible. Although the first AGI may appear
> friendly or benign, it may not actually be so (obviously), and may be
> patiently waiting until adequate power and control have been acquired. If it
> is not friendly and is concerned only with its own survival, the existence
> of other comparably powerful AGIs could somewhat alter the strategic field
> in favor of the survival of at least some humans.
I think you're assuming here that multiple AGIs would have some
likelihood of being at odds with each other. But I don't think this
would be the case if they were simply multiple copies of the same
basic design. It would take substantial differences in design for
their goal structures to diverge enough to make them adversarial.
Most likely, I think, they would simply merge, since they would share
the same goals anyway.
On the other hand, multiple AIs of different designs would be almost
certain to be at odds to some significant degree.
Chris Capel
-- "What is it like to be a bat? What is it like to bat a bee? What is it like to be a bee being batted? What is it like to be a batted bee?" -- The Mind's I (Hofstadter, Dennet)