From: Nick Hay (nickjhay@hotmail.com)
Date: Thu Aug 14 2003 - 04:16:10 MDT
Samantha Atkins wrote:
> On Wednesday 13 August 2003 07:52, Gordon Worley wrote:
> > Many AGI projects is, in my opinion, a bad idea. Each one is more than
> > another chance to create the Singularity. Each one is a chance for
> > existential disaster. Even a Friendly AI project has a significant
> > risk of negative outcome because Earth has no AI experts. Rather we
> > have a lot of smart people flopping around, some flopping in the right
> > direction more than others, hoping they'll hit the right thing. But no
> > one knows how to do it with great confidence. It could be that one day
> > 10 or 20 years from now the universe just doesn't wake up because it
> > was eaten during the night.
>
> The entire universe? Naw. Many AGI projects is a great idea precisely
> because we don't know which path is the most fruitful with least danger at
> this time. If humanity is facing almost certain disaster without an AGI
> and only with the right kind of AGI is the likelihood of survival/thriving
> high, then even risky possible paths are reasonable in light of the certainty
> of doom without AGI.
Risky paths are reasonable only if there are no knowable faults with the path.
Creating an AI without a concrete theory of Friendliness, perhaps because you
don't think it's necessary or possible to work out anything beforehand, is a
knowable fault. It is both necessary and possible to work out essential
things beforehand (e.g. identifying "silent death" scenarios, where the AI
you're experimenting with appears to pick up Friendliness perfectly, but
becomes unFriendly as soon as it's no longer dependent on humans). You can't
work out every detail, so you'll update and test your theories as evidence
from AI development comes in.
Creating an AI with the belief that no special or non-anthropomorphic efforts
are needed for Friendliness, perhaps assuming it'll be an emergent behaviour
of interaction between altruistic humans and the developing AI or that you
need only raise the AI 'child' right, is another knowable fault. There are a
bunch of these, since there are always more ways to go wrong than right.
An AI effort is only a necessary risk if it has no knowable faults.
The project must have a complete theory of Friendliness, for instance. If you
don't know exactly how your AI's going to be Friendly, it probably won't be,
so you shouldn't start coding until you do. Even then you have to be careful
to have a design that'll actually work out, which requires you to be
sufficiently rational and to make an effort to "debug" as many human
irrationalities and flaws as you can.
"AGI project" -> "Friendly AGI project" is not a trivial transformation. Most
AI projects I know of have not taken sufficent upfront effort towards
Friendliness, and are therefore "unFriendly AGI projects" (in the sense of
non-Friendly, not explictly evil) until they do. You have to have pretty
strong evidence that there is nothing that can be discovered upfront to not
take the conservative decision to work out as much Friendliness as possible
before starting.
Since an unFriendly AI is one of the top (if not the top) existential risks,
we're doomed both with and without AGI. For an AGI to have a good chance of
not destroying us, Friendliness is necessary. Ergo, Friendly AIs are better
than our default condition. By default an AGI is unFriendly: humane morality
is not something that simply emerges from code. If an AGI project hasn't taken
significant upfront measures to understand Friendliness, along with continuing
measures whilst developing the AI, it's not likely to be Friendly.
> It would be a stupid
> AI that would literally wipe out "everything" which would belie it being
> smart enough to grab and retain control of "everything".
The unFriendly AI wouldn't destroy literally everything, but would optimise
nearby matter to better fulfil its goal system. An unFriendly AI doesn't care
about humans, and so we get used as raw matter for computronium. It's this
kind of scenario that's the risk.
- Nick