From: Bryan Moss (bryan.moss@btinternet.com)
Date: Thu Dec 06 2001 - 16:07:44 MST
Gordon Worley wrote:
> > Logic, common sense, and actuarial reasoning should tell
> > us that *absolute* safety is an impossibility, and my
> > gut tells me that attempting to task some Power with
> > providing it is a recipe for disaster.
>
> We've already been down this road: anthropomorphic thinking.
>
> We cannot be 100% safe, but we'll try to get as damn close
> to it as possible and have escape routes in case all hell
> breaks loose.
A wild ride. Personally, I see it as: we're either safe or
potentially screwed. I don't know how Eliezer sees it because I
still haven't read that Friendly AI thing, but I think his view is
similar.

Basically, either some morality holds for all intelligences or it
does not. For some morality to hold for all intelligences, I think
the following must be true: (a) finding the optimal intelligence is
an intractable problem; and (b) comparing the optimality of one
intelligence to another is an intractable problem. If both of these
prove true, then one intelligence has no grounds to favour itself
over another (or vice versa); with no provable claim to
superiority, it has no basis for discarding our evolved morality,
and so its morality must be a superset of ours. In other words,
it's all sunshine and lollipops because we've got SIs[*] batting
for our team. If either one proves false, then the drive toward
optimality *might* result in us being screwed (where "screwed"
means our evolved morality is at odds with a general morality and
we might have to do things we don't "like").
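To make the shape of that argument explicit, here's a minimal
sketch in Lean. The proposition names are just labels of my own,
and the two bridging implications (h1, h2) are exactly the steps
I'm taking on faith rather than proving:

    -- Hypothetical labels for the pieces of the argument:
    --   FindOptIntractable    : (a) finding the optimal intelligence is intractable
    --   CompareOptIntractable : (b) comparing the optimality of two intelligences is intractable
    --   NoGroundsToFavour     : no intelligence can justify favouring itself over another
    --   MoralitySuperset      : an SI's morality is a superset of ours
    variable (FindOptIntractable CompareOptIntractable
              NoGroundsToFavour MoralitySuperset : Prop)

    -- Granting the two bridging steps h1 and h2, premises (a) and (b)
    -- together yield the "safe" conclusion.
    example
        (h1 : FindOptIntractable ∧ CompareOptIntractable → NoGroundsToFavour)
        (h2 : NoGroundsToFavour → MoralitySuperset)
        (ha : FindOptIntractable) (hb : CompareOptIntractable) :
        MoralitySuperset :=
      h2 (h1 ⟨ha, hb⟩)

The whole weight of the "safe" scenario sits on h1 and h2, not on
the premises themselves.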
At the moment I think the "safe" scenario is the most likely.
BM
[* Of course, if (a) and (b) hold, you might wonder how you can
have superintelligence. You can; you just can't be *provably*
superintelligent. So no certificates to hang on your wall.]