From: Richard Loosemore (rpwl@lightlink.com)
Date: Tue Aug 16 2005 - 23:58:30 MDT
Michael,
I have to restrict my response to only one point you made, because some
of the places you quoted me were situations where I was trying to
characterize and clarify a position that I was *opposing*, so my head is
now spinning trying to keep track of whether I am coming or going :-).
Anyhow, this is an important issue:
>>wait! why would it be so impoverished in its understanding of
>>motivation systems, that it just "believes its goal to do [x]" and
>>confuses this with the last word on what pushes its buttons? Would it
>>not have a much deeper understanding, and say "I feel this urge to
>>paperclipize, but I know it's just a quirk of my motivation system, so,
>>let's see, is this sensible? Do I have any other choices here?"
>
>
> No, you're still anthropomorphising. A paperclip maximiser would not see
> its goals as a 'quirk'. Everything it does is aimed at the goal of
> maximising paperclips. It is not an 'urge', it is the prime cause for
> every action and every inference the AI undertakes. 'Sensible' is not
> a meaningful concept either; maximising paperclips is 'sensible' by
> definition, and human concepts of sensibility are irrelevant when they
> don't affect paperclip yield. There would be no reason (by which I mean,
> no valid causal chain that could occur within the AI) to ever choose
> actions, including altering the goal system, that would fail to maximise
> the expected number of future paperclips.
>
> This is how expected utility works, and it can be quite chilling. Car
> companies failing to recall unsafe cars if the cost of the lawsuits is
> smaller than the cost of the recall is a tiny foretaste of the morally
> indifferent efficiency that EU-driven AGIs can deliver.
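[To make the quoted expected-utility point concrete, here is a minimal
Python sketch of the cost comparison the car-recall example describes.
It is not part of the original exchange, and every figure in it is
invented purely for illustration.]

    # Minimal expected-utility comparison: recall vs. leave the defect
    # in the field.  All numbers below are hypothetical.

    def expected_cost_no_recall(p_failure, n_cars, lawsuit_cost):
        # Expected payout if the defect is left in the field.
        return p_failure * n_cars * lawsuit_cost

    def choose_action(recall_cost, p_failure, n_cars, lawsuit_cost):
        # Pick whichever action minimises expected cost; nothing else
        # enters the decision.
        no_recall = expected_cost_no_recall(p_failure, n_cars, lawsuit_cost)
        return "recall" if recall_cost < no_recall else "do not recall"

    # A $50M recall vs. a 0.01% failure rate across 2M cars at $200k
    # per settled lawsuit: expected lawsuits cost $40M, so the cheaper
    # (and morally indifferent) choice is "do not recall".
    print(choose_action(recall_cost=50e6, p_failure=1e-4,
                        n_cars=2e6, lawsuit_cost=2e5))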
This hypothetical paperclip maximiser is both having its cake and eating it.
If a paperclip maximiser is not aware of such things as goals and
motivations, it is not smart enough to be relevant, for the following
sequence of reasons:
a) The Seed AI is going to be expected to know enough to bootstrap
itself up to higher levels of intelligence. If all it knows is
paperclip maximisation, it will be too stupid to do that.
b) A dangerously unstoppable paperclip maximiser will have to be built
by an AI: we are not smart enough, without help, to develop a runaway
paperclip maximiser that will escape detection by other humans on the
lookout for rogue AI projects. This is pre-Singularity stuff.
c) A successful Seed AI will bootstrap and then eliminate rival projects
quickly (except for case (d) below). After that, it will not allow
experiments such as the construction of superintelligent paperclip
maximisers.
d) If the Seed AI does not make the choice to be Friendly (on the side
of "convergence"; see parallel post in reply to Ben), then it will
allow rival projects, etc. etc. It All Ends In Tears. For reasons
discussed elsewhere, I think this extremely unlikely. This is the only
case that could generate a paperclip maximiser that did not have general
intelligence and awareness of such issues as motivation.
So, again: I am not anthropomorphising (accidentally attributing
human-like qualities where they don't belong), but making the specific
statement that a seed AI worth worrying about would be impossibly
crippled if it did not have awareness of such design issues.
I am making this statement, not on general philosophical grounds, but
on the basis of specific nuts-and-bolts issues about how to get a real
cognitive system working.
And here, the nuts-and-bolts issue I have been raising is: "What is the
role of a motivation mechanism in a cognitive system, and how will that
impact the design and behavior of the Seed AI?"