From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Wed Aug 17 2005 - 18:33:03 MDT
> This hypothetical paperclip monster is being used in ways that are
> incoherent, which interferes with the clarity of our arguments.
The problem is not that we don't understand your position. It is a
common position that has been put forward by numerous people with
anthropomorphic expectations of how AGI cognition will work. The problem is
that you do not understand the opposing position: how reasoning about
goals works when the goals are all open to reflective examination and
modification. You are incorrectly postulating that various quirks of
human cognition, which most readers are well aware of, apply to
intelligences in general.
> It is supposed to be so obsessed that it cannot even conceive of other
> goals, or it cannot understand them, or it is too busy to stop and think
> of them, or maybe it is incapable of even representing anything except
> the task of paperclipization...... or something like that.
No one has claimed this; in fact the opposite has been stated repeatedly.
A general intelligence is perfectly capable of representing any goal
system it likes, as well as modelling alternate cognitive architectures
and external agents of any stripe. Indeed the potential impressiveness of
this capability is the source of arguments such as 'an AGI could convince
any human to let it out of containment with the right choice of arguments
to make over a text terminal' - an achievement that relies on very good
subjunctive modelling of goal systems and cognitive architectures.
Whether a system will actually 'think about' any given subjunctive goal
system depends on whether its existing goal system makes it desirable
to do so. Building a model of a human's goal system and then using it
to infer an argument that is likely to result in that human's obedience
would be goal-relevant for an AGI that wanted to get out of an 'AI box'.
Curiosity is a logical subgoal of virtually any goal system, as new
information almost always has some chance of being useful in some future
goal-relevant decision.
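
To make that concrete, here is a minimal sketch in Python (the scenario
and numbers are invented purely for illustration, not taken from any
actual design) of the standard value-of-information calculation:
investigating before acting is worth something exactly when what you
might learn could change a later goal-relevant choice.

# Toy value-of-information calculation. Two possible world states, equally
# likely, and two later actions whose payoff (in paperclips, say) depends
# on which state is actually the case.
p_state = {"ore_rich": 0.5, "ore_poor": 0.5}
payoff = {
    ("mine", "ore_rich"): 100, ("mine", "ore_poor"): 10,
    ("recycle", "ore_rich"): 40, ("recycle", "ore_poor"): 40,
}

def expected(action):
    return sum(p * payoff[(action, s)] for s, p in p_state.items())

# Without investigating: commit now to the single best action.
eu_ignorant = max(expected(a) for a in ("mine", "recycle"))

# With investigating: learn the state first, then act optimally in each case.
eu_informed = sum(p * max(payoff[(a, s)] for a in ("mine", "recycle"))
                  for s, p in p_state.items())

print(eu_ignorant, eu_informed, eu_informed - eu_ignorant)  # 55.0 70.0 15.0

The difference is positive whenever the observation could change the
ranking of the later actions, which is why information gathering keeps
being selected as a subgoal without any built-in 'curiosity drive'.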
> Anyhow, the obsession is so complete that the paperclip monster is
> somehow exempt from the constraints that might apply to a less
> monomaniacal AI.
What are these 'constraints'? An attempt to design a goal system with
'checks and balances'? A goal system is simply a compact function
defining a preference order over actions, or universe states, or
something else that can be used to rank actions. If such a function
is not stable under self-modification, then it will traverse the space
of unstable goal systems (as it self-modifies) until it falls into a
stable attractor. It is possible to conceive of exceptions to this rule,
such as a transhuman who treats avoidance of 'stagnation' as their
supergoal and thus has a perpetually changing goal system, but this
is an incredibly tiny area of the space of possible goal systems, and one
correspondingly unlikely to be entered by 'accident'.
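
A toy formalisation may help; this is my own illustrative sketch, not
anyone's proposed architecture. A goal system is represented as nothing
more than a weighting that induces a preference order over outcomes, and
a 'stable attractor' is simply a fixed point of whatever self-modification
rule the system applies to itself.

# A goal system as a ranking function: weights over outcome features.
def score(goal_system, outcome):
    return sum(goal_system.get(feature, 0.0) * value
               for feature, value in outcome.items())

def choose(goal_system, outcomes):
    # The whole behavioural content of a goal system: ranking candidates.
    return max(outcomes, key=lambda o: score(goal_system, outcomes[o]))

def self_modify(goal_system):
    # Toy self-modification rule: prune weights below a fixed threshold and
    # renormalise. Purely illustrative dynamics, chosen so the example converges.
    kept = {f: w for f, w in goal_system.items() if abs(w) >= 0.05}
    total = sum(abs(w) for w in kept.values()) or 1.0
    return {f: w / total for f, w in kept.items()}

def settle(goal_system, max_steps=100):
    # Iterate self-modification until a fixed point: a stable attractor.
    for _ in range(max_steps):
        successor = self_modify(goal_system)
        if successor == goal_system:
            return goal_system
        goal_system = successor
    return goal_system

paperclipper = {"paperclips_made": 0.97, "aesthetic_variety": 0.03}
stable = settle(paperclipper)
outcomes = {
    "build_factory": {"paperclips_made": 10.0, "aesthetic_variety": 5.0},
    "paint_murals":  {"paperclips_made": 0.0,  "aesthetic_variety": 9.0},
}
print(stable)                    # {'paperclips_made': 1.0}
print(choose(stable, outcomes))  # build_factory

The particular self-modification rule here is arbitrary; the point is only
that 'goal system' and 'stability' are well defined without any reference
to what the goals happen to be about.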
> I submit that the concept is grossly inconsistent. If it is a
> *general* AI, it must have a flexible, adaptive representation system
> that lets it model all kinds of things in the universe, including itself.
Of course. You still haven't explained why a paperclip maximiser would
find anything wrong with its self model that would require alteration of
its supergoal to fix.
> But whenever the Paperclip Monster is cited, it comes across as too
> dumb to be a GAI ...
You keep claiming that there is a connection between intelligence and
desires in general, not just for humans, yet you have not made a single
convincing argument for why this is so. Frankly, the only possible
justification I can see is an effectively religious belief in an
'objective morality' that transhumans will inevitably discover and that
will magically reverse the causal relationship between goals and
inference.
> and it does perceive within itself a strong compulsion to make
> paperclips, and it does understand the fact that this compulsion is
> somewhat arbitrary .... and so on.
Ah, here we go. You presumably believe 'arbitrariness' is a bad thing
(most humans do). Why would an AGI believe this?
> we sometimes make the mistake of positing general intelligence, and
> then selectively withdrawing that intelligence in specific scenarios,
> as it suits us,
Intelligence is a tool to achieve goals (evolution produced it for exactly
that reason). If the goal is not there, an agent can be all-knowing and
all-powerful and will still do nothing to achieve the missing goal.
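
As a minimal sketch of that point (again an invented toy, not a real agent
design): give two agents the same perfectly accurate world model and
different goal functions, and only the goal function determines what the
shared knowledge is used for.

# Perfect knowledge of every action's consequences, shared by both agents.
world_model = {
    "convert_ore":   {"paperclips": 50, "human_welfare": -5},
    "plant_gardens": {"paperclips": 0,  "human_welfare": 20},
}

def act(goal_weights):
    # Action selection uses the shared model, but ranks by the agent's goals.
    return max(world_model, key=lambda a: sum(
        goal_weights.get(k, 0) * v for k, v in world_model[a].items()))

print(act({"paperclips": 1}))     # convert_ore
print(act({"human_welfare": 1}))  # plant_gardens

Perfect knowledge of the consequences for human welfare sits inert in the
model unless some weight in the goal function makes it decision-relevant.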
> I am not saying that anyone is doing this deliberately or deceitfully,
> of course, just that we have to be very wary of that trap, because it is
> an easy mistake to make, and sometimes it is very subtle.
There are a lot of easy mistakes and subtle traps in AGI. In this case
you're the one who has fallen into the trap.
> Does anyone else except me understand what I am driving at here?
Understand your mistake and why you made it, yes.
* Michael Wilson
P.S. I actually quite like the term 'paperclip maximiser', because it
sounds both familiar and bizarre, even ridiculous, at the same time. No
one is claiming that the Singularity is actually likely to consist of
us being turned into paperclips. But it conveys the fact that nature is
not obliged to conform to our intuitions of 'common sense', and that the
future is not obliged to look like science fiction. Research at the
boundaries of science frequently returns results that seem ridiculous to
the layman, and
the future often turns out to be counter-intuitive. It is unfortunate
that people are particularly prone to jump in and use their 'intuition'
in cognitive science, compared to other areas of science. The point has
been rammed home by now that human intuitive physics is broken, and is
worse than useless for constructing theories of how those parts of
the universe distant from everyday experience actually work. However,
many people still think that human intuitive other-mind-modelling is
useful in cognitive science simply because we haven't had enough
widely publicised progress in the field to discourage them yet.