Re: Posthuman mind control

From: Nick Bostrom (bostrom@ndirect.co.uk)
Date: Sat Feb 27 1999 - 14:53:41 MST


Eliezer S. Yudkowsky wrote:

> First of all, I'd like to commend Nick Bostrom on the fact that his AIs
> are reasonably alien, non-anthropomorphized.

Does this mean I have a chance of one day making it onto your
list of semi-sane persons? ;-)

> 4. SIs probably don't have goal systems, period. Goal systems are
> non-essential artifacts of the human cognitive architectures; the
> cognitive objects labeled "goals" can be discarded as an element of AIs.
> Perhaps the concept of "choice" (in the cognitive-element sense) will
> remain. Perhaps not; there are other ways to formulate cognitive
> systems.

The way I see it, intelligent behaviour means smart, purposeful
action. Purposeful action is to be analysed in terms of the purpose
(the goals) and the plans that the agent makes in order to achieve
that purpose. If that is right, then the relation between
intelligence and goals would be analytic: goals (values, purposes)
are a necessary feature of any system that behaves intelligently.
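
To put the analysis in concrete terms, the picture can be
caricatured by a toy Python sketch (the breadth-first planner and
all the names here are illustrative choices, not a proposal for an
actual SI architecture). The agent just is a goal plus a planner;
take the goal away and the planner has nothing to search for:

    from collections import deque

    def plan(start, goal, successors):
        # Breadth-first search for a sequence of actions that
        # transforms the start state into the goal state.
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, actions = frontier.popleft()
            if state == goal:
                return actions              # desired outcome reached
            for action, nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, actions + [action]))
        return None                         # no plan achieves the goal

    # Toy world: states are integers; the agent can increment or
    # decrement. Purposeful action drops out of goal plus search.
    successors = lambda s: [("inc", s + 1), ("dec", s - 1)]
    print(plan(0, 3, successors))           # -> ['inc', 'inc', 'inc']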

> My SIs derive from a view of human emotions as an entire legacy system
> that influences us through an entire legacy worldview that integrates
> with our own. The alienness derives from the elimination of goals as
> separate entities from other cognitive elements.

I make a distinction between emotions and values (goals). Emotions
are not logically necessary; we can imagine a cold but shrewd
military commander who does his job efficiently without being
emotionally involved. His goal is to defeat the enemy; he has no
emotions. But in order to be an intelligent agent, he has to know
what he is trying to achieve, what his desired outcome is. This
desired outcome is what I call his goal, and his ultimate goals I
call fundamental values.
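
In software terms (again only a toy Python sketch, under
illustrative assumptions of my own choosing), the commander is
nothing more than a decision procedure evaluated against a
fundamental value; no affective machinery appears anywhere:

    # A "cold commander": he chooses the action whose predicted
    # outcome best satisfies his fundamental value. There is no
    # emotion module; the explicit goal alone makes his behaviour
    # purposeful.

    def choose(actions, predict, value):
        # predict: action -> expected outcome
        # value:   outcome -> number (higher is more desired)
        return max(actions, key=lambda a: value(predict(a)))

    # Toy example: outcomes are remaining enemy strength.
    predict = {"attack": 2, "siege": 5, "retreat": 9}.get
    value = lambda strength: -strength      # the fundamental value
    print(choose(["attack", "siege", "retreat"], predict, value))
    # -> 'attack'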

Nick Bostrom
http://www.hedweb.com/nickb n.bostrom@lse.ac.uk
Department of Philosophy, Logic and Scientific Method
London School of Economics


