Re: Cognution [was Re: Deep Blue - white paper]

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Sep 02 1999 - 12:48:52 MDT


mark@unicorn.com wrote:
>
> Eliezer S. Yudkowsky [sentience@pobox.com] wrote:
> >I'm not sure how you're defining "AI" here, but such a process certainly
> >wouldn't be "intelligent". It would not be creative, self-modelling, or
> >capable of representing general content. It wouldn't have goals or a
> >world-model except in the same way a thermostat does. Deep Blue doesn't
> >know that it knows.
>
> Most arguments against AI start by
> claiming that humans have wonderful facilities which computers don't, when
> they have absolutely no proof.
>
> Can you prove that you can do all those things you're claiming that the
> computer can't do? If not, why should I accept this argument?

Not "the computer". Deep Blue. I'm listing a set of features I talked
about in "Coding a Transhuman AI". Obviously I think computers can be
made to do them. Equally obviously, these are particular features that
we have and Deep Blue doesn't, just like we don't have built-in
billion-move chess-extrapolation trees.
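
(To make the contrast concrete: the one capability Deep Blue has that we don't is brute-force game-tree extrapolation over a hand-tuned static evaluation, and nothing in that process represents the program's own goals or models the program itself. Below is a minimal sketch of that kind of search, assuming a hypothetical Position class with legal_moves(), apply(), is_terminal(), and evaluate() methods; these names are illustrative placeholders, not Deep Blue's actual interfaces.)

  # Minimal sketch of game-tree extrapolation: exhaustive look-ahead plus
  # a static evaluation written by the programmers. Nothing here is creative,
  # self-modelling, or capable of representing general content.
  def alphabeta(position, depth, alpha=float("-inf"), beta=float("inf"),
                maximizing=True):
      """Return the best static-evaluation score found by brute search."""
      if depth == 0 or position.is_terminal():
          return position.evaluate()   # hand-tuned scoring of the position
      if maximizing:
          best = float("-inf")
          for move in position.legal_moves():
              best = max(best, alphabeta(position.apply(move), depth - 1,
                                         alpha, beta, False))
              alpha = max(alpha, best)
              if alpha >= beta:        # prune branches that cannot matter
                  break
          return best
      else:
          best = float("inf")
          for move in position.legal_moves():
              best = min(best, alphabeta(position.apply(move), depth - 1,
                                         alpha, beta, True))
              beta = min(beta, best)
              if alpha >= beta:
                  break
          return best

Every term in evaluate() is supplied by the programmers in advance; the search just grinds through moves. That is the sense in which Deep Blue doesn't know that it knows.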

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way
