From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Sep 19 1999 - 15:42:46 MDT
John Clark wrote:
>
> Eliezer wrote on Thursday, September 16, 1999:
>
> >Eliezer S. Yudkowsky thinks that we have weird, mysterious,
> >ineffable stuff going on in our neurons
>
> I doubt you're right about that, but if you are, how can you hope to make
> an AI until you know exactly what that weird, mysterious, ineffable stuff is
> and how it works?
I don't think the ineffable stuff is tremendously _necessary_. I mean,
it's probably necessary at a low level for some computations, and at a
high level for qualia, but it isn't intrinsically necessary to intelligence
(and particularly not to mathematical ability; take that, Roger Penrose!).
I would guess that the vast majority of naturally evolved intelligence
in Universes with Socratean physics (our Universe being the "Socrates
Universe", you recall) is entirely computational; our own existence would
then be explained either by the Anthropic Principle or (gulp) interference.
We can't hope to upload humans until we've effed the ineffable, and
screwing the inscrutable is probably necessary for Power party tricks,
but a seed AI shouldn't require unmisting the mysterious.
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way