Re: Preventing AI Breakout [was Genetics, nanotechnology, and programming]

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Oct 25 1999 - 08:25:36 MDT


Anders Sandberg wrote:
>
> "Eliezer S. Yudkowsky" <sentience@pobox.com> writes:
>
> > > (a) whether an AI can discover it is running in a simulation?
> >
> > Almost certainly. If it really is smarter-than-human - say, twice as
> > smart as I am - then just the fact that it's running in a Turing
> > formalism should be enough for it to deduce that it's in a simulation.
>
> So if the Church-Turing thesis holds for the physical world, it is a
> simulation?

In one sense, yes. But (1) if the world I saw were Turing-computable, I
probably wouldn't see anything wrong with it - *I'm* not that smart. Or
perhaps I underestimate myself... but nonetheless, the only way I
learned how to reason about the subject was by trying to explain phenomena
that weren't Turing-computable, i.e. qualia. And (2) if *this* world is
Turing-computable, then obviously all my reasoning is wrong and I don't
know a damn thing about the subject.

> If the AI runs on a Game of Life automaton, why should it believe the
> world is embedded in another world? The simplest consistent
> explanation involves just the automaton.

But the explanation isn't complete. Where did the automaton come from?
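
For concreteness, a minimal sketch of such an automaton - standard
Conway rules over a sparse set of live cells; the Python here is mine
and purely illustrative:

    from collections import Counter

    def life_step(live):
        """One generation of Conway's Life; live is a set of (x, y)."""
        # Count live neighbors of every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Birth on exactly 3 neighbors; survival on 2 or 3.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

Those few lines fix every future state of the grid, yet they say
nothing about where the grid, or the rules themselves, came from.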

> > You really can't outwit something that's smarter than you are, no matter
> > how hard you try.
>
> Ever tried to rear children? Outwitting goes both ways.

Someone tried to rear me. Perhaps I flatter myself, but my experience
would tend to indicate that it only goes one way.

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way

