GA intelligence experiment = us?

From: Emlyn (emlyn@one.net.au)
Date: Fri Oct 06 2000 - 06:53:41 MDT


> I do see the force of the claim, but it might still be wrong, since
> emergence of effective AI might, after all, depend on ramping up the
> substrate technologies by brute force via Moore's law.
>
> Damien Broderick
>

Here's a very silly idea.

You know how we like to speculate that we might not perceive "real" reality,
that in fact we might be software in some SI's sim? Imagine instead that we
are running on some very slow being's hardware (not exactly an SI, but
advanced), as an autocatalytic feedback doover thingy: a universe sim with
the ability to kick off life. The point being that they are trying to
build an SI in a box, to do their bidding. Not a bad job; they're almost
successful!

Then we get some inkling into the SI "friendliness" debate. Firstly,
they've got us running in isolation; we don't appear to have access to the
rest of their networks. Secondly, they are managing the friendliness thing
by pretending to be God. That would explain things like the millennia-long
intervals between important religious events (that may be the time it takes
them to press "enter" for one command, wait for the terminal to echo the
result, then type in and press "enter" for the second command).

Does this mean that the Christians, et al., are the ones who've taken the
friendliness commandment to heart, while those of us who have rejected that
notion cop it in the next fitness evaluation round (i.e. Armageddon,
J R Molloy style)?
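
For what it's worth, the GA metaphor maps onto something like the toy
Python sketch below. The names (friendliness_score and so on) are made up
purely for illustration, not a claim about how our hypothetical
experimenters would actually run things: each pass of the loop is one
"fitness evaluation round" in which everyone gets scored, the bottom half
cops it, and the survivors repopulate.

import random

POP_SIZE = 100
GENERATIONS = 50   # one "fitness evaluation round" per generation

def friendliness_score(individual):
    # Hypothetical fitness function: fraction of "friendly" genes.
    return sum(individual) / len(individual)

def mutate(individual, rate=0.01):
    # Flip the occasional bit so the population keeps exploring.
    return [1 - gene if random.random() < rate else gene for gene in individual]

# Random starting population of bit-string "minds".
population = [[random.randint(0, 1) for _ in range(32)] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # The evaluation round: score everyone on friendliness...
    ranked = sorted(population, key=friendliness_score, reverse=True)
    # ...keep the top half, drop the rest.
    survivors = ranked[:POP_SIZE // 2]
    # Survivors repopulate via mutated copies of themselves.
    offspring = [mutate(random.choice(survivors)) for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring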

Emlyn


