From: John K Clark (johnkclark@fastmail.fm)
Date: Sat Jan 03 2009 - 22:52:42 MST
On Sat, 3 Jan 2009 "Petter Wingren-Rasmussen"
<petterwr@gmail.com> said:
> the whole point was to lessen the likelihood of a rogue AI
One man's rogue is another man's freedom fighter.
> destroying humankind, which is pretty far from enslaving it
An AI will be a very different sort of being from us, with exotic
motivations we can never hope to understand, and yet you expect it to
place our interests above its own. That is not a friend, that is a
slave.
> the potential "Friendly AI"
The correct term is Slave AI.
> will also be a lot more intelligent than the rogue
The “rogue” AI will notice that our threats of punishment and promises
of rewards have no power over it, but you figure that if those same
offers were made to a being even more intelligent and powerful than it
is, THEN they will work; in other words, you can’t scare the weak but
you can scare the powerful; you can’t bribe a poor man with a dime but
you can bribe a rich man with a dime. That makes no sense, none at all.
I don’t understand why it matters if the AI is a simulation or not. I
don’t understand why it’s important if the AI thinks it’s a simulation
or not. I don’t understand the difference between a simulated mind and a
non-simulated mind. I don’t even know what a non-simulated mind could
possibly mean.
John K Clark
-- John K Clark johnkclark@fastmail.fm -- http://www.fastmail.fm - A no graphics, no pop-ups email service