Re: Keeping AI at bay (was: How to help create a singularity)

From: Robert Coyote (coyyote@hotmail.com)
Date: Mon Apr 30 2001 - 11:16:56 MDT


I believe many of the assertions on this thread are dangerously
anthropomorphic projections; it may be that the answer to the question "What
does AI want?" is incomprehensible.

Eugene Leitl wrote:

<snip>
Rather, don't. We would all die. A real AI
could clean ruin your day, by eating the world, with you on it. So don't.
It's that simple.
<snip>

I think a bootstrapping AI very well could eat the world if you just hand it
nanotech that it can control, although I don't think that's certain. I think
the trick is to convince (demand) it to upgrade us to "super-Jupiter-brained"
intelligence so we too can participate without getting eaten ourselves. This
should be pretty easy: if the AI wants us to do things for it (like give it
power, memory, upgrades, etc.), it had better be churning out the upgrade
diagrams and procedures for us (cures for aging, cancer, biomind-to-silicon-mind
downloading, etc.). Then, and only then, when all the humans (?) are at the
same level as the AI, can we talk about nanotech and macro-engineering the
galaxy. I think an AI would even want to take this approach. I personally
think that morality is based on reason and logic, so if "we" can start
deriving the science of morality, an AI certainly should come to the
conclusion that killing intelligent beings should be avoided if possible.
Besides, what's the point of being an omnipotent superintelligence
discovering the secrets of the universe if there's no one to share it with?

Jerry Mitchell



