From: J. R. Molloy (jr@shasta.com)
Date: Mon Jul 30 2001 - 09:55:55 MDT
From: "Eliezer S. Yudkowsky" <sentience@pobox.com>
> It means, "greater than the smartest human or organization of humans".
That's what I should have remembered. Thanks for the reminder and
clarification. (You're a gentleman and a scholar and other flattering stuff.)
Recursively self-improving Artificial Intelligence tends to parallel
recursively self-improving complex adaptive systems found in nature, IOW,
living organisms (at first glance, anyway). This new life form, this RSIAI,
friendly AI, technological singularity harbinger, or whatever it decides to
call itself, would be able to accurately identify incorrect human thinking,
would it not? I ask because a list member has expressed fear that a system
which identifies incorrect thinking might find it among extropians. Wouldn't
that actually be a friendly thing to do? I mean, if extropians think incorrectly, a
friendly AI would be doing all sentient beings a big favor by removing that
incorrect thinking, right? It's not that I want to think with absolute
correctness. But in the end, it may be worthwhile to understand that thinking
may not be the best way to know reality. Much can be said for direct
experience, and actions speak louder than words.
©¿©¬