From: Ben Goertzel (ben@goertzel.org)
Date: Tue Dec 24 2002 - 05:55:38 MST
> I've brought up my complaint about this answer to the FP previously:
> Your answer does not explain why I will not, about 5 seconds after the
> Singularity, design/test/launch a self-replicating probe "manned" by
> some sort of mind (either sentient or not, depends on what I decide
> then) that will go off and scour the whole reachable Universe for
> sentients that need help. Note that I do not have to go with the probe,
> and it only takes a few seconds of realtime to accomplish, which probably
> isn't enough to completely destroy my livelihood in the post-Singularity
> rat race.
My hypothetical explanation is that, to your post-Singularity mind, this
probe-sending will not seem a worthwhile activity.
What we see as "needing help," a post-Singularity mind may see in a totally
different way.
I could conjecture that this post-Singularity mind might see "needing help"
situations as "part of the natural order of being". But that, too, would
impose too much human moral psychology on the "motivational structure" of a
being whose "inclinations", "desires", and "causes" are far beyond us.
Brian, I feel like you're asking us to explain why a post-Singularity
superintelligence won't do what you believe a well-intentioned human would do
in that situation. But it will be far from human!!
I have very little faith in the survival of human morality or
humanly-comprehensible motivations into the dramatically posthuman realm.
A post-Singularity "mind" will probably be neither friendly nor unfriendly,
neither helpful nor unhelpful, but will behave in ways that the minds of
remaining humans (if indeed they are able to observe its behaviors) will
find rather inscrutable, unless it "chooses" to have its behaviors appear to
humans according to some easily comprehensible pattern...
-- Ben G