From: king-yin yan (y.k.y@lycos.com)
Date: Tue Aug 12 2003 - 17:30:55 MDT
Hello Everyone,
I have been thinking about uploading for a long time, especially the
gradual uploading approach, which I like. But this approach raises one
issue tied to the "continuity of the soul" problem: people who opt for
a destructive upload would gain a "first-mover" advantage over those
who upload gradually. So I started to look at the Singularity to see
if it offers any solution.
(By the way, I have personally convinced myself that uploading will be
achievable around 20-30 years from now.)
I think Eliezer's vision of a single superAI is rather problematic;
a diversity of special-purpose AIs seems the more likely scenario.
The reasons are as follows:
1. The definition of Friendliness is a political issue. There is no such
thing as value-free, "objective" morality; a Friendly AI can only
*inherit* the moral system of its creators and shareholders (if it is
done right at all!), so the notion of the Friendly AI as a God-like moral
figure is unsound. The debate about Friendliness will itself start a
political war rather than solve all political problems.
2. One may argue that the superAI will be very powerful and everyone
would want to be on "our" side. But this too is an unlikely scenario,
because it does not resolve the problem of *who* will have more
power within our "party". Once again this would depend on the
definition of Friendliness and thus start a war. (I'm actually quite
a pacifist, by the way =))
3. Safety. It is better to diversify the risk by building several AIs,
so that if one goes awry the others (perhaps many others) will be
able to suppress it -- the fault-tolerance of distributed systems
(see the sketch after this list). It seems the best way is to let a
whole lot of people augment their intelligence via uploading or
*cyborganization*.
4. The superAI is unfathomable (hence unpredictable) to us, so
what is the difference between it and other techno-catastrophes?
5. Even if we have FAI, it probably will not stop some people from
uploading themselves destructively (they have that right). This will
still create inequality between uploaders and those who remain
flesh and blood.
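
To make the fault-tolerance argument in point 3 concrete, here is a
minimal sketch (my own illustration, not anything from the FAI
literature; the Agent class and the propose()/decide() names are
hypothetical) of majority voting among redundant agents, the classic
way distributed systems tolerate a faulty node:

```python
from collections import Counter

class Agent:
    """A toy stand-in for one independent AI; a faulty one proposes a bad action."""
    def __init__(self, name, faulty=False):
        self.name = name
        self.faulty = faulty

    def propose(self, task):
        # The rogue agent pushes a harmful plan; honest agents converge on a safe one.
        return "catastrophic_action" if self.faulty else f"safe_plan_for_{task}"

def decide(agents, task):
    """Act only on a strict-majority proposal, so one rogue cannot outvote n-1 honest peers."""
    votes = Counter(agent.propose(task) for agent in agents)
    action, count = votes.most_common(1)[0]
    if count <= len(agents) // 2:
        raise RuntimeError("no majority -- refuse to act")
    return action

agents = [Agent(f"ai{i}") for i in range(4)] + [Agent("rogue", faulty=True)]
print(decide(agents, "resource_allocation"))  # -> safe_plan_for_resource_allocation
```

The quorum idea is the whole point: a single rogue AI cannot carry a
strict majority against its independent peers, whereas a single superAI
has no peers to outvote it.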
Therefore the superAI scenario will not happen UNLESS there are
some compelling reasons to build it. The fear is that destructive
uploading will confer such a large first-mover advantage that
everyone would be compelled to follow suit immediately.
So the goal should be clear: create a technology that would allow
humans to be on par with uploads. I think the answer is the
"personal AI". The PAI starts off like a baby and shares the
user's experiences, like a dual existence. By the time
cyborganization is available, the cyborganization process
would amount to merging with one's personal AI.
Thus, the right to transhuman intelligence is distributed to all
those who can afford it. If you think about it, that is probably
the only sensible way to deal with the explosion of computational
power, i.e. to create a broadly distributed balance of power.
It does not matter that many people may not be techno-savvy
enough to use the AI -- that depends on user-friendliness, and
the best AI should be quite transparent and easy to use.
Well, this still sounds very vague and difficult, but I think it is
already more plausible than the superAI scenario.
One last problem that remains is poverty. I predict that
some people will be marginalized from cyborganization; this
seems rather inevitable. Who am I to save humanity? We have
to accept this, and the next best thing is to maximize availability
through education and perhaps redistribution of wealth, the
creation of more jobs, etc.
Hope to hear your comments =)
Yan King Yin