From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Mon Apr 29 2002 - 20:39:52 MDT
On Mon, 29 Apr 2002, Eliezer S. Yudkowsky wrote:
[snip]
> I and several others attempted to point out during the Q&A session that just
> because uploading is hard for humans does not make it infeasible for machine
> intelligences - which Gregory Stock concedes will exist and will even be
> interested in helping humans - but Kurzweil did not use this argument. Nor
> did Kurzweil use any of Finney's arguments. On the whole, I would have to
> say that Kurzweil lost the debate. Pretty sad.
I have a somewhat different slant on the event. I think Ray's
arguments were more concrete. However I think Greg's arguments
will appeal to a much larger audience. In general I would tend
to agree that Greg is better at debating than Ray. You could
generally tell what point Greg was trying to make while that
wasn't always true with Ray. [I asked Ray a question after his
earlier presentation regarding when he would personally choose to
spend his own money to purchase BDE to obtain the resources to
bootstrap an AI and he *completely* side-stepped answering it.
Ray's responses to Greg's points during that debate seemed in
many cases to have that flavor (at least to me).] Greg's technical
merit score was low IMO, but because his arguments match the biases
of the judges ("natural" humans) they carry significant weight.
Interestingly enough there was a side conversation between Greg
and Neil Jacobstein at my lunch table on Sunday -- though I was
only half listening to it, it suggested that Greg's arguments
(self-genome and children's-genome enhancements) do in fact
play well in Peoria.
This would make sense to me because the human instinct would
seem to be to trust an enhanced human before trusting
an "alien" AI (or even a human uploaded onto an advanced
computronium substrate). Generally speaking, unless you
know the "moral code" of entities significantly different
from yourself (from amoral AI's to Fugu fish), it seems to
be an unwise strategy to assume they are "friendly".
Nature has certainly not programmed us to assume that.
As a result, I believe that Ray does have his work cut
out for him to convince large numbers of people that
out-loading/migrating the human mind is the best
strategy. It might be the threat of real amoral AI's
that convinces significant numbers of people to do that
so as to level the playing field.
For me the entire discussion raised some very sticky
issues that were further emphasized by Neil Jacobstein
in several talks/discussions, essentially "How do we manage to
create 'safe' operating environments for sentient beings of
vastly different capabilities?" *Or* do we say, sentience is
overrated and what matters is what you produce (e.g. survival
of the fittest)?
But I've discussed these problems before and will not
attempt to cast my vote at this point. It would be
interesting to speculate on whether one could develop
a world/solar system where entities punched out their
own cards when it became apparent (to the "entity" in question)
that their "personal" vector was suboptimal.
One approach might be to run a number of "running scoreboards",
like those once used for the budget deficit or the world's
population, that ranked world leaders'/terrorists'
impact on the "net death toll" their policies had caused.
Displayed in Times Square, at the Clock of the Long Now, or in
several prominent newspapers, they might just have a significant effect.
(Remember -- the Net can trump information firewalls...)
Robert