From: John Stick (johnstick@worldnet.att.net)
Date: Tue Jun 12 2001 - 19:49:26 MDT
Durant,
Fair enough. My response was based in part on a sense that the language of
cult and coercion is emotionally loaded, and calls up specific instances of
current behavior that are unlikely to be precisely duplicated by transhuman
AIs. There are interesting issues about which uses of information to
persuade another transhuman intelligence would be fraudulent, antisocial, or
coercive, but the cult, conversion, and "eaten by a meme" tropes draw
attention to the extremes. Arguing that the extremes will become less
prevalent as intelligence increases is like trying to look beyond a
singularity: of dubious utility and accuracy -- but what else are we going to
talk about?
I don't buy the idea that intellectual coercion and its defense are an arms
race where each advance in intelligence helps each side equally. First,
there is the simple-minded argument that we start at zero intelligence, zero
freedom, so any advance has to help. Not conclusive, of course, because
plotting intelligence against freedom may describe a curve whose slope goes
negative at some point, but still suggestive. Mostly it is just a sense,
from running scenarios through my head, that each bit of defense will require
a much larger increase in offensive intelligence to knock down.
As for your inquiry about law and law books, I think law will help only
after the fact to ratify a consensus that develops within the AI community
on ethical behavior, that is, only after true AIs have been up and running a
long time (for them and us). Attempts to legislate ahead of technology on
subtle, fact-specific issues like this rarely occur, and are never
effective. If legislators were convinced of the danger of AIs converting
humans, a ban on AIs would be the most likely response. (And it would still
be ineffective.) Redefinitions of fraud, blackmail and so on to make them
applicable to AIs will come only after substantial experience with the
behavior of real AIs. Laws mostly follow social conventions rather than
precede them, and without that support they are easily swept away, unless you
have a very strong secret police. I can't think of any books in legal
theory that address this issue convincingly at length, but if you want I
could suggest something generally relevant. Unfortunately, the most incisive
writings are all tied to one side or another of the culture wars, and so are
one-sided.
As for meme talk (a complete side issue and I apologize for raising it),
although the evolutionary metaphor can be suggestive, I am generally
suspicious that it is often used to attempt to get to grand theory and
sweeping arguments without having to attend to the detailed mechanism. In
my amateur's understanding of current biology, the selfish gene theory meant
never having to say you are sorry for not having unraveled protein
chemistry, and is being superseded at the cutting edge because people are
beginning to find the ways proteins matter to evolutionary development. For
us, I think meme talk displaces more detailed discussions of how concepts
are adopted in rational discourse, when the whole point in the AI field is
to get to the details and implement them. But my argument here is unfair to
some uses of the meme meme, even if accurate about others.
John Stick