From: J. R. Molloy (jr@shasta.com)
Date: Fri Sep 29 2000 - 01:16:25 MDT
Eugene Leitl writes,
> Your reasoning is based on a slow, soft Singularity, where both the
> machines and humans converge, eventually resulting in an amalgamation,
> advancing slowly enough so that virtually everybody can follow. While
> it may happen that way, I don't think it to be likely. I would like to
> hear some convincing arguments as to why you think I'm mistaken.
I can't think you're mistaken, since this thread entails nothing that can
presently be tested. My current readings in robotics persuade me that the most
successful engineers will build machines that do useful things rather than
try to build humanoid robots just to prove it can be done. Similarly, I don't
find a fast technological singularity as useful as a convergence of augmented
humans with their Mind Children. IOW, the only usefulness of a fast TS would be
to escape or leave behind Homo sapiens (those nasty war mongers).
As you've probably found obvious, I favor the scenario of a friendly genetically
programmed AI that will help humans deal with the TS by allowing as many of us
as wish to become cyborgs to do so. Or perhaps I should say that I think
friendly AI will emerge in cyborgs at the same time as it emerges on the Net or
in laboratories. This
doesn't constitute a convincing argument, I know. Perhaps I'll work that out next
week when I return from my next meditation retreat. Meanwhile, I'd like to hear
some convincing arguments as to why you think an unfriendly AI (apparently the
only kind you give credence to) can be prevented from emerging.
--J. R.
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:31:16 MST