Re: Profiting on tragedy? (was Humour)

From: Michael Lorrey (retroman@tpk.net)
Date: Fri Dec 27 1996 - 15:21:11 MST


Lee Daniel Crocker wrote:
>
> > > I'm not sure I understand the connection between Extropianism and the Final
> > > Solution. They seem diametrically opposed to me. I grant you that some
> > > extropian concepts could be misconstrued or misrepresented.
> >
> > Superintelligent robots = Aryans, humans = Jews.
> > The only thing preventing this is sufficiently intelligent robots.
>
> Nonsense. What will prevent it is sufficiently /moral/ robots.
> Intelligence is not a sufficient condition for morality, and
> perhaps not even a necessary one.
>
> If one buys Rand's contention that normative philosophy (ethics,
> politics) can be rationally derived from objective reality, then we
> can assume that very intelligent robots will reason their way into
> benevolence toward humans. I, for one, am not convinced of Rand's
> claim in this regard, so I would wish to have explicit moral codes
> built into any intelligent technology that could not be overridden
> except by their human creators. If such intelligences could reason
> their way toward better moral codes, they would still have to
> convince us humans, with human reason, to build them.

However, suppose the lone genius who develops uploading technology is
some ridiculed, misunderstood individual who has been persecuted since
infancy. He/she uploads, becomes the God of the planet, and as stupider
and stupider individuals continue to piss him or her off, he/she
decides to end the pestering mosquitoes in his/her ear with a few
hundred megatons placed in the most efficient pattern to wipe out the
annoyance. It's the Lawnmower Man Syndrome, or Charlie's Law, to use
Eliezer's mythology.

You must admit, those most likely to upload first would also be the
individuals farthest up the bell curve already, and thus run the risk of
also harboring persecution complexes, etc.... Another of the Luddites'
fears: that all those geeks they picked on as kids will come back to
haunt them, rule them, or just screw up their credit records..... These
individuals you could not "build" morality into. You could provide
extensive psych help prior to uploading, but such problems are just as
likely to be amplified the farther up the bell curve the individual
transcends. Also, consider "accidental" transcendence: an unwatched
AI. Look at Vinge's story "True Names" in this area. The DON.MAC
program was a low-level AI kernel that was forgotten about. As it grew,
it continued to do the job for which it was programmed: protect the
network, even to the point of protecting it from its makers.

That such cautionary fiction exists illustrates that these fears are
present. As stated previously, even with my 160 IQ, a goodly portion of
humanity ticks me off on a daily basis. I could not imagine the amount
of patience a 1600 IQ AI, or IA upload, would have to practice on a
daily basis to keep from starting Armageddon just to end the annoyance.

-- 
TANSTAAFL!!!
			Michael Lorrey
---------------------------------------------------------
President			retroman@tpk.net
Northstar Technologies		Agent Lorrey@ThePentagon.com
Inventor of the Lorrey Drive	Silo_1013@ThePentagon.com
http://www.tpk.net/~retroman/
---------------------------------------------------------
Inventor, Webmaster, Ski Guide, Entrepreneur, Artist, 
Outdoorsman, Libertarian, Certified Genius.

