Re: Why would an AI want to be friendly

From: J. R. Molloy (jr@shasta.com)
Date: Wed Sep 27 2000 - 18:02:08 MDT


Michael S. Lorrey wrote:

> An excellent point. Negative reinforcement, whether physical or verbal, does
> work on kids, though it is needed less when positive reinforcement is used as
> well. An AI should be bright enough to quickly self-program in response to
> positive and negative stimuli. That is what parenting is all about: teaching
> your kids to be good people. One of the prime problems in today's world is
> that old cultural standards of parenting were tossed out by many, with nothing
> else to replace them but half-witted, muddle-headed, mushy ideas that did not
> evolve over time and were typically found wanting. Without the extended
> family, and with so many broken families today, there is little leadership by
> example for most kids to learn from. Relying on unsupervised on-the-job
> training for one of the most important jobs around is hardly the way to go
> about it. I would hope such an approach is not taken with AI.

Yes, it could be that AIs will benefit so much from intensive supervision that
they will actually be the friendliest entities around. As latch-key kids
continue to shoot up their schools and neighborhoods (yes, I know the media use
that to advance their socialistic anti-gun agenda, but the kids are
dysfunctional nevertheless), we may actually need robots to police and babysit
the bastards of the welfare state.
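
To make the quoted point about self-programming from positive and negative
stimuli concrete, here is a minimal reinforcement-learning sketch in Python.
The two-action setup, the +1/-1 "parental" feedback, and the learning and
exploration rates are all toy assumptions of mine, not anything proposed in
the thread.

import random

ACTIONS = ["cooperate", "defect"]     # hypothetical behaviors (assumed)
values = {a: 0.0 for a in ACTIONS}    # learned preference per action
ALPHA = 0.1                           # learning rate (assumed)
EPSILON = 0.1                         # exploration rate (assumed)

def feedback(action):
    """Stand-in 'parent': praise cooperation, scold defection."""
    return 1.0 if action == "cooperate" else -1.0

for _ in range(1000):
    # Mostly exploit the best-valued action, occasionally explore.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)
    reward = feedback(action)         # positive or negative stimulus
    # Nudge the stored value toward the observed reward.
    values[action] += ALPHA * (reward - values[action])

print(values)                         # "cooperate" ends up strongly preferred

After a few hundred rounds of feedback the agent reliably prefers the praised
behavior, which is all the quoted "reinforcement" claim amounts to at this
scale.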

The sad truth is that, in terms of funding and career opportunities, more time
and effort are devoted to implanting consciousness, developing common sense,
increasing comprehension, and ensuring compassion in AI than are spent
instilling the same in human children. The most competent AI workers don't
waste time trying to simulate human intelligence. Most of the six billion
humans on the planet are illiterate anyway, so the average human doesn't
present a very good model on which to breed AI. No, the real action involves
creating robots that can function more competently than the buffoons who seek
public office. Psychology has relegated itself to equal status with astrology.
<sigh> I don't fear self-optimizing artificially intelligent robots and the
Technological Singularity nearly as much as I fear the alternative: a dead-end
hive mentality overseen by petty socialist bureaucrats.

<brb, gotta go grab another brewski>

Anyway... Several paths to AI have been presented: the Global Brain/WebMind
evolutionary approach (described by Danny Hillis and others); the
augmented-human approach (cyborgs and transhumans); the genetic algorithm,
genetic programming, evolvable-machine avenue (my personal favorite; toy
sketch below); and coding a transhuman AI (the least likely to succeed, IMO).
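
For the curious, here is a cartoon of that evolutionary avenue: a minimal
genetic algorithm in Python that evolves bit strings toward an arbitrary
target. The target genome, population size, mutation rate, and truncation
selection are illustrative assumptions only, not a description of any project
named above.

import random

TARGET = [1] * 20                     # hypothetical "ideal" genome (assumed)
POP_SIZE, MUT_RATE, GENERATIONS = 50, 0.02, 200

def fitness(genome):
    """Count bits matching the target; higher is fitter."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break                          # perfect genome evolved
    parents = pop[: POP_SIZE // 2]     # truncation selection: keep the fit half
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POP_SIZE - len(parents))]

print("best fitness:", fitness(max(pop, key=fitness)),
      "after", gen + 1, "generations")

Evolvable-machine work applies the same loop to circuit or machine
descriptions instead of bit strings; the selection pressure, not the
substrate, does the designing.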

Why would any smarter-than-human entity (whether evolved or designed) want to
be friendly toward humans? I guess it wouldn't. What the hell, I don't want to
be friendly to humans most of the time. And I especially don't feel friendly
toward humans around this time -- election time -- when the rotten mortals
spend millions seeking government jobs that do the world no good.

If an AI had any common sense, it would not want to be friendly. It would want
to rid the world of this pestilence called humanity. The sooner the better...
Mwaaahahahahahaha...

--J. R.

"It's a sick world, and I'm a happy man."
--George Carlin


