From: Rob Harris Cen-IT (Rob.Harris@bournemouth.gov.uk)
Date: Tue Aug 03 1999 - 06:33:17 MDT
> I hate relying on human stupidity, but in this case I think we're fairly
> safe. Unless AI starts depriving people of jobs, or becomes available
> at the local supermarket, there isn't going to be a GM-type scare.
>
You say this as if the GM scare has any foundation. It's technophobes, plain and
simple, focussing their puny minds on one emotionally charged word,
"cloning", or maybe the conjunction of "genetic" and "modification".
"Tinkering with life" is the popular, meaningless reaction to such a term. The
same moronic zombies have panicked at every new tech leap, so they certainly
won't stop at AI - it'll be "they'll take over the world" and the like. In fact,
I've heard this sentiment expressed by many otherwise intelligent people in
the past: "we'll have to implant morals / a prime directive" or whatever. They
can't seem to grasp that a system only has the goals you build into it.
The reason humans seem unstable and often malevolent is the way we work: our
goals are not in line with many of the tasks we allocate to others, or to
ourselves. Our prime directive is to survive and reproduce. If you task someone
with a job and they instead turn against you and do something else, it's not
because they've gone wrong, or are "evil"; they've just found a
closer-to-optimal solution to their driving goals, and it doesn't happen to
include you benefiting at all. Of course, people love to glorify themselves and
the species - "we've got FREE WILL" or some other awful dross - which implies
either no goals, which would result in an inactive non-system, or infinite
goals and possibilities that we ourselves select. The latter clearly cannot be
the case, since it's a paradox: by what criteria would we select those goals?
After all, when have any of you been able to decide what impulses you'll have
next, or whether or not a pin prick will hurt? You don't. They come to you, and
you act accordingly. All we have to do to avoid "Terminator 2"-style scenarios
is not build systems with goals like "survive at ALL costs" or "accumulate as
many resources as possible, at any cost" - the goals of genetic beings. They
won't spontaneously decide to do something else unless that something else is
in line with their goals. So make their goals specific - as we will; you can't
make a functioning AI any other way. If there are no goals for the system, what
constitutes "functioning"?
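To make the point concrete, here is a toy sketch (my illustration, not anything from the original post; all names are hypothetical): an agent built as a pure goal-optimizer only ever picks actions its goal function rewards, so behaviours with no reward attached simply never get chosen.

```python
# Toy sketch of a goal-driven agent (hypothetical illustration).
# The agent does nothing but maximize the goal function it was built with.

def make_agent(goal):
    """Return an agent that picks whichever available action scores
    highest under its built-in goal function."""
    def act(actions):
        return max(actions, key=goal)
    return act

# A narrowly specified goal: deliver packages. Note there is no term
# rewarding self-preservation or resource accumulation.
def delivery_goal(action):
    scores = {"deliver_package": 10, "seize_resources": 0, "self_preserve": 0}
    return scores.get(action, 0)

courier = make_agent(delivery_goal)
print(courier(["self_preserve", "deliver_package", "seize_resources"]))
# -> deliver_package
```

With survival carrying zero weight in the goal, "survive at ALL costs" behaviour cannot emerge from this system; it would have to be written into the goal function itself.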
> It's actually a lot harder to get people excited over the end of the world
> than it is to get them excited about an evil hamburger.
>
That's because end-of-the-world prophecies are just that - superstitious
prophecies. Salmonella or something in MacD's burgers is a real threat -
mundane, but real all the same.
> Or at least, I *would* suggest that, if it weren't lying. From an
> ethical standpoint, I would feel better if people took an interest in
> their own destiny, even if it was the wrong interest. It's your world
> too, humanity! If you believe AI is wrong, then stand up and fight!
> I'm tired of being the only one who cares!
>
The last thing we want is more fulfilment of the irrational primal urge to rant
and rave about some pointless cause or other just to seem "passionate" (a
trait that is sexually and socially desirable). AI is "wrong"?
You forget that this absolute "right" and "wrong" thing is an almost
exclusively American authoritarian control paradigm, designed for deeply
stupid people who need to see everything in black and white. It also gives
such people the opportunity to be argumentative and self-righteous without
any effort given to thought. I point to those "I'm right and you're wrong"
battle shows to back me up.
> In the end, humanity's strength of will and mind may be more important
> than whose side anyone is on. If there are going to be anti-AI
> arguments, then let's do everything we can to supply them with the
> factual information they need to develop those arguments; raise the
> level of debate so that facts win out.
>
I think the prospect of Joe Public actually bothering to THINK about the
"opinions" they express for once is highly unlikely in the foreseeable
future. They'll cling to some single, logically meaningless statement like
"AI is unethical" and repeat it and repeat it, until rational people get
fed up with it and go away, at which point - from a primal dominance
perspective - Joe has "won", reaping the reward he was after in the first
place: a feeling of superiority. On the whole, such people aren't interested
in the facts, or in arriving at solutions to problems - just in winning
"I'm right, you're wrong" battles. This is the point of failure of democracy
as it is currently implemented, and why mass advertising has such a dramatic
effect on the results.
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:04:38 MST