RE: fluffy funny or hungry beaver?

From: Lee Corbin (lcorbin@tsoft.com)
Date: Sat Jun 08 2002 - 09:17:38 MDT


Someone wrote

> > >defining "friendliness" for other people is as insidious and pompous and
> > >dangerous as defining "beauty" or "art".

My two cents: when an AI does take over some part, large or small,
of the Earth's surface, it will have an agenda. I hope that part
of its agenda is something close to what we would call friendly
behavior. Therefore, I am unoffended by suggestions that we
deliberately attempt to insert niceness or friendliness into such
creatures at the outset. (Our survival may depend on it.) Accordingly,
I applaud the efforts of people like Eliezer to define a sort of
Friendliness for his project.

Eugen wrote

> There's absolutely nothing wrong with bottom-up enforcement of consensus
> rules. It's our traditional modus operandi, debugged in millennia.
>
> However (There Is Another System; Gort, Klaatu Barada Nikto) you're
> heading for ethical whitewater, as soon as a very small group codifies
> whatever they think is consensus at the time into a runaway AI seed, and
> thus asserts its enforcement via a despot proxy.

Listen, someone somewhere will do this. Look who's being anthropomorphic
now with "ethical whitewater". It strikes me as pretty foolish for
people (not you) to stand and rail against the wind, protesting the
inhumanity and unkindness of nature. Yes, it would be very NICE
if the universe were kind, and if we could count upon some mysterious
process to infuse visiting aliens with kindness and goodness. But that's
silly.

Some small group somewhere will codify something, and it may not
by any means be the kind of "consensus" that you write about above.
I fear that the successful group of AI builders will get there first
precisely by omitting all the "unnecessary" touchy-feely stuff, the
very stuff that would, as a by-product, save my skin. They'll be first
simply because they concentrate on getting their AI to take over,
period. So we must *encourage*, not *discourage*, AI groups that
build something nice into the base of their machines.

> (For the sake of argument, never the other two and more probable outcomes
> from the experiment: catatonic and Blight).

Yes, maybe so. But we can at least try for a different outcome.

Lee


