From: Zenarchy (jr@shasta.com)
Date: Sun Dec 06 1998 - 20:31:48 MST
Eliezer S. Yudkowsky wrote:
Nick asked:
>> How to control an SI? Well, I think it *might* be possible through
>> programming the right values into the SIs,
Eliezer replied:
>We should program the AI to seek out *correct* answers, not a particular
>set of answers.
>
>> but let's not go into that
>> now.
>
>Let's. Please. Now.
Yes, it comes to that, doesn't it? If the builders can't control their SI,
then it has no sane value for them. Yet they rush to build it anyway. Why?
Because their own intellect convinces them of the merit of intelligence per
se.
The concern is that humans will relinquish intellectual supremacy to
computers -- artificial super-brains, SIs with intellectual power far beyond
anything we can now comprehend -- or that through genetic engineering humans
will grow SIs to order, or that the two will combine, so that super-grown
organic brains plug into super-computers and computer super-chips get
implanted into human brains.
Would the SI take humanity on a quantum leap of consciousness, cutting our
deadly connections with the whole ugly history of the past, or would it too
get caught in humanity's idiotic conditionings and memetic attractors?
The question does not seem serious enough for the man in the street to stop
and consider. First of all, as many on this list know, this singularity
shall occur; arguments to the contrary deserve no attention. Since we can't
avoid it, no need exists to try. The social order already has
intelligence well under control. (Einstein left an estate worth about
$30,000, but a has-been horse opera star like Reagan got the White House.)
The SI, with intelligence a million times that of Einstein, can produce
better science than ever before.
The real danger for humanity lies in the inability to control ignorance and
stupidity, not in a lack of control of intelligence. Stupid people feel
threatened by super-intelligence. Smart and savvy people welcome the
solutions provided by super-accurate machine thinking. Now we can relax and
spend our time meditating.
The SI cannot start any war on religious, political, romantic, or
territorial grounds. Those things appeal to far less intelligent brains.
(btw, with an SI in charge, we won't need a Mussolini to make the trains run
on time. <g>) Who fears the SI? The churches, the politicians, the anti-sex
leagues, the power brokers, and their minions. Big government in particular
has much to lose if automation replaces bloated bureaus. For my own part, I
consider robots nicer people than politicos, theologues, and matriarchs.
They seem like the nicest people you can find, and they never tire, and they
never retire. So now all smart people can take a break and meditate. No more
armies, no more wars -- life consists of carnivals and cabarets.
The SI reminds me of Hymie Goldberg, who answers a newspaper classified ad
that says, "Opportunity of a lifetime!" He is given an address and finds
himself face to face with old man Finkelstein.
"What I'm looking for," explains old man Fink, "is somebody to do all my
worrying for me. Your job will be to shoulder all my cares."
"That's quite a job," says Hymie. "how much do I get paid?"
"You will get one hundred fifty thousand dollars a year," says old man Fink,
"to make every worry of mine your own."
"Okay," says Hymie, "when do I get paid?"
"Aha!" says Fink. "That's your first worry."
When humanity creates super-intelligence, it can enjoy super-sanity.
As for morality, just feed all the world's laws into the SI, let it combine
them into an average, and presto! the one-size-fits-all universal human
ethos, free with every Big Mac.
-zen