From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jun 26 2002 - 15:03:53 MDT
James Higgins wrote:
> At 03:59 PM 6/26/2002 -0400, Eliezer S. Yudkowsky wrote:
>
>> James,
>>
>> Do you believe that a committee of experts could be assembled to
>> successfully build an AI? Or even to successfully judge which new AI
>> theories are most likely to succeed?
>
> Do I believe a committee could successfully build an AI? Maybe. But I
> don't think it would be a good idea to do it that way.
>
>> If not, why would they be able to do it for Friendly AI?
>
> I never said they could, or should, DESIGN anything. Simply approve
> designs.
Approving designs requires the ability to understand them. Do you think a
committee could *approve* a working design for AI, if it had to pick, out
of all the proposals presented, the one with the best chance of mere
success? Why, then, should a committee do any better at approving a
Friendliness design?
I think that handing something to a committee imposes an upper limit on the
intelligence of the resulting decisions. Committees can be smart, but they
cannot be geniuses. If Friendly AI requires genius, then turning over the
problem to a committee guarantees failure, just as it would for the problem
of AI itself.
> Zoning committees don't build anything, but they are important to
> maintain order in a metropolitan area. I believe a Singularity Committee
> (or whatever it should be called - I'd like to avoid the term
> "committee")
Yes, I'm sure that avoiding the name "committee" will completely prevent
committee organizational dynamics from operating.
> would be a very useful asset to the human race. Although I can see where
> it could easily be seen as a detriment to willful, single-minded, solo
> players or even like-minded teams.
>
> Individual accomplishment is irrelevant in light of the Singularity;
> successful completion of the project in the safest manner possible is the
> only rational goal.
I agree.
> I believe your goal, Eliezer, is to make the Singularity as friendly and
> safe as possible, is it not? If so, you should welcome such a committee
> as a way to ensure that the safest and most friendly design is the one
> launched.
I should NOT welcome such a committee unless I believe the ACTUAL EFFECT of
such a committee will be to ensure that the safest and most friendly design
is launched. Friendly AI design is not as complex as AI design, but it is
still the second most complicated thing I have ever encountered in my life.
I would trust someone who built an AI to make it Friendly. I would not
trust a committee to even understand what the real nature of the problem
was. I would trust it to spend its whole time debating various versions of
Asimov Laws, never moving on the issue of structural Friendliness.
> You should under no circumstances fear such a committee since, if you
> really are destined to engineer the Singularity, the committee would
> certainly concede that your design was the best when it was presented to
> them.
That's outright silly. One, I don't think that destiny exists in our
universe, so I can't have one. Two, there is no reason why a committee
would be capable of picking the best design when the problem is inherently
more complex than the intelligence of a committee permits. The committee
will pick out a set of Asimov Laws designed by Marvin Minsky in accordance
with currently faddish AI principles. If the committee has to build its
own AI, it will pick a faddish design and fail. I will not provide an AI
for a committee that is not smart enough to build one itself.
The fact that, at this moment, it takes (I think) substantially more
intelligence to *build* an AI at all than to make that AI Friendly is one
of the few advantages that humanity has in this - although Moore's Law is
slowly but steadily eroding that advantage. I have not proposed, and never
will propose, that SIAI (a 501(c)(3) nonprofit) be given supervisory
capacity over the Friendliness efforts of other AI projects, regardless of
whether future circumstances make this a plausible outcome.
It is terribly dangerous to take away the job of Friendly AI from whoever
was smart enough to crack the basic nature of intelligence! Friendly AI is
not as complex as AI, but it is still the second hardest problem I have ever
encountered. A committee is not up to that!
>> Sometimes committees are not very smart. I fear them.
>
> I don't like committees either, and I can understand why you, in
> particular, would fear such a committee. It would take away your ability
> to single-handedly, permanently alter the fate of the human race. Which
> is exactly why such a committee would be a good thing. Such decisions are
> too big for any one person to make.
Then they're too big for N people to make and should be passed on to a
Friendly SI or other transhuman.
> If you were on trial for murder and up for the death penalty, would you
> want one single person to decide your fate or a jury of people?
I'd study the past statistics and behavior of single judges and juries in
death-penalty murder cases before coming to a decision.
Friendly AI is a test of intelligence. If the minimum intelligence to crack
Friendly AI is more than the maximum intelligence of a committee, turning
the problem over to a committee guarantees a loss.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence