From: Brian Atkins (brian@posthuman.com)
Date: Sun Jan 23 2005 - 15:00:43 MST
Ben Goertzel wrote:
>
>>-----Original Message-----
>>From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org]On Behalf Of Brian
>>Atkins
>>Sent: Sunday, January 23, 2005 1:35 PM
>>To: sl4@sl4.org
>>Subject: Re: When does it pay to play (lottery)?
>>
>>
>>David Clark wrote:
>>
>>>The chance that *any* implementation of AI will
>>>*take off* in the near future is absolutely zero. We haven't the
>>>foggiest clue exactly what it will take to make a human level AI, let
>>>alone an AI capable of doing serious harm to the human race.
>>
>>Your two statements of certainty are in direct conflict with each other,
>>so I don't see how you can hold both at the same time.
>
>
> I don't see that his attitude is self-contradictory.
>
> I would say "The chance that anyone will build a ladder to Andromeda
> tomorrow is effectively zero. We don't really have any clear idea what
> technology it would take to build such a ladder." No contradiction there.
> I don't know how to do it, but I know that none of our current methods come
> close to sufficing.
>
Exactly - you know because you know (or can fairly easily find out) the
actual structural requirements such a ladder would have to meet in order
to succeed. AGI knowledge is nowhere near that point yet, so your
analogy is a very poor one in my opinion.

David was making statements about a technology whose exact "structural
requirements" for working are still unknown. For a ladder we can easily
use existing physics knowledge to determine whether a design will hold
up or collapse, and we can be relatively certain that we lack the
techniques to make it work; for a given AGI design there are no rules of
comparable precision and predictive strength.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/