From: Brian Atkins (brian@posthuman.com)
Date: Wed Aug 17 2005 - 12:50:03 MDT
Richard Loosemore wrote:
> Brian,
>
> I am going to address your larger issue in a more general post, but I
> have to point out one thing, for clarification:
>
> Brian Atkins wrote:
>
>> Richard Loosemore wrote:
>>
>>> If you assume that it only has the not-very-introspective human-level
>>> understanding of its motivation, then this is anthropomorphism,
>>> surely? (It's a bit of a turnabout, for sure, since anthropomorphism
>>> usually means accidentally assuming too much intelligence in an
>>> inanimate object, whereas here we got caught assuming too little in a
>>> superintelligence!)
>>
>>
>>
>> Here you are incorrect because virtually everyone on this list assumes
>> as a given that a superintelligence will indeed have full access to,
>> and likely full understanding of, its own "mind code".
>
>
> Misunderstanding: My argument was that Peter implicitly assumed the AI
> would not understand itself.
>
> I wasn't, of course, making that claim myself.
>
I realize that you don't claim that; my comment was directed at the assumption you put in Peter's mouth. In my opinion you misinterpreted Peter's intent, although Peter can pipe up if I'm wrong.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/