From: Thomas McCabe (pphysics141@gmail.com)
Date: Sun Nov 25 2007 - 17:45:07 MST
On Nov 25, 2007 7:22 PM, Harry Chesley <chesley@acm.org> wrote:
> I don't buy the argument that, because you've thought about it before
> and decided that my point is wrong, it necessarily is wrong. I have read
> some of the literature, though probably much less than you have. Until
> the list moderator tells me otherwise, I will continue to post when I
> have something I think is worth sharing, regardless of whether it
> matches your preconceived ideas. (Oh, shoot, now you've gone and made me
> get condescending too. I hate it when that happens!)
I have nothing against you posting, but please *read* before you post.
If you disagree with everything you read and then post about it, at
least we can have a useful discussion.
> As to your response below, it is very long and rambling. It would be
> easier to refute if it were more concisely stated. The gist seems to be
> that we would not intentionally design an anthropomorphic system, nor
> would one arise spontaneously. I disagree, for a bunch of reasons.
>
> First, anthropomorphism is not an all-or-nothing phenomenon. It means
> seeing ourselves in our AIs. Certainly if we're intelligent and they are
> as well, we will see parts of ourselves. This seems axiomatic.
We are so used to interacting with a certain type of intelligence
(Homo sapiens sapiens) that we would be shocked by the alienness of a
generally intelligent AI. Look at how shocked we are by *each other*
when we violate cultural norms. And we're all 99.9% identical; we all
share the same brain architecture. See
http://www.depaul.edu/~mfiddler/hyphen/humunivers.htm for a list of
things that all humans have in common and that the vast majority of AIs do *not*.
> Second, we may intentionally give AIs portions of our personalities, and
> may later realize that that was a mistake.
How is this going to happen? Magic? Osmosis? None of our other
computer programs just wake up one day and start displaying parts of a
human personality; why would an AGI?
> Third, we don't understand intelligence well enough to know what
> anthropomorphic aspects may be specific to human evolution and what is
> unavoidable or difficult to avoid in a GAI.
We can name a long list of things that are definitely anthropomorphic,
because they only arise out of specific selection pressures. Love and
mating for one thing. Tribal political structures for another.
> Fourth, there are many ways to create a GAI. If we do it by simulating a
> human brain on a computer, it will most certainly be anthropomorphic. Duh.
Brain simulations and uploads are another matter; I'm talking about
built-from-scratch, human-designed AGIs.
- Tom