From: Thomas McCabe (pphysics141@gmail.com)
Date: Sun Nov 25 2007 - 19:33:56 MST
On Nov 25, 2007 8:43 PM, Harry Chesley <chesley@acm.org> wrote:
> Thomas McCabe wrote:
> > I have nothing against you posting, but please *read* before you
> > post. If you disagree with everything you read, and then post about
> > it, at least we can have a useful discussion.
>
> I would be curious to hear what you consider the prerequisite reading
> material to be. (I don't mean that at all facetiously. I really would
> like to know.)
>
> But note that not everyone on this list has the same goal or background
> as you do. My own interest is in building AIs. Theoretical speculation
> on things like fundamental limits of intelligence and provably friendly
> AIs is interesting but often pretty irrelevant. It's sort of as if I
> were trying to build a crystal wireless set and you're talking about
> Feynman diagrams. They may be entertaining, and they're certainly
> related in some way, but they don't help me build the radio. Similarly,
> this list has been an occasionally interesting diversion, but I don't
> want to spend inordinate amounts of time reading tangential material.
If you want to build a generally intelligent AI, it had darn well
better be Friendly, or Very Bad Things (tm) are going to happen.
> > We are so used to interacting with a certain type of intelligence
> > (Homo sapiens sapiens) that we would be shocked by the alienness of
> > a generally intelligent AI. Look at how shocked we are by *each
> > other* when we violate cultural norms. And we're all 99.9% identical;
> > we all share the same brain architecture. See
> > http://www.depaul.edu/~mfiddler/hyphen/humunivers.htm for a list of
> > things that we have in common and that the vast majority of AIs do *not*.
>
> Very true. And one reason we may intentionally build anthropomorphic AIs.
*Why* would anyone build an anthropomorphic AI? It would be a huge
amount of extra work, for no palpable gain, and at great risk to the
planet.
> > How is this going to happen? Magic? Osmosis? None of our other
> > computer programs just wake up one day and start displaying parts of
> > a human personality; why would an AGI?
>
> It'll happen by design, of course. You don't think we can program a
> human-like personality into an AI?
No. We'll either realize how useless it is and not try, or try and
fail. Anyone with the intelligence and determination to implement a
human-like personality that is stable under recursive
self-improvement also has the intelligence and determination to
realize why it is not a good idea.
> For example, some companies are building companion robots for the
> elderly that very intentionally have personality, and that encourage the
> formation of long-term emotional relationships with their owners.
These aren't AGIs, thank Eru. If they were, we wouldn't be here.
> > We can name a long list of things that are definitely
> > anthropomorphic, because they only arise out of specific selection
> > pressures. Love and mating for one thing. Tribal political structures
> > for another.
>
> I don't have your confidence that I know what is and isn't inherent. For
> example, I'm not sure that a group of interacting GAIs would not
> logically employ a system very much like tribal politics.
Where would such a system come from? Without strong selection
pressure, *why* would such a system arise? If you put a bunch of
humans together, they'll start politicking. We're human, and so when
we imagine *any* intelligent entities, we imagine politicking. This is
not how it actually works.
> (Agoric
> systems come to mind.) Or even love.
Again, love evolved because of strong *selection pressures* towards
efficient reproduction.
> As I understand it, love evolved
> because it allows two parties to trust each other beyond the initial
> exchange,
No, no, no. Love is not a generic "thingy that allows two
intelligences to trust each other". Love is a complex functional
adaptation, created by billions of years of evolution on organic
beings. An AGI can trust another AGI, in the sense that the other
AGI's statements are assigned high confidence in its Bayesian
probability network. This is not "love". If you don't believe me, go
read the cognitive science literature on love; I don't know it in
detail, but we know love is *far* more complex than that.
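
To make that sense of "trust" concrete, here's a rough sketch
(Python, with made-up reliability numbers, so treat it as an
illustration only) of what "assigning high confidence to another
agent's statements" amounts to: a single Bayesian update, nothing
more.

# Sketch: "trust" as a Bayesian update on another agent's report.
# The reliability numbers below are invented for illustration.

def update_on_report(prior, p_report_if_true, p_report_if_false):
    """Posterior P(claim) after the other agent asserts it (Bayes' rule)."""
    evidence = p_report_if_true * prior + p_report_if_false * (1 - prior)
    return p_report_if_true * prior / evidence

prior = 0.10                                     # P(claim) before hearing anything
trusted = update_on_report(prior, 0.95, 0.05)    # highly reliable agent
untrusted = update_on_report(prior, 0.55, 0.45)  # barely better than chance

print(round(trusted, 3))    # ~0.679: the report moves the estimate a lot
print(round(untrusted, 3))  # ~0.120: the report barely moves it

That's the entire mechanism: a number goes up or down. There's no
attachment, no jealousy, no pair-bonding machinery anywhere in it.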
> which is important for some mutually beneficial contracts
> which would otherwise be unworkable. A similar arrangement might make
> sense in a GAI. If you feel that's impossible because machines can't
> feel, then we have another area of disagreement as I don't see any
> reason they should not if we can.
There is a huge, huge difference between "feeling" and Homo
economicus-type interaction. One sure as heck does not imply the
other.
> > Brain simulations and uploads are another thing, I'm talking about
> > built-from-scratch, human-designed AGIs.
>
> You may have been talking only about built-from-scratch GAIs, but we
> weren't. Dare I tell you to read the previous posts before replying?
>
>
I've read all the posts I've responded to, but this thread alone has
dozens of posts; there's no real point to reading them all.
- Tom