From: Richard Loosemore (rpwl@lightlink.com)
Date: Tue Nov 29 2005 - 18:13:55 MST
The short summary of my responses (laid out in detail below) is that you
have only repeated your assertion that a very smart AGI would
"obviously" be able to convince us to do anything it wanted. You have
given no reason to believe this, other than personally declaring the
contrary idea to be rejected.
I repeat: why should extreme smartness confer extreme persuasiveness?
This is not even slightly obvious.
Richard Loosemore
Robin Lee Powell wrote:
> On Tue, Nov 29, 2005 at 11:53:57AM -0500, Richard Loosemore wrote:
>
>>Robin Lee Powell wrote:
>>
>>>On Tue, Nov 29, 2005 at 07:08:13AM +0000, H C wrote:
>>>
>>>
>>>>It's not so ridiculous as it sounds.
>>>>
>>>>For example, provide an AGI with some sort of virtual
>>>>environment, in which it is indirectly capable of action.
>>>>
>>>>Its direct actions would be confined to a text-only direct action
>>>>area (imagine its only direct actions being typing letters on a
>>>>keyboard, such as in a text editor).
>>>
>>>Oh god, not again.
>>
>>I am going to address your points out of order.
>>
>>
>>>Quick tip #3: Search the archives/google for "ai box".
>>
>>Myself, I am one of those people who do know about that previous
>>discussion. If there is a succinct answer to my question below, one
>>that was clearly outlined in the previous discussion, would you be
>>able to summarize it for us? Many thanks.
>
>
> The succinct answer is "Someone only marginally smarter than most
> humans appears to be able to pretty consistently convince them to
> let the AI out. The capabilities of something *MUCH* smarter than
> most humans should be assumed to be much greater.".
I can't understand what you are saying here. Who is the "someone" you
are referring to, who is convincing "them" to let "the" AI out?
You say that the capabilities of something much smarter than most humans
should be assumed to be much greater. That's like saying that because
we are very much smarter than calculators, we should be assumed to be
better at calculating sine functions to fifteen decimal places than they
are. There is no logic in this whatsoever.
>>>Quick tip #1: if it's *smarter than you*, it can convince you of
>>>*anything it wants*.
>>
>>I recently heard the depressing story of a British/Canadian
>>worker out in Saudi Arabia, who was falsely accused of planting
>>bombs that killed other British workers. He was tortured for
>>three years by Saudi intelligence officers. My question is: he
>>was probably smarter than his torturers.
>
>
> Really? In what sense? For what definition of "smarter"? How do
> you know?
I think you understand the rhetorical point I was making here.
>>He *could* have been very much smarter than them. Why did he not
>>convince them to do anything that he wanted? How much higher
>>would his IQ have to have been for him to have convinced them to
>>set him free?
>
>
> Wow. Who said anything about IQ? In fact, I suspect you'll find
> that IQ above a certain point is *inversely* correlated with being
> able to convince people of things. People with high IQ tend to have
> crappy social intelligence.
>
You make more unsupported assertions out of the blue, this time about
correlations between IQ and the ability to convince people, and about
the "crappy social intelligence" of people with high IQ.
I was trying to get at a serious point, but the point is being evaded, I
think.
>>More generally, could you explain why you might consider it beyond
>>question that persuasiveness is an approximately monotonic
>>function of intelligence? That more smartness always means more
>>persuasiveness?
>>
>>Is it not possible that persuasiveness might flatten out after a
>>while?
>
>
> It's certainly *possible*, but you and I seem to be talking about
> different things when we say "smarter" in this context. You seem to
> be talking about smarter in the way that, say, Eliezer is smarter
> than me. I'm talking about smarter in the way that I am smarter
> than a worm, or a tree, or a rock.
You are referring to extreme values of smartness. I am not making any
assumptions about how extreme the continuum of smartness might be; I am
asking you why you assert certain facts about those extreme values.
> I reject pretty much categorically that a being smart enough to hold
> my entire mental state in its head could not convince me of anything
> it likes. Further, I reject that anything much *less* smart than
> that is of any real existential threat.
>
So, when we focus down on the question I asked, your answer seems to be
that you simply "reject" the possibility that you might be mistaken?
You simply assert that if an AGI could hold your entire mental state in
its head (what on earth does that mean, anyway?), it would obviously be
able to convince you of anything it wanted. Why? There is no reasoning
behind this assertion.