From: Perry E. Metzger (perry@piermont.com)
Date: Sun Nov 30 2003 - 15:56:01 MST
Brian Atkins <brian@posthuman.com> writes:
> Perry E. Metzger wrote:
>> I suspect (I'm sorry to say) that assuring Friendliness is
>> impossible, both on a formal level (see Rice's Theorem) and on a
>> practical level (see informal points made by folks like Vinge on the
>> impossibility of understanding and thus controlling that which is
>> vastly smarter than you are.) I may be wrong, of course, but it
>> doesn't look very good to me.
>
> I think you have some misconceptions... First off, the concept isn't
> to provide perfect 100% assurance. No one is claiming that they can do
> so. Although that would be great, in practice or even on paper it
> isn't doable... we must settle for a more practical "best shot".
For a number of reasons I'm not sure a "best shot" is very likely to
succeed, either. However, that's a long and involved argument.
> Secondly, as you say, "controlling" something like this is an
> impossibility. Which is why we have never talked about attempting such
> a thing. You have to build something that will be ok on its own, as it
> outgrows our intelligence level, or else don't build it.
What you're talking about is attempting to construct something with
internal controls that will prevent it from becoming
"unFriendly". Those internal controls are necessarily devised by
creatures less intelligent than the one being constructed (they are,
after all, devised by ordinary humans).
I'm not sure people have any good way of coping with such problems
except proof techniques, and unfortunately proof techniques fail here:
Rice's Theorem says no general procedure can decide any non-trivial
property of program behavior, and "will remain Friendly" is exactly
such a property.
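To make that concrete, here is a minimal sketch of the standard
reduction, in Python-flavored code. The names run, known_friendly and
is_friendly are my own placeholders -- an interpreter, a program we
all agree counts as Friendly, and the supposed total Friendliness
decider -- not anyone's actual proposal:

    def halts(machine, tape, is_friendly, run, known_friendly):
        # Build a program whose behavior depends on whether 'machine'
        # halts on 'tape'.
        def stitched(x):
            run(machine, tape)        # loops forever iff machine never halts on tape
            return known_friendly(x)  # otherwise behaves exactly like a Friendly program
        # stitched is Friendly precisely when machine halts on tape,
        # so a working is_friendly would decide the halting problem.
        return is_friendly(stitched)

Since the halting problem is undecidable, no such total is_friendly
can exist for arbitrary programs; at best one can verify restricted
classes of programs, which is a much weaker guarantee.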
But again, perhaps this is an argument for another day.
>> (I realize that I've just violated the religion many people here on
>> this list subscribe to, but I have no respect for religion.)
[...]
> So... I don't see any sacred cows to slaughter. Everyone here realizes
> (or should realize) that this is still a very new and very unfinished
> area of AI research. There are no guarantees it will ultimately pan
> out.
Not everyone has that degree of realism here, I think. There are, in
particular, those who speak of bringing on the singularity in much
the same way that some religions speak of bringing the
messiah. Perhaps I'm wrong about how many have that mindset -- who
knows. It is in any case my (fallible) observation. Take it with or
without a grain of salt, depending on your restrictions on dietary
sodium.
>> Keep in mind that there are likely intelligent creatures out there,
>> created without regard to "Friendliness theory", that whatever you
>> create is going to have to survive against. Someday, they'll encounter
>> each other. I'd prefer that my successors not be wiped out at first
>> glance in such an encounter, which likely requires that such designs
>> need to be stupendous badasses (to use the Neal Stephenson term).
>> Again, though, I'm probably violating the local religion in saying
>> that.
>
> Nope, this idea has been brought up at least one time previously
> (couple years back I guess). Specifically, the idea that for some
> reason a FAI is limited or unable to win a conflict with an external
> UFAI. I don't see any reason why this would be so, and I don't recall
> anyone being able to make a convincing argument for it last time
> around, but feel free...
I'm merely noting that a creature without any constraints on its
behavior might have certain operational advantages over one that has
such constraints, and that one which evolved in a dangerous
environment might be better equipped for danger than one unused to
such trouble -- though you are correct that these might not be real
problems. It is hard to say.
-- Perry E. Metzger perry@piermont.com