From: Samantha Atkins (samantha@objectent.com)
Date: Sun Aug 25 2002 - 23:38:08 MDT
Mitch Howe wrote:
> Samantha Atkins wrote:
>
>>I am not sure I agree that Friendliness "is referenced from
>>human morality". Please say more.
>
> Ok. The issue, as I see it, is this. Since it seems unlikely that there is
> any sort of cosmic Truth that human minds are tuned to when deciding what
> constitutes a morally "good" goal, human minds must be making these
> judgments as a consequence of their particular design.
But is that at all relevant? Of course they are judging as a
consequence of their particular design, but this is not at all
the same as the implied claim that their particular design
determines their specific judgment. Nor does it necessarily
follow that their judgment is incorrect, or that it would not
be arrived at by minds of a different design and/or greater
ability.
>
> Without some universally objective meaning of Friendliness etched into the
> fabric of the universe, we can not expect that an SI would ever find such a
> thing.
I don't know what you mean by "etched into the fabric of the
universe". A reasonable argument can be made that interacting
sentients will eventually come to value some type of
Friendliness, as it generally may be found to maximize how many
of their goals can be satisfied with minimal risk.
> A Friendly SI would thus have to find Friendliness in the minds of
> its beholders. I think this is why Eliezer has repeatedly emphasized that a
> Friendly SI would be Friendly for the same reasons an altruistic human is
> friendly.
But you seem to be assuming that the reasons are limited to
humans. I don't believe they are. I believe they generalize
(roughly) to all sentients. If they do not, our goose is
probably cooked.
> A Friendly AI is morally comparable to the uploaded version of a
> *human* of unsurpassed Friendliness.
I don't parse the meaning of this statement. I don't know what
"morally comparable" consists of. If you are implying it has
copied human morals, then I disagree. If you are implying it
happens to agree with some human morals we consider important,
then I agree with that.
> We would not expect or even want a Friendly AI to be merely a human
> brain manifested in silicon, but at the
> very least it would have to have a thorough understanding of whatever it is
> in the human wiring concept that makes conclusions about what is and is not
> Friendly. It would have to have some sort of "process X" module, as it
> were.
>
I can tentatively agree that to be Friendly to a creature of
type A I need to understand the needs and nature of such a
creature.
> But this "process X" remains a human modus operandi, and should not be
> mistaken for an immutable law of the cosmos. So Friendliness of the kind we
This is your assumption, which I disagree with.
> are pursuing is really Human Friendliness. If the generalized template of a
> Friendly AI were set loose on an alien world among species who had no
> process X, but rather some totally unrelated process Z, it is a process Z
> module that this alien Friendly AI would develop to compute Friendliness.
> Therefore, the mature Friendly AI from human space might disagree violently
> with the Friendly AI from Kzin space -- on account of the very different
> modules they use to calculate what Friendliness is.
>
This does not necessarily follow either. Sentient beings may
always disagree even with (using your limited model) the same
process X running in both. After all, humans do it all the time.
> I don't see that we have any choice but to reference an AI's Friendliness to
> our own perceptions of it. There is no cosmic handbook of ethics or
> extraterrestrial species handy to consult. And I doubt we could bring
> ourselves to sign on with any of these other references anyway if these
> suggested that humans were a smelly blight on the universe.
>
I agree that from our perspective the AI will only be Friendly
if it acts in accord with our ideas of Friendliness, at least
regarding not wiping humans out.
> One could argue that any intelligent species would roughly share our process
> X, making this issue irrelevant. I don't think this is completely wishful
> thinking since I suspect that any species intelligent enough to worry about
> would at least have to share the most fundamental heuristics; after all,
> what we recognize as logic seems to govern the natural universe, and it
> should be difficult to evolve very far in the natural universe using
> heuristics that totally ignore this logic.
>
Here we come closer together.
> But on the other hand, many concepts that we esteem as admirable human
> virtues have obvious foundations in our ancestral environment -- foundations
> that other intelligences might not share. For example, an intelligence that
> evolved from the beginning as a singleton planetary hive-mind would have
> little need to develop concepts of "personal freedom" or "altruism", since
> these are social concepts that require multiple minds to have any meaning.
>
But, for us here and now, pre-Singularity, the primary concern
with Friendliness *is* and must be from our perspective. That,
however, does not mean that our perspective is the same as that
of the SI. Its understanding of Friendliness might be a lot
deeper. But I doubt its understanding would include our doom. I
have more sympathy for the view that the SI might consider us
all quite mad and in need of radical and immediate
psycho-therapeutic reconstruction. But if it understands enough
of human psychology to actually judge this, I doubt very much
that it would fail to understand how grossly unfriendly such an
act would be.
- samantha