From: Lee Corbin (lcorbin@tsoft.com)
Date: Mon Apr 29 2002 - 19:28:08 MDT
Samantha writes
> So why would it not turn out that any interacting sentient
> beings would find it beneficial to develop and practice ethical
> behavior?
There are two reasons. First, depending upon conditions, it
may or may not turn out that (what we would call) ethical
behavior is profitable or contributes to survival. We
know that such behavior varies somewhat from culture to
culture, and greatly from species to species.
Second, the "interacting sentient beings" that you refer to
might require thousands of generations, as our ancestors
did, to be selected for a certain kind of behavior. I
think it's a mistake to believe that moral behavior
can be achieved by intense cerebration. On the contrary,
feelings seem to be a primary component of "nice" behavior,
and those feelings, in all the cases we know of, had to be
selected for by evolution.
> We are talking about an entity that will be able to
> recapitulate all human thinking including ethics to
> date in vanishingly little time. It will ultimately
> decide for itself what ethics to practice.
But as I say, ethics stems very little from thinking. For
the most part, ethical systems are rationalizations
that come after the fact, after the behavior. That's why
it's doubly amusing to hear some Libertarians try to explain
away their kindness toward strangers they'll never meet again,
or other evidence of altruistic behavior, by rationalizing
that "really" they did it for a selfish purpose.
> I don't think that it "taking over" is itself nice depending on
> exactly what is meant. You cannot fix behavior in the internal
> structure of a self-evolving, super-intelligent being. The very
> attempt would be unethical.
As Dr. Logic's example IMO showed, and as you yourself have often
agreed, "taking over" (as in, say, the current Middle East situation)
is almost the only ethical thing to do.
Also, I do indeed think that it would be possible to fix behavior
in the internal structure of a self-evolving, super-intelligent
being. Asimov made his Laws fairly convincing, and Eliezer's
Friendliness, it seems to me, might be built in at a fundamental
level of an evolving being in the same way.
> I believe that the long range or long view always favors
> cooperation. As we increase in capabilities and in practical
> intelligence and in real abundance I find it much less likely
> that war would seem like a good alternative. I didn't say no
> defense if attacked though.
I think that you are right. War (or defection) almost always
carries with it the destruction of wealth. Especially in
rapidly advancing and growing systems, cooperation results in
more rapid progress and greater wealth. But this is only a
generality, after all; we must keep an open mind in order to
anticipate and then deal with the exceptions.
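To put the wealth point in game-theoretic terms, here is a minimal
sketch in Python (the payoff numbers are my own illustrative
assumptions, following the usual Prisoner's Dilemma pattern):
repeated cooperation compounds joint wealth, while mutual defection
destroys most of it.

    # Minimal iterated Prisoner's Dilemma sketch (illustrative payoffs).
    # Mutual cooperation pays 3 each; mutual defection pays 1 each;
    # a lone defector gets 5 while the cooperator gets 0.
    PAYOFFS = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }

    def joint_wealth(move_a, move_b, rounds=100):
        """Total wealth two fixed strategies create over repeated play."""
        a, b = PAYOFFS[(move_a, move_b)]
        return (a + b) * rounds

    print(joint_wealth("C", "C"))  # 600: cooperation maximizes joint wealth
    print(joint_wealth("D", "D"))  # 200: mutual defection destroys most of it

One could, of course, construct payoffs under which defection creates
more total wealth; that is exactly the sort of exception an open mind
would have to anticipate.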
Lee