From: Russell Blackford (RussellBlackford@bigpond.com)
Date: Sun Sep 09 2001 - 16:50:18 MDT
JR said
>From an evolutionary psychology standpoint, human values evolve from
>human needs, which are tied to biological needs.
Which is a pretty good answer to the question I asked. My own answer would
probably be fairly similar, though I'd distinguish (as you might, too)
various kinds of needs. For example, my needs as a particular biological
organism might differ from what was needed by my ancestors for inclusive
fitness in the evolutionary environment.
JR added:
>I suspect you have your own opinions about where values come from.
>Regardless of where they come from, human values are not necessary for pure
>intelligence, and my conjecture is that they interfere with pure
>intelligence (which is the ability to solve problems autonomously).
JR, I'm just trying to get a handle on your thinking. I *think* I now see
why you say an AI with values would be (in a sense) "weak AI". Actually, I
assumed you had something like this in mind but wasn't sure.
You are, of course, redefining the terms "strong AI" and "weak AI", but I
realise you are doing so deliberately for rhetorical effect, so that's okay.
Can I take this a bit further, however? You seem to be putting forward the
position that the only, or the overriding, value is the ability to solve problems.
But isn't this a value judgment? I'm not trying to be cute, just trying to
see how you get to this point. It seems odd to me. I place a great value on
intelligence/problem solving ability as well, but I also think there's a lot
of truth in Hume's dictum - which I can't quite remember well enough to
quote accurately - about reason being the slave of the passions. The
passions, in turn, doubtless have a biological basis. I find it very hard
to imagine an intelligence totally devoid of "passions", or to see why it
would be a good thing. I'm not even as confident as you that such
an intelligence would have greater problem-solving power. What would
*motivate* it to solve problems if it had no values at all? Shouldn't it at
least be motivated by curiosity? Moreover, why should we give up our values,
which are based wholly or partly on our biological interests? Also, don't
you think, given that a lot of problem solving uses hypothetico-deductive
reasoning rather than purely deductive reasoning, that it might be very hard
to develop a system capable of conjectures and yet with no values? It seems
to me that even being conscious would give the system something analogous to
biological interests. If the system isn't conscious, why should we value it
except as a useful tool to our ends, based on *our* values? Etc?
I, in my turn, suspect that you may have answers to these questions, but I
haven't seen anything convincing from you so far. Do you want to spell it
out?
Russell