From: "Russell Blackford" <RussellBlackford@bigpond.com>
> You seem to be putting a position
> that the only, or the overriding, value is the ability to solve problems.
> But isn't this a value judgment?
Define intelligence as the ability to solve problems and answer questions,
treat that as a definition rather than a value judgment, and the positional
disparity dissolves. BTW, I stole this definition from Dr. Francis Heylighen.
It could be the "value" of AI, I suppose, if one wants to conflate "strong AI"
with value. But I don't see the "value" of "strong AI" at issue in this
discussion. Hawking apparently envisions "strong AI" as potentially
tyrannical and dangerous (the myth of the monstrous machine), while Kurzweil
invokes "human values" as a remedy (another version of Eliezer's "Friendly AI"
scenario).
To review, Hawking's position is:
``We must develop as quickly as possible technologies that make possible a
direct connection between brain and computer, so that artificial brains
contribute to human intelligence rather than opposing it,'' and we should do
this ``if we want biological systems to remain superior to electronic ones.''
Ray Kurzweil disagreed, stating:
> I don't agree with Hawking that "strong
> AI" is a fate to be avoided. I do believe that we have the ability to shape
> this destiny to reflect our human values, if only we could achieve a
> consensus on what those are.
I agree with Kurzweil that "strong AI" is _not_ to be avoided. What's more, I
think "strong AI" is to be diligently sought and enthusiastically welcomed.
Nevertheless, I disagree with Kurzweil's invocation of "human values" because
it seems to me that subjective values of any kind contradict and impede the
emergence of "strong AI."
("strong AI" = human-competitive AI, which can quickly evolve to artificial
superintelligence)
> I place a great value on
> intelligence/problem solving ability as well, but I also think there's a lot
> of truth in the dictum that I can't quite remember sufficiently well to
> quote accurately from Hume - the one about reason being the slave of the
> passions. The passions, in turn, doubtless have a biological basis. I'm
> finding it very hard imagining an intelligence totally devoid of "passions"
> or why it would be a good thing. I'm not even as confident as you that such
> an intelligence would have greater problem-solving power. What would
> *motivate* it to solve problems if it had no values at all?
Hume was a philosopher, right? >poof< ...there goes his credibility. ©¿©¬
Just kidding. Actually, I think that making reason a slave rather than a master
explains much of human misery and sorrow. Although human intelligence has its
origin in biology, it does not follow that intelligence must remain tethered
to it. Hence, we have the transhumanist movement.
Nothing _intrinsic_ motivates AI to do anything at all, which is precisely why
Hawking's notion of AI "opposing" human intelligence is preposterous. By
itself, without input from an external operant, AI does nothing. It just sits
there like a perfectly enlightened master, like a supercomputer on idle, like
a programmable calculator awaiting the next factor. "Sitting quietly, doing
nothing, Spring comes, and the grass grows by itself," as Li Po might put it.
Such an intelligence (prototypes of which have been demonstrated) has greater
problem-solving power than humans because its reasoning is based on a
more accurate model of reality -- one that is not weighted by the biases and
distortions of human values.
> Shouldn't it at
> least be motivated by curiosity?
I don't see any reason to embellish AI with anthropomorphic features.
Heuristic algorithms have commonly functioned as "curiosity" in intelligent
agents, such as the web crawlers and gophers that retrieve information. Beyond the
acquisition of knowledge, "curiosity" does not compute, and in fact would be
counterproductive in AI.
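To illustrate (a minimal sketch only, assuming a made-up novelty heuristic and
a hypothetical fetch() callback -- not anyone's production crawler), here is
how such mechanical "curiosity" might look in Python:

    import heapq
    from collections import Counter

    def novelty(words, seen_counts):
        # Score a page higher the fewer times its words have been seen so far;
        # this scoring function is the agent's entire "curiosity."
        return sum(1.0 / (1 + seen_counts[w]) for w in words)

    def crawl(frontier, fetch, max_pages=100):
        # frontier: iterable of (url, words) seed pages.
        # fetch(url): hypothetical callback yielding (link_url, link_words) pairs.
        seen_counts = Counter()
        visited = set()
        # Max-heap via negated scores: always expand the most "novel" page next.
        heap = [(-novelty(words, seen_counts), url, words)
                for url, words in frontier]
        heapq.heapify(heap)
        order = []
        while heap and len(order) < max_pages:
            _, url, words = heapq.heappop(heap)
            if url in visited:
                continue
            visited.add(url)
            seen_counts.update(words)
            order.append(url)
            for link_url, link_words in fetch(url):
                if link_url not in visited:
                    heapq.heappush(
                        heap,
                        (-novelty(link_words, seen_counts), link_url, link_words))
        return order

The agent "wants" whatever the scoring function says it wants, nothing more;
swap in a different heuristic and its "curiosity" changes accordingly. No
anthropomorphism required.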
> Moreover, why should we give up our values
> which are based wholly or partly on our biological interests?
What do you mean "we," meat puppet? ©¿©¬
OK, I'm kidding again... hope you see the value of humor.
We should give up our values because they are fake, phony, and fraudulent.
People say they are fighting over "values" when they go to war (which involves
killing, looting, raping, etc.), but what they're really fighting over is
territory, resources, and "biological interests." These "values" are what
eventuate in global suicide via total war. As Eliezer figured out years ago,
the human race stands at a crossroads now: One road leads to punctuated
evolution and a quantum leap to a biological phase transition, aka the
singularity. The other road leads to dystopian nightmare police states and
"Outlaw School" totalitarian political systems.
AI ("strong AI" -- see note below) is the technology that makes the phase
transition beyond biological interests possible. By some definitions, A-life
is still a form of "biology" but I concede to those who prefer the term
"vitology."
> Also, don't
> you think, given that a lot of problem solving uses hypothetico-deductive
> reasoning, not purely deductive reasoning, that it might be very hard
> developing a system capable of conjectures and yet with no values?
The real problem is that we have an over-abundance of conjectures already, so
no, I don't see any difficulty here. To clarify this point, differentiate
between solving problems via reasoning and *discovering* solutions via
pattern recognition. Discovering solutions by recognizing patterns is a
subset of the more inclusive term "problem solving." We don't need to develop
a system capable of conjecture because we already have such a system. It's
called the human brain, and it has managed to fill the world with conjectures.
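If it helps, here's a toy contrast (Python, every name hypothetical) between
deriving a solution step by step and *discovering* one by recognizing a
previously seen pattern:

    # problem signature -> previously discovered solution
    known_patterns = {}

    def solve_by_reasoning(n):
        # Stand-in for stepwise deduction: derive the n-th triangular number.
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def solve(n):
        # Pattern recognition first: a remembered case is discovered, not derived.
        if n in known_patterns:
            return known_patterns[n]
        answer = solve_by_reasoning(n)
        known_patterns[n] = answer  # the new pattern joins the repertoire
        return answer

Both paths count as "problem solving"; recognition is simply the cheaper
subset.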
As I envision it, "strong AI," just like any other intelligence, does not
function in isolation, but rather in relation to others. The difficulty for
humans in this new society of intelligent entities will be *accepting* the
solutions provided by machines that are more intelligent than the most
intelligent humans. Do you think the Pope will be able to accept solutions
offered by
machine intelligence which contradict the dogma of the church? Remember
Galileo?
> It seems
> to me that even being conscious would give the system something analogous to
> biological interests. If the system isn't conscious, why should we value it
> except as a useful tool to our ends, based on *our* values? Etc?
Well, since I don't believe in "consciousness," perhaps we can let the "strong
AI" decide for itself how to handle this kind of question when the occasion
arises. As for why we should "value" the AI system, I think you've answered
that yourself when you refer to it as "a useful tool." Initially, that's
exactly why developers try to increase the intelligence of systems: it makes
them more useful. The value is based on the performance capabilities of the
machine intelligence under consideration. For example, a system which
yields a ten million dollar profit is twice as valuable as one which nets a
mere five million.
> I, in my turn, suspect that you may have answers to these questions, but I
> haven't seen anything convincing from you so far. Do you want to spell it
> out?
It's not my intention to "convince" you of anything, but you're right, it
makes me happy to spell it out for you.
http://www.ecs.soton.ac.uk/~phl/ctit/ho1/node1.html
The Strong AI view says: the brain is a complex information-processing
machine, but "consciousness" and understanding are by-products of the
complicated symbol manipulation that occurs when we process information. We
will eventually be able to model this process and reproduce it.
The Weak AI view says: the brain is something more than an
information-processing machine, and although we will be able to model some of its
functionality, it will never be possible to model all the properties of the
brain.
©¿©¬
Stay hungry,
--J. R.
Useless hypotheses, etc.:
consciousness, phlogiston, philosophy, vitalism, mind, free will, qualia,
analog computing, cultural relativism, GAC, Cyc, Eliza, cryonics, individual
uniqueness, ego, human values
Everything that can happen has already happened, not just once,
but an infinite number of times, and will continue to do so forever.
(Everything that can happen = more than anyone can imagine.)
We won't move into a better future until we debunk religiosity, the most
regressive force now operating in society.
http://groups.yahoo.com/group/Virtropy/message/2949