From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Wed Dec 04 1996 - 18:03:51 MST
> Likewise, the more I look into
> the matter, the stronger grows my opinion that what we currently
> refer to as "smartness" or "intelligence" is a whole cluster of
> fundamentally distinct skills that we have yet to disentangle from
> one another.
There are all kinds of semantic primitives: visual images, motor skills,
causal linkages, God knows what. Nothing rules out saying: "Person A
has a better ability to manipulate visual primitives by looking at them
from different angles, but has trouble with motor skills." "Person B is
better at tracing single causal links but can't formulate long causal
chains." (A is dyslexic, I am B).
> Eliezer, you offer us something you call "a measure of smartness",
> but you are not proposing any means of actually *measuring* anything.
The SAT - not IQ tests, just the SAT - measures the ability to trace
single causal links. I don't know why. Rubik's Cube could probably
measure the ability to handle visual semantic primitives.
> Also, your talk of The Meaning of Life makes me uncomfortable. Life
> has many meanings, and I don't think they can all really be reduced
> to a single measure.
1) Life has many meanings? Name five.
2) Nothing rules out multiple distinct classes of self-justifying goals;
in fact, I wouldn't be the least bit surprised if there were.
> I expect that any one member of this bundle will be,
> perhaps, measurable someday, but that *value* or *meaning* is
> something that emerges only from the synergistic interaction of
> many such skills, knowledges, and microvalues and not something
> that can be identified with any one subcomponent.
My diatribes on semantic primitives and ethics were more or less
distinct; you appear to have gotten them confused. Smartness is the
ability to solve some problem; ethics asks what problem we should be
solving.
> We don't even have a general benchmark of value for
> something as simple as a desktop workstation.
Nonetheless, my PowerCenter 120 beats my old Plus.
> Improving performance in
> one area of a complex system nearly always involves performance
> tradeoffs in some other area.
I *know* that. I invented Algernon's Law: "Any major enhancement of
human intelligence is a net evolutionary disadvantage."
--
sentience@pobox.com    Eliezer S. Yudkowsky
http://tezcat.com/~eliezer/singularity.html
http://tezcat.com/~eliezer/algernon.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I know.