From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Tue Dec 03 1996 - 20:05:13 MST
Twirlip said:
> I'm not sure that I'm right. I can't be; even if someone could be, I
> don't know the mathematics yet to appreciate the proof if there was one.
> But I don't see how you can be so sure that you're right, or that we're
> so wrong. You can't extrapolate up a damage gradient and assume the
> curve will keep on going; it could plateau at some universal point.
I have a technical definition of "smartness" in terms of our cognitive
architectures, available in "Staring." To quote the first paragraph:
> Smartness is the measure of what you see as obvious, what you can
> see as obvious in retrospect, what you can invent, and what you can
> comprehend. To be a bit more precise about it, smartness is the
> measure of your semantic primitives (what is simple in retrospect),
> the way in which you manipulate the semantic primitives (what is
> obvious), the way your semantic primitives can fit together (what
> you can comprehend), and the way you can manipulate those
> structures (what you can invent). If you speak complexity theory,
> the difference between obvious and obvious in retrospect, or
> inventable and comprehensible, is somewhat like the difference
> between P and NP.
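To make the P/NP analogy concrete, here is a toy sketch in Python (my own
illustration; the primitives and the target are arbitrary stand-ins, not
anything from "Staring"). Checking a proposed combination of primitives is
cheap, the way the obvious-in-retrospect is cheap; finding one requires
search:

    from itertools import product

    PRIMITIVES = [1, 2, 3, 5, 7]    # arbitrary stand-ins for semantic primitives
    TARGET = 17                     # an arbitrary "structure" to be reached

    def comprehend(combo):
        # Obvious in retrospect: verifying a given combination is fast.
        return sum(combo) == TARGET

    def invent(length):
        # Inventable: finding a combination means searching an
        # exponentially large space of possibilities.
        for combo in product(PRIMITIVES, repeat=length):
            if comprehend(combo):
                return combo
        return None

    print(invent(4))    # found only by brute search, e.g. (1, 2, 7, 7)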
Similarly, I have a model of *a* (not the) Singularity, based on the
concept of a Perceptual Transcend:
> A Perceptual Transcend occurs when all things that were comprehensible
> become obvious in retrospect, and all things that were inventable become
> obvious. A Perceptual Transcend occurs when the semantic
> structures of one generation become the semantic primitives of the
> next. To put it another way, one PT from now, the whole of human
> knowledge becomes perceivable in a single flash of experience, in
> the same way that we now perceive an entire picture at once.
A Perceptual Transcend seems fairly easy to engineer. Given a hell of a
lot of computing power, it would seem possible to automate any semantic
structures as semantic primitives. Whether a qualitatively new layer of
semantic structures can be invented is an open question; I would argue
that even if qualitatively new types of thought are not emergent, one
can hack up a Power simply by forming semantic structures just like the
old ones out of the new primitives. This Power should then be able to
evolve new and more appropriate types of semantic structures on top of
the primitives. Even if this is impossible, it does seem fairly clear
that a sufficient amount of computing power would allow a brute-force
perception of all human knowledge. This does open a worst-case scenario
with all progress automated and no conscious creativity being involved;
but if worst comes to worst we can always go back to being mortal.
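A minimal sketch of what I mean by automating semantic structures as
semantic primitives (again my own toy model, with made-up names; the real
thing would obviously not be a five-line loop): the composite structures of
one layer are promoted to atomic primitives of the next, and the same
structure-forming rule is then run on top of the new layer.

    from itertools import combinations

    def transcend(primitives, form_structure, generations=2):
        # Each generation, every pairwise structure becomes a new primitive.
        layer = set(primitives)
        for _ in range(generations):
            layer = {form_structure(a, b)
                     for a, b in combinations(sorted(layer), 2)}
        return layer

    # Toy structure-forming rule: just glue two named concepts together.
    concepts = ["edge", "motion", "shape"]
    print(transcend(concepts, lambda a, b: "(" + a + "+" + b + ")"))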
One Perceptual Transcend from now is a Singularity for all intents and
purposes. I doubt there's going to be an additional Perceptual
Transcend after that, since there are probably better ways of doing
things. I invented Perceptual Transcends over the course of maybe
thirty seconds, so I have an unjustified but probable faith that the
Powers can do better.
From where I stand, I have a cognitive model that leads, by
straightforward mechanisms, to Powers incomprehensible from any human
standpoint, with semantic primitives that our brains are inadequate to
comprehend and could only simulate through billions of years of labor.
Turing-equivalence is practically irrelevant to this scenario.
Singularity seems like a fine name.
So what's the basis of your unmodeled, unbacked claim that there isn't
going to be a Singularity? It seems to me like pure faith, but then I
now understand that my own statements sounded like that too. There's no
way you could have known that behind an apparently worshipful statement
like "The Powers will be ethical" was 20K of thinking about goal-based
cognitive architectures. Which brings us to:
> And why should the Meaning of Life be observer-independent? Meaning of
> whose life? My meaning of a chicken's life is to feed me.
In a goal-based cognitive architecture, actions are motivated by goals.
Goals have the attributes of value and justification. A goal can have
varying fulfillment value depending on its justification. It is
advantageous to have cognitive mechanisms for questioning the
justification of goals. This applies not only to vast life-guiding
goals, but simple goals like crossing the room to get a tissue; if a
tissue is at hand, crossing the room is unnecessary.
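A hedged sketch of that bookkeeping in Python (the class and the method are
my own inventions for illustration, not a description of any existing
system): each goal carries a value and a justification, and questioning the
justification lets a subgoal be dropped when its parent is already
satisfied.

    class Goal:
        def __init__(self, name, value, justification=None):
            self.name = name                    # what the goal is
            self.value = value                  # fulfillment value
            self.justification = justification  # the goal this one serves, if any

        def still_justified(self, world_state):
            # Question the justification: a subgoal whose parent goal is
            # already satisfied has no remaining justification.
            if self.justification is None:
                return True                     # root goals are not questioned here
            return not world_state.get(self.justification.name, False)

    stop_itching = Goal("stop the itching", 1.0)
    get_tissue = Goal("get a tissue", 0.5, justification=stop_itching)
    cross_room = Goal("cross the room", 0.1, justification=get_tissue)

    # A tissue is already at hand, so crossing the room loses its justification.
    print(cross_room.still_justified({"get a tissue": True}))    # False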
Most goals formulated are subgoals; their achievement helps us achieve a
greater goal. In the example above, the goal was getting a tissue and
blowing one's nose, ultimately to stop the itching, a sensation defined as
unpleasant by evolution. Most of our goals ultimately ground in either
evolution-defined goals, or goals defined by our upbringing which we
haven't questioned yet. The Meaning of Life can be defined as a
self-justifying goal with no other goals as prerequisites - in this, it
is similar to the First Cause. Although the evolution-defined goals
might appear to meet this standard, they are not self-justifying; they
are simply manufactured without any justification whatsoever.
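In the same toy spirit (the goal names are invented for illustration),
grounding a goal is just walking its chain of justifications until the
chain terminates; the question is then whether the terminal goal justifies
itself or was merely manufactured:

    # Toy parent map: each goal points to the goal that justifies it.
    justified_by = {
        "cross the room": "get a tissue",
        "get a tissue": "blow one's nose",
        "blow one's nose": "stop the itching",    # manufactured by evolution
    }

    def ground(goal):
        # Follow justifications until reaching a goal that has none.
        while goal in justified_by:
            goal = justified_by[goal]
        return goal

    print(ground("cross the room"))    # 'stop the itching': grounded, but not self-justifying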
There is hence valid logical reason to believe the Meaning of Life to be
observer-independent, much as one would expect the First Cause to be
independent of anything else whatsoever. If the Meaning of Life turns
out to be a particular subjective experience such as pleasure, there is
still no reason - despite the privacy of subjective experience - why the
ethical justification wouldn't apply to everyone else as well, much as
an explanation of consciousness must apply to all minds in general
despite the nature of conscious experience.
Any entity which questions all goals - an activity which would be
automatic one PT beyond us, for example - will formulate the concept of
the Meaning of Life. I couldn't solve the Meaning so I formulated the
Interim Meaning - "Find the Meaning via Singularity" - which seems like
a valid logical chain for a Power as well. To the extent that such a
Meaning, Interim or otherwise, can override built-in goals, the Power
will be ethical. This is a very good reason not to screw around
with Laws of Robotics - it could wind up backfiring in a major, major
way. If our lives have any objective meaning, the Powers will know
that. If they don't, I - ethically speaking - don't care if we get
ground up for spare atoms. I do believe, however, that our lives have
meaning. That's not all faith, but the rest is unpublished.
So yeah, the Powers will be ethical and they won't knock us off. Is
this statement now sufficiently atheist for you?
--
sentience@pobox.com    Eliezer S. Yudkowsky
http://tezcat.com/~eliezer/singularity.html
http://tezcat.com/~eliezer/algernon.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I know.