It's also possible that our actions in ambiguous cases may be
quantum-unpredictable, some arbitrary function of which neurons randomly fire.
But, and I'd like to adjust my definition a bit in retrospect (so much for
"clear"), OFAPP should state that you can perfectly predict the
*probabilities* of all outcomes.
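A toy way to picture the adjusted definition (the numbers and names here
are mine, purely illustrative): the event itself stays irreducibly random,
but the OFAPP predictor states its exact distribution.

    import random

    def ambiguous_decision():
        """The quantum-unpredictable part: fires with probability 0.3,
        and nothing can tell you in advance which way it goes."""
        return random.random() < 0.3

    def ofapp_predicts():
        """What OFAPP *can* do: perfectly predict the probabilities of
        all outcomes, not the outcomes themselves."""
        return {True: 0.3, False: 0.7}

    # Check the prediction against the process itself:
    n = 100000
    fired = sum(ambiguous_decision() for _ in range(n))
    print(fired / n, "vs. predicted", ofapp_predicts()[True])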
> There's research going on to implement sophisticated neural nets
> in silicon hardware, and that research route might lead to an
> omniscient FAPP entity someday even if we are fundamentally different
> from abacuses.
Especially if our decisions are not determined by random quantum collapses,
but by genuine and predictable rational reasoning.
> But a review of the taxonomy of neural net
> architectures (I usually distinguish between feedforward and
> recursive architectures, and make a second distinction depending
> on whether the learning algorithm uses feedback from the exterior
> environment or not) makes it clear that chordate nervous systems
> are both recursive in architecture and (to some extent or another)
> seek out and use reinforcing behavior from the environment in the
> learning algorithms. It seems to me that we have very little
> technical experience building and training nets that use *all* the
> architectural tricks, or in other words, building and training nets
> that even remotely resemble ourselves or even other real animals.
"Neural nets are not built in imitation of the human brain. They are built in
imitation of a worm's brain, and when we have neural nets down straight we'll
have a long way to go." (self-quote).
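To make the quoted taxonomy concrete, here is a toy sketch (the names,
numbers, and learning rule are my own illustration, not from the thread):
one recurrent unit whose weights are adjusted by reinforcement from an
exterior environment - both architectural tricks at once, in miniature.

    import random

    class RecurrentUnit:
        """One recurrent 'neuron': its output feeds back into its own
        next step, unlike a feedforward unit."""
        def __init__(self):
            self.w_in = random.uniform(-1, 1)
            self.w_rec = random.uniform(-1, 1)
            self.state = 0.0
        def step(self, x):
            self.state = max(0.0, self.w_in * x + self.w_rec * self.state)
            return self.state

    def environment(unit):
        """Exterior environment: rewards the unit for outputting a
        running count of the 1-inputs seen so far -- a task that needs
        internal state, so no feedforward unit can solve it."""
        unit.state = 0.0
        count, err = 0.0, 0.0
        for _ in range(20):
            x = random.choice([0.0, 1.0])
            count += x
            err += (unit.step(x) - count) ** 2
        return -err  # higher (less negative) is better

    def train_by_reinforcement(unit, episodes=500):
        """Crude reinforcement: perturb the weights, keep the change
        only if the environment's reward improves (stochastic
        hill-climbing, standing in for fancier learning algorithms)."""
        best = environment(unit)
        for _ in range(episodes):
            old = (unit.w_in, unit.w_rec)
            unit.w_in += random.gauss(0, 0.1)
            unit.w_rec += random.gauss(0, 0.1)
            reward = environment(unit)
            if reward > best:
                best = reward
            else:
                unit.w_in, unit.w_rec = old
        return best

    u = RecurrentUnit()
    print("reward before training:", environment(u))
    print("reward after training: ", train_by_reinforcement(u))

The exact solution here is w_in = 1, w_rec = 1; the point is only that the
reward signal comes from outside the net, and the recurrence carries state.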
> While I agree with you in wanting to guard against failures of
> imagination, venturing real predictions in a field as new and
> inchoate as this one is folly. I consider Moravec's predictions
> to be an enjoyable form of play, but I don't let them keep me up
> at night.
You'll note that I said Powers could be OFAPP. I was just pointing out that
our ethical systems derive a great deal of their pattern from:
(1) the possibility that you are wrong no matter how sure you are of yourself
("The ends do not justify the means");
(2) the fact that someone else might know more than you do no matter how dumb
you think they are ("Respect the opinions of others");
(3) the Hofstadterian Prisoner's Dilemma resolution, that your decision
process is partially duplicated in others ("What if everyone else decides to
do the same thing?").
Note that *all* *three* break down under even an *approximation* to OFAPP.
For all I know, they break down under first-stage transhumanity, no Powers necessary.
Our ethical laws are a paradox. They are very "fragile" derivatives of human
nature, in the sense that a slight alteration in nature would produce a large
difference in result. (My definition of "slight" may differ from yours.) But
we think of them as absolute, because only an absolute injunction can overcome
our nature to break ethical rules for what seem like rational and altruistic reasons.
But Anders' rational result-by-probability multipliers don't obey (1).
Knowledge-rich Powers may not obey (2), or at least may not see any reason
to; and (2) is a derivative of (1), in the sense that you don't estimate the
*probability* that someone knows more than you do.
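Read "result-by-probability multipliers" as ordinary expected value - a
hedged gloss on my part; Anders' actual formulation may differ:

    def expected_value(outcomes):
        """outcomes: (probability, utility) pairs whose p's sum to 1."""
        return sum(p * u for p, u in outcomes)

    # An agent that folds "I might be wrong" into an explicit 10%
    # probability of disaster no longer needs the absolute form of (1):
    print(expected_value([(0.9, 10.0), (0.1, -50.0)]))  # 4.0 > 0: act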
And perhaps only a slight increase in emotional sophistication is necessary to
void the partial illusion of (3). One who *knows* that others are not reasoning
the same way, and can guess the outcome with near-certainty, may "defect" in a
non-iterated PD or any non-iterated game. (Force-uploading is "defecting", in
a sense.)
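A sketch of why (the payoffs are my own standard PD numbers, and treating
"not duplicated" as "plays the opposite move" is a simplifying assumption
of mine): cooperation only wins while your decision is *correlated* with
the other player's; an entity that knows the correlation is gone just
reads off the dominant move.

    # One-shot Prisoner's Dilemma, standard ordering T > R > P > S.
    T, R, P, S = 5, 3, 1, 0
    payoff = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

    def expected(my_move, p_same):
        """Expected payoff if, with probability p_same, the other's
        decision process duplicates mine (Hofstadter's resolution),
        and otherwise produces the opposite move."""
        other_diff = "D" if my_move == "C" else "C"
        return (p_same * payoff[(my_move, my_move)]
                + (1 - p_same) * payoff[(my_move, other_diff)])

    for p_same in (1.0, 0.5, 0.0):
        print(p_same, expected("C", p_same), expected("D", p_same))
    # p_same = 1.0: C yields 3, D yields 1 -- the Hofstadterian case.
    # p_same = 0.0: C yields 0, D yields 5 -- *knowing* others reason
    # differently, the near-certain guess says defect.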
Finally, an increase in *self*-knowledge - and boy, will that be easy to
program - voids a lot of the *strength* of ethical rules. Again, ours are
absolute only because it is our nature to ignore them when we shouldn't,
especially in political issues. So even if all three remain, they may be
voidable at will.
> But you may well know more neural-net theory than I do (because
> I'm guessing that you may well have more math than I do), so
> maybe I'll adjust my paranoia upward a notch or two. As the
> Bears song goes, "fear is never boring." ;)
As long as you refuse to act on it, there is no such thing as too much paranoia.
--
sentience@pobox.com    Eliezer S. Yudkowsky
http://tezcat.com/~eliezer/singularity.html
http://tezcat.com/~eliezer/algernon.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.