From: Smigrodzki, Rafal (SmigrodzkiR@msx.upmc.edu)
Date: Thu Mar 21 2002 - 13:17:20 MST
Eliezer S. Yudkowsky [mailto:sentience@pobox.com] wrote:
One possibility is that you might just gulp down a couple hundred (thousand) IQ points directly, thus obviating "eventually".
### This would be the coolest elixir ever.
Another possibility is that after chatting with the FAI for a couple of hours, you would have heard several major revelations about morality - i.e., incremental advances small enough to be accepted relative to your current knowledge - and would therefore be willing to credit that the FAI knew something about morality. Based on the inductive generalization from observed abilities of the FAI, and deductive reasoning about the probable capabilities of a transhuman, you might then be willing to assign a high probability that moral guidance received from the FAI is accurate. I guess you could call that "faith" but I don't see why it couldn't be governed by the Bayesian Probability Theorem just like everything else.
### OK. I have no problem with it, but from my experience with humanity so far, this might be quite hard to swallow for many. On the other hand, the FAI might just turn out to be Gandhi and Demosthenes rolled into one, and cubed. Maybe it could convert the Taliban in ten minutes flat.
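A toy numerical illustration of the kind of updating Eliezer describes (a sketch only; the 10% prior, the likelihoods, and the assumption that the "revelations" count as independent evidence are all invented for illustration):

# Sketch: Bayesian updating of trust in the FAI's moral guidance.
# All numbers here are made up for illustration.

def update(prior, p_obs_if_reliable, p_obs_if_unreliable):
    """One Bayes update: P(reliable | observed revelation)."""
    numerator = p_obs_if_reliable * prior
    evidence = numerator + p_obs_if_unreliable * (1.0 - prior)
    return numerator / evidence

# Start sceptical: 10% prior that the FAI is a reliable moral guide.
p_reliable = 0.10

# Each small revelation that checks out against my own moral knowledge
# is assumed far more likely if the FAI is reliable (0.9) than if it is
# merely confabulating impressively (0.3).
for revelation in range(5):
    p_reliable = update(p_reliable, 0.9, 0.3)
    print(f"after revelation {revelation + 1}: P(reliable) = {p_reliable:.3f}")

After a handful of such observations the posterior climbs well past 0.9, which is the "high probability" Eliezer mentions, without any appeal to faith.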
--------
The 80% of humans who wish to "strike down the infidels" all wish to strike down different infidels, and would, if asked for a moral justification of their various hatreds, ground their justifications in moral reasonings that rest on different objective falsehoods.
Earth, the Galactic Luddite Preserve, may ban nanotechnology (i.e., the summed volition of the Pedestrians may be a ban); I see no reason to ban it anywhere else.
### Still - the end result you seem to imply is the FAI acting counter to the explicit wishes (whether grounded in reality or not) of the majority of humans. Again, I have no personal problem with it, but I am sure they would.
Also, what if the FAI arrives at the conclusion that the principle of autonomy (as applied to sentients above 85 IQ points) plus majority rule trumps all other principles? If so, it might act to fulfill the wishes of the majority, even if that means the destruction of some nice folks.
Banning nano altogether within a large radius of the Earth might be necessary if there were persons unwilling to incorporate hi-tech defensive systems - a stray assembler, carried on the solar wind, would be enough to wipe them out. I do hope the FAI will just say the Luddites are crazy, and that if they get disassembled as a result of refusing to use simple precautions, it's their own fault (just like the vaccination-refusing bozos who might die if you sneeze at them).
----------
> How do you verify the excellence of an SAI's ideas, and differentiate
> them from a high-level FoF?
We verify the excellence using the same philosophy we used to arrive at all the moral content we gave the FAI to start with. If an FAI develops the ability to do moral thinking that is so advanced as to be incomprehensible to us mere humans, it would presumably be accompanied by the ability to toss out a few trivial tidbits that are tiny enough incremental advancements over current understanding for us mere moral bozos to grasp them and be enormously impressed.
### I have no doubt the SAI will be quite impressive, but without being able to follow its reasoning steps you will be unable to detect FoF.
I imagine that massive, at least temporary, IQ enhancement might be required by the FAI as a condition of being considered a subject of Friendliness - by analogy to sane humans, who do not afford moral subjectship to entities at the spinal level (pro-lifers notwithstanding), the FAI might insist you enhance to vis level, at least temporarily, to give your input and understand Friendliness. After that, perhaps one might retire to a mindless existence at the Mensa level.
-----------
> Eliezer:
>
> Within a given human, altruism is an adaptation, not a subgoal. This is
> in the strict sense used in CFAI, i.e. Tooby and Cosmides's "Individual
> organisms are best thought of as adaptation-executers rather than as
> fitness-maximizers."
>
> Rafal:
>
> What is an adaptation if not an implementation of a subgoal of a
> goal-directed process?
Human subgoals are far more context-sensitive than evolution's subgoals. Evolution is limited to induction. If it didn't happen to your ancestors at least once, then from evolution's perspective it doesn't exist. A subgoal can be adjusted in realtime; adaptations are fixed except over evolutionary time.
### OK. I understand. But once an adaptation is accepted by a declarative reasoning process into a person's goal system, it can become a subgoal. So altruism is both an adaptation *and* a subgoal in some persons.
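A toy sketch of the adaptation-versus-subgoal distinction as I read it (the class names, example behaviours, and numbers are my own illustration, not anything from CFAI):

# Toy contrast between a fixed adaptation and a context-sensitive subgoal.
# Entirely illustrative; the names and numbers are made up.

class Adaptation:
    """Executes a behaviour that paid off, on average, in the ancestral
    environment. It cannot re-derive itself when the context changes;
    only evolutionary time can adjust it."""
    def act(self, context):
        return "share food with kin"   # fixed policy, whatever the context


class Subgoal:
    """Derived in realtime from an explicit supergoal, so it can be
    re-evaluated whenever the context changes."""
    def __init__(self, supergoal):
        self.supergoal = supergoal     # e.g. "maximize others' well-being"

    def act(self, context):
        # Pick whichever candidate action currently serves the supergoal best.
        return max(context["options"], key=lambda a: context["benefit"][a])


context = {
    "options": ["share food with kin", "donate to strangers online"],
    "benefit": {"share food with kin": 1, "donate to strangers online": 5},
}

print(Adaptation().act(context))         # share food with kin
print(Subgoal("altruism").act(context))  # donate to strangers online

The point being: once declarative reasoning adopts the old fixed behaviour as a subgoal of an explicit supergoal, it becomes adjustable in realtime, which is the sense in which altruism can be both at once.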
-----
> Eliezer:
>
> fact does not mean <snip> that Greek philosophers were common in the
> ancestral environment.
### You sure? :-)
Rafal