From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Mar 21 2002 - 13:52:47 MST
> ### OK. I have no problem with it, but from my experiences
> with humanity so far, this might be quite hard to swallow for many. On the
> other hand, the FAI might just turn out to be Gandhi and
> Demosthenes rolled into one, and cubed. Maybe it could convert the Taliban
> in ten minutes flat.
That would be easy. The only problem is that the Taliban does not *want* to
be convinced in ten minutes flat.
> ### Still - the end result you seem to imply is the FAI
> acting counter to the explicit wishes (whether grounded in reality or not)
> of the majority of humans. Again, I have no personal problem with it but I
> am sure they would.
Someone wants to shoot me. Is it moral for me to act counter to his
explicit wishes and dodge? Sure. Is it moral for an FAI to intercept the
bullet because my wishes about my life take precedence over his wishes about
my life? Sure.
> Also, what if the FAI arrives at the conclusion that the
> principle of autonomy as applied to sentients above 85 IQ points, plus
> majority rule, trumps all other principles? If so, then it might act to
> fulfill the wishes of the majority, even if it means destruction of some
> nice folks.
Why would an FAI conclude this?
> Banning nano altogether within a large radius of the Earth
> might be necessary if there were persons unwilling to have incorporated
> hi-tech defensive systems - a stray assembler, carried with the solar wind,
> would be enough to wipe them out. I do hope the FAI will just say Luddites
> are crazy, and if they get disassembled as a result of refusing to use
> simple precautions, it's their fault (just as the vaccination-refusing bozos
> who might die if you sneeze at them).
Well, the human-smartness proof-of-concept answer to this is the Sysop
Scenario, where all technology on the Basic Level - nanotech, femtotech,
chromotech, whatever - is an extension of the Sysop. But people have been
acting odd around the Sysop Scenario lately, so I think I'll fall back on
the real answer, which is "Figuring out things like that is part of what you
need a transhuman for."
> ### I have no doubt the SAI will be quite impressive but
> without being able to follow its reasoning steps you will be unable to
> detect FoF (failure of Friendliness).
It's an SI's responsibility to do that kind of maintenance. Once there's an
SI around, whether you can detect FoF is quite beside the point...
> I imagine that massive, at least temporary IQ enhancement
> might be required by the FAI as a condition of being considered a subject of
> Friendliness - by analogy to sane humans who do not afford moral subjectship
> to entities at the spinal level (pro-lifers notwithstanding), the FAI might
> insist you enhance to vis level at least temporarily to give your input and
> understand Friendliness. After that perhaps one might retire to a mindless
> existence at the Mensa level.
Why? I can easily conceive of an infrahuman mental entity that would
deserve citizenship, although I believe that primates have insufficient
reflectivity to qualify.
> ### OK. I understand. But once an adaptation is accepted by
> a declarative reasoning process into a person's goal system, it can become a
> subgoal. So altruism is both an adaptation *and* a subgoal in some persons.
Yes, but they are conceptually separate. I make use of my altruistic
hardware but I apply reasoned corrective factors where I expect hardware
altruism to differ from normative altruism.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence