From: Keith Henson (hkhenson@rogers.com)
Date: Mon May 17 2004 - 21:50:20 MDT
At 06:46 PM 17/05/04 -0700, Michael wrote:
>Keith (and SL4),
>
>All this talk about 'best interest' drives me a little nuts... but I guess
>that is okay because it prompts me into thinking productively.
>
>I recently discussed the Big Brother aspect of FAI with an intelligent
>newbie, and the necessity of explaining alternatives and providing cogent
>examples led me to greater thought on the subject. My earlier conclusions,
>that volitional morality will be very important to our freedom and safety in
>the future, have gained even more validity as a result of more detailed
>thinking on the issue. Also this thread, and the discussion of FAIs
>estimating 'best interest', has driven me to articulate in greater detail
>why having FAIs adhere to volitional morality as closely as possible would
>be a good idea.
snip
> The morally positive action an FAI might take could again be
>persuasive, non-invasive and helpful - advice, not force. And again, each
>individual would be and should be responsible for their own decisions and
>actions.
>
>As to the problem not being solvable, that is hyperbole. Humans have been
>successfully solving the problem of conflicting wants & needs for millennia.
Longer than that. But the "solution" often involved "kill them all," or at
least all the males.
>Sure, sometimes primitive mental programming gets switched on and all hell
>breaks loose (competition for mates, war, etc.) but most of the time people
>trade, negotiate, make deals, treaties and agreements - they sometimes even
>come up with win-win solutions (gasp!). If mere humans can solve this
>problem, and on occasion solve it well, then an FAI should not find it too
>difficult to facilitate.
The problem is not negotiating between competing entities, but deciding
which one's viewpoint you want to adopt.
>As for there being a single, universal 'best interest' - that item just
>doesn't exist. Each volitional being decides for itself what is its 'best
>interest', and that evaluation is in constant flux. It is part-and-parcel
>of being a conscious, intelligent being.
>
>In so far as a person's decisions/actions are morally negative... well, that
>is a whole-nother post.
The most fundamental actions a person can take involve reproduction. I am
personally *extremely* uncomfortable because the logic and my personal
feelings are in deep conflict. If there is unlimited reproduction or even
replication in a limited environment, eventually the population is reduced
to extreme material poverty. There just aren't enough atoms available.
Worse, humans seem to have mechanisms that evolved to induce wars between
tribes when they started feeling the pinch. If they were successful in
raising lots of kids, they hunted out all the game to feed the kids and
went into a mode where they slaughtered their neighbors or got slaughtered.
Now, with our memes shaped by a number of generations of relative plenty,
we think that killing off the neighboring tribe's males and taking their
resources and women is double-plus-ungood. But if it comes down to strong
restrictions on breeding or an occasional bout of slaughter and be
slaughtered, which do you pick? The simplest math will tell you the human
race will be forced to pick one or the other, either by our own volition
or by an AI's imposition.
(My personal preference is the third way: leave for the far side of the
galaxy and let others figure out what to do.)
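
To make the "simplest math" concrete, here is a back-of-the-envelope
sketch. Every number in it (today's population, atoms in the solar
system, atoms in a body) is a rough, illustrative assumption, not a
measurement:

# Back-of-the-envelope: how many doublings an unchecked population gets
# in a finite environment.  All numbers are rough, illustrative
# assumptions, not measurements.
population = 10**10           # order of today's human population
atoms_available = 10**57      # rough count of atoms in the solar system
atoms_per_person = 10**27     # rough count of atoms in a human body

doublings = 0
while population * atoms_per_person < atoms_available:
    population *= 2
    doublings += 1

print(doublings)              # 67 doublings before the atoms run out

Sixty-seven doublings. At even a leisurely thirty years per doubling,
that is only about two thousand years before every atom in the solar
system is spoken for, which is why I say we end up picking one or the
other.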
>Keith, when you wrote: "...understanding these [Ev.Psyc.] matters might be
>essential to providing the environment in which friendly AI can be
>developed."
>-- Sort of. It is not the environment that will be improved, but the
>accuracy of the FAI's human-cognition model.
I wasn't clear about what I meant. AI research requires considerable
technological support. An environment that, because of massive resource
wars, lacked computers and even food for the researchers would not be
conducive to much progress.
>It is very important that an
>FAI understand the ways in which humans think so that it can better model the
>future, and better understand the human-generated data that will be
>presented to it. It is not enough for an FAI to determine that Johnny
>behaves with an approximation to Bayesian rationality 82.6% of the time.
>An FAI needs to know what Johnny is mentally doing the other 17.4%, and why,
>and in what situations his cognition is likely to switch between modes.
That's well stated. Of course the problem is that Johnny may be in war
mode 17.4% of the time.
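
For what it's worth, that "switching between modes" idea can be written
down as nothing fancier than a two-state Markov chain. Everything in the
sketch below, the state names (including "war_mode") and the transition
probabilities, is an illustrative assumption of mine, picked so the
long-run fraction lands near the 17.4% above:

import random

# Toy two-state model of someone switching between cognitive modes.
# States and transition probabilities are illustrative assumptions.
TRANSITIONS = {
    "deliberative": {"deliberative": 0.95, "war_mode": 0.05},
    "war_mode":     {"deliberative": 0.20, "war_mode": 0.80},
}

def long_run_fractions(steps=100000, state="deliberative"):
    counts = {s: 0 for s in TRANSITIONS}
    for _ in range(steps):
        counts[state] += 1
        r, cumulative = random.random(), 0.0
        for next_state, p in TRANSITIONS[state].items():
            cumulative += p
            if r < cumulative:
                state = next_state
                break
    return {s: c / steps for s, c in counts.items()}

print(long_run_fractions())   # roughly 0.8 deliberative, 0.2 war_mode

The point is that the long-run percentage by itself underdetermines the
dynamics: the same ~20% could come from rare, long episodes or from
frequent, brief ones, which is exactly why knowing what flips the switch
matters more than knowing the number.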
Keith Henson