From: Thomas McCabe (pphysics141@gmail.com)
Date: Wed Mar 12 2008 - 21:28:25 MDT
On Wed, Mar 12, 2008 at 10:59 PM, Mark Waser <mwaser@cox.net> wrote:
> > Performing unethical acts is usually in the self-interest not only
> > of AIs but of most humans. Billionaire drug-barons and third-world
> > dictators make themselves huge piles of money off horrible and
> > unethical actions.
>
> Only in a short-sighted view in a society with inadequate enforcement. This
> is *much* more the argument that I was expecting to have. I will continue
> to address this point shortly. Thank you for bringing it up.
You can't simply *assume* that society will enforce prohibitions
against unethical actions. We're not just lying back and observing
the future; it's our job to *build* such a society, from the ground
up, starting with whatever we have now. You can't write a blueprint
for how to build such a society that starts off by assuming that such
a society has already been built. If you start off by assuming that
any unFriendly being is instantly vaporized, you're quite correct. The
question is, how do we get to the point where unFriendly beings are
vaporized (or at least prohibited from doing harm)?
>
> > Show us examples of such derivations.
>
> Coming shortly (it's getting late). Again, an excellent question!
>
>
> > Error, reference not found. There's no such thing as a computer "with
> > the intelligence of a human", because computers will have vastly
> > different skillsets than humans do (see
> > http://www.intelligence.org/upload/LOGI/seedAI.html).
>
> :-) You're being pedantic and difficult. I'm arguing a general equivalence
> here, not a specific skill set.
There's no such thing as general equivalence without specific
equivalence in at least some cases; the general skillset is simply
some function of the union of all specific skillsets. To name a
specific example, there's no such thing as an animal that's
human-equivalent in sports, because the skillsets are too different.
Few animals could even hold a javelin, while no human can match the
brute strength of the larger animals.
>
> > The people on this list already have a great deal of human-universal
> > architecture, which AIs won't have. See
> > http://www.intelligence.org/upload/CFAI//anthro.html,
> > http://www.intelligence.org/Biases.pdf,
> > http://www.overcomingbias.com/2007/11/evolutionary-ps.html.
>
> Yes, but I don't see why my argument cares whether or not the AGIs have
> human-universal architecture (except that it is a good argument that my
> testing on humans is insufficient for proof of behavior in AGIs).
You can't explain something to someone in English unless they have a
great deal of human-universal architecture. English was *built* for
humans; you can't just give it to, say, a Boeing 767 and pray for the
instructions to work. We invented programming languages precisely
because computers can't parse English.
>
> > Any AI intelligent enough to actually understand all this will be more
> > than intelligent enough to rewrite itself and start a recursive
> > self-improvement loop. See http://www.acceleratingfuture.com/tom/?p=7.
>
> Possibly true, but it is probably not smart enough to get around the blocks
> that humans will have placed in its way (and the fact that humans will have
> placed the goal that it is UnFriendly to attempt to do so until the humans
> declare that it is ready).
See http://www.acceleratingfuture.com/tom/?p=94 on how effective such
"blocks" are, even against other humans.
>
>
-- 
- Tom
http://www.acceleratingfuture.com/tom