Re: Thinking about the future...

From: QueeneMUSE@aol.com
Date: Wed Sep 04 1996 - 13:25:50 MDT


In a message dated 96-09-04 12:11:19 EDT, Nicolas writes:

<<
           It is also worth considering what would make the grandpa >AI
           bad in the first place:
           
           1) Accident, misprogramming.
           
           2) Constructed by a bad group of humans, for military or
           commercial purposes. This group is presumably very powerful
           if they are the first to build an >AI. The success of this
           enterprise will make them even more powerful. Thus the
           values present in the group (community, company, state,
           culture) that makes the first >AI will likely be the
           value set which is programmed into subsequent >AIs as well.
           
           3) Moral convergence. Sufficiently intelligent beings might
           tend to converge in what they value, possibly because values
           are in some relevant sense "objective". They just see the
           truth that to annihilate humans is for the best. (In this
           case we should perhaps let it happen?)
>>
Yes, I agree, an AI made by the army or evildoers could cause much evil. But
again we have these possible scenarios which, except for the first half of the
first one (accidental), would be programmed into the machine by a human. Or in
which the AI would develop proto-human attributes on its own, i.e., a predatory
nature: morally corrupt, competitive. I think it is logical that if we make
the machine purposely bad, it will be bad - but if we allow it to develop on
its own, it might not choose that option as part of its survival.
    I am going to assert here that the AI not being part of the food chain
makes it less likely that its intelligence will see things the same way as the
primate (it lacks the lizard brain), and that most of these scenarios
(especially the last) make the contention of a 'bad grandpa AI' while still
viewing the world from a primate viewpoint (anthropomorphic) - i.e.,
survivalist in nature, overcoming weaker species in order to survive. The
development of these traits in animal intelligences (including alpha
primates) seems evolutionarily based on eating: farming, hunting,
conquering, or the like. And on mortality - the fear of it, and the mean,
"bad" emotions that these instincts evoke.

Since AIs wouldn't be subject to these conditions, why would they develop
these logics?

N/QM/RSC
                   "Death is not an option"


