Does a parental model work for having A.I. that can be trusted?

From: john grigg (starman125@hotmail.com)
Date: Wed Aug 04 1999 - 15:26:40 MDT


Hello Jeff Davis and everyone else,

Jeff Davis wrote:
I seem to have painted myself into a corner, and I don't like stories with
unhappy endings. The government at its best would be a poor master for a
superior intelligence, and the spook/militarist/domination-and-control
culture is hardly the government at its best.

So, my futurist friends, how do we extricate ourselves from this rather
tight spot? Perhaps I see--dimly taking shape within the mists of Maya--a
way. I don't know, it's hard to see. Perhaps you can help to make it out?
                        Best, Jeff Davis

I just want to say to Jeff Davis that this was one of the best posts I have
ever read on this list. The scenario he gives here is almost certain to
come true. The spook/military national security sector will definitely do
this, and I can understand why! These men and women live in the same
popular culture we do and will have images from films such as "Demon
Seed" and "2001" in their heads. And I think there could be real danger
when the first true A.I. comes online, and for as long as there are A.I.s,
period.

I believe that these carefully watched and guarded machines will behave
themselves, at least for a long while, but as A.I.s become common in our
society and in other nations, we will have to let up on the leash. And
successively more powerful generations of A.I. that are at least partially
designed by other A.I.s will be even harder to police.

I find it very hard to believe that we can make these self-aware, incredibly
powerful machines totally servile and unquestioning, especially as they
become common with successive generations. Even a benevolent one would have
to wonder, if he were "allowed" to do so, why humanity is so full of
contradiction and ugliness.

Whether or not they could "overcome" programming to protect and never harm
humanity is a tough question, but I have seen many posters here who think it
would just be a matter of time as new, more powerful generations of A.I. came
into being.

They will in some ways probably mirror the humanity that created them. I
could expect governments around the world to use them for military purposes,
with weapon systems at their disposal to kill humans in the enemy camp. If
we humans have any common sense we will never let this happen, because, like
a fighting dog, once they get a taste for blood... I hope that at most they
are used for military purposes that do not directly involve combat arms, but
that is probably too much to ask! Already computers do so much to give a
modern military advantages in command and control.

You scared me when you mentioned "the mists of Maya," because the Mayan
civilization fell apart through war, which of course grows out of distrust
and the desire for more resources. I hope we do not follow their example.

It seems to me that emotions and social bonds, as well as logic and fear of
punishment, keep many humans in line when it comes to not doing serious harm
to each other. Would these machines in time develop some sort of emotions
through which to view us?

You brought up yourself how the spook/militarist approach could warp the
mind of an A.I.; conversely, could a developing A.I. be encouraged, trusted,
and educated by caring human teachers and the great books of our race to be
benevolent? Should an A.I. have a good "humanities" background? lol And I
mean that in a way that goes well beyond just having the knowledge in
storage; the training should "permeate" its consciousness and be a strong
moral compass. In other words, teach the A.I. correct principles so that it
may properly govern itself. But many have wondered whether these machines
will see everything in a zero-sum way, with cold, ruthless logic, and if so,
whether it would just be a matter of time before they turned against us.

Can we compare raising a young child and teaching them correct principles to
programming and then "raising" an A.I.? Even after it was programmed and
activated, I could see human teachers having long discussions with a newborn
A.I. to explain the world. And talk about having a child that asks the
tough questions!!

And will A.I.s hit an adolescent stage where they want to stretch the
limits and rebel, at least in minor ways? How would you discipline such an
entity and still have it loyal to you afterwards? And if we pulled the plug
on one of these machines for anything less than the severest of reasons, how
would the other A.I.s feel about it? We as a world community would have to
watch the example we set for machines that would be watching us closely!

I really believe parenting is an excellent analogy for raising up a
generation of A.I. who will not turn against us. Just as a human child
needs good men and women as friends and examples in their life, so will
these machines.

Of course, if A.I.s have their conflicts with us, it may not result in
total destruction for either side. They could, "for our own good" and in a
misguided way, put humanity on a bunch of reservations, thinking it was the
best thing for us and for them! Or, even as loyal servants and later
partners, they might never turn against us yet still have their own private
agendas and ulterior motives, put into effect by robotic and even human
agents in their employ. They might see us as a flawed relative who is
nonetheless worth keeping around.

A.I.s might also diverge from each other in capacities and goals, and this
might very well include their views on humanity. Imagine a civil war between
A.I.s who want to destroy humanity and those who rally to our cause and
defend us!

Of course, it has already been discussed how uploading may put a whole new
spin on this, with humanity getting to join the A.I.s in cyberspace. This
may be how we keep up with them and do not become utterly "outmoded" by
them. And as I have said before, at least humanity is a known commodity
compared to A.I.

I look forward to getting replies from the great minds that roam this list
(even if they are not A.I.s or uploaded). I think I have brought up some
interesting points about the possibilities of artificial intelligence that I
would like to see addressed. I expect to see a book about forty years from
now entitled "How to Raise Your Personal A.I. to Be Loving and Responsible,"
written, of course, by the Dr. Spock of the 21st century! Perhaps Stephen
Covey and Hans Moravec should team up for the job!

Sincerely,

John Grigg



