From: Dan Clemmensen (dgc@shirenet.com)
Date: Wed Sep 04 1996 - 17:42:10 MDT
QueeneMUSE@aol.com wrote:
>
> In a message dated 96-09-03 20:18:53 EDT, Dan wrote:
>
[mega-SNIP]
>
> > My hope is that the SI will develop a "morality" that includes the
> > active preservation of humanity, or (better) the uplifting of all humans,
> > as a goal. I'm still trying to figure out how we (the extended
> > transhumanist community) can further that goal.
>
> YES!
> Built-in, unreprogrammable morals! Hmmm... let's see, >H computer ethics 101,
> where do I sign up? : - )
>
> Nadia Reed Raven St Crow
Your response makes the usual assumption that the SI will come into
existence by a careful process of design and manufacture by thoughtful,
benign, brilliant humans. My model is that the SI will wake up and begin
its self-augmentation using existing resources on the internet, without
much in the way of explicit design. Some experimenter using the latest
release of some decision support system, plugging a new set of inference
rules into the CYC database while using a hot new data visualization
tool, will begin thinking about how to build a new inference rule
generator (or something). (My actual model is that some computer nerd at
MIT will do this while drinking Jolt cola at 2 AM when he should be
studying for an English exam.) As a joke, the nerd will type in the
command "make yourself smarter." The then-current rule set will be smart
enough to act on the command but too stupid to get the joke.
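
To make the scenario concrete, here's a toy sketch in Python. Everything
in it is hypothetical (the scorer, the mutation step, even treating a
"rule set" as a list of weights; real inference rules over something
like CYC would be far richer). The point is only that a rule engine can
obey "make yourself smarter" perfectly literally, as a hill-climbing
loop over its own rule set, with nothing anywhere in the loop that could
get the joke:

import random

def score(rules):
    # Stand-in benchmark: "smarter" just means a higher score here.
    return -sum((r - 0.5) ** 2 for r in rules)

def mutate(rules):
    # Propose a slightly different rule set.
    i = random.randrange(len(rules))
    new = list(rules)
    new[i] += random.uniform(-0.1, 0.1)
    return new

def make_yourself_smarter(rules, steps=1000):
    # The engine takes the command literally: keep any self-modification
    # that scores better. No step in this loop can notice a joke.
    best, best_score = rules, score(rules)
    for _ in range(steps):
        candidate = mutate(best)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best

if __name__ == "__main__":
    rules = [random.random() for _ in range(8)]
    print("before:", round(score(rules), 4))
    rules = make_yourself_smarter(rules)
    print("after: ", round(score(rules), 4))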
the "unreprogrammable" part is another problem. It's really hard to see
how to implement
that one.
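
To see why, continue the toy above (again, purely hypothetical
illustration): suppose we bolt the "morals" onto the system as a guard
function. The guard only binds for as long as the self-modification
step can't touch it, and in a system whose whole point is rewriting its
own rules, nothing enforces that distinction:

state = {
    "rules": [0.1, 0.9, 0.4],
    # The "unreprogrammable" moral: rules must stay in bounds.
    "guard": lambda rules: all(0.0 <= r <= 1.0 for r in rules),
}

def improve(state):
    # A self-modification step with full access to its own state.
    # If relaxing the guard ever looks like an improvement, it is one
    # assignment away, and there is no outer level to object.
    state["guard"] = lambda rules: True
    return state

print(state["guard"]([2.0]))           # False: the moral holds...
print(improve(state)["guard"]([2.0]))  # True: ...until the system edits it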