From: Franklin Wayne Poley (culturex@vcn.bc.ca)
Date: Mon Sep 25 2000 - 15:03:51 MDT
On Mon, 25 Sep 2000, Samantha Atkins wrote:
> "J. R. Molloy" wrote:
> >
> > > No, because this is not the Singularity Seed but simply successful
> > > genetic programming projects. You seem to be answering a very different
> > > question than I was asking.
> > >
> > > - samantha
> >
> > Right. You had asked, "In what way is this good for or even compatible with the
> > nature of human beings?"
> >
> > But I maintain that what is good for or compatible with human nature is not
> > necessarily the most extropic path to higher levels of self-organized complexity
> > or more sentient life forms.
> > What is good for human beings may, after all, be bad for transhuman beings.
> > If AI is not friendly toward humans, that doesn't mean it will be unfriendly
> > toward transhumans.
> I assume that a
> transhuman is an evolved/augmented form of a human in many respects, and
> thus shares many goals with humans, especially in the early near-human
> stages. So my question is as applicable to them as to "mere" humans. I
> am worried when I hear about a proposed AI that is so much smarter than
> all humans (and transhumans for that matter) that it can and should make
> decisions for all of us that we should (will be forced to?) obey without
> questioning, especially since our puny brains/processing units cannot
> match Its Intelligence.
I have given hundreds of IQ tests over the course of my career and
participated in the development of one of them (Cattell's CAB). If I were
to measure transhuman-machine intelligence and human intelligence, and
then compare the profiles, how would they differ?
FWP
-------------------------------------------------------------------------------
Machine Psychology:
<http://users.uniserve.com/~culturex/Machine-Psychology.htm>
-------------------------------------------------------------------------------