From: Christian L. (n95lundc@hotmail.com)
Date: Mon Mar 26 2001 - 15:32:58 MST
When I first started subscribing to this mailing list, I thought that the
goal of SingInst was to build a transhuman AI. I was wrong. The goal is
obviously to build a Utopia where Evil, as defined by the members of the list,
will be banished. The AI would be a means to that end: a Santa-machine that
uses its intelligence to serve mankind.
With this in mind, it is clear that you have to discuss the E-word and
define it somehow.
Personally, I feel that it will probably be impossible to "hardwire"
anthropomorphic morality and reasoning into a seed AI and expect such a goal
system to remain intact after severe self-enhancement by the transcending AI.
Just as some humans are beginning to discard the hardwired goal of
procreation (our very reason for existing in the first place), the AI would
probably develop other goals (perhaps unknowable to us) that will eclipse
the original goals of servitude.
The resulting SI would be an utterly alien thing, and any speculation about
its actions would be futile. Hence my slight irritation with discussions
about the Sysop's dos and don'ts.
I have only skimmed "Friendly AI", so I do not consider the above to be a
critique of the feasibility of the Sysop-scenario (I will likely make one
later). However, the musings above will probably shed light on most of the
questions that have been raised in response to my original posts.
> > I fail to see the need for discussing concepts like good, evil, morality
> > or ethics at all, or how a Power/SI would relate to them.
>
>This seems like a very limited and inhuman[e] view.
Well, the SI is non-human after all (see above), and it is the actions of
the SI that will matter post-singularity.
>If there is no
>ethics then there are no guiding principles to your actions, no
>abstraction of what is truly in your self-interest longterm or not
Since it is my belief that the post-singularity world will be unknowable, my
definition of long-term is on the order of 20-25 years. My guiding
principle is reaching the Singularity as fast as possible. If you want to
call that ethics, that's fine with me.
>and
>every decision governing whether to take an action is utterly
>seat-of-the-pants at that moment. You also cannot depend on any context
>for the actions of others as they will make their own seat-of-the-pants
>(pls excuse physical metaphors) decisions moment by moment.
I agree. Humans act in their own self-interest. They have done so in the
past, and they will probably continue to do so in the future. However, most
of the time it is in their own self-interest to be nice to members of their
"pack".
>There can
>be no level of trust.
If you organize yourself in a "pack" and follow the rules set up there, you
can get personal protection and greater means of achieving your goals (which
normally coincide with those of the pack). When you interact with another
pack-member, you can be pretty sure that he/she will not break the rules and
risk exclusion from the pack. This can be called trust. The rules that the
pack sets up can be called ethics.
> > Ethics seem to be
> > little more than rules set up by humans in order to maintain a fairly
> > stable society. I don't see how that can have any meaning in the
> > post-Singularity world or even in the last years leading up to the
> > Singularity.
>
>What, the need goes away for stable associations of entities? How so?
There will be ONE relevant entity. This entity will IMO relate to humans as
we relate to bacteria. We do not make stable associations with bacteria.
relate to bacteria. We do not make stable associations with bacteria.
Again, the unknowability assumption makes it impossible to predict anything
IMO.
>The need for stable associations and for governing primary ethical
>principle if anything increases as the entities get more powerful and
>capable of greater harm and as the interactions and activities get
>orders of magnitude more complex.
See above.
/Christian