Re: >H ART: The Truman Show

From: Anders Sandberg (asa@nada.kth.se)
Date: Mon Jun 22 1998 - 15:20:12 MDT


den Otter <neosapient@geocities.com> writes:

> Since AIs will presumably be made without emotions, or at least with
> a much more limited number of emotions than humans, you don't have
> to worry about their "feelings".

I think this is a fallacy. First, why would an AI have no emotions, or
only a limited repertoire? Given current research in cognitive
neuroscience, it seems that emotions are instead enormously important
for rational thinking, since they provide the valuations and
heuristics necessary for making decisions well. Secondly, even if AIs
have fewer feelings than humans, why would that mean we can treat
them as we like? If it turns out that I have fewer emotions than most
people, do I also have fewer rights? Obviously this kind of reasoning
doesn't work, besides the obvious impossibility of finding out how AIs
or other humans experience their existence and putative emotions. We
need to base our ethics on something more observable and stable than
that.

> Also, one of the first things you
> would ask an AI is to develop uploading & computer-neuron interfaces,
> so that you can make the AI's intelligence part of your own.

This is the old "superintelligence will solve every problem"
fallacy. If I manage to create a human-level or above AI living inside
a virtual world of candy, it will not necessarily be able to solve
real-world problems (it only knows about candy engineering), and even
given access to the physical world and a good education, its basic
cognitive structure (which was good for a candy world) might still
make it very bad at developing uploading.

> This would
> pretty much solve the whole "rights problem" (which is largely
> artificial anyway), since you don't grant rights to specific parts
> of your brain.

Let me see. Overheard on the SubSpace network:

Borg Hive 19117632: "What about the ethics of those 'individuals' you
created on Earth a few megayears ago?"

Borg Hive 54874378: "No problem. I will assimilate them all in a
moment. Then there will be no ethical problem since they will be part
of me."

I think you are again getting into the 'might is right' position you
had on the posthuman ethics thread on the transhumanist list. Am I
completely wrong?

> A failure to integrate with the AIs asap would
> undoubtedly result in AI domination, and human extinction.

Again, a highly doubtful assertion. As I argued in my essay about
posthuman ethics, even without integration (which I really think is a
great idea; it is just that I want to integrate with AI developed
specifically for that purpose, and not just to get rid of unnecessary
ethical subjects), human extinction is not a rational consequence of
superintelligent AI under a very general set of assumptions.

Somehow I think it is our mammalian territoriality and xenophobia
speaking, rather than any careful analysis of consequences, when
people are so fond of setting up AIs as invincible conquerors.

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
