From: Anders Sandberg (asa@nada.kth.se)
Date: Tue Apr 28 1998 - 08:17:56 MDT
tlbelding@htcomp.net (Tony Belding) writes:
> Anders Sandberg <asa@nada.kth.se> wrote:
>
> AS> I realize that their complexity is less than the chemical networks of
> AS> many bacteria, so I don't feel too bad about them. But this may become
> AS> a real problem in the future - the other graduate students here (me
> AS> included) are quite interested in creating a sentient system if
> AS> possible, and once we start to get close to that, then we are going to
> AS> need to think much more about ethics.
>
> So, what do you think about rule-based AI projects, like Cyc?
Well, I am biased since I do neural nets, mingle with connectionists
and read neuroscience, but I think rule-based AI is too brittle to
work in the real world. It is great within a clean, well-defined
domain, but runs into trouble when the complexity grows too large. Of
course, I think this is also a property of our own high-level
thinking: it only works when
our low-level systems have cleaned away all the disruptive complexity
from our sensory information. Most of the real work is filtering,
categorizing and abstracting, not thinking.
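To make the brittleness point concrete, here is a minimal Python
sketch. It is my own hypothetical toy, not Cyc or any real system: a
hand-written rule fires reliably on clean symbolic input, fails on
slightly noisy input, and only recovers once a lower-level step has
filtered and categorized the observations into the vocabulary the
rule expects.

    # Toy illustration (hypothetical, not any real rule-based system):
    # an exact-match rule is brittle outside its clean domain unless
    # low-level filtering/categorizing cleans the input first.

    RULES = {frozenset({"bird", "small", "sings"}): "probably a songbird"}

    def rule_based(features):
        # Exact symbolic match against the rule's antecedent.
        return RULES.get(frozenset(features), "no rule fires")

    def filter_and_categorize(raw_tokens):
        # Crude stand-in for the low-level perceptual work: map messy
        # observations onto the categories the rules expect, and drop
        # whatever cannot be categorized.
        vocabulary = {"birdie": "bird", "tiny": "small", "chirps": "sings",
                      "bird": "bird", "small": "small", "sings": "sings"}
        return {vocabulary[t] for t in raw_tokens if t in vocabulary}

    clean = {"bird", "small", "sings"}
    noisy = {"birdie", "tiny", "chirps", "fluttering"}

    print(rule_based(clean))                         # probably a songbird
    print(rule_based(noisy))                         # no rule fires
    print(rule_based(filter_and_categorize(noisy)))  # probably a songbird

The point of the sketch is only that almost all the interesting work
happens in filter_and_categorize, not in the rule itself.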
> I've thought about what kind of philosophical /purpose/ or role such artificial
> sentient beings could fill. They would, in practical terms, be a branch of
> humanity. They would be products of our culture, our civilization --
> essentially humans in different form. No identity of their own.
Yes and no. They would not be true aliens, but they wouldn't be yet
another form of humans (unless designed that way). Even a small
difference in basic motivations would make them quite alien, and if
they also have other differences (such as new forms of perception or
no bodies) I think they would diverge quite quickly - while likely
enriching humanity with a different point of view.
> But, it now seems unlikely that our galaxy is teeming with aliens
> just waiting for us to come and meet them. We need to create our own aliens.
Or become them.
> I would like to find an Earth-like planet somewhere, or maybe terraform a
> planet, and create a race of beings to inhabit it. Drop the first generation
> on the planet's surface with nothing: no language, no tools, no experience.
> Then sit back and watch as they build their own civilization from nothing: a
> slow and painful process, no doubt. But, everything they had would be theirs,
> not something borrowed from us. They could develop a complete identity of
> their own.
An interesting and somewhat cruel experiment. [ARISTOI SPOILER ALERT!]
This is what the villains in Walter Jon Williams's _Aristoi_ do; one
of the more interesting questions in the book is the ethics of doing
this - is it needless cruelty or giving life?
> What happens when a highly sophisticated, non-sentient AI can pass the Turing
> Test? How do you convince most people that a machine is non-sentient when it
> can so convincingly /pretend/ to be?
How do you convince other people that John K Clark is non-sentient
when he is so convincing?
IMHO, if it quacks like a duck and walks like a duck, it is a duck or
at least a good approximation. :-)
-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y