From: Anders Sandberg (asa@nada.kth.se)
Date: Thu Jun 25 1998 - 02:53:32 MDT
den Otter <neosapient@geocities.com> writes:
> Max M wrote:
> >
> > den Otter wrote:
> >
> > > That's just one way to develop AI. There might very well be others. In
> > > any case, it wouldn't be very smart to just make an upgraded human
> > > (with an attitude). Genie AIs are what we want, mindless servants.
> >
> > Then I think that your type of AI falls more into the category of IA
> > (Intelligence Amplification). This is where very strong technical
> > support systems enhance the human mind. But they are not self-aware. A
> > little like computers and the internet today, only much smarter and
> > more developed. Ingenious interfaces. Mindless servants, if you want.
>
> Well, yes, that's basically what I want.
Me too. In many ways IA is the key to transhumanity.
However, I'm also quite fond of the idea of creating new
intelligences. Why? Because I want to promote maximum diversity, and
AI could probably become much more diverse than homo sapiens derived
entities (and if we add in various forms of human-AI symbiosis the
diversity becomes even greater).
> Not a new kind of artificial,
> superfast and highly intelligent humanoid with a "will", but a machine
> that can (more or less autonomously) conduct research at a lightning
> pace. A machine that doesn't ponder ethics, but rather dutifully serves
> and protects humans because it's been programmed that way.
As was pointed out in the other ethics thread, ethics is about what you
should do. The genie machine has a built-in ethic: serve and
protect humans. But if it doesn't ponder that ethic, it won't be
able to apply it well. What if I ask it to make me a weapon to kill
anybody who disagrees with me? What it should do is deduce that
answering "No way!" is the best way to serve and protect humans,
including its owner (me), because if I got the weapon and used it, a
lot of others would likely attack me with equally nasty weapons. That
is a fairly clear-cut case, but what if the ethical problem becomes
trickier? It needs to deduce extensions of its ethics, or it will
be inefficient or dangerous.
(This reminds me of the "zeroth law of robotics" some of Asimov's
robots came up with, although the logical reasoning behind it remains
unclear to me).
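That deduction step could be sketched as a toy consequence filter. This is purely illustrative and not from the original discussion - the functions, the crude consequence model, and the harm scores are all invented for the example:

```python
# Toy sketch (hypothetical): a genie AI that evaluates a request by
# deducing its likely consequences against a "serve and protect humans" goal.

def expected_harm(consequences):
    """Sum the harm scores of the predicted consequences."""
    return sum(harm for _, harm in consequences)

def genie_decide(request, predict):
    """Comply only if the predicted consequences do not, on balance,
    harm humans (including the owner)."""
    consequences = predict(request)
    if expected_harm(consequences) > 0:
        return "No way!"
    return "As you wish."

# A crude, hand-written consequence model for the weapon example above:
def toy_predict(request):
    if "weapon" in request:
        return [("owner attacks others", 0.9),
                ("others retaliate against owner", 0.9)]
    return [("request fulfilled harmlessly", 0.0)]

print(genie_decide("make me a weapon", toy_predict))   # -> No way!
print(genie_decide("fetch my slippers", toy_predict))  # -> As you wish.
```

Of course, the hard part is exactly the predict function: a real genie would need an open-ended world model to foresee consequences, which is where the "deduce extensions of its ethics" problem comes in.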
> Although this machine wouldn't be "conscious" like a human, it would
> nonetheless still classify as "intelligent", since it could
> learn from its own and others' mistakes or successes, and improve
> its performance. Is this what they call "weak" AI?
I'm not sure, but it is close. I seem to recall that "weak" AI refers to
systems that can do a lot of apparently intelligent things without
really being intelligent, while strong AI claims that it is possible to
build truly intelligent systems. Maybe a better term would be Asimov AI -
AI with built-in servitor programming.
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:49:13 MST