From: Skye Howard (skyezacharia@yahoo.com)
Date: Wed Dec 15 1999 - 17:33:28 MST
I have been playing with a very cheap sort of computer
program, an old-fashioned chatbot named NIALL, which
learns how to speak by talking to me. My most
interesting thought during the entire time I was
playing with it was that Niall had no way of observing
the outside world, which made it very hard for it to
create good "definitions" of some words for itself.
That seems to be Niall's biggest problem. If I ask it
something like "What is your name?", it may say
something like "no, my name is niall." in return. Of
course, being a non-intelligence, and having more
severe limitations even than that, it often does not
reply with anything coherent at all, just a long
string of phrases that it has strung together out of
some pattern in its "head". This is just outside
observation, of course; from what I saw, it appeared
that all Niall was actually doing was assigning each
word some kind of probability score, but there was
probably something more complex at work, since it was
also able to give back entire phrases at times.
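
I haven't seen Niall's source, so this is only a guess at the kind
of thing it might be doing, but a toy version of such a learner is
easy to write. Here is one in Python; every name and detail below is
my own invention, not Niall's actual code. It just counts which word
follows which, then babbles by sampling from those counts:

    # A guess at the kind of learner Niall might be: count which
    # word follows which, then babble by sampling from the counts.
    import random
    from collections import defaultdict

    class TinyChatbot:
        def __init__(self):
            # transitions[word][following] counts how often
            # "following" came right after "word" in the input
            self.transitions = defaultdict(lambda: defaultdict(int))

        def learn(self, sentence):
            words = ["<start>"] + sentence.lower().split() + ["<end>"]
            for current, following in zip(words, words[1:]):
                self.transitions[current][following] += 1

        def babble(self, max_words=20):
            word, output = "<start>", []
            for _ in range(max_words):
                followers = self.transitions[word]
                if not followers:
                    break
                # pick the next word in proportion to how often
                # it was seen following the current one
                choices, counts = zip(*followers.items())
                word = random.choices(choices, weights=counts)[0]
                if word == "<end>":
                    break
                output.append(word)
            return " ".join(output)

    bot = TinyChatbot()
    bot.learn("my name is niall")
    bot.learn("what is your name")
    print(bot.babble())

Notice that whole remembered phrases fall out naturally: whenever a
word has only ever been followed by one other word, the babbler
reproduces the original sequence, which could explain Niall handing
back entire sentences at times.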
I always thought an AI needs some kind of environment
to interact with in order to learn certain elements of
intelligence. Though the environment *could*
conceivably be something like the inner workings of a
computer, it strikes me that its intelligence would
need to be rather great for it even to be able to
communicate with us at all after maturing in such an
environment. In short, it would be easier to raise an
AI in some sort of simulated world, where objects are
easy to recognise, then make the world harder and
harder as the AI works up levels, and sort of "breach"
it into our level of complexity, possibly with some
cameras, mounted platforms, etc., like Cog.

The only problem I ever saw with Cog was that it can't
recognise objects very well (perhaps the environment
is somewhat complex for it to begin with?) and it
doesn't have anything else to develop intelligence
with. For example, human language developed of
necessity out of our need to gain food and so on;
through evolution, we developed the neurological and
physiological structures that allow us to do so. If
Cog and some other beings had a reason for working in
teams to solve complex problems, this might make Cog
not only an awareness but one with which we can
communicate reasonably well.

*shrugs* Just a thought... I've been thinking about
simulated environments to raise packs of AIs in: give
them mazes and things to acquire "food" in at first,
then make them move up levels. Have it so they can
share programming bits with each other, and then
whenever one "starves" it could "die": the program
section would be erased, and a new AI would be
reinserted into the world with no memory section, or
with some sorts of "instincts" gotten from its
"parents". Basically a whole simulated sort of
world... not sure. *shrugs* But I want them all to be
able to learn, in any case.
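
To make that concrete, here is a toy sketch of the scheme in Python.
Every detail is invented for illustration: each AI's "programming
bits" are just a string of moves, the world is a small grid with
"food" cells, a starved AI is erased, and its slot is filled by a
mutated child of two well-fed parents (the inherited moves standing
in for "instincts"):

    # A toy version of the "packs of AIs in a maze" idea above.
    # All specifics are made up for illustration.
    import random

    MOVES = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}
    SIZE, GENOME_LEN, PACK_SIZE = 8, 30, 20

    def random_genome():
        # an AI's "programming bits": a fixed-length string of moves
        return [random.choice("NSEW") for _ in range(GENOME_LEN)]

    def fitness(genome, food):
        # walk the grid following the genome; count food cells reached
        x = y = 0
        eaten = set()
        for move in genome:
            dx, dy = MOVES[move]
            x = max(0, min(SIZE - 1, x + dx))
            y = max(0, min(SIZE - 1, y + dy))
            if (x, y) in food:
                eaten.add((x, y))
        return len(eaten)

    def breed(mom, dad, mutation_rate=0.05):
        # "sharing programming bits": the child takes each move from
        # one parent or the other, with an occasional random mutation
        child = [random.choice(pair) for pair in zip(mom, dad)]
        return [random.choice("NSEW") if random.random() < mutation_rate
                else m for m in child]

    food = {(random.randrange(SIZE), random.randrange(SIZE))
            for _ in range(12)}
    pack = [random_genome() for _ in range(PACK_SIZE)]

    for generation in range(50):
        pack.sort(key=lambda g: fitness(g, food), reverse=True)
        # the bottom half "starves"; each erased slot gets a child
        # of two of the best-fed survivors
        survivors = pack[: PACK_SIZE // 2]
        pack = survivors + [breed(random.choice(survivors),
                                  random.choice(survivors))
                            for _ in survivors]

    print("best forager reaches",
          max(fitness(g, food) for g in pack),
          "of", len(food), "food cells")

It's only a cartoon of the idea (no levels, no learning within a
lifetime), but even this tends to get better at foraging over the
generations, which is the effect I'm after.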
--- Rob Harris <rob@hbinternet.co.uk> wrote:
> Rob wrote:
> >> If this AI is instead a purely rational master problem solver,
> >> then humanity will surely disagree with much of its philosophical
> >> output too.
>
> Clint wrote:
> > There is no such thing as objective morality, and don't try to
> > tell me you were talking about philosophy when it's obvious you're
> > talking about philosophical morality. What "should" be done is
> > always subjective, because it begins with what one "feels" it
> > should be.
>
> Rob responded:
> I was talking about philosophy. I have absolutely no interest in
> entertaining the possibility of "objective morality". This is an
> American preoccupation, born out of an insanely inflated
> society-wide self-righteousness. I am not American, and there is no
> objective morality - it's a ridiculous proposition.
>
> Rob wrote:
> >> thinking about constructing a solid definition for
> >> "intelligence", then think about how you might program a computer
> >> to possess this quality, and what use it would be.
>
> Clint wrote:
> > This is a very BAD BAD way to go. Instead work on making it
> > self-aware. Consciousness has NOTHING to do with intelligence.
> > Many people consider me several times more intelligent than most
> > of my peers; does that make me more conscious than them?
>
> Rob responded:
> I was proposing thinking about the implications and benefits of
> making a computer "intelligent" to try and wash out some of the AI
> drivel that keeps being bounced around the place. I did not mention
> consciousness - it has nothing to do with my point.