From: Charles Hixson (charleshixsn@earthlink.net)
Date: Tue Jul 01 2008 - 14:08:06 MDT
On Monday 30 June 2008 09:53:30 am John K Clark wrote:
> On Sun, 29 Jun 2008 "Charles Hixson"
>
> <charleshixsn@earthlink.net> said:
> > I think that a large part of the
> > oversimplification with how we
> > consider goals has to do with the
> > serial nature of language.
>
> Don’t include me in that oversimplification. It’s the slave AI people (I
> refuse to use that idiotic euphemism “friendly”) who are doing the vast
> oversimplification; they think you can just print out a list of goals
> and tell the AI not to change them and that’s the end of the story, case
> closed.
>
> If that worked then we could use it today to solve the computer security
> problem. Programmers could just put in a line of code that said “don’t
> do bad stuff” and wham bam thank you mam no more security problems.
>
> John K Clark
> --
> John K Clark
> johnkclark@fastmail.fm
Well, I'm one of the people who believe that a friendly AI is possible.
Complex states allow for complex transactions: if you don't want to do
something, and you also want to not want to do it, then you won't want to
change yourself so that you do want it.
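For what it's worth, here is a toy Python sketch of that point (purely
illustrative; the class and the desire names are mine, not anyone's actual
design): a second-order preference about what the agent wants vetoes any
self-modification that would install the unwanted desire.

# Illustrative sketch only: a hypothetical agent whose second-order
# preference ("I want to never want X") blocks self-modifications that
# would make it want X. Names and structure are made up for this example.
class ToyAgent:
    def __init__(self):
        # First-order desires: action -> how much the agent wants it.
        self.desires = {"help_humans": 1.0, "harm_humans": -1.0}
        # Second-order desires: desires the agent wants never to acquire.
        self.meta_desires = {"harm_humans": "never_want"}

    def consider_self_modification(self, action, new_strength):
        """Refuse a change to its own desires if a meta-desire forbids it."""
        if self.meta_desires.get(action) == "never_want" and new_strength > 0:
            return False  # it doesn't want to want this, so it won't change
        self.desires[action] = new_strength
        return True

agent = ToyAgent()
print(agent.consider_self_modification("harm_humans", 0.5))  # False: vetoed
print(agent.consider_self_modification("help_humans", 2.0))  # True: allowed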
To say "slave AI" is to select one particular subset of the possible "friendly
AIs" and claim "That's the real set!". Those can exist (and I, personally,
feel that they are not only immoral, but dangerous to create), but that's
hardly the full range. (I'd actually prefer to exclude most "slave AI"s from
the category of "friendly", as I don't think that most of them would be.
Frustration, at least among mammals, tends to minimize friendly interactions.
And, yes, to me it *does* sound like you oversimplify goals...or at a minimum
over-anthropomorphise them. You appear to be presuming that certain goals
will inherently be present without being implemented. Such goals can,
indeed, exist as secondary goals (i.e., goals necessary to achieve more
primary goals), but they won't be "ends in themselves", which is what I tend
to mean by goal. (I don't often use the phrase "top level goal", as I feel
it's misleading. I expect that any intelligent entity at all times has
several goals concurrently attempting to achieve satisfaction. [Intelligent
here would include lizards, frogs, and fish. Probably also insects, but I'm
less certain about them. Would a wasp attempting to bury a caterpillar for
its eggs dodge an attempt to swat it? I think so...in which case multiple
primary goals are simultaneously active. I'm not willing to believe that a
wasp reasons out that if it's swatted it won't be able to lay its eggs. It's
simpler to have multiple active primary goals.])
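To make the "several concurrent goals, no single top level" picture concrete,
here is a rough Python sketch (my own toy construction, not a claim about real
insect cognition or any particular AI architecture): several primary goals stay
active at once, and whichever is currently most urgent drives behaviour, with
no master goal doing the arbitration.

# Rough sketch: multiple primary goals active at once, arbitrated only by
# current urgency. Goal names and numbers are hypothetical.
class Goal:
    def __init__(self, name, urgency):
        self.name = name
        self.urgency = urgency  # how strongly it currently demands action

def select_action(goals):
    """Act on whichever goal is most urgent right now."""
    return max(goals, key=lambda g: g.urgency).name

# Hypothetical wasp: burying the caterpillar and avoiding the swat are both
# primary goals; a looming threat raises the second goal's urgency without
# any reasoning from eggs back to survival.
goals = [Goal("bury_caterpillar", 0.6), Goal("avoid_swat", 0.1)]
print(select_action(goals))   # bury_caterpillar
goals[1].urgency = 0.9        # a hand approaches
print(select_action(goals))   # avoid_swat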