RE: Nightline comments on AI

From: Colin Hales (colin@versalog.com.au)
Date: Wed Aug 21 2002 - 02:05:33 MDT


Hi there....

> > far as I can tell, in no particular order:
> > 3) Eliezer, http://www.optimal.org/

 Yes yes - http://singinst.org/

Damn. Bugger. Typo. Typical me, really. Sorry.

> > If you want to judge where the AGI component of the singularity is
> > coming from, here's the layman's 'AGI litmus test', IMO: If the AGI
> > wannabe lifts hands over the keyboard to write one line of code that
> > will be part of the AGI 'final runtime program', then they have failed.
> > Think of it this way: A cake recipe, to an appropriately trained human,
> > can represent a cake really well to that human. _But it's not the
> > cake_. The recipe becomes a cake when the human is there. The progress
> > is slow because most seem bent on recipes instead of the cake, and they
> > don't know it. We have to make cakes, not recipes.
>
> Did you leave out a negative in one of those sentences?
> I can't parse the reasoning as it stands.

Thank goodness! That's because there wasn't any reasoning, just a rule of
thumb dumped for general use.

------------------------------------------------------
The cook's tour of the reasoning behind it:

Route 1 - 'I want to succeed, therefore...'
Decide how much like a human you want the AGI's 'what it is like' 1st person
internal experience to be. My stance: I want to succeed. The human mind is
the only proven, real general intelligence benchmark we have. Therefore: If
I want true AGI then removing aspects of its function that resemble a human
mind will take it further and further from my goal.

Therefore: Von Neumann symbol processing architecture is gone. Coding is
gone. It's parallel. It's cellular.
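
To put a toy shape on that (purely illustrative Python - the rule and all
the names are invented for the example, and yes, it's being simulated on
serial hardware): the only thing a human writes is the local cell rule;
everything else is cells talking to their neighbours.

import random

SIZE = 32

def cell_rule(me, neighbours):
    # The ONLY hand-written behaviour: a made-up local rule that
    # drifts toward the neighbourhood average, plus a little noise.
    return 0.9 * me + 0.1 * (sum(neighbours) / len(neighbours)) \
           + random.uniform(-0.01, 0.01)

def step(grid):
    # Every cell updates from the *old* grid, so conceptually all
    # cells fire at once; the serial loop merely simulates that.
    new = [[0.0] * SIZE for _ in range(SIZE)]
    for y in range(SIZE):
        for x in range(SIZE):
            neigh = [grid[(y + dy) % SIZE][(x + dx) % SIZE]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0)]
            new[y][x] = cell_rule(grid[y][x], neigh)
    return new

grid = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(100):
    grid = step(grid)

Nothing above cell_rule is the intelligence - it's plumbing. Whatever the
thing ends up doing is not written down anywhere.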

Route 2 - 'it's a cake, not the recipe'
Take any symbolic representation of our own mind created by our mind and
then use it as the basis for 'coding' (as in von Neumann executed code) an
AGI. This could be musical sonnets, English prose, C++, Lisp, Smalltalk or
Graffiti. Whatever. No matter what it is, it has a) learned its way of seeing
the universe 2nd hand and is b) manipulating symbols not related to the
universe. It has no understanding. It will make utterances and exhibit
behaviour that will be recognised by a human when a human interacts with it,
but in reality it will have an inner life and 1st person experience of the
universe that we will have great trouble understanding (if it has one at
all!), just as we do with animals. It will be unable to inhabit our human
world autonomously in any useful way without a human presence because every
sense organ is attached to a foreign causality model, whether it's language
or data-structures created to the same effect. It runs the causality of
human thought processes, not the causality of the universe that created that
human thought. It's one level removed. A simulation for an audience.

Take a language-based machine like CYC or Alicebot. Its model of the
universe is a model of causality of our language. It has no understanding
whatever of the universe (where causality = laws of physics) except when a
human uses it and adds the final layer. It appears to, but that's all. It's
artificial and by some human measure it appears to be intelligent - but it's
the logical equivalent of a hand tool. It acquires its purpose when a
human uses it.
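
Reduced to a toy (and I stress: this is nowhere near the real CYC or
Alicebot internals, just the shape of the trick, with a couple of invented
canned rules):

# Pattern -> canned response. The 'knowledge' is a model of our
# language habits, not of the universe the language is about.
RULES = [
    ("WHY IS THE SKY BLUE", "Because sunlight scatters in the atmosphere."),
    ("HELLO", "Hello there! How are you?"),
]

def respond(utterance):
    text = utterance.upper()
    for pattern, reply in RULES:
        if pattern in text:
            return reply
    return "That is interesting. Tell me more."  # the classic dodge

print(respond("Why is the sky blue?"))

It 'answers' a physics question without ever having met sky, light or
scattering. The human reading the output supplies every scrap of the
understanding.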

This is the trap that the human mind falls into when it introspects. I
don't know why I don't see more discussion of this around 'the AI traps'. It
seems only logical to me. Douglas Hofstadter gets it! That much I know.

Route 3 - Anthropocentricity and 'being something'
I don't know how many times I've tried to get people to absorb this, but
I'll keep going until someone gets it.

"The first person experience of 'what it is like' to be human is like it is
because we *are literally made* of the computational building blocks".

I.e. it is not 'happening to us'. We *are* it. In AI we keep acting like
there's a little cinema in our head playing something (= replaying little
cake recipes to experience cakes). Not so. The sensory symbols conveying the
universe to us are patterns embedded in the behaviour of the legions of
actual stuff that we are constructed of! To get a human experience you have
to make 'stuff behaviour' represent your symbols, not manipulate a replica of
the symbols. (I'd love to hear someone explain this away. _I_ can't. Damn
nuisance.)
-------------------------------------

These are positions reached after decades of thinking and programming
dumb-as-dogshit subsumption engines. I have had a lot of time to get used to
them. In the end I may be wrong. Fine! It's a lot easier if you relax the
above stance - however, IMO you'll fail to get an AGI and your experience of
it will be, at best, 'it works sort of OK but there's something missing',
in between cleaning up the mess it makes because it stuffs things up all
the time.
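
(For anyone who hasn't built one: a subsumption engine, after Brooks, is
just layered reflexes, with the highest competent layer overriding the
rest. A from-memory cartoon, sensor and command names invented:)

def avoid(sensors):
    # Upper layer: reflex - turn away from obstacles.
    return "turn_left" if sensors["obstacle"] else None

def wander(sensors):
    # Bottom layer: default - just keep going.
    return "forward"

LAYERS = [avoid, wander]  # highest priority first

def act(sensors):
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command  # this layer subsumes everything below

print(act({"obstacle": True}))   # turn_left
print(act({"obstacle": False}))  # forward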

Maybe my goals are all wrong. Maybe a dumb hand-tool is what I should be
aiming for. I don't know. What I do know is that if I want it to be like us
or better, I have to get it right.

Summary: It's cellular. It's parallel. It's directly connected to the
universe. It's not programmed: it's 'grown'/'trained'. (This is the only
valid 'programming', other than the creation of the underlying cells and
their behaviour).
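
What that 'only valid programming' amounts to, as a cartoon (the crude
Hebbian rule is my stand-in for illustration, not a design): you write the
cell and its growth rule, and the weights are never typed in by anyone -
they accumulate from exposure.

import random

N = 8
weights = [[0.0] * N for _ in range(N)]  # starts blank: nothing 'coded'

def expose(pattern, rate=0.1):
    # Growth rule: cells that are active together wire together.
    for i in range(N):
        for j in range(N):
            if i != j:
                weights[i][j] += rate * pattern[i] * pattern[j]

# 'Training' = standing the substrate in front of the universe.
for _ in range(50):
    expose([random.choice([0, 1]) for _ in range(N)])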

The 'mindsmiths' have all this in various incarnations, and are more
likely to end up with something like what I am aiming for than the rest of
the AI community (connectionists, mathematicians like Taylor in the UK, etc.).
Stan Franklin's IDA is a von Neumann brain-structure simulation - a useful
tool for analysing potential brain structure and testing behavioural
hypotheses, but as an actual brain it's still a hand tool.

Does that make sense?

cheers,

Colin


