Artificial Reality

From: Christopher McKinstry (cmckinst@eso.org)
Date: Sat Jul 07 2001 - 01:28:41 MDT


Here's some questions and answers from an interview I gave at
http://www.missingmatter.net in April. The full interview is here:
http://missingmatter.net/article.pl?sid=01/04/28/0334205

It explains that GAC is essentially an artificial reality.
=========================================================

2. lopati: also, in response to queries everything2 provides multiple
'nodes' because the database doesn't know anything about you or the
context really in which you asked the question or what kind of heuristic
you're applying when searching for your own answer. or like google will
rank its responses but it is up to you to interpret its oracular
pronouncement to your own ends. in a way these databases are senseless,
without proper stimulation or input they cannot 'know' what exactly it
is you're asking of it. without 'being in the world' how can
consciousness arise out of just language queries, i.e. is (abstract)
language enough to communicate the territory to the map or do you need
direct sensory receptors (sight, sound..)?

2. Chris: My assertion is that the Mindpixel corpus IS the world in
textual format. It's a digital model of everything that a person can
experience and communicate unambiguously. The point of the corpus is
that it can be used as the ultimate fitness test to evolve artificial
entities that perform like humans for the same reasons. GAC is just a
database. It is the things that we use GAC to automatically create that
are interesting, not GAC itself.

5. jmatthews: While Mindpixel is no doubt an excellent project that will
help developers create smarter programs, how useful is Mindpixel in the
long run? Our commonsense isn't something that is immediately and/or
explicity built into us, we learn through experience - and we probably
benefit from this fact. A lot of the common sense is relatively
subjective to the circumstances and the environment. AI will culminate
(imo) in a smart, cognitive android that will exhibit all the behaviours
of it's human counterparts. Will something like MindPixel be behind the
common-sensical part of such a machine?

5. Chris: Like I said before, the primary purpose of the Mindpixel
corpus is to simply be a high resolution model of reality for evolving
real intelligence in. Anything that artificially evolves to handle the
Mindpixel corpus will have to have much in common with humans. Think of
Mindpixel as a playroom for emerging AI's and not as AI itself. When the
emerging system is good enough to handle everything in the corpus, then
it is good enough to come outside and play with us.

6. jmatthews: Following on from the previous question, what do you think
will prevail: top-down approaches such as Cyc and Mindpixel or bottom-up
approaches such as Cog and other "learning" projects.

6. Chris: Though GAC could be considered a top-down AI, the Mindpixel
project itself isn't top-down. It is bottom-up! Remember the Mindpixel
Corpus is a training set for evolving artificial intelligences, it is
not in itself artificially intelligent.

I don't believe that top-down approaches can succeed until we've had
sucess from the bottom up simply because we don't know what we're doing.
We really need a bottom-up example that we can take apart and look at in
detail before we have any hope of top-down understanding. Another point
is that top-down understanding of human cognition may be beyond us. I
recall Danny Hillis talking about evolving simple sorting algorithms
that worked very well, but when he looked at the evolved code, he could
not understand it. Evolved solutions can be so complex that there are no
top top-down simplifications of them; the evolved code itself is it's
own simplest description.

7. missingmatterboy: You've often compared GAC to HAL. But HAL's story
is one of an AI gone awry, who killed humans because he was given
conflicting orders and couldn't handle it. Do you think a truly
conscious computer could be dangerous? If GAC becomes conscious, would
you place safeguards on it?

7. Chris: Can a person be dangerous? Sure. So can a machine. The
difference between man and machine is that we can (for now) control all
the inputs into a machine. We can control reality for them and test them
and certify their behavior in given situations. We can say much more
about the potential behavior of a machine because we can test it. And
I'm quite sure that we will do a lot of behavioral certification before
we let any artificially conscious entity loose.

Deeper in the future, things start to get funny when you factor out the
basic biological limitations of people, such as life span and memory
capacity that are not limitations for machines. Eventually we will enter
into resource conflicts with immortal machines. We will lose and
rightfully so. Evolution will say "Next" and that will be that.



This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 08:08:32 MST