Re: Robots, but philosophers (or, Hal-2001)

From: Franklin Wayne Poley (culturex@vcn.bc.ca)
Date: Wed Sep 27 2000 - 15:53:30 MDT


On Wed, 27 Sep 2000, Samantha Atkins wrote:

> Franklin Wayne Poley wrote:
>
> > How about 2001, Hal? Could it be that by 2001, someone somewhere will
> > already have AI machinery to surpass human equivalency?
>
> Not on this planet. Maybe you want to call on Ashtar High Command or
> some such. :-) They've been feeding us all of this tech anyway, don't
> ya know?

How about the "Andromeda Strain", some little AI seed that will grow by
genetic algorithms until it knows all?

> > In summary, here is the argument that AI now is at a stage comparable
> > to the man-on-the-moon program from a 1960 perspective. In other words, we
> > mostly need quantitative extensions of what we know now and the
> > qualitative aspect of this project is not overwhelming. That is, we can
> > now see the areas which require innovation or invention and we can
> > reasonably assume that the breakthroughs will be made. The sheer magnitude
> > of the project should not be a deterrent. If we know how to reach the
> > objective and it is worthwhile to do so, so what if it costs hundreds of
> > billions?
>
> Qualitative is not overwhelming?

I listed 4 particular areas. The fact that significant progress has been
made in all 4 seems hopeful to me. For example, a computer programmer from
Australia sent me 390 KB of a program for grade-one reading (if anyone
wants it, just let me know off-list). I'm not even sure there is a
qualitative problem to be overcome here. Can you give an example of a
passage of grade one reading for which we can't program questions with
correct answers? If not, the programmers go on to grade two and so on.
What are the inventions required in the other 3 areas I mentioned?

> This has to be a joke. We have no
> idea what qualia even are among other "qualitative" problems of reaching
> human level intelligence.

I think the specialists working in these areas will be able to give a very
clear statement on what they need to progress. For example, if it is a
problem which has to do with edge detection for object
recognition/itemization they will be able to say so. That is what the
proposed EDTV-Robotics-State-Of-The-Art program needs to know. I'm not
interested in writing the script for another "golly gosh" ed tv program to
"wow" the public and provide a little education at the same time. I need
these precise statements of what is needed to progress, e.g. huge amounts of
additional labor using known technology or inventions of something new.

> We have relatively poor grasp of even higher
> level issues like concept formation and usage.

And I can list these esoteric and mentalist notions until the cows come
home. How about consciousness, common sense, comprehension,
contemplation....? Watson's 1913 ms. in Psych Review exorcised mentalism
from scientific psychology. My own personal philosophy is dualistic so I
don't take the "strong behaviourist" position but I use it for practical
purposes. For a generation before Binet's very practical approach to
intelligence testing in 1905, psychologists spent enormous amounts of time
with this mentalistic navel gazing. They got nowhere.
   I don't mean to be harsh because I think a number of disciplines are
required to solve the problems presented in AI, but I wonder how many AI
workers have ever studied the history of trying to measure/observe/define
real human intelligence, let alone ever given an IQ test? What I see is a
lot of people treading the same ground with artificial humanoid
intelligence that philosophers-psychologists-educators-physicians had trod
with real human intelligence before practical, operational psychometrics
came along. And it is to be expected that they would. After all, aren't they
trying to simulate real human intelligence?

> Lots of theory, no
> satisfying fully general and full powered learning programs. No model
> we are even happy about for describing what humans do with
> percept-concept-more abstract concept chains.

Just give one example of such a chain which cannot be
verbalized. Skinner's dictum was "If it can be verbalized it can be
programmed". Now I think he meant programming in a more general sense but
it applies pretty well to computer programming. If we can verbalize the
rules for human conversation I think it is likely someone can write the
program for it. If we can verbalize a set of rules for
reading-questions-answers we can likely write a program for those rules.
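The reading-questions-answers case can be sketched directly. The toy passage, stopword list, and overlap rule below are entirely my own invention, just to illustrate the point: once a rule is verbalized ("answer with the passage sentence that shares the most content words with the question"), encoding it is routine programming.

```python
import re

# Toy grade-one passage, invented for illustration.
PASSAGE = "Sam has a red hat. The hat is on the mat. The cat sat by Sam."

# Function words we ignore when comparing a question to the passage.
STOPWORDS = {"the", "a", "is", "on", "by", "has", "does", "where", "what", "who"}

def content_words(text):
    # All lowercase alphabetic words, minus the stopwords.
    return set(re.findall(r"[a-z]+", text.lower())) - STOPWORDS

def answer(question, passage=PASSAGE):
    sentences = re.split(r"(?<=\.)\s+", passage.strip())
    # Verbalized rule: pick the sentence with the largest content-word
    # overlap with the question.
    q = content_words(question)
    return max(sentences, key=lambda s: len(content_words(s) & q))

print(answer("What is on the mat?"))  # -> The hat is on the mat.
print(answer("Who sat by Sam?"))      # -> The cat sat by Sam.
```

A real grade-one reading program would need far more rules than this, but each added rule is the same kind of step: verbalize it, then encode it.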
   With words like "abstract" and "concept" we're back to the old
mentalism problem again. People's eyes glaze over. They throw up their
hands and say, "We don't know what it is. How can we ever write a program
for it?" Instead of waiting for the High Priests of Esoterics to tell
them, psychologists a long time ago decided to turn this over to
pragmatics. What they said was, "If you can describe a situation which
purports to express intelligence, tell us. Then we will refine that
situation and turn it into a standardized test. If you don't have a
describable, observable situation, then go away until you do."
   As a result, those involved in mental measurements are now
standing on pretty solid ground. It is about as unlikely that an important
testable situation will be added to the pool as that a new element will
be added to the chemists' periodic table. I don't deny that
there may well be a reality to something like "consciousness" and
thousands of other esoteric/mentalist notions. But almost all of
scientific psychology will agree with me that we don't want to be held
ransom by the witch doctors of esoterics.

> Without this you will not
> get there. Or is there something already done I am unaware of? Any
> pointers appreciated.

The pointers are as above. Also I explained that those 19 primary mental
abilities are based on the work of over a century of hard-working and
smart people (counting the time spent in introspection labs which went
nowhere). What I've said is that if you want to simulate real human
intelligence with artificial humanoid intelligence this has to be a good
model to work with. Doesn't that make sense? If not, why not?

> Many are pretty darn sure you cannot reach human-level cognition
> without at least much closer to human level computational throughput.
> Please show why these people, many of them experts who very much are at
> the vanguard of the quest, are wrong.

One error is trying to simulate HOW human minds work. (For basic or pure
research that's another matter...go ahead). I posted previously that it is
the RESULT of mind I am concerned with and not the HOW of it. We may take
another thousand years to find out all about how the human mind works and
right now we don't know very much at all.
   As long as Hal-2001 gives me the RESULT of all those problems in
intelligence I don't care how it does so. So tell me which of the 19
factors are going to be troublesome and why? Do we need massive amounts of
additional labour or do we need a new invention? If the latter, what is
the invention?

> If they are right, please show a
> way that in 1 year (initial suggestion above) we will both get this
> incredible leap in computational hardware density

Well, let's look at the requirements for the programming first and see if
we have the hardware already. For example, I think some estimates have
been made of what it would take to run a conversational program (to
converse as well as a typical human) and I think it is within present
technology.

> AND make use of it
> with appropriate software based on the brand new theoretical
> breakthroughs we also get in this year.

I've set out the framework for arriving at the results of measured human
intelligence (eg examining what we arrive at with 19 primary mental
abilities). I've pointed out the fact that human equivalency has already
been met for a significant portion of this. I've listed the areas where I
think invention/innovation MAY be required (but I have to hear from the
specialists to know more precisely what those innovations might be). There
is nothing wrong with this framework or model if you want to call it that.
So just tell me the answers to the questions raised.

> While you are at it please send
> me some of the same drugs you are taking so I can also enjoy this
> fantasy as much as you are.

There is no fantasy in machine procedures for arithmetic/logic/mathematics
to give us the results yielded when we test humans on the various
Reasoning Factors. Don't machines surpass humans? There is no fantasy in
machine memory which surpasses humans as assessed by those Memory Factors.
There is no fantasy in machines doing mapping and visualization to surpass
humans on Visualization Factors. No fantasy in verbal abilities of
machines like the ability to give definitions and spell checking. No
fantasy in thinking that someone out there among millions of educators,
linguists, etc. might very well be able to verbalize the rules for grade
two readers as well as grade one readers and also the rules for
conversation, and so on.

FWP

-------------------------------------------------------------------------------
Machine Psychology:
               <http://users.uniserve.com/~culturex/Machine-Psychology.htm>
-------------------------------------------------------------------------------



This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:31:15 MST