From: Eugene.Leitl@lrz.uni-muenchen.de
Date: Mon Mar 26 2001 - 15:25:05 MST
"Robert J. Bradbury" wrote:
> That's not quite true. PDB is on an exponential growth curve as well --
> it's just at a lower starting point. They are going to 'assembly-line'-ize
Sure, but structure also means dynamics, including interaction. Static
structure alone is rather sterile, though it does aid in docking
studies.
> 3D protein structural analysis but it's going to take another 3-5 years
> to get there (you have to purchase/build the necessary hardware
> and assemble the teams for this).
One also needs to create new force fields of suitable accuracy and
performance. Not an easy task.
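To make that concrete, here's a minimal sketch (argon-like illustrative
parameters, not any particular MD package) of the nonbonded Lennard-Jones
term that every such force field has to parameterize, times a few thousand:

def lennard_jones(r, epsilon=0.996, sigma=3.4):
    """Lennard-Jones 12-6 pair potential. Argon-like parameters:
    epsilon = well depth (kJ/mol), sigma = zero crossing (Angstrom).
    A real force field fits thousands of such parameters, plus
    bonded terms and charges, against experiment and QM data."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

r_min = 2 ** (1 / 6) * 3.4        # minimum sits at 2^(1/6) * sigma
print(lennard_jones(r_min))       # about -epsilon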
> I've seen a presentation by Lenat, and he nicely documented an analytical
> problem solved by their approach that was not solved by humans. And
Right, but I was talking about robust AI (i.e. naturally intelligent
systems), not stupid theorem proving. If it can prove Goldbach's conjecture
but can't find its way out of an open paper bag, that's not naturally
intelligent; that's an idiot savant at best (and even idiot savants
typically manage to feed themselves on their own).
> of course I know you are aware of the group at either ORNL or Argonne
> that has proved mathematical conjectures that were previously unproven
> (computers can exceed the depth of thought (most) humans seem capable of.)
If you consider that such a proof might translate into a man-century (or
man-millennium) of hard work with zero error tolerance, that appears
rather cheap. Assuming I scale up Deep Blue to a cubic meter of computronium,
it will play like a demigod, but it has progressed only along a single,
very special axis.
> Minsky pointed out at the same conference that Lenat was speaking at that
> the brain may use a dozen or so 'heuristics' to determine the distance
The wording is just broken: there are no algorithmizable heuristics in
there.
> and size of an object it 'sees'. We have yet to see a computer that
> incorporates those dozen strategies and effectively learns to select
> between conflicting results.
The hardware is almost there (at least in terms of raw transistor count),
but the software has stalled. We're obviously at the threshold
of a breakthrough, however long it may take. The future is definitely
in biomimetics and ALife, though both are a tad slow to deliver.
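For illustration only (all cues and numbers invented), the
selecting-between-conflicting-results part could be as dumb as
inverse-variance fusion: confident heuristics count for more, and a
wildly conflicting one simply gets down-weighted rather than debated:

def fuse_estimates(cues):
    """cues: (estimate, variance) pairs from independent heuristics
    (stereo disparity, texture gradient, known object size, ...).
    Inverse-variance weighting trusts sharp cues over vague ones."""
    weights = [1.0 / var for _, var in cues]
    return sum(w * est for (est, _), w in zip(cues, weights)) / sum(weights)

# stereo says 4.0 m (sharp), texture says 5.5 m (vague), size says 4.3 m
print(fuse_estimates([(4.0, 0.2), (5.5, 2.0), (4.3, 0.5)]))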
> I'd strongly doubt you can call what they are doing 'snake oil'.
> They are laying the foundation in different areas for parallel or
> even novel ways of doing some of the things that the brain does.
> Divide and conquer.
I think they're caught up in the 5th sidetrack of the 10th
sidetrack. I don't expect anything from that edge of AI but
more debacles.
> But if you want true self-conscious AI --
>
> [NOTE: And here I will make a prediction -- "When" we understand
> completely what 'consciousness' is; we will say 'Is that all it is?'
> and it will cease to hold the "high" position that it now does
> in terms of discussions like 'rights', 'simulations', 'zombies',
> etc. We will view 'consciousness' as just something else the brain
> can do -- just as the visual neural system has the ability to
> assemble lines, merge them into complete 'shapes', map them onto
> recognizable objects, etc.]
I don't care what it is, what it's called, or how to make it,
as long as it delivers.
> -- then IMO you need to look carefully at what William Calvin has
> been saying about the need to simulate yourself as the actor in
> the internal view of the 'scene'. That combined with a better
Sure, you've got to have a self-representational system, and a future
trajectory planning unit (most likely massively parallel). That's
a basic chestnut; I knew that much in high school.
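A toy sketch of such a planning unit, purely my own construction (nothing
here is from Calvin): roll candidate action sequences through a model of
yourself, keep the best simulated trajectory:

import random

def plan(state, forward_model, score, horizon=5, candidates=64):
    """Sample random action sequences, run each through the agent's
    model of itself (simulated, never executed), keep the best.
    A massively parallel version evaluates all candidates at once."""
    best_actions, best_score = None, float("-inf")
    for _ in range(candidates):
        actions = [random.choice((-1, 0, 1)) for _ in range(horizon)]
        s = state
        for a in actions:
            s = forward_model(s, a)
        if score(s) > best_score:
            best_actions, best_score = actions, score(s)
    return best_actions

# 1-D toy: get from position 0 to position 3
print(plan(0, lambda s, a: s + a, lambda s: -abs(s - 3)))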
> understanding of how one 'talks to oneself' will yield the necessary
> understanding.
That's an unproven conjecture, whereas we know that darwinian evolution
can create intelligent systems.
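And that one has an existence proof, plus a trivially small demo: a
bare-bones darwinian loop (everything here invented for illustration,
and no claim whatsoever that this toy scales to intelligence):

import random

def evolve(fitness, length=20, popsize=50, generations=100):
    """Mutate, select the fitter half, repeat. Only the loop
    structure matters; biology demonstrably made it work."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(popsize)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:popsize // 2]
        children = [[bit ^ (random.random() < 0.05) for bit in p]
                    for p in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

print(sum(evolve(sum)))   # maximize the number of 1-bits; prints ~20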