On Mon, 30 Apr 2001, Aleks Jakulin wrote:
> * Tools
> I am not sure the current programming languages have the properties that would
> facilitate building sophisticated AI. Although it might be theoretically
Sure, current languages are broken, but that doesn't mean much: the hardware
itself is broken, too. It's a vicious circle, the hardware albatross hanging
around software's neck and vice versa. And the crew has already started
dropping.
In fact, you don't need all that much to fix the languages: defining a few
extensions for operations on parallel arrays would do (see the sketch below).
Fixing hardware is
harder: we're not that adept at doing computation with matter yet. But
we're making very good progress. In a few decades we should be able to
reach the ultimate plateau of what is doable with hardware (i.e. the
things which go clunk when they fall on your toe).
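To illustrate what I mean by "a few extensions" (my sketch, not anything Aleks
proposed): OpenMP is exactly that kind of thing, a single pragma that turns an
ordinary C loop into a data-parallel array operation. A toy, but it shows the
scale of the fix:

  /* y[i] += a * x[i] over parallel arrays; the pragma is the whole
     "language extension". Strip it and this is plain C. Build with
     an OpenMP-aware compiler. */
  void saxpy(long n, float a, const float *x, float *y)
  {
  #pragma omp parallel for
      for (long i = 0; i < n; i++)
          y[i] += a * x[i];
  }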
> possible, building an AI with, say, C or Java, resembles building a castle from
> gravel with bare hands. We need the kind of tools that allow programming with
> concepts, influences, contexts, weights, levels-of-detail, causes, structures,
> heuristics, not instructions, predicates, rules or algorithms.
No. We need something which can twiddle huge numbers of little integers very
quickly. Because of relativistic latency and fanout limitations you will have
strong signalling locality. You can't program by hand in an environment like
this, despite it being provably optimal. The human mind can't handle literally
billions of simultaneous streams of control flow.
Luckily, you don't have to. You'll define boundary conditions and functionality
metrics, either formally (we're probably smart enough to do that, though it
will be rather tedious) or informally (in an interactive teaching session;
we're certainly smart enough for that, and in a pinch we could even hire a few
chimps), and let the system figure out the pattern of little integers that
defines the mapping which solves the problem.
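Operationally, that search can start out as dumb as the following toy (mine,
with a placeholder metric, nothing more): flip bits in the state vector at
random, score against the supplied functionality metric, keep what scores
better.

  /* Blind hill-climbing over a bit pattern against an externally
     supplied metric. fitness() below is a placeholder (count of set
     bits); the real thing would be the formal or taught metric. */
  #include <stdio.h>
  #include <stdlib.h>

  #define NBYTES 32

  static int fitness(const unsigned char *s)
  {
      int score = 0;
      for (int i = 0; i < NBYTES; i++)
          for (int b = 0; b < 8; b++)
              score += (s[i] >> b) & 1;
      return score;
  }

  int main(void)
  {
      unsigned char state[NBYTES] = {0};
      int best = fitness(state);

      for (long step = 0; step < 100000; step++) {
          int i = rand() % NBYTES, b = rand() % 8;
          state[i] ^= 1 << b;            /* mutate one bit    */
          int f = fitness(state);
          if (f > best) best = f;        /* keep improvements */
          else state[i] ^= 1 << b;       /* else revert       */
      }
      printf("best fitness: %d of %d\n", best, NBYTES * 8);
      return 0;
  }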
> * Reusability
> An AI is something too complex to be rewritten every few years. It is not just
An AI is something too complex to be written, period. At least explicitly, by
mere humans. I'm really surprised some people are not yet tired of this
exercise in futility, which is slowly but surely reaching epic proportions.
> the procedures, it's also the knowledge the system has accumulated. This means
> that all the components will have to be reusable to an unprecedented extent. An
> idea is reusable, but its implementation is not.
You're describing good system design. All very useful, but not for AI, because
explicitly coded AI is too hard for monkeys. So says at least this monkey.
> * Hierarchy of computation
> An AI is not just a program. It's a system that combines the code and the
> knowledge about the code. Code alone is not introspective. Knowledge about the
The ultimate computing paradigm doesn't distinguish between code and data.
It's all a sea of seething bits. It has bit patterns, which are the system
state. Them bits change, and they contain state as well as the mapping from
input to output vectors, which sense and do stuff in the Real World
(purportedly it's out there, somewhere; I never bothered to check).
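A compressed way to picture that (my toy, nobody's design): an elementary
cellular automaton whose rule table sits in the same flat byte array as the
tape it rewrites. The mapping is just more bits, and nothing in principle
stops the system from rewriting its own rule.

  /* One sea of bits: bytes 0..7 are the rule table ("code"), the
     rest is the tape ("data"). The update reads its mapping out of
     the very array it transforms. */
  #include <string.h>

  #define RULE 8                      /* bytes of rule table */
  #define TAPE 64                     /* bytes of state      */

  static void step(unsigned char *mem)
  {
      unsigned char next[TAPE];
      for (int i = 0; i < TAPE; i++) {
          int l = mem[RULE + (i + TAPE - 1) % TAPE] & 1;
          int c = mem[RULE + i] & 1;
          int r = mem[RULE + (i + 1) % TAPE] & 1;
          next[i] = mem[(l << 2) | (c << 1) | r] & 1;  /* rule lookup */
      }
      memcpy(mem + RULE, next, TAPE);
  }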
> code is too inefficient to be interpreted. Given a task, the system recompiles
> the important tasks into code that executes quickly. And the code that executes
> often enough to justify the expenditure of time can be recompiled into FPGAs.
> And every now and then, the system might provide designs for custom chips and
> ask the human guardians for them.
A truly smart system designs itself during the bootstrap, and it doesn't need
slow and stupid "guardians", at least not once it is significantly into the
bootstrap phase. You do not want anything like this roaming the current
landscape. It would be literally the end of the world. It might make sense at
some later point in the game, when there are no monkeys left, nor a landscape
to roam.
> * Organization
> Building an AI has proven to be something too complicated for a small team of
> people working for a year or two. Perhaps it is better to provide a framework
More than that: it has proven too complicated for thousands of teams of people
working over decades. Perhaps there's a reason for that.
> that would allow many people to contribute, without breaking the system.
> Deductive thought is just one subsystem; other subsystems are spatial thought,
> classification, clustering, self-analysis, etc.
I hope you like it in there, where you're sitting. I've been there myself,
briefly, but thankfully have gotten better since.
> * Ecology
> Living in the real world requires perception, and artificial perception is still
> terribly primitive. Until then, it is better to limit the focus of the AI to
Sensing and actuation are basically solved at the hardware level. Guess what:
it's the software again. And the hardware to run that software.
> software itself. We do not need proper perception: the objective for Seed AI is
> to rewrite and improve itself, while acting in a simple world (the computer),
> where the environment is disk-space ecology, network geography, CPU food, and
> the user is God.
We don't need no compilation//we don't need no flow control.
> * To Do
> Some things have to be done first. Building the framework is the most complex
> and important item. But we don't yet know how new concepts should emerge, how to
> deal with levels-of-detail, how to estimate the reliability of knowledge, how to
> formally describe ideas, or how to understand and rework software. The problems
> are at a very fundamental level. They're not philosophical, but require a lot of
> introspection, at a relatively mundane level.
Let me figure out the right rules for the integer lattice gas, give me enough
computronium implementing that particular lattice gas, plus I/O and enough
time to play with it. Rather, don't: we would all die. A real AI could clean
ruin your day by eating the world, with you on it. So don't. It's that simple.
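For the record, the kind of thing "integer lattice gas" refers to, as an
HPP-style toy of mine (the rule set below is the textbook one, emphatically
not the "right rules" in question):

  /* One HPP lattice-gas step on a periodic 2-D grid. Each cell holds
     four particle bits (N,E,S,W). Head-on pairs scatter at right
     angles; everything else streams straight through. */
  #include <string.h>

  #define W 64
  #define H 64
  enum { N = 1, E = 2, S = 4, Wd = 8 };

  static void step(unsigned char g[H][W])
  {
      unsigned char t[H][W];

      for (int y = 0; y < H; y++)          /* collision phase */
          for (int x = 0; x < W; x++) {
              unsigned char c = g[y][x];
              if (c == (N | S)) c = E | Wd;
              else if (c == (E | Wd)) c = N | S;
              t[y][x] = c;
          }

      memset(g, 0, H * W);                 /* streaming phase */
      for (int y = 0; y < H; y++)
          for (int x = 0; x < W; x++) {
              if (t[y][x] & N)  g[(y + H - 1) % H][x] |= N;
              if (t[y][x] & S)  g[(y + 1) % H][x]     |= S;
              if (t[y][x] & E)  g[y][(x + 1) % W]     |= E;
              if (t[y][x] & Wd) g[y][(x + W - 1) % W] |= Wd;
          }
  }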