From: James Rogers (jamesr@best.com)
Date: Thu Sep 11 2003 - 15:40:13 MDT
Kwame Porter-Robinson wrote:
> Some more questions, if you can stand them: Since it
> codes patterns in the abstract, have you tested it against
> "junk" material to ensure you have a control to compare
> against?
Yes. That is why I have simple generator functions that produce data streams
with reliable, well-characterized mathematical properties; it is how I baseline
and profile an implementation in the abstract. A strong PRNG, for example, will
generate a data stream that should diverge rather than converge on any computer
we can build today. Other generator functions produce data streams that converge
or diverge at different levels of model complexity and at different rates.
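To make that concrete, here is a toy sketch of what I mean by generator
functions, with plain zlib compression standing in for a convergence metric
(this is only an illustration, nothing like my actual harness):

    import os
    import zlib

    def random_stream(n):
        # Incompressible control: a strong entropy source should not
        # "converge" under any model we can build today.
        return os.urandom(n)

    def periodic_stream(n, period=32):
        # Trivially structured control: should converge almost immediately.
        block = bytes(range(period))
        return (block * (n // period + 1))[:n]

    def logistic_stream(n, r=3.9, x=0.123456):
        # Deterministic but chaotic: structured, yet hard to model at
        # low complexity.
        out = bytearray()
        for _ in range(n):
            x = r * x * (1.0 - x)
            out.append(int(x * 256) & 0xFF)
        return bytes(out)

    def compression_ratio(data):
        # Crude stand-in for a convergence metric: lower means more of
        # the structure was captured.
        return len(zlib.compress(data, 9)) / len(data)

    for name, gen in [("random", random_stream),
                      ("periodic", periodic_stream),
                      ("logistic", logistic_stream)]:
        print(name, round(compression_ratio(gen(1 << 16)), 3))

The random control should hover at or just above 1.0, the periodic control
should collapse to nearly nothing, and the chaotic stream should land somewhere
in between (a generic compressor only captures part of its structure).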
The metrics generated by these tests aren't terribly useful in themselves beyond
determining the correctness and efficiency of the implementation. What is more
useful is that they give me baselines to compare real-world data streams
against, which lets me infer the intrinsic complexity of those streams. There
are several layers of subtlety in interpreting the metrics that are actually
being generated.
It is worth pointing out that "convergence" is an emergent property that is
essentially a function of the amount of memory/nodes/neurons/whatever available
to the SI engine. Things that basically don't converge in any meaningful sense,
or even show mild divergence, with some small amount of memory may show very
marked convergence when given 10x or 100x the memory for the engine to work
with.
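You can see a crude version of this memory effect with an off-the-shelf
compressor, using the LZMA dictionary size as a stand-in for the engine's
memory budget (again, just an illustration of the principle, not how the real
metrics are produced):

    import lzma
    import os

    def ratio(data, dict_size):
        # dict_size bounds how far back the model can "remember";
        # treat it as the memory budget.
        filt = [{"id": lzma.FILTER_LZMA2, "dict_size": dict_size}]
        compressed = lzma.compress(data, format=lzma.FORMAT_XZ, filters=filt)
        return len(compressed) / len(data)

    # A stream whose structure has a long period: 256 KiB of one-off
    # noise repeated eight times.
    block = os.urandom(1 << 18)
    data = block * 8

    for bits in (16, 20, 24):  # 64 KiB, 1 MiB, 16 MiB dictionaries
        print(1 << bits, round(ratio(data, 1 << bits), 3))

With the 64 KiB dictionary the repetition is out of reach and the stream looks
like pure noise; give the compressor a dictionary larger than the period and
the ratio falls off a cliff. The same qualitative effect shows up in the
engine's convergence metrics, only across far subtler kinds of structure.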
> And would it be worth it to run this code upon two
> types of related datasets, such as a code tree and the
> corresponding documentation? Or perhaps even upon itself, or
> would that be meaningless?
I know what you are getting at, but that is something somewhat different. Any
highly efficient SI implementation necessarily does deep and exhaustive pattern
indexing that is highly optimized for pattern matching and manipulation. There
are a number of caveats and considerations that make it a more complicated
issue.
The importance of the SI implementation efficiency test, independent of what you
are asking, is that while you can do reasonably good pattern work with an
inefficient implementation, higher-order patterns are effectively intractable
with one. And if the SI implementation is nearly optimal, then not only are the
pattern inference/matching/etc. capabilities nearly optimal, but they actually
become tractable at interesting levels of complexity and depth. That is why
the SI implementation is the linchpin; if you don't solve that, nothing else is
tractable, and if you do solve it, everything else is not only tractable for
interesting spaces but nearly optimal as a side effect. I generally view it as
a many-faceted but nonetheless single problem.
The machinery has a few more pieces than are strictly required for it to be a
mere SI implementation, but it was important that, no matter what I did to the
machinery design, one of its properties had to be that it was a very efficient
SI for the purposes of information representation; otherwise it could not work.
In practice, the current versions of the system contain no real duct tape or
narrow implementational aspects that need to be tuned (or can be tuned).
> Not to offend, but anything can be hacked and there's
> always someone smarter than you, so don't let that
> hold you back. If you had the code tied up in some
> kind of hedge investment and legalities prevented you,
> then sure, don't release it.
I'm not offended in the least, and I have not yet put any legal encumbrances on
the code (beyond the automatically implied copyright, I guess). There obviously
will be encumbrances on code developed for specific applications.
While there may be someone smarter than me, I have strong doubts that there is
likely to be anyone who understands this particular algorithm space better, and
my general code-hacking skills are pretty sharp. And while someone could do
superficial hacks and tweaks, there isn't much to do on the algorithms
themselves, and the implementation is generally very elegant (and in ways that
many hackers probably wouldn't even notice). While there may be many uses for
the code, there is little need to *hack* the code.
> If anyone approaches AGI, but it is funded via some
> kind of commercialization scheme, mankind will not see
> the Singularity from it. It's a catch-22, though, because you want to
> make money but would like to move things along towards the
> Singularity. You have to look at who legally controls the
> code and decide which is more important: money or Man.
Question: What does code control have to do with the Singularity happening or
not, as long as the code is running? From a theoretical standpoint, I would be
more worried if every slack-jawed yokel and their brother had their own personal
Seed AI that they were monkeying with. I would rather have one or two good
implementations than thousands of varying quality out in the wild, run by
wealthy people who may be naive, foolish, mentally unsound, and/or stupid.
Part of the problem is that you need lots of money to acquire the kind of
machinery that would allow a machine to have super-human intelligence. Really
good human-level intelligence will run you about 1-10 Tbytes of usable RAM (my
best estimate), and the biggest Big Iron today will get you about 0.5 Tbyte.
(Ten lashes for anyone who says "but a Beowulf cluster..." and doesn't
understand why that won't work.) And while such a machine would be faster than
a human, it would not be significantly smarter.
That would leave me a few tens of millions of dollars short of where I need to
be to bootstrap an SAI. On the other hand, a few tens of millions of dollars is
chump change to come by for a company with a very slick product. I wouldn't
whore myself completely for that much money -- that's dangerous -- but I
shouldn't really need to.
I think your conception of what the realistic trajectories are is naive. If
Friendliness is a concern (and this *is* SL4), then a closed implementation
backed by substantial capital resources looks pretty good. If you have an idea
that doesn't involve PC clusters and free code/love/beer as a game plan, I'm
open to suggestions. But with all due respect, your assertion wasn't
particularly compelling or constructive as presented. I'm not a Kool-Aid
drinker, but I can be swayed by a solid argument.
Cheers,
-James Rogers
jamesr@best.com