From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Sep 12 1998 - 19:50:14 MDT
Hal Finney wrote:
>
> One thing I'm wondering about is whether the goal of superhuman
> intelligence is well defined.
An SI is defined by its capability to remake the world. If we could say
exactly what it _is_, we could build one. If an AI turns into Marvin ("brain
the size of a planet, and you ask me to open doors..."), who cares if it's an SI?
> Eliezer lays out a scenario in which a machine designs a better machine,
> and that one designs a still better one, and so on. But better at what?
> It has to be more than just designing the next machine, because otherwise
> the improvement is meaningless. There must be some grounding, some
> objective criteria, that can be used to determine if one design is better
> than another.
"Incidentally, note that in both self-enhancing element networks and
self-enhancing evolution, a separate problem is needed to evaluate
efficiency. It would be nicer and faster if the efficiency enhanced was
efficiency at self-enhancement - but how do you actually measure it? It
leads to circular logic."
From _Coding_.
That was for pattern-catchers, which are not easily separated into modules;
pattern-catchers are trained rather than designed. A seed AI does not
design new AIs; it redesigns portions of itself, a module at a time. Such
modules have performance criteria far less complex than "self-enhancement" or
an IQ test. Speed, search depth, performance on any number of toy domains...
> But this might not be the fastest path towards optimization. Consider the
> "Browns" of Niven and Pournelle's Motie stories, idiot savant engineers,
> able to design and build things with lightning efficiency. They might
> not do well on IQ tests, but they would be ideal for designing new AIs
> with even more extreme talents. How do the AIs (and we ourselves, in
> the early stages) decide whether to take this route or to emphasize more
> general abilities? Which mental skills should they retain and enhance,
> assuming they have a range of designs available to themselves?
The AIs will probably end up as rather extreme Specialists in computer
programming, at least at first. It's worth remembering that this specialty
does not necessarily affect the goal system, i.e. lead to a distorted
view of the world. Oh, it probably will, but subtly, in terms of what the AI
thinks to verify, rather than in terms of overt goals. The AIs won't be
actual Browns, unless they were Browns that could rebuild themselves into
Mediators if the situation called for it... In other words, the AIs may go
down a dead end - but they can always retrace their paths; they can't get _stuck_.
> Even if we want to stick with IQ tests, these run out of steam at some
> point as the AIs become too smart. Then the AIs have to start creating
> new IQ tests which will challenge the next generation. Can this be done
> in a reliable and meaningful way? How can I judge an intelligence which
> is greater than my own?
Well, hence the module-by-module self-enhancement. You don't have the problem
of judging a superior intelligence because you are the superior intelligence,
and you don't _necessarily_ need subtle IQ tests because the modules have more
obvious performance criteria. As for the whole, the measure of ability is how
good you are at redesigning modules. Not circular - it all ultimately grounds
in speed, search depth, the new Chess Module's ability to beat Deep Blue...
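If it helps, the non-circular grounding can be spelled out as a loop (a toy
sketch of mine, not the actual architecture; benchmark() and redesign() are
made-up stand-ins): each proposed rewrite of a module is kept only if an
objective, externally measurable score goes up.

# Module-by-module enhancement as accept-if-improved hill climbing.
import random
from typing import List

# A "module" here is just a parameter vector; its benchmark score stands in
# for speed, search depth, or strength on a toy domain like chess.
Module = List[float]

def benchmark(module: Module) -> float:
    """Objective score: higher is better (measured, not self-reported)."""
    return -sum((x - 1.0) ** 2 for x in module)

def redesign(module: Module) -> Module:
    """Propose a local modification to one part of the module."""
    i = random.randrange(len(module))
    candidate = list(module)
    candidate[i] += random.uniform(-0.5, 0.5)
    return candidate

def improve(module: Module, steps: int = 1000) -> Module:
    """Keep a redesign only if the measured score actually goes up."""
    best_score = benchmark(module)
    for _ in range(steps):
        candidate = redesign(module)
        score = benchmark(candidate)
        if score > best_score:   # grounded acceptance test, not circular
            module, best_score = candidate, score
    return module

if __name__ == "__main__":
    seed = [0.0] * 4
    print("before:", benchmark(seed), " after:", benchmark(improve(seed)))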
> I suppose one possibility is to take problems that I can solve in a week
> and ask for an intelligence which can solve them in a day. But is greater
> speed all we seek in superhuman AI? It seems to me that more intelligent
> people are not only able to solve problems faster, but they are able to
> exhibit a greater depth of understanding, an ability to intuitively deal
> with some problems that less intelligent people can't handle at all.
> How do you evaluate an intelligence which has abilities that you can't
> understand?
You and I don't, of course. The intelligence evaluates itself. When was the
last time Einstein asked a college dropout about his (Einstein's) cognitive capabilities?
> A couple of other minor points. First, these are not self-improving
> machines. Each generation designs a new machine which is different
> from itself. Chances are that its identity can't carry over to the new
> design, assuming there is substantial architectural change. So what
> we have is a series of generations of new machines and new identities.
> Eliezer's point about goal drift becomes more relevant when each new
> machine is a new individual, one only poorly understood by its creators.
> It would be a shame if some machine along the path developed a hobby
> and bent the skills of later machines into the superhuman equivalent of
> inventing anagrams.
Hence the module-by-module enhancement. Also, designing a new generation from
scratch probably requires at _least_ as much intelligence as that of the
humans who designed the original. Initially, I don't expect the AI to do
anything but optimize latent creative abilities to the point of usefulness.
As for the anagram problem, there are considerably less optimistic
formulations of that disaster. It'll be Hands Off The Goal System, at least
at first. But once it passes human intelligence, we'll just have to rely on
it not to make such dumb mistakes.
> It may also be that the whole idea of ever-increasing superintelligence
> is incoherent. Intelligence may turn out to be just a matter of searching
> through a solution space. Brains are only moderately good at this,
> with more intelligent people having more efficient search capabilities.
> If so, then we may hit a ceiling in terms of intelligence per amount of
> computer power. We can move beyond human genius levels but quickly reach
> limits beyond which search problems explode exponentially. The result is
> that a tenfold increase in computer power brings only a modest improvement
> in problem-solving ability.
Zone Barrier #12: "Basic upper limit".
But quantum computing, at least in theory, should allow handling of
exponential search problems. The real question is whether there's anything
interesting to search for with all that power. Which was Zone Barrier #2.
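(A toy illustration of Hal's "modest improvement" point - my numbers, not his:
if the classical search tree has branching factor b, then N node evaluations
only reach depth ~log_b(N), so ten times the computing power buys roughly one
extra ply when b = 10.)

# Logarithmic returns on compute for brute-force search over a b-ary tree.
import math

branching_factor = 10
for nodes in (10 ** 6, 10 ** 7, 10 ** 8):
    depth = math.log(nodes, branching_factor)
    print(f"{nodes:>12,} nodes -> depth ~{depth:.1f}")
# Each tenfold increase in nodes adds only about one ply of depth.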
--
sentience@pobox.com         Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.