Intelligence, IE, and SI - Projections

From: Billy Brown (bbrown@conemsco.com)
Date: Mon Feb 08 1999 - 12:10:19 MST


Well, no one has objected to the last post on this thread yet, so I guess
it's time for the next step. Given the previously outlined assumptions, what
would a human-level AI be capable of in the way of IE?

Slow AI

Let us say this AI goes online around 2020, and it runs on a minicomputer
with roughly human-equivalent processing power. It can probably use
Temporary Specialization and Long Thought without any special effort - these
sorts of abilities come very easily to a software entity. The
specialization ability is particularly powerful, because the AI can actually
devote nearly 100% of its processing power to a single abstract
problem-solving task if it so desires (humans, in contrast, appear to
devote most of their processing power to sensory processing, motor control,
and other non-cognitive functions). The AI also has what Eliezer Yudkowsky
calls the AI advantage - see
http://www.pobox.com/~sentience/AI_design.temp.html for a good outline of
what this means.

As a result, the AI should be significantly better than even the best humans
at any task it can do. Note that this does not necessarily mean it can do
anything a human can - an engineering AI might not have any social
abilities at all, for example. If the AI is designed to write AI programs
it should be able to enhance itself, but creating cognitive abilities in
unfamiliar domains would still require real-world experimentation.

So, how far can this code-writing AI get? It can start by optimizing each
of its cognitive abilities individually. Since it is effectively smarter
than all but the most brilliant of humans, it should be able to get
substantial improvements before it runs out of ideas. It can also improve
'horizontally', learning new skills by interacting with the real world (or
the WWW) and creating efficient ways to solve new problems as it encounters
them.

Now, if the whole process stops there we have a moderately superhuman
entity, with an IQ somewhere in the 200-300 range. However, there is a catch. It
is now substantially smarter than when it did its first round of
optimization - shouldn't it now be able to see better ways of writing AI
software, which it could then apply to writing further optimizations? The
obvious answer is yes.

Now, at present there is no way to tell if this self-enhancement process
would be open-ended or not. If each round of improvements increases the
AI's intelligence by enough to make the next round possible, it will
eventually become an SI. If not, it will eventually reach a 'bottleneck',
where it has tried everything it can think of and it has to wait for faster
hardware before it can make further improvements. Either way, however, this
process is likely to take quite some time. It starts out running at roughly
human speed, after all - so the whole self-enhancement process could easily
end up taking years, or even decades.
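
To make the distinction concrete, here is a toy model of the process (a
Python sketch with invented numbers - nothing here is a prediction, only an
illustration of the two possible shapes of the curve). Each round of
self-optimization multiplies the AI's effective intelligence by some gain
factor, and a 'decay' parameter models how fast the easy improvements get
used up:

    # Toy model of recursive self-enhancement. All numbers are made up;
    # only the qualitative behavior matters.
    def enhance(iq=100.0, gain=1.5, decay=0.5, rounds=20):
        for r in range(rounds):
            iq *= gain                          # apply this round's improvement
            gain = 1.0 + (gain - 1.0) * decay   # easy optimizations get used up
            print("round %2d: IQ %10.1f (next gain x%.3f)" % (r + 1, iq, gain))
        return iq

    # decay < 1: gains shrink each round, so IQ converges on a ceiling
    # (the 'bottleneck'). decay >= 1: the process is open-ended.
    enhance(gain=1.5, decay=0.5)

With decay at 0.5 the model stalls out near IQ 240 - the moderately
superhuman plateau described above - while a decay of 1.0 or more grows
without limit. Which regime real AI software falls into is exactly the open
question.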

Fast AI

OK, now let's try a different starting point. Say it turns out to be
difficult to make a human-equivalent AI, so the first one ends up being
implemented a decade or two later on hardware 1,000 times as fast. Does
this scenario look any different?

Well, suppose we start off with the same program as above. Its initial round
of enhancements only takes days instead of decades, because of the hardware
difference. It may still hit a bottleneck, but it will reach it much faster
and peak at a much higher effective IQ.

It can learn from the outside world a lot faster, too (especially from the
Net, if it has access to it). With 1,000 times normal human hardware (about
100,000 times what we use for abstract thought, if only about 1% of the
brain's processing power goes to abstract cognition) it should be able to
further its IE effort, you have a good case for claiming it should achieve
open-ended enhancement.
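
The arithmetic behind those numbers is worth spelling out (the 1% figure is
an assumption, following from the observation above that most human
processing goes to sensory and motor work):

    # Rough capacity arithmetic for the Fast AI scenario. The
    # abstract_fraction value is a guess, not a measurement.
    hardware_multiple = 1000      # total hardware, relative to one human brain
    abstract_fraction = 0.01      # assumed share of a brain doing abstract thought

    effective = hardware_multiple / abstract_fraction
    print("effective abstract-thought capacity: %gx human" % effective)  # 100000x
    print("a decade of work takes %.1f days" % (10 * 365.0 / hardware_multiple))

Ten subjective years of self-enhancement compress into under four days of
real time, which is where the 'days instead of decades' figure comes from.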

Even if the AI reaches a bottleneck, it is worth reflecting on the result.
The difference between a complete idiot and an absolute genius is only about
100 IQ points. The difference between the human genius and this AI (after
its initial round of enhancements) is going to be at least that big. If its
IE reaches a point of diminishing returns after repeated cycles of
improvement, it will still peak well beyond the human genius level. IMO, that is a big
enough difference that we should be wary of making predictions about what it
can or can't do.

Very Fast AI

Suppose AI is a really, really hard problem, as some of its detractors
claim. Then it might take even longer to get our example entity - maybe
2050-2060, with hardware 10^6 times faster than ours.

That means our trusty AI can do its initial round of self-enhancement in a
few minutes. It can devote the equivalent of a few hundred thousand human
minds to learning new skills (on the Net, in VR, and through interacting
with humans), while still reserving most of its CPU time for the enhancement
cycle. It is hard to imagine it reaching a bottleneck - its mind can easily
grow to encompass a small civilization, after all.
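
For scale, the same back-of-the-envelope treatment applies (illustrative
numbers only - the 30% learning share is an arbitrary assumption):

    # Illustrative allocation for the Very Fast AI scenario.
    hardware_multiple = 10**6
    learning_share = 0.3          # assumed fraction of CPU spent on learning
    print("equivalent learning minds: %d" % (hardware_multiple * learning_share))
    # one subjective year of enhancement work, in real time:
    print("one year of thought takes %.1f minutes"
          % (365 * 24 * 60.0 / hardware_multiple))

That is 300,000 human-equivalent minds learning in parallel, with a
subjective year of enhancement work passing every half minute or so.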

Billy Brown, MCSE+I
bbrown@conemsco.com


