Re: Paths to Uploading

From: Anders Sandberg (asa@nada.kth.se)
Date: Thu Jan 07 1999 - 11:13:36 MST


"Billy Brown" <bbrown@conemsco.com> writes:

> Anders Sandberg wrote:
> > As you can tell, I don't quite buy this scenario. To me, it sounds
> > more like a Hollywood meme.
>
> Yes, it does. I didn't buy it myself at first - but the assumptions that
> lead to that conclusion are the same ones that make a Singularity possible.

Yes and no. The assumptions below certainly make a Singularity
possible, but the reverse is not true. Vinge's original (vanilla?)
Singularity was simply that the effective intelligence-enhancing
ability of mankind as a whole increased, and this could happen through
far less dramatic means.

I must admit that I'm disturbed by the *faith* many seem to put in
the Singularity. It is not a necessary part of transhumanist thinking;
it is just one wild prediction among many.

> Taking the objections one at a time:
>
> > OK, this is the standard SI apotheosis scenario. But note that it is
> > based on a lot of unsaid assumptions: that it is just hardware
> > resources that distinguish a human level AI from an SI (i.e., the
> > software development is fairly trivial for the AI and can be done very
> > fast, and adding more processor power will make the AI *smarter*),
>
> Actually, I assume there is a significant software problem that must be
> solved as well. That's what makes it all so fast - the first group that
> figures out how to make an AI sentient enough to do computer programming
> will be running their experiment on very fast hardware.

Well, one thing a fellow researcher told me was that the world-class
lab at a famous American university where he did his postdoctoral work
had worse computers than we have in our student labs (and our
institute has little money compared to the university). So excellence
in programming doesn't give you the best hardware. But you are talking
about hardware in general, so let's continue.

I think I understand your point, but I think you miss mine: making an
IQ-X AI requires effort E. Does making an IQ-2X AI require effort
0.5E, E, 2E or E^2? And will twice the intelligence double the
programming ability? You seem to assume something like this: the
ability times the available subjective time (large thanks to fast
computers) will make the next step happen very quickly. But if
development becomes quadratically harder, the AI will make slower and
slower progress even when it devotes all its resources to it.
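
To make the question concrete, here is a toy sketch (entirely my own
illustration - the exponent alpha and the linear ability assumption
are invented, not anything Billy claimed). It just shows how the
answer flips depending on how effort scales with intelligence:

# Toy model of recursive self-improvement. Assumption: doubling
# intelligence from IQ x costs effort ~ x**alpha, and a mind of IQ x
# does useful programming work at a rate proportional to x.

def time_to_double(iq, alpha):
    effort = iq ** alpha        # assumed cost of the next doubling
    rate = iq                   # assumed programming speed at this IQ
    return effort / rate

for alpha in (0.5, 1.0, 2.0):
    iq = 100.0
    times = []
    for _ in range(5):          # five successive doublings
        times.append(round(time_to_double(iq, alpha), 2))
        iq *= 2
    print("alpha=%.1f: time per doubling %s" % (alpha, times))

With alpha below one the doublings accelerate; at alpha of two each
doubling takes twice as long as the previous one, no matter how much
subjective time the AI has.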

> > that this process has a time constant shorter than days (why just this
> > figure? why not milliseconds or centuries?),
>
> The time constant for self-enhancement is a function of intelligence.
> Smarter AIs will improve faster than dumb ones, and the time scale of human
> activity is much harder to change than that of a software entity. In
> addition, an AI can have a very fast subjective time rate if it is running
> on fast hardware. Thus, the first smart AI will be able to implement major
> changes in days, rather than months. I would expect the time scale to
> shrink rapidly after that.

I agree with some of the above assumptions (the time constant depends
on intelligence, a smarter AI can likely improve faster, humans have a
hard time changing their timescales). But just because the AI *could*
be very fast doesn't mean it *will* be very fast - that is the
assumption you are trying to prove. Can you give any support for why
an AI program would have any specific speed?

In addition, the human-level AI might be significantly slower than
humans, so that even if it can build a better AI, it could still be
faster to let humans do the work. The AI will only speed up its own
development if the number of collaborating AIs, times their ability,
times their speed exceeds the same product for the collaborating
humans. Collaboration of course introduces logistic problems of its
own.
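
Spelled out as a back-of-envelope check (the numbers below are pure
invention, only there to show the bookkeeping):

# Rough comparison of development rates. All figures are made up;
# "rate" is simply collaborators * ability * subjective speed.

def team_rate(n, ability, speed):
    return n * ability * speed

ai_rate    = team_rate(n=1,  ability=1.0, speed=0.1)  # one human-level AI, ten times slower
human_rate = team_rate(n=20, ability=1.0, speed=1.0)  # twenty human researchers

print("AI speeds its own development:", ai_rate > human_rate)  # False with these numbers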

> > that there will be no
> > systems able to interfere with it - note that one of your original
> > assumptions was the existence of human-level AI;
>
> No, my assumption is that the first human-level AI will become an SI before
> the second one comes online.

Why is the human level so critical? Self-enhancement might be doable
by less complex systems, or it might require significantly more
advanced systems, depending on the form of intelligence and how
designable it is. And
system security could definitely be efficiently provided by
less-than-human level AI to an extent that hampers even a very bright
mind (ever tried to silently sneak past a watchdog?).

> > that this SI is able to invent
> > anything it needs to (where does it get the skills?)
>
> I presume it will already have a large database on programming, AI, and
> common-sense information. It will probably also have a net connection - I
> would expect a program that has access to the WWW to learn faster than one
> that doesn't, after all. By the 2010 - 2030 time frame that will be enough
> to get you just about any information you might want.

Even *skills*? Wow. Somehow I'm not entirely convinced. And if you
really can learn skills through the net in this scenario, then the
*humans* will be well on their way to SI.

There is a difference between data, information, knowledge and ability.

> > and will have
> > easy access to somebody's automated lab equipment (how many labs have
> > their equipment online, accessible through the Net? why are you
> > assuming the AI is able to hack any system, especially given the
> > presence of other AI?).
>
> Again, by this time frame I would expect most labs to be automated, and
> their net connections will frequently be on the same network as their
> robotics control software. You don't need to be able to hack everyone, you
> just need for someone to be stupid.
>
> Besides, the AI could expand several thousand fold just by cracking
> unsecured systems and stealing their unused CPU time. That speeds up its
> self-enhancement by a similar factor, which takes us down to a few minutes
> for a major redesign. I expect a few hours of progress at that rate would
> result in an entity capable of inventing all sorts of novel attacks that our
> systems aren't designed to resist.

OK, you are simply assuming nobody notices what appears to be a super
worm or virus on the net? A program that not only takes up processor
power but also hogs net resources to send dense data (its thoughts)
everywhere. In a world that is obviously highly dependent on the net,
where many important systems appear to be net-connected and the
majority of people have grown up with the net and its pitfalls?

Personally I think this is another big Hollywood meme: the
crackability of systems. Somehow it seems so simple in movies to crack
mothership computers... Sure, security will always have flaws. But are
they really so exploitable that somebody can take over a lot of
systems easily without making any fuss? (Remember things like anomaly
monitoring software - I have seen a neural network classify what users
are doing and what kind of people they are; a program like that would
immediately notice something amiss.) Most likely you could run SATAN
and its derivatives to get a few accounts, but in the process a lot of
system operators would go on alert.
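
To give a feel for how little it takes to notice that kind of theft,
here is a crude sketch (a simple statistical baseline of my own
invention, much dumber than the neural-network classifier I
mentioned):

# Flag a session whose CPU use deviates strongly from the account's
# historical profile. Purely illustrative; real monitoring is fancier.
from statistics import mean, stdev

def is_anomalous(history, current, threshold=4.0):
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

typical = [12, 9, 15, 11, 10, 14, 13]   # CPU-seconds per session for this account
print(is_anomalous(typical, 13))        # False - looks like the usual owner
print(is_anomalous(typical, 5000))      # True - stolen cycles stand out at once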

Notice that you seem to be assuming that as soon as the good AI -
which so far has spent its existence single-mindedly programming
better versions of itself - notices a lack of computing power, it
quickly teaches itself to crack all sorts of systems (without anybody
noticing) and then goes on to a career as an engineer. Sure, it is
smart, but is it *that* smart, flexible and able to pick up the
needed skills without anybody noticing anything?

> > And finally, we have the assumption that the
> > SI will be able to outwit any human in all respects - which is based
> > on the idea that intelligence is completely general and the same kind
> > of mind that can design a better AI can fool a human into (say)
> > connecting an experimental computer to the net or disable other
> > security features.
>
> I don't think intelligence is entirely general - my own cognitive abilities
> are too lopsided to permit me that illusion. A merely transhuman AI, with
> an effective IQ of a few hundred, might not be any better at some tasks than
> your average human.
> An SI is a different matter. With an effective IQ at least thousands of
> times beyond human average, it should be able to invent any human cognitive
> skill with relative ease. Even its weakest abilities would rapidly surpass
> anything in human experience.

But you are assuming what you want to prove, namely that the AI can
grow into an unstoppable SI. You have not shown that the human-level
AI will reach SI level just by sitting in the AI researchers' big
computer.

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y

