From: Thomas McCabe (pphysics141@gmail.com)
Date: Tue Jan 29 2008 - 13:33:54 MST
Intelligence isn't everything
An AI still wouldn't have the resources of humanity.
Looking at early humans, one wouldn't have expected them to rise to a
dominant position based on their nearly nonexistent resources and only
a mental advantage over their environment. All advantages that had so
far been developed had been built-in ones - poison spikes, sharp
teeth, acute hearing - while humans had no extraordinary physical
capabilities. There was no reason to assume that a simple intellect
would help them out as much as it did.
An advanced AI, by contrast, has at its disposal a mental advantage
over its environment and easy access to all the resources it can
hack, con or persuade its way to - potentially a lot, given that
humans are easy to manipulate. If an outside observer
couldn't have predicted the rise of humanity based on the information
available so far, and we are capable of coming up with plenty of ways
that an AI could rise into a position of power... how many ways must
there be for a superintelligent being to do so, that we aren't capable
of even imagining?
* Bacteria and insects are more numerous than humans.
o Possible rebuttals:
- Sheer population size isn't a reasonable measure of success. We
wouldn't consider it a success if the Earth were so jam-packed with
humans that there was barely enough food. Indeed, overpopulation is
already considered a serious problem in many countries, particularly
in non-industrialized nations where birth control isn't readily
available.
- Bacteria and insects took hundreds of millions of years to grow and
adapt to the huge range of environments they currently inhabit. Modern
man has been around for less than .1% of that timespan, yet we have
increased our numbers faster than any other species in history.
- Bacteria, insects, fungi, protists, and other small organisms have
always existed in much larger numbers than mammals, regardless of how
successful the mammals were, primarily because of size.
- Organisms with short life cycles reproduce much faster than
organisms with long cycles (r-strategy vs. K-strategy). If the human
race reproduced that quickly, individual humans would also have to
die much faster than they do now to keep the population in balance.
* Superminds won't be solving The Meaning Of Life or breaking the
laws of physics.
o Rebuttal synopsis: The Meaning Of Life has already been
solved (link). As for the laws of physics, we can far exceed the
bounds of today's civilization without breaking any of them. There's
no physical law saying humans have to get cancer or travel at .00001%
of c.
* Just because you can think a million times faster doesn't mean
you can do experiments a million times faster; super AI will not
invent super nanotech three hours after it awakens.
o Rebuttal synopsis: All of the world's major laboratories
are now computerized, and computers aren't secure from human hackers,
much less superintelligent AIs.
* Machines will never be placed in positions of power.
o (Kurzweil paraphrase): If, tomorrow, all the computers in
the world shut down, the entire planet would be thrown into utter
chaos. Basic utilities like electricity, cable, phone, etc. would all
fail. Most cars wouldn't even start. You couldn't get paid, or use a
credit card, or take money out of a bank. The trillions of dollars
transferred globally on a daily basis would come to a screeching halt.
The government and the military would be crippled, unable to
communicate or take action. All investments, from stocks to bonds to
mutual funds, would suddenly disappear. And on and on it goes...
On an Intelligence Explosion
There are limits to everything. You can't get infinite growth.
For one, this is mainly an objection against the Accelerating Change
interpretation of the Singularity, most famously advanced by Ray
Kurzweil. When talking about the Singularity, many people are in fact
referring to the "Intelligence Explosion" or "Event Horizon"
interpretations, which are the ones this article is mainly concerned
with. Neither of these requires infinite growth - they only require us
to be able to create minds which are smarter than humans. Secondly,
even Kurzweil's interpretation doesn't involve anything infinite; he
has been quoted as saying that "there are limits, but they are not
very limiting".
(Add also: Infinite growth isn't necessary; the potential for finite
growth is enormous. Even right here on Earth, there's 6 * 10^24 kg of
available matter; almost none of it has ever been used for anything.
Could also link to different articles estimating the physical limits
of computation, or the limits of processing power theoretically
obtainable with technologies currently under development.)
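(As a rough illustration of the above, here is a back-of-envelope
sketch in Python. The inputs - Landauer's bound at room temperature
and Bremermann's limit, applied to the Earth-mass figure above - are
my own assumptions, not from the original text, and the results are
order-of-magnitude only.)

import math

# Assumed inputs (my figures, not from the original post):
BOLTZMANN = 1.380649e-23   # Boltzmann constant, J/K
ROOM_TEMP = 300.0          # temperature in kelvin, roughly room temperature
EARTH_MASS = 5.97e24       # kg, the "6 * 10^24 kg" figure mentioned above
BREMERMANN = 1.36e50       # bit operations per second per kg (roughly c^2 / h)

# Landauer's principle: minimum energy needed to erase one bit at temperature T.
landauer_j_per_bit = BOLTZMANN * ROOM_TEMP * math.log(2)

# Bremermann's limit: maximum rate of computation for a given mass.
earth_limit_ops_per_s = BREMERMANN * EARTH_MASS

print("Landauer bound: ~%.2e J per bit erased" % landauer_j_per_bit)
print("Bremermann bound for Earth's mass: ~%.2e bit ops/s" % earth_limit_ops_per_s)

(Even if these theoretical bounds are off by many orders of
magnitude, the gap between them and present-day hardware makes the
point: there are limits, but they are not very limiting.)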
* A smarter being is also more complex, and thus cannot
necessarily improve itself any faster than the previous stage -- no
exponential spiral.
o Does anyone have counter-evidence? This looks like a real
possibility. - Tom
* Computation takes power. Fast super AI will probably draw
red-hot power for questionable benefit. (Also, so far fast serial
computation takes far more power than slow parallel computation
(brains).)
o Rebuttal synopsis: Power consumption per FLOP has fallen
along with Moore's Law, like every other parameter, and there's no
reason to believe this trend will stop. Human brains consume only
about 1% of the power currently used by human civilization (a rough
back-of-envelope check of this figure follows after this list).
* Giant computers and super AI can be obedient tools as easily as
they can be free-willed rogues, so there's no reason to think humans
plus loyal AI will be upstaged by rogues. The bigger the complex
intelligence, the less it matters that one part of the complex
intelligence is a slow meat-brain.
o I don't understand this objection. - Tom
+ It's basically saying that AI can be used for good
as well as bad, and there's no reason to assume that the bad designs
will beat the good ones. Might be useful to say something about a
first-mover advantage, as well as about the fact that it's incredibly
hard to get right a mind design whose wishes are anywhere near what
humans would want them to be... - Kaj
* Biology gives us no reason to believe in hard transitions or
steep levels of intelligence. Computer science does, but puts the
Singularity as having happened back when language was developed.
o Rebuttal synopsis: The average human has a frontal cortex
only around six times larger than that of the average chimpanzee, and
yet the result of that change has been huge (civilization, nuclear
bombs, etc.).
* Strong Drexlerian nanotech seems to be bunk in the minds of most
chemists, and there's no reason to think AIs have any trump advantage
with regard to it.
o Rebuttal synopsis: Nanosystems, Eric Drexler's 1992
technical book on nanotechnology, has never been found to contain a
significant error (link).
* There is a fundamental limit on intelligence, somewhere close to
or only slightly above the human level. (Strong AI Footnotes)
o It seems that counter-evidence exists, but I haven't seen it. - Tom
+ One could note that simply making faster processors
or larger stores of memory will make a mind more intelligent, and that
our brains are nowhere near the physical limits for either. Like here,
for instance. - Kaj
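(Back-of-envelope check promised above, for the claim that human
brains consume about 1% of civilization's power. The inputs - roughly
20 W per brain, a world population of about 6.7 billion, and roughly
15 TW of primary power consumption - are assumed figures of mine, not
from the original text.)

# Assumed inputs (my figures, not from the original post):
WATTS_PER_BRAIN = 20.0       # typical resting power draw of one human brain
POPULATION = 6.7e9           # rough world population in 2008
WORLD_POWER_WATTS = 1.5e13   # ~15 TW of world primary power consumption

brain_power_watts = WATTS_PER_BRAIN * POPULATION
fraction = brain_power_watts / WORLD_POWER_WATTS

print("All human brains combined: ~%.0f GW" % (brain_power_watts / 1e9))
print("Share of civilization's power use: ~%.1f%%" % (fraction * 100))

(With these inputs the figure comes out just under 1%, consistent
with the rebuttal synopsis above.)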
On Intelligence
* You can't build a superintelligent machine when we can't even
define what intelligence means.
o Rebuttal synopsis: Even if we can't define intelligence
precisely yet, in practical terms, we know what it does: intelligence
provides optimization power to reshape the world. There are plenty of
things that could give one more optimization power, for instance,
faster processing and more memory.
Intelligence is not linear or one-dimensional, so talking about
greater- or below-human intelligences doesn't make sense.
Talking about human-equivalent AI is pointless. A computer mind would
of necessity be much smarter than humans in some fields: for instance,
in the field of doing multiplication or addition. Creating a truly
"human-equivalent" AI would require needless work and involve
essentially crippling the AI.
It is true that intelligence is hard to measure with a single, linear
variable. It is also true that it will probably take a long time
before there is truly human-equivalent AI, just as there is no
bird-level flight: humans will have their own strengths, while AIs
will have theirs. A simple calculator is already
superintelligent, if speed of multiplication is the only thing being
measured.
However, there are such things as rough human-equivalence and rough
below-human equivalence. No two human adults have exactly the same
capabilities, yet we still speak of adult-level intelligence. A
calculator might be superintelligent in a single field, but obviously
no manager would hire a calculator to be trained as an accountant, nor
would he hire a monkey. A "human-level intelligence" simply means a
mind that is roughly capable of learning and carrying out the things
that humans are capable of learning and doing. It does not mean that
we'd be aiming to build an AI with exactly the same capabilities as a
human mind. Likewise, a "superhuman intelligence" is a mind that can
do all the things humans can, at least at a roughly equivalent level,
as well as being considerably better at many of them.
- Tom