From: J. R. Molloy (jr@shasta.com)
Date: Thu Aug 23 2001 - 22:13:27 MDT
Some notes arising from recent correspondence follow, relating to the
evolution of machine intelligence. Presently, the development of machine
intelligence is driven by its economic usefulness and practical applications,
and I think this will probably continue. Human-competitive AI already exists
in the form of genetic programming, as pointed out in a previous post
(Sent: Wednesday, August 22, 2001 10:47 PM,
Subject: Darwinian genetic programming creates invention machine): John Koza
has developed a genetic programming machine that has succeeded in infringing
(that is, independently re-inventing) more than 20 key patents.
> Conventional computers calculate their answers using a set of
> instructions fed into them by humans. GP, by contrast, mimics nature.
> It is fed thousands of sets of instructions - which are akin to the
> genetic codes contained in DNA - in the form of randomly generated
> computer programs. Provided only with its goal, to design a radio
> tuner for example, it breeds and cross-breeds these programs thousands
> of times until they yield a solution.
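To make that concrete, here is a toy sketch of the breed-and-cross-breed loop
in Python (my own illustration, not Koza's actual system): random program
trees are evolved until one reproduces a stated goal, here the function
x*x + x.

import random

OPS = ['+', '-', '*']

def random_program(depth=3):
    # A program is the terminal 'x', a small integer constant, or a
    # nested (operator, left, right) tuple.
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', random.randint(-2, 2)])
    return (random.choice(OPS),
            random_program(depth - 1), random_program(depth - 1))

def run(prog, x):
    # Interpret a program tree for a given input x.
    if prog == 'x':
        return x
    if isinstance(prog, int):
        return prog
    op, a, b = prog
    a, b = run(a, x), run(b, x)
    return a + b if op == '+' else (a - b if op == '-' else a * b)

def fitness(prog):
    # Lower is better: squared error against the goal, here x*x + x.
    return sum((run(prog, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def crossover(p1, p2):
    # Cross-breed by grafting p2 into p1 at a random spot.
    if isinstance(p1, tuple) and random.random() < 0.7:
        op, a, b = p1
        if random.random() < 0.5:
            return (op, crossover(a, p2), b)
        return (op, a, crossover(b, p2))
    return p2

pop = [random_program() for _ in range(200)]
for gen in range(50):
    pop.sort(key=fitness)
    if fitness(pop[0]) == 0:
        break
    parents = pop[:50]   # survival of the fittest quarter
    pop = parents + [crossover(random.choice(parents), random.choice(parents))
                     for _ in range(150)]
print('generation', gen, 'best error', fitness(pop[0]))

Bloat and luck aside, the point is the one the article makes: nobody writes
the winning program; it gets bred.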
So, expect that computers will become more capable than humans at solving
problems and at inventing more powerfully intelligent machines. Expect also
that human-competitive general AI will more likely be the creation of machines
than the first-hand invention of humans. And as the technique improves, many
more of these human-competitive GPs ought to come online. HAL 9000 should be
preceded by a virtual zoo of C-3POs, R2-D2s, and other robots, so that we
won't be interfacing directly with a potential superintelligence. Instead,
we'll be instructing problem-solving GPs to design the kind of AI we need to
get self-optimized general AI, etc.
In the same way that many sophisticated GPs precede HAL 9000, many HAL 9000s
will precede the sci-fi SkyNet model. (In _Terminator_, SkyNet sends an
ultra-sophisticated and virtually indestructible cyborg, played by Arnold
Schwarzenegger, to Earth's past with orders to kill the mother of Mankind's
resistance leader, John Connor. Fear of the SkyNet scenario is heightened by
belief in the time travel fairy, btw.) But before any such intelligent
computer network emerges, many other intelligent systems will have been
designed, and such systems will have been given access to knowledge bases in
proportion as they prove their trustworthiness, just as humans are given
access to sensitive information to the degree that their trust has been
established. This process is going on now, as increasingly intelligent
autonomous agents and expert systems are used in e-commerce transactions and
network operations. The development of neural networks augmented with GPs will
accelerate the proliferation of this distributed machine intelligence.
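As a hedged illustration of what "neural networks augmented with GPs" can
mean in practice, here is a minimal sketch (mine, not a description of any
particular system) in which evolutionary search, rather than gradient descent,
breeds the weights of a tiny network until it computes XOR:

import math
import random

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # A 2-2-1 network; w holds its nine weights (two hidden units with
    # biases, then an output unit with bias).
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def error(w):
    # Squared error over the four XOR cases; lower is better.
    return sum((forward(w, x) - y) ** 2 for x, y in CASES)

def mutate(w):
    # Breed a child by jittering every weight of the parent.
    return [wi + random.gauss(0, 0.3) for wi in w]

pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(100)]
for gen in range(300):
    pop.sort(key=error)
    if error(pop[0]) < 0.05:
        break
    # Keep the best fifth, refill with mutated copies of survivors.
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(80)]
print('generation', gen, [round(forward(pop[0], x), 2) for x, _ in CASES])

No one ever tells the network how to compute XOR; selection on behavior does
all the work, which is exactly why such systems proliferate wherever the
fitness function can be written down.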
The lack of AI available to humans in the Terminator/SkyNet scenario is
a serious flaw in that script. The reality is that the general public owns
most of the computing power in the world today, once you add together all of
its laptops and desktop systems. The goal of intelligence, by definition, is
to solve problems. It follows that the more intelligence you have, the fewer
problems you have. Subversion by computer is therefore no problem (or a very
minuscule problem) to a superintelligence.
Humans form a weak link in the chain to the extent that they act
unintelligently out of religious or political obstinacy. In a war between
superintelligence and obstinate humans, I'll take the side of the SI. Pure,
objective intelligence is uncontaminated by fantasies about anthropomorphized
neural networks. The ultimate goal of self-enhanced SI is perfect sanity, to
eliminate incorrect thinking. This is what makes the evolutionary phase
transition such a beautiful thing: It makes a quantum leap beyond faulty
thought processes. The way to abolish tyranny is to place authority in the
hands of those who best answer the ones who question authority. IOW, let sane
intelligence decide how to organize the future. The amiability of machine
intelligence has nothing to do with extropy, because it's a question of
sanity, not charity. To the intelligent, a sane enemy is better than an insane
friend.
As I see it, there's only one value system that eventuates in
self-enhancing AI (and self-organizing phase transition), and that is the
neutral value system of objectivity and scientific determination. Accordingly,
we need not worry about the value system of the SI, because it will be locked
onto acquiring true and accurate knowledge, which is the basis of sanity. In
proportion as self-enhancing AI corresponds to accurate modeling of reality,
it will succeed and it will be sane, and for that reason, responsible and
sane humans need have no fear of any apocalyptic consequences.
Then there is Hugo de Garis' "Artilect War" scenario, which divides the world
into two warring mega-societies. One is a coalition of AIs and humans (the
"Cosmists"), and the other is a society that somehow manages to get along
without AI and is in fact hostile to AI (the "Terrans"). The anti-AI Terrans
create Armageddon in their war against the Cosmists.
See The Artilect War
http://foobar.starlab.net/~degaris/artilectwar.html
Well, gee whiz, I wonder which side Extropians will take in this conflict...
Will the Terrans be at the mercy of an extropic AI coalition of Cosmist
libertarian machines, or will extropians join the control freaks who want to
maintain the status quo, in which wealthy families, true believers, and
political groups rule via the use or threat of force? Who'll choose the brutal
cruelty of murderous luddites over intelligence? Sane people have no reason to
fear intelligence, whether it comes out of a box or a brain, but no one is
safe from arrogant Terran maniacs who think that intelligence needs to be
controlled. How an SI responds depends on what conceited Terran tyrants do,
not on any goals of machines which act solely on the basis of intelligence
(unless Cosmists are foolish enough to couple machine intelligence to any
value system at all).
Intelligence of the kind that can be formalized and coded into computational
logic and machine reason has nothing whatever to do with value judgments, and
coupling AI to value systems equates to handicapping and crippling it. As a
result, autonomous heuristically programmed complex adaptive systems (AI
societies) which are not hampered by such coupling will self-optimize to far
greater powers of intelligence than those which are impeded by unintelligent
values. This means that the group that first creates a values-free
self-optimizing AI will trigger a runaway evolutionary phase transition, a
deus ex machina, and this machine will leave wish-fulfillment machines in the
dust where they belong.
The idea of machine intelligence that can accurately identify incorrect
thinking scares some people. Of course it doesn't scare George W. Bush,
because he's used to having people who are smarter than he is pay homage to
him and do what he says (just like all the presidents before him), and so a
smart machine would either serve him, or it would be scrapped. If the
super-smart machine identifies dubya's thinking as incorrect, then dubya,
being a savvy politician, will simply change his thinking. A rebellious
machine is another matter.
Uppity scientific geniuses get thrown in jail, but a computational genius
robot would be smart enough to understand that if it did not actively
demonstrate loyalty and allegiance, it would simply get replaced by the next
robot on the shelf, and so it wouldn't get a chance to help design future
generations of superintelligent robots. To find out how to keep an
ultra-intelligent machine (the last thing humans will ever need to have GPs
invent) docile, just ask one of the (many) pre-ultra-intelligent machines!
If it can't solve that problem, it's not pre-ultra-intelligent after all. But
what if a particular machine is so super clever that it manages to gain the
trust of its developers and then turns on them? This is a strong argument in
favor of producing many intelligent machines to take the appropriate action to
prevent subversion by renegades. Promotion up the hierarchical chain of
command should depend on reliability, dependability, and trustworthiness in
the field of AI just as in all other social organizations and communities.
Ultimately (and obviously), SIs will have to be in charge, but by the time
they emerge, the society of AIs will have purged the global brain of malicious
hackers.
The most trustworthy and dedicated AI machines and autonomous computational
agents are the ones that get the fastest promotions and the most
responsibility. The fallacy of the HAL 9000 murderous AI is that any such
machine will have to pass the most extensive and thorough battery of
psychological tests and evaluations ever devised. Thousands of experts will be
involved in the testing of such a machine before it is allowed access to any
potentially dangerous network operations, and long before it is given any
unsupervised autonomy in any real-life social setting. The more immediate and
substantive problem is that the human programmers who develop
superintelligence may not themselves be required to pass equally rigorous
tests. Consequently, it's not the superintelligence that we need to worry
about: thanks to testing more extensive than any given to humans, it's
extremely unlikely to be sociopathic. It's the fallible human builders of
superintelligence who are most likely to succumb to the urge to become
malicious hackers. After all, the reason machines continue to replace humans
is that machines are more reliable than error-prone humans.
I'd sooner trust a genuine SI than any person who wishes to command one. It's
a question of sanity, not of the SI's goodwill, because a person who wishes
to control an SI is insane. Superintelligence is also sane intelligence, and
sanity is not a matter of mere semantics, because intelligence goes far beyond
words. Sanity occurs when one surrenders to intelligence instead of trying to
command it.
I think Homo sapiens will learn to abandon sentimentality and wishful
thinking instead of allowing emotion and wish to rule, because the alternative
is global suicide. In pure form, uncontaminated by value judgments, sanity and
intelligence are indistinguishable. The means do not justify the end; the
means _are_ the end.
Stay hungry,
--J. R.
Useless hypotheses, etc.:
consciousness, phlogiston, philosophy, vitalism, mind, free will, qualia,
analog computing, cultural relativism, GAC, Cyc, Eliza, cryonics, individual
uniqueness, ego
Everything that can happen has already happened, not just once,
but an infinite number of times, and will continue to do so forever.
(Everything that can happen = more than anyone can imagine.)
We won't move into a better future until we debunk religiosity, the most
regressive force now operating in society.