From: J. R. Molloy (jr@shasta.com)
Date: Wed Sep 06 2000 - 17:36:21 MDT
Eugene Leitl wrote:
> So you've got a billion AIs gyring and gimbling in their wabe, and how
> exactly do you intend to supervise them, and guarantee that their
> behaviour will be certifiably not hostile in all possible situations?
Not all the AIs have to run factories and play the markets. Sacrifice a few
million of them to supervise the others, i.e., make some of them police, some
psychologists, a few executioners, [insert your favourite enforcement agent
here], so that these AIs can monitor the entire population. Only the ones who
perform above average as supervisors get to reproduce. The very best line of
supervisors gets appointed as governors, executive officers, and overseers of
the AI meritocracy.
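For the programmers on the list, here's a back-of-the-envelope Python sketch
of that breeding scheme. Everything in it (the "diligence" trait, the noisy
scoring, the numbers) is my own invention, purely to illustrate the
mechanism: score every AI on supervision, let only the above-average ones
reproduce, appoint the best as governor.

import random

def supervise(ai):
    # Hypothetical fitness measure: how well this AI supervises the
    # others. A noisy stand-in, not a real benchmark.
    return ai["diligence"] + random.gauss(0, 0.1)

def generation(population):
    # Score every AI on its supervisory performance.
    scored = [(supervise(ai), ai) for ai in population]
    mean = sum(score for score, _ in scored) / len(scored)
    # Only the above-average supervisors get to reproduce.
    parents = [ai for score, ai in scored if score > mean]
    # Each parent leaves two offspring with slightly mutated traits.
    offspring = [{"diligence": p["diligence"] + random.gauss(0, 0.05)}
                 for p in parents for _ in range(2)]
    # The very best line gets appointed governor.
    governor = max(scored, key=lambda pair: pair[0])[1]
    return offspring, governor

population = [{"diligence": random.random()} for _ in range(1000)]
for _ in range(20):
    population, governor = generation(population)
print("governor's diligence after 20 generations:",
      round(governor["diligence"], 3))

Run it and the average diligence drifts upward generation by generation,
which is all the scheme amounts to: truncation selection on supervisory
merit.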
> You never got bitten by a dog? A maltreated dog? A crazed, sick dog?
> Never, ever?
I've never been bitten by a dog, but I've heard about other people getting
bitten. Mailmen get bitten from time to time, but it's almost never, ever
fatal. Dog-level AI wouldn't be all that dangerous. Whether it takes five
years to go from dog-level AI to human-level AI, or 500 million years as it
took natural intelligence to climb from dog level to human level, evolution
provides behavior modification along the way that can accommodate the higher
levels of intelligence. (Although one might doubt this when considering the
5,000 wars humans have fought in the last 3,000 years.)
Speaking of wars, the first fully functional dog-level robots will probably be
dogs of war.
> How strange, I thought it was the other way round. Statistical
> properties, emergent properties, all things which you don't have to
> deal with when all you have is a single specimen.
I think it would be as foolish to try to bring a single specimen up to
human-level AI as it would be to spend all of a nation's resources educating
just one human. Equal access to education makes a population more stable,
more efficient, and more successful. When you educate only the princes, the
king can be threatened (as many have been in monarchical societies). There's
safety in numbers.
> Of course, there is no such thing as a single specimen, unless the
> thing instantly falls into a positive autofeedback loop -- same thing
> happened when the first autocatalytic set nucleated in the prebiotic
> ursoup. But then you're dealing with a jack-in-the-box SI, and all is
> moot, anyway.
If all of the millions of dog-level AIs fall into positive autofeedback loops
(don't forget, some are guard dogs, some are drug-sniffing dogs, some are
seeing-eye dogs for the blind, some are sheep dogs, none are merely pets),
that would be wonderfully beneficial, because it would make it much easier to
train them. Nobody wants a really stupid dog. The most intelligent animals
(including humans) are generally the most cooperative and the easiest to
train. The dangerous criminal element is almost always of average or
below-average intelligence. If the dog-level AI instantly falls into a
positive autofeedback loop (about as likely as a human falling into a
positive biofeedback loop and becoming a super genius), you simply ask your
family guard dog (now an SI) to continue its line of duty with hugely
expanded responsibilities and obligations to protect the family against
hackers, attackers, and intruders of every kind.
The first thing your faithful SI pooch does might surprise you with its
brilliance. If you have money in the stock market, the SI would instantly
check whether your SI stockbroker's dog has acted to safeguard your
investments from any rogue market manipulators. The beautiful part is that
you wouldn't have to roll up a newspaper to chastise the SI doggie, because
it would have more reason to offer behavior modification to *you*. What used
to be your canine security guard has transformed into your own personal
genie.
If you don't trust high IQ, then you'd better kick Eliezer off this list before
he scuttles the enterprise.
[Don't forget dog-level AI spelled backwards is IA level-god.]
> Yeah, *right*.
Stupidity, rather than too much intelligence, is the cause of most of the
world's problems. Just between us, friends, I'd place a hell of a lot more trust
in an SI than in any of the current candidates for political office. It seems to
me that fear of malevolent SIs is misplaced. Having grown up (instantly) in a
free-market environment, the SIs would first have to make some money. They
would have to curry favor with the right people (and more importantly with
the right SIs) to acquire power. With a sufficiently high number of SIs,
they'd instantly figure out how best to create a stable and prosperous SI
community.
> It is exactly the prevalence of such profoundly unreasonable, utterly
> devoid of basic common sense expectations that makes me consider
> breeding an AI a highly dangerous business. Because there is such a
> demand for the things, and because so many teams are working on them,
> someone will finally succeed. What will happen then is essentially
> unknowable, and most likely irreversible, and that's why we should be
> very very careful about when, how and the context we do it in.
Life is profoundly unreasonable, utterly devoid of basic common sense
expectations. Life doesn't want to breed an AI. Life wants to breed millions of
them simultaneously and instantly.
Life is really weird that way. For example, Sasha Chislenko had a much better
life than I have, yet he apparently killed himself, and I'm still plugging
along, optimistically hoping for things to get better. [Perhaps he found life
too time-consuming.] Now, if a smart and savvy guy like him decides to chuck
it in while underachieving jerks like me keep merrily cruisin' along, hey!
maybe I'm missing something; perhaps he knew something (or his depression
revealed something to him) that I don't know. Maybe I should join the
billions of grateful dead folks.
Anyway, we all have to die sometime. Why not die bringing a higher life form
into existence? Look how many fools throw their lives away on idiotic notions
like patriotism and honor. I think Moravec was brilliant when he said (in Palo
Alto) that the most responsible thing we can do is to build artificial life,
especially if it surpasses us in cognitive capability. The riskiest thing you
can do is to not take any risks. The best insurance against some mad scientist
building an AI is to have thousands of responsible and reasonable scientists
building millions of AIs.
Lots of people didn't want humans to build nuclear weapons. Too dangerous. But
if it weren't for nuclear weapons, I'd likely have died in my twenties fighting
a war with the USSR or some other maniacal regime. Nuclear weapons have been an
effective deterrent to another world war. Multitudes of SIs will be perhaps the
only deterrent to fanatic religious retards (in the Middle East and dozens of
other hot spots) dragging the world into another global conflagration.
> Socialize a god.
I have a better idea: Let a million gods socialize us.
--J. R.
"He that will not sail till all dangers are over must never put
to sea. " --Thomas Fuller
"Every act of creation is first of all an act of destruction."
--Pablo Picasso
"Begin at the beginning and go on till you come to the end; then stop."
--Lewis Carroll, from Alice in Wonderland
"Do not be too timid and squeamish about your actions. All life
is an experiment." -- Ralph Waldo Emerson