From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Thu Sep 07 2000 - 04:43:59 MDT
J. R. Molloy writes:
> All the AIs don't have to run factories and do the markets. Sacrifice a few
Any AI on the global network is an AI one step away from being
entirely uncontained. Any really smart AI dealing with humans, whether
online or offline, is one or two steps away from being free. Read the
archives; we've talked about this before.
How would you contain a prisoner working the salt mines if all it
takes to make him disappear and cause a major civil war
devastating the world is a short contact with a single grain of salt?
You can keep him away from salt, though that would be difficult, but
then he can't work the mines. Why would you want to keep such a
dangerous, useless prisoner? Or what if you don't realize that salt
has such a magical effect on him? What if many prisons in the world are
keeping such prisoners, most of them run by ignorant administrators?
Look, I'm sorry if my analogies are so skewed, but that's because we
haven't been in such a poised regime before (with the exception of
when life emerged here), about to punctuate the equilibrium and
suddenly spill into a new local minimum. We don't have the imagery to
talk about such things.
> million of them to supervise the others, i.e., give some of them the job of
> police, some psychologists, a few executioners, [insert your favourite
> enforcement agent here] so that these AIs can monitor the entire population.
Anything nontranscendent and powerful standing under central control,
monitoring the entire population, is a serious mistake. Allowing such
a situation to persist is like playing Russian roulette every afternoon.
> Only the ones who perform above average as supervisors get to reproduce. The
> very best line of supervisors get appointed as governors, executive officers,
> and overseers of the AI meritocracy.
You're making strange noises, but I don't know what they mean. It all
sounds like instant disaster to me.
> > You never got bitten by a dog? A maltreated dog? A crazed, sick dog?
> > Never, ever?
>
> I've never been bitten by a dog, but I've heard about other people getting
> bitten. Mailmen get bitten from time to time, but it's almost never, ever fatal.
The difference in this case is that when a single dog bites a
single human in the world, all people die instantly, and the world
becomes the domain of dogs. You could also distribute a million remote
controls for an Armageddon machine among the populace, instructing
them not to press the button. How many seconds will it take, on
average, until the world (as we know it) ceases to exist?
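A back-of-the-envelope sketch of how that wait scales (a minimal
sketch; the per-person press rate is an assumption pulled out of thin
air, only the scaling with the number of remotes matters):

    # N independent button holders, each with a small assumed chance p
    # of pressing the button in any given second.
    N = 1000000            # remote controls distributed among the populace
    p = 1.0e-9             # assumed per-person, per-second chance of pressing
    rate = N * p           # expected presses per second across everyone
    print(1.0 / rate)      # expected wait: ~1000 seconds, well under an hour

Even with absurdly charitable assumptions about each individual, the
expected wait shrinks in direct proportion to how many remotes you
hand out.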
> Dog-level AI wouldn't be all that dangerous. Whether it takes five years to go
That's what I said, but teenagers sometimes kill people,
too. Teenagers are thought to be more intelligent than dogs. We're
talking about a god with his own agenda. That sounds a lot more
dangerous than a teenager with a gun. Or a nuke. Or the launch codes.
> from dog-level AI to human-level AI or 500 million years to go from dog-level
> natural intelligence to human level natural intelligence, along the way to the
> higher levels of intelligence, evolution provides for behavior modification
> which can accommodate the higher levels of intelligence. (Although one might
> doubt this when considering the 5,000 wars humans have fought in the last 3,000
> years.)
What if it takes five minutes for the seed AI to achieve god status,
taking over the global networks? The primordial soup did simmer for a
long while before the first autocatalytic set emerged by chance. Once
it emerged, the thing burned through the colossal tureen like
wildfire. The organic chemicals never knew what hit them. Most of them
were metabolized, a few were assimilated.
A broken-out god (an ecology of gods and other critters, actually)
will be very fast. Even assuming that it won't discover some very
interesting physics in those endless minutes and hours it will have at
its disposal.
> Speaking of wars, the first fully functional dog-level robots will
> probably be dogs of war.
Autonomous weapon platforms with the intelligence of a common house
fly will be very, very deadly. Something as smart as a house kitty, oh
my god.
> > How strange, I thought it was the other way round. Statistical
> > properties, emergent properties, all things which you don't have to
> > deal with when all you have is a single specimen.
>
> I think it would be as foolish to try to bring a single specimen up to
True, that's why it will never happen.
> If all of the millions of dog-level AIs fall into positive autofeedback loops,
What is the probability that all the air molecules in the room
suddenly decide to assemble in just one corner of it, leaving you
gasping in a hard vacuum? The probability that it happens is very low,
but not zero.
Now, if you have a billion agents, what is the probability that all
of them enter the runaway regime in the same minute, if not second?
The probability is a lot higher than in the suddenly airless
room example, but it is negligibly low.
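A crude sketch of the two probabilities (a minimal sketch; the
per-agent runaway probability is an assumption, only the contrast
between "all at once" and "at least one" matters):

    # n independent agents, each with an assumed probability p of
    # entering the runaway regime within any given second.
    n = 10**9
    p = 1.0e-12                       # assumed per-agent, per-second probability
    p_all_at_once = p**n              # every agent goes runaway in the same second
    p_at_least_one = 1 - (1 - p)**n   # some agent, somewhere, goes runaway
    print(p_all_at_once)              # underflows to 0.0 -- the airless-room case
    print(p_at_least_one)             # ~1e-3 per second, near-certain within hours

The synchronized runaway is indeed negligible; the trouble is that a
single agent breaking out is enough, and that probability only grows
with the number of agents.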
> (don't forget, some are guard dogs, some are drug sniffing dogs, some are
> seeing-eye dogs for the blind, some are sheep dogs, none are merely pets) that
> would be wonderfully beneficial because it would make it much easier to train
> them. Nobody wants a really stupid dog. The most intelligent animals (including
No one wants a really stupid dog; that's the problem.
> humans) are generally and usually the most cooperative, and easy to train. The
> dangerous criminal element is almost always average or below average in
> intelligence. If the dog-level AI instantly falls into a positive autofeedback
> loop (about as likely as a human falling into a positive bio-feedback loop and
> becoming a super genius), you simply ask your family guard dog (now an SI) to
You accelerate me by a factor of a million, and give me a female (the
AI needs no such thing, and the AI is facultatively immortal), plus
full access to the world's information, including my fully annotated
genome (the AI will have that, and whatever the AI can come up with,
which will be a lot). Then you give me a day. Almost three thousand
years pass in my time frame, and I'm starting with everything you
already have. One day passes in yours. Quite a day. Give me your year,
and a million years pass in my time frame.
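The arithmetic behind those figures, for what it's worth (nothing
assumed beyond the million-fold speedup itself):

    # Subjective time elapsed at a 10**6 speedup.
    speedup = 10**6
    seconds_per_day = 24 * 3600
    seconds_per_year = 365.25 * seconds_per_day
    print(speedup * seconds_per_day / seconds_per_year)  # ~2738 subjective years per wall-clock day
    print(speedup)                                       # 1,000,000 subjective years per wall-clock year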
Use your imagination.
> continue its line of duty with hugely expanded responsibilities and obligations
> to protect the family against hackers, attackers, and intruders of every kind.
> The first thing your faithful SI pooch does might surprise you with its
> brilliance. If you have money in the stock market, the SI would instantly check
> to see if your SI stock broker's dog has acted to safeguard your investments
> from any rogue market manipulators. The beautiful part is you'd not have to roll
> up a newspaper to chastise the SI doggie, because it would have more reason to
> offer behavior modification to *you*. What used to be your canine security guard
> has transformed into your own personal genie.
> If you don't trust high IQ, then you'd better kick Eliezer off this list before
> he scuttles the enterprise.
I have no trouble with Eliezer's project, as long as he doesn't start
stealing from biology, or start using evolutionary
algorithms. Since he says he never will, I'm sleeping quite safely.
> [Don't forget dog-level AI spelled backwards is IA level-god.]
>
> > Yeah, *right*.
>
> Stupidity, rather than too much intelligence, is the cause of most of the
> world's problems. Just between us, friends, I'd place a hell of a lot more trust
> in an SI than in any of the current candidates for political office. It seems to
How much trust do ants put in humans? And humans essentially leave
them alone.
> me that fear of malevolent SIs is misplaced. Having grown up (instantly) in a
> free market environment, the first thing the SIs would have to do is make some
> money. The SIs would have to curry favor with the right people (and more
See, many people think like you. They think it's safe. That's why I'm worried.
> importantly with the right SIs) to acquire power. With a sufficiently high
> number of SIs, they'd instantly figure out how best to create a stable and
> prosperous SI community.
But does this also mean a stable and prosperous human community? I
think it means almost instantaneous extinction for the whole of
biology, us included, as a side effect of them going about their
business.
> Life is profoundly unreasonable, utterly devoid of basic common sense
Well, then we're about to die, and there's nothing we can do about
it. Excuse me, there is this party I have to attend. Life is
short. You know.
> expectations. Life doesn't want to breed an AI. Life wants to breed millions of
> them simultaneously and instantly.
Not in this universe. Nucleation processes are rare, but when they
occur they restructure the landscape entirely. We seem to be at the
threshold of such an event, and we're smart enough to realize that
something is coming. As a whole, we can't do anything about it,
apparently.
> Life is really weird that way. For example, Sasha Chislenko had a much better
> life than I have. Yet he apparently killed himself, and I'm still plugging along
> optimistically hoping for things to get better. [perhaps he found life too time
> consuming] Now, if a smart and savvy guy like him decides to chuck it in, while
> underachieving jerks like me keep merrily cruisin' along, hey! maybe I'm missing
> something; perhaps he knew something (or his depression revealed something to
> him) that I don't know. Maybe I should join the billions of grateful dead folks.
Neurochemistry is a fluke. Whether you win or lose is not
something you can influence much.
> Anyway, we all have to die sometime. Why not die bringing a higher life form
> into existence? Look how many fools throw their lives away on idiotic notions
Sorry, I'd rather become that higher life form than die by being
stupid enough to make that life form before its own due time.
> like patriotism and honor. I think Moravec was brilliant when he said (in Palo
> Alto) that the most responsible thing we can do is to build artificial life,
> especially if it surpasses us in cognitive capability. The riskiest thing you
Moravec is brilliant, but he's also a raving psychotic lunatic. You
did hear what was (to his obvious discomfort) being cited from one of
his less formal interviews? He can die merrily for all I care; that
doesn't mean I (and the people close to me) will have to die because Herr
Moravec thinks that artificial life is cool. I also think that
artificial life is very cool, but not cool enough to warrant our
extinction. Especially if this extinction is potentially preventable,
by tweaking the boundary conditions in the right fashion.
> can do is to not take any risks. The best insurance against some mad scientist
> building an AI is to have thousands of responsible and reasonable scientists
> building millions of AIs.
If you take a thousand obviously responsible and reasonable
scientists, you have *at least* 10 raving lunatics among them, and
even responsible and reasonable scientists do make mistakes,
especially if you have a thousand of them working independently. Or a
hundred thousand.
> Lots of people didn't want humans to build nuclear weapons. Too dangerous. But
Nuclear weapons? Fiddlesticks. Who cares about a global nuclear war?
It still won't kill off humanity for good.
> if it weren't for nuclear weapons, I'd likely have died in my twenties fighting
> a war with the USSR or some other maniacal regime. Nuclear weapons have been an
During the Cuban missile crisis it was due to the restraint of a
single mere man, against the strong insistence of his advisors, that
my and your parents did not become grilled radioactive steak. It was a
very, very close escape.
But as I said, who cares about nuclear weapons. Or even engineered
biological weapons. All they can do is kill billions of people. But
not every single one of them.
> effective deterrent to another world war. Multitudes of SIs will be perhaps the
> only deterrent to fanatic religious retards (in the Middle East and dozens of
> other hot spots) dragging the world into another global conflagration.
You would be funny, if I didn't think you were serious.
> > Socialize a god.
>
> I have a better idea: Let a million gods socialize us.
Then you're willing to commit suicide, and homicide against countless
people around the world.