From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Sun Oct 01 2000 - 05:14:04 MDT
Eliezer S. Yudkowsky writes:
> Are you so very, very sure that you know better than the people running the
> facilities? If the people with enough understanding and experience to have
No.
But if I make a mistake, we've got a bunch of dead researchers who
wouldn't listen. If they make a mistake, we've got millions of
light-years of planets full of dead people. (Aliens are also people,
of course.) Dunno, those odds sound good to me.
On a less galactic scale, there is this crazy molecular biologist down
the street who subscribes to the "Green Planet, Death to the People"
church. (Man, these people are sure not Friendly.) Funded by fellow
billionaire church members, he has managed to engineer a
long-symptomless-latency, high-infectivity, high-delayed-mortality
bioweapon built from a dozen diverse virus families. You know he has
run successful tests on primates and people (bums snatched off the
street), and intends to start releasing the stuff in all major
airports and subways as well as in stratospheric bursts (properly
packaged). All the numerical epidemic models they've run predict >99%
infection and a >95% mortality rate (a toy model below shows how
easily such numbers fall out). In other words, the threat is rather
believable. Because they're rather paranoid, they've got a device to
reliably (they're very good engineers) self-destruct the entire
facility, triggerable from a room you just managed to penetrate.
(Well, I saw an old James Bond movie yesterday.)
Would you press the big red button, instantly killing everyone on the
property and safely destroying all virus cultures and all information
on how to make them?
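(Aside, for the numerically inclined: those figures aren't even
exotic. Here's a minimal SIR-style sketch with a dead compartment;
everything in it, parameters included, is my own illustrative
assumption, not anything from their models. With a basic reproduction
number around 10 and a 95% case fatality rate, essentially the whole
population gets infected and ~95% of it dies.)

  # Toy SIR model with a dead compartment -- purely illustrative;
  # every parameter value here is an assumption, not data.
  def run_sir(r0=10.0, recovery_rate=0.1, fatality=0.95,
              population=1e9, seed=1.0, dt=0.1, days=2000):
      beta = r0 * recovery_rate            # transmission rate per day
      s, i = population - seed, seed       # susceptible, infectious
      dead = 0.0
      for _ in range(int(days / dt)):      # forward-Euler integration
          new_infections = beta * s * i / population * dt
          resolved = recovery_rate * i * dt
          s -= new_infections
          i += new_infections - resolved
          dead += fatality * resolved      # fatal fraction of resolved cases
      ever_infected = population - s
      return ever_infected / population, dead / population

  attack_rate, death_rate = run_sir()
  print("infected: %.1f%%, dead: %.1f%%"
        % (attack_rate * 100, death_rate * 100))
  # prints roughly: infected: 100.0%, dead: 95.0%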
> created an AI tell you firmly that they have devoted considerable time and
> effort to Friendliness and they believe the chances are as good as they'll
> ever get, would you really nuke them? Are you so much smarter than they are,
> you who would not have written the code?
I'm just smart enough to know that no one can be smart enough to
predict what a superhuman Power is going to do. Turing, Goedel and the
footnotes to them say you can't, and game theory and evolutionary
theory say it wouldn't be a smart thing to try anyway, since they
offer some constraints on the behaviour of Powers which don't look too
pretty if you happen to be on the human receiving end of them.
Thankfully, people are not yet smart/stupid enough to try to make a
Power in a major, concerted effort, so we're limited to the equivalent
of an industrial accident, such as your AI suddenly exploding in your
face, having unexpectedly slid off into an evolutionary regime.
Because 1) you currently don't have any resources worth speaking of,
and 2) you so far seem to make every effort to keep it from going
Darwinian, I still sleep rather safely.
(Just to be on the safe side, don't publish your real-time WGS 84
coordinates, will you? ;)
> Thankfully, you will never have authority over nuclear weapons, but even so,
> just saying those words can make life less pleasant for all of us. You should
> know better.
Thankfully, you will never have authority over enough resources for a
SI project likely to succeed, but even so, just saying those words can
make life less pleasant for all of us. You should know better.
Seriously, who's playing the architect of humankind's future destiny
here? You think you're smart enough for that?
Instead of trying to persuade people to pull over into a high-enough-
fitness regime by dangling enough juicy carrots in front of their
noses before embarking on a project to end all projects (or ending up
there spontaneously), you say "people are no good; I'm also only
human, but I know what is good for the rest of them, so I'll just go
ahead and do it, the faster the better". That sounds smart, for sure.
> me> I'm not. I don't like where the logics of it all is leading us. There
> me> must be a more constructive way out.
>
> There is. One group creates one mind; one mind creates the Singularity. That
That sounds disturbingly terminal.
Kinda <godwin>Ein Volk, Ein Reich, Ein Führer</godwin>.
> much is determined by the dynamics of technology and intelligence; it is not a
> policy decision, and there is no way that I or anyone else can alter it. At
"We're in a room full of people. There's a hand grenade on the
table. I'm going to go and pull out the pin. There is no way that I or
anyone else can alter it."
Uh, I don't think so. All I have to do is prevent anyone from pulling
the pin long enough (even if it involves braining them with a heavy
blunt instrument) for me to evacuate the room. Afterwards, the thing
may or may not go off; it will be relatively irrelevant.
> some point, you just have to trust someone, and try to minimize coercion or
> regulations or the use of nuclear weapons, on the grounds that having the
> experience and wisdom to create an AI is a better qualification than being
> able to use force. If the situation were iterated, if it were any kind of
Why should being smart have anything to do with being reliable? The
opposite, if anything.
Now we're all sons of bitches. (Of course, no one said exactly that at
Trinity; it's an urban legend.)
> social interaction, then there would be a rationale for voting and laws -
> democracy is the only known means by which humans can live together. But if
> it's a single decision, made only once, in an engineering problem, then the
> best available solution is to trust the engineer who makes it - the more
> politics you involve in the problem, the more force and coercion, the smaller
> the chance of a good outcome.
I agree with that, but not if that engineer's decision is going to be
amplified across a region of space a couple of hundred million
light-years in diameter. That sounds rather monstrously irreversible,
and hence would seem to require substantial consensus.
> I'm not just saying this because I think I'll be on the research team. I'm
> willing to trust people outside myself. I'd trust the benevolence of Eric
> Drexler, or Mitchell Porter - or Dan Fabulich or Darin Sunley, for that
> matter. I'm willing to trust the good intentions of any AI programmer unless
Trust in the benevolence of individual people I hardly know to decide
something with global impact? No, thanks.
> they specifically demonstrate untrustworthiness, and I'm willing to trust the
> engineering competence of any *successful* AI team that demonstrates a basic
> concern with Friendly AI.
One non sequitur after another. Your irrational insistence on
"Friendly AI" at all costs, in the face of grave objections, does not
make you especially trustworthy, but you're surely aware of that.
> The idea that "nobody except 'me' can possibly be trusted" is very natural,
> but it isn't realistic, and it can lead to nothing except for endless
Hardly, since most people don't wind up pushing shopping carts down
the street for a living.
You have to trust other people. It's natural.
> infighting. I know a lot of important things about Friendly AI, but I also
> know that not everything I know should be necessary for success - my knowledge
> indicates that it should be possible to succeed with a limited subset of my
> knowledge. And scientists and engineers are, by and large, benevolent - they
Unless you manage to communicate your insights in words and concepts
understandable by other people, you're just another Smart Guy with a
Stupid Idea.
> may express that benevolence in unsafe ways, but I'm willing to trust to good
> intentions. After all, it's not like I have a choice.
I don't trust anybody (myself included) if the stakes are high
enough.
And please tell us why you think you have no choice.