From: Lee Corbin (lcorbin@rawbw.com)
Date: Tue Jul 01 2008 - 00:04:44 MDT
Bryan writes
>> > You should be patching those bugs with bugfixes, not with
>> > regulations or policies for keeping your own ai in a box ...
>> > since not everyone will necessarily follow that reg. :-)
>>
>> See what I mean? The *character* in the story---who is
>> asking the AI questions---doesn't care about bugs or
>> bugfixes. He might be the janitor. I hope that somewhere
>
> Yes, but I'm not talking to the character per se, more like the
> architects of these visions that are being written out overall on the
> transcript level.
You did get the idea of the story? It was supposedly
an actual transcript of a session between a human and
a machine (that didn't care about certain things in a way
that seems unnatural to us), and at one point the human
is skeptical that the program is "out of the box" and can
reach the internet, and so asks the program to prove it by
sending an email or posting something to a newsgroup.
The program then happens to pick the very transcript just
completed with this guy, and posts *that* r'chere.
So I am the "architect" of that vision. I'm glad you're
speaking freely, but I guess I still don't see what premises
I made that you find outrageous. Yes, it's a little unlikely
that a set of machines one could talk to, perhaps as many
as 337 or perhaps a thousand, would be sitting in an
unguarded room.
But it's not inconceivable: what if 99.9% of the machines
just babble more or less nonsense when spoken to, and
this janitor just happens to talk to one that perhaps has
been doing some reorganization of itself in the previous
days, and the pros there are too busy with other things
to notice? Well---after all, it was a story.
So what vision in particular? What assumption behind
the story still grates?
> Yes, there is scifi that I enjoy. But there's no reason for suspension
> of disbelief because it's supposed to be scifi, ideally hard scifi, and
> anything else verges on fantasy.
Okay, so only *very* hard SF is your cup of tea.
>> In fact, as I see it, the best science fiction stories prop the
>> reader up on a fence between belief and disbelief.
>
> Be careful you don't turn into fantasy.
Yeah, FWIW, I don't like fantasy either.
>> > What if a meteor crashes into your skull? You still die. So I'd
>> > suggest that you focus on not dying, in general, instead of blaming
>> > that on ai. Take responsibility as a systems administrator.
>>
>> Again, you seem to miss the whole point of the story,
>
> No, I see the point of the story, within the context that you are
> presenting it. I'm pretty sure I do. It's simple enough. Everybody's
> going to die unless you reasonably consider what the hell you do with
> that ai system. No?
Where did you get this "blaming" from in the above? I'm not
blaming AIs for anything, and certainly not in the story. And
I do take avoiding death seriously, having been signed up
for cryonics since 1988.
What you wrote:
> Everybody's going to die unless you reasonably consider
> what the hell you do with that ai system.
certainly is *not* the point of that story, and in fact has nothing
to do with it. This AI had mentioned risks, which is not illogical,
and in expanding on what it meant, merely pointed out that
people can die, which was evidently important to the guy it was
talking to.
> It's kind of like saying that you're hoping that the ai will be
> just smart enough to hack everybody all at once
Not hoping---I'm just admitting it as a possibility.
> before they know what hits them. I'm not saying
> this is impossible. Bruteforce and hack them into
> submission, sure, I can't deny the possibility of that
> occurring. But the alternative scenario is one that
> looks more real, ...
Well, okay, but maybe it wouldn't have made as good an
SF piece? Frankly, the idea of something that capable
fascinates me and many others. Just what are the limits,
if there are any? The very idea of the Singularity that many
people entertain turns on just this fascination. Even if
someone "proves" that it's unlikely,
what can anyone really say about what might be
happening in the solar system three thousand years
from now, or a million years from now?
And no one can say that a billion years ago, about a billion
light years away, there *wasn't* an awesome singularity whose
wave is just about to strike us.
> But in the first place I don't even see the purpose of
> those take-over scenarios ... just go construct a new civilization.
> What's the problem? :-/
It's pretty hard to get off-planet, and an Orwellian outcome
to the 20th century was not inconceivable to some very
smart people (e.g. Orwell himself). We could still evolve towards
a situation where individuals are highly constrained by
finances and by "permission from authorities", so that
secret projects become impossible. Some despot may yet
achieve universal surveillance this century, though I admit
the odds are long.
>> > What are Property Rights?
>>
>> Dear me, if you have no idea about what is customarily meant by
>> saying that, for instance, modern western civilization is founded
>> upon the twin ideas of Individual Rights and Private Property...
>
> I argue those ideas are folk psych more than anything, so it's not a
> strong foundation.
Property rights are just folk psychology? (Well, now I've
heard everything.) But they really *have* made a *huge*
historical difference. Much of the progress and strength
of western civilization comes from enshrining property
rights in the bedrock of law.
>> in the hypothesis of a story, here, it is also conceivable that this
>> AI would understand people so well that it could accurately
>> say for certain whether someone was in pain or not---admittedly
>> a controversial opinion, but one I could support. The full nanotech
>> takeover of the world's surface, which I had hoped became clear
>> to the reader, does naturally include a nanotech invasion of people's
>> very brains and bodies.
>
> I wonder why it's only after it's "artificial" that intelligence can do
> nanotech. This smells like vitalism.
At one point Eric Drexler himself was saying that the level of
nanotech he was envisioning required highly developed AI, but
I have no idea what his current thoughts on it are.
Clearly at this point in time, *we* don't have much in the way
of nanotech, and some kind of sudden runaway AI takeoff (I
know you don't think that likely, but many do) might reduce
complete control at the molecular level to a very simple task from
the AI's perspective.
Lee