From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Aug 01 1999 - 20:34:51 MDT
den Otter wrote:
>
> ----------
> > From: Max More <max@maxmore.com>
> > This must be where we differ. No, I don't think total control is desirable
> > or beneficial, even if it were me who had that total control. If true
> > omnipotence were possible, maybe what you are saying would follow, but
> > omnipotence is a fantasy to be reserved for religions. Even superpowerful
> > and ultraintelligent beings should benefit from cooperation and exchange.
>
> I find it extremely hard to imagine how something which can expand
> and modify its mind and body at will could ever need peers to
> cooperate with. If a SI can't entertain itself it isn't a real SI, and
> when it runs into some obstacle it can simply manufacture more
> computing modules, and/or experiment with new thought structures.
No argument here.
> I think it's fair to assume that a SI would be essentially immortal,
> so there's no need to hurry.
I don't know. My original Singularity scenario was a whole
branch-1e60-Universes-per-second deal, so losing one second would drop
the "ultimate good" by a factor of 1e60. Now that I've managed to
eliminate virtually every single Great Filter solution short of reality
being a computer simulation, which I still don't believe, I've simply
given up and admitted I don't know. Whether or not Powers experience
any kind of "urgency" is another coinflip.
> Even if there's such a thing as the end
> of the universe, it would still have billions of years to find a solution,
> which is ample time for even a human-level intelligence. Needless
> (or perhaps not) to say, a SI would never be "lonely" because a)
> it could and no doubt would drop our evolution-imposed urge for
> company, it having outlived its usefulness, and b) it could
> simply spawn another mind child, or otherwise fool around with
> its consciousness, taking as much (or little) risk as it wanted
> should it ever feel like it.
This all sounds right to me.
> > Despite my disagreement with your zero-sum assumptions (if I'm getting your
> > views right--I only just started reading this thread and you may simply be
> > running with someone else's assumptions for the sake of the argument), I
> > agree with this. While uploads and SI's may not have any inevitable desire
> > to wipe us out, some might well want to, and I agree that it makes sense to
> > deal with that from a position of strength.
>
> Exactly. Just to be on the safe side, we should only start experimenting
> with strong AI after having reached a trans/posthuman status
> ourselves. If you're going to play God, better have His power. Even
> if I'm completely wrong about rational motivations, there could be
> a billion other reasons why a SI would want to harm humans.
You keep talking about what you're going to do because of your goals.
That's legitimate. But don't you think you should first try to
project the situation *without* your intervention? It's all well and
good to try to influence reality, but you should have some idea of
what you're influencing.
When I try a Yudkowskyless projection, I get nanowar before AI before
uploading. I'm trying to accelerate AI because that's the first
desirable item in the sequence. Uploading is just too far down. If it
were the other way around, I'd be a big uploading fan and I wouldn't
bother with AI except as a toy.
That's the way navigating the future is supposed to be: find the most
probable desirable future, find the leverage points, and apply all
possible force to get there. Clean, simple, elegant. The problem with
selfness - not "selfishness", but holding on to your initial priorities
- is that it introduces all kinds of constraints and unnecessary risks.
> > I'm not sure how much we can influence the relative pace of research into
> > unfettered independent SIs vs. augmentation of human intelligence, but I
>
> We won't know until we try. Nothing to lose, so why not? It's
> *definitely not* a waste of time, as Eliezer (who has a
> different agenda anyway) would like us to believe.
I beg your pardon. I have never, ever said that IA is a waste of time.
*Uploading* is pretty much a waste of time. Neurohacking is a good
thing. Of course, the legalities and the time-to-adolescence mean that
concentrating on finding natural neurohacks like yours truly will be
more cost-effective.
> > too favor the latter. Unlike Hans Moravec and (if I've read him right)
> > Eliezer, I have no interest in being superseded by something better. I
> > want to *become* something better.
>
> I saw an interview with Moravec the other day in some Discovery
> Channel program about (surprise, surprise) robots. He seemed
> to be, yet again, sincere in his belief that it's somehow right
> that AIs will replace us, that the future belongs to them and
> not to us. He apparently finds comfort in the idea that they'll
> remember us as their "parents", an idea shared by many
> AI researchers, afaik. Well, personally I couldn't care less
> about offspring, artificial or biological; I want to experience
> the future myself.
If what you say about Moravec is true, then he's still only half a
fanatic. I don't *care* whether or not the AIs replace us or upload us,
because it's *not my job* to care about that sort of thing. It's up to
the first SIs. If den Otter does manage to upgrade himself to Power
level, then I'll accept den Otter's word on the matter. I know exactly
how much I know about what really matters - zip. And I know how much
everyone else knows - zip, plus a lot of erroneous preconceptions.
Once you discard all the ideas you started out with and consider things
rationally, it just doesn't make sense to try and second-guess SIs.
They know more than I do. Period. There aren't any grounds on which I
could argue with them. See, Moravec is making the same mistake as
everyone else, just on the opposite side. If Moravec actually
*advocated* the extinction of the human race, or tried to program it
into the first AIs, I'd move against him just as I'd move against anyone
who advocated our survival at all costs, or against anyone who tried to
program Asimov Laws into AIs. It's not a question of my having an
allegiance to AIs, even. If I thought that AIs would cause harm, I'd
move against them.
I serve an unspecified that-which-is-right. I don't know what is
*specifically* right, but I'm going to do my best to make sure someone
finds out - not necessarily me, because that's not necessarily right.
Occam's Razor. As a subgoal of that-which-is-right, I'm going to try
and create an SI and protect it from interference. That's all that's
necessary. Everything else can be trimmed away. That's the way
navigating should be: clean, simple, elegant, and with the absolute
minimum of specifications.
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way