From: den Otter (otter@globalxs.nl)
Date: Thu Aug 05 1999 - 08:13:33 MDT
----------
> From: Eliezer S. Yudkowsky <sentience@pobox.com>
> I think we have a serious technological disagreement on the costs and
> sophistication of uploading. My uploading-in-2040 estimate is based on
> the document "Large Scale Analysis of Neural Structures" by Ralph Merkle
> (http://www.merkle.com/merkleDir/brainAnalysis.html) which says
> "Manhattan Project, one person, 2040" - and, I believe, that's for
> destructive uploading.
Then somewhere else you wrote:
> I would
> expect virtually everything Drexler ever wrote about to be developed
> within a year of the first assembler, after which, if the planet is
> still around, we'll start to get the really *interesting* technologies.
> Nanotechnology is ten times as powerful and versatile as electricity.
> What we foresee is only the tip of the iceberg, the immediate
> possibilities. Don't be fooled by their awesome raw power into
> categorizing drextechs as "high" nanotechnology. Drextechs are the
> obvious stuff. Like I said, I would expect almost all of it to follow
> almost immediately.
So you've said yourself that nanotech is to be expected around 2015,
and that even today's most advanced Drextech designs would soon be
obsolete. How does this jibe with a 25(!!!) year gap between the first
assemblers and uploading? I bet that if nanotech were as potent as
you assume, full uploading could be feasible by 2020, about the same
time as your AI. If time is no longer a factor, uploading becomes more
than ever the superior option.
Merkle's article is a conservative extrapolation of current technology;
*it does not include nanotech*. I didn't see that quote "Manhattan
Project, one person, 2040" either. He simply concludes: "If we
use the [conventional] technology that will be available in 10 to
20 years, if we increase the budget to about one billion dollars,
and if we use specially designed special purpose hardware --
then we can determine the structure of an organ that has long
been of the greatest interest to all humanity, the human brain".
> You're talking about off-the-shelf uploading.
> You're talking about nondestructive-uploading kiosks at the local
> supermarket.
No, though these could very well be feasible with 2020 nanotech.
> That's 2060 CRNS
Yeah, right, 45 years after the first functional assembler. Wow,
progress will actually be *slowing down* in the next century.
Would this result in an anti-singularity or something?
> and you've committed yourself to making
> sure that *nobody* upgrades themselves, or even runs themselves for a
> few million years subjective time, until that technology is available.
Huh, what makes you think that? I welcome *any* kind of technology
that can upgrade a human, because with this there's at least the
theoretical chance that you can use it yourself.
> Let me know if you need any help. I don't think that neurohack-IA is
> going to change the odds in the slightest, though. Actually, it'll
> change the odds in my direction. I'll have friends.
Ah yes, you assume that all, or at least most, people of enhanced
intelligence would share your goals. You could very well be right
with regard to natural neurohacks (geniuses do seem to have a
lot in common), but would upgrading a "normal" person have the
same effect?
> > Or the Luddites may finally "discover" AI and do some serious
> > damage.
>
> Then they'd damage uploading more. It's a lot harder to run an
> uploading project using PGP.
Why would this be harder for IA than for AI?
> Do I think nanotechnology is going to blow up the world? Yes.
...but probably not before we (most likely in a substantially
augmented form) can escape to space.
> Do I
> lift the smallest finger against it? No.
Unless nanotech is absolutely crucial to build your AI, this statement
doesn't make any sense. This race is way too important to worry
about good sportsmanship.
> Okay. You don't mind killing off copies of yourself? No, wait, wrong
> question. Of course den Otter doesn't mind. Your *copies* don't mind
> your killing them off?
As I've already mentioned, perhaps they could be tweaked in such
a way that they're still useful as test subjects but lack "free will",
or you merge with your copy (only the good ones, of course) afterwards,
and no-one dies. Not that the copy could do that much if it disagreed,
btw; after all the original is much faster and has control over the test
setting.
> > > *You* try to think like an SI. Why do you keep on insisting on the
> > > preservation of the wholly human emotion of selfishness?
> >
> > Because a minimal amount of "selfishness" is necessary to
> > survive. Survival is the basic prerequisite for everything else.
>
> First of all, you are wrong. If I may formalize your statement, you're
> saying: "For all goals G: Survival is a subgoal of G." Well, if my
> goal is maximizing the sum of human pleasure, it may make perfect sense
> to die in a raid on the Luddite Anti-Wireheading Headquarters. What
> you've done is establish "survival" as an interim subgoal of most goals.
> Then you perform a mental sleight-of-hand and say it's an interim
> subgoal of all goals. Then you perform another sleight-of-hand and say
> this universality means survival is an end in itself. Then you say it's
> the only thing that can be an end in itself. Then you say everything
> else is irrational. I think you're skipping a few steps here.
Ok, different approach: first of all, what is the most basic reason why
we pursue goals? Because it gives us "fulfilment" (pleasure) when
we succeed. That's how we're wired (maybe some severe mental
cases aside). Do you agree? If so, then if you cut away all the fluff
we're essentially pleasure-seeking agents. Now, if "pleasure" is
good, then the more of it one can get, the better. Obviously a subgoal
like "survive" and *its* subgoals like "evolve as far as you can",
"gather knowledge" etc. are a more efficient approach to maximizing the
duration and intensity of "pleasure" than the "altruistic" subgoal of
"create a superior intelligence no matter what the costs (to yourself)".
If your ASI kills you, you have cheated yourself out of a potential
infinity of pleasure for a brief moment of satisfaction. This is very
short-sighted, not what I'd call "intelligent hedonism" (and yes,
deep down you're a hedonist, whether you like it or not). Afaik,
your whole argument rests on the denial of your true nature. The
fluff has gotten in your eyes, so to speak.
> For that matter, selfishness isn't a 100%-certain prerequisite for
> survival. Suppose I point a gun to your head and say "Suppress all
> selfishness as indicated by this here fMRI display or I blow your brains out."
If successful, that action (trick) would still be *motivated* by
"selfishness". Without that motivation, one might not even try.
Anyway, this is just silly hair-splitting. The "selfish" survival
drive is simply a bloody useful tool and a solid foundation to
build a mental structure on. It's practical, as being alive gives
you a near(?)-infinite number of possible goals to pursue, while
being dead gives you exactly zero choices.
Emotions are the (only) meaning of life. To have emotions you
need to be alive, so be sure to include survival as a subgoal.
That's logic.
> You're arguing in certainties - that is, you think your arguments are
> certain. Actually, I don't know what you *think*, but that's what
> you've been saying. Survival is a *necessary* prerequisite for
> *everything* else. Selfishness is the *only* rational goal. When
> anyone thinks they have a 100% certainty, it is 99% probable that their
> arguments are wrong. Not necessarily the conclusion, it doesn't say
> anything about the conclusion, but it does say that their arguments
> aren't based in messy ol' reality.
A pleasure-seeking goal system which includes survival and all of
its little helpers (you know, science, technology etc.) is a *very
safe bet*. It may not be the Ultimate Truth, but it's a most practical
and user-friendly interim goal, and one which allows you to look
for your Truth, whatever it may be, indefinitely. "God's waiting room".
Can you offer me something more practical than this? If not, this
will be the "ultimate" interim goal until further notice.
> > What you describe is basically common sense, not altruism (well,
> > not according to *my* dictionary, anyway). Perhaps you should
> > use some other term to avoid confusion.
>
> Okay. I'm talking about how a reasoning system that chooses between
> options, plus the probabilistic assertion that one choice is superior to
> others, will yield choices without any initial goals. The assumption
> that differentials exist is enough to produce them.
Oh, that's much better (though a bit long, perhaps).
> > > Acting so as to increase personal power
> > > doesn't make rational sense except in terms of a greater goal.
> >
> > Yes, and the "greater goal" is survival (well, strictly speaking it's
> > an auxiliary goal, but one that must always be included).
>
> YES! *Think* about that! Survival is only an auxiliary goal - what I
> would call a "subgoal". To make the system go, you need supergoals.
> What are they? Are they observer-dependent?
You bet your ass they are! The universe may exist "objectively", but
goals (mental states) are by definition observer-dependent.
> I think Occam's Razor
> would tend to rule this out until specific evidence is produced to the
> contrary. Are the supergoals arbitrary, depending on the initial state
> of the system? If so, my supergoals are as rational as yours.
Since your supergoals are in fact nothing but fluffy pleasure
actuators (or discomfort avoiders), it follows that my goals are
(much) better. See above.
> If I share the supergoals of the Powers, then wouldn't the same
> inexorable reasoning take over and make the survival of Powers a subgoal
> of *mine*?
Not automatically, if that's what you mean. You could choose to
adopt the survival of Powers as a subgoal, of course, but in the
end it's all about the satisfaction *you* (and only you) can feel
when the Powers do you proud.
> In essence that's exactly what happened. I don't insist on
> personal continuity with the Powers because I don't have a supergoal
> that requires it. Asserting that I can't trust the Powers is foolish;
> you don't know what *you'll* do tomorrow, either - trusting yourself
> more than others is a lesson learned from the mortal environment.
Powers might be an awful lot smarter than we are, but that doesn't
mean that they can't be flawed, far from it. They could kill everyone
on earth only to find out that they'd been flat wrong about it "five
minutes" later. Or they could kill themselves and never find out.
> I have supergoals. Survival is only an auxiliary goal, compared to
> that.
But an oh-so-necessary one. Without it you can't chase your
chimeras for very long.
> My projection of our relative mental architectures indicates that
> a de-novo AI programmed to serve those supergoals will be more
> efficient, more reliable, and above all *less expensive* than a
> hacked-up upload/upgrade of myself. It is therefore rational to serve
> my supergoals through the survival of AI.
You're being way too clinical about this. Sure, the AI could be a much
more efficient, elegantly structured being, but its existence only
matters in relation to *you*. If you cease to exist, the AI loses its
meaning.
> You're arguing that I should let my subgoal of survival get in the
> way of AI.
I say you should pick a safer way to get your kicks.
> Let's think about that. Why are the AIs, hypothetically,
> exterminating me? Because I'm not needed to serve their - and my -
> supergoals. So in this case, I don't mind that the survival subgoal is
> violated, because the value has been reduced to zero; the causal link
> between survival and accomplishment has been broken. No, I won't get to
> see my accomplishments, but my goal is not "seeing X" but simply "X";
> which, by Occam's Razor, is simpler.
Well, see above I guess. Or better yet, check out this link. Now *this*
is enlightened stuff...
http://pierce.ee.washington.edu/~davisd/egoist/articles/Egoism.Robinson.html
> You can see the evolution in my old posts on Extropians
> and the old version of "Staring Into the Singularity" - I went from "The
> Singularity is absolute good, no question about it, and They'll be nice
> to us" to "I don't have the vaguest idea of what absolute good is, and
> the whole thing is extremely tentative, but if there is one I suppose
> the Singularity is the best way to serve it; and I don't care about
> anything else, including my own survival or the survival of humanity,
> because that's not in the absolute minimal set of assumptions I need to
> generate choice differentials."
So essentially you've evolved from a naive state to a misguided one.
You say you serve Truth or the Absolute Good, but these are empty
religious terms; you might as well choose to serve "God" and it
would be just as pointless. Enlightenment is realizing that *you*
are the center of your universe, that there's nothing more important
than your existence. When you die, the universe might as well
cease to exist, nothing matters anymore from your *subjective*
point of view, which is the only one you have, and always will
have. So select "supergoals" that don't conflict with your survival,
and be happy. It's the best thing one can do with an otherwise
meaningless existence. Forget objective goals, they're bullshit.
Btw, you have changed your views (radically) before, so the
same might happen in the future (you're still young so this
isn't exactly unrealistic). How can you be so sure that
you're right *now* when apparently you were "wrong"
before? You think that you can't lose with your approach,
but you're dead wrong (pun intended); there's no guarantee
whatsoever that your ASI will do "the right thing", if it
exists at all. It's a blind gamble, nothing more. My
approach (subgoal: stay alive, interim goals: seek
pleasure, expand sphere of influence, seek knowledge,
evolve until you find a better goal system) may seem a
bit mundane in comparison, but it *is* a safe bet.
> > Well, IMHO there's no ultimate
> > answer. It's an illusion, an ever receding horizon. Even the smartest
> > SI can only guess at what's right and true, but it will never ever
> > know for sure.
>
> How can you know this? I do not see any way in which you would have
> access to that piece of information.
Really, I can't understand how an apparently intelligent guy like you
can believe in absolute certainty.
> Happiness is what we experience when we achieve goals.
Yep. This is the real drive behind your, mine and everyone else's
goals.
> Are you going to
> spend your life trying to be happy?
You, I and everyone else are already doing that, with varying
success, obviously... Nothing wrong with this system, IMHO,
certainly not when you can satisfy all your needs with future
tech, or adapt your goals for maximal happiness.
> Well, then how do you know you're
> happy? Because you think you're happy, right?
Yes. I think (I'm happy) therefore I am (happy). Good enough
for me!
> So thinking you're happy
> is the indicator of happiness?
Yep.
> Maybe you should actually try to spend
> your life thinking you're happy, instead of being happy.
It's all the same thing.
> What this is is one of those meta/data confusions, like "the class of
> all classes". Once you place the indicator of success on the same
> logical level as the goal, you've opened the gates of chaos.
If you call the above chaos, then chaos (apparently) isn't so bad
after all.
> > It is, together with the auxiliary goals of survival and
> > the drive to increase one's knowledge and sphere of influence, the
> > most logical default option. You don't have to be a Power to figure
> > that one out.
>
> Like I said - and like you admit - "auxiliary". Once we've dismissed
> inflating an indicator as the supergoal, and admit that knowledge,
> power, and survival are simply means to an end, we once again confront
> the question of what that end is.
The end could be anything you want it to be, though the logically
consistent approach would be to seek pleasure, a safe bet, until
something better comes along.
> I am not joking. Selfishness as an end in itself is insane.
Aha, "as an end in itself". Well, first of all it isn't any more sane
or insane than any other supergoal (supergoals are all arbitrary
in the end), and as a subgoal or auxiliary goal it is one of the
most practical, if not *the most practical*, around.
> > Reason dictates survival, simply because pleasure is better than death.
> > If "symetry" or whatever says I should kill myself, it can kiss my ass.
>
> See, there you go. You've just put your evolved, hormonal, emotional
> impulses on a level above logic and reason.
Logic and reason are just tools, auxiliary like survival. Emotions are
our (only) motivator, they give meaning to our meaningless existence.
We are both motivated by emotions, the only difference is that I'm
being frank about it.
> And you're willing to
> sacrifice whatever parts of yourself are needed to become an SI?
Becoming an SI isn't, or at least shouldn't be, about sacrifice, but
about gain. Becoming more than you are, but keeping all options
open.
> > "I" want to feel
> > the glory of ascension *personally*. That it wouldn't matter to an
> > outside observer is irrelevant from the individual's point of view.
>
> Well, that kind of emotional desire is exactly what you have to be
> willing to give up, if you want to grow your mind. Not "give up",
> actually, but you do have to be willing to say that logic occupies a
> level above it.
This is completely arbitrary. Your emotions cause you to
worship logic, a tool which was developed to aid survival.
You want to switch the tool and the actuator? Fine, but
there's no logic in that unless you have supergoals
which require a practical, logical approach (like survival).
But wait, those supergoals are the result of emotions
too, so you still end up serving emotions like everyone
else.
When logic, and proportion, have fallen sloppy
dead...ha, ha. Go with the flow, man.
> For your mind to grow strong, reason has to have
> logical priority over everything else. I'm not talking about the wimpy
> heart-vs-head or intuition-vs-skill cultural myth. Intuition and skills
> are simply different ways to approximate a perfect rationality that, as
> humans, we can never be isomorphic with. But you do have to admit that
> rationality takes priority over not-rational things. Otherwise you'll
> find it difficult to hone your skill because you'll have an excuse to
> keep ugly messes around in your head.
Like I said, rationality is a very practical tool, and I value it
greatly, but ultimately it's just that, a tool, and not an end in itself.
Emotions (of the positive kind) are the (only) end in themselves.
> Hey, do the math. Figure that humanity has been around for, what,
> 50,000 years with an average population of 100,000,000? So that's a
> total of five trillion experience-years. Now there are six billion
> people, so if they last an extra decade... that's a 1% increase. Not
> really significant except as a minor fluctuation.
>
> Oh, wait. I forgot. You're only doing the calculations for den Otter.
If the calculations apply to me, they apply to everyone else who
is alive *presently* (what is the relevance of previous generations?
Oh wait, it's that silly objectivism again, isn't it?) A decade extra
per person does indeed matter *to that person*.
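For the record, your arithmetic itself checks out; here's the quick
back-of-the-envelope version, using nothing but the figures you gave
(50,000 years, 100,000,000 average population, 6 billion people, one
extra decade), sketched in Python purely for illustration:

# Quick check of the "experience-years" figures quoted above.
# All numbers are Eliezer's own ballpark estimates, not data.
past = 5e4 * 1e8      # 50,000 years * 100,000,000 people = 5e12 experience-years
gained = 6e9 * 10     # 6,000,000,000 people * 10 extra years = 6e10
print(gained / past)  # ~0.012, i.e. roughly a 1% increase

The point stands, though: that ~1% is spread over people who actually
exist to care about it.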
> I really don't see that much of a difference between vaporizing the
> Earth and just toasting it to charcoal. Considered as weapons, AIs and
> nanotech have equal destructive power; the difference is that an AI can
> have a conscience.
It has a *will*, an intelligence (but not necessarily a conscience in
the sense that it feels "guilt"). An ASI is an infinitely more
formidable weapon than nanotech because it can come up with new ways to
crush your defences and kill you at a truly astronomical speed.
Like the Borg, who adjust their shielding after you've shot a couple
of them, only *a lot* more efficient. Nanotech is just stupid goo
that will try to disassemble anything it comes into contact with
(unless it's another goo nanite -- hey, you could base your defenses
on that). So... avoid contact. Unless the goo is controlled by an
SI (not that it would bother with such hideously primitive
technology), it can be tricked, avoided and destroyed. Try that
with an SI...
> I mean, *I* care about a Perversion, or the possibility of a malevolent
> Power wreaking true evil on an intergalactic scale.
I wonder, what's "evil" to someone who doesn't mind wiping out
humanity for the abstract concept of "truth" (or whatever)? Does
"evil" equal "irrational" in your view?
> I don't care about
> it much because if it can happen, it undoubtedly has already, so our
> doing it wouldn't make *too* much of a difference, plus it's a pretty
> small possibility to begin with. I don't see why *you* would care at all.
Well, maybe because I care about my existence?
> I mean, look at all the rationalizing you have to do just to justify the
> idea that *you* somehow know whether or not your own survival is right
> in ultimate terms.
But it's the *relative* terms that matter...
> > Tough luck, I'm The One.
>
> I'll be cheering your efforts on, as long as you don't interfere with
> mine. After all, my success counts as your failure but your success
> counts as my success.
Does this mean that if I'd repeat what I've written so far after having
ascended, you'd believe me unconditionally?
> See how much easier navigation is once you drop
> all the unnecessary preconditions?
Yes, and if you stop caring about anything, you'll be enlightened,
right?
> Then we have nanotech. Once diamond drextech becomes common, one guy,
> in a lab, can get a complete arsenal that goes from zero to sixty in
> hours once the basic unit is developed. There's no anti-first-strike
> restraint. There's no balance of power. And there are too many
> players. I think I posted this whole argument earlier - were you around?
Well then, looks like we should have that debate again and again
until we *do* find a solution, or are you giving up already? For
example, would goo nanites eat each other too? If not, you
could make defenses by surrounding your base with nanites
that look like the enemy, but are inactive. Or you surround
yourself with something that kills nanites, like molten rock
or massive radiation(?).
If nukes start flying, having a relatively simple bunker in the
middle of nowhere like (no offense, guys from Down Under)
Australia for example could do the trick. You'll only have to
hold out until you've made a spaceship, after all (now there's
a nice nano@home project: how to swiftly grow a space craft
with Drextech).
> > Many would be killed, but humanity probably
> > wouldn't be wiped out, far from it. A malevolent AI would
> > kill *everyone*. See my point?
>
> Yes. You're wrong.
No, *you're* wrong. Or could we both be wrong? Within 30
years, we'll know for sure...
> > > Second: I know how to program altruism into
> > > an AI; how would you program selfishness?
> >
> > Make survival the Prime Directive.
>
> You did read the section on Interim Goal Systems and the Prime Directive
> from _Coding a Transhuman AI_, right? You're aware that initial-value
> goals are both unnecessary and unstable?
Could be, but you asked me to program selfishness, not to make the
most stable system. Not that I assume that the "selfish" AI would be
somehow dysfunctional, far from it. Like I said, it's a very solid
foundation to expand your consciousness on.
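Just so we're not talking past each other, here's the kind of thing I
have in mind when I say "make survival the Prime Directive" -- a
deliberately crude toy sketch in Python, where every name, number and
the little world model are my own inventions for illustration, not
anything taken from _Coding a Transhuman AI_:

# Toy "survival as Prime Directive" chooser (illustrative only).
def expected_survival(action, world_model):
    # Estimated probability the system is still running after 'action'.
    # In this toy, the world model is just a lookup table.
    return world_model.get(action, 0.5)

def choose(actions, world_model):
    # Prime Directive: rank candidate actions by expected survival.
    # Subgoals (gather knowledge, expand influence, ...) only count
    # insofar as the model credits them with improving survival.
    return max(actions, key=lambda a: expected_survival(a, world_model))

world_model = {"hide": 0.9, "self-upgrade": 0.7, "build rival SI": 0.2}
print(choose(["hide", "self-upgrade", "build rival SI"], world_model))  # -> hide

Crude, sure, but it shows what I mean: the "selfish" drive is a ranking
criterion, not some mystical extra module.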
> > Ah well, I guess I'll try to find eternal
> > bliss then, until (if ever) I come up with something better. But
> > feel free to kill yourself if you disagree.
>
> I certainly will. Please *don't* feel free to interfere with my efforts
> to create an AI because *you* think that just because *you* can't figure
> out the answer no possible entity can.
I maintain, The Answer is a chimera. What kind of arrogant fool could
ever think that he holds All The Answers? The very moment he'd say
that, God or some badass alien Ultrapower could pop up and laugh
in his face.
> I really object to your intimation that "only God" can figure out the
> ultimate meaning of life. That sounds defeatist,
What a joke, this coming from someone who assumes that
mankind will kill itself the very moment it has nanotech, and
that our only hope is some deus ex machina.
> passivist,
Huh?
> and
> Luddite.
The term "Luddite" isn't what it used to be, I guess.
> Me, I figure that any garden-variety superintelligence should
> be able to handle the job.
Oh right, it will say "X is the meaning of life. *Of course* I'm 100%
sure, I'm an *SI* for cryin' out loud, so you better listen to me, dumb
humans. Oh and btw, "X" demands that you die. Resistance is
futile, of course". Along comes an SSI. "You damn stupid moron SI,
the meaning of life is "Y". *Anyone* knows *that*, now look what
you did, killing all those humans for nothing. _Of course_ I'm 100%
sure, I'm an SSI." But wait! There's an SSSI! "You both suck -- the
meaning of life is "Z". Of course I'm 100% sure"...ad infinitum.
> > > And I think an IA Transcend has a definite probability of being less
> > > desirable than an AI Transcend. Even from a human perspective. In
> Oh, yeah, sure, evolution is WAY more trusty than a human designer.
The human designer is a mere product of evolution.
> Even, no, ESPECIALLY if you're trying to accomplish something that was
> never even REMOTELY INCLUDED in the original design goals, like an
> upgradable architecture. And of course NO creation can be superior to
> its Creator.
There's a BIG difference between something being "superior" and it
being a near-perfect Harbinger of Reason, Prophet of Truth. There's
no reason to believe that SIs can't fuck up, and when they do they'll
likely do it big (because they are big, think big and act on a cosmic
scale). This is assuming that they are "pure", textbook Yudkowskian
AIs; if there's a bug in the original program (and there *will* be
bugs), it could be amplified or mutated in totally unpredictable and
potentially disastrous ways as the AI starts to play with itself.
If you or your team screw up, or someone tries to sneak in some
Asimov laws or even a virus, you'll have a "bad seed".
These possibilities are non-trivial, certainly when the military
start fooling around with AI, in which case they're likely to
be the first to have one up and running. Hell, those guys might
even try to stuff the thing into a cruise missile. So no, I don't
see why I should trust an AI more than myself.
> I'd call you a Luddite... but oh, wait, I already did.
Go right ahead...Yep, I'm a Luddite, sure thing! To hell with all
that disgusting technology (oh shit, I'm using it right now).
> > It may be messy, but it's all I have so it will have to do.
>
> It is NOT all you have. The only reason it's all you have is because
> you THINK it's all you have. Get hip to the transhumanist meme, man!
> If you want a better mind, program one! Unless, of course, you're so
> set on being The One that you refuse to do so.
I thought that the transhumanist meme was more about *upgrading*
what we have. It was when I last checked, anyway.
> Yes, I do have my "own" agenda, which involves using IA as a means to
> AI. And if you'd like to start your own IA project, I'll be more than
> happy to contribute advice, brain scans, genetic material, or anything
> else you need.
Are you serious about this? Well, ideally we'd have a project that
aims to develop practical nanotech designs for things like escape
craft, space habitats, food replicators, neurological enhancements,
weapons and, last but not least, mind uploading. If we test these
designs in VR now, that could save a lot of valuable time when the shit
hits the fan. Apart from this, we need to focus on non-nanotech
(contemporary) means to achieve IA, and ways to get some
serious funding. If you have any suggestions, I'd love to hear
them.
As a matter of fact, you've mentioned on several occasions that
you have "thousands" of potentially lucrative ideas. I'm certainly
interested in a relatively easy, low budget way to make lots of
money. So, how about this: if you can give me something that
works, I'll give you 50% of all profits.