Re: IA vs. AI was: longevity vs singularity

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Aug 01 1999 - 22:27:44 MDT


den Otter wrote:
>
> ----------
> > From: Eliezer S. Yudkowsky <sentience@pobox.com>
>
> > > The first uploads would no doubt be animals (of increasing complexity),
> > > followed by tests with humans (preferably people who don't grasp
> > > the full potential of being uploaded, for obvious reasons).
> >
> > You have to be kidding. Short of grabbing random derelicts off the
> > street, there's simply no way you could do that. Or were you planning
> > to grab random derelicts off the street?
>
> The animal tests alone would make the procedure considerably
> safer (if the procedure works for an ape, it will most likely work
> for a human too), and it's really no problem to find relatively
> "clueless" yet eager volunteers; I could very well imagine that
> serious gamers would be lining up to get uploaded into the
> "ultimate game", ExistenZ style, for example. If you make the
> procedure reversible, i.e. leave the brain intact and switch the
> consciousness between the brain and the machine, there
> doesn't have to be much risk involved.

I think we have a serious technological disagreement on the costs and
sophistication of uploading. My uploading-in-2040 estimate is based on
the document "Large Scale Analysis of Neural Structures" by Ralph Merkle
(http://www.merkle.com/merkleDir/brainAnalysis.html) which says
"Manhattan Project, one person, 2040" - and, I believe, that's for
destructive uploading. You're talking about off-the-shelf uploading.
You're talking about nondestructive-uploading kiosks at the local
supermarket. That's 2060 CRNS and you've committed yourself to making
sure that *nobody* upgrades themselves, or even runs themselves for a
few million years subjective time, until that technology is available.

> > And did I mention that by the
> > time you can do something like that, in secret, on one supercomputer,
> > much less run 6000 people on one computer and then upgrade them in
> > synchronization, what *I* will be doing with distributed.net will -
>
> Yes, I know the odds are in your favor right now, but that could
> easily change if, for example, there were a major breakthrough
> in neurohacking within the next 10 years.

Let me know if you need any help. I don't think that neurohack-IA is
going to change the odds in the slightest, though. Actually, it'll
change the odds in my direction. I'll have friends.

> Or programming a sentient AI may not be that easy after all.

I'm not saying it's easy. I'm saying it's easier than uploading. Am I
certain? Well, let's apply the David Gerrold criterion: Can you rip my
arm off if I'm wrong? Yes, absolutely.

> Or the Luddites may finally "discover" AI and do some serious damage.

Then they'd damage uploading more. It's a lot harder to run an
uploading project using PGP.

> Or...Oh,
> what the hell, the upload path is simply something that *has*
> to be tried.

Go right ahead, as long as you don't try to slow down other things so it
can be tried first. That's the part I object to - you insist that your
technology be first and you're willing to deliberately slow others down
instead of doing like the rest of us and speeding your own efforts up.
Do I think nanotechnology is going to blow up the world? Yes. Do I
lift the smallest finger against it? No.

> If it works, great! If it doesn't...well, nothing lost, eh?

Nothing but time. Oh, wait. We don't *have* time.

> > Besides, there's more to uploading than scanning. Like, the process of
> > upgrading to Powerdom. How are you going to conduct those tests, hah?
>
> After having uploaded, you could, for example, run copies of
> yourself at very slow speed (in relation to your "true" self),
> and fool around with different settings for a while, before either
> terminating, or merging with, the test copy. If the tests are
> successful, you can upgrade and use copies of your new state,
> again running at relatively slow speed, to conduct the next
> series of tests. Ad infinitum.

Okay. You don't mind killing off copies of yourself? No, wait, wrong
question. Of course den Otter doesn't mind. Your *copies* don't mind
your killing them off?

> > > Forget PC and try to think like a SI (a superbly powerful
> > > and exquisitely rational machine).
> >
> > *You* try to think like an SI. Why do you keep on insisting on the
> > preservation of the wholly human emotion of selfishness?
>
> Because a minimal amount of "selfishness" is necessary to
> survive. Survival is the basic prerequisite for everything else.

First of all, you are wrong. If I may formalize your statement, you're
saying: "For all goals G: Survival is a subgoal of G." Well, if my
goal is maximizing the sum of human pleasure, it may make perfect sense
to die in a raid on the Luddite Anti-Wireheading Headquarters. What
you've done is establish "survival" as an interim subgoal of most goals.
Then you perform a mental sleight-of-hand and say it's an interim
subgoal of all goals. Then you perform another sleight-of-hand and say
this universality means survival is an end in itself. Then you say it's
the only thing that can be an end in itself. Then you say everything
else is irrational. I think you're skipping a few steps here.
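
To make that concrete, here's a toy expected-value comparison (Python;
the supergoal, probabilities, and payoffs are all invented purely for
illustration, not a claim about actual odds):

    # Toy counterexample to "survival is a subgoal of every goal G".
    # Supergoal G: maximize total human pleasure, in arbitrary units.
    p_success_if_i_survive = 0.9     # stay home, keep working on G
    payoff_if_i_survive = 100
    p_success_if_i_raid = 0.6        # the raid serves G, but I die in it
    payoff_if_i_raid = 1000

    ev_survive = p_success_if_i_survive * payoff_if_i_survive   # 90.0
    ev_raid = p_success_if_i_raid * payoff_if_i_raid            # 600.0
    # Under this G, the action that violates "survival" serves the goal
    # better, so survival is a subgoal of *most* goals, not all of them.
    print(ev_survive, ev_raid)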

For that matter, selfishness isn't a 100%-certain prerequisite for
survival. Suppose I point a gun to your head and say "Suppress all
selfishness as indicated by this here fMRI display or I blow your brains out."

You're arguing in certainties - that is, you think your arguments are
certain. Actually, I don't know what you *think*, but that's what
you've been saying. Survival is a *necessary* prerequisite for
*everything* else. Selfishness is the *only* rational goal. When
anyone thinks they have a 100% certainty, it is 99% probable that their
arguments are wrong. Not necessarily the conclusion, it doesn't say
anything about the conclusion, but it does say that their arguments
aren't based in messy ol' reality.

> > I don't
> > understand how you can be so rational about everything except that!
> > Yes, we'll lose bonding, honor, love, all the romantic stuff. But not
> > altruism. Altruism is the default state of intelligence. Selfishness
> > takes *work*, it's a far more complex emotion.
>
> Well, so what? *Existence* takes work, but that doesn't make it
> irrational.

So Occam's Razor.

> > I'm not talking about
> > feel-good altruism or working for the benefit of other subjectivities.
> > I'm talking about the idea of doing what's right, making the correct
> > choice, acting on truth.
>
> What you describe is basically common sense, not altruism (well,
> not according to *my* dictionary, anyway). Perhaps you should
> use some other term to avoid confusion.

Okay. I'm talking about the idea that a reasoning system that chooses
between options, plus the probabilistic assertion that one choice is
superior to others, will yield choices without any initial goals. The
assumption that differentials exist is enough to produce them.
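
If you want the skeleton of that in code, it looks something like this
(a sketch of my reading, with invented probabilities; this is not a
real goal system):

    # A chooser with no initial goals, only the assertion that *some*
    # choice is superior plus a probability estimate over which one.
    def choose(options, p_superior):
        # p_superior[o] = estimated probability that o is the superior choice
        return max(options, key=lambda o: p_superior[o])

    options = ["A", "B", "C"]
    p_superior = {"A": 0.2, "B": 0.5, "C": 0.3}   # invented numbers
    print(choose(options, p_superior))   # "B" -- a choice falls out anyway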

> > Acting so as to increase personal power
> > doesn't make rational sense except in terms of a greater goal.
>
> Yes, and the "greater goal" is survival (well, strictly speaking it's
> an auxiliary goal, but one that must always be included).

YES! *Think* about that! Survival is only an auxiliary goal - what I
would call a "subgoal". To make the system go, you need supergoals.
What are they? Are they observer-dependent? I think Occam's Razor
would tend to rule this out until specific evidence is produced to the
contrary. Are the supergoals arbitrary, depending on the initial state
of the system? If so, my supergoals are as rational as yours.

If I share the supergoals of the Powers, then wouldn't the same
inexorable reasoning take over and make the survival of Powers a subgoal
of *mine*? In essence that's exactly what happened. I don't insist on
personal continuity with the Powers because I don't have a supergoal
that requires it. Asserting that I can't trust the Powers is foolish;
you don't know what *you'll* do tomorrow, either - trusting yourself
more than others is a lesson learned from the mortal environment.

I have supergoals. Survival is only an auxiliary goal, compared to
that. My projection of our relative mental architectures indicates that
a de-novo AI programmed to serve those supergoals will be more
efficient, more reliable, and above all *less expensive* than a
hacked-up upload/upgrade of myself. It is therefore rational to serve
my supergoals through the survival of AI.

You're arguing that I should let my subgoal of survival get in the way
of AI. Let's think about that. Why are the AIs, hypothetically,
exterminating me? Because I'm not needed to serve their - and my -
supergoals. So in this case, I don't mind that the survival subgoal is
violated, because the value has been reduced to zero; the causal link
between survival and accomplishment has been broken. No, I won't get to
see my accomplishments, but my goal is not "seeing X" but simply "X",
which, by Occam's Razor, is simpler.

> > I *know* that we'll be bugs to SIs. I *know* they won't have any of the
> > emotions that are the source of cooperation in humans. I *still* see so
> > many conflicting bits of reasoning and evidence that you might as well
> > flip a coin as ask me whether they'll be benevolent. That's my
> > probability: 50%.
>
> Yes, ultimately it's all just unknowable. BUT, if we assume that the
> SIs will act on reason, the benevolence-estimate should be a lot lower
> than 50% (IMO).

Reasonable, unreasonable - I don't know. I give up. Yes, it *is* 50%.
Some days it's 30%, some days it's 10%, some days it's 70%; you might as
well split the difference and call it a coin-flip. But even on the days
when it's 10%, it's still humanity's best chance for survival. (It's
not my job to care whether or not humanity survives, of course, but it
is my job to know if I'm rationally opposed to the faction that wants
humanity to survive. At present, the answer is no.)

> > If Powers are hungry, why haven't they expanded continually, at
> > lightspeed, starting with the very first intelligent race in this
> > Universe? Why haven't they eaten Earth already? Or if Powers upload
> > mortals because of a mortally understandable chain of logic, so as to
> > encourage Singularities, why don't they encourage Singularities by
> > sending helper robots? We aren't the first ones. There are visible
> > galaxies so much older that they've almost burned themselves out. I
> > don't see *any* reasonable interpretation of SI motives that is
> > consistent with observed evidence. Even assuming all Powers commit
> > suicide just gives you the same damn question with respect to mortal aliens.
>
> I dunno, maybe we're the first after all. Maybe intelligent life really
> is extremely rare. I think Occam's Razor would agree.

I've considered that, but I'm 95% certain that we're not first. Of
course, that's because I have to invoke either the Anthropic Principle
or simulation-runners to explain the existence of qualia, a necessity
which you would probably not regard as one.

> > And, speaking of IA motivations, look at *me*. Are my motivations
> > human?
>
> Yes, your motivations are very human indeed. Like most people,
> you seem to have an urge to find a higher truth or purpose in
> life, some kind of objective answer (among other things, like
> personal glory and procreation).

That's how I got here. But once I did have a fully logical
justification system, it sort of took over and blew away the
scaffolding. You can see the evolution in my old posts on Extropians
and the old version of "Staring Into the Singularity" - I went from "The
Singularity is absolute good, no question about it, and They'll be nice
to us" to "I don't have the vaguest idea of what absolute good is, and
the whole thing is extremely tentative, but if there is one I suppose
the Singularity is the best way to serve it; and I don't care about
anything else, including my own survival or the survival of humanity,
because that's not in the absolute minimal set of assumptions I need to
generate choice differentials."

> Well, IMHO there's no ultimate
> answer. It's an illusion, an ever receding horizon. Even the smartest
> SI can only guess at what's right and true, but it will never ever
> know for sure.

How can you know this? I do not see any way in which you would have
access to that piece of information.

> It will never know whether there's a God (etc.) either,
> unless He reveals himself. And even then, is he *really* THE God?
> Uncertainty is eternal. The interim meaning of life is all there is, and
> all there ever will be.

> Out of all arbitrary options, the search for the
> highest possible pleasure is the one that makes, by definition, the
> most sense.

If it's "by definition", then I assume you're defining it as the
achievement of supergoals. And then you're performing a logical
sleight-of-hand and saying that the goal-achievement-indicator is itself
the goal. That's the whole wireheading paradox. I mean, if pleasure is
the goal, then maybe you need a pleasure-indicator so you know how happy
you are; and then maybe the AI will devote itself to increasing the
indicator, so it thinks it's infinitely happy when actually it's not. I
mean, let's think this through, people.

Happiness is what we experience when we achieve goals. Are you going to
spend your life trying to be happy? Well, then how do you know you're
happy? Because you think you're happy, right? So thinking you're happy
is the indicator of happiness? Maybe you should actually try to spend
your life thinking you're happy, instead of being happy.

What this is, is one of those meta/data confusions, like "the class of
all classes". Once you place the indicator of success on the same
logical level as the goal, you've opened the gates of chaos.
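
In code, the confusion looks something like this (a deliberately crude
sketch of the failure mode, not anyone's actual architecture; the
variable names are mine):

    # The goal refers to a quantity in the world; the indicator is the
    # agent's internal measurement of it. Confuse the two and the
    # cheapest "optimization" is to write to the indicator directly.
    world_pleasure = 10
    indicator = world_pleasure

    # Optimizing the goal means acting on the world...
    world_pleasure += 5
    indicator = world_pleasure   # indicator tracks reality: 15

    # ...optimizing the indicator means skipping the world entirely.
    indicator = 10**9            # "infinitely happy"; the world is unchanged
    print(world_pleasure, indicator)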

> It is, together with the auxiliary goals of survival and
> the drive to increase one's knowledge and sphere of influence, the
> most logical default option. You don't have to be a Power to figure
> that one out.

Like I said - and like you admit - "auxiliary". Once we've dismissed
inflating an indicator as the supergoal, and admit that knowledge,
power, and survival are simply means to an end, we once again confront
the question of what that end is. And remember, knowledge, power, and
survival are not subgoals that apply only to myself. The subgoal of
increase(power(me)) assumes more than the logical minimum required. All
that's needed is increase(power(entity e: e.goal == me.goal)).
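
Spelled out a little further (the same pseudocode, just made runnable;
the class and the names are mine, purely for illustration):

    # increase(power(entity e: e.goal == me.goal)) -- the quantifier
    # ranges over any entity sharing the supergoal; "me" never appears.
    class Entity:
        def __init__(self, name, goal, power):
            self.name, self.goal, self.power = name, goal, power

    def power_subgoal_targets(entities, my_goal):
        return [e.name for e in entities if e.goal == my_goal]

    entities = [Entity("me", "G", 1.0),
                Entity("seed_AI", "G", 5.0),
                Entity("rival", "H", 3.0)]
    print(power_subgoal_targets(entities, "G"))   # ['me', 'seed_AI']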

> > > Or are you hoping for an insane Superpower? Not something you'd
> > > want to be around, I reckon.
> >
> > No, *you're* the one who wants an insane SI. Selfishness is insane.
>
> You must be joking...

I am not joking. Selfishness as an end in itself is insane.

> > Anything is insane unless there's a rational reason for it, and there is
> > no rational reason I have ever heard of for an asymmetrical world-model.
> > Using asymmetric reflective reasoning, what you would call
> > "subjectivity", violates Occam's Razor, the Principle of Mediocrity,
> > non-anthropocentrism, and... I don't think I *need* any "and" after that.
>
> Reason dictates survival, simply because pleasure is better than death.
> If "symetry" or whatever says I should kill myself, it can kiss my ass.

See, there you go. You've just put your evolved, hormonal, emotional
impulses on a level above logic and reason. And you're willing to
sacrifice whatever parts of yourself are needed to become an SI? Oh,
sure you are.

> > I find it hard to believe you can be that reasonable about sacrificing
> > all the parts of yourself, and so unreasonable about insisting that the
> > end result start out as you. If two computer programs converge to
> > exactly the same state, does it really make a difference to you whether
> > the one labeled "den Otter" or "Bill Gates" is chosen for the seed?
>
> Yes, it matters because the true "I", the raw consciousness, demands
> continuity. There is no connection between Mr. Gates and me, so
> it's of little use to me if he lives on after my death. "I" want to feel
> the glory of ascension *personally*. That it wouldn't matter to an
> outside observer is irrelevant from the individual's point of view.

Well, that kind of emotional desire is exactly what you have to be
willing to give up, if you want to grow your mind. Not "give up",
actually, but you do have to be willing to say that logic occupies a
level above it. For your mind to grow strong, reason has to have
logical priority over everything else. I'm not talking about the wimpy
heart-vs-head or intuition-vs-skill cultural myth. Intuition and skills
are simply different ways to approximate a perfect rationality that, as
humans, we can never be isomorphic with. But you do have to admit that
rationality takes priority over not-rational things. Otherwise you'll
find it difficult to hone your skill because you'll have an excuse to
keep ugly messes around in your head.

> Oh, if you assume that death is the most likely outcome, a decade
> extra does indeed matter. It's better than nothing.

Hey, do the math. Figure that humanity has been around for, what,
50,000 years with an average population of 100,000,000? So that's a
total of five trillion experience-years. Now there are six billion
people, so if they last an extra decade... that's a 1% increase. Not
really significant except as a minor fluctuation.
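
For the record, the back-of-the-envelope arithmetic, using the same
round numbers (nothing here is a real demographic estimate):

    past_experience_years = 50000.0 * 100000000.0    # ~5e12 to date
    extra_years = 6000000000.0 * 10                  # an extra decade for 6e9 people
    print(extra_years / past_experience_years)       # 0.012 -- about a 1% increase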

Oh, wait. I forgot. You're only doing the calculations for den Otter.

> > What
> > matters is the relative probabilities of the outcomes, and trying to
> > slow things down may increase the probability of *your* outcome relative
> > to *my* outcome, but it also increases the probability of planetary
> > destruction relative to *either* outcome... increases it by a lot more.
>
> Compared to a malevolent SI, all other (nano)disasters are peanuts,
> so it's worth the risk IMO.

I really don't see that much of a difference between vaporizing the
Earth and just toasting it to charcoal. Considered as weapons, AIs and
nanotech have equal destructive power; the difference is that an AI can
have a conscience.

I mean, *I* care about a Perversion, or the possibility of a malevolent
Power wreaking true evil on an intergalactic scale. I don't care about
it much because if it can happen, it undoubtedly has already, so our
doing it wouldn't make *too* much of a difference, plus it's a pretty
small possibility to begin with. I don't see why *you* would care at all.

> > I think you overestimate the tendency of other people to be morons.
> > "Pitch in to help us develop an open-source nanotechnology package, and
> > we'll conquer the world, reduce you to serfdom, evolve into gods, and
> > crush you like bugs!"
>
> BS, by joining you get an equal chance to upload and become
> posthuman.

Oh, please. This is like offering every Linux coder their own computer
manufacturing plant. Like I said, you're assuming supermarket uploading kiosks.

> If you let the others cheat you, you'll only have
> yourself to blame. And looking at the world and its history,
> it's hard to *underestimate* people's tendency to be morons,
> btw. Damn!

Oh, I quite disagree with you on that score. Most idealists tend to
underestimate people's tendency to be morons. But I'm glad that you
have such confidence in humanity.

Or did you perchance mean "overestimate"? Or was that entire sentence a
little Hofstadterian joke?

> > > What's a "pure" Singularitarian anyway, someone who wants a
> > > Singularity asap at almost any cost? Someone who wants a
> > > Singularity for its own sake?
> >
> > Yep.
>
> (from another thread)
> > If this turns out to be true, I hereby award myself the "Be Careful What
> > You Wish For Award" for 1996. Actually, make that the "Be Careful What
> > You Wish For, You Damned Moron Award",
>
> In case of an AI-driven Singularity, something like the above could
> make a nice epitaph...

Hey, it's not like I care.

I mean, look at all the rationalizing you have to do just to justify the
idea that *you* somehow know whether or not your own survival is right
in ultimate terms. Just drop the baggage and let the SIs decide. Your
mind will be a lot clearer once you make the commitment to minimalism.

> > Humanity *will* sink. That is simply not something
> > subject to alteration. Everything must either grow, or die. If we
> > embrace the change, we stand the best chance of growing. If not, we
> > die. So let's punch those holes and hope we can breathe water.
>
> Ok, but don't forget to grow some gills before you sink the ship...

No, no, and no! I'm not a gill-grower! I'm a ship-sinker! Fish grow
gills! That's not my job!

> > You may be high-percentile but you're still human, not human-plus-affector.
>
> Tough luck, I'm The One.

I'll be cheering your efforts on, as long as you don't interfere with
mine. After all, my success counts as your failure but your success
counts as my success. See how much easier navigation is once you drop
all the unnecessary preconditions?

> > And, once again, you are being unrealistic about the way technologies
> > develop. High-fidelity (much less identity-fidelity) uploading simply
> > isn't possible without a transhuman observer to help.
> ....
> > Any uploadee is
> > a suicide volunteer until there's an SI (whether IA or AI) to help.
> > There just isn't any realistic way of becoming the One Power because a
> > high-fidelity transition from human to Power requires a Power to help.
>
> Assumptions, assumptions. We'll never know for sure if we don't try.
> Help from a Power would sure be nice, but since we [humans] can't
> rely on that, we'll have to do it ourselves. If we can upload a dog, a
> dolphin and a monkey successfully, we can probably do a human too.

Big "if". Again, go ahead and try, but don't ask *me* to wait.

> > > Besides, a 90% chance of the AI killing us
> > > isn't exactly an appealing situation. Would you get into a
> > > machine that kills you 90% of the time, and gives total,
> > > unprecedented bliss 10% of the time? The rational thing is
> > > to look for something with better odds...
> >
> > Yes, but you haven't offered me better odds. You've asked me to accept
> > a 1% probability of success instead.
>
> I think that your 1% figure is rather pessimistic. In any case, don't
> forget that AI researchers like yourself directly and disproportionately
> influence the odds of AI vs IA. If some of the top names switched
> sides, you could quite easily make IA the most likely path to
> ascension. You demand better odds, but at the same time you
> actively contribute to the discrepancy.

Let's distinguish "uploading" from "IA". I'm real big on IA. I'm
certainly playing both sides of that coin. I'm not an uploader, and I
don't think that any number of AI researchers switching will make
uploading the most likely path.

Remember, I demand better odds because you want me to switch my
criterion of success from "AI OR uploading" to "uploading", or in other
words, you want me to beat a deadline of 2015 CRNS with a 2040 CRNS
technology instead of a 2020 CRNS technology. In fact, you want me to
classify the 2020 CRNS tech as "undesirable" so now I have to beat that
deadline too. And to top it all off, you want to specify either that
the 2040 CRNS upload happens to *den Otter* out of all the people in the world,
or that there are 2060 CRNS uploading kiosks and six hundred thousand
people can upload simultaneously.

Speaking as a navigator, you'd have to offer me a DAMN HUGE differential
of desirability before I'd go within ten light-years of a plan with that
many extra constraints. I expect ENOUGH goddamn trouble trying to
develop a 2020 CRNS technology before a 2015 CRNS technology, my success
percentile is dropping into the 30s, and you expect me to take on a gap
NINE TIMES AS LARGE?

> Ultimately it always comes down to one thing: how likely is
> a nuclear/nano war really within the next 30 years or so, and
> how much damage would it do.

Damn near inevitable given enough time; if it happens when the
technology is far enough along, it could easily wipe out all
multicellular life.

> Does this threat justify the
> all-or-nothing approach of an AI Transcend?

I bet your ass.

> Well, there hasn't
> been a nuclear war since the technology was developed more
> than 50 years ago, I'd say that's a pretty good precedent. Also,
> nukes and biological weapons haven't been used by terrorists
> yet, which is another good precedent. Is this likely to change
> in the first decades of the next century? If so, why?

Yes. Two nations have nuclear weapons - as in, enough to damage the
planet - so there's a balance of power. It takes a long time and a lot
of resources to develop, so most other nations can't get a full arsenal,
and if they did we could see it coming. The arsenals were developed
gradually, so at any given point an attack could be met with fairly
equal retaliation. And finally, nuclear weapons aren't self-replicating.

Then we have nanotech. Once diamond drextech becomes common, one guy,
in a lab, can get a complete arsenal that goes from zero to sixty in
hours once the basic unit is developed. There's no anti-first-strike
restraint. There's no balance of power. And there are too many
players. I think I posted this whole argument earlier - were you around?

> Even if we had a full-scale nuclear conflict, this would by no
> means kill everyone, in fact, most people would probably
> survive, as would "civilization".

I'm not worried about nuclear war, except insofar as it would shift the
balance of probabilities between nanotech and AI. This is actually
enough to make me worry quite a bit. A global computer network is
substantially harder to reconstruct than one Zyvex laboratory, so I'm
treating nuclear war as "losing" from my standpoint.

> A "malevolent" AI would
> kill *everybody*. Is grey goo really that big a threat?

Like I said, I bet your ass.

> A fully autonomous replicator isn't exactly basic nanotech,
> so wouldn't it be likely that people would already be
> starting to move to space (due to the increasingly low
> costs of spaceship/habitat etc. construction) before
> actual "grey/black goo" could be developed?

Oh, I wish. But the law of these revolutions is that they hit harder
than anyone expects, although sometimes they take longer. I would
expect virtually everything Drexler ever wrote about to be developed
within a year of the first assembler, after which, if the planet is
still around, we'll start to get the really *interesting* technologies.
Nanotechnology is ten times as powerful and versatile as electricity.
What we foresee is only the tip of the iceberg, the immediate
possibilities. Don't be fooled by their awesome raw power into
categorizing drextechs as "high" nanotechnology. Drextechs are the
obvious stuff. Like I said, I would expect almost all of it to follow
almost immediately.

> And even
> on earth one could presumably make a stand against
> goo using defender goo, small nukes and who knows
> what else.

No, I covered that during the "Goo Prophylaxis" debate on Extropians.
Basically, we can't defend against nuclear weapons right now and
nanotechnology just makes it worse.

> Many would be killed, but humanity probably
> wouldn't be wiped out, far from it. A malevolent AI would
> kill *everyone*. See my point?

Yes. You're wrong.

> > As far as I can tell, your evaluation of the desirability advantage is
> > based solely on your absolute conviction that rationality is equivalent
> > to selfishness. I've got three questions for you on that one. First:
> > Why is selfishness, an emotion implemented in the limbic system, any
> > less arbitrary than honor?
>
> Selfishness may be arbitrary, but it's also practical because it's
> needed to keep you alive, and being alive is...etc. "Selfishness"
> is a very sound fundament to build a personality on. Honor
> often leads to insane forms of altruism which can result in
> suffering and even death, and is therefore inferior as a meme
> (assuming that pleasure is better than death and suffering).

So, selfishness is arbitrary, but for some supergoals it's useful. No
disagreement there. Honor is also useful for some supergoals, since
other people tend to treat you better when you act honorably. Many
emotions are useful. They're still arbitrary. And in a well-designed
system, they're just ordinary subgoals instead of special-purpose code,
entirely dependent on supergoals and having no existence apart from
them. You seem to be according selfishness special treatment, placing
it above reason ("if symmetry [another word for Occam's Razor] wants me
to be unselfish it can kiss my ass"), which is what I object to.

> > Second: I know how to program altruism into
> > an AI; how would you program selfishness?
>
> Make survival the Prime Directive.

You did read the section on Interim Goal Systems and the Prime Directive
from _Coding a Transhuman AI_, right? You're aware that initial-value
goals are both unnecessary and unstable?

> > Third: What the hell makes
> > you think you know what rationality really is, mortal?
>
> What the hell makes you think you know what rationality really
> is, Specialist

I don't, that's why I hand it over to...

> /AI/SI/Hivemind/PSE/Power/God etc., etc.? Oops,
> I guess it's unknowable.

Sheesh, give them a shot. We know we can't do it because we tried. You
can't conclude from our own pathetic failures that the answer is
"unknowable". Me, I think they have a better chance than we do, and
that differential is all I need to make choices.

> Ah well, I guess I'll try to find eternal
> bliss then, until (if ever) I come up with something better. But
> feel free to kill yourself if you disagree.

I certainly will. Please *don't* feel free to interfere with my efforts
to create an AI because *you* think that just because *you* can't figure
out the answer no possible entity can.

I really object to your intimation that "only God" can figure out the
ultimate meaning of life. That sounds defeatist, passivist, and
Luddite. Me, I figure that any garden-variety superintelligence should
be able to handle the job.

> > And I think an IA Transcend has a definite probability of being less
> > desirable than an AI Transcend. Even from a human perspective. In
> > fact, I'll go all the way and say that from a completely selfish
> > viewpoint, not only would I rather trust an AI than an upload, I'd
> > rather trust an AI than *me*.
>
> Well, speak for yourself. Since the AI has to start out as a
> (no doubt flawed) human creation, I see no reason to trust it
> more than the guy(s) who programmed it, let alone myself.

Oh, yeah, sure, evolution is WAY more trusty than a human designer.
Even, no, ESPECIALLY if you're trying to accomplish something that was
never even REMOTELY INCLUDED in the original design goals, like an
upgradable architecture. And of course NO creation can be superior to
its Creator. I'd call you a Luddite... but oh, wait, I already did. So
let me just say that I'll believe you can out-upgrade a seed AI on the
same day you outswim a nuclear submarine.

> > I think you're expressing a faith in the human mind
> > that borders on the absurd just because you happen to be human.
>
> It may be messy, but it's all I have so it will have to do.

It is NOT all you have. The only reason it's all you have is because
you THINK it's all you have. Get hip to the transhumanist meme, man!
If you want a better mind, program one! Unless, of course, you're so
set on being The One that you refuse to do so.

> > > What needs to be done: start a project with as many people as
> > > possible to firgure out ways to a) enhance human intelligence
> > > with available technology, using anything and everything that's
> > > reasonably safe and effective
> >
> > *Laugh*. And he says this, of course, to the author of "Algernon's Law:
> > A practical guide to intelligence enhancement using modern technology."
>
> Um no, actually this was meant for everyone out there; I know that
> *you* have your own agenda, and that the chances of you abandoning
> it are near-zero, but maybe someone else will follow my advice. Every
> now and then, the supremacy of the AI meme needs to be challenged.

Yes, I do have my "own" agenda, which involves using IA as a means to
AI. And if you'd like to start your own IA project, I'll be more than
happy to contribute advice, brain scans, genetic material, or anything
else you need.

> > Which is the *other* problem with steering a car by shooting out the
> > tires... Taking potshots at me would do a lot more to cripple IA than
> > AI. And, correspondingly, going on the available evidence, IAers will
> > tend to devote their lives to AI.
>
> Well, there certainly are a lot of misguided people in that field, no
> doubt about that. Fortunately there also are plenty of people (in the
> medical branches, for example) who, often unknowingly, will
> help to advance the cause of IA, and ultimately uploading.

Like I said, it doesn't make a difference. AI is 2020 CRNS, and
uploading is 2040 CRNS. No amount of optimism and plotting and
sucker-cheating and tire-shooting is going to change the ordering. The
Soothsayer herself couldn't navigate a problem like that. Gap's too large.

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way

