Re: LIST: the Gooies

From: Dan Fabulich (daniel.fabulich@yale.edu)
Date: Wed Apr 14 1999 - 00:00:07 MDT


At 02:40 PM 4/13/99 -0700, Lyle Burkhead wrote:
>Dan Fabulich writes,
>
>> I've got a new one for you. You've tried straw-man.
>> You've tried parody. You've even tried direct ad hominem.
>> Have you considered presenting your points in clear
>> straight-forward essay style?
>
>See my reply to Eliezer on the Geniebusters thread. Or, see Geniebusters
>itself.

I did. Is that your straightforward essay? Your use of "exercises" is
condescending, and it backfires much of the time. Indeed, the whole text
appears designed to lead readers to exactly the answers you want them to
reach (or to no answer at all, when that was your intended effect). As a
result, I found myself puzzling over several sections of your argument
where I had given an answer to one of your exercises that you clearly
didn't expect... I answered questions you thought had no answer, or I
thought of cheap ways to do things you assumed were expensive. Making
your assumptions clear, as well as your conclusions, is a better (and
clearer) way to write an argument.

>You might try heeding your own advice, by the way. You accuse me of using
>straw-man arguments, but you don't say which arguments you are referring
>to. I'd like to see a clear, straightforward essay from you on that point.
> Please show me which of my arguments are straw-man arguments.

OK, I'll give it a try. Eliezer beat me to the punch on most counts...
though I'll jump in and defend the "genie machine" concept where he
wouldn't.

That is, I think we can make a machine with a whole lot of pre-programmed
products loaded in it. You want to make a violin? Say "I want a violin."
You want a submarine sandwich? Say "make me a submarine sandwich with
everything on it." It won't be able to make anything that it hasn't been
programmed to make, but it will be able to make anything that it has
already been programmed to make. The only AI present will be a Natural
Language Parser, or, as Drexler put it, an AI with "both great technical
ability and the social ability needed to understand human speech and
wishes." Ask it to make a submarine and it might say "I'm sorry, Lyle, I
can't do that."
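
To make that concrete, here's a toy sketch in Python (my own invented
names and placeholder recipes, not anything from Drexler): the "genie" is
nothing but a catalog of pre-written assembly programs sitting behind a
crude request parser.

    # Toy model of the pre-programmed genie machine: a catalog lookup
    # behind a crude parser.  All recipes are invented placeholders.
    RECIPES = {
        "violin": "<assembly program for a violin>",
        "submarine sandwich": "<assembly program for a sandwich>",
    }

    def genie(request):
        """Return the stored assembly program if we recognize the product."""
        for product, program in RECIPES.items():
            if product in request.lower():
                return program
        return "I'm sorry, Lyle, I can't do that."   # not in the catalog

    genie("Make me a submarine sandwich with everything on it.")  # sandwich program
    genie("Make me a submarine.")  # "I'm sorry, Lyle, I can't do that."

No general intelligence anywhere in there; the machine is exactly as
capable as the library of programs somebody bothered to write for it.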

[Straw Man: Where does Drexler say that the Shape Shifter, or the genie
machine, wouldn't have to be programmed?]

You claim that having the ability to move individual atoms won't make a
genie machine, because we'll still have to program it.

Now, nobody doubts that programming a nano-assembler will be expensive. AI
might make this somewhat cheaper, but it might not. No matter. The point
is that programming a nano-assembler is expensive; you can't just summon up
a submarine sandwich by genie power alone.

However, as you surely realize, the whole point of automating production
is that automated production is much cheaper per unit than manual
production, even though designing the automation system costs far more
than making any single unit by hand.

Thus, if some group of people, somewhere, ANYWHERE, managed to write a
program that could tell a universal assembler how to make a violin
(writing the program == automating the whole system), then from then on
anybody who had the program and some replicators could make a violin from
raw materials. Writing the violin program is a sunk cost; afterward, the
marginal cost of making a violin is small. You may still have to pay a
lot to get your hands on the violin program if intellectual property
rights are well enforced, but once you did, you could make a lot of
violins very cheaply.
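
The arithmetic here is just fixed cost versus marginal cost. A minimal
sketch in Python, with numbers I'm making up purely for illustration:

    # Fixed vs. marginal cost, with made-up numbers.
    design_cost = 10000000   # one-time cost of writing the violin program (assumed)
    unit_cost = 5            # raw materials and energy per violin (assumed)

    def average_cost(n):
        """Average cost per violin with the design cost spread over n copies."""
        return (design_cost + unit_cost * n) / n

    average_cost(1)        # ~10000005: the first violin carries the whole design
    average_cost(1000000)  # ~15: after a million copies, close to raw materials

Whatever the real numbers turn out to be, the shape of that curve is the
point: the program is written once and copied forever after.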

>The same considerations would apply to diamond trees, if they existed. If
>diamonds are made by replicators, that does not imply that they will be
>free, or as cheap as potatoes.
>
>In fact when you go from natural trees to diamond trees, the situation will
>change for the worse. Natural trees don't require much of an investment.
>Agricultural land is not expensive. Neither is fertilizer. Agriculture is a
>fairly low-tech business. A "diamond tree" (i.e. a machine that uses
>replicating atomic positioners to produce diamonds) will require an
>enormously complex and expensive environment. Besides, an orange tree
>doesn't have to be designed. A diamond tree does. (No, don't tell me about
>Genies!)

You make that claim about a nanite's "complex and expensive environment"
pretty offhandedly. You didn't even make me do an exercise to try to guess
how complex or expensive you think it would be. Can you justify the claim?

As for the fact that it would have to be designed: you're right. But, once
designed, copies of the design, and copies of the tree, would be cheap.
Maybe even as cheap as potatoes.

You also make a big deal about the cost of retooling. You seem to miss the
whole point of a universal assembler (though it's hard to tell, due to the
nature of your writing style). The very premise behind a universal
assembler is that, once you've written the program for it, retooling is
very cheap, because the program itself contains the instructions that
tell the assembler how to retool. The retooling instructions *are*
complicated, but they only have to be worked out once.

The upshot of this is that if you give a universal assembler a program to
make a violin, along with the raw materials required to build it, the
universal assembler can build a violin. If you give it a program to make a
roast beef sandwich, and the raw materials with which to build that, it can
build a sandwich. But, most important of all: as long as I don't throw
away the violin program, I can then go back and make more violins once I'm
done having it make my sandwich.
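
Or, as a cartoon in Python (everything here invented, a sketch of the
premise rather than a design), "retooling" amounts to nothing more than
loading a different stored program:

    # Cartoon of the retooling point: switching products means loading a
    # different stored program, and the old programs stay in the library.
    class UniversalAssembler:
        def __init__(self):
            self.programs = {}          # the expensive part: writing these

        def load(self, name, program):
            self.programs[name] = program

        def build(self, name, raw_materials):
            if name not in self.programs:
                raise ValueError("no program for %r" % name)
            return "%s built from %s" % (name, raw_materials)

    a = UniversalAssembler()
    a.load("violin", "<violin program>")
    a.load("roast beef sandwich", "<sandwich program>")
    a.build("roast beef sandwich", "bread, beef, horseradish")
    a.build("violin", "spruce, maple, strings")   # violin program still there

The expensive step is load(); build() is where the cheapness lives, and
nothing about building a sandwich deletes the violin program.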

As for your AI arguments, I feel like the jury is still out on how
effective AI will be in fulfilling our wishes. Maybe we'll be able to make
a pretty smart design system that's willing to work for free. Maybe no
system smart enough to work on nanotech will be stupid enough to work for
free. Maybe the very idea of making a bootstrapping transhuman AI do what
we want it to is folly. You definitely don't prove your case online.

Oh, and as for that bootstrapping problem: I don't think anyone will ever
prove the bootstrapping theorem as you presented it: "It will be possible
for human-equivalent robots to go beyond the point of human equivalence,
even though it is not possible for us." Why not? Because I think that we
COULD improve our own intelligence, if we actually had the tools with
which to alter our brains in fine detail. (Not the cognitive tools, just
the manipulation tools.) I predict that we *will* be able to increase our
own intelligence, given nanotech and lots of programming.

Fortunately, an AI could easily make detailed adjustments to its inner
workings. Thus, an AI would have an even easier time bootstrapping than
we would: whereas we would need to figure out both how to change
ourselves and which changes to make, an AI would only face the latter
problem of deciding which changes to make.

That's all for now... I think I've probably wasted too much time on this
already.

-Dan


