From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Nov 10 2007 - 15:48:05 MST
Robin Hanson wrote:
>
> The anchor that I start with is my rough estimate of how long whole
> brain emulation will take, and so I'm most interested in comparing AGI
> to that anchor. The fact that people are prone to take these estimate
> questions as attitude surveys is all the more reason to seek concrete
> arguments, rather than yet more attitudes.
If you want to compare AGI *relative* to whole brain emulation -
unanchoring the actual time and hence tossing any pretense of
futuristic prophecy out the window - then that's a whole separate story.
I would begin by asking whether there was ever, in the whole history of
technology, a single case where someone *first* duplicated a desirable
effect by emulating biology at a lower level of organization, without
understanding the principles of that effect's production from that
lower level of organization. I would not be surprised if someone could
think of one example, but I would be surprised if they could think of
three, or of a single *major* example.
The elementary reason why I am suspicious of whole-brain emulation is
that the main arguments for it seem to proceed as follows: "Nobody
has any idea why birds fly, or even a good definition of what 'flying'
means apart from the fact that birds do it, so we'll get flightiness
by emulating bird biochemistry before we have de novo flying
machines." This argument is intrinsically based on ease of
imaginability with current knowledge, rather than probable future
advances in knowledge. With current knowledge it "seems easy to
imagine" that future technology could emulate a brain cell by cell,
but "hard to imagine" that anyone will understand the sacred and
mysterious principles of intelligence. And similarly, in 1890 it
would have been easy to imagine a flying machine that looked just like
a bird and flew just like a bird, and hard to imagine a flying machine
that worked differently.
You cannot use ignorance as if it were positive knowledge.
Looking at history, we find two lessons:
1) Extremely mysterious-seeming desirable natural phenomena are
eventually understood and duplicated by engineering;
2) Because they have ceased to be mysterious by the time they are
duplicated, humans design them by engineering backward from the
desired results, rather than by exactly emulating the lower levels of
organization of a black box in Nature whose mysteriousness remains
intact even as it is emulated.
Cars don't emulate horse biochemistry, sonar doesn't emulate bat
biochemistry, compasses don't emulate pigeon biochemistry, suspension
bridges don't emulate spider biochemistry, dams don't emulate beaver
building techniques, and *certainly* none of these things emulate
biology *without understanding why the resulting product works*.
The notion of whole-brain emulation *that preserves intelligence's
mysteriousness* seems to me a device to preserve the future's
nonabsurdity - to avoid violating the invariant "Intelligence is
mysterious" in a futuristic prediction. But the future is always absurd.
Suppose I put it to you this way: *Given* the lessons of history's
engineering of formerly mysterious phenomena, what characteristics,
visible *without benefit of hindsight*, would have
enabled ancient futurists to *distinguish* the extremely rare case of
a desirable phenomenon that is first duplicated by emulating a lower
level of biological organization while the higher levels remain
mysterious and non-reverse-engineerable, from all the many cases where
the high level was understood by insight and then engineered with a
different lower level of organization?
I might try to answer if I could think of any cases at all of the
first type. They may exist, but I don't know of any, or am not
recalling them (note use of the availability heuristic).
I can think of nothing which separates the AI case from any of the
historical cases of the second kind, except for special pleading of
the form "But this time it *really is* mysterious!",
presumably as distinguished by various signs and portents (such as
failed optimism) which were also present in past historical cases. I
would mark this down as a failure to appreciate how the failure to
develop flying machines looked *at the time*, without benefit of hindsight.
None of this is an argument that AI will happen quickly in an absolute
sense, simply that it will not *first* happen through whole-brain
emulation. It is not like arguing, in 1880, that powered
heavier-than-air flight would happen before 1900 - it's hard to see how
this could have been foreseeable at all, one way or another. But they
might have validly guessed in 1880 that nonbirdlike flying machines
would *precede* flying machines that emulated the tendons, muscles,
skeletons, relative weights, and wing-flapping patterns of biological
birds - even though information of the second kind seemed "easy to
imagine" a clever anatomist discovering and a clever mechanic
duplicating, while a nonbirdlike design seemed "hard to imagine".
Even in 1880, they could have remembered that ships are not like fish,
trains are not like horses, and élan vital turned out not to be so
crucial in chemical reactions after all.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence