From: Mark Walker (mdwalker@quickclic.net)
Date: Fri Nov 30 2001 - 19:22:18 MST
Eliezer S. Yudkowsky wrote:
>
> We do not have the technology. We have the technology to make arbitrary
> alterations to DNA, in the same sense that we have the technology to make
> arbitrary alterations from ones and zeroes.
I am not sure in what sense the changes I described are arbitrary. We have
the technology to make specific changes to particular homeobox genes that
control the relative size of different parts of the brain, so the changes
are not arbitrary in that sense. It is true that we do not know what the
changes will do for the organism, but that would be the point of
experimenting.
>Going from DNA to
> intelligence enhancement is not necessarily easier than going from ones
> and zeroes to Artificial Intelligence. And your compile-build-debug cycle
> is 14 years long, and illegal.
>
I said that this was a back-up plan, so it is consistent with that position
that genetic engineering is not necessarily easier. Damien pointed out as
well that it would take time for the experiments to mature. I didn't think
to mention this, but I suppose I should have: I have some expertise in the
subject of a maturation period, as a former child myself.
G.E. is a good BACK-UP plan because: (1) we have the technology
presently to make non-arbitrary alterations, and (2) manipulating the
homeobox genes in question might prove to be a sufficient condition for
creating a greater-than-human intelligence. Given the plasticity of the
neocortex during ontogenesis, and its role in so-called higher cognitive
functions, I think this would be a good area to experiment with. There are a
number of hypotheses one could try here right now (at least on animals);
e.g., one null hypothesis is that a 50% increase in neocortex size in a
pygmy chimp will not lead to an increase in intelligence. Barring financial,
ethical, and legal worries, this is the sort of thing we could attempt
after lunch. If memory serves, the generation time for the pygmy chimp is
about 3-5 years.
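To make the experimental logic concrete, here is a minimal sketch of how one
might evaluate that null hypothesis once scores were in hand: compare a
cognitive measure between modified and unmodified animals with a two-sample
(Welch's) t statistic. All numbers below are invented for illustration; no
real data or experiment is implied.

```python
# Hypothetical sketch: testing the null hypothesis that a ~50% neocortex
# increase produces no change in some measured cognitive score.
# The scores below are made up purely to show the calculation.
from statistics import mean, stdev

control = [52, 48, 55, 50, 47, 53]   # invented scores, unmodified animals
modified = [61, 58, 64, 57, 60, 63]  # invented scores, modified animals

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2  # sample variances
    return (mean(a) - mean(b)) / ((va / na + vb / nb) ** 0.5)

t = welch_t(modified, control)
# A large |t| (compared against the t distribution with the appropriate
# degrees of freedom) would count as evidence against the null hypothesis.
print(round(t, 2))
```

With the invented numbers above the statistic comes out large, which in a
real experiment would be grounds for rejecting the null; with noisy
biological data one would of course need proper sample sizes and controls.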
I am not proposing G.E. as a competitor to AI. We may agree that the
biggest worry is suffering at the hands of our own stupidity. I hope SIAI
works, but I would feel a little better with a back-up plan.
This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 08:12:18 MST