Would you bet the farm? [was: Gattaca on TV this weekend]

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Sun Jun 16 2002 - 05:02:12 MDT


On Sat, 15 Jun 2002, Brian Atkins wrote:

> Harvey Newstrom wrote:
> >
> > We're talking 27-32 years if the technology is available today. We're
> > talking 32-42 years if it has to go through FDA and government
> > approvals. [some snips applied]
> >
> > I doubt that the movie GATTACA could reasonably occur in real life until
> > after 2050.
>
> I hope Hal and everyone else here who still has dreams of "bio" technologies
> having any real chance of coming before AI takes this estimate and sticks
> it deep in their heads and cogitates a bit more on what a truly realistic
> view of the future is likely to be.

But unless I'm missing something, the GATTACA scenario is not so different
from Greg Stock's "Redesigning Humans" (which I'm in the middle of reading).
(Mind you, I've never seen GATTACA, so my perspective is quite indirect.)

I believe that both Greg's and Brian's comments are premised on the idea
that it will be "impossible" to rewrite *our* genomes (or to augment
said genomes with genomic patches or nano-enhancements). Such premises
have a low probability of being correct, IMO. [Many of us know my
disagreement with Greg over the feasibility of modifying our genomes --
while Greg writes about why it seems unlikely, I'm trying to create
the corporate structure and capabilities that will enable such
engineering.]

The real question in my mind revolves around the degree to which humans
would accept AIs. Greg argues that none of us would adopt advanced genetic
engineering technologies unless we could see clear advantages for our offspring.
*But* how many people on the list would choose to avoid radical *self*-evolution
in the face of an AI that cannot be guaranteed to have the interests of
"humanity" as one of its "prime directives"? Just because the Singularity
Institute may have as a goal a "Friendly AI", that does not imply that
Saddam Hussein or the government of China has similar goals.

So, a question for extropians -- would you be willing to risk your life
(by embracing untested technologies) to uplift yourself if you felt
humanity itself were at risk?

> What comes before that is a long string of intermediate stage technologies
> which will have many spinoffs, including- you guessed it- extremely powerful
> computer technologies. We'll get massively powerful computer hardware well
> before drextech.

Which raises some very sticky issues. What if such technologies
are embraced by one of the leaders of the "axis of evil" before we can
embrace them? Is an advanced AI in the service of a megalomaniac
better or worse than nanotech in the service of said megalomaniac?

Bubble, bubble, toil and trouble, witches' cauldron brew...
Robert


