Re: Gattaca on TV this weekend

From: Brian Atkins (brian@posthuman.com)
Date: Sat Jun 15 2002 - 22:23:49 MDT


Anders Sandberg wrote:
>
> On Sat, Jun 15, 2002 at 04:21:28PM -0400, Brian Atkins wrote:
> > Harvey Newstrom wrote:
> > >
> > > We're talking 22 years minimum if the technology were ready today.
> > > We're talking 27-32 years if the technology is available today. We're
> > > talking 32-42 years if it has to go through FDA and government
> > > approvals. Who knows how many additional years to gain public
> > > preference over traditional mating.
> > >
> > > I doubt that the movie GATTACA could reasonably occur in real life until
> > > after 2050.
> >
> > I hope Hal and everyone else here who still has dreams of "bio" technologies
> > having any real chance of coming before AI takes this estimate and sticks
> > it deep in their heads and cogitates a bit more on what a truly realistic
> > view of the future is likely to be.
>
> Of course, the AI side should think about the history of AI and ask
> themselves whether their visions of timescales are any faster. There
> might even be the problem that real AI will require a "childhood" of
> experience to become useful, which would slow things a lot.

We can't predict when the breakthrough on the software side of AI will come.
What we can say is that no one, whether through AI or nanotech-based brain
enhancement or some other route, is going to create a transhuman intelligence
until at least the hardware required to implement it is available. If we
estimate advanced nanotech at 2020 or beyond, and we know it takes longer than
that to grow bioengineered transhumans, and we also put uploading at 2020
or beyond, then what we can say for sure is that AI is the only technique
with a shot at working pre-2020.

>
> I think one important holy cow to challenge for all of us here on the
> list is the "fast transformation assumption": that changes to a trans-
> and posthuman state will occur over relatively short timescales and
> especially *soon*. While there are some arguments for this that make
> sense (like Vinge's original argument for the singularity) and the
> general cumulative and exponential feeling of technology, we shouldn't
> delude ourselves that this is how things really are. We need to examine
> assumptions and possible development paths more carefully.

I'm not sure why you brought this up, but anyway:

Well, relating to the subject line, I have to say I am reminded of Vincent
in the movie (who I thought was a rather Extropian fellow), who after much
searching and thinking was able to find a way (difficult, but possible)
to get what he wanted. Frankly, you sound a lot like his father, who kept
encouraging him to become a janitor. Right now there is one identifiable
way (also quite difficult, but potentially possible) to achieve the "fast
transformation assumption" (FTA) (can't we just call it the Singularity?)
within this decade even. And until I and others like me find a better way,
we are going to be just as persistent as Vincent while we pursue this one.
One very difficult but potentially possible way is better than none.

If you want to label this "delusional," go ahead, but from my view it looks
a lot like the attitude of the Gattaca society's people, who were so sure
they had everything perfectly measured that it wasn't worth considering the
possibility that there could be something else. Perhaps in other
circumstances this wouldn't be so bad, but here we are talking about
something literally world-changing that can't even get funding worth a
fraction of what NASA puts toward anti-gravity and cold fusion. Surely the
potential rewards are worth quite a bit more risk capital than is currently
being spent, and the Extropian choice is to embrace and fully explore this
chance rather than give up before trying and wait 20 years or longer for
some form of completely human-driven slow scenario to begin playing out.

I can defend this cow all day, can you slaughter it?

>
> > Going back to your question the other day Hal about whether we should worry
> > more about bio and nano coming first, I hope this helps. As for nano think
> > about it like this: we don't really have anything really to worry about
> > there until we get the fully fledged final stage of it ("drextech"). What
> > comes before that is a long string of intermediate stage technologies which
> > will have many spinoffs, including- you guessed it- extremely powerful
> > computer technologies. We'll get massively powerful computer hardware well
> > before drextech.
>
> The problem isn't gods erupting from our machines, but mold. Industrial
> accidents with bio or nano can be very bad, even when the technology is
> trivial. Imagine a widely used nanodevice that turns out to have a small
> part that is non-biodegradable and jams a certain metabolic chain in
> soil bacteria a la microscopic silicosis. That could do terrible damage
> without being smart, self-replicating or advanced. Once the problem was
> diagnosed we could of course fix the design and start designing the
> cleanup, but it could cause serious damage. A few failed rice or wheat
> harvests, and we will have trouble.
>
> This is the kind of low-tech, close-to-home problem that often gets
> ignored in discussions about grand technological processes. But
> accidents shape technology and the perception of them, which in turn
> controls what will be developed. There is a reason people accepted the
> railroad accidents but abandoned Zeppelins after Hindenburg.
>

If Hal was talking about this or some more basic kind of bio disaster, then
I was off-base in my criticism. These risks will be worth worrying about
much sooner, and (at least in the case of a bio plague) they are just
another reason to achieve a Singularity sooner rather than extend our
window of vulnerability.

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/


This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 09:14:49 MST