Re: Gattaca on TV this weekend

From: Brian Atkins (brian@posthuman.com)
Date: Sun Jun 16 2002 - 10:34:55 MDT


Anders Sandberg wrote:
>
> On Sun, Jun 16, 2002 at 12:23:49AM -0400, Brian Atkins wrote:
> >
> > We can't predict when the breakthrough on the software side of AI will come.
> > What we can say is that no one, whether through AI or nanotech-based brain
> > enhancement or some other way, is going to create a transhuman intelligence
> > until at least the required hardware to implement it is available.
>
> But what is the required hardware? It doesn't have to be massive
> nanocomputers, it could be as Marvin Minsky put it, a 286 with the right
> algorithm.

No one can say for sure yet, as you know. We can make various estimates and
say that it seems somewhat likely that with things like Blue Gene we are
moving into the range. If that's true, then the capabilities we'll have
post-2010 are overkill. But I think this misses the point: even though I
can't prove to you that 1 petaop/s is enough, I have a reasonable basis for
believing that it might be. That is enough to make it worthwhile to work
on developing some AI software to test the idea rather than sit around
twiddling my thumbs. The only reasonable basis for sitting around twiddling
your thumbs is if you can somehow be 100% sure we won't have adequate
hardware until significantly later than one of the other Singularity paths
would arrive.

>
> > If we can
> > estimate advanced nanotech at 2020 or beyond, and we know it takes longer than
> > that to grow some bioengineered transhumans, and we also put uploading at 2020
> > or beyond, then what we can say for sure is that AI is the only technique that
> > has a shot at working pre-2020.
>
> This is true. But it assumes 1) that no other technologies will become
> relevant over the next 20 years (20 years ago only Richard Feynman had
> thought about quantum computers), and 2) that a technology "working"
> will become immediately very significant.

Well, I personally spend a large amount of time following technology, and
if something came along that was a better idea I'd pursue it instead. For
the moment, though, this is the best potential bang for the buck I've found.

If you want to describe to me a realistic theory of how a working self-
enhancing AI would not become immediately significant, go ahead.

>
> > > I think one important holy cow to challenge for all of us here on the
> > > list is the "fast transformation assumption": that changes to a trans-
> > > and posthuman state will occur over relatively short timescales and
> > > especially *soon*. While there are some arguments for this that make
> > > sense (like Vinge's original argument for the singularity) and the
> > > general cumulative and exponential feeling of technology, we shouldn't
> > > delude ourselves that this is how things really are. We need to examine
> > > assumptions and possible development paths more carefully.
> >
> > I'm not sure why you brought this up, but anyway:
> >
> > Well relating to the subject line I have to say I am reminded of Vincent
> > in the movie (who I thought was a rather Extropian fellow) who after much
> > searching and thinking was able to find a way (difficult, but possible)
> > to get what he wanted. Frankly you sound a lot like his father who kept
> > encouraging him to become a janitor. Right now there is one identifiable
> > way (also quite difficult, but potentially possible) to achieve the "fast
> > transformation assumption" (FTA) (can't we just call it the Singularity?)
> > within this decade even. And until I and the others like myself find a
> > better way we are going to be just as persistent as Vincent while we pursue
> > this one. One very difficult potentially possible way is better than none.
>
> I think you are mistaking my intentions. You seem to interpret what I
> said as "why bother trying to make AI", which is incorrect. I am
> discussing this on the metalevel, as a memetic gardener. I'm seriously
> worried that transhumanism has plenty of assumptions held by many people
> that are not firmly founded on good evidence or at least careful
> analysis. If we don't continually question and refine our assumptions,
> we will end up living in a fantasy world. Of course, even after deep
> discussion people will perhaps come to different conclusions (which I
> guess is the case here). That is entirely OK.

You'll be happy to know that I don't walk around convinced that our work
will necessarily lead to anything of significance. But the fact that it
could, and that there doesn't appear to be a better way to spend our
resources, is what is driving this. It's an experiment. What I still don't
see is any good reason not to try the experiment, or to switch and do some
other experiment first, or to twiddle my thumbs.

>
> Here is another assumption which I think it is worth questioning: that a
> fast transformation is desirable.
>
> (This really belongs in a non-Gattaca thread)

Yes, why don't you start said thread and give some good reasons why it
might not be desirable.

>
> Mold ex machina:
> > These will be worth worrying about much
> > sooner, and are (at least in the case of a bio plague) just another reason
> > to achieve a Singularity sooner rather than extending our window of
> > vulnerability.
>
> On the other hand a very fast development would mean that we reach
> powerful levels of damage potential fast - even if you develop safety
> systems first they might not have been fully distributed, integrated and
> made workable when the truly risky stuff starts to be used. Just look at
> software today - imagine the same situation with nanoimmune systems or
> AI.
>
> I wonder if the singularity really ends the window of vulnerability.
> Maybe it just remains, giving whatever superintelligences are around
> nervous tics.
>

I'm all for wondering, maybes, and on-the-other-hands, but it would take
more than that to convince me to pursue an alternate plan of action. If
you have some new information to discuss, go ahead; otherwise I feel I
already considered all of this long ago.

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/
