Re: Gattaca on TV this weekend

From: Samantha Atkins (samantha@objectent.com)
Date: Sun Jun 16 2002 - 22:56:06 MDT


Anders Sandberg wrote:

> On Sun, Jun 16, 2002 at 12:23:49AM -0400, Brian Atkins wrote:
>
>>We can't predict when the breakthrough in the software side of AI will come.
>>What we can say is that no one, whether through AI or nanotech-based brain
>>enhancement or some other way, is going to create a transhuman intelligence
>>until at least the required hardware to implement it is available.
>>
>
> But what is the required hardware? It doesn't have to be massive
> nanocomputers, it could be, as Marvin Minsky put it, a 286 with the right
> algorithm.

A 286 couldn't run a bloody gnat. Let's get real here. If
Marvin Minsky believes it can be done, and he says so rather
often, then let him or one of his grad students offer some
proof, preferably by construction.

> I think you are mistaking my intentions. You seem to interpret what I
> said as "why bother trying to make AI", which is incorrect. I am
> discussing this on the metalevel, as a memetic gardener. I'm seriously
> worried that transhumanism has plenty of assumptions held by many people
> that are not firmly founded on good evidence or at least careful
> analysis. If we don't continually question and refine our assumptions,
> we will end up living in a fantasy world. Of course, even after deep
> discussion people will perhaps come to different conclusions (which I
> guess is the case here). That is entirely OK.
>

Well said.

> Here is another assumption which I think it is worth questioning: that a
> fast transformation is desirable.
>
> (This really belongs in a non-Gattaca thread)
>
> Mold ex machina:
>

A fast transformation is certainly dangerous more quickly, if we
assume that a slower transformation is even viable. But a
sufficiently slow transformation has personal consequences of its
own: it may be too slow to ensure, or even make likely, our own
personal survival.

If we do think a slower transformation is required to ensure
reasonable survivability, what do we do if the technology
ramp-up looks to be moving faster than that? Do we actually
advocate policies to slow it down?

>>These will be worth worrying about much
>>sooner, and are (at least in the case of a bio plague) just another reason
>>to achieve a Singularity sooner rather than extending our window of
>>vulnerability.
>>
>
> On the other hand a very fast development would mean that we reach
> powerful levels of damage potential fast - even if you develop safety
> systems first they might not have been fully distributed, integrated and
> made workable when the truly risky stuff starts to be used. Just look at
> software today - imagine the same situation with nanoimmune systems or
> AI.
>

Not to worry. With today's software the poor beastie would be a
goner within 12 hours tops anyway. :-)

 
> I wonder if the singularity really ends the window of vulnerability.
> Maybe it just remains, giving whatever superintelligences are around
> nervous tics.
>

If we go into it with the notion that continued Darwinian
selection of the most aggressive, fastest, and smartest is the
way it should be, then a continuing window of vulnerability is
almost inescapable.

- samantha

 


