Re: Gattaca on TV this weekend

From: Eugen Leitl (eugen@leitl.org)
Date: Sun Jun 16 2002 - 05:42:49 MDT


On Sun, 16 Jun 2002, Anders Sandberg wrote:

> But what is the required hardware? It doesn't have to be massive
> nanocomputers, it could be as Marvin Minsky put it, a 286 with the
> right algorithm.

I'm not sure how a single i80286 IBM AT can help us. It might be capable
of doing something interesting, but by itself it doesn't help us find the
bitvector that would let it do something interesting, unless we've got a
lot of clue, or are extremely lucky. It doesn't look good on either count
so far, but of course we can't be absolutely sure.

It is probably relatively safe to assume that you need to cover a lot of
state space to find something interesting, and even a city full of IBM ATs
looks too slow for that. We do have some evidence that a lot of crunch can
design very compact systems, though.
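To put rough numbers on it, here's a back-of-envelope sketch in Python.
The bitvector length, the per-machine throughput, and the machine count
are all made-up assumptions for illustration, not measurements:

# Back-of-envelope cost of blind exhaustive search over n-bit "programs".
# Every number below is an illustrative assumption, not a measurement.

n_bits = 64                  # assumed length of the target bitvector
candidates = 2 ** n_bits     # size of the search space

evals_per_sec = 1e5          # assumed evaluations/s on one i80286 AT
machines = 1e6               # "a city full of IBM ATs", roughly

seconds = candidates / (evals_per_sec * machines)
years = seconds / (365 * 24 * 3600)
print(f"{years:.1f} years")  # ~5.8 years under these assumptions

Each extra bit doubles the time, so anything much beyond that is out of
reach for blind search; hence the need for a lot of clue or a lot of luck.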
 
> I'm seriously worried that transhumanism has plenty of assumptions
> held by many people that are not firmly founded on good evidence or at
> least careful analysis. If we don't continually question and refine

This is accurate, of course. Iterated BS in the absence of validation
tends to smell ever stronger, as participants progressively lose contact
with the ground under their feet, which in most cases was tenuous to start
with (I'm explicitly including myself here).

> our assumptions, we will end up living in a fantasy world. Of course,
> even after deep discussion people will perhaps come to different
> conclusions (which I guess is the case here). That is entirely OK.

A while back someone mentioned how the traditional intellectuals lost
track of reality and drifted off into obscurity and irrelevance.
Obviously, similar mechanisms are at work here.
 
> Here is another assumption which I think it is worth questioning: that a
> fast transformation is desirable.

We absolutely need to slow down and think once we come within earshot of
nonhuman systems that are human-competitive across a wide range of areas.
 
> On the other hand a very fast development would mean that we reach
> powerful levels of damage potential fast - even if you develop safety
> systems first they might not have been fully distributed, integrated
> and made workable when the truly risky stuff starts to be used. Just
> look at software today - imagine the same situation with nanoimmune
> systems or AI.
>
> I wonder if the singularity really ends the window of vulnerability.
> Maybe it just remains, giving whatever superintelligences are around
> nervous ticks.

Living right in the middle of an extinction event and the rat race as
usual do seem to be very different modes, though. We *are* living in the
middle of an extinction event, as far as other species are concerned. It's
just that so far we (and the few species that managed to hitch a ride)
have been the profiting party.

Eventually winding up on the losing end ourselves would seem kinda ironic.


