From: Anders Sandberg (asa@nada.kth.se)
Date: Sun Jun 16 2002 - 03:06:29 MDT
On Sun, Jun 16, 2002 at 12:23:49AM -0400, Brian Atkins wrote:
>
> We can't predict when the breakthrough on the software side of AI will come.
> What we can say is that no one, whether through AI or nanotech-based brain
> enhancement or some other way, is going to create a transhuman intelligence
> until at least the required hardware to implement it is available.
But what is the required hardware? It doesn't have to be massive
nanocomputers; it could be, as Marvin Minsky put it, a 286 with the
right algorithm.
> If we can
> estimate advanced nanotech at 2020 or beyond, and we know it takes longer than
> that to grow some bioengineered transhumans, and we also put uploading at 2020
> or beyond, then what we can say for sure is that AI is the only technique that
> has a shot at working pre-2020.
This is true. But it assumes 1) that no other technologies will become
relevant over the next 20 years (20 years ago only Richard Feynman had
thought about quantum computers), and 2) that a technology that "works"
will immediately become very significant.
> > I think one important sacred cow to challenge for all of us here on the
> > list is the "fast transformation assumption": that changes to a trans-
> > and posthuman state will occur over relatively short timescales and
> > especially *soon*. While there are some arguments for this that make
> > sense (like Vinge's original argument for the singularity) and the
> > general cumulative and exponential feeling of technology, we shouldn't
> > delude ourselves that this is how things really are. We need to examine
> > assumptions and possible development paths more carefully.
>
> I'm not sure why you brought this up, but anyway:
>
> Well, relating to the subject line, I have to say I am reminded of Vincent
> in the movie (who I thought was a rather Extropian fellow), who after much
> searching and thinking was able to find a way (difficult, but possible)
> to get what he wanted. Frankly, you sound a lot like his father, who kept
> encouraging him to become a janitor. Right now there is one identifiable
> way (also quite difficult, but potentially possible) to achieve the "fast
> transformation assumption" (FTA) (can't we just call it the Singularity?)
> even within this decade. And until I and the others like myself find a
> better way, we are going to be just as persistent as Vincent while we
> pursue this one. One very difficult, potentially possible way is better
> than none.
I think you are mistaking my intentions. You seem to interpret what I
said as "why bother trying to make AI", which is incorrect. I am
discussing this on the metalevel, as a memetic gardener. I'm seriously
worried that transhumanism carries plenty of assumptions, held by many
people, that are not firmly grounded in good evidence or at least
careful analysis. If we don't continually question and refine our
assumptions, we will end up living in a fantasy world. Of course, even
after deep discussion people will perhaps come to different conclusions
(which I guess is the case here). That is entirely OK.
Here is another assumption that I think is worth questioning: that a
fast transformation is desirable.
(This really belongs in a non-Gattaca thread)
Mold ex machina:
> These will be worth worrying about much
> sooner, and are (at least in the case of a bio plague) just another reason
> to achieve a Singularity sooner rather than extending our window of
> vulnerability.
On the other hand, very fast development would mean that we reach
powerful levels of damage potential quickly - even if you develop safety
systems first, they might not be fully distributed, integrated and made
workable by the time the truly risky stuff starts to be used. Just look
at software today - imagine the same situation with nanoimmune systems
or AI.
I wonder if the singularity really ends the window of vulnerability.
Maybe it just remains, giving whatever superintelligences are around
nervous tics.
-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y