From: Richard Steven Hack (richardhack@pcmagic.net)
Date: Fri Mar 01 2002 - 01:01:38 MST
At 12:00 PM 2/28/02 -0800, you wrote:
>Richard Steven Hack wrote:
>
> > The problem is that a tech like nano is going to have very
> > general, very pervasive effects very quickly;
>
>I don't think so; there are several orders of magnitude difference
>between the complexity of a Fine Motion Controller and an Assembler
>Arm, several more orders of complexity to get to nanorobots,
>and several more to get to nanorobot systems.
I was referring to the general technology having some rapid applications in
the real world that would have significant effects on society.
>Unless you can make a very strong assertion that something like
>molecular electronics enables an up-evolving AI that can rapidly
>crack the complexity hurdles, nanotech is only going to impact
>society over a 10-20 year period.
I view a ten-to-twenty-year period as a short time, especially compared to
the notion that social forces are going to somehow move more quickly than
that - I don't think so. Unless, of course, the technology is applied to
that end.
> > the same is likely to be true of "true" AI,
>
>*If* you make the assertion that there aren't complexity "ceilings"
>or evolutionary "dead-ends". If intelligence turns out to be
>like the process of solving systems of linear equations, then
>we may have one method with very fixed properties for quite
>a long time before a completely different approach from another
>branch of mathematics provides a better solution.
Do we have any evidence that this is the case? And remember Drexler's comment
that we don't need to crack the complexity - we just need to duplicate it...
> > vastly extended lifespan, and other major advances.
>
>The problem with vastly extended lifespans is that it may very
>well be that to know for *sure* whether your life-extending technologies
>will really "work", you are going to have to do atomic-level
>simulations of entire organs for very long periods of time.
>We aren't going to have the computational capacity for doing
>that any time in the near future.
What is the "near future"? Others have suggested that by 2020 the
complete body-wide simulation of all cellular reactions will be
feasible. And why do we need atomic-level simulations to determine
longevity probabilities?
>I'd limit the fully developed nano-entities to 4 things:
>mass, energy, "nano-scale system designs", and the
>means to turn the mass into the systems. I'd tend to
>agree that the only thing worth trading is information.
Why leave out computational capability? Granted, it is constructible, so
perhaps it is not a primary; but it is a requirement for development, so I
would suggest it is close enough to a primary requirement to be
included. Certainly some degree of computational capacity has to be a
given, IMO.
>Whether we still have an "economy" will depend on how far
>down the path we go towards a hive mind mentality. Our
>bodies and brains presumably have "economies" for the
>allocation of resources -- whether there is an "external"
>economy depends on the extent to which "we" grow to encompass
>everything worth "exchanging".
>
>Robert
>
My point is that a fully developed nano-entity has no need for "allocation
of resources" - at least not as we understand it. Without biological
death, and with the nano ability to construct anything at will from cosmic
resources, the only reason to construct anything will be the pursuit of goals
we cannot now imagine. Granted, it is possible that some of these goals
may require cooperation between such entities, but it is not certain that
this cooperation will need to be "traded for" - it might be freely given if
the goal is considered desirable by all the entities concerned.
Human economics is a direct result of human nature, i.e., of human physical,
social, and cultural evolution. There may be such a thing as "posthuman
economics", but I have yet to determine what it might entail, other than the
exchange of information.
I don't think a "hive mentality" is at all likely, either. I suspect that
posthumans will be just the opposite - absolute individuals with no need
for social interaction as we know it, but perhaps with the capacity for
extremely intimate social relations when desired or necessary. I believe
posthumans will be post-social, post-economic, post-political, and
post-biological. The full development of this may take some decades beyond
the initial Singularity, but I believe that with brains operating a million
times faster than human brains, it will not take long. I predict
that by the end of this century, humans will no longer exist - either
because they have destroyed themselves with aspects of the technology,
because they have all transcended (voluntarily or otherwise), or because
they have challenged the posthumans and been destroyed. The only other
likely possibility is a mix of all of the above - some destroy themselves,
some transcend, some are destroyed, and perhaps others merely muddle along
as before.
Richard Steven Hack
richardhack@pcmagic.net