From: Omega (omega@pacific.net)
Date: Sat Jan 25 1997 - 00:47:39 MST
Eugene Leitl wrote:
> I know that. It's just that I don't believe that an exponential (or
> hyperbolic) curve describes future developments adequately.
> (I wish it did, since liquid nitrogen is _very_ cold, and my belly
> (emotion) does not buy everything my head (ratio) tells it about
> identity conservation). The whisky is good, but the steak is substandard.
Yes, the belly does not buy what the head tells it. A very significant,
and IMO greatly underrated, consideration when humans are talking about
either cryonics or anything even remotely related to trans-humanism.
> > him. Decades I would agree with (if the number of them were low)
> > but, IMO, not centuries.
> >
> > There is also the possibility that technological development may
> > end up being hyper-exponential (with time) when "qualitative" and
> > "non-continuous" breakthroughs (e.g. as in that often mentioned
>
> Oh yes, Singularity is a nice concept. Saturation sounds much less
> interesting. And developmental discontinuities (plateaus), caused
> by JIT-unavailability of next-generation technologies, sound outright
> grim. So I'm a killjoy. Sue me. <snip>
>
> Yes, there's a lot of things happening. However, I think that
> superintelligent machines are the key, and these will need truly
> impressive hardware, which will need oodles of cash and manpower for R&D,
> which industry might be unwilling (local optimization) and the state
> unable to spend, and a couple of decades to hatch (wet neurosci results,
> molecular manufacturing, macroscopic von Neumann probes, etc.), which
> simply is not happening.
>
> Maybe we need better PR.
I agree only to the extent that the things you mention reduce to human
behavior, which is what I consider to be, by far, the biggest factor in
this whole process, but not with what you characterize as the hardware
requirements.
> > Maybe it won't in fact be hyper-exponential, but I feel that your
> > conclusion still falls under what I would call the default way
> > of estimating the future that Russel describes as flawed. What
> > do you think, do I have the beginnings of a compelling case?
>
> Eliezer makes a very good point, one which maestro Vinge described
> in the blooming of the Blight at the beginning of AFUTD: each subsequent
> second grew longer and longer. Arithmetically, the argument is
> impeccable: a fast machine can be used to build a yet faster machine,
> which in turn... However, remember the reasons why Lem's 'GODs' in
> "Fiasco" were so tiny. Wormhole building is accessible (if at all) at
> very high energies, which take vast structures, which take some time to
> build. There might be even a natural barrier (killjoy, killjoy), the
> grapes might be hanging a trifle too high for natural life (spake Michio
> Kaku in "Hyperspace").
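
The arithmetic here can be made concrete. A minimal sketch (illustrative, my own, not from the post): if each machine generation shortens the time to build the next in proportion to its own speed, growth follows dx/dt = k*x^2, which diverges at the finite time t = 1/(k*x0), whereas a plain exponential dx/dt = k*x is finite at every future time.

```python
import math

# Sketch (not from the original post) of the exponential-vs-hyperbolic
# distinction. Under dx/dt = k*x, x grows exponentially and never
# diverges; under dx/dt = k*x**2, the naive model of a fast machine
# building a yet faster machine proportionally sooner, x blows up at
# the finite time t = 1/(k*x0): a "singularity" in the literal sense.

def blowup_time(x0: float, k: float) -> float:
    """Finite time at which dx/dt = k*x**2 diverges."""
    return 1.0 / (k * x0)

def hyperbolic(x0: float, k: float, t: float) -> float:
    """Closed-form solution x(t) = x0 / (1 - k*x0*t), valid before blowup."""
    assert t < blowup_time(x0, k), "past the singularity"
    return x0 / (1.0 - k * x0 * t)

x0, k = 1.0, 1.0
print(blowup_time(x0, k))      # 1.0: divergence at a finite time
print(hyperbolic(x0, k, 0.5))  # 2.0: doubled halfway to the singularity
print(math.exp(k * 0.5))       # ~1.65: the exponential at the same t
```

Which is exactly why the physical-barrier objection matters: the equation reaches infinity in finite time, but any real substrate stops obeying dx/dt = k*x**2 well before that.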
>
> So relativistic physics sets us a barrier. Building megascale physical
> structures a yet another. Moreover, the future is in flux. We can't be
> certain of anything. We all might be dead or thrown back noticeably by
> 2030, because of a natural or engineered pandemic, a societal breakdown,
> a global war for resources, whatever (Boulding's War Trap, Population
> Trap, Entropy Trap in the "Great Transition" (1964)). The only certain
> thing about the future is that it is uncertain. Trivially true.
I agree that the future may well be uncertain (although the possibility
exists that it may actually be fully deterministic in light of the new
transactional interpretation of QM [a subject for the free-will thread])
but I don't agree with your assessment of the relevant technology.
Wormholes? Relativistic physics? Whoa, I know I mentioned the
possibility of hyper-exponential development, but my original context was
sex changes within the context of nanotech designer life. I agree there
may be fundamental barriers, but I see these as being much further out
than anything involved in simply getting to nanotech and designer life.
I see no reason at all why early trans-human intelligences should press
the limits of physics any more than the human brain itself does.
It's not at all clear to me that developing nanotech even requires
trans-human intelligence, and even if it does, I'm not at all
convinced that the difficulties of getting to trans-human intelligence
are as you present them. I mean if the development of trans-human
intelligence really is going to take:
> truly impressive hardware, which will need oodles of cash and manpower
> for R&D,
then maybe we should consider turning the job over from the computer
science field to the gene splicers and the pharmaceutical industry; at
least there, the starting platform is already one of "human intelligence".
All in all, it seems that the greatest problems are not ones of knowledge,
but those amorphous things we call politics, wisdom, morality, motivation,
and human behavior in general (not to mention trans-human behavior).
-- In the Ecstatic Service of Life -- Omega
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:44:03 MST