From: Anders Sandberg (asa@nada.kth.se)
Date: Fri Aug 28 1998 - 12:31:04 MDT
Doug Bailey <Doug.Bailey@ey.com> writes:
> 1 ruble to anyone who can utter that sentence aloud in one breath. :)
Given its current plunge, I doubt it would be a fair deal :-)
> Seriously, I sympathize with Anders' comments, in part. One of the most
> difficult adjustments I've had to make when thinking about how the things
> we talk about on this list will play out in the context of society is
> abandoning fundamental aspects of the present. Some of the people on
> this list have been thinking deeply about the future for many years and
> are much more adept at "future thinking" than relatively new list members.
> I've only been exposed to transhumanist topics for about a year now and
> while I've come a long way, there's much room for improvement. I think a
> healthy amount of tolerance and patience should be exercised by the
> senior list members when dealing with some of the naive notions of the
> newest members. Efforts should be made to help the new members "future
> think".
Yes, this is true (and I do hope I have taken the hint). There is a
dilemma in thinking about the future in a transhumanist way. On one
hand we allow ourselves to explore *all* possibilities, including
those which radically break off from the old ways (changes of the
human condition, abundance instead of scarcity, SIs taking over the
world and keeping extropians as art objects), on the other hand we try
to remain rational and critical, firmly grounded in facts and known
rules. If we explore all the weird and wonderful possibilities, we can
easily lose connection with reality; if we concentrate too much on the
facts and rules, we can get bogged down in reasoning like "throughout
history people have died, hence people will always die". What is
needed is to temper one with the other.
Nanotechnology is a great example of what happens when you do this
well: based on known physics Drexler et al have shown the possibility
of nanodevices and placed some constraints on their capabilities. From
this, inferences can be drawn that suggest some possibilities
unexpected to most people.
Doing this is hard. But definitely worth it. We just have to help each
other balance in the rational region where we are "cautious and
conservative" in the transhuman way.
> For example, the issue that arose in the Sentism thread about ultra-wealthy
> individuals "hogging" all the transhuman technologies for themselves is a
> classic example of projecting the current socio-economic realities of our
> capital-scarce, hard-to-create-gadgets world onto a future with exotic
> technologies. The "have and have-not" aspect of human society has existed
> since people could realize that they had things. It's hard to emerge out
> of that box and conceive of a world where there might not be classic class
> rivalry, wealth disparity, and so forth. People have a hard time wondering
> how we could get from a world like we have now to such a world. People are
> skeptical of utopian predictions.
With good reason. Utopias tend to be either advertisements for the
latest ideology, or fictions with little connection to reality.
In the above example, it is by no means clear that wealth disparities
will vanish even with piles of useful nanotech and AI. It might turn
out that everyone is equally rich when it comes to matter and energy,
but rare things like true genius and wisdom have a high price and
people with them can also buy more services of others than "ordinary"
people. Maybe true equality is only possible when all limits are
arbitrary.
However, I agree with your evaluation of the have/have-not way of
thinking. It is really an extension of the myth of the world as a
zero-sum game. In reality the world is a positive sum game, and it
doesn't really matter if somebody has a higher score than another as
long as they both are well off and can increase their scores.
> One last note, I'm not so sure about your "inventions are usually made by
> solitary geniuses with no outside support".
[Your later retraction noted; I'll remember that when I next post
something stupid in the middle of the night :-) ]
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:49:31 MST