Re: Gattaca on TV this weekend

From: Harvey Newstrom (mail@HarveyNewstrom.com)
Date: Sun Jun 16 2002 - 12:20:31 MDT


On Sunday, June 16, 2002, at 05:06 am, Anders Sandberg wrote:
> I'm seriously
> worried that transhumanism has plenty of assumptions held by many people
> that are not firmly founded on good evidence or at least careful
> analysis. If we don't continually question and refine our assumptions,
> we will end up living in a fantasy world.

I am extremely concerned about this. I would say that most of the
predictions made on this list are unreasonable. I believe that most of
the theories that we discuss are possible, but they will probably take
ten times longer than people are predicting. The problem isn't so much
having the technology to do what we want; it's figuring out what we
should do with it. We have complete control over computers, yet we still
can't keep them from crashing. That isn't because the technology is
missing; it's a management and predictability problem. Full control over
something doesn't mean we will have a full understanding of all its
possible ramifications.

> Here is another assumption which I think it is worth questioning: that a
> fast transformation is desirable.

I would say definitely not. Every new invention goes through a long
period of evolution on its road toward improvement. No matter how good
a concept is, there are always further refinements or ramifications that
weren't originally foreseen. We need to grow by small incremental steps.
I would never upgrade my computer without backing it up first.
Similarly, I would never upload my brain without a way to get it back,
or change my genome without a way to revert my DNA. Changing fast
is not bad per se, but we need to take time to evaluate new
technologies before we commit to them in real life.

> On the other hand a very fast development would mean that we reach
> powerful levels of damage potential fast - even if you develop safety
> systems first they might not have been fully distributed, integrated and
> made workable when the truly risky stuff starts to be used. Just look at
> software today - imagine the same situation with nanoimmune systems or
> AI.

This is my concern. Evolution has taught us that diversity and
distribution are important. No new change should be implemented across
all humans or all societies at once; otherwise an unforeseen disaster
would strike all of humanity at the same time. I think a good safety
mechanism would be to colonize the asteroid belt, the Kuiper belt, or
the Oort cloud. Each of these little worlds could then experiment with
new technologies on its own. Disasters would be slow to spread and
easier to isolate.

> I wonder if the singularity really ends the window of vulnerability.
> Maybe it just remains, giving whatever superintelligences are around
> nervous ticks.

The paperless office never materialized, because computers brought their
own set of problems. The promise of free energy never materialized,
because electricity and nuclear power have their own problems. Robots
still aren't doing my laundry, and I still don't have a flying car, even
after the year 2000. Promises to end all problems always fail. We simply
move on to bigger and better problems as we attempt bigger and better
challenges.

--
Harvey Newstrom, CISSP <www.HarveyNewstrom.com>
Principal Security Consultant <www.Newstaff.com>
