Topology of a SI AI (Was: Re: extropians-digest V7 #341)

From: Max M (maxmcorp@worldonline.dk)
Date: Sat Dec 14 2002 - 06:33:29 MST


Eugen Leitl wrote:

> Immortal implies posthuman. Soon, at least. So, I don't think you'll have
> to worry. We might be monsters; but at least we'd be monsters way beyond
> of what mere humans can grasp.

I have noticed that this is a common viewpoint among transhumanists. I
hold it myself to some degree. But after thinking about it, it has
really started to drive me up the wall.

The argument goes: "After the singularity, we will be so intelligent and
different that we will be impossible to understand for mere humans".

What I hate about it is that it is exactly the same kind of mysticism
that religions use. It's dogmatic and unscientific. It cannot be tested
in any way. It's no different from saying that "after death we will all
go to heaven," which cannot be tested either.

Perhaps we can't understand it, but we should at least try to. We should
at least have some theories that could be tested.

I know that there are ideas about Borganisms vs. Jupiter brains, but
beyond that there are hardly any theories. These cannot be the only two
topologies that are possible.

If we don't develop some theories that are plausible and testable, we
really have a poor argument for wanting to develop superintelligence.

I don't think that "We want to become superintelligent, because then we
will experience what it is like to be superintelligent" is a good
enough argument.

Any objections ;-)

-- 
regards Max M Rasmussen, Denmark
http://www.futureport.dk/
The future, science, skepticism and transhumanism


This archive was generated by hypermail 2.1.5 : Wed Jan 15 2003 - 17:58:44 MST