From: Eugen Leitl (eugen@leitl.org)
Date: Sat Dec 14 2002 - 07:42:20 MST
On Sat, 14 Dec 2002, Max M wrote:
> I have noticed that this is a normal viewpoint for transhumans. I have it
> myself to some degree. But after thinking about it, it has really
> started driving me up the wall.
If it drives you up the wall, don't think about it, then ;)
> The argument goes: "After the singularity, we will be so intelligent and
> different that we will be impossible to understand for mere humans".
It's not an argument. It's a statement. It's not even entirely accurate,
because some beings will be _less_ than a human.
> What I hate about it is that it is exactly the same kind of mysticism
> that religions use. It's dogmatic and unscientific. It cannot be tested
That's okay. Science is not about culture. No amount of chimp science is
going to let a chimp predict the superchimps' Ulysses (the space probe,
not the book).
> in any way. It's no different than saying that "after death we will all
> go to heaven." Which cannot be tested either.
>
> Perhaps we can't understand it, but we should at least try to. We should
> at least have some theories that could be tested.
There are some aspects of future existence which might or might not be
predictable. In fact, we spend a lot of time on this list pushing the
envelope of predictability (to put it less politely, shooting the bull).
> I know that there are ideas about Borganisms vs. Jupiter brains, but
> after that there are hardly any theories. These cannot be the only two
> topologies that are possible.
I don't speculate in topologies. But there are some constraints of
computational physics and physics in general which make some things appear
more likely than others. The kind of substrate alone doesn't say much
about the culture inhabiting it, though.
> If we don't develop some theories that are plausible and testable, we
> really have a poor argument for wanting to develop superintelligence.
Who wants to develop superintelligence? Not me. I don't even know what
'superintelligence' is.
> I don't think that "We want to become super intelligent, because then we
> will experience what it is like to be super intelligent." is a good
> enough argument.
It's not an argument. It's a statement. And an obviously true statement,
imo.