Re: Topology of an SI AI (Was: Re: extropians-digest V7 #341)

From: Max M (maxmcorp@worldonline.dk)
Date: Mon Dec 16 2002 - 00:50:52 MST


Mark Walker wrote:

> If some subject
> S verifies the truth of some theory or proposition P, then S must understand
> P. In other words the dogma is that understanding or comprehending P is
> necessary for verifying the truth of P. A simple example shows this dogma
> false: S is a child aged 7. She asks her physicist mother if the proposition
> 'E=mcc' is true. Mum says yes. S has verified that P but has very little
> comprehension of what P means.

Asking mother about the truth, in the best tradition of science ;-)

> I agree in general with the idea of testability. It seems extremely
> improbable to me, but I think we should bear in mind the empirical
> possibility that higher intelligences cannot be created.

But I was not questioning whether a "higher intelligence" could be
created. That is obviously possible. Simple improvements that could be
suggested and understood already today:

     - Total photographic recall, of both senses and feelings

     - Perfect math skills, knowing results just by thinking of problems

     - Improved logic, a built-in expert system for applicable problems

     - Mood improvements, creating "a lust to know"

But these are all pretty banal compared to how much more intelligent we
could imagine an SI to be.

I was wondering what it would be like to be a higher intelligence. How
their lives would be, compared to ours. I wanted some examples of why
being posthuman is better than being human. I wanted a reason to become
posthuman.

In that process I want to "understand" Super Intelligent posthumans. And
I am not sure that the "we will understand them as little as a dog
understands us" argument applies.

It is not hard to imagine that there is a sharp divide between animal
intelligence and human intelligence. But perhaps a super intelligence
isn't capable of understanding anything more than we can, given enough time.
So perhaps, given enough time, brains, and good ideas, we can understand
what it is like to be an SI.

However, I don't really believe that I can understand a posthuman, if it
is Super Intelligent, but I do believe that it is in the best scientific
tradition to try. Even if it's difficult, bordering on the impossible.

I don't want proof that it can be done. Just some eureka! insights will
do. ;-) I need better arguments.

-- 
hilsen/regards Max M Rasmussen, Denmark
http://www.futureport.dk/
The future, science, skepticism and transhumanism
