From: Damien R. Sullivan (phoenix@ugcs.caltech.edu)
Date: Wed Jul 22 1998 - 19:24:57 MDT
Robin wrote:
> Damien has an essay on the topic at:
> http://www.ugcs.caltech.edu/~phoenix/vinge/antising.html
God's claws, I'd forgotten about that. And I'm not even ashamed of it, after
2 years.
On Jul 22, 11:19am, Eugene Leitl wrote:
> doesn't it seem to be a bit presumptuous the bipedal ape's position on
> the smartness scale can't be topped by similar increases? Our senses
No more so than assuming that no computer language will be more powerful in
capability than the ones we have. In Hofstadter's terms, assuming that FlooP
is the top language, and that there is no GlooP. Assuming that
Turing-completeness is, well, complete, as far as ways to do computation go.
With quasi-Darwinism for creativity.
I forget what the proofs in this area actually state. But to me an
incomprehensible SI means assuming something beyond Turing-completeness.
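To make Hofstadter's distinction concrete, here is a sketch in Python (my
framing, not his; he used his own toy languages): BlooP allows only loops
whose bound is fixed in advance, FlooP allows unbounded loops, and that one
addition is all Turing-completeness needs. The Ackermann function is the
classic total function that provably can't be written in BlooP:

# The Ackermann function grows faster than any primitive recursive
# (BlooP-style) function, so it needs FlooP's unbounded recursion.
# A GlooP would have to compute something no FlooP program can, such
# as the halting function -- which is just what Church-Turing denies.
def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61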
> Strange, I really have trouble believing in an SI I can understand to
> a meaningful extent. If I could, it wouldn't be an SI, or I would be
> its peer. It could be governed by some simple laws (the degenerated
Perhaps we have different definitions of SI. I think of Della Lu, or a
Culture Mind, or a greedy polis. Bigger and faster, but understandable, both
in principle and (eventually) in detail. The detailed understanding might be
obsolete when it came, like an 8086 trying to decrypt keys generated by a 786
and changed every day, but it would still be understanding. But the Mind is
still a Super Intelligence.
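To put rough numbers on that image (every figure below is an assumption for
illustration, not a benchmark):

# Back-of-envelope: how long the slow machine needs to brute-force a key.
# Assumed numbers: ~1e6 key trials/second for the old chip, a 56-bit key.
trials_per_second = 1e6
key_space = 2 ** 56
seconds_per_year = 3.156e7
years = key_space / (trials_per_second * seconds_per_year)
print(f"~{years:,.0f} years")  # roughly 2,300 years
# Against a key that changes daily, the answer is correct but useless
# by the time it arrives: understanding in principle, obsolete in detail.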
> magickal (it could or could not), but we wouldn't be able to tell
> which way it would turn out to be. The Singularity is a developmental
> _prediction horizon_, after all.
People in 1798 couldn't predict how 1898 would come out. 1898 couldn't
predict how 1998 came out. On the other hand, Franklin thought of cryonics.
The prediction horizon Vinge worries about is not being unable to predict how
things will turn out, but being unable to imagine possibilities at all. It's
the business of SF writers to be wrong about their projections; an SF writer
is in trouble when they can't project. Robin and I say there _are_ boundary
conditions we can predict, and lots and lots of possibilities we can imagine,
and if we play enough we may well get close (without knowing it ahead of time)
to whatever actually happens, and whatever does happen will very likely be
explainable to us.
Because we can play universal Turing machine, which can compute anything computable.
Unless we run out of memory.
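A minimal sketch of what "playing Turing machine" means, in Python; the
machine at the bottom is made up for illustration (a unary incrementer),
not anything canonical:

# Minimal Turing-machine simulator: a few lines of bookkeeping suffice
# to step through ANY machine's transition table, which is the sense in
# which we can "play universal Turing machine" -- limited only by the
# memory backing the tape.
from collections import defaultdict

def run(rules, tape, state="start", pos=0, max_steps=10000):
    cells = defaultdict(lambda: "_", enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, cells[pos])]
        cells[pos] = write
        pos += {"R": 1, "L": -1}[move]
    return "".join(cells[i] for i in sorted(cells))

# Example machine (invented for this sketch): append a '1' to a unary
# number by scanning right to the first blank cell.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run(rules, "111"))  # -> "1111"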
> 'Sufficiently advanced technology is indistinguishable from magic'?
Not an axiom in my world.
> > magical SI through AI seems a bit incoherent; you exploit the
> > Church-Turing thesis, then assume the result does something beyond
> > Turing-completeness...
Strong AI, which is tied up in the concept of the Singularity, assumes
Turing's worldview and proofs. To assume that the self-modifying AI can
become something we are intrinsically incapable of understanding is to assume
that there's something beyond Turing-complete languages and UTMs. I don't
know if I could prove this to be thoroughly inconsistent, but it seems
slightly incoherent to me. Or inconsilient, to adapt E.O. Wilson's word.
Or: if the computer can emulate me, I can emulate the computer.
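A toy illustration of that symmetry, with the obvious caveat that it's a
sketch, not a proof: the few lines of Python below interpret Brainfuck, and
since Brainfuck is itself Turing-complete, a (vastly longer) Brainfuck
program could in principle interpret Python right back. Neither side is
intrinsically beyond the other.

# Python emulating Brainfuck; the reverse emulation exists in principle.
def brainfuck(code, tape_len=30000):
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    jump, stack = {}, []
    for i, c in enumerate(code):     # precompute matching brackets
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jump[i], jump[j] = j, i
    while pc < len(code):
        c = code[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == "[" and tape[ptr] == 0: pc = jump[pc]
        elif c == "]" and tape[ptr] != 0: pc = jump[pc]
        pc += 1
    return "".join(out)

# 8*9 = 72 ('H'), then +33 = 105 ('i'):
print(brainfuck("++++++++[>+++++++++<-]>." + "+" * 33 + "."))  # -> Hi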
-xx- Damien R. Sullivan X-)
For God made Ewan Gillies
God gave him wings to fly
But only from the land where he belonged.
But I'd fight with God himself
For the light in Ewan's eye
Or any man who tells me he was wrong.