From: J. R. Molloy (jr@shasta.com)
Date: Sun Jan 07 2001 - 04:15:13 MST
From: "Spike Jones" <spike66@attglobal.net>
> My reasoning goes thus: silicon-based intelligence and carbon-based
> intelligence have different diets and different habitats, so they need
> not compete. I have
> a notion that emotion is seated in intelligence. Super intelligence
> then means super emotions, and so... I hope this is how it works...
> a super AI would love us. It (or they) would see how it (or they)
> and humans could work together, help each other, etc. There
> is no reason why we and Si-AI should be enemies, since we
> can coexist.
I agree. This seems to be a perennial topic central to extropy. The
subject has come up several times in the last few years, and I've
commented that the most compassionate and trustworthy people that I've
known were also the most intelligent.
Of course SI is all hypothetical stuff, but what the hell, let's talk
about it anyway.
I think alien intelligence (I prefer that term to artificial intelligence
because it's *all* artificial once you send children to school) will
become human-competitive in a few more years. Genetic programming has
already yielded patentable solutions to problems, solutions which humans
failed to produce. So, alien intelligence, as manifested in Artilects,
Robo sapiens, Mind Children, or Spiritual Machines, may (no, shall) evolve
to SI soon after becoming 100% human-competitive.
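For readers unfamiliar with the technique, here is a minimal sketch of
the genetic-programming loop behind that claim: a toy symbolic-regression
run that evolves arithmetic expression trees toward a hidden target
function. It is mutation-only for brevity (real GP systems also use
subtree crossover), and every parameter in it (population size, depth
limit, the target x^2 + x) is illustrative, not drawn from any of the
patented results mentioned above.

import random, operator

OPS = [(operator.add, '+'), (operator.sub, '-'), (operator.mul, '*')]

def rand_tree(depth=3):
    # Random expression tree over {x, constants, +, -, *}.
    if depth <= 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.uniform(-2.0, 2.0)
    return (random.choice(OPS), rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, x):
    # Recursively evaluate a tree at the point x.
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    (fn, _symbol), left, right = tree
    return fn(evaluate(left, x), evaluate(right, x))

def mutate(tree, depth=3):
    # Replace a randomly chosen subtree with a fresh random one.
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return rand_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth - 1), right)
    return (op, left, mutate(right, depth - 1))

def fitness(tree, target, xs):
    # Sum of squared errors against the target; lower is better.
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in xs)

def evolve(target, pop_size=200, generations=40):
    xs = [i / 10.0 for i in range(-20, 21)]
    pop = [rand_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, target, xs))
        elite = pop[:pop_size // 4]           # truncation selection
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=lambda t: fitness(t, target, xs))

best = evolve(lambda x: x * x + x)            # hidden target: x^2 + x
print(fitness(best, lambda x: x * x + x,
              [i / 10.0 for i in range(-20, 21)]))

The point is only that a blind generate-and-select loop, given enough
trials, finds solutions nobody designed by hand.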
Since intelligence means the ability to solve problems, an SI would be a
super problem solver (SPS); therefore, if it were real, it wouldn't
pose a problem; on the contrary, it would solve problems. Want to know
how to deal with accelerating technological progress? Ask the super
problem solver.
> Another angle is this: a more advanced civilization has the luxury
> of trying to protect and preserve wildlife. The western world
> does this. Those societies where people are starving have little
> regard for preserving wildlife, eh? So the AI would be a very
> advanced civilization, and we would be the wildlife. Temporarily
> that is, until we merge with the AI.
Well, let's pursue this a bit further. An alien intelligence (or SPS)
would look upon Homo sapiens in ways unimaginable to us, not just with a
view to preservation. In addition, advanced extraterrestrial life
forms have been there, done that. If Homo sapiens have been permitted
(by superior ET intelligence) to flourish on Earth, that in itself
suggests alien intelligence does not threaten human life.
Incidentally, in regard to the Fermi "paradox" about where all the ETs are, I
think such information would be extremely important and valuable. So, why
would they tell us about it? I mean, information that crucial doesn't need
to be shared with mere humans. Pearls before swine, and all that.
> Of course this analysis could be wrong; we just don't know what
> will happen. On the other hand, we *do* know exactly what
> will happen if we fail to develop nanotech and/or AI. spike
Right, developing alien intelligence is by far the lesser evil.
From: <Eugene.Leitl@lrz.uni-muenchen.de>
> There is no silicon-based intelligence now; I doubt there will ever
> be. Silicon doesn't do intricate stable 3d structures very well, so
> it probably has to be carbon.
Good point. That's why I think alien intelligence will come from organic
(biological) genetic programming. Why build something when you can simply
grow it?
> You can't build a super AI, no one is that smart. You can only create
> a seed AI, an AI child, if you wish. If you make it basically human,
> by looking which structures are created during neuromorphogenesis and
> replicate their functions, and rear the AI child amongst the humans,
> it will have similar emotions. Initially. (Unless you broke something,
> and raised a psychopathic AI without knowing it).
I think the psychosis resides mostly in the human brain, which has a
phobia about AI.
> No sir, superintelligence is something qualitatively different.
> The positive autofeedback runaway which you cannot follow soon
> confronts you with something incomprehensible. A supernatural
> force of nature, if you so wish.
Because "super" signifies a quantitative difference, not a qualitative
difference, it makes sense to think of superintelligence as something
quantitatively different from human intelligence. If it's qualitatively
different, then you should call it something other than intelligence, or
at least qualify it as such. A supernatural force of nature that solves
problems sounds like a very friendly entity to me.
> Thank you, but the wildlife is dying just fine, despite the protection.
Right, and Homo sapiens may be killing itself, despite the best efforts of
alien intelligence (which may supersede human life).
> So let's merge with the ants, and the nematodes, and the gastropods.
It's not so much that we merge with the ants and so forth... but notice
that they have merged with us: we have dogs, cats, parrots, etc., for
pets; we have horses, cattle, sheep, etc., as farm animals; we have
cockroaches, ants, dust mites, etc., as parasitic fellow travelers.
Symbiosis prevails on Earth as it probably does throughout the universe.
> Yes, lowtech scenarios are more easily understandable, and none
> of them look very pretty or sustainable.
Nor do lowtech scenarios include the GNR (genetics, nanotechnology,
robotics) that preoccupies futurists and provokes Bill Joy to advocate
his particular brand of relinquishment.
Stay hungry,
--J. R.
3M TA3
=====================
Useless hypotheses: consciousness, phlogiston, philosophy, vitalism, mind,
free will