> ...it seems so scary (as of this point in time) because it's untried; like
> aviation was during the early 20th century.
>
> Mitch
That analogy works well enough for me.
The fact that AI may (or may not) share our fears doesn't make it less scary,
does it?
Oh, well. Optimizing the search for synthetic sentience requires the capacity
to detect sentience, obviously. Does it take super sentience to detect super
sentience, or is that the name of some stupid song... by "Rabbi Loew and The
Emotionally Peculiar Prague Puppets" τΏτ
Human-competitive machine intelligence, as several experts have noted, shall
become the holy grail of the next few years (or decades, whatever), and it
shall become the most expensive commodity of the last ten minutes of the next
few years (or...). So, to maximize the results of extropian effort, put as
much mainstream money into machine intelligence as you can manage, and put as
much machine intelligence into mainstreaming extropian effort as you can
afford. Of course, if we had a human-competitive AI, it could help solve this
problem for us, and we wouldn't need to worry about it quite as much. So, from
a circular-logic standpoint, the first and most important task is to evolve
human-competitive machine intelligence. Not everyone favors circular logic.
Outside this box, biotechies make magic with neural models. Dozens of teams all
over the planet are working night and day to pop out a real live
human-competitive machine intelligence. The stakes are so high, it's hard to
remember that this is very stale news.
From: "Ben Goertzel" <ben@goertzel.org>
> It may well wind up that we want to build computers with a kind of
> "intelligence ceiling", computers that ~don't~ become vastly
> superintelligent precisely because they're more useful to us when their
> intelligence is at a level that our problems are still interesting to them.
Religion has served this purpose (preventing children from thinking for
themselves) for thousands of years, Ben. What if the Mormons (or worse yet,
the Scientologists) build the first AI... will they indoctrinate it with their
intelligence-ceiling memes? Why not? They already hobble their own kids with
superstitious memes.
> This is reminiscent of the situation in "A Fire Upon the Deep", in which not
> all civilizations choose to transcend and become vastly superintelligent...
Yeah, I get it. Sort of like Democrats, huh? (SI ain't democratic.) τΏτ
> Or one can imagine "bodhisattva AI's" that become superintelligent and then
> stupidify themselves so they can help us better.
Or one can imagine Moses stupefied himself when he got the Ten Commandments.
(God only wanted to give him one commandment, but since they didn't cost
anything... OK, dumb joke.) A real bodhisattva would tell you to become one
yourself instead of trying to build an artificial one. Does it make sense to
build a super sentient machine instead of experiencing superlative sentience?
Buddha reportedly told his cousin Ananda, "Be a light unto yourself." That was
twenty-five centuries ago, and it's still the best advice.
> When I was younger and even more foolish, I used to have a theory that I was
> an all knowing all powerful God who had intentionally blotted out most of
> his mind and made himself into a mere mortal, just because he'd gotten tired
> of being so damn powerful ;>
I had exactly the same theory. I'll bet many of us did. Of course, sooner or
later you'll remember who you are, and then we'll all know the power of
awakening a foolish god.
BTW, some folks are still trying to get the Net to transcend.
"The web knows. It knows everything. The web is god."
--Spike
--J. R.
Useless hypotheses:
consciousness, phlogiston, philosophy, vitalism, mind, free will, qualia,
analog computing, cultural relativism
Everything that can happen has already happened, not just once,
but an infinite number of times, and will continue to do so forever.
(Everything that can happen = more than anyone can imagine.)