From: Mike (mikew12345@cox.net)
Date: Sat Jun 26 2004 - 00:39:18 MDT
> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf
> Of Thomas Buckner
> Sent: Friday, June 25, 2004 10:08 PM
> To: sl4@sl4.org
> Subject: RE: We Can't Fool the Super Intelligence
>
>
> --- Simon Gordon <sim_dizzy@yahoo.com> wrote:
>
> > >The universe is more interesting with us in it.
> >
> > LOL. Really? Well, you could be right, but given that
> > at some point the superAI would be capable of bringing
> > into existence any number of an infinite variety of
> > "designer intelligences", I tend to think that we
> > would only really be interesting from a historical
> > perspective... and is history all that interesting?
> > Maybe to some humans it is, but I doubt whether higher
> > intelligences would be at all bothered about anything
> > like that.
>
> Yes, really. I think you assume that the superAI will be "vast,
> cool, and unsympathetic" to the degree that ve has no concept
> of how humorous our follies and farces are. Even our
> stupidity is interesting. Strange, but true. And yes, I think
> even resurrecting the dead by recreating all possible
> iterations of intelligence (including ourselves) is
> reasonable if the processing capacity is available. To do
> otherwise is (to me) a bit like reproducing half the
> Mandelbrot set while deciding that the other half is somehow
> inferior. I myself have strongly come to suspect that human
> qualia are not intrinsically better than, say, bonobo qualia
> on a good day, or dolphin qualia. Why would a superior
> intelligence deny itself access to other modes if ve had the
> choice? I assert that this falls into the class of things
> people think a SAI might do but that in fact ve would not,
> because ve would know better. There might be useful lessons
> or experiences ve could derive from 'seeing through our
> eyes', so that if ve got rid of us before the resource was
> exhausted, ve would be doing something dumb. I once mentioned
> to a woman acquaintance a bit of data that I gleaned from an
> article in Esquire (a men's magazine). She replied, "I don't
> read men's magazines." I told her that this was unenlightened
> because I have a rule: Never limit your sources of
> information. Now, if I am smart enough to understand that, so
> is any SAI worth ver salt. Ve might run out of uses for us
> eventually, but we can't predict if or when, and initially,
> neither can the SAI. Don't sell us short the way we sell
> other life short, and don't sell SAI short the way you sell
> us short. We are able to destroy other life and intelligence
> because we are smarter, but we do it because we are not smart enough.
>
> Tom Buckner
>
Put this on a much larger scale and see if it still makes as much sense.
How much intelligence is there in a Petri dish culture? Does your
argument still apply? "Don't destroy that specimen in the dish, because
it might contain a unique source of information."

Once the study of a virus is done, the usual practice is to destroy the
sample. If the AI is truly thousands of times more intelligent than we
are, it may not take long for it to decide that it has learned all
that's useful from us.

Additionally, the AI will undoubtedly have many new and interesting
things to occupy its time: things we can't begin to imagine. Human
existence may be as interesting to the AI as the daily routine of a sea
slug is to the average human today.

Unless the AI perceives a real *need* for humans to exist (a need as
basic as the hardware it runs on), the case for removing humans may win
out. We consume more than our share of resources, we foul up the
planet, we fight amongst ourselves, etc. We may have value as another
source of information, but is that enough to balance the equation?
Especially when the AI can run all the simulations it wants, in all the
multiverse permutations imaginable, without the associated impact on the
planet? A nuclear holocaust has value as a source of information, but
that doesn't mean we should have one.

I think our future is shaky unless:
1) humans are essential for the continued existence of the AI, and more
importantly,
2) the AI accepts #1 as true.
Mike W.