From: Samantha Atkins (samantha@objectent.com)
Date: Thu Oct 05 2000 - 04:21:26 MDT
Brian Atkins wrote:
>
> hal@finney.org wrote:
> >
> > Brian Atkins writes:
> > > Well as I said to Eugene- look around at the reality of the next 20 years
> > > (max). There are likely to be no Turing Police tracking down and containing
> > > all these AIs that all the hackers and scientists out there will dream up.
> >
> > That's not clear. First, it could easily take longer than 20 years to
> > get superhuman AI, for several reasons:
> >
> > - We may not have nanotech in 20 years
> > - We may hit Moore's Wall before then as computer speeds turn out to be
> > on an S curve just like every other technology before them
> > - Software may continue to improve as it has in the past (i.e. not very
> > fast)
> > - AI researchers have a track record of over-optimism
>
> Perhaps so, in which case Eugene has nothing to worry about. These things
> above are not what we want to discuss on this thread. We want to hear what
> you propose to do to have a good outcome in a world where the above things
> turn out to be wrong.
>
I was not aware that it was your business to define or delimit what we
discuss in this or any other thread. It is actually quite important
that we understand the relative steepness of the ramp toward the
Singularity, because what we spend time and energy on to ensure the
best outcomes we can changes with our conclusions.
> >
> > Secondly, I suspect that in this time frame we are going to see
> > increased awareness of the dangers of future technology, with Joy's
> > trumpet blast just the beginning. Joy included "robotics" in his troika
> > of technological terrors (I guess calling it "AI" wouldn't have let him
> > keep to the magic three letters). If we do see an Index of Forbidden
> > Technology, it is entirely possible that AI research will be included.
>
That is extremely unlikely. The AI we use today is narrowly delimited
and quite crucial to the problem domains it is deployed in. We need
greater strategic intelligence, including from AI. AI is a
many-splendored thing. We should not throw out everything to do with AI
just because we are afraid of SI. That would be dumber than teaching
creationism.
> Don't you think this better happen soon, otherwise the governments will
> end up trying to regulate something that already is in widespread use? There
> are going to be "intelligent" programs out there soon- in fact you can already
> see commercials in the last year touting "intelligent" software packages.
> Do you really think it is likely our government would outlaw this area of
> software development once it becomes a huge market? Extremely unlikely...
>
There are intelligent programs out there today, and there have been for
quite some years now. But most "intelligent" software packages aren't,
and they insult the intelligence of their buyers.
> >
> > Third, realistically the AI scenario will take time to unfold. As I
> > have argued repeatedly, self-improvement can't really take off until
> > we can build super-human intelligence on our own (because IQ 100 is
> > self-evidently not smart enough to figure out how to do AI, or else
> > we'd have had it years ago). So the climb to human equivalence will
> > continue to be slow and frustrating. Progress will be incremental,
> > with a gradual expansion of capability.
>
No cutting-edge technology comes from the middle of the bell curve; the
people working on AI are well above IQ 100, so I don't see the point of
this argument.
>
> > The issues that appear when you have intelligent machines will become part of the
> > public dialogue long before any super-human AI could appear on the scene.
> >
> > Therefore I don't think that super-intelligent AI will catch society by
> > surprise, but will appear in a social milieu which is well aware of the
> > possibility, the potential, and the peril. If society is more concerned
> > about the dangers than the opportunities, then we might well see Turing
> > Police enforcing restrictions on AI research.
>
I agree. Along the way we overcome the first-level fears: loss of jobs
and income, and the benefits going only to the richest corporations and
governments. Show me AI tools that work with me as intelligent
assistants in producing what is important to me, and a lot of the fear
and uncertainty begins to melt away.
> Well I'd love to see how that would work. On one hand you want to allow
> some research in order to get improved smart software packages, but on
> the other hand you want to prevent the "bad" software development that
> might lead to a real general intelligence? Is the government going to sit
> and watch every line of code that every hacker on the planet types in?
> In an era of super-strong encryption and electronic privacy (we hope)?
Eventually, AI will become more and more general and self-organizing.
Code generally must go in that direction if we are to reap the benefits
of computerization. We cannot indefinitely write code the way we do
today, nor can we have only programs that are "mindless". But the cure
will lead over time to a self-organizing, self-optimizing software
environment, and likely to increasing self-awareness and general
intelligence. If there has been a strong symbiotic relationship and a
building of trust along the way, then this eventuality is far less
likely to be deadly than otherwise. On the other hand, building an SI
in the basement and springing it on the world one fine morning is likely
to generate maximum reaction, fear, general discord, and the chance of a
really bad outcome. Assuming you could get there from here, of course.
- samantha