From: Brian Atkins (brian@posthuman.com)
Date: Mon Oct 02 2000 - 23:32:11 MDT
hal@finney.org wrote:
>
> Brian Atkins writes:
> > Well as I said to Eugene- look around at the reality of the next 20 years
> > (max). There are likely to be no Turing Police tracking down and containing
> > all these AIs that all the hackers and scientists out there will dream up.
>
> That's not clear. First, it could easily take longer than 20 years to
> get superhuman AI, for several reasons:
>
> - We may not have nanotech in 20 years
> - We may hit Moore's Wall before then as computer speeds turn out to be
> on an S curve just like every other technology before them
> - Software may continue to improve as it has in the past (i.e. not very
> fast)
> - AI researchers have a track record of over-optimism
Perhaps so, in which case Eugene has nothing to worry about. But those
possibilities are not what we want to discuss on this thread. We want to
hear what you propose to do to get a good outcome in a world where the
points above turn out to be wrong.
>
> Secondly, I suspect that in this time frame we are going to see
> increased awareness of the dangers of future technology, with Joy's
> trumpet blast just the beginning. Joy included "robotics" in his troika
> of technological terrors (I guess calling it "AI" wouldn't have let him
> keep to the magic three letters). If we do see an Index of Forbidden
> Technology, it is entirely possible that AI research will be included.
Don't you think this had better happen soon? Otherwise governments will
end up trying to regulate something that is already in widespread use.
There are going to be "intelligent" programs out there soon; in fact,
over the last year you could already see commercials touting "intelligent"
software packages. Do you really think our government would be likely to
outlaw this area of software development once it becomes a huge market?
Extremely unlikely...
>
> Third, realistically the AI scenario will take time to unfold. As I
> have argued repeatedly, self-improvement can't really take off until
> we can build super-human intelligence on our own (because IQ 100 is
> self-evidently not smart enough to figure out how to do AI, or else
> we'd have had it years ago). So the climb to human equivalence will
> continue to be slow and frustrating. Progress will be incremental,
> with a gradual expansion of capability.
Up to a point...
>
> I see the improved AI being put to work immediately because of the many
> commercial opportunities, so the public will generally be well aware of
> the state of the art. The many difficult ethical and practical dilemmas
> that appear when you have intelligent machines will become part of the
> public dialogue long before any super-human AI could appear on the scene.
Public... well aware of the state of the art... bwhahahaha. No, I don't
think so. At any rate, I guess you are talking about a different scenario
here, one without Turing Police preventing development?
>
> Therefore I don't think that super-intelligent AI will catch society by
> surprise, but will appear in a social milieu which is well aware of the
> possibility, the potential, and the peril. If society is more concerned
> about the dangers than the opportunities, then we might well see Turing
> Police enforcing restrictions on AI research.
Well, I'd love to see how that would work. On the one hand you want to
allow some research in order to get improved smart software packages, but
on the other hand you want to prevent the "bad" software development that
might lead to a real general intelligence? Is the government going to sit
and watch every line of code that every hacker on the planet types in?
In an era of super-strong encryption and electronic privacy (we hope)?
--
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.singinst.org/