From: Brian Atkins (brian@posthuman.com)
Date: Mon Jun 17 2002 - 18:06:09 MDT
Harvey Newstrom wrote:
>
> While a super-smart AI invents stuff every second, humans and physical
> reality will become the bottleneck to deployment. We simply can't risk
> deploying untested technology without a backup plan. A super-fast AI
> won't want to destroy its own planet either, and it should understand
> the scientific method and requirements for testing and verification of
> technology in the real world after simulations show that something
> should work.
>
> This is not a neo-Luddite viewpoint requiring us to slow down the
> deployment of technology. Quality control, security, and planning have
> always slowed down major projects. It has always been possible to speed
> up production by ignoring safety, security and quality.
>
I'm glad you added the not-neo-Luddite disclaimer, because frankly I was
beginning to wonder (and still am, a bit) whether transhumanism is developing
a major split as technology accelerates and these abstract discussions become
more relevant to reality.
However, I don't think what you're saying there makes complete sense. It's
the part about the "super-smart" AI that bugs me. If an AI has grown into
superintelligence, then quite likely it is capable of constructing enough
computronium to fully /emulate/ the whole planet if necessary, letting it
test out its new tech ideas much more quickly than realtime.
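To make the "faster than realtime" intuition concrete, here is a minimal
back-of-envelope sketch in Python. Every number in it is a placeholder
assumption of mine, not an estimate from this thread or from Kurzweil; the
only point is that the speedup is just the ratio of compute available to
compute required.

  # Toy back-of-envelope: how much faster than realtime could an
  # emulation run? Both figures are made-up placeholders, chosen
  # only to illustrate the ratio, not serious estimates.
  ops_per_sim_second = 1e30        # assumed cost of emulating the system for one simulated second
  ops_available_per_second = 1e33  # assumed computronium throughput

  speedup = ops_available_per_second / ops_per_sim_second
  print(f"simulated seconds per real second: {speedup:,.0f}")
  # With these placeholders, each real second buys 1,000 simulated
  # seconds; the claim above reduces to whether this ratio exceeds 1.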
I also don't see any real basis for your comment that "everything will take
10 times longer than the claims on this list." Ray Kurzweil's estimate of
2029-2049 for a "slow Singularity" is, I think, rather conservative. If you
see things taking significantly longer than that, I'm curious why.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/