Eliezer S. Yudkowsky wrote:
> I don't see any grounds for believing in a "difficulty" that will
> prevent a nanocomputer with a million times the raw computing power
> of a human from being at least as much smarter than humans as humans
> are than chimpanzees,
I agree completely... if the AI has that nanocomputer to run on.
> or in a difficulty that prevents an AI running over the Internet
> from being intelligent enough to use rapid infrastructure to
> recompile the planet's mass and upload the population.
So where's that rapid infrastructure? I can see a very frustrated AI wasting years teaching these dim Gaussian humans how to make the tools to make the tools to make the processors the AI *really* wants. Even with robotic tools, a lot of human interaction would be needed, unless the nascent AI can earn (or steal) enough to buy a *lot* of robots. If Elisson can't turn itself into a crack nanotech and robotics designer with limited resources, the ramp-up will be limited.
> Whether difficulties occur after that is something of a moot point,
> don't you think?
...and I agree with that too. Too many visions of nanotech are childish wish-fulfillment fantasies. Sure, you could make 3D video wallpaper, but why bother when the (properly processed) video signal could simply be injected directly into the viewer's optic nerve? Assuming the viewer still has anything as outdated as jelly eyeballs and optic nerves, of course.
-- Doug Jones, Freelance Rocket Plumber