Robin Hanson wrote:
> Notice that under this conception it makes little sense to imagine a
> computer in a small lab suddenly breaking through to super-intelligence.
> Intelligence increases would be the result of breakthroughs all across the
> globe, and the compute cycles in any one lab would only be a very small
> fraction of that total.
Because human brains don't use autocatalytic positive feedback loops to
self-optimize, they can't suddenly break through to super-intelligence (SI).
Consequently, it makes more sense to imagine SI emerging in a humanoid AI, a
cyborg, or a purely artificial-life environment that *does* use evolutionary
and genetic algorithms capable of self-optimization.
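
For concreteness, here's a minimal sketch (my own illustration, not anything
from Hanson's post) of the kind of self-optimizing loop I mean: a toy (1+1)
evolution strategy whose mutation step size is itself mutated and selected,
the standard log-normal self-adaptation trick from the evolution-strategies
literature. The sphere objective and all parameter values are arbitrary
assumptions chosen for the demo.

import math
import random

def sphere(x):
    # Toy objective: minimize the sum of squares.
    return sum(v * v for v in x)

def self_adaptive_es(dim=10, generations=200, seed=0):
    rng = random.Random(seed)
    parent = [rng.uniform(-5, 5) for _ in range(dim)]
    sigma = 1.0                    # mutation step size, itself evolved
    tau = 1.0 / math.sqrt(dim)     # learning rate for the step size
    best = sphere(parent)
    for _ in range(generations):
        # The optimizer mutates its own optimization parameter first:
        child_sigma = sigma * math.exp(tau * rng.gauss(0, 1))
        child = [v + child_sigma * rng.gauss(0, 1) for v in parent]
        fit = sphere(child)
        if fit <= best:            # (1+1) selection: keep the better one
            parent, sigma, best = child, child_sigma, fit
    return parent, sigma, best

if __name__ == "__main__":
    _, sigma, best = self_adaptive_es()
    print(f"best fitness {best:.6f}, final step size {sigma:.6f}")

The point is that the step size, a parameter of the optimizer itself, rides
along in the selection loop, so the system tunes *how* it improves while it
improves. That is the feedback loop brains lack.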
--J. R.