From: Johnicholas Hines (johnicholas.hines@gmail.com)
Date: Tue Feb 03 2009 - 15:43:53 MST
On Tue, Feb 3, 2009 at 4:22 PM, John K Clark <johnkclark@fastmail.fm> wrote:
> Yes, I am rather confident that a Jupiter brain won't be stupid and that
> we won't be able to outsmart something smarter than we are.
The above message is an example of one-dimensional thinking about
intelligence. Modeling the differences in capability between humans,
other species, and non-living processes such as evolution along a
single dimension is probably useful, but I think we (sl4 denizens) do
it too much.
It should be easy to imagine two entities whose capabilities cannot be
strictly ranked along a single dimension. For example, one entity
could beat the other at chess, while the second could beat the first
at checkers.
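To make that concrete, here is a minimal sketch (all names and scores
are hypothetical, not from anything above): once you track more than
one capability dimension, "smarter than" becomes a partial order, and
two agents can each fail to dominate the other.

    # Minimal sketch: capability comparison as a partial order.
    def dominates(a, b):
        """True if a is at least as capable as b on every dimension
        and strictly better on at least one."""
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    # Hypothetical capability vectors: (chess skill, checkers skill)
    alice = (1800, 900)
    bob = (1200, 1600)

    print(dominates(alice, bob))  # False
    print(dominates(bob, alice))  # False - neither strictly outranks the other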
I would argue that, in practice, most (narrow) AI researchers
implicitly use mental models of multiple capabilities (and of
capability growth during engineering). These mental models predict
that turning the system off will remain a viable option.
People who believe that a hard takeoff of AI is likely, and dangerous,
are usually using one-dimensional models of capability and capability
growth.
I think developing safety protocols for Friendly AGI research means
(among other things) building explicit (and tested and verified)
models of capability growth - probably models with more than one
dimension.
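As a rough illustration of what an explicit, testable model might look
like (the dimension names and growth rates below are assumptions I am
inventing for the example, not a proposal): track each capability
dimension separately and project it forward, rather than projecting a
single "intelligence" scalar.

    # Minimal sketch: a multi-dimensional capability-growth model.
    # Per-step multiplicative growth rate for each (hypothetical) dimension.
    GROWTH = {"chess": 1.10, "checkers": 1.02, "self_modification": 1.30}

    def project(capabilities, steps):
        """Project each capability dimension forward independently."""
        out = dict(capabilities)
        for _ in range(steps):
            out = {dim: level * GROWTH[dim] for dim, level in out.items()}
        return out

    start = {"chess": 100.0, "checkers": 100.0, "self_modification": 1.0}
    print(project(start, steps=10))

Such a model could then be tested against observed engineering
progress, dimension by dimension, instead of being argued about in the
abstract.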
Johnicholas