From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Fri Jun 20 2008 - 06:24:04 MDT
> I think this violates the criterion Matt gave originally "It would
> also not include simulations where agents receiving external
> information on how to improve themselves. They have to figure it out
> for themselves."
I admit it wasn't a proper solution, just a crude, simplistic model.
"Figuring it out for themselves" isn't clearly defined, though; I'm
pretty sure we could migrate the model to something more along the
lines of what Matt intended.
But the main weakness is that we are much, much smarter than these
models. I've argued that we have:
1) non-evolutionary RSI for dumb models
2) approximate ways of measuring intelligence for above-human entities
The big hole is the connection between the two: is human-level or
higher-than-human-level non-evolutionary RSI possible? The proof of
that will be in the pudding; in the meantime, do we have many examples
of non-evolutionary RSI at a higher level than my dumb models? (maybe
even a useful one ;-)
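
(For illustration only, and not the model from the earlier post: a toy
non-evolutionary self-improvement loop, sketched in Python, might look
like the following. The agent, its "skill" number, its step size and the
compounding rule are all made up for the example; the point is just that
there is one agent, no population and no selection, and the agent's own
improvement procedure is the thing that gets improved.)

# Toy sketch of non-evolutionary RSI for a "dumb" model.
# One agent, no population, no selection: each round the agent applies
# its current improvement procedure (a step size) to its task skill,
# and the procedure itself also gets slightly better, so later rounds
# of self-modification are more effective than earlier ones.
# All numbers are arbitrary placeholders.

def run_toy_rsi(rounds=20):
    skill = 1.0   # stand-in for task performance
    step = 0.1    # the agent's current self-improvement procedure
    history = []
    for _ in range(rounds):
        skill += step   # apply the procedure to the task skill
        step *= 1.05    # the procedure improves itself a little each round
        history.append((skill, step))
    return history

if __name__ == "__main__":
    for i, (skill, step) in enumerate(run_toy_rsi(), 1):
        print("round %2d: skill = %.3f, step = %.3f" % (i, skill, step))
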
Stuart