Billy Brown wrote:
>
> Agreed. You can only build a chain of ungrounded symbols so long, then the
> system starts spitting out garbage. IMO there are at least two limitations
> at work here - you can only handle so many layers of indirection, and you
> have very little working memory with which to perform mental operations.
> Since human memory and logic are fuzzy, overflowing either boundary simply
> produces nonsense output - and since people don't normally have error
> checking at this level, they usually don't realize they aren't making sense.
>
> I would presume you still have the limited-working-memory problem when
> working with complex idea systems. If not, please explain - I've been
> trying to generalize around that one for years now.
>
> Unless they use rationality to divide people from animals. Then people who
> fit their definition of rational, within the limits of available processing
> power, might survive a general extermination. Not a likely scenario, but
> not impossibly unlikely, either.
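
(As an aside on the indirection point quoted above: here is a toy way of putting it in code. Everything in it is made up for illustration - the symbols, the WORKING_MEMORY limit of four - and it is not meant as a model of actual cognitive architecture, just the shape of the failure mode: chase a chain of definitions with a small buffer and, past the limit, you quietly get nonsense instead of an error.)

# Toy illustration only - not a model of real cognition.  A "symbol"
# is defined in terms of other symbols; resolving it means chasing the
# chain down to something grounded.  WORKING_MEMORY is a made-up limit
# on how many links the resolver can hold at once.  Past that limit it
# does NOT raise an error; it silently returns whatever it had - no
# error checking at this level, just garbage out.

WORKING_MEMORY = 4  # arbitrary small bound, purely illustrative

def resolve(symbol, definitions, held=None):
    """Chase a chain of definitions down to a grounded value."""
    held = held or []
    if len(held) >= WORKING_MEMORY:
        return "<garbage: " + "->".join(held) + ">"   # silent nonsense
    held = held + [symbol]
    meaning = definitions.get(symbol)
    if not isinstance(meaning, str) or meaning not in definitions:
        return meaning                                # grounded (or missing)
    return resolve(meaning, definitions, held)

definitions = {
    "justice": "fairness",
    "fairness": "equity",
    "equity": "impartiality",
    "impartiality": "disinterest",
    "disinterest": 42,          # pretend this one is grounded
}

print(resolve("equity", definitions))    # shallow chain: grounds out fine
print(resolve("justice", definitions))   # deep chain: quietly breaks down

The shallow chain comes back as the grounded 42; the deep chain comes back as a <garbage: ...> string, with no exception raised anywhere along the way.
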
I don't think that my level of rationality is far enough away from an idiot's, relative to the total width of the spectrum, for a random dividing line to be at all likely to fall anywhere within the human race. The only major targetable landmark is the realization that your current actions can depend on your projection of the Singularity's behavior, and that a possible causal link exists.
--
(*) = Not a very good way of looking at it. The objects and their relation occupy a single mental simulation; at any given time I am probably focusing on a particular set of rules and trying to deduce facts, or vice versa. Anyway, I don't run out of RAM.
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.