On 5/13/01 6:07 AM, "Francois-Rene Rideau" <fare@tunes.org> wrote:
>
> Now of course, see how a Solomonoff
> predictor can only be _approximated_ by computational agents. This entails
> that we (rational sentient beings, including AIs and ETs) are all irrational,
> when faced with complex enough problems. However, there are convergent
> algorithms to extract all information there is from "simple enough" systems
> that can be described in rules polylogarithmically simpler than the observed
> system. Etc.
This is correct. I would just like to expand on this a bit: it is possible
to compute the maximum possible accuracy of a prediction for a given
problem. As you suggest, a sufficiently complex problem paired with a
sufficiently small predictor yields an intelligent agent that is provably
unable to predict any better than flipping a coin.
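
To make that concrete, here is a toy sketch (my own illustration, not
anything from the original post or the Li/Vitanyi text): an order-1
frequency predictor guessing the next bit of a stream produced by a
generator far richer than anything its one-bit context table can model.
The generator choice and sample size are arbitrary assumptions; the point
is only that the predictor's hit rate stays pinned at the coin-flip
baseline of 0.5.

    import random

    # Assumed setup, not from the post: the stream comes from a
    # deterministic but algorithmically rich rule (a seeded PRNG),
    # far beyond what a one-bit-of-context predictor can exploit.
    def generate(n, seed=0):
        rng = random.Random(seed)
        return [rng.randint(0, 1) for _ in range(n)]

    def order1_accuracy(bits):
        # Predict each bit from frequency counts conditioned on the
        # single preceding bit -- the "sufficiently small predictor".
        counts = {0: [0, 0], 1: [0, 0]}
        hits = 0
        for prev, cur in zip(bits, bits[1:]):
            guess = 0 if counts[prev][0] >= counts[prev][1] else 1
            hits += (guess == cur)
            counts[prev][cur] += 1
        return hits / (len(bits) - 1)

    bits = generate(100000)
    print("order-1 predictor accuracy:", order1_accuracy(bits))  # ~0.5
    print("coin-flip baseline:         0.5")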
Somewhat counter-intuitively, in some cases you can actually get *better*
results by flipping a coin than by using a small predictor. In these cases,
one could easily claim that the predictor was in an "irrational" state. The
"irrationality" (which in many cases really isn't much more than an observer
bias) is usually caused by insufficient experiential data and/or
insufficient predictor memory. This "irrationality" is intimately related to
the theoretical accuracy of the predictor mentioned above, and could
possibly be quantified as the deviation from the maximum accuracy achievable
for the given configuration. When the irrationality is sufficiently large,
the trustworthiness of the predictor plummets.
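
A crude way to put numbers on that (my own formalization of the paragraph
above, not an established metric): measure irrationality as the gap between
the best accuracy the configuration could theoretically achieve and the
accuracy it actually delivers, and discount trust by how much of the
headroom above a coin flip the predictor captures. The 0.5 baseline and the
example figures are assumptions for illustration only.

    def irrationality(max_accuracy, observed_accuracy):
        # Deviation of the predictor from the best its configuration allows.
        return max(0.0, max_accuracy - observed_accuracy)

    def trustworthiness(max_accuracy, observed_accuracy, baseline=0.5):
        # Fraction of the achievable headroom above a coin flip that the
        # predictor actually captures; falls toward zero as the
        # irrationality grows.
        headroom = max_accuracy - baseline
        if headroom <= 0:
            return 0.0  # nothing to be gained over guessing
        return max(0.0, (observed_accuracy - baseline) / headroom)

    # Example: a configuration that could reach 0.9 but only delivers 0.55.
    print(irrationality(0.9, 0.55))    # 0.35
    print(trustworthiness(0.9, 0.55))  # 0.125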
> PS: Solomonoff Induction is well-explained in chapter 5 of Li and Vitányi's
> "Introduction to Kolmogorov Complexity and its Applications, 2nd Edition".
This is one of the de facto reference texts on the subject; it is cited
heavily in papers on the topic. I would second the recommendation.
-James Rogers
jamesr@best.com