From: Ben Goertzel (ben@goertzel.org)
Date: Mon Aug 26 2002 - 08:30:16 MDT
> Given this, I would then state that rationality is the ability of these
> processes to make use of the BPT in an efficient, objective, and right
> manner, which is to say rationality is the ability to get your priors
> and U right.
>
> --
> Gordon Worley
Let me see if I can transform this into something I like ;)
Let's assume we have a system at hand with a certain goal, embedded in a
certain environment, and with a processor of a certain type and capability.
Then, you can assess the effectiveness of the system *by the standards of
the assumed universe*, by assessing:

A -- how well its actions achieve its goals

B -- how well an optimally intelligent system could have achieved those same
goals, given the same processor type/capability and the same basic
perception/action domain
The "effectiveness ratio" A/B is an index of how well the system has
succeeded, if the ratio is 1 then the system is perfect, if the ratio is 0
then it's a total failure.
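
To make this concrete, here's a toy sketch in Python (my own illustration;
the scoring scale and the numbers are invented, not implied by the
definition above):

def effectiveness_ratio(actual_score, optimal_score):
    """Ratio of the system's goal-achievement score (A) to the score an
    optimally intelligent system would get (B), given the same processor
    type/capability and perception/action domain."""
    if optimal_score == 0:
        return 1.0  # nothing was achievable, so nothing was missed
    return actual_score / optimal_score

# e.g. a system scoring 6 where the optimal system scores 10:
print(effectiveness_ratio(6.0, 10.0))  # 0.6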
I don't call this intelligence or rationality; I call it effectiveness,
because I've made no requirement that the goal be complex, and I consider
intelligence to be the ability to achieve complex goals in complex
environments.
NEXT, you're positing that the behavior of any system can be modeled by
assuming the system is carrying out *approximate* probabilistic inference
with certain parameters. In other words, any system can be matched with an
"inferential system model."
Then, you're saying that those systems with higher effectiveness ratios,
given a certain processor type/capability and a certain perception/action
domain type, are going to tend to correspond to inferential system models
with certain particular parameter sets.
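
Putting the two sketches together (again, an invented toy setup, not a
serious model): agents that update more noisily earn lower effectiveness
ratios on a simple estimation task, so high effectiveness does pick out a
particular region of parameter space, here noise near 0:

TRUE_RATE = 0.7  # hypothetical environment parameter to be estimated

def final_estimate(noise, steps=50):
    """Track an estimate of TRUE_RATE with noise-blended incremental
    updates (expected-value version, so the run is deterministic)."""
    estimate = 0.5
    for _ in range(steps):
        exact = estimate + 0.05 * (TRUE_RATE - estimate)
        estimate = noise * estimate + (1 - noise) * exact
    return estimate

def goal_score(noise):
    return 1.0 - abs(final_estimate(noise) - TRUE_RATE)

optimal = goal_score(0.0)  # the best this agent family can do
for noise in (0.0, 0.5, 0.9):
    print(noise, round(goal_score(noise) / optimal, 3))
# prints roughly: 0.0 1.0 / 0.5 0.958 / 0.9 0.858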
-- Ben G