From: Ben Goertzel (ben@goertzel.org)
Date: Tue Aug 02 2005 - 11:37:25 MDT
> > For instance, quantum physics can be derived from the assumption that
> > uncertainty should be quantified using complex-valued probabilities
> > (cf. Saul Youssef's work). Mathematically it seems consistent that
> > there are more general physics theories that use quaternionic and
> > octonionic probabilities.
>
> Okay, so you have probabilities coming from "larger" fields than the
> reals. Do you think you have evidence that those would provide
> box-exploits?
Not evidence; just speculative ideas for how they might...
But I'm in the middle of a vacation and will share those ideas at a later
time.
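To give a flavor of what complex-valued probability buys you, here is a toy
Python sketch (my own illustration, not Youssef's actual formalism): with
ordinary real probabilities, two exclusive paths to an outcome simply add,
but with complex-valued amplitudes combined by squared magnitude you get
interference terms, the characteristically quantum deviation from classical
probability.

    import cmath

    # Two exclusive paths to the same detector, classical probability 0.5 each:
    p1, p2 = 0.5, 0.5
    print(p1 + p2)                       # real case: probabilities just add -> 1.0

    # Same setup with complex amplitudes of magnitude sqrt(0.5):
    a1 = cmath.sqrt(0.5)                                # path 1
    a2 = cmath.sqrt(0.5) * cmath.exp(1j * cmath.pi / 2) # path 2, 90-degree phase
    print(abs(a1 + a2) ** 2)             # still 1.0 when the phases are orthogonal...

    a2 = cmath.sqrt(0.5) * cmath.exp(1j * cmath.pi)     # path 2, 180-degree phase
    print(abs(a1 + a2) ** 2)             # ...but ~0.0 here: destructive interference

The quaternionic and octonionic cases would generalize the value space of
the amplitudes further still, which is where the speculation about more
general physics theories comes in.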
> > Now, turning the previous paragraph into a real theorem would involve
> > formalizing "intelligence" and "organism" and "box" in useful ways
> > (which we have currently only made limited progress towards), and then
> > proving a possibly very hard theorem. But I submit that if we did prove
> > something like this, it would be decent evidence for the "other part"
> > of my reason for believing a superhuman AI could find a box-exploit.
>
> You'd also need a good working definition of "possible," and other nasty
> things like that. I doubt it would work. In any case, the evidence would
> only be as strong as your definitions of all of the terms are
> uncontroversial. Good luck.
I think this proof would actually be easier than proving anything
significant about "Friendly AI" in a rigorous way...
Very roughly speaking, both would require the same kind of mathematics,
which doesn't really exist yet...
Perhaps a narrowly-specialized semi-superhuman theorem-proving AI will help
us with such things ;-)
-- Ben G