From: Jeff Bone (jbone@jump.net)
Date: Sat Dec 08 2001 - 17:37:02 MST
"Eliezer S. Yudkowsky" wrote:
> Ben Goertzel wrote:
> >
> > For what it's worth, my intuition agrees with Eli's on this. We have as yet
> > explored only a very small part of the "known universe", and there are big
> > aspects of particle physics that we don't yet come close to understanding
> > (e.g. quantum gravity, quantum measurement -- yes, there are claims of
> > understanding, but nothing well-substantiated). To assume that the Big Bang /
> > Big Crunch model or any other fragment of modern science is going to
> > survive untouched 1000 years from now is just plain silly.
And let's be clear, I have not assumed that *either* the Big Bang or the Big
Crunch model is going to survive untouched for any given length of time.
There is, however, at least one logical truism we can all agree on: either the
timelike extent of the universe will be infinite, or it will be finite. From that
logical disjunction, we can begin to reason in a Bayesian manner --- informed by our
best current understanding of physics --- about the probabilities of certain outcomes.
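To make the shape of that reasoning concrete, here is a minimal sketch; every number
in it is an illustrative assumption, not a claim about physics:

# Sketch: reasoning over an exhaustive disjunction via the law of total probability.
# Every number below is an illustrative assumption, not a physical estimate.

p_finite = 0.6                  # P(timelike extent of the universe is finite)
p_infinite = 1.0 - p_finite     # the disjunction is exhaustive and exclusive

p_outcome_given_finite = 0.2    # P(some outcome of interest | finite)
p_outcome_given_infinite = 0.7  # P(same outcome | infinite)

# Marginalize over the disjunction to get the total probability of the outcome:
p_outcome = (p_outcome_given_finite * p_finite
             + p_outcome_given_infinite * p_infinite)
print(p_outcome)  # 0.2*0.6 + 0.7*0.4 = 0.40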
Side note: we're really arguing about priors here. In my experience,
"scientists" (rational actors who build models from observations and seek to
refine those models through an iterative cycle of prediction / observation /
modification) who have studied a particular field are (a) the first to admit
how little is actually known in that field, and yet (b) much more capable of
making accurate predictions in that field --- assuming they stick to the
scientific framework --- than those who haven't studied the particular field in
depth. Given the choice between a prediction of "heat death" by an expert vs. a
prediction of "universe spontaneously fills up with daisies" from a faith-based
"reasoner," I'm willing to grant the scientist the higher level of confidence in
my Bayesian analysis of various outcomes. ;-)
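A back-of-the-envelope way to express that weighting, with reliability figures that
are illustrative assumptions only (nothing here is a measurement of anyone's actual
track record):

# Sketch: Bayesian update on a claim, weighting sources by assumed reliability.

def posterior(prior, p_assert_given_true, p_assert_given_false):
    """P(hypothesis | source asserts it), via Bayes' rule."""
    evidence = (p_assert_given_true * prior
                + p_assert_given_false * (1.0 - prior))
    return p_assert_given_true * prior / evidence

prior = 0.5  # agnostic prior on the hypothesis (say, "heat death")

# An expert working inside the scientific framework is assumed much more likely
# to assert the claim when it is true than when it is false...
print(posterior(prior, p_assert_given_true=0.8, p_assert_given_false=0.2))   # 0.80

# ...while a faith-based "reasoner" asserts it almost independently of its truth.
print(posterior(prior, p_assert_given_true=0.5, p_assert_given_false=0.45))  # ~0.53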
> > The same intuition leads me to believe that the notion of "designing
> > Friendly AI" is only slightly relevant to the final outcome of the grand
> > "superhuman AI engineering" experiment. I agree that it's a worthwhile
> > pursuit, because it has a clearly > 0 chance of making a difference. But as
> > with fundamental physics, there's a hell of a lot we don't understand about
> > minds...
>
> Well, the CFAI model doesn't require that the creators know everything
> about the human mind. It requires a certain bounded amount of complexity
> which is used to construct an unambiguous pointer to unknown facts about
> the human mind, facts which may not be known now, but which are expected
> to be accessible to a transhuman intelligence.
I agree with all of that.
> In other words, under the CFAI model, you can say: "I have this vague
> feeling that liberty and life and love and laughter are important, but I'm
> not sure about it, and I don't know where the feeling comes from. Count
> that in, okay?" The physical causation behind this statement - in your
> accumulated experience, in your brainware, in your genes - is in principle
> accessible to a transhuman intelligence, even one that has to extrapolate
> the causes after the event. The Friendly AI can then intelligently use
> existing philosophical complexity to decide which of these causes are
> valid and should be absorbed. The Friendly AI can then "repeat" the above
> statement at a higher level of intelligence - that is, having absorbed the
> moral baseline behind the statement, it can re-produce the statement as
> you would have produced it at a higher intelligence level.
And this would be entirely in line with the pseudo-Epicurean ethics I generally
employ, but it may be inconsistent with other goals such as long-term
survivability, etc. Therein lie the aforementioned tradeoffs.
> So what's needed is the threshold level of moral complexity to understand
> how to correctly use pointers like the one described above - not a
> complete diagram of human moral complexity, or a complete understanding of
> transhuman philosophy. That threshold level of complexity - which is big,
> but bounded, and hopefully accessible to merely human understanding - is
> what CFAI attempts to describe.
...and the point I'm making is that the assumption that this complexity is either
measurable (big or otherwise) *or* bounded in a context-free way is a qualitative,
intuitive assessment that is neither defended in the "Friendliness" argument nor
--- in the absence of such a defense --- provably defensible. And while I haven't
attempted such a proof, the assumption in fact *may* be provably *indefensible*.
jb