From: mjg223 (mjg223@is7.nyu.edu)
Date: Wed May 24 2000 - 11:47:00 MDT
Between puffs on the ontological crack pipe, Dan managed:
> [big old snip]
> ... but despite our differing motivations, we will, at least, agree on
> the Facts, right? What are these except the beliefs which we cannot
> imagine ourselves rejecting? How could we tell the difference between
> the two? Why would we care about such a difference?
I believe in an external reality which exists independent of our ability
to know or perceive it. A fact isn't a belief we're really sure about;
it's an accurate statement about what's going on outside your head.
How you gauge its accuracy is another question. On the whole, I'd tend
to go with 'ability to predict,' rather than 'usefulness.'
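A toy sketch of what I mean by 'ability to predict' (the numbers and
the model here are invented, purely for illustration):

    # Score a theory by its prediction error against what actually
    # happened. Observations and the 'theory' are made-up stand-ins.
    observations = [1.0, 2.1, 2.9, 4.2]             # what the world did

    def theory(t):
        return 1.0 * t                              # model predicts y = t

    predictions = [theory(t) for t in range(1, 5)]
    mse = sum((p - o) ** 2
              for p, o in zip(predictions, observations)) / len(observations)
    print(mse)   # lower error = more 'right', in the sense I care about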
> I'm not pulling this "radical interpreter" stuff out of a hat, as it
> were. Have you read Quine on this question?
I haven't read Quine on any question, I must confess. Not part of the
engineering curriculum. I tried doing some Philosophy of Mind once. I
got about as far as 'do we have minds when we're asleep' before going
out for a cigarette and never coming back.
> > If motivation is made up but reality isn't, then it seems better to
> > describe the mind as something that parses the real world than a hack
> > that keeps you alive.
>
> I'm rejecting the notion that a useful distinction can be made between
> realism and anti-realism. I'm a pragmatist about that question: it
> just doesn't matter. The whole distinction between "made up" and "not
> made up" in this context is a useless artifact: there are no answers
> there, and no need for answers.
>
> Will you wear a different tie based on whether you have free will or
> not? Will you build a bridge differently on the basis of whether your
> worldview is right for your own purposes or objectively right? Will
> you behave any differently at all if you decide that P is a
> proposition you believe unflinchingly or if you think that P is a
> Fact?
If my purpose is to construct a bridge that doesn't fall down, and I
have a theory which describes the world well enough that I can, then
I'd say my theory is at least somewhat 'right.' 'Somewhat' because all
we ever have are models, which reflect more or less well what's really
going on.
> You won't even build an AI differently, I argue. This question is
> totally irrelevant as to whether there is a simple generic
> truth-finding algorithm that we're running, or whether there is a nasty
> complicated mess leading us to our conclusions. We are willing to
> agree that anything that follows our algorithm (or one like it) will
> reach our conclusions. We're even willing to agree that our algorithm
> is mostly right. The only added claim, a useless one, is that we got
> to have this algorithm, and not another, because it's right. I don't
> see what you get out of saying this.
We have come pretty far afield. I don't really want to talk about
whether the chair's there or not.
I don't know what you mean by 'we've got to have this algorithm, and not
another, because it's right.' What did I say that sounds like that?
> How is this different from drawing a line in the epistemological sand
> and saying "No uncertainty past this point!" I can always raise
> useless questions like "how do you know that's a structure, rather
> than some arbitrary set?" Or is that a structure simply because you
> CALL it a structure?
I don't know, dude. If the Chinese nation dreams it's a butterfly
flapping its wings, does anyone get wet in the virtual rainstorm six
months later? I know stepping outside the system makes you feel all
bad an' shit, but if we're going to have a conversation we've got to
have some base set to ground arguments in.
> > Out of curiosity, how would you explain Kasparov's ability to play a
> > decent game of chess against a computer analyzing 200 million
> > positions a second? Certainly not a regular occurrence on the plains of
> > Africa, what more general facility does it demonstrate?
>
> Eh? It can't just be his "chess-playing" facility? I'm not aware of
> anything *else* at which Kasparov is the best in the world, suggesting
> that it is his chess-playing facility, and, apparently, nothing else,
> that did the work. :)
>
> Were I to assume that something more general was, in fact, at work,
> I'd have to guess that his capacities to plan ahead, imagine in a
> structured manner, and empathize with his opponent were all at work.
> Shot in the dark on my part...
Well, it can't be his "chess-playing" facility, since there was never
any pressure to evolve one. It must be something else more general,
which was selected for but also happens to be applicable to
chess. Monkeys didn't sit around playing tournaments for millions of
years, or if they did it certainly didn't help them pass on their
genes. (I was in a chess club in high school. Believe me, no
reproductive advantage there.)
Capacity to plan is one facility almost certainly involved, but he can't
do nearly as complete a search as the machine. 'Imagine in a
structured manner' is provocative but too vague to comment on. I'm not
clear what advantage empathy would offer against a computer.
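A back-of-envelope comparison (my rates are rough guesses, not anyone's
measured figures) of how deep raw speed gets you in a full-width chess
search:

    import math

    BRANCHING = 35            # typical legal moves per chess position
    SECONDS_PER_MOVE = 180    # roughly three minutes on the clock

    def reachable_plies(positions_per_second):
        # Depth d such that BRANCHING ** d positions get examined.
        return math.log(positions_per_second * SECONDS_PER_MOVE, BRANCHING)

    print(reachable_plies(200e6))   # machine: ~6.8 plies, brute force
    print(reachable_plies(2))       # a human, generously: ~1.7 plies

Whatever Kasparov is doing to hold his own, it isn't exhaustive search.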
> > This is a very anthropomorphic view - I'm looking for a definition
> > that transcends humans and evolution, the essential properties shared
> > by all possible intelligences. You seem to be saying there isn't such
> > a thing - or it's the null set.
>
> No *interesting* properties, other than stuff like "X can pass the
> Turing test," "we'd probably call X intelligent," etc.
Gah.
> If the code is written in a way that a "programmer" subsystem could
> understand it, or, as Eliezer calls it, a "codic cortex" analogous to
> the visual cortex, then each part may attempt to code itself better.
> Each domdule can participate in improving the capacity of the rest of
> the domdules. That's how the system will improve, and improve itself
> better and faster than any human could have done. But you need a rich
> system before this can take off. That's where the hand-coding comes in.
How does the machine decide what's an improvement? Out of the space of
possible self-modifications, how does it distinguish qualitative
improvements from junk? Putting aside the issue of universality, is
there a mechanism other than external feedback?
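Here's the shape of the question as I see it, as a minimal sketch (the
benchmark and the mutation step are placeholders of mine, not anything
from Dan's design):

    import random

    def benchmark(program):
        # The crux: an internal yardstick for 'better.' Where does this
        # come from, if not from feedback with the outside world?
        return -abs(program - 42)    # stand-in: closer to 42 is 'better'

    def propose_modification(program):
        return program + random.choice([-1, 1])    # blind tweak

    program = 0
    for _ in range(100):
        candidate = propose_modification(program)
        if benchmark(candidate) > benchmark(program):
            program = candidate      # keep only scored improvements

Without the benchmark, 'improve itself better and faster than any human'
has no content; with it, the interesting work is all in where the
benchmark comes from.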
-matt