Noah Goodman.
Reverse engineering intelligence.
If we're going to make ourselves better, we need to understand our minds. My research is aimed at doing that by answering a very old question: what is thought? More precisely, how are thoughts structured, and how does the structure of thought lead to the flexible behaviors that humans exhibit? I want to answer these questions at a computational and mathematical level, and perhaps, more precisely, at an engineering level. What is thought? What are the engineering principles we need?
Let me start by describing two important ideas in cognitive science. The first starts with the observation that thought is incredibly productive: it makes infinite use of finite means. If you read in the newspaper one day about a big green bear who loves chocolate, you have no trouble conjuring it in your imagination; the mind is very productive. This works even at a bigger scale, for concepts like mass and momentum, or the concept of a nation. To explain this productivity, cognitive scientists have turned to the idea of compositional representation: thought is built up from small pieces, combinatorially. We build up words from a small set of letters, we build up molecules from a small set of atoms, and similarly, we build up thoughts from smaller pieces.
There's another principle that some people have come to: thought is incredibly useful even though the world is uncertain. The uncertainty of the world can come in the form of noise: with rain all over the place, you can still guess what the road signs mean. Or imagine you call your friend on the phone and the call cuts out. Did your friend hang up on you because he wanted to hurt your feelings? You're trying to recover what happened, but you don't have enough information.
To deal with this, cognitive scientists use the principle of probabilistic inference. In elementary school you learn that a problem like A + B + C has a single answer. In probability, instead, there are degrees of belief, and there's a mathematics for dealing with them: the probability calculus is the calculus for reasoning under uncertainty. Now, these two principles have been incredibly influential in cognitive science. Cognitive science began with logical AI, and then, partly in reaction to this, probability took over, as in the connectionist research programs. But for these two ideas, composition and probability, there have been few programs and attempts to bring them together.
That is the probabilistic language of thought hypothesis: a computational language of thought, with probability. To explain this more carefully, let me give you the flavor of the mathematics. We use an old calculus, due to Alonzo Church of Church–Turing thesis fame: the lambda calculus. You can define functions, like double, which takes X and adds X to itself, and you can combine functions and define higher-order functions. The remarkable thing is that just by making functions and putting them together, you get all the abilities of computation. To add probability to this, we do something that looks simple: we add random primitives, like coin flipping. We might flip a coin and call the result A, then flip again for B and C; if you do this again, because the flipping of a coin is random, you may get a different answer. If you keep doing this and make a histogram of the results, you get a probability distribution: degrees of uncertainty about how likely each answer is. That's the flavor. The probabilistic lambda calculus is universal for probabilistic computation. This is all we need.
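As a minimal sketch of the coin-flipping idea in Python (not Church itself; the names `flip` and `num_heads` are mine), a program that calls a random primitive denotes a distribution rather than a single value, which we can see by running it many times and making a histogram:

```python
import random
from collections import Counter

def flip():
    """The elementary random primitive: a fair coin."""
    return 1 if random.random() < 0.5 else 0

def num_heads():
    """Compose three random choices: flip coins A, B, and C and sum them."""
    return flip() + flip() + flip()

# Running the program repeatedly gives different answers; the histogram
# of many runs approximates the probability distribution it denotes.
counts = Counter(num_heads() for _ in range(10000))
```

Most runs land on one or two heads, tracing out a binomial shape; in Church, asking for this distribution directly is roughly what the language's inference machinery does for you.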
This lets us make the probabilistic language of thought hypothesis precise: the mental representations that make up thoughts are functions in a probabilistic lambda calculus. This is great because (1) thoughts can be built up into bigger thoughts, by composing functions, and (2) those representations support thinking, which is probabilistic inference, and so deals with uncertainty. This mathematics is precise enough that we have developed a programming language from it, called Church, in which you can build models.
http://projects.csail.mit.edu/church
I want to give you one brief example of how you use this abstract formal system to explain how humans think. You have a friend, Bob, who has a box with buttons on it. He puts it on a table, presses some buttons, and the light goes on. How do you think the box works? Maybe you need to press the buttons together. We asked people something similar to this in an experiment, and people said, very strongly: you have to push both buttons, A and C, to make the light go on. Why do I find this striking? Because if you approach this as a pure causal analysis problem, like Spock, a perfect scientist, there's not enough information; the evidence is confounded. But that's not what people think: they conclude that the structure is that you need to press both buttons. If you only take causality into account, you can't tell what's going on.
What we need to do is incorporate how people reason about other people. We use the probabilistic lambda calculus to say that beliefs and desires combine together into actions, where agents act to approximately achieve their desires. We can compose that with a model of causality to make a bigger model of what an agent knows, and then apply the probability calculus: given that I have seen this evidence, what should I believe? The beautiful thing is that this model now strongly predicts the inferences that people make. Essentially, what the model is doing is asking: why else would Bob have pressed both of those buttons, unless he needed to in order to get the light to come on?
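The gap between purely causal reasoning and reasoning about a rational actor can be sketched in Python. This is a deliberately simplified, deterministic version of the probabilistic model described in the talk, with all names mine: hypotheses are "which set of buttons must all be pressed," and the rational-action assumption is that Bob, wanting the light on, presses exactly the buttons he needs.

```python
from itertools import combinations

buttons = ("A", "B", "C")

# Hypotheses: some nonempty set of buttons must all be pressed
# for the light to come on.
hypotheses = [frozenset(c)
              for r in range(1, len(buttons) + 1)
              for c in combinations(buttons, r)]

pressed = frozenset({"A", "C"})  # we saw Bob press A and C; the light went on

# Pure causal analyst: any hypothesis whose required buttons were among
# those pressed is consistent with the light going on. The evidence is
# confounded: A alone, C alone, or both together would all explain it.
causal = [h for h in hypotheses if h <= pressed]

# Add rational action: Bob presses exactly the buttons he needs,
# so pressing {A, C} only makes sense if both are required.
rational = [h for h in hypotheses if h == pressed]
```

`causal` keeps three hypotheses while `rational` pins down {A, C}, mirroring the inference people actually report; the full model would weight hypotheses with probabilities rather than filter them outright.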
Because the model is compositional, this model of rational action explains a whole variety of different inferences that people draw. If I tell you "some of my plants have sprouted," you might infer that not all of them have sprouted. We can take that same model, say that the speaker's desire is to communicate clearly, and we find exactly that prediction: some of them have sprouted, and probably not all of them. But suppose the speaker only checked some of his plants; then, if you assume that plants generally sprout, you might instead conclude that maybe all of the plants have sprouted. This is a scalar implicature, and the model predicts that it goes away when the person speaking has only partial knowledge. You can use this small set of principles to explain complicated aspects of human cognition.
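The sprouted-plants implicature can be sketched as a small speaker–listener model in Python. This enumeration-style reconstruction (in the spirit of Goodman's later rational-speech-acts work, not code from the talk) shows how "some" comes to suggest "not all": a pragmatic listener reasons about a speaker who would have said "all" if all had sprouted.

```python
# States: how many of 3 plants have sprouted.
states = [0, 1, 2, 3]
utterances = ["none", "some", "all"]

def literal(utt, s):
    """Literal semantics: logically, 'some' is true even when all sprouted."""
    return {"none": s == 0, "some": s > 0, "all": s == 3}[utt]

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def literal_listener(utt):
    """Uniform prior over states, restricted to where the utterance is true."""
    return normalize({s: 1.0 if literal(utt, s) else 0.0 for s in states})

def speaker(s):
    """A speaker whose desire is to communicate clearly: prefers
    utterances that lead the literal listener to the true state."""
    return normalize({u: literal_listener(u)[s] for u in utterances})

def pragmatic_listener(utt):
    """Infers the state by reasoning about the rational speaker."""
    return normalize({s: speaker(s)[utt] for s in states})
```

`pragmatic_listener("some")` puts much less weight on the all-sprouted state than the literal reading does, because a fully knowledgeable speaker would have said "all"; giving the speaker only partial knowledge (he checked just a few plants) is what removes the implicature in the full model.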
What I've done is argue that there are two principles we need to take into account to understand thought at a computational and engineering level: composition and probability. We can build a simple formal system that combines them, and use it to explore the flexibility and power of human thinking. Once we master how human thinking works, we can go on to build better systems. Thanks.