IBM Research's 2006 Almaden Institute Conference on Cognitive Computing
The Blue Brain Project and the Emergence of Perceptions / The Emergence of Intelligence in the Neocortical Microcircuit
Henry Markram
video:
- https://www.youtube.com/watch?v=9gFI7o69VJM
- http://www.almaden.ibm.com/institute/resources/2006/Disk2.avi
powerpoint:
- http://www.almaden.ibm.com/institute/resources/2006/Almaden%20Institute%20Henry%20Markram.ppt
That's why I was so excited when we got the proposal for cognitive computing. I am pleased and honored to introduce a person who has worked at the interface between physics, computational tools, and neuroscience to bring the field further, Prof. Dr. Henry Markram. He is co-director of the Brain Mind Institute, Director of the Blue Brain Project at EPFL, and Director of the Center for Neuroscience Technology. Henry Markram has.. before joining EPFL... the Weizmann Institute. He also worked for a couple of years as a .. fellow in the laboratory of .. at the Max Planck Institute. Among his many contributions to the field, one is showing that the precise millisecond timing of pre- and postsynaptic potentials reveals an operating principle for spike-timing dependence. The many fields he has worked in have led to a series of awards, such as the .. He is now leading the Blue Brain Project at EPFL and IBM, and it is my great pleasure to introduce Henry Markram, who will talk to us about the emergence of intelligence in the neocortical microcircuit. Please welcome..
Thank you very much for this honor. So, about 3 weeks ago we had a conference in Barcelona to commemorate the 100 years since Cajal started drawing every neuron in the brain. He essentially did: he drew every type of neuron that we basically know now. The purpose of this conference was to reflect back on the last 100 years, but also to look forward. If that much can happen in 100 years, then maybe you can look even further. What is going to happen in the next 100 years?
Well, two very interesting issues or topics of debate arose at this meeting. The first one was: what would we, as mankind, put inside a time capsule and send out into space so that it would land on some alien planet and they would be able to understand who we are? Of course, you had all kinds of opinions. You had people saying, let's send the genome, send it all out there. So you are going to read the genome? I don't know. We don't know how to read it. We don't know how to construct who we are from the genome. It's a very important milestone, but are they going to understand who we are?
Other people said, since this was a meeting about the brain and the types of neurons, let's send the piles of books about the different neurons. Okay, let's send them. We can't even look at all of these thousands of papers ourselves and reconstruct the brain and figure out how it works, so do we think they are going to be able to do it? Maybe if they are very advanced and have already seen that type of biological design, they will say, okay, let's go. But maybe it's a silicon race and they can't figure out how to build us.
Other people said, let's send DARPA's robot. And if you send their robot, are you really going to put your name on it and say this is us? So yes, they would recognize that we were a very sophisticated race that could send DARPA probes, and they would know we could build robots. But would that tell them about us?
What we think is that the one thing that would really demonstrate to an outsider that we understand the brain is if we could send them the blueprints.. not just the biological blueprints, but the full biological processes, something that would look like us and interact and have all of the cognitive capabilities. Of course, that's a dream and a challenge. It requires building a robot with a brain that is virtually a copy of the way that we are. So the first part of my talk is going to illustrate the very first small step towards not just building a neural circuit that is very clever, but towards accurately building a small part of the human brain.
The second issue, which was very interesting, was: what makes humans special? You had all sorts of reactions. Some people said humans are not special, because each animal adapts as well as it can within its own environment. But the overriding opinion was that we are special, and we're special because we can share knowledge for the purpose of understanding our own brain. It's difficult to find animals that can do that.
I think there is another reason why we are far more special than that. And that is the imagination. Imagination is more important than knowledge, said Einstein. I think the reason imagination is so important is that it allows us to look inside the paradigm in which we are .. but not only to look inside the paradigm and say, this is the foundation of axioms that we use to interpret the brain. It allows us to step outside that paradigm and say, what if we actually interpret the brain from a completely different place?
So in the second part of my lecture, I am going to try to briefly illustrate for you the current paradigm that rules, from which we understand the brain, and then take you to the opposite paradigm. So let's begin with the first.. As we heard beautifully from Edelman, the human brain really starts with the human neocortex. That is the emergence of man: from mouse to man is a thousand-fold expansion of the neocortex.
So much so that 80% of our brain is neocortex. That is what defines us. That is what allows us to interact, predict, analyze: it's the neocortex. The neocortex is essentially composed of small microchips, known as columns. There are about a million of these columns in the human brain. They are each about 0.5 mm in diameter. One can debate forever what a column is. It's the minimum set of neurons that creates a society for neurons. Neurons are not islands; they need a group around them. The minimum set of neurons around them turns out to be approximately the size of a column. In the rat it is about 10,000 neurons and in the human it is about 100,000 neurons.
In evolution these columns were added and added, until the brain folded upon itself. That is why the brain is so convoluted. The secret to the neocortex is to understand these little columns. The goal of my lab has been to go inside these little columns and try to reverse engineer how they are built, with as much detail as the technology allows us.
So just to give you an idea of what that looks like, this is a technique that we developed at that time called the multi-neuron patch clamp. You put a slice of brain under a microscope, you use infrared to visualize the tissue while it is alive, you use a glass pipette to approach the neuron, clearing the way with positive pressure, and then you seal onto the membrane, apply suction, blow a little hole, and then you can inject dyes or record electrical behavior. You can see the morphology with the dyes. You can send little messages. This is an extremely powerful technique because it gave us the tools to systematically start reverse engineering the microcircuits.
So over the years we have developed this to the most extreme art that is possible right now. This is a 12-patch setup. We can't get more manipulators in there. We need very fine, sub-micron control of these manipulators so that we can choose which cells to record. But this has allowed us to rapidly increase the rate at which we get the data and work out how the circuits are built. We inject a dye into these cells, we use a confocal laser scanning microscope, and we can see the full structure of the cells.
So the first thing you hit upon is that you re-discover Cajal's inspiration, the inspiration he must have had when he looked under the microscope. He called them the butterflies of the cerebral cortex. You have a whole spectrum of different types of cells, just like in the Amazon forest with a whole spectrum of trees. This is a daunting variety of trees. When you go to the computational people and show them this, they are completely flabbergasted. How are we going to understand the brain with so many types of neurons?
slide: Morphological classes of neocortical neurons
They do cluster and group into certain classes. There are nine types of neurons, the inhibitory cells mentioned in the previous lecture, and there are also 9 or 10 types of excitatory cells located in different layers. So there is an order, and it is not that complex. Each of the types is different. Of the 100 or 130 billion neurons, there are no two neurons that are the same. Just like people.
No two neurons are the same, but they do fall within classes. We can now model any one of these cells. Very often we need to repair them first before we model them, because in the slice we may have shaved off some of the branches. This is just showing some of the algorithms that we use to repair the dendrites, in red, or the axons. Once you have studied 100 or 1,000 pine trees, you don't need to get every single one of them. You can clone them.
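To make the cloning idea concrete, here is a minimal sketch in Python of how one might statistically "clone" a reconstructed morphology by jittering branch parameters around measured values; the data structure and jitter model are illustrative assumptions, not the actual Blue Brain repair and cloning algorithms.

```python
import random

def clone_morphology(branches, length_cv=0.1, angle_sd_deg=5.0, seed=None):
    """Create a statistical 'clone' of a reconstructed neuron.

    branches: list of dicts with 'length' (um) and 'angle' (deg), one per branch.
    A clone keeps the branching topology but jitters branch lengths and angles
    around the measured values, so a population of clones reproduces the
    variability seen across real cells of one morphological type.
    """
    rng = random.Random(seed)
    clone = []
    for b in branches:
        clone.append({
            "length": max(0.1, rng.gauss(b["length"], length_cv * b["length"])),
            "angle": b["angle"] + rng.gauss(0.0, angle_sd_deg),
        })
    return clone

# Example: 100 statistical clones of one reconstructed cell (toy two-branch morphology)
reconstructed = [{"length": 120.0, "angle": 30.0}, {"length": 80.0, "angle": 55.0}]
clones = [clone_morphology(reconstructed, seed=i) for i in range(100)]
```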
We can now essentially clone as many neurons as we need from each of the types, with the full morphological diversity that we see in the neocortex. So we have the neurons. The next thing is that we need to match the electrical behavior of the neurons. When you inject positive current into a cell, the cell of course depolarizes. All cells sit at about -70 or -80 millivolts resting potential. Then suddenly they spike. These are action potentials, thought of as the currency of the brain. This is thought to be the basis of perception and consciousness. Neurons produce action potentials.
If you look at the neocortex, you discover another big challenge. Not only do the neurons look different, they also have different behaviors, and we need to understand these different behaviors. Some of them stutter, some wait before firing, some have bursting behavior, and so on. We have characterized most of these, just like a psychologist analyzes all the different patients. What can be seen is that each of the anatomical types has multiple electrical types. You might have a pine tree, but that pine tree might have different behaviors. If you put them together, you start to see that in one layer of the neocortex you have many different types of neurons.
You are looking at about 300 types of neurons in the neocortical microcircuit. We can capture them; each one again falls within a class, but they are all slightly different. Not only are they structurally different, but electrically different. Now the challenge is to model not just the classes but also the variations within the classes.
So what we had to do is go further below the surface and find out why they differ. Now, we know that neurons have different behaviors because of their ion channels, like sodium, calcium, chloride, and potassium channels. In the neocortex there are about 200 ion channels to select from. If I select 20 of these, I am going to have a certain behavior. If I select 20 other ion channels, I will have a different behavior. We have to model each ion channel and then know which ones correspond to which behaviors, so that we can build each of the neurons.
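As a concrete illustration of how a chosen set of ion channel conductances produces a firing behavior, here is a textbook single-compartment Hodgkin-Huxley model in Python with just sodium, potassium, and leak channels; the real models use far larger channel sets fitted to the recorded cells, so this is only a sketch of the principle.

```python
import math

# Classic Hodgkin-Huxley single-compartment model with textbook parameters.
# The Blue Brain workflow fits many more channel types per cell; this sketch
# only illustrates how a chosen set of conductances shapes the firing behavior.

C_M = 1.0                                   # membrane capacitance, uF/cm^2
G = {"Na": 120.0, "K": 36.0, "L": 0.3}      # selected channel conductances, mS/cm^2
E = {"Na": 50.0, "K": -77.0, "L": -54.4}    # reversal potentials, mV

def rates(V):
    """Voltage-dependent opening/closing rates for the m, h, n gating variables."""
    am = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * math.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * math.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * math.exp(-(V + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

def simulate(i_inj=10.0, t_stop_ms=50.0, dt=0.01):
    """Forward-Euler integration of the membrane equation under a current step."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    trace = []
    for _ in range(int(t_stop_ms / dt)):
        am, bm, ah, bh, an, bn = rates(V)
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        i_na = G["Na"] * m**3 * h * (V - E["Na"])
        i_k = G["K"] * n**4 * (V - E["K"])
        i_l = G["L"] * (V - E["L"])
        V += dt * (i_inj - i_na - i_k - i_l) / C_M
        trace.append(V)
    return trace

voltages = simulate()   # a depolarizing step current produces a train of action potentials
```

Swapping in a different conductance set (for example adding a slow potassium or calcium conductance) is what turns a regular-firing model into an adapting or bursting one.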
Over the past 10 years, one of the techniques we have used is to go into the cells and extract the cytoplasm. It's like a reverse copying process: we take the mRNA and reverse-copy the complementary DNA of the genes that are switched on in that single cell. We can use primers to find out which genes are switched on in that cell. These are the names of the ion channel genes that are used. We can see which of these genes are switched on to produce ion channels in that one cell.
It gets complicated, because these genes have subunits that can combine in a combinatorial manner to give you over 200 ion channels. So we've had to work out how they combine and what types of ion channels are used in the neocortex. We've mostly done that, and we've started to build a large database of these different types of ion channels. We now know which of these we need to use to model each of these different types of behaviors. It's not that simple, though.
It gets even more complicated. Neurons are very complex structures. These ion channels are inserted in different locations, and depending on the location this can add new properties to the neuron. So not only do we need to know which 20 or 30 ion channels get selected from the pool, we need to know exactly where to put them. For a whole neuron there are several million ion channels. We have used many different techniques; with this kind of scanning microscopy you can locate single channels or single molecules to constrain the model and say that there are certain distributions for certain ion channels. It will then be possible to put these ion channels in exact locations.
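A hedged sketch of the placement idea: once you know which channels a cell uses, each channel type also needs a spatial density rule along the dendrites. The gradient function and parameters below are illustrative, not measured distributions.

```python
import math

def channel_density(distance_um, soma_density, length_constant_um=300.0, gradient="increasing"):
    """Illustrative rule for assigning a channel density to a dendritic compartment.

    distance_um is the path distance of the compartment from the soma. Real
    distributions are constrained by single-channel localization data; this
    only shows how a per-compartment density could be parameterized.
    """
    if gradient == "increasing":       # a channel that gets denser toward distal dendrites
        return soma_density * math.exp(distance_um / length_constant_um)
    if gradient == "decreasing":       # a channel concentrated near the soma
        return soma_density * math.exp(-distance_um / length_constant_um)
    return soma_density                # uniform placement

# Assign densities to a chain of compartments spaced 20 um apart along one dendrite
compartment_distances = [i * 20.0 for i in range(30)]
densities = [channel_density(d, soma_density=1.0) for d in compartment_distances]
```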
We have recorded from 10,000 cells, and we have started to derive the first recipe for the neocortical column. It's the recipe of cell types needed to build the microcircuit. Now, as I said, the neocortical column from mouse to man is surprisingly stereotypical. There are approximate ratios of the types of interneurons, and things like that. This recipe might change across regions and species, but the core aspect of the recipe stays very much the same. So once we have this template, we will be able to build columns from different species and different brain regions.
The next thing we need to be able to do is connect them. This is an infrared picture of a full column. This is what it looks like: it's just packed in. Red is the dendrites, blue is the axons. We can model the structure and function, but how are we going to connect them? So we've done a number of different experiments. The first thing we wanted to know (Edelman raised an important issue: in the neocortex there are so many specific things) is whether it is possible for one neuron to go and target a specific cell. Someone made the claim that it is not possible. So we recorded from two cells, filled them with a dye, used confocal microscopy to do a detailed 3D reconstruction, took the axon, tracked it and followed it, and counted whenever it came within 0.1 micrometers of a dendrite of the target cell. Then we did some statistics to see if there are any biases in how the branches target the different cells. As Edelman predicted, there is no bias. Every axon touches every target cell. It's an all-to-all circuit that is ready and in place for a specific functional circuit. We could see that, because a neuron is not going to transmit information to all of them, it's just structurally in position, it's going to transfer information to about 10% of them. All it has to do is grow a synapse at those 10%, and I will show you later that this is a dynamic process.
Now, we used that, and we're going to come back to it: we use the information about the all-to-all structure and the touches on each cell to constrain how we position these neurons in 3D space when we build the circuit. For each type of cell, we need to know not only whether it touches, but also where it touches. What part of the axon is used? And on what part of the dendrite does it place its synapses? So we have recorded from many connected cells. This is a small basket cell that is connected to 3 pyramidal cells. The red shows points of information exchange, the synapses. It looks like a mess, but if you do this repetitively for each type of pre- and postsynaptic neuron, there are very specific patterns for which parts of the axonal arbor are used to contact which parts of the dendritic branches. This is the most crucial information for building a brain. Nobody has bothered to collect this data because it's extremely tedious. These are the locks and keys for placing 10 to 20 million synapses in exactly the right locations so that the neurons will exchange information in the right places.
So we built a circuit builder. It's an application that interfaces with the database of these different neurons that we have repaired and functionalized. We entered the recipe, defined the cortical column, and defined the first positions of the neurons. What you're seeing is a hexagon, because we anticipated coupling these columns together. What you're seeing is the laying down of 10,000 neurons, starting with layers 1, 2, 3; the blue is the axons, the red is the dendrites. So, as I said, we are trying to pack and position approximately 10,000 neurons in this space. Now, we used other types of rules, such as minicolumnar rules, to give the first initial positions. But you have to be aware that it doesn't help to just put the neurons there: they are going to have to move and jitter and spin in order to get the exact positions that fit the experimental data. You can see how much overlap there is between the dendrites sitting side by side.
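A rough sketch of this initial placement step, assuming a simple minicolumn-plus-jitter rule; the real circuit builder uses measured layer heights, minicolumn layouts, and iterative repositioning against the experimental overlap statistics.

```python
import math
import random

def place_neurons(n_minicolumns=100, neurons_per_minicolumn=100,
                  column_radius_um=250.0, column_height_um=1500.0,
                  jitter_um=5.0, seed=0):
    """Give each neuron an initial 3D position inside a column.

    Minicolumn centers are scattered inside the column footprint, neurons are
    stacked along the depth of their minicolumn, and a small random jitter is
    added. Later iterations move, spin and jitter the cells until the
    axon/dendrite overlaps match the measured statistics (this layout rule is
    illustrative, not the actual circuit-builder placement code).
    """
    rng = random.Random(seed)
    positions = []
    for _ in range(n_minicolumns):
        # uniformly random minicolumn center inside the circular footprint
        r = column_radius_um * math.sqrt(rng.random())
        theta = 2.0 * math.pi * rng.random()
        cx, cy = r * math.cos(theta), r * math.sin(theta)
        for k in range(neurons_per_minicolumn):
            positions.append((
                cx + rng.gauss(0.0, jitter_um),
                cy + rng.gauss(0.0, jitter_um),
                k * (column_height_um / neurons_per_minicolumn),  # depth across the layers
            ))
    return positions

column = place_neurons()   # 10,000 initial positions for the microcircuit
```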
So the next thing we need to do, once we've positioned them, is to connect the neurons. This takes 2 steps. First we structurally position them based on statistical information. Then we have to connect them. This is a zoom-in of the dendrites, in gold, and the axons, in blue. We have to run collision detection for every place where the axons touch the dendrites. This is about 100 million tests. We run this on the Blue Gene, over 8,000 processors. One of the most demanding problems is finding algorithms to do this quickly. We can do the first iteration, but we need to do 10,000 iterations while we spin and jitter the neurons and check that they are structurally positioned correctly and that the contact locations follow exactly the lock-and-key principle that we derived before.
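The collision-detection step can be illustrated with a spatial-hashing sketch: bin the dendrite points into a 3D grid so that each axon point only has to be tested against nearby bins. This serial Python version is only meant to show the idea behind making roughly 10^8 proximity tests tractable; the production code runs in parallel on Blue Gene.

```python
from collections import defaultdict
from itertools import product

def find_touches(axon_points, dendrite_points, touch_distance_um=1.0):
    """Find axon/dendrite point pairs closer than a touch distance.

    Points are hashed into a 3D grid whose cell size equals the touch distance,
    so only the 27 neighboring bins of each axon point need to be checked
    instead of every dendrite point in the column.
    """
    cell = touch_distance_um
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(dendrite_points):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(i)

    touches = []
    for j, (x, y, z) in enumerate(axon_points):
        key = (int(x // cell), int(y // cell), int(z // cell))
        for dk in product((-1, 0, 1), repeat=3):
            for i in grid.get((key[0] + dk[0], key[1] + dk[1], key[2] + dk[2]), []):
                dx = x - dendrite_points[i][0]
                dy = y - dendrite_points[i][1]
                dz = z - dendrite_points[i][2]
                if dx * dx + dy * dy + dz * dz <= cell * cell:
                    touches.append((j, i))
    return touches
```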
Once we do that, we can connect these neurons based on the statistics of connectivity. We can use other algorithms later. The statistics of connectivity say that you structurally position them, only 10% of them are going to be connected, and the connected ones will make multiple synapses on each other. That's how we solve the problem of random structural connectivity but specific functional connectivity. A single neuron is packed with synapses in many different locations, like at the top or at the bottom here.
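And a sketch of this second step, turning the all-to-all structural touches into the sparse functional circuit: keep on the order of 10% of the touching pairs and give each kept pair several synapses. The keep fraction and synapse counts below are placeholder numbers standing in for the measured pathway statistics.

```python
import random
from collections import defaultdict

def functional_connections(touches, keep_fraction=0.1, synapses_per_connection=5, seed=0):
    """Turn all-to-all structural touches into a sparse functional circuit.

    touches: iterable of (pre_id, post_id, location) tuples from the collision
    detection step. Roughly keep_fraction of the touching pairs are kept as
    functional connections, and each kept pair uses several of its touch
    locations as synapses.
    """
    rng = random.Random(seed)
    by_pair = defaultdict(list)
    for pre, post, loc in touches:
        by_pair[(pre, post)].append(loc)

    circuit = {}
    for pair, locations in by_pair.items():
        if rng.random() < keep_fraction:
            rng.shuffle(locations)
            circuit[pair] = locations[:synapses_per_connection]
    return circuit
```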
We can also zoom in on a single cell and color-map the source locations of the neurons that contact it. We have this lock and key, and its translation is a specific color map for each type of cell. We can use the color map to see whether we have managed to capture what we see in biology. That gives us a specific color map for every type of cell. It is different; it is unique; it depends on what the source cells are. This is how we structurally and functionally start connecting the neurons. Next, the neurons have to be able to communicate electrically.
We have systematically recorded from pairs of neurons. You excite one neuron, evoke an action potential, and record the electrochemical transduction process: you produce an analog response in the target cell. From this cell onto this cell, the transmission rapidly shuts down. In a simple sense, this is a low-pass filter, though it really has non-linear dynamics. Synaptic transmission onto another cell can instead be high-pass-like. Every neuron contacting another type of cell has unique non-linear synaptic properties. So we had to derive, for all the major pathways, the exact type of dynamics we were going to use to model the transmission between the different types of cells. There is a map, there are specific rules; we haven't been able to record from every single type of pathway, but we have developed rules that allow us to generalize to the types we haven't recorded yet.
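A minimal event-based sketch, in the spirit of the Tsodyks-Markram short-term plasticity formulation, of how one parameter set gives a depressing (low-pass-like) synapse and another a facilitating (high-pass-like) one; the parameter values are illustrative, not the fitted pathway parameters.

```python
import math

def tm_synapse(spike_times_ms, U=0.5, tau_rec_ms=800.0, tau_facil_ms=1.0, A=1.0):
    """Relative synaptic response for a presynaptic spike train.

    A large U with a long recovery time constant gives a depressing,
    low-pass-like synapse; a small U with a long facilitation time constant
    gives a facilitating, high-pass-like synapse.
    """
    u, R = 0.0, 1.0
    last_t = None
    responses = []
    for t in spike_times_ms:
        dt = 0.0 if last_t is None else t - last_t
        # recovery between spikes: release probability u relaxes to 0, resources R back to 1
        u = u * math.exp(-dt / tau_facil_ms)
        R = 1.0 - (1.0 - R) * math.exp(-dt / tau_rec_ms)
        # on the spike: facilitation step, response proportional to u * R, then resource use
        u = u + U * (1.0 - u)
        responses.append(A * u * R)
        R = R - u * R
        last_t = t
    return responses

train = [0.0, 20.0, 40.0, 60.0, 80.0, 100.0]       # a 50 Hz presynaptic train
depressing = tm_synapse(train, U=0.5, tau_rec_ms=800.0, tau_facil_ms=1.0)
facilitating = tm_synapse(train, U=0.1, tau_rec_ms=100.0, tau_facil_ms=1000.0)
```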
There are also maps for the inhibitory synaptic inputs. These allow us to systematically model the different connections between the different neurons in a biologically accurate manner. You have different classes of neurons, and the synaptic properties are assigned accordingly. There are many other parameters, biophysical and physical, that are required to capture the connection so that the neuron can inject the right amount of current into the target cell. We can now go to the circuit builder, and once we have decided who makes synapses with whom, we can assign the synaptic properties that each synapse will have. We can show that, through the model, we capture the different types of dynamics impinging on a cell. There is a whole spectrum of dynamics that synaptic input produces in these cells. We can capture in detail the synapses between the major types of cells in the neocortex.
There are many other things. We have looked at learning algorithms, which we showed many years ago: spike-timing-dependent plasticity. If you have 20 million non-linear synapses, you need learning algorithms that will be able to align and adjust the filtering constants. We have done experiments to derive these learning algorithms.
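For reference, here is a minimal pair-based spike-timing-dependent plasticity rule; the amplitudes and time constant are illustrative placeholders rather than the experimentally derived learning rules.

```python
import math

def stdp_weight_change(pre_spike_ms, post_spike_ms,
                       a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP rule with illustrative constants.

    If the presynaptic spike precedes the postsynaptic spike, the synapse is
    potentiated; if it follows, the synapse is depressed, with an exponential
    dependence on the timing difference over tens of milliseconds.
    """
    dt = post_spike_ms - pre_spike_ms
    if dt > 0:
        return a_plus * math.exp(-dt / tau_ms)    # pre before post: potentiation
    if dt < 0:
        return -a_minus * math.exp(dt / tau_ms)   # post before pre: depression
    return 0.0

print(stdp_weight_change(10.0, 15.0))   # pre 5 ms before post: positive change
print(stdp_weight_change(15.0, 10.0))   # pre 5 ms after post: negative change
```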
At this point in time, we do believe that we have the minimal starting building blocks, at a sufficient level of biological detail, to begin the first step of building and simulating this circuit, and eventually visualizing it. For the first time we are simulating thousands of neurons where each neuron has the full morphological and electrical complexity. The simplest way is to put one neuron per processor and use MPI messaging as the axons. That's essentially what we do. There are software packages that allow us to do these kinds of simulations.
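A toy sketch of the "one neuron per processor, MPI messages as axons" idea, using mpi4py and a trivially simple integrate-and-fire stand-in for each neuron; the actual simulations use dedicated simulator software with full morphologically detailed neurons.

```python
# Run with, e.g.: mpiexec -n 8 python one_neuron_per_rank.py
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

V, V_REST, V_THRESH, V_RESET = -65.0, -65.0, -50.0, -65.0
TAU_MS, DT_MS = 10.0, 0.1
W_SYN = 0.05                       # depolarization per received spike, mV (illustrative)
rng = random.Random(rank)

for step in range(1000):           # 1000 steps of 0.1 ms = 100 ms of model time
    # local membrane update: leak toward rest plus a noisy drive
    V += DT_MS * (-(V - V_REST) / TAU_MS) + rng.gauss(0.5, 0.2)
    fired = V >= V_THRESH
    if fired:
        V = V_RESET

    # the "axon": every rank announces whether its neuron fired on this step
    spikes = comm.allgather(1 if fired else 0)

    # synaptic input from every other neuron that fired
    V += W_SYN * (sum(spikes) - (1 if fired else 0))

print(f"rank {rank}: final membrane potential {V:.1f} mV")
```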
The problem is that it produces about 1 TB of data per second of simulation. So we need another supercomputer, because we want to decide which terabytes to keep for analysis. We don't want to spend 1 second on simulation and then spend 3 weeks or a year analyzing it. We need to use visualization to immediately assess whether something is interesting and should be stored for later.
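A back-of-the-envelope calculation of why a second of simulation lands in the terabyte range, using assumed round numbers for compartment count and timestep rather than the exact setup:

```python
# Assumed round numbers, not the exact simulation setup:
neurons = 10_000
compartments_per_neuron = 400            # order of magnitude for a detailed morphology
bytes_per_voltage = 8
timestep_ms = 0.025                      # a typical integration step

steps_per_simulated_second = 1000.0 / timestep_ms        # 40,000 steps
bytes_per_second = (neurons * compartments_per_neuron
                    * bytes_per_voltage * steps_per_simulated_second)
print(f"~{bytes_per_second / 1e12:.1f} TB of membrane voltages per simulated second")
```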
So we went to SGI, because we needed very large shared memory, and they gave us a very big supercomputer with 300 GB of shared memory. This allowed us to build a small media center in Lausanne where we can create 3D representations and sit inside the neocortex. That's more like being a child: you want to build a puzzle and see how the circuit works. Is it for science or for pleasure? Well, it's a lot of fun.
So this is a first simulation, still very crude, of 10,000 neurons. We are only visualizing 10% of them, and the different colors are different neurons as they fire. The visualization was done in the last 2 weeks, so this is the very first version; it's going to get 1,000 times better. This just gives you a feeling of the immense complexity of what you could in principle see. You can even see action potential propagation. You really can zoom in on single cells and watch their electrical activity. We can screen, we can select only certain types of cells or certain ion channels, or we can wipe out a certain neuron and see the effect on the circuit. There are a thousand or more experiments that can be done in silico to generate hypotheses for studying the circuit further.
Here's a quick view of what it looks like. We are looking at the top of layer 1, and we are going to go inside while the circuit is active. You can see at the top... 10,000 neurons, 10 million dynamic synapses, morphological and electrical activity. This is still not calibrated. We are aiming, by the end of the year, at only reaching version 10. So this is not calibrated, but it is the first .. with electrical activity and so on.
This is just a single cell; you can extract this single cell out of the 10,000 and study what happens to a single neuron. You can zoom in and slow it down, and relate that to, for example, the synapses. Okay.
So this is just an example of what we call biological refinement. Everyone says it is impossible and there is so much we don't know. Our philosophy is: if you don't know it, don't let it stop you. If you don't know it, you assume it, and you find ways to do experiments to fill in those gaps. We are increasing the level of refinement of the biological data. At version 10, we will capture, as accurately as we can, the data that we have collected over the last 10 years. We should get a tremendous amount of data from these simulations.
There are other things that are needed. You can build the circuit, but how do you calibrate it? You can calibrate the function, the Martinotti di-synaptic loop for example, but you also want to calibrate the circuit itself. What we have done is experiments where we bounce information between neurons. We record from 3 neurons, we stimulate one, it activates another cell, and these di-synaptic pathways allow us to start working out properties. In the simulation we should see similar statistics about the amount of current being injected and so on, and we can do that now with the patch clamp at a more sophisticated level, allowing us to look at the circuit and see circuit statistics. These are the di-synaptic pathways going through a related second cell, but we can also do a whole host of other responses; we can look at a single neuron and see how it behaves. You can get as many constraining principles as you want in order to calibrate the circuit.
Okay, so, this is just to give you a feeling. This is not activity; it took a few hundred hours to render. It is a circuit with 10,000 neurons, without activity. The reason I am showing you this is that when I first saw it, I wondered what the reaction to seeing it could be. Well, my reaction was: what if we painted perceptions onto the dendrites?
Let's paint the perceptions onto the dendrites. It's analog. If you were the maker of the brain, would you choose 0's and 1's to build your perceptions, or would you choose analog perceptions? In fact, it's known that most of the maps, even fMRI and EEG, are not reflecting action potentials; they are reflecting other signals. We have seen dendritic maps. So why do we only look at the 0's and 1's coming out of cells? Maybe you can produce the specific structures there.
I am sure you all know that the current paradigm we are in is called the action potential paradigm. What it assumes is that dendrites process information so that they pass on whatever is the right thing, the final computed information, from the somatic cell body, which then sends it to the next neuron to produce a chain of spikes. Across many, many cells, the spatio-temporal pattern of spikes represents perceptions. That's what people measure: they measure the spikes and try to see how they could recreate perceptions. In this model, dendrites process information for somatic spike output, and spatio-temporal patterns of spikes represent perceptions.
Well, what if we look at the opposite paradigm? What if we make a Copernican-like revolution and move from being soma-centric to dendro-centric? In this model, perceptions are formed directly on the dendrites, and the spatio-temporal patterns of spikes just maintain and animate those perceptions. Just what if? It's just an alternative way of looking at how we might interpret the brain. Let's see where this takes us.
We know that we can get patterns and maps on the dendrites. They go across dendrites. Right here and right here, it may be the same neuron, two dendrites, having two different properties. Or, here, it's a collection of neurons from many different locations. So you are getting dendritic maps across the dendrites. But what happens if you could put intelligence into the circuitry so that you could start painting, literally painting pictures onto the dendrites? Can we paint perceptions directly onto the dendritic mass? What if we could put enough intelligence into the circuits so that each compartment could be controlled in that way?
It's not that simple, because we've only done 2D imaging. Grinvald has done some of this; nobody has done it in 3D, not even in imagination. He has painted these pictures, and if you look deeper there are columns. More and more, the recordings are suggesting it doesn't look that way. Rather, it looks more like what Fregnac has suggested. When you are doing this, with one neuron sitting inside this block, how do you get those very specific voltages? So maybe, if we showed a realistic image (for some reason most neuroscientists only show bars or stripes and so on), you'd see objects forming in the dendrites. Maybe we need topology to understand how we build objects for representation. Maybe, if you get it really precise and put a lot of intelligence into these circuits, you could start building little .. and then all you need is to use spikes to animate these objects. It could be that they even become really complex scenes: analog scenes across the dendritic space.
So we did a very quick simulation just to play with this idea. We took the dendritic column, voxelized it, threw away the neurons, and mapped the voltages into each voxel. We are just looking at the brain at the block level; 90% of the brain is just dendrites, so we are looking at it as a block of dendritic tissue. If you disconnect the circuit, there is no intelligence; it's just a Poisson simulation, following the pattern of which neurons fire and so on. But when you connect the circuit, what you start seeing is the formation of complex structures or objects, something that deviates from just a simple ... So let me show you another one. That was just a single voltage; here they are appearing across a voltage range. And here is what happens when you have two different voltages. The complexity of these objects can take on several forms. They do cluster. So somehow, across the voltages on dendrites from different neurons, there is some form of connectivity based on clustering.
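A small sketch of the voxelization step described here: pool the membrane voltages of all dendritic compartments, regardless of which neuron they belong to, into a 3D voxel grid. The voxel size and averaging rule are illustrative choices.

```python
from collections import defaultdict

def voxelize_voltages(compartments, voxel_um=10.0):
    """Map compartment membrane voltages into a 3D voxel grid.

    compartments: iterable of (x, y, z, voltage) for every dendritic compartment,
    regardless of which neuron it belongs to. Each voxel gets the mean voltage of
    the compartments inside it, so the circuit can be viewed as a block of
    dendritic tissue rather than as individual neurons.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for x, y, z, v in compartments:
        key = (int(x // voxel_um), int(y // voxel_um), int(z // voxel_um))
        sums[key] += v
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

# Example: three compartments, two of them from different neurons in the same voxel
voxels = voxelize_voltages([(1.0, 2.0, 3.0, -65.0), (4.0, 5.0, 6.0, -55.0),
                            (120.0, 40.0, 300.0, -70.0)])
```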
So what happens if you could put intelligence into those circuits? This is just to give you a feeling: it's not just that it's an object or a chair, it's an entire universe; you could have a whole scene inside a region of dendrites.
So we did an experiment, and this is coincidentally parallel to another part of my lab where we are studying plasticity. We did a study to see if it is possible to rewire a circuit. We patched 6 cells and saw how they were connected. We took the pipettes out, waited 12 hours, re-patched them, and we found that the circuit was different. Even after 4 hours the circuit was different. Just to show how much inertia there is in the current paradigm, Science magazine rejected this and said it was not interesting.
So we do these recordings and we look at the glutamate-evoked responses. We activate the circuit, and when you do this you can see connections appearing and disappearing. This is potentially the substrate that Edelman's framework could use for all kinds of restructuring of the circuitry. This is happening over a 4-hour period. Circuits are dynamically rewiring. For 50 years we have studied how a synapse gets stronger or weaker, but not how the circuit restructures. You can stimulate the circuit and have it rewire; you can stimulate the circuit and get it to build more specific objects.
Also coincidental is that electron microscopy has shown that synapses are packed so closely together that spillover from one input is almost impossible to avoid. Within 200 nm there is another synapse belonging to a different neuron. So it may just be that what the brain is trying to do is control the voltage within a certain voxel, not just within a single neuron.
So to summarize, let's look at the world as an electromagnetic dendritic object. We do not see the world. What we do is use any clue that our senses can provide to build a virtual analog model of the world. So the world that we see is not the world; the world that we see is the world we build, with as much information as we can gather so that we don't make mistakes and bump into things. The neurons build a distributed dendritic object. The circuit provides the rules to build and animate the dendritic object. Enhanced cognition is the ability to run a simulation of that model into the future, to minimize or optimize it. Different brains can learn to build the same model: we have 6 billion brains on the planet, with different rules, but we build the same models. Animals have different brains, and they build a different world; it's the same world, but built differently. We use spikes to transfer the minimum information required to change or transfer perceptions. It seems crazy to transfer perceptions from one part of the brain to the other, but spikes can do that; they are rich enough, though not that rich, to transfer perceptions. Spikes are emitted on top of dendritic perceptual waves; if you have dendritic waves, then spikes are the driving, animating part of perceptions. You minimize the number of spikes needed to learn or transfer the right information. It's been known for several years that as you learn you spike less and less, because you want to become more and more efficient. This also has theoretical implications: perception may be there without the spike, and you may even have perceptions at resting state.
So the challenge to robotics, and to modeling the brain using spikes, is that this theory says, in contrast to the current paradigm: if you take a robot and extract those spikes, you might get enough information for the robot to interact intelligently, but you won't be able to use those spikes to recreate what we see. They will not recreate the analog perception of what we see when we interact with the world. You can get it to look like it is interacting, but there is not enough information in the spikes to recreate such a complex analog world. It's a challenge.
- Idan Segev
- Philip Goodman
- Charles Peck
- James Kosloski
- Felix Schuermann
- IBM
- ONR (in Israel) (Tom McKenna?)
- EPFL (Swiss Federal Institute of Technology in Lausanne)
- SGI