Todd Huffman. As soon as he's ready. You'll have to go to xkcd to see why that was funny. That was a really great talk and reminded me why I love biology. I think it's the most amazing, interesting thing in existence. If you haven't dived into a molecular biology book, you should. It's amazing. I'll be talking about whole brain emulation, which is taking the brain and emulating it in a computer. The goal is to understand and reproduce thought in another substrate. This is the end-goal of neuroscience. I'll talk about the continuity between the research and long-range transhumanist goals. The roadmap was put together by Anders Sandberg. It's 150 pages long and goes into these topics in a lot more detail. I highly suggest you read it. On a logistical note, this is a large topic with a lot of detail; Ken and I will be doing a lunch session so we can go into longer discussions. My views don't necessarily represent the views of those I collaborate with.
Whole brain emulation rests on a number of assumptions, which the roadmap lays out well: philosophical assumptions, neuroscience assumptions, and assumptions about computability. The core assumption for whole brain emulation to work is that the brain is the system on which human thought is based, that we can look at it and understand it well enough, at some level and with enough specificity, to reproduce it in another substrate, for instance a computer.
Thought is multi-scaled. In order to do emulation, there must be emulation at all the different scales of thought, spanning from the molecular scale to cells and systems. In the whole brain emulation roadmap, we've gone into detail about the different levels of computational simulation that need to take place. I'm going to focus in. This is a diagram in which we've broken out all of the things that have to happen in order for whole brain emulation to work.
Scanning, translation, and simulation. There are subcomponents. I am interested in scanning and half of the translation component; I'm no good at any of that other stuff. On the translation and simulation side, those are interesting and hard problems. What I'm interested in is scanning brain tissue, processing it, and putting it into databases. It is a hard task. It may take a long time. We've put together estimates for how long it would take to scan in a human brain. For the most detailed scan, we'd need to scan things in at the molecular level; that would come somewhere around 2070. Even as optimistic as transhumanists are, that's a very long time, and that's with a lot of work going into this problem.
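To give a feel for why molecular-level scanning lands so far out, here is a rough back-of-envelope in Python. The figures (a ~1.4 L brain, 5 nm voxels, 1 byte per voxel) are my own illustrative assumptions, not numbers from the roadmap:

```python
# Illustrative assumptions only -- not the roadmap's actual model.
brain_volume_m3 = 1.4e-3   # ~1.4 liters, a typical human brain
voxel_size_m = 5e-9        # 5 nm, a plausible "molecular level" voxel edge

voxels = brain_volume_m3 / voxel_size_m ** 3   # ~1.1e22 voxels
bytes_total = voxels * 1                       # even at 1 byte per voxel

print(f"{voxels:.2e} voxels, ~{bytes_total / 1e21:.0f} zettabytes")
```

Before you even ask how fast a microscope can run, the raw data volume at this scale is staggering, which is one reason estimates for molecular-level scanning stretch toward 2070.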
What I'm currently doing is a spin-out company, 3Scan, to commercialize destructive scanning. The technology comes from Texas A&M.
The knife-edge scanning microscope is a destructive scanning microscope. Basically, if you're going to look at biology in 3D, you have to slice it. You can slice it optically with confocal methods. When you slice it physically, it's laborious, and there are lots of problems with how the tissue compresses. What we've done is, instead of taking a photo in 2D, we take a picture in one dimension. We have a diamond knife, we shine light into the back of the knife, and the microscope is perpendicular to the bevel of the knife. We image continuously as we slice. This is the actual device. There's a mouse brain. Light coming in to the back. Microscope up here. This is an unmagnified view of the diamond knife here.
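The one-dimensional imaging scheme can be sketched in a few lines. This is a toy illustration of the geometry, assuming a line-scan camera that captures one pixel row per knife advance; it is not the KESM's actual acquisition software:

```python
import numpy as np

def assemble_slice(line_scans):
    """Stack successive 1-D line scans (each shape (W,)) into a 2-D slice."""
    return np.stack(line_scans, axis=0)

def assemble_volume(slices):
    """Stack 2-D slices into a 3-D volume with axes (slice, row, column)."""
    return np.stack(slices, axis=0)

# Toy data: 256 line scans of 512 pixels each form one slice face.
lines = [np.random.rand(512) for _ in range(256)]
slice_img = assemble_slice(lines)            # shape (256, 512)
volume = assemble_volume([slice_img] * 10)   # shape (10, 256, 512)
```

The point is that because each 1D line is captured as the knife cuts, the 2D slice and the 3D volume fall out of simple stacking, with no separate, compression-prone physical sectioning and re-imaging step.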
What we get out of that are slices like this. This is mouse tissue that has been stained with a Golgi stain. The thin spider-webby things are the neurofilaments running through the brain. This is a cell body, here's the axon, here's the dendritic body. We take a series of these slices and we get a volume, with structures we can see in 3D, and we algorithmically trace through the 3D structures. This is a 3D volume, and there's a 3D structure in it. The data is always more interesting.
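As a sketch of what "algorithmically tracing" can mean at its simplest, here is a threshold-and-label pass over a 3D volume: mark stained voxels, then flood-fill so each connected structure gets its own id. Real neurite tracing is far more sophisticated than this; the sketch just shows the flavor:

```python
import numpy as np
from collections import deque

def label_components(mask):
    """Label 6-connected foreground components in a 3-D boolean array."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue                    # already assigned to a structure
        count += 1
        labels[start] = count
        queue = deque([start])
        while queue:                    # breadth-first flood fill
            z, y, x = queue.popleft()
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)
                if (all(0 <= c < s for c, s in zip(n, mask.shape))
                        and mask[n] and not labels[n]):
                    labels[n] = count
                    queue.append(n)
    return labels, count

# Two stained voxels that do not touch -> two separate "structures".
demo = np.zeros((3, 3, 3), dtype=bool)
demo[0, 0, 0] = True
demo[2, 2, 2] = True
labels, count = label_components(demo)
```

Production tracing has to cope with broken stains, touching filaments, and anisotropic voxels, but connected-component labeling like this is the usual starting point.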
These are Purkinje cells in the cerebellum. They are interesting in this reconstruction because they have a distinctive shape: the dendritic arbors are in-plane with each other. These are the cell bodies. Up here is the dendritic arbor. These are actual neurons, not simulated on a computer. The arbors are all in plane. These were done on a single block of tissue, with a Golgi stain, which stains about 1% of the neurons in a particular volume. There are a lot more cells; to get contrast, we can't look at all of them at the same time. The next data set I'm going to put up is from the same mouse brain; we're able to do entire mouse brains at the same time. This is from the cortex. I'll tell you what you're looking at: the axons going up here, the cell body here. The axons go perpendicular to the surface of the cortex, and parallel with each other. If you squint, the stronger neurofilaments, the axons, are all traveling in the same direction. What we're interested in is doing circuit diagrams for long-range connectivity in the neurons. There are other methods that are good at looking at small 3D volumes, but this enables us to get really large volumes. We can take a cm³ and reconstruct it into sub-micron voxels on the order of 3 days. It's really fast, and it's a large volume. I don't have a visualization of all of the neurons in the brain; these are the major blood vessels in a mouse brain that we put through our microscope. The purpose of all of this is to digitize neuromorphology and connectivity so that we can better understand how neurons compute things. I'm interested in this for whole brain emulation, uploading, as a long-term goal of transhumanism.
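The quoted throughput can be sanity-checked with simple arithmetic. Assuming "sub-micron" means roughly 0.5 µm voxels and one byte per voxel (my assumptions, not figures from the talk):

```python
cm = 1e-2         # 1 cm in meters
voxel = 0.5e-6    # assumed 0.5 micron voxel edge
days = 3

voxels = (cm / voxel) ** 3                  # voxels in 1 cm^3
rate_mb_s = voxels / (days * 86400) / 1e6   # sustained MB/s at 1 byte/voxel
```

That works out to about 8e12 voxels, a sustained acquisition rate on the order of 30 MB/s, which is demanding but plausible for a continuously running line-scan instrument.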
I'm under no illusions about the knife-edge microscope. It's amazing, but it doesn't see enough biology to do an accurate emulation. Different scanning methods have various strengths and weaknesses. I think electron microscopy has the potential to get us a long way towards whole brain emulation. There are a few different mechanisms for high-throughput electron microscopy. The Denk group in Germany is the leader of the pack, with SBFSEM; there's also FIBSEM and ATLUM. They have a scanning electron microscope: they take a picture of a block face, and the microscope slices off the top layer. The ATLUM project is more interesting, because it has more promise in the long run, and the inventor of the microscope is here in the audience, so I can pass off the microphone to him for more discussion. How the ATLUM works (Ken, interrupt me if you need to) is that the brain is embedded in plastic and put on a rotating spindle; there's a knife over here, and there's a bead of water. As the brain rotates, the section peels off, floats over the water, is picked up by a mylar conveyor belt, and is taken off to be prepared for scanning electron microscopy. The thing that I think is most interesting is that, using this method, you can parallelize the process. Once you slice and lay out the brain on tape, you can put it on 60 cm wafers, like the semiconductor industry uses, and build a large automated library so you can parallelize your imaging across multiple scanning electron microscopes. He's actually doing this. They're able to do 1,000 slices without human intervention. One mm takes about a week to slice, but the imaging is the rate-limiting step: it takes about a decade to image that same week's worth of slicing. Electron microscopy is being pushed by the semiconductor industry for faster speeds, for developing faster chips. There are microscopes on the brink of coming to market that can speed up the imaging by a factor of about 1,000.
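The slicing-versus-imaging bottleneck follows directly from the figures quoted above (a week to slice, a decade to image, and a roughly 1,000x EM speedup on the horizon):

```python
week_s = 7 * 24 * 3600
decade_s = 10 * 365.25 * 24 * 3600

imaging_penalty = decade_s / week_s   # imaging is ~520x slower than slicing
sped_up_s = decade_s / 1000           # with a 1,000x faster electron microscope
faster_than_slicing = sped_up_s < week_s
```

A 1,000x faster electron microscope would bring a decade of imaging down to a few days, under the slicing time, at which point sectioning rather than imaging becomes the bottleneck.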
I'm optimistic that these techniques in the coming years will be a powerful tool in destructive scanning.
This is a strip from the ATLUM. The mouse cortex is going this way. If you zoom in on this section. When you zoom in fully, this is a neurofilament, this dark area is a synapse. The larger circles here are the synaptic vesicles. It's hard to see on this slide, but you can see the cytoskeletons. You can also see the mitochondria nicely, which is critical to some of my concepts on how you would populate parameters in whole brain emulation. At this point I'll show you some videos.
What this video shows is flying through the z-axis of neural tissue. The really important thing is that you can trace these with your eyes. If you can see it moving with your eyes, you can pull it into a computer and use algorithms to reconstruct it. These are beautiful videos and I can look at them for hours. Here's another video of Ken .. zooming in and out of the data set, showing how detailed the data is, and how much of it there is. Ken, it's a square centimeter at the biggest, yes? It goes from the 5 nanometer scale up to 1 cm. You can trace the features, and they can be reconstructed. The features here are artificially colorized so you can see what you're looking at. At lunch, look at my laptop, because the details come out better. You can see a lot of the relative features. This is a 3D reconstruction of that volume. They have taken and removed one of the neurofilaments here. This is where that synapse buds up. You can reconstruct the morphology of the neurons and the connectivity; you can see how they are connecting.
For every person in the room, there are almost 1,000 outside the room. Everyone here is perfect. Restrain your enthusiasm enough to hold up a microphone, so everyone on the webcast can share in being here.
Another powerful component of the technique is the combination with other imaging methods. You can fluorescently image this tissue, among others, in the microscope, and connect different ways of looking at functionality and structure. If you go through and reconstruct the morphology and connectivity, you then have to populate the parameters to do a simulation. That's a huge topic, but I think it's possible. One of my concepts for populating parameters is to look at the energy infrastructure of the cell. ATP is the energy currency, and the cell can't direct where it goes, so it has to move the mitochondria around instead. The mitochondria are very visible in electron microscopy. Mitochondria are moved around in neurons in response to learning, on the order of 5 minutes. If you have a reconstruction at the EM level, I think you can model the metabolism and use it to populate the parameters that would be necessary to run a simulation. That's not all the way to emulation, but it's a possible path.
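To make the parameter-population idea concrete, here is a toy of my own invention (not Huffman's actual method): given mitochondrion positions from an EM reconstruction, use the local mitochondrial count near each synapse as a crude proxy for metabolic demand when setting a simulation parameter:

```python
import math

def local_mito_count(synapse_xyz, mitochondria_xyz, radius_um):
    """Count reconstructed mitochondria within radius_um of a synapse.

    Toy proxy for local metabolic demand; real parameter population
    would need far more than a sphere count.
    """
    return sum(1 for m in mitochondria_xyz
               if math.dist(synapse_xyz, m) <= radius_um)

# Hypothetical coordinates in micrometers.
mitos = [(0.0, 0.0, 1.0), (5.0, 5.0, 5.0)]
demand_proxy = local_mito_count((0.0, 0.0, 0.0), mitos, radius_um=2.0)
```

Because mitochondria are both highly visible in EM and repositioned on learning timescales, even a crude density measure like this would carry information that pure morphology does not.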