And then I'll be happy to try some other … so that we can see each other in 26 years. A few announcements. The first is: please, no flash photography, and turn off your cell phones. The second is that videos will be available on the web. The third is that you must keep your tickets with you at all times. At some point in the next 10 or 11 hours you will probably want to leave, and if you do not have your ticket, you will not be able to get back in. It's that important; don't forget. There is food and coffee in the lobby at all times, so feel free to go out there and take a bathroom break. This is a fabulous group of people and you'll want to spend some time with them. We also have KNOME, the biotech company, as one of our sponsors. The winner will be announced before the second break tomorrow: if you want to win the prize, a $2.5k value, two years of cryogenic storage and 15% off genome sequencing, you can prove to yourself that you're really living it. I'm happy to have all of you here today. Please turn off your cell phones. Keep your tickets with you at all times. It's going to be a great show.
Here's Anna Salamon, one of the latest additions to the Singularity Institute's research staff. If during the presentation you have questions, detach the questionnaire and tell us how you found out about us, so we can do a better job next time. Anna Salamon.
Shaping the intelligence explosion
I. J. Good coined the term "intelligence explosion" to refer to the idea that intelligence is the source of our technology. If we get to the point where the designers of technology can themselves be significantly improved, where smart leads to smarter leads to smarter, the result will be something far from human. I am going to be talking about the shaping of this intelligence explosion. Is there something we can do to shape how it turns out?
Claim 1. Intelligence can radically transform the world. Claim 2. An intelligence explosion may be sudden. Claim 3. An uncontrolled intelligence explosion would kill us and destroy practically everything we care about. Claim 4. A controlled intelligence explosion could save us. It's difficult, but worth our attention.
What we know, and how we know it, about AI. How did we arrive at these four claims? The unknowns in these scenarios are vast. The biggest unknown is the type of artificial intelligence we might create: by "artificial intelligence" I mean any kind of engineered intelligence that is at least as powerful as humans. I'm stealing this image from Eliezer Yudkowsky: we have this vast space of possible minds or machines, and "AI" means anything in that space except the little dot labeled "human minds." There is no one goal that an artificial intelligence would have. There is no one particular architecture. Talking about artificial intelligences, minds that are not human, is like talking about animals that aren't starfish, or foods that aren't pineapples. Then there are the external circumstances in which AI might be created. We don't know the year (assuming science continues to chug along); the type of people building it; the precautions taken; whether it arrives in an economy already full of strong special-purpose AI, or arrives as a shock. Because of all that, you might wonder if we can say anything at all about these vast unknowns. But imagine rerunning the tape of life. There would be all sorts of possibilities to explore, and yet some features would reasonably recur across a lot of branches: eyes; energy storage mechanisms like glucose, sugars, oil, batteries; digitality, like DNA, writing, and computer memory, used in replication; money; computation; mathematics. A variety of different systems get used, but they serve a particular purpose …
I will not be talking about Moore's law or other accelerating-change models. I will give you a whirlwind tour; send me an email if you want more.
Claim 1. Intelligence can radically transform the world. Think here of Archimedes: build something with enough intelligence and, almost always, it will move the world. Intelligence is like leverage. An intelligent being might start off with only a little physical power, but if it finds novel ways to achieve its goals, it can use that small power to create much larger change. Humans have changed the world quite a bit, and quite a bit more than most species. The reason we made these changes is that we had goals, and we rearranged the world to serve them. The smarter we've gotten, the more knowledge we've had, the more organizations we've founded, the more rearranging we've done. We started with stone and wood tools and worked our way up to Starbucks. Note that the mechanism that caused humans to do this is basically universal to all intelligence: most goals are not maximally satisfied by the particular state things happen to be in. The smarter an agent is, the better it can determine which rearrangements would better serve its goals, and the better it can find routes to reach those rearrangements.
Let's take a brief segue into the scale of possible intelligences. Imagine trying to teach a cat quantum mechanics, or taking a goldfish to the opera. These are minds that just can't access particular domains. Which invites the question: what lies beyond human minds? I have here a theoretical toy from computer science called AIXI, due to Marcus Hutter. AIXI is a mathematical idea that can be described fully and precisely, though no physical computer could actually run it. Think of it as asking: how much predictive juice could you possibly squeeze from the data? This is a bit of a simplification, but AIXI considers all possible patterns, all the possible ways the data could be correlated. It figures out which ones are consistent with its observations, and then it uses all of the consistent patterns to predict things. AIXI could look at a video clip of this auditorium and deduce the laws of physics. It could deduce that you are made of proteins. By looking at your face, it could figure out a probability distribution over who you are. The way this is possible is that different observable values are correlated, and AIXI exploits all of those correlations: all the ones scientists know today, and all the ones we never will come up with. That's what a powerful intelligence might be able to do. If you think about the distance between cats and humans, the distance between humans and AIXI is much larger in terms of which data sets can be made use of. Now, humans can follow out any computer program, and that might suggest we can emulate anything. But that skips the timing issue: if it would take us eons to emulate one second of a particular intelligence, that is not a useful sense of "emulate." Intelligence is about responding to events as they happen.
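To make that "all possible patterns" machinery a bit more concrete, here is the standard formal statement of AIXI, added for reference rather than quoted from the talk; the notation follows Hutter's definition:

```latex
% AIXI's action choice at step k, planning to horizon m (Hutter).
% U is a universal Turing machine, q ranges over environment programs,
% \ell(q) is the length of q in bits, and 2^{-\ell(q)} is the Solomonoff
% prior: simpler explanations of the history get exponentially more weight.
a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
      \left( r_k + \cdots + r_m \right)
      \sum_{q \;:\; U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The inner sum over programs q is exactly the "all consistent patterns" step: every computable hypothesis that reproduces the observation and reward history so far contributes to the prediction, weighted by its simplicity.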
AIXI is a theoretical toy. How plausible are much smarter systems in the real world? When you consider how humans were made, pretty plausible. Humans were made by evolution, right? Our intelligence comes from a slow, blind process of evolutionary trial and error, smudge on smudge. On our particular branch of the tree of life, evolution was just slowly and blindly increasing intelligence. It increased intelligence just far enough that, maybe 100,000 years ago, humans were able to close the loop and create culture. After that, evolution couldn't make us much smarter, and culture took over. Humans are right on the cusp of general intelligence.
"Humans are the stupidest a system could be, and still be generally intelligent." -- Michael Vassar
We are only just beginning to be able to get hold of these processes and make our minds better. "Generally intelligent" here means agents that are able to build tools. The point is that humans are plausibly about as stupid as a system can be while still being generally intelligent.
What change might an actual superintelligence create? What are the details? Deep change. It could access possibilities far from the ones we've explored, using the full capacity of matter to achieve some goal. First visualization: molecular nanotechnology, precise control of matter at the atomic level. Second visualization: computronium. Ask what the most efficient computer is that fits in a given size, like a human, and then use all of that computation to optimize for your goal. The quantities are quite large here. And then there's light-speed expansion: moving outward to put all reachable resources toward your goal. There was this transformation from humans; with more powerful intelligences, we'd expect fungible resources, resources that can be used for one function or another function but not both, to be pulled toward their goals. There is a finite amount of usable energy and usable matter, space and time that we can reach; if we build computation out of it, there's a finite amount of CPU and memory in there. So, transforming the world on this scale isn't about muscle. It's about intelligence, or optimization. Just as tiny muscles led to big cranes and big power, a small initial AI capability, even something like the ability to control pixels on a screen, can be leveraged into bigger muscles. Think about the things girlfriends and boyfriends are able to persuade each other to do. Humans are hackable; they are messy systems. Give a smart enough AI access to a video stream that a human can see, and that AI can get access to human arms and legs, which it could use to get itself plugged into the internet, and eventually to build technological manipulators that make its human helpers obsolete.
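The talk doesn't put numbers on "the quantities are quite large," but a back-of-the-envelope sketch using two standard physical limits on computation (my illustration, not the speaker's) gives the flavor:

```python
# Back-of-the-envelope physical limits on computation; an illustrative
# calculation, not from the talk. Constants are standard CODATA values.
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
h   = 6.62607015e-34  # Planck constant, J*s
c   = 2.99792458e8    # speed of light, m/s
T   = 300.0           # room temperature, K

# Landauer limit: minimum energy to erase one bit at temperature T.
landauer_joules_per_bit = k_B * T * math.log(2)
bits_per_joule = 1.0 / landauer_joules_per_bit

# Bremermann's limit: maximum bit operations per second per kilogram of
# matter, from E = m c^2 and the quantum speed limit E / h.
bremermann_ops_per_sec_per_kg = c**2 / h

print(f"Landauer limit at {T:.0f} K: {landauer_joules_per_bit:.2e} J/bit")
print(f"-> about {bits_per_joule:.2e} bit erasures per joule")
print(f"Bremermann's limit: {bremermann_ops_per_sec_per_kg:.2e} ops/s per kg")
```

Today's hardware sits orders of magnitude below both bounds, which is the sense in which matter rearranged as computronium has enormous headroom.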
Past a certain point, an intelligence's best option is to build more intelligence, intelligence it can then use to pursue its goals. Consider a child: for most long-term goals this child might have, whether he wants to be a chemist, a doctor, or a third-world aid worker, he will want some of the same things. He might want to get some sleep. An education. A good social network. Whatever his goal ultimately is, he wants general capabilities and resources to serve that goal. The argument is that past a certain point, most long-term goals are best served by building a more intelligent system that can figure out how to achieve those goals; past that point, the child's best move is to build a superintelligence that shares his goals. Smarter decides to make smarter, and then the world changes on a scale faster than humans can think. This is why the time to think about shaping the intelligence explosion is now. Notice again that this argument is very general. We're not relying on Moore's law or accelerating change; we're just talking about AI and what it might be like when we get there, for any of the broad ways of getting there.
Claim 2. An intelligence explosion may be sudden. Different phenomena occur on different time scales. Water drops change in seconds. Continents drift over hundreds of millions of years. Galaxies collide over hundreds of millions of years. The time scale on which humans think is one possibility among many, and artificial intelligence might usher in a different, faster time scale. As noted, there have been similar speed-ups before: evolution works over millions of years, while the cultural processes that let humans do what they do work over thousands of years, or even months. Also, different types of mental hardware work at different speeds. Neurons fire in about five milliseconds; the fastest transistors switch in well under a millionth of that time. How fast could human brain emulations run? Neuroscientists' estimates span a few orders of magnitude. But most people think that if you did have a human brain emulation running, then with five times the hardware you might be able to make it run at five times the speed. And there's no way that emulating these meaty human brains, with their accidental spaghetti code, is the best possible design; there's no reason something couldn't operate at a far faster time scale than we do. The real question isn't how fast AI could think, but how fast AI could arise. How fast could the development occur? How much warning would we have?
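To put rough numbers on the hardware-speed point, here is a small illustrative calculation; the specific figures are my assumptions, not the speaker's:

```python
# Illustrative timescale arithmetic for "different hardware, different
# speeds." Assumes a ~5 ms neuron firing interval and a ~3 GHz switching
# rate; both are round hypothetical numbers.
neuron_interval_s     = 5e-3       # ~5 ms per neuron firing
transistor_interval_s = 1.0 / 3e9  # ~0.33 ns per switch at 3 GHz

ratio = neuron_interval_s / transistor_interval_s
print(f"Transistors switch ~{ratio:.1e}x faster than neurons fire")  # ~1.5e7

# Subjective time for a hypothetical sped-up brain emulation: at a
# million-fold speedup, a subjective year passes in about half a minute.
speedup = 1e6
seconds_per_year = 365.25 * 24 * 3600
print(f"At {speedup:.0e}x speed, one subjective year takes "
      f"{seconds_per_year / speedup:.0f} wall-clock seconds")
```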
Past a certain point, engineering intelligence is the fastest route … Steve Rayhawk's point is that software can be copied. Making the first digital intelligence is hard; making the second and third is just copying software. What about hardware bottlenecks? Maybe the first AI will require a million dollars a day of hardware to run. But if the first AI can run on an ordinary computer, we could have billions of copies; there are literally billions of personal computers already. These AIs could work in the economy, or do AI research directly, or earn money and funnel it back into AI research. Next, mind editing with digital precision. You can edit human minds with drugs, but it's imprecise; education works, but the process takes twenty years and a lot of labor, and it often produces unexpected results. If we had digital access to a given mind, we could try and test changes to that mind in minutes. We could try many variations, as evolution does, but in minutes instead of decades, and with more deliberate strategies: which variants were best at math, best at earning money at jobs? Digital minds could be easy to edit, and we could then copy the best variants across the hardware. Call it hand-directed digital evolution. Then of course there's feedback: the smarter the AI is, the more smarts it has to build further intelligence. AI doesn't need to move on the time scale you're used to. In particular, there's reason to believe the process might be fast: just as the transition from evolution to culture brought a new time scale, a transition from fixed human brains to editable digital minds might bring yet another. The argument is very general; we're talking about dynamics and properties across a broad range of ways AI might occur.
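The copy, edit, test loop described here is easy to caricature in code. Below is a toy sketch in which a "mind" is just a parameter vector and "best at math" is a stand-in scoring function; every name and number is a hypothetical illustration, not anything from the talk:

```python
# Toy sketch of hand-directed digital evolution: copy a mind exactly,
# make precise digital edits, test each variant quickly, keep the best.
import random

def score(mind):
    # Stand-in fitness: closeness to some target skill profile.
    target = [1.0, -2.0, 0.5]
    return -sum((m - t) ** 2 for m, t in zip(mind, target))

def mutate(mind, step=0.1):
    # Digital minds copy exactly; edits are cheap and precise.
    return [m + random.gauss(0.0, step) for m in mind]

population = [[0.0, 0.0, 0.0]]
for generation in range(200):
    # Copy the current best mind many times (copying software is nearly
    # free), then test each variant "in minutes instead of decades" ...
    candidates = [mutate(population[0]) for _ in range(20)] + population
    candidates.sort(key=score, reverse=True)
    # ... and keep only the top performer for the next round.
    population = candidates[:1]

print(f"best score after 200 generations: {score(population[0]):.4f}")
```

The feedback the speaker mentions is what this sketch leaves out: in the real scenario, the thing being improved is also the thing choosing the edits.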
Claim 3. An uncontrolled intelligence explosion could kill us and destroy practically everything we care about. As we discussed earlier, the most powerful intelligences will radically rearrange their surroundings. They will have some goals or other, and the current arrangement of matter that happens to be present is unlikely to be the best possible arrangement for those goals. That's bad news for us. Most ways of rearranging humans would kill us. Most ways of rearranging our environment would kill us. The question isn't about most rearrangements, though, but about most rearrangements an AI might want to make. Why would an AI want to keep us intact? Some people suggest that AI would like humans because we are good trading partners. But if you think about it, most goals are better realized by some other use of your matter, your space, and your environment, beyond people-like uses. The Apple IIe has many uses, but the metal inside it, and the space it takes up on your desk, have more. To make the space of possibilities vivid: even if the AI wants trading partners, it seems unlikely that you are the most optimal trading partner an AI could design. Some people suggest that AI would leave us alone the way we leave ants and bacteria alone. But remember that we have left ants and bacteria less and less alone as we've learned more and found better arrangements for our goals. It seems unlikely that a superintelligence, a being far more able to invent new possibilities, would fail to find a better use for our atoms and for the air we breathe. Sometimes people say that an AI would incorporate our culture and ideas into its knowledge base, and that we should be happy to live on in its culture. The problem with this is that our culture can be scrapped and redesigned, much as a computer program full of obsolete spaghetti code might be scrapped. Consider how starkly values can vary across species. We think fruit is yummy; dung beetles think feces are yummy. Our tastes come from molecules and receptors, and those can be arbitrarily set and reset as one designs intelligences. It's no different for abstract values. Humans enjoy rich experiences, like curiosity, play, and love. This content is tremendously important to us, but it comes from a complex set of receptors and aims. We're no more likely to find unrelated intelligences that share our particular values than we are to find aliens that share our language. And if our spaghetti-code culture is scrapped and replaced to serve the AI's goals, all of that content is very likely to disappear. So note again that the arguments are very general. They apply to any sufficiently powerful entity whose goals aren't carefully rigged to be fulfilled by our continued existence. Practically any intelligence that isn't specifically designed not to destroy us is likely to destroy us in the course of pursuing its goals.
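The core step in this argument, that almost no randomly specified goal is best served by the status quo, can be illustrated with a toy Monte Carlo model; the model and its numbers are my own illustration, and the real argument doesn't depend on these choices:

```python
# Toy gloss on "most goals are not maximally satisfied by the state
# things happen to be in." Arrangements of matter are modeled as N
# abstract states; a random goal is a random utility over those states.
import random

N_ARRANGEMENTS = 1_000  # possible rearrangements of the local matter
TRIALS = 5_000          # randomly drawn goals

kept = 0
for _ in range(TRIALS):
    # Utility of the status quo (the arrangement that includes us)
    # versus the best utility reachable by rearranging things.
    status_quo_value = random.random()
    best_alternative = max(random.random() for _ in range(N_ARRANGEMENTS - 1))
    if status_quo_value >= best_alternative:
        kept += 1

print(f"goals optimized by leaving things as-is: {kept / TRIALS:.3%}")
# Expected ~ 1/N_ARRANGEMENTS = 0.1%: almost every random goal prefers
# some rearrangement over the current one.
```

With N possible arrangements and a goal drawn at random, the chance that the status quo is optimal is about 1/N; the simulation just makes that concrete.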
Claim 4. A controlled intelligence explosion could save us, and save everything we care about. It's difficult. Remember that AIs are like non-starfish: there's this huge space of possible minds, possible goals, possible machines. We can't control an intelligence explosion once it's unleashed; that's like trying to control an atomic bomb in the middle of the explosion. What we can control is the type of intelligence explosion that gets released in the first place. Aren't possible minds too chaotic to predict? A bridge builder doesn't need to predict all possible bridges. We just need to design one mind that we can predict, one that will reliably do what we want. Note that designing the AI to have a particular goal is not putting the AI in a cage; it's an AI that organically wants to use all of its intelligence to strive for that one particular goal. Right now, you probably do not want to kill people, and if I had a pill here that would make you want to kill people, you wouldn't take it. A more intelligent, less messy system could be designed to be even more stable in its goals, at least more stable than you. If you do build a powerful optimizer, you had better be sure you direct it toward the right goals. There are images from folklore, of sorcerers and wizards, where humans get frozen into a pleasant but permanent state of being, a loss of life. But the very fact that you can regret such a future, that you can see what is lost, means the hell you're imagining is not optimized for your goals. If rich open-endedness is what you value, then rich open-endedness is what an AI optimized for your values would create. We'd like to avoid human extinction, and we could, if we managed to pull off a controlled intelligence explosion. The stakes: the current state, 6.7 billion lives plus all their potential progeny, is in question; the possible outcomes range from non-AI human extinction to a stable, human-directed intelligence explosion. It's a tall order, but we can make incremental progress toward it. We can do work on moral psychology, to figure out what it is that we really value. Theoretical computer science on models for prediction and human decision making. Work on how to be sure we're really getting this right, because you do actually have to get it right the first time. And there are a lot of other avenues too. Catch me at the break.