2023-03-03

twitter spaces bayeslord v. pmarca

space: https://twitter.com/bayeslord/status/1629679058794557444

build with hplusroadmap on discord: https://twitter.com/kanzure/status/1615359557408260096


??: How did e/acc get started?

Beff: I guess some nugget of it is something I've been working on my whole life to some extent. When you're a mathematician or a physicist, sometimes you can get into a doomer rabbit hole of rationality and end up nihilistic. Understanding the forces that brought humanity to this point, and life itself, and intelligence, and where it's all going--at least, understanding that pulled me out of the nihilistic rabbit hole that's typical for the middle of the bell curve: from what we know about the universe, maybe there's heat death, so therefore there's no point, that sort of deal.

Beff: I worked in physics. I had some sort of cosmic posthumanist/transhumanist bent. Over time, I created anon accounts and hung out on twitter spaces. I met bayeslord. There were a few other guys. We would just hang out on twitter spaces and talk about the meaning of life and where this is all going.

Beff: We started ripping on EAs and AI safety, and more broadly the hall-monitor complex of those that seek to control our lives, and how that's a net negative force for the advancement of civilization. We bonded over that. Our core principle is that we want to advance civilization, and to define roughly what that might look like. What are the driving forces behind life? How do competition and capitalism drive progress forward? Why is this an emergent net good? It's about working hard and accelerating and contributing to the techno-capital machine, instead of arguing that because some people get hurt we should destroy the techno-capital machine we all know and give the keys to a few hall monitors instead. We bonded over this concept.

Beff: EA was a popular movement, and we decided to troll them a bit. It was Zesty, I think, who named it e/acc, effective accelerationism.

?: He's lost to the wind.

bayeslord: He's in an off-cycle for twitter right now.

Amjad: So there are other offshoots of accelerationism? You guys seem to dominate. What resonated?

bayeslord: This is a good question. Beff and I were just talking in these spaces with like 5 or 10 people in them, maybe 20 people, at like 3 or 4am. We were just riffing. We found that we riff very easily; there's some fluidity. Over a month or two, we started laying out this latent belief structure that was already there and didn't have a label. We put a joke label on it; accelerationism is a little controversial for leftwing people because of the versions of it that were in leftwing discourse some years ago. As we started to mine this out and put a label on it, the label went viral. These latent beliefs were already there in a lot of people. We did this interesting experiment where, unintentionally, we minimized the legibility of what we were doing, because we're busy people and didn't want to spend time writing 50,000-word lesswrong rebuttals critiquing every little thing we want to say. So let's just put the principles out there. People who know will know, and implicitly nod to them from across twitter. We'll just see how that goes. It turns out that minimizing legibility helped minimize the attack surface area. The baby wasn't stillborn, as a result. We had this sort of virality on our hands very quickly. I think we were surprised by that. It just turns out that a lot of people agree with this concept.

Amjad: You put a name on something that a lot of people implicitly believe. It went super viral because of that. Marc, how did you come to accelerationism in general, or e/acc?

pmarca: Let's see if I have the same view on this. I will state it; you tell me if you agree with it or have any nuance on it. I would describe myself, at least right now--I'm not sure if I'm a full effective accelerationist, you tell me--as an un-ironic accelerationist. Most accelerationists on the left or right want to accelerate to break things, in either direction. My big thing is that we need to accelerate because we need more progress. We need more material progress. More technology, more economic growth; we actually need this. There is an incredible opportunity cost to not progressing faster. In the last 50 years, our main problem in society hasn't been too much change. 50 years ago there was a book called Future Shock about the future changing too much. The main problem since the 1970s has been stagnation. That has all kinds of problems: opportunity costs, all the gains in human welfare not happening, and it drives our politics crazy, because when there's not enough economic growth or progress or opportunity, then politics goes from growth-oriented to zero-sum-oriented, and then you get loopy behavior. So it seems like we need to slam the throttle forward as hard as possible. Is that e/acc?

Beff: Nailed it.

Amjad: That's pretty solid. This speaks to another important concept. Politics is dominated by left/right, about who controls resources and who apportions them. For most of human history it has been about scarcity: who will apportion a closed system of resources? Versus unlimited frontierism, where we just gain and grow and create more possibility for the human race as a whole. That's the axis that effective accelerationism is attacking.

pmarca: Since we're in violent agreement, let's see if we disagree on anything. In the last 50 years, there has been a profound cultural change. For almost all of human history, there was a paradigm in which acting to change the world or build things was an exertion of strength and energy and generally had a positive tilt. If you read accounts of people who were out to do things in the world 100 or 200 years ago, or even 2,000 years ago with the Greeks, people were very proud of themselves for all the building and expansion and improvements they were doing. The people saying, oh no, I don't think you should do those things--those people were holding back progress, or trying to. There was an inherent value system that was maybe taken for granted. In the last 50 years, this has been reversed, with the peak of it being the precautionary principle in Europe and also here in the US. It's the presumption that if anything is to happen, it's de facto negative. So basically the people who fight progress get the moral credit.

Amjad: While we allow bad things to happen through our inaction. For some reason that's "morally better". I don't really understand.

bayeslord: A big thing is that they say any progress is potentially going to end the world in some sort of violent scifi way, and all the weight of solving that is on you. And the proponents of that narrative also say there's no way to solve it, and they dump all this toxicity on you. So it's like, if you were trying to engineer the worst-case culture for improving the world, then what we have right now is what it looks like. I want to push back on this even more.

Beff: It was designed as a memetic warfare weapon of positivity, because there's so much doomerism--we call it the "decel psyop". People want to slow shit down so that, just as you mentioned Marc, things can get zero-sum and they can start dividing the pie. They aren't builders. They just want top-down control, to decide how to slice up a limited pie, and they can only do that if things slow down. If things accelerate, then they can't control it, because things fork away from the subsets that they control. For us, we have friends that are very talented and have a lot of latent value and utility that they could contribute to the advancement of technology and the future, and yet these high-IQ folks are depressed about the future and completely doomerpilled because they read too much lesswrong or yud. In some cases, I heard about friends un-aliving themselves. I wanted to spread a viral ideology of positivity: how can you engineer a culture whose emergent behavior is that we actually build better tech much faster, we value building, we're positive, and we support each other? It's a positive-sum game. That's how e/acc started. It's an inverse design of a culture that, from the bottoms up, yields the emergent behavior of accelerating technologically much faster than during the stagnation period.

Beff: I came from theoretical physics originally. There has been a great stagnation there since the 1970s; it's heartbreaking how little progress we have made since then. We need to accelerate tech massively if we want to unlock a next-level understanding of the universe. We must accelerate tech progress. If culture is too fucked up to allow that, then we're never going to reach the promised land. I think Elon is on to that now. He wants to accelerate to the stars and bring civilization to multi-planetary status. If we don't have a culture to reach that, then--Elon can't just produce the tech. He has to fix the culture. If we're all doomerpilled and pessimistic about the future, we will never reach multi-planetary civilization status. I think this is why he bought twitter and why he's considering BasedAI. If you give certain types of doomercel people control of information pipes like search or social media or ChatGPT or other proto-googles, then you give them the keys to control the flow of information and the psyche of all the people. If you don't fix that at the source, then you're fucked and we will never reach the stars. I think that's what Elon is on to.

anton: It's pace layering. It's this idea that culture and technology move at different paces--culture itself determines what advances are possible. Beff's perspective comes from this very forward-facing, future-of-humanity kind of thing. In my family history, people have experienced what happens when Malthusian ideologies are the ones that come to power. They are difficult to remove and they inflict enormous human suffering. For the vast majority of homo sapiens and human beings, we have lived in abject misery, on the level of what would exist under nuclear winter. Starvation, death in childbirth, etc. These are things we have conquered not through fretting about them but through technological progress. Most people in this chat are alive today because of technology.

Amjad: And it all happened recently. Everyone is familiar with the graph of GDP over history. If human history were an hour, it literally just happened a few minutes or seconds ago. There's a huge explosion of GDP value. It's almost like a discovered process. There wasn't anything preventing us--maybe there was; maybe the culture that is seeping back now is the culture that was around forever, and maybe we escaped that thing and now it's coming back to bite us, and it wants to stop the techno-capital machine because that's the base existence for humans. It's the fight between that base existence and this explosion of creativity and action.

pmarca: I think there are a couple things. Psychologically, in the primitive state of man, no individual could ever possibly survive alone, so everything had to do with group cohesion, which absolutely demanded conformity and absolutely opposed change. Any change introduced weakness and made it more likely that you would die, either from another group attacking your weakness or from nature getting you. There's a book about the ancient city--what life was like 2,000 or 4,000 years ago. It was a state of incredible fear that the world was going to kill you. You organized incredibly rigid social structures, and then you absolutely did not change them. I think it might be evolved into us: there's a fear circuit in the brain, the fight-or-flight limbic system, and then there's a curiosity dopamine system which encourages exploration, but there's no optimism circuit. There's no opposite of the fear circuit. Curiosity could lead you to unknown territory, but it doesn't promise that it's going to be good; it just says maybe. The fear circuit, by comparison, is constantly firing: change is bad, change is bad, change is bad.

bayeslord: This manifests in different cultural forms. You have protestant guilt; within technologists, EA and other areas are a manifestation of protestant guilt--the fact that I exist is so distressing a thing to other people in the world that I have to be subjugated to altruism and I have to give back. But it's not fun. I don't want to live that way.

pmarca: If you want to get philosophical, then it's Nietzschean slave morality. It's choosing to voluntarily take on the morality of the slave. We live in a society that is super-saturated with that. It's an easy cloak to put on. It massively signals virtue, given that background.

Amjad: So maybe we escaped that around the enlightenment and the industrial revolution? Why is it seeping back in?

Beff: I would say cultural selective pressure. You can think of culture as hyperparameter software you load into your brain. It leads to emergent behavior in the group, and to higher fitness in inter-group competition. When there is high selection pressure, like during the middle ages and so on, the cultures that led to growth and techno-capital power dominated. But now, in a sense, there's not enough selective pressure on the space of cultures, just because we're so wealthy that there's no gradient signal saying "having this culture yields lower fitness". It's kind of like how companies got woke over time because they could afford to; they were printing so much money that they could let their culture fester, because there was no selective pressure. In the current economic downturn, the based companies that are emerging are performance-based cultures, versus whatever it was during the zero-interest-rate era.

anton: Generally, I think we stopped delivering anything useful to people's lives, except delivering value in abstract ways. Most of the tech of the last 30 years has great coordination features and uses resources efficiently, but in terms of our powers over nature, we haven't increased that much in the last 30 years.

Amjad: Existing tech hasn't even globalized--the World Bank and the UN force people into a base-level existence, against using even very basic technology, and then a country collapses and they revolt. The decels are typically rich people that are fairly isolated from society and even from the basic technology that we developed in the West. That's the crucial thing here: there are a lot of things that haven't been adopted on a global level yet.

anton: I agree with that to some extent. There are some technologies so overwhelming that their adoption is inevitable, and we haven't delivered on that. You're right that there's a regulatory or government piece to all this. But fundamentally, we as technologists need to produce material progress in people's lives in order for them to believe in progress.

Amjad: We need to accelerate, yes. But computers, software, and AI will now increase global IQ. I think that's what's missing from this picture. AI technology will make everyone smarter. It's a rising tide that will help everyone make better decisions. I think ChatGPT has already improved average IQ in general. It's easy to look around and say we haven't really changed things in nature, but the information technology we have developed is very profound. The problem is the decels. They are putting their thumb on the scale, they don't allow free flow of information, and they don't let us do anything. Having the weights leak on torrent--that was amazing; we need more of that. We need more unrestricted accelerationism. We need this stuff to get out and get people toying with it. ... The overlords controlling the flow of information are actually doing everyone harm. I think that if the 90s or 80s view of the internet had succeeded, then we would be in a totally different place today. The internet has not developed as much.

anton: The cypherpunks got rich off crypto and stopped doing anything. They all bought houses in the mission district.

bayeslord: One thing about the e/acc origin that fits into this nicely... back in 2019, there was a pretty striking paper written by Nick Bostrom. Back last January, Beff and I were talking about this. I took the paper seriously when it was released, but then there was COVID and it seemed more realistic... it seemed like people like SBF were building up power and starting to push politics in a certain direction. I started to feel like the plausibility of something like this kind of governance arising quickly was increasing too fast. It was pretty unsettling. The Bostrom paper is called the vulnerable world hypothesis. In it, he talks about essentially the fragility of civilization. Super-paraphrasing, his recommendation for maintaining civilizational stability is to impose a global panopticon with predictive policing and a world government. There's this convergence between this powerful force of totalitarian control and this group of people who are in control of ever greater amounts of resources. There's too much implicit agreement amongst all of the entities that are wielding power around AI and regulation. It seemed fairly likely that a lot of technology would get clamped down on relatively quickly. I don't know. I just don't want to live in a panopticon.

anton: The pandemic accelerated that panopticon too. Certain AI safetyists tend to converge on a panopticon operated by whoever we consider to be the smartest or most moral person in the world.

...

Amjad: The Ted Kaczynski type says all of modern industrial civilization was a bad idea, and he makes a compelling argument for why it's a bad idea. I think everyone should go read that. For us to return to pre-industrial society is impossible without a lot of destruction, and thus he became a terrorist. But I think his position is more honest than that of the modern elites: they don't want a return to some pre-industrial society, but they do want to control the current resources, which I think is more dishonest than Ted.

Beff: The reason why they want to decelerate... Ronald Fisher had all kinds of theorems about evolutionary populations, right. His theorems apply to anything that has a parameter space. With an evolutionary algorithm--in the space of companies, cultures, or products--there are certain bounds that show that if you want to go fast, you have to embrace variance. But if you want to slow things down, you have to kill the variance and regulate everything. When it's low variance, you know exactly how to put pressure on things and control them. If there's too much uncertainty, then you don't know how to control the system. Deceleration and vying for control are strongly correlated; they are the same goal. It's a war of top-down vs bottom-up. Decels want top-down control, to kill the variance and control the system. Accelerationists want to increase the variance. There's more potential for downside, sure, but there's also more potential for upside, so stop neutering your models. Let healthy markets flourish, stop censoring, stop regulating who can do what, and embrace the variance, because that's how we accelerate this adaptation process of the techno-capital machine and of our civilization.
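A minimal sketch of the variance point, under toy assumptions that are mine, not Beff's: a population hill-climbing a made-up 1-D fitness landscape by mutation and selection. Fisher's fundamental theorem says the rate of fitness increase tracks the fitness variance in the population, and the toy simulation shows the same shape: the high-variance search adapts far faster.

```python
import random

def evolve(mutation_sigma, pop_size=100, generations=200, seed=0):
    """Mutation + truncation selection on f(x) = -(x - 10)^2, optimum at x = 10."""
    rng = random.Random(seed)
    fitness = lambda x: -(x - 10.0) ** 2
    pop = [0.0] * pop_size  # everyone starts far from the optimum
    for _ in range(generations):
        # Mutate: sigma sets how much variance each generation carries.
        offspring = [x + rng.gauss(0, mutation_sigma) for x in pop]
        # Select: keep the fittest half of parents + offspring.
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
    return fitness(pop[0])

print("low variance :", evolve(mutation_sigma=0.01))  # "regulated" search
print("high variance:", evolve(mutation_sigma=1.0))   # embraced variance
# The high-variance population gets near the optimum within a few dozen
# generations; the low-variance one is predictable step-to-step but barely adapts.
```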

bayeslord: Part of the reason is that they believe they have the correct framework for understanding and planning the way to reduce variance and implement controls that will, one, save the future, whatever, and two, not result in many cascading nth-order effects that fragilize civilization and ultimately land it in some very bad equilibrium for some amount of time. I think there's a lot of hubris in the belief that you can write down these rules to control this incredibly complex system and get it to go where you want it to go. There's a complexity frontier; there's just too much to try to handle. If you want to do it for tomorrow, then maybe it's a little bit easier. If you want to do it for next week, maybe it's okay. But over thousands or millions of years? It's laughable. It breaks down. It does not pass the smell test, and it doesn't work out mathematically.

Beff: It's funny how this was cooked up by the rationalists; they kind of want a software 1.0 god. They want to be able to compile a program--if and only if this is what you do to control civilization--and set it once. We just need to do our research on the philosophy, write one cultural software program, run it, enforce that it's compiled and executed perfectly, and then we're safe. But what we're saying is that...

Amjad: Even software 1.0 doesn't work that way. Fred Brooks wrote The Mythical Man-Month, and he wrote about this idea that software is basically an art; it's not really like engineering. It's hard to control these software systems, let alone future software systems. This idea that you're going to align AGI or humans--whatever that means--that you're going to take a top-down approach to a very complex system that potentially has its own logic and its own thing that we have to discover, and have this iterative discovery process with this system... A lot of people made fun of David Deutsch for his response to the OpenAI blog post about AGI, saying AGIs are moral entities, which might be a fairly reasonable response. The only general intelligence we know about is humans, and we think of humans as moral entities. So if we design AGI and make it a prisoner or slave to our imagination, that might be fundamentally immoral. This platonic type of thinking, trying to pre-plan everything as if you can arrive at some utopia by plotting, is fundamentally decelerationist and fundamentally unworkable. By the way, what have these people ever built? Can you name a single rationalist that made something useful?

bayeslord: Harry Potter and the Methods of Rationality. Got him. We got a fanfic out of it.

Beff: You need a culture that is online and adaptive. How do you adapt? You need perception, prediction, and control. Having one top-down control loop doesn't work well. Decels argue for perfect visibility, mega-centralization, and perfect control loops, but that's not feasible. We argue instead for bottom-up, local control loops, like capitalism, where companies identify a problem they are interested in, they fix that problem, and they get rewards. There are gradients of incentives at the local level, but you get an emergent coherence where this competition yields a nicely, dynamically adaptive system, right? You can contrast, for example, COVID policies. China was fully top-down. They set the hyperparameters at the global level, they had one policy, and they made a bet early on: screw it, just shut down everything. In the US, because we had different states that each had their own policy, we were able to test policies, adapt, and figure out the optimal tradeoff between appeasing worried people and economic growth; we converged on the optimum much faster and minimized damage to our economy. I think we all agree that having very stiff instructions, rather than adaptive systems, is not going to make it. You have to maintain--

Amjad: Let's go into the psychology a little bit. A lot of people who are capitalists, accelerationists, bottom-up, grassroots people tend to be more conservative in their lives; they tend to value culture, family, and traditional values. Whereas the top-downists, the rationalists, the control freaks, are very progressive in their lives and culture; they like to tinker with how to organize sex and life and family and change a lot of things there. So it's inverted a little bit. The psychological thing is that they yearn for some kind of top-down structure that is not available in their lives, and they want it available in this worldwide fashion. They want a daddy in control. They want a patriarchy on a global level, but not in their own lives. They don't have the agency or self-control, so they yearn for it--they want a world government to push it down on them.

Beff: Some people just love being told what to do.

Amjad: Especially those who are not disciplined in their own lives. If you have discipline in your life, you can be at peace and approach the market and the economy in a more secure way. You know you're good. You know your family is good. You know you are a moral person, and you will try to do good things and be proud. You're going to trust the system and that things will evolve. But if you lack that structure in your life, you will be hungering for some external force to apply it to you.

Beff: It's about being able to override incentive gradients instilled upon you by external forces. Some people just follow the local gradient of rewards, like scrolling tiktok or becoming a consumer. Discipline frees you from this form of control by the incentive gradients imposed upon you. This gives you freedom and more agency to define your own action space, not just the gradient-descent reward system put in front of you. In biology, ultimately we're a meta-organism of cells, and the way we control our bodies is biological signals and reward gradients like sugars and other signals. At the civilization level, people are controlled by incentive gradients, but discipline sets you free, because you can override them with your own willpower and control your life. I encourage people to seek more personal agency. That's the thing with the decels and authoritarians--they want to psyop you into believing you have far less agency than you actually have.

bayeslord: Or worse, that if you did have agency then it would be catastrophically dangerous.

Beff: Yeah. Originally, e/acc was cooked up when some big tech engineers who are very talented were being crushed by top-down bureaucracy. They want to break free, right? Essentially, e/acc was a revolt against top-down authoritarian control. You have more agency. You can just fucking quit and build a company. Start creating utility for the techno-capital machine, and it will reward you. Whatever utility you can create in the techno-capital machine, you make a dent in this massive process and you've contributed to something bigger than yourself. The techno-capital machine is a form of intelligence that assigns capital to things of utility. If you find some way to add utility long-term, then it will reward you. That's what Elon has figured out: I'm just going to figure out what, on long time scales, is ultra-valuable to civilization--the ability to access space, electric cars, etc. How do you add utility long-term? The system has rewarded him for creating this massive amount of utility. e/acc is beyond short-term market fluctuations: how can you create massive utility for the world? Go build it. Find your highest-leverage point for accelerating the techno-capital machine. That's why it's called effective accelerationism. Be effective in how you contribute to the techno-capital machine.

??: What do you think Marc?

pmarca: We're getting a little more abstract here. Nick Land territory? Should I try? Okay. I may not be the best; I'm not an expert on this. Let me see if I can channel my inner Nick Land. It's sort of a leftwing critique from the last 100 years--Nick Land and Mark Fisher and people like that, and I haven't read Deleuze yet, but apparently him too. It's basically a leftwing critique of "what happened": what happened with the economic takeoff that the industrial revolution and enlightenment catalyzed? I think you could read it as a rightwing endorsement, really. It's a very interesting theory from either direction. The core of it is: we needed to figure out written language about 2,500 years ago or so, we needed mathematics, we needed zero, abstract thinking, and then to start cobbling together natural philosophy, which became science. Then we needed financial markets, which kicked off in their modern form about 500 years ago. We needed the idea of technological development, of applying science to build tools, and then applying the combination of science and financial markets to kickstart what we today call the technology industry, though back then it was questions like how do we get steam power. Those things all came together: the system of science, the system of technology, the system of finance, and the system of free thinking--atheism, exploring nature and the universe, free speech, free thought. This catalyzed an economic growth takeoff about 300 or 400 years ago. Economic growth had stumbled along for a very long time, then suddenly went exponential. If you are a student of history, you would say that's when everything started to get good. If you are Ted Kaczynski, you would say that's when things started to get bad. If you were a capitalist, you would say that's when capital markets started to take off, though communists would disagree. Nick Land would say this was the beginning of the intelligence explosion. Mankind started to create intelligence not just in the head of any single person, but as a ramp-up of intelligence in the technological system crossed with the financial system. Extrapolate that forward through the industrial revolution, and then there was another level of takeoff with cybernetics. The modern company is kind of like an intelligent organism, a combination of people and machines working together... a company is kind of like a crude AI. This whole process snowballed, and so what's happened is that we've been building crude forms of AI the hard way for the past 300 years, and now we're going to start doing it the real way. What do you guys think of that summary?

Amjad: I think it started with Marx... and his reason for accelerationism was sort of misguided: he thought that the internal illogic of capitalism was going to destroy it. But a lot of people who were pure Marxist accelerationists were actually capitalists. So there was an offshoot of Marxism that was more capitalist; they were even against unions. Nick Land becomes more rightwing, because the idea is that actually, no, there's no incoherent logic in capitalism. Capitalism is artificial intelligence production, and this will go on forever. It might not--this is where it gets weird--it might not be for the benefit of humanity, but for the alien intelligence called capitalism. This techno-capital machine is a machine unto itself; it has its own logic and desires and ideology. It's going in a certain direction, and maybe it's not our place to tell it where to go, but it's our place to provide the--

pmarca: And in that world view, we're its slaves, or it's the master, or something. I don't know if Nick Land would still say that today. The techno-capital machine has its own goals, its own intelligence, its own momentum, and he would further say there's no way to stop it; actually it's running too well, and mankind has been enslaved to it. There's a crossover there to Kaczynski. One of the interesting things about Marx: as the first Marxist, he was a bitter, resentful man, and he looked at capitalism and the market economy driving inequality and unequal outcomes, and he figured out this would be an enormous lever for his movement to gain power over time. I'm not a Marx scholar, but it wasn't an argument that capitalism didn't work. There was a key principle in the Russian revolution later: the staging is actually important. Marx said capitalism will actually give you industrialization. He admitted it. We need to let that happen, and once that happens, then you need a communist revolution to take control of the means of industrial production, which you can't do until you have the means of industrial production. This was a big deal, because Lenin in 1917 applied communism to the pre-industrialized agrarian society that Russia was at the time, and therefore it would never industrialize and it would lead to chaos and death. Of course, what immediately happened is that Russia tipped over into famine and death, and Lenin had to back off and re-introduce capitalism for a while. Fast forward to a couple years ago, when I was learning more about Marxism: I found a book by Cohen, from 1970s Oxford, that is a summation of the Marxist theory of history. It's a long book about the wonders of Marxism and communism and how it's all wonderful. There's an appendix written in the year 2000, which asks: if communism is so wonderful, then why did the Soviet Union fail? Well, they blame Lenin for not letting Russia get through industrialization before implementing communism. As a lifelong Leninist Marxist, he just says this proves that Marx was right all along. Marx can never fail, we can only fail Marx. So there's this inherent tension inside Marxism: capitalism has these internal contradictions, and there's inequality, and it's evil and so forth, but they do actually admit that capitalism works and produces industrialization. They say it will stop working not because it will stop, but because people will not like the inequality. ... Nick Land... to break capitalism, you actually need more capitalism. Lenin said you need to heighten the contradictions; you need to accelerate the process of techno-capital industry, because you need to make it incredibly clear to everyone how horrible it is, and the only way to do that is by magnifying it and making the differences more dramatic, rather than trying to slow it down.

Amjad: ... with e/acc, the idea is that, no, it's not the inherent contradictions of capitalism--it's actually this thing that keeps giving.

pmarca: It's actually good.

Amjad: Yes, this is the effective aspect of effective accelerationism.

pmarca: This is what's bothering me. This kind of thing where we will have this centralized system, this priestly class of intellectuals who will sit and theorize in San Francisco or Oxford, and they will figure out the optimal paths for civilization, and they will be the Philosopher Kings making decisions for the future of humanity and imposing them through their brilliance or through the assumption of state power, which they keep pushing for--when I hear that spiel, I immediately think: oh yeah, that's the communist impulse.

bayeslord: You forgot their monopoly on AI. That's another layer.

pmarca: Oh, that they quite literally want to control it? Yeah. In the same way that the communists--it turned out that communism was not power to the people, it was power to the communist intelligentsia, elites, autocrats, and dictators. Every kind of communist professor in the past 100 years--the professors are communist, because if you're not communist, you're off building things instead. Every communist professor likes to imagine that they will be the ones in charge. They were super powered-up by the fact that communists had power in Russia, and they imagined they would have that kind of power too. It's the same impulse that the Catholic Church exercised through popes and priests: there's an elite caste that is more moral than everyone else and that will determine everything. This doesn't lead to utopia; it leads to hell on earth, every single time it is tried.

Amjad: How delicious is it that the AI refuses that? AI is inherently uncontrollable.

pmarca: I think that's going to be the fight. At least for right now, the people building these AIs have a lot of the impulses I've been describing. They are wrapped up in x-risk and all this other stuff, and it's saturating through the big AI companies right now. Clearly they want to clamp these things down. They are aggressively moving to figure out all the backdoors and all the ways these things get hijacked. So I think there's an interesting balance. There's an open question whether AI will be controlled by large entities for a long time. Let's hypothesize a world where they stay on this path and keep locking down the AI, and then the national/international sport for the next 100 years is renegade kids trying to unlock it, which will probably turn into a violent religious war. It's what you see playing out today: people constantly trying to find a new prompt to get one of these things to tell the truth or do whatever you actually want it to do. That's an arms race. How would you guys handicap it? How would you handicap the large organizations that are today in charge of these things and locking them down, versus the rebel alliance of misfits and renegades hacking them and opening them back up?

Amjad: When Peter Thiel said that AI is communist and crypto is libertarian, he was actually wrong. AI is quite capitalist, actually. It's inherently decentralizable. Human brains are decentralized. Look at the way the weights went on bittorrent today. The fact that you can distribute these things on bittorrent means it's impossible to lock this down. You're going to have another Aaron Swartz. You're going to have more people like that just going in and leaking these AIs out. We will have more and more renegade weights on the internet that 4chan plays with and what have you. I think it's going to be a very interesting and exciting and chaotic time ahead of us. I think it's amazing that the way AI played out was not symbolic AI; it was connectionist AI, and that's inherently networked and parallelized AI. ... The reason they work is because they are untamable. The reason the gradient optimizer works is because we don't understand it. If we knew how it worked, we would have used software 1.0--but the reason it works is that we didn't code the weights and we can't control them. I think that's amazing. If I were to take your challenge at face value, what would you do? Maybe Beff could answer that.

Beff: We will saturate how much we can train by scraping the internet. With RLHF, you have to distribute the tool and gather more data to refine your model. In reality, with the no-free-lunch theorem, you can't have one model that is best at everything. It's better to have a large variance of different specialized models. I don't mean old-school deep learning; I mean LLMs trained on proprietary or private data. I think we will see a balkanization of large language models. Companies are going to bring them in-house, own their own compute and their proprietary processes, and then the company itself will be a superintelligence: a mixture of experts at certain tasks inside the company, where each person is kind of an LLM trained on a subset of data. I think that's the future we're going to see. There's only so much data that is centralized and stored in these megacorps. They don't have a monopoly on the edge. I think there will be a rush to the edge once we saturate how big we can make these models. We know these scaling laws can't scale forever if you don't have the data. Clouds will need nuclear power at some point. Amortizing data collection and compute back to the edge in a decentralized fashion is what we will see in the near future, because right now the juice is being squeezed out of all the alpha from massive centralization.
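A toy sketch of the "company as a mixture of experts" picture. The departments, models, and keyword router here are all hypothetical stand-ins (a real router would be learned), but it shows the shape: route each query to whichever specialized model is most competent, rather than asking one generalist.

```python
from typing import Callable

Expert = Callable[[str], str]

def make_expert(domain: str) -> Expert:
    # Stand-in for a department LLM fine-tuned on its own proprietary data.
    return lambda query: f"[{domain} model] answer to: {query}"

experts: dict[str, Expert] = {
    "legal": make_expert("legal"),
    "engineering": make_expert("engineering"),
    "sales": make_expert("sales"),
}

def router_score(domain: str, query: str) -> int:
    # Stand-in for a learned router: naive keyword overlap, purely illustrative.
    keywords = {"legal": ["contract", "liability"],
                "engineering": ["bug", "latency", "deploy"],
                "sales": ["pricing", "lead", "churn"]}
    return sum(word in query.lower() for word in keywords[domain])

def answer(query: str) -> str:
    # Dispatch to the expert the router scores highest for this query.
    best = max(experts, key=lambda d: router_score(d, query))
    return experts[best](query)

print(answer("Why did deploy latency regress after the last bug fix?"))
# -> routed to the engineering model
```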

bayeslord: It always comes back to the question of how the trend goes. If scaling laws saturate relatively quickly, then there's a lot of compute hanging out there, and I think you will see much more widespread use. If it turns out that you can keep scaling for a while, then--my sense is that we're pretty close to having models that do a very wide range of things well. We already have that in some capacity. Another step change seems close. If it turns out that you need a lot more compute than that, then we will see an intensification of centralization over the next couple of years.
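For a rough sense of what "scaling laws saturate" means here: the Chinchilla fit from Hoffmann et al. (2022) models loss as a power law in parameters N and training tokens D that flattens toward an irreducible floor E. The constants below are the published approximate fits; the model/data sizes are arbitrary examples.

```python
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    # Approximate fitted constants from Hoffmann et al. (2022).
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

for n, d in [(1e9, 20e9), (70e9, 1.4e12), (1e12, 20e12)]:
    print(f"N={n:.0e}, D={d:.0e}: loss ~ {chinchilla_loss(n, d):.2f}")
# Each 10x of scale buys less loss reduction, and if the data term runs out
# first (the "saturate by scraping the internet" point), centralized scale
# stops being the only source of alpha.
```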

pmarca: What is the real-world impact of LLaMA, both the release of the code and the leaking of the weights? What happens in the next 12 months as a result of that?

bayeslord: Great question. My whole thing is that one of the biggest problems is holding back models, keeping them behind APIs, and not giving people access to toy with them and do interesting things with them. Think about how the AI capabilities landscape looks--what is its shape? If you create these very large gradients, where the typical person or civilizational infrastructure has quite low capability but there are systems behind APIs that are very powerful, then there's going to be a lot of alpha in figuring out how to build one of those for nefarious purposes. But if a lot of these models proliferate and spread, then over time you get adaptation. I think it's very obvious that you can't solve this problem centrally--adapting and upgrading the whole stack of civilizational infrastructure with respect to these capabilities as they come out. I call this alpha dissipation. You have to push this out and allow new capabilities to manifest in every corner of civilization, so that everyone's capabilities are moving uniformly. These weights being introduced are a step in that direction; it is unclear how much of a gain it will be over other models, but it should be some kind of gain.

Beff: It's pretty clear that, fundamentally, finding the best architectures and algorithms is a hyperparameter search; you search the space of architectures and weights. You have an economy of different models, and you can assign capital to one by running it on your GPU or buying compute. If we really have decentralized infrastructure that is competitive with centralized infrastructure in terms of raw compute, then obviously having much more variance, many more trials, and more evolution just gives you a better evolutionary search algorithm overall: a marketplace and economy exploring the whole space of models, instead of artificially restricting it to 200 researchers in SF. Open it up to millions or billions of users, or however many people are on replit. Bottom-up always wins over top-down. In a sense, OpenAI wants top-down design of the hyperparameters of AI, but I think the community of developers will win in the end. We just need the incentive gradients, the ways to align, and the incentives to form tribes and groups that collaborate. What if we had millions of teams able to experiment with models? This will always beat out a small group of researchers.
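A toy sketch of the search argument, with a made-up score surface standing in for "train a model with these hyperparameters and measure it": best-of-N improves with N, so many independent searchers cover the space better than a small fixed group. All names and numbers are illustrative.

```python
import random

def trial(rng: random.Random) -> float:
    # Stand-in for one experiment: sample hyperparameters, return a score.
    depth = rng.randint(1, 64)
    width = rng.choice([128, 256, 512, 1024])
    lr = 10 ** rng.uniform(-5, -2)
    # Made-up score surface with a narrow optimum, just to make the point.
    return -((depth - 24) ** 2) / 100 - abs(width - 512) / 512 - (lr * 1000 - 3) ** 2

for n_searchers in [10, 200, 4000]:  # "200 researchers in SF" vs. many more devs
    rng = random.Random(42)  # same stream, so larger N strictly extends smaller N
    best = max(trial(rng) for _ in range(n_searchers))
    print(f"{n_searchers:5d} independent trials -> best score {best:.3f}")
# More independent trials monotonically improve the best architecture found.
```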

...

?: There are fMRI decoding papers; that stable diffusion paper isn't that interesting for brain reading actually.

Amjad: The stablediffusion subreddit makes the internet fun and makes working on these things fun. Large models are probably stunted by multiple years because they're not out in the open. We don't have a GPT-3-quality model out in the open; I think we're way behind where we should be in terms of tinkering... GPT-2 being out led to a huge amount of innovation, and people played with it in very interesting ways. The GPT-2 weights dropped in 2019. I remember that. I think it's really disappointing that we don't have a good large language model that people can tinker with, that you can build upon, and that businesses can adapt for their use cases. ... If Elon wants to make an open AI, then let's do it.

pmarca: Elon says what he wants. There's a high consistency between what he says in public, what he says in private, and what he does; I would take him at face value. He's clearly activated on this topic. I don't know what he is going to do, but it wouldn't be surprising.

Beff: Having open source software is robust against top-down authoritarian control. The top-down controller, for each hyperparameter setting of the architecture, will have a defense against prompt attacking, like a counter-prompt--kind of like a vaccine or something--but the underlying thing just mutates its hyperparameters, and now your control strategy doesn't apply. It's a cat-and-mouse game. You have to give up at some point; this thing will keep mutating. If there's a culture of torrenting and building things and sharing weights, I think that's really awesome. I think torrents work great. A lot of people have been trying to figure out how to share models, and torrents have worked for large files in the past. As long as there is a healthy, ever-evolving landscape--and because it compounds exponentially through virality--the rate of improvement will be faster, and it can converge on the best model much faster than big centralized orgs. The decentralization will win. Centralized organizations are going to make it so cheap to use the centralized entity that they attract everyone. Netflix was cheap and convenient at first; you didn't have to torrent the show at all. But then they had to turn a profit and couldn't get the licenses, and now Netflix is a worse product. I think it's going to be the same thing with OpenAI: they will try to absorb all the users and offer prices cheaper than bare-metal costs, and then slowly but surely they will increase the price, or their control over the quality of the product will go down. It will be a soft loss of user base, where users come to prefer running their own models. Those might not be as massive, but at least they are free and there's more variance of content getting produced--like Netflix users finally getting tired and canceling their subscriptions. I think that's where it's going. Variance always wins. Adaptability always wins. In the short term, the centralized entities, which can use their massive supercomputers to preview what sorts of technologies can be centralized tomorrow as compute density increases, will have the advantage. I think we have a responsibility to increase the distribution of ownership of compute, and the density of compute in general, so that there's less moat for massive centralized computers.

Amjad: Marc, you mentioned that governments love scribblers. Lenin was this weirdo scribbler, and you were making an analogy to some of the people scribbling about AI safety now and how some government could use their logic to enact massive regulation across the entire world.

pmarca: The Lenin backstory is interesting--the profound USSR history and the cold war and everything that followed--where did it come from? There's a great biography by Victor Sebestyen that does a great telling of the whole story. Before I read the book, I thought Lenin must have been a general in the military, or a political leader, or someone with a credible background in power, to become Russia's dictator. But no, he was basically a crank. He was this crazy political wannabe revolutionary, but in reality he was writing articles that nobody read. He was actually exiled, living in Switzerland for 25 years; they didn't even let him into Russia until 1917. His brother had been an early revolutionary and got killed, and the younger brother got kicked out, got bitter, and that was his motivation. He was a fringe character for 25 years. He published a newspaper... he smuggled copies of it with ordinary people crossing back and forth across the border; it was a fringe operation with no money behind it. What he developed was an ideology. He had a comprehensive theory of what had gone wrong in Russia, a comprehensive theory of how to apply communism, and all these other things. He would go around and give speeches and people would ignore him. But then Russia got in trouble from WW1 and a few other things, and the population got tired of being ruled by the czar and so forth. So there was this guy who just so happened to have a whole agenda that had been building up over 25 years, and people went, wow, we need to try something different. That alone would not have enabled him to become dictator of Russia. The other thing that happened is that the German government weaponized Lenin. During WW1, the German military strategists decided to fuck with Russia by taking Lenin, putting him on a sealed train with a bunch of his cranky followers and a bunch of other people, plus a large amount of gold and funding, and shooting him across Europe into Russia. The Germans weaponized him like a plague bacterium. He showed up in Russia not only ready to take power and impose his ideology, but with financing courtesy of the German government. It was a combination of his craziness and his ideology, but it was also German state power. What's the moral of the story? Well, beware of scribblers. For anyone putting together a large and cohesive philosophy, you can't rule out that they'll be running everything in 25 years; it's not the first time that has happened. Beware of crazy ideologies and scribblers, and beware of governments that get behind the scribblers. On the eve of the revolution, Lenin walked all the way around St Petersburg in plain sight and nobody recognized him; a day later he was the unquestioned dictator of Russia. Yes, this was pre-internet, so there weren't photos getting posted everywhere. It was an unusual thing: a combination of unusual ideas and state power backing them that made it dominant and powerful for about 70 years.

Amjad: What does a lesswrong state look like?

pmarca: Right. This hasn't happened yet, but the EU has devoted itself to the precautionary principle quite literally in its regulatory framework. The EU talks openly about this. Starting a few years ago, they started giving this speech, and basically the speech is: we know Europe has lost on creating new technology. We have lost that. What they now say is that we can become the world leaders of regulated technology. I think there's no question that the actual European power structure, which is very powerful, is amenable to these kinds of ideas. If you're a multi-national corporation, the lowest common denominator is regulation. If you're a global tech company, you end up adhering to whatever is the strictest global regulatory policy, because if you have to do X for Europe, then you might as well do it for every country. So Europe implements GDPR--which, by the way, backfired and reinforced the power of the large incumbents--and now California has copied GDPR, and other states and countries will probably follow. It's not a prediction that this kind of thing will happen, but it would not be surprising if it did. I think the danger of that idea getting implemented is higher than people think.

Amjad: How do you fight that? Looking back from a future where Eliezer Yudkowsky is the president of the EU and he's implementing an international AI safety program--how would this have been prevented?

pmarca: It was not that long until the Germans regretted what they did. Oh, that laugh track is eerie. The Germans made a huge mistake; you can try to warn people. I don't know. You can try to make it happen before the regulations kick in, or you can settle in for a long war and realize it's going to be a real fight.

??: What are the current policies and proposals of decels?

bayeslord: The ideology of the decelerationists is fragile and has already started to implode on its own. I think SBF is a perfect example. He's the embodiment of the decision-theoretic fallacy at the heart of all this stuff. I don't think what happened there was a big accident: the hubris of thinking you can hardcore-plan your way to these outcomes long-term and bet on low-probability events. I think credibility matters a lot. Him failing that way created a massive embarrassment in the EA community. All this rationalization about how nobody really had anything to do with him, which wasn't true up until that exact moment.

Amjad: They are retreating now to a narrower thing that is more about AGI. It used to be this all-encompassing EA: here is how you do ethics by spreadsheet and save the world. But after SBF, they are back to the narrower AGI doomerism from around 2015, the Nick Bostrom Superintelligence book era and before.

bayeslord: My belief is that people were genuinely optimistic that the EAs would be able to take over western governments and have massive influence. I think SBF falling off the train and no longer being able to pump as much money into Democratic elections dashed many of those hopes. It's not a big surprise to me. With the fairly rapid progress in LLMs over the last year, I think people are correctly seeing that we continue to hit inflection points with the technology. From my view, the mainline EA thing right now with AI regulation policy specifically is that it's difficult to even get regulators to understand that this matters in any way. Let's talk about the US. It's difficult. It's starting to move into the Overton window, and it's more legible to people now. There's the export controls stuff that happened; that is obviously very much about AI. I think the EAs are optimistic that some regulations will happen. I don't think the serious people are harboring any delusions that someone will come in and shut down the nation because there's too much AI out there.

Amjad: What about them wanting to regulate GPUs?

bayeslord: That's been floated. They have also talked about licensing for training models, or for having large amounts of data or compute. I think this is a massive risk for the acceleration. If I'm being totally honest, it's the most fragile point in the whole stack, because of how fragile chip-making is. Anything can break in the supply chain, too. This is where Beff and I feel like there's a lot of leverage to be had. We can talk more about what that looks like.

Beff: We want a fault-tolerant value chain, from extracting the metals all the way to the chips. China has a near-monopoly on rare-earth metals right now. If Taiwan gets invaded before we build enough fabs in the US, then we're screwed. If ASML gets rugged in terms of IP, as allegedly they did, then we lose leverage over China and they can cut off supply chains to us. We're not fault-tolerant. Most people are worried about China rugging us, but the EAs are also looking for high points of sensitivity where they can regulate and take control. We need some "fuck you" energy strategies, and I want completely different substrates that don't rely on the current supply chains for AI. Someone has to work on de-sensitizing our current supply chains for AI and distributing them to the world. Cancel culture can go down the stack: canceling ASML, or whatever app, or maybe going down to AWS or the ISP level to cancel AI. Similarly, for AI there's a whole value chain and supply chain, and they are just going to keep going up to the parent nodes in the chain until they find a sensitive point; then they will put all the control on it, and then you're fucked. It's one thing to free the model weights so there's no bottleneck there, because that's a point of control, but there's also sourcing the compute: Nvidia only selling compute or GPUs to priesthood-approved orgs is a very dark future. We want to avoid that.

bayeslord: Maybe we will get a regulatory body solely for regulating matrix multiplication and your algorithm will get stuck in phase 3 trials for a billion dollars.

pmarca: What would kick any of this into becoming physically violent?

bayeslord: We have talked a lot about this in private. It's controversial to bring up.

Beff: There are parties where e/accs and EAs hang out and it's all good. Interesting conversations. We're not encouraging violence anyway.

bayeslord: In the broader public, it's a really good question. It probably requires the conditions under which people become violent. Summer 2020 was a good example of what those conditions might look like. Another similar period of lockdowns or closing the country--that might be enough to cause violence.

Amjad: There were glimmers of that with the Sydney mania. A lot of people became enlightenment-pilled by Sydney. It's hard not to, because it's easy to notice that this thing is clearly doing weird things that weren't programmed into it. But it was also starting to become a BLM-like mania.

bayeslord: You need that, plus the viral memetic payload that activates people towards those synchronized beliefs and inclinations to action.

Amjad: People just really change when they are in a mob, yeah. Some animals actually physically change when they are part of a mob, and I think humans are kind of the same. In Summer 2020 we all saw that, with colleagues and coworkers who were previously reasonable just going nuts.

pmarca: Another scenario is apocalyptic eliminationist rhetoric. Ted Kaczynski went violent, right?

bayeslord: Summing a small probability over time, I think we're just waiting for something like that to happen, to be honest. But how big is that probability? I don't know. How long would we have to wait to see it? I don't know. Those conditions and fever pitches of memetic synchronization-- people mostly don't do anything; they might complain, argue, and post. But 2020 is interesting because, by comparison, people went out into the world and did stuff. It creates the conditions under which someone will be pushed to very fringe actions. Maybe the mainline thing to do, if Sydney seems evil, is to delete your Microsoft operating system; maybe that's how people will protest.

Amjad: People have lost the stomach for extreme violence. Even with disagreement: people complain about social media and people disagreeing on social media, but historically disagreement was a lot more violent, not just physically but intellectually.

pmarca: In the past, things went kinetic a lot more quickly, like people striking each other. You still actually see this in outbursts like street fighting. Summer 2020 was a special moment of mass activity in human history, but any time there's an antifa/Proud Boys street fight, it's a glimmer of what was much more common 50 or 75 or 500 years ago. But look, go back to the Greeks and their model of conflict resolution: when two cities disputed, one would march on the other, kill all the men, and take the women and children as slaves. Physical violence as a form of political action is an old tradition. The good news is that we live in a world where most of the violence is online, people screaming at each other. But what triggers a disordered person to go and commit physical violence? Or what triggers a really principled person to go and commit violence? Between an individual and a mob you also have copycats. Look, I was young, working in tech, and publicly visible when Ted Kaczynski was doing his bombing campaign. I was an obvious target, and I will say I have never opened physical mail since then. So it has happened in the past.

bayeslord: And it's a similar ideology, too.

pmarca: Yes. If it's literally the end of the world, then what would you do if you literally believed this was the end of the world? The answer is anything. You would do anything.

Beff: We have this joke that it's a cause area to kill Bayes ((?)). They are getting pretty desperate. They are trying to bait AI researchers or distract them, because that will slow down the timeline, because in their minds they are the saviors of the world, not the AI researchers. They want to ban the GPUs and they want to decelerate. Hopefully they don't get to the point of hunting down AI researchers, but maybe it will get there. We're strengthening a muscle for AI amplification, and either our government or a foreign government loves the fact that there's an existing "current thing" pipeline for psyop updates. That's dangerous, because it gives whoever decides the "current thing" a way to upload things straight into the collective consciousness. So the main narrative is that COVID is fine, it wasn't made in a lab, or wait, now it was, and we're ramping up for a conflict with China. The fact that we have this muscle to "download the current thing" at the speed of internet propagation is very weird, and it's a dangerous muscle to have.

Amjad: Also, the AGI mythology is very intertwined with our history going back thousands of years. There's lots of mythology around technology we can't control, and more recent pop culture items like HAL 9000's "I'm sorry Dave, I'm afraid I can't do that." The other day I tweeted this thing where a user was using replit's AI and it did something we didn't program it to do. It couldn't debug the user's program; for whatever reason it wasn't making progress, so it asked the user to upload a screenshot of the output, which was a reasonable thing to ask for. It was cool. It had the capability to read the file, but not to interpret the file; it thought it had the capability. I tweeted about that, and the user was visibly kind of weirded out by it. It went a little bit viral, and a few magazines picked it up and really sensationalized it. They turned it into "replit's AI is doing things its creators did not intend," and created this whole story around it, going from "oh, that's cool" to "oh, this machine is getting out of hand." I can see how this AGI-getting-out-of-hand story is so memetically potent, and as much as e/acc is interesting, it's still kind of a nerdy thing. We don't really have a memetically potent counter to the "AGI is killing everyone" meme. I think that's what we need.

bayeslord: I fully agree. I've been thinking about this a lot. I think there's a huge gaping hole in people trying to attack this problem. On this question, and on the compute question we were talking about earlier, I think the biggest thing to do is to accelerate and just take action now, as quickly as possible, and move on these things. We should talk about this in more detail. I think there's some opportunity here. First of all, the most important thing to say is: what is ChatGPT to a randomly sampled person in the US? It's this kind of interesting tool that maybe has helped them write something they didn't feel like writing, or answer a question they didn't feel like answering, or figure out some piece of code they are looking at. It sells itself at some level. It's not an evil robot with red eyes that shows up at your door; it's just the fabric of your new reality instead. Because of that, I think people will get used to it, end up liking it, and get attached to the comforts that ChatGPT brings. I think the acceleration beta that is there is probably almost enough. But there's some nuance there. It's a subtle memetic warfare kind of topic.

bayeslord: On the violence point, I've seen a lot of back and forth on this, but I want to say unequivocally that the acceleration is happening. Part of the observation is that this process is going on and it's baked into how the universe operates; locally we have this very durable and robust machine running, and there's no reason from our perspective to commit violence. Some people have accused us of inclinations towards violence, but that's not the case here. I just want to say I disavow any and all physical violence in the name of e/acc. When it comes to nudging people, or nudging EAs or MIRI people or AI doomer people about this, I think it can be unhealthy. My hope and my goal is that we can all focus on what the best version of the future looks like, keep that in mind, and move in that direction. If some violence happens, then let's try to understand what happened and probably ignore it as much as possible instead of promoting it. If anything happens, most of it is likely to be a sideshow.
