From extropians-request@extropy.org Tue Sep 7 01:08:14 1993
Return-Path:
Received: from usc.edu by chaph.usc.edu (4.1/SMI-4.1+ucs-3.0) id AA01651; Tue, 7 Sep 93 01:08:12 PDT
Errors-To: Extropians-Request@gnu.ai.mit.edu
Received: from news.panix.com by usc.edu (4.1/SMI-3.0DEV3-USC+3.1) id AA02672; Tue, 7 Sep 93 01:08:01 PDT
Errors-To: Extropians-Request@gnu.ai.mit.edu
Received: by news.panix.com id AA24416 (5.65c/IDA-1.4.4 for more@usc.edu); Tue, 7 Sep 1993 04:04:03 -0400
Date: Tue, 7 Sep 1993 04:04:03 -0400
Message-Id: <199309070804.AA24416@news.panix.com>
To: Extropians@extropy.org
From: Extropians@extropy.org
Subject: Extropians Digest
X-Extropian-Date: September 7, 373 P.N.O. [08:03:48 UTC]
Reply-To: extropians@extropy.org
Errors-To: Extropians-Request@gnu.ai.mit.edu
Status: RO

Extropians Digest       Tue, 7 Sep 93       Volume 93 : Issue 249

Today's Topics:

                                                              [1 msgs]
    ADMIN: Resent Messages                                    [1 msgs]
    CHAT: Birthday!                                           [1 msgs]
    HUMOR: _Telemachus Sneezed_ (was Re: PHIL: Galt Strike)   [2 msgs]
    Help                                                      [1 msgs]
    Human Population                                          [5 msgs]
    PHIL: Ethics, "Green Goo", human population               [1 msgs]
    WAR/NANO/LAW, "nanarchy"                                  [1 msgs]
    WAR/NANO/LAW, constraints and crime                       [1 msgs]
    WAR/NANO/LAW: automated defense and war                   [1 msgs]

Administrivia:
	No admin msg.

Approximate Size: 52358 bytes.

----------------------------------------------------------------------

Date: Sat, 4 Sep 1993 00:39:08 -0400
From: Duncan Frissell
Subject: Help
To: Extropians@extropy.org

::Help

--- WinQwk 2.0b#0

------------------------------

Date: Sat, 04 Sep 93 00:01:02 GMT
From: Michael Clive Price
Subject: Human Population

Edward J OConnell writes:
> Even Drexler admits that Malthus is right.
> Geometric growth is impossible.

Ray Cromwell responds:
> No, Drexler said _exponential_ growth is impossible. [...]

So, Drexler's wrong (again).  But could someone enlighten me as to the difference between geometric and exponential growth?  Where I live they are the same things.

> -- Ray Cromwell

Mike Price            price@price.demon.co.uk

------------------------------

Date: Fri, 3 Sep 1993 23:40:08 -0800
From: dkrieger@netcom.com (Dave Krieger)
Subject: HUMOR: _Telemachus Sneezed_ (was Re: PHIL: Galt Strike)

At 9:24 AM 9/3/93 -0700, Dave Krieger wrote:
>>Tangent: I recently reread a chunk of Illuminatus! and found the
>>commentary on the (fictional) novel _Telemachus Sneezed_ absolutely
>>hilarious, and rather on the mark.  I'd quote some, but I seem to be
>>a victim of state-dependent memory.
>>          Eli   ebrandt@jarthur.claremont.edu

Quoted in accordance with principles of "fair use" (for educational purposes only).  Please don't sue me, Mr. Wilson.  BTW, "Atlanta Hope" is the fictional author of _Telemachus Sneezed_:

Briefly, then, _Telemachus Sneezed_ deals with a time in the near future when we dirty, filthy, freaky, lazy, dope-smoking, frantic-fucking anarchists have brought Law and Order to a nervous collapse in America.  The heroine, Taffy Rhinestone, is, like Atlanta was once herself, a member of Women's Liberation and a believer in socialism, anarchism, free abortions and the charisma of Che.  Then comes the rude awakening: food riots, industrial stagnation, a reign of lawless looting and plunder, everything George Wallace ever warned us against -- but the Supreme Court, who are all anarchists with names ending in -stein or -farb or -berger (there is no _overt_ anti-Semitism in the book), keeps repealing laws and taking away the rights of policemen.
Finally, in the fifth chapter -- the climax of Book One -- the heroine, poor toughy Taffy, gets raped _fifteen_ times by an oversexed black brute right out of _The Birth of a Nation_, while a group of cops stand by cursing, wringing their hands and frothing at the mouth because the Supreme Court rulings won't allow them to take any action.

In Book Two, which takes place a few years later, things have degenerated even further and factory pollution has been replaced by a thick layer of marijuana smoke hanging over the country.  The Supreme Court is gone, butchered by LSD-crazed Mau-Maus who mistook them for a meeting of the Washington chapter of the Policemen's Benevolent Association.  The President and a shadowy government-in-exile are skulking about Montreal, living a gloomy emigre existence; the Blind Tigers, a rather thinly disguised caricature of the Black Panthers, are terrorizing white women everywhere from Bangor to Walla Walla; the crazy anarchists are forcing abortions on women whether they want them or not; and television shows nothing but Maoist propaganda and Danish stag films.  Women, of course, are the worst sufferers in this blightmare, and, despite all her karate lessons, Taffy has been raped so many times, not only by standard vage-pen but orally and anally as well, that she's practically a walking sperm bank.

Then comes the big surprise, the monstro-rape to end all rapes, committed by a pure Aryan with hollow cheeks, a long lean body, and a face that never changes expression.  "Everything is fire," he tells her, as he pulls his prick out afterwards, "and don't you ever forget it."  Then he disappears.

Well, it turns out that Taffy has gone all icky-sticky-gooey over this character, and she determines to find him again and make an honest man of him.  Meanwhile, however, a subplot is brewing, involving Taffy's evil brother, Diamond Jim Rhinestone, an unscrupulous dope pusher who is mixing heroin in his grass to make everybody an addict and enslave them to him.  Diamond Jim is allied with the sinister Blind Tigers and a secret society, the Enlightened Ones, who cannot achieve world government as long as a patriotic and paranoid streak of nationalism remains in America.

But the forces of evil are being stymied.  A secret underground group has been formed, using the cross as their symbol, and their slogan is appearing scrawled on walls everywhere: SAVE YOUR FEDERAL RESERVE NOTES, BOYS, THE STATE WILL RISE AGAIN!  Unless this group is found and destroyed, Diamond Jim will not be able to addict everyone to horse, the Blind Tigers won't be able to rape the few remaining white women they haven't gotten to yet, and the Enlightened Ones will not succeed in creating one world government and one monotonous soybean diet for the whole planet.

But a clue is discovered: the leader of the Underground is a pure Aryan with hollow cheeks, a long lean body, and a face that never changes expression.  Furthermore, he is in the habit of discussing Heracleitus for like seven hours on end (this is a neat trick, because only about a hundred sentences of the Dark Philosopher survive -- but our hero, it turns out, gives lengthy comments on them).

At this point there is a major digression, while a herd of minor characters get on a Braniff jet for Ingolstadt.  It soon develops that the pilot is tripping on acid, the copilot is bombed on Tangier hash and the stewardesses are all speed freaks and dykes, only interested in balling each other.
Atlanta then takes you through the lives of each of the passengers and shows that the catastrophe that is about to befall them is richly deserved: all, in one way or another, had helped to create the Dope Grope or Fucks Fix culture by denying the "self-evident truth" of some hermetic saying by Heracleitus.  When the plane does a Steve Brodie into the North Atlantic, everybody on board, including the acid-tripping Captain Clark, are getting just what they merit for having denied that reality is really fire.

Meanwhile, Taffy has hired a private detective named Mickey "Cocktails" Molotov to search for her lost Aryan rapist with hollow cheeks.

[...]

Cocktails Molotov, the private dick, starts looking for the Great American Rapist, with only one clue: an architectural blueprint that fell out of his pocket while he was tupping Taffy.  Cocktails's method of investigation is classically simple: he beats up everybody he meets until they confess or reveal something that gives him a lead.  Along the way he meets an effete snob type who makes a kind of William O. Douglas speech putting down all this brutality.  Molotov explains, for seventeen pages, one of the longest monologues I ever read in a novel, that life is a battle between Good and Evil and the whole modern world is corrupt because people see things in shades of red-orange-yellow-green-blue-indigo-violet instead of in clear black and white.  Meanwhile, of course, everybody is still mostly involved in fucking, smoking grass and neglecting to invest their capital in growth industries, so America is slipping backward toward what Atlanta calls "crapulous precapitalist chaos."

At this point, another character enters the book, Howard Cork, a one-legged madman who commands a submarine called the _Life Eternal_ and is battling _everybody_ -- the anarchists, the Communists, the Diamond Jim Rhinestone heroin cabal, the Blind Tigers, the Enlightened Ones, the U.S. government-in-exile, the still-nameless patriotic Underground and the Chicago Cubs -- since he is convinced they are _all_ fronting for a white whale of superhuman intelligence who is trying to take over the world on behalf of the cetaceans.  ("No normal whale could do this," he says after every TV newscast reveals further decay and chaos in America, "but a whale of superhuman intelligence...!")  This megalomaniac tub of blubber -- the whale, not Howard Cork -- is responsible for the release of the famous late-1960's record _Songs of the Blue Whales_, which has hypnotic powers to lead people into wild frenzies, dope-taking, rape and loss of faith in Christianity.  In fact, the whale is behind most of the cultural developments of recent decades, influencing minds through hypnotic telepathy.  "First, he introduced W. C. Fields," Howard Cork rages to the dubious first mate, "Buck" Star, "then, when America's moral fiber was sufficiently weakened, Liz and Dick and Andy Warhol and rock music.  Now, the Songs of the Blue Whales!"

Star becomes convinced that Captain Cork went uncorked and wigged when he lost his leg during a simple ingrown toenail operation bungled by a hip young chiropodist stoned on mescaline.  This suspicion is increased by the moody mariner's insistence on wearing an old cork leg instead of a modern prosthetic model, proclaiming, "I was born all Cork and I'm not going to die only three-fourths Cork!"  Then comes a turnabout scene, and it is revealed that Cork is actually not bananas at all but really a smooth apple.
In a meeting with a pure Aryan with hollow cheeks, a long lean body, and a face that never changes expression, it develops that the Captain is an agent of the Underground, which is called God's Lightning because of Heracleitus's idea that God first manifested himself as a lightning bolt which created the world.  Instead of hunting the big white whale, as the crew thinks, the _Life Eternal_ is actually running munitions for the government-in-exile and God's Lightning.  When the hollow-cheeked leader leaves, he says to Cork, "Remember: the _way up_ is the _way down_."

[...]

That scene where Taffy Rhinestone sees the new King on television and it's her old rapist friend with the gaunt cheeks and he says, "My name is John Guilt" -- man, that's _writing_.  His hundred-and-three-page-long speech afterwards, explaining the importance of guilt and showing why all the anti-Heracleiteans and Freudians and relativists are destroying civilization by destroying guilt, certainly is persuasive...  I still quote his last line, "Without guilt there can be no civilization."  Her nonfiction book, _Militarism: The Unknown Ideal for the New Heracleitan_, is, I think, a distinct letdown, but the God's Lightning bumper stickers asking "What is John Guilt?" sure give people the creeps until they learn the answer.

(end quote)

------------------------------

Date: Saturday, 4 September 1993 08:38:42 PST8
From: "James A. Donald"
Subject: WAR/NANO/LAW, "nanarchy"

In <9309031926.AA08282@geech.gnu.ai.mit.edu>, rjc@gnu.ai.mit.edu (Ray) wrote:
> 50s sci-fi movie.  In it, we learn that the civilized races of the galaxy
> have built a race of super-police robots that they themselves do not have
> the power of overcoming.

> Now, the question is: if we could build such robots how could we keep
> them from misusing their power?  How would the robots remain technologically
> competitive?  They would have to continually upgrade themselves to maintain
> their superiority over any crooks but that leaves the door open for
> mutations.  (e.g. they decide that police enforcement is a bad career
> choice and decide to take over the galaxy instead, making humans into
> slaves)

In _The Rings of the Masters_ by Chalker, a supercomputer is created to keep the peace.  It correctly decides that humans are dangerous to themselves and to each other, and that the only way to keep them safe is to keep them unfree and ignorant.

> Seems like throwing away the "key" is a bad idea.

The programmers created seven keys, encoded in seven rings, that had to be kept by the seven most powerful humans.  These keys, when brought together, would allow reprogramming the computer, or switching it off.  The computer could not disobey its programming to keep the keys safe and in the hands of humans with authority, but it ensured that the humans who had the keys were on seven separate planets, ignorant of technology.

The only way to keep the world safe and free, to ensure a market-based rather than predation-based ecology, is for lots and lots of separate conscious beings each to have very powerful weapons.  Freedom and total safety are incompatible.  The right to keep and bear arms.

 ---------------------------------------------------------------------
                        | We have the right to defend ourselves and our
 James A. Donald        | property, because of the kind of animals that we
                        | are.  True law derives from this right, not from
 jamesdon@infoserv.com  | the arbitrary power of the omnipotent state.

------------------------------

Date: Sat, 4 Sep 93 14:25:05 -0700
From: freeman@maspar.com (Jay R. Freeman)
Subject: Human Population

[ Dave Krieger suggests I relocate to a sparsely populated area in lieu of wistfully dreaming about a world with much lower population. ]

I actually have sampled areas of much lower local population density:  I now live in Palo Alto, on the shores of San Francisco Bay about halfway down to San Jose.  It is a rather nice suburb, with lots of trees, moderate numbers of birds and small wild animals (up to raccoon and opossum), with open areas ranging from coastal forest through chapparal (which I think I am spelling wrong), oak woodland and marine salt marsh, all within a few minutes' drive or bicycle ride.  I was born and raised in Burlington, Vermont, literally across the back fence from one of the last stands of climax white pines in the northeastern US (new development cut into very old estate).  I lived for a year and a half in Davenport, California, a town of several hundred population about twelve miles up the coast (logical north -- actually more like west) from Santa Cruz.  And the most crowded place I ever lived was Berkeley, California, when I was in graduate school.

The emptiest of these places wasn't empty enough.  The animal populations (I confess to being a closet rabbit-lover -- purely an aesthetic choice, I have no request that you agree with me) were well down from what might have obtained in a large and long-unpopulated (by humans) region.

Dave suggested Montana, explicitly.  I haven't lived there, but I have flown over the state at low altitude in a lightplane, and it, too, is virtually barren of animals.  Lewis and Clark traversed much of this area in their expedition in the first decade of the nineteenth century, and reported large-mammal wildlife population densities up to 10,000 per square mile -- that's like the great wildlife herds in the remaining big parks of east Africa.

I guess that what I mean by wanting to live in an environment with low non-uploaded human population density is that I want (1) large enough chunks of biome unimpacted by humans, to support such wildlife quantity and diversity, and (2) sufficiently few humans who would like to live in such an environment, that we can all go and do so -- at least, for those intervals when we choose to download -- without messing it up.

Notwithstanding these sentiments, I certainly consider myself a technophile and possibly even an extropian (slight hedge required because I am not a libertarian -- some day when things get dull perhaps I will post an essay on why libertarianism is inconsistent with extropian principles).  I'm not going to post a list of credentials, but I hope no one thinks I am an infiltrating luddite.  I'm just a used astrophysicist who likes to leave cat food out for the possums who live under the back deck.

                             -- Jay Freeman, (the other Jay, the other Freeman)

------------------------------

Date: Mon, 6 Sep 93 17:57:19 WET DST
From: rjc@gnu.ai.mit.edu (Ray)
Subject: ADMIN: Resent Messages

I've resent the messages that never went out _that I am aware of_.  If you posted a message 2 days ago and never got it back, let me know.  Some duplicates may have shown up; ignore them.

-- Ray Cromwell       | Engineering is the implementation of science; --
-- EE/Math Student    |  politics is the implementation of faith.     --
-- rjc@gnu.ai.mit.edu |       - Zetetic Commentaries                  --

------------------------------

Date: Mon, 6 Sep 93 18:28:56 WET DST
From: rjc@gnu.ai.mit.edu (Ray)
Subject: Human Population

Michael Clive Price () writes:
>
> Edward J OConnell writes:
>
> > Even Drexler admits that Malthus is right.
> > Geometric growth is impossible.
>
> Ray Cromwell responds:
>
> > No, Drexler said _exponential_ growth is impossible. [...]
>
> So, Drexler's wrong (again).  But could someone enlighten me as to
> the difference between geometric and exponential growth?  Where I
> live they are the same things.

Only if you define population growth in a stricter sense.  When I said geometric growth wasn't impossible, I wasn't speaking of a geometric progression but of growing _geometrically_.  If our civilization is taken as an expanding sphere, then its radius can theoretically grow at near the speed of light (so its volume, and hence the population it can hold, varies as the cube of time).

However, how should we define population growth anyway?  In the future it will most likely be composed of local exponential growth and global cubic growth.  A space-faring civilization traveling at near light speed would drop off "seed" populations in habitable areas as it moves along.  Each seed population would rapidly grow exponentially until a critical mass is reached (like the West); then growth would settle down to a sustainable rate.  Under this model, not only is exponential growth possible, it is desirable, in order to rapidly reach a sustainable industrial infrastructure.  As a rule of thumb, growth will probably always assume an exponential rate when the local environment permits it.

-Ray

-- Ray Cromwell       | Engineering is the implementation of science; --
-- EE/Math Student    |  politics is the implementation of faith.     --
-- rjc@gnu.ai.mit.edu |       - Zetetic Commentaries                  --

------------------------------

Date: Mon, 6 Sep 93 19:58:45 WET DST
From: rjc@gnu.ai.mit.edu (Ray)
Subject: PHIL: Ethics, "Green Goo", human population

Jay R. Freeman () writes:
> I sympathize with Wilson.  Let me outline a thought experiment
> that may illustrate why.
>
> Suppose I find an unclaimed piece of forest somewhere (I told you
> this was a thought experiment), and build my house in a nice tree.  I
> take up residence, keep things fixed up and neat, and chase off other
> critters who want to live there.  The fact that I had found, used,
> maintained and defended my tree would probably be grounds for most of
> you to ascribe property rights in it to me, so that if you should have
> some other pressing use for that particular tree, you would approach
> me with a proposed contract to exchange it for something else.
> Fine, except -- did I forget to mention? -- I'm not actually
> human.  I have a long furry tail, I gather nuts for the winter, and I
> chatter obnoxiously at bluejays, cats and picnicking extropians.  Thus
> even though I have found, used, maintained and defended my tree, most
> of you humans would probably go right ahead and take it away from me
> without any consideration for my property rights, as if the fact that
> squirrels are incapable of making and understanding contracts excluded
> them from participation, not only in economics but also in
> civilization itself.
> That's fine, too, because -- did I forget to mention? -- I'm not
> actually a squirrel, either.  I never said I was, did I?
> I'm really a cleverly-disguised scout module deployed by advanced entities from
> above the high transcend, where we have just learned to circumvent
> those physical conditions that have previously kept us out of the slow
> zone and the unthinking depths.  We considered the arguments presented
> in the thread between Krieger and Wilson a few tens of seconds after
> the big bang, and concluded that beings incapable of solving
> googolplex by googolplex arrays of googolplexth-order coupled
> inhomogeneous nonlinear partial differential equations in less than a
> femtosecond are intrinsically incapable of participation, not only in
> economics, but also in civilization itself.

[Your hypothetical mega-intelligent species must not be all that intelligent if they download into furry earth animals and get killed.  How did your squirrel rationalize the eating of nuts which might actually be small nanotech brains?]

Let's take this argument to the extreme.  How do you know that cockroaches aren't super-intelligent uploaded dinosaurs from the last singularity?  How many super-intelligent species do you kill every day by breathing, eating, and walking?  Can you prove beyond a shadow of a doubt that vegetables aren't intelligent species which operate thoughts over centuries?

Humans will never be able to calculate the total impact of their activities.  The rational course of action is, therefore, to do nothing!  Suicide is the only sure way of not harming anything.  We must never explore space, colonize other planets, exploit comets and asteroids because we could, unknowingly, destroy a complex ecosystem that has been running for millions of years.

You might argue that there is a middle ground.  If so, I challenge you to rationally calculate the amount of "allowable killing and murdering" of the environment we are entitled to, then.  Such a study would probably take decades; meanwhile technological growth and expansion would be retarded.  (waiting for the academics to finish their studies)

I propose (quite arbitrarily) we simply draw the line at humans.  We keep disregarding whale and rabbit property rights when doing so has utility, until we discover that they are sentient, they go extinct, or they start a war.  Natural selection is not immoral, but it is incompassionate.

-- Ray Cromwell       | Engineering is the implementation of science; --
-- EE/Math Student    |  politics is the implementation of faith.     --
-- rjc@gnu.ai.mit.edu |       - Zetetic Commentaries                  --

------------------------------

Date: Mon, 6 Sep 1993 20:45:47 -0500
From: "Phil G. Fraering"
Subject: HUMOR: _Telemachus Sneezed_ (was Re: PHIL: Galt Strike)

Hmmph.  That wasn't really funny.  It had potential, in some parts...  The Guilt part really had potential, if you've read Donaldson's stuff...  (Which I was able to finish, unlike some of Ayn Rand's books).

pgf

------------------------------

Date: Mon, 6 Sep 93 19:01:13 -0700
From: freeman@maspar.com (Jay R. Freeman)
Subject: Human Population

Phil G. Fraering writes:
> Jay writes about how he'd like a lower population density.
>
> I'd like to ask if he thinks it possible for a slightly higher
> population density than we have right now to treat the environment
> a lot better than we are currently, without an egregious use of
> force.

I think that our present population, or a slightly higher one, could certainly treat the environment better than we do now.  I believe it would require a substantial change in the life styles of many in order to do so.
Whether such a change could be brought about without "an egregious use of force" is a bit beyond my capabilities with a crystal ball.  (Please do not interpret that last sentence as either a call for the use of force or an excuse for it.)

                                                        -- Jay Freeman

------------------------------

Date: Mon, 6 Sep 93 19:04:53 -0700
From: drexler@netcom.com (K. Eric Drexler)
Subject: WAR/NANO/LAW, constraints and crime

In response to Robin's proposal:

>Some conservative estimate would be made of what any entity might
>think it had to gain by causing harm, of the amount of harm it might
>thereby cause, of the chance that that entity could be identified as
>the culprit if it did so cause harm, and of the cost of such
>investigation.  (Private speculators can estimate some of this.)  The
>entity would have to then show it had enough wealth or crime insurance
>to cover all this, or enough wealthy ancestors and siblings to "vouch"
>for it.

This seems workable on the assumptions (1) that no feasible crime can overthrow the enforcement mechanism itself, and (2) that all crimes are associated with a significant chance of apprehension.  The "crime insurance" concept (credit is due to Chris Peterson here) provides a set of ground rules under which insubstantial entities gain enough substance that punishment becomes meaningful.  Caveat (1) can be categorized as a military problem; caveat (2) may make one think twice about well-protected anonymity.

>So regarding Eric's questions, parent and sibling relationships need
>not be traced with certainty.  Those entities that decline to make
>such tracing easy would pay for the consequences of this some other
>way.

In general, this scheme requires that entities be "born" with ID cards, and possibly with bank accounts or insurance policies.

Eric Drexler

------------------------------

Date: Mon, 6 Sep 93 19:05:08 -0700
From: drexler@netcom.com (K. Eric Drexler)
Subject: WAR/NANO/LAW: automated defense and war

Robin begins by paraphrasing a scenario that I suggested, and with which he disagrees:

>That is, within a few years (months?), joe ordinary nation has started
>nanotech, developed AIs which do R&D a million times faster than us,
>built an overwhelming military force, taken over the world, and reached
>the end of (military) technological innovation for all time.  All while
>the other guys are still in committee.

Please!  Not "joe ordinary nation", but "Joseph Superior Nation (or Coalition) the First"; show a little respect, or you'll get in big trouble with the world rulers.  Actually, for most practical purposes it isn't necessary to take over the world; intimidation can do most of the job.  Also, reaching "the end of (military) technological innovation for all time" is unnecessary and presumably impossible, and I don't remember mentioning a committee process.  Robin's paragraph does a nice job of light ridicule, but I don't detect an argument against the scenario.

He then provides a further summary:

> some political entity will (for a time) be able to dominate the world,
> and will be terrified of the consequences of not doing so, because of the
> risks associated with an arms race arising in a more symmetrical situation
> ... [given] a technology so different from today's that past military
> experience provides no basis for predicting the stability of a
> multilateral competition. ... [It may ask itself] With this (absolutely
> corrupting) power in hand, how can that political entity or coalition
> relinquish its power safely? ... without turning it over to potential enemies.
>
>Fortunately,
>
> technological means emerge for projecting military and police power
> with highly automated systems ... [using] entities far more stable and
> predictable than human beings and able to think orders of magnitude
> faster ... [which can] provide a stable framework for security (in a
> military sense and perhaps a police sense). ... vastly less intrusive
> than modern governments ... attempt to build those [basic] principles
> into the system and then throw away the key. ... for example,
> suppressing the transfer of resources by forcible seizure

These paragraphs are highly edited and rearranged.  Don't quote them as mine!

>Where to start?  First let me say that I can certainly imagine treaties
>between suspicious military powers which are enforced in part by
>automated systems

This starts by assuming (without argument) a relatively symmetrical situation; I have already argued that this is unlikely.

>Specifically, I can imagine a
>coarse-grain automated monitoring system, broadcasting the situation at
>many militarily strategic points to many military powers (or
>distrusting internal organizations).  This would require enough
>monitoring sites to detect large scale military movements, but not
>enough to see who stepped on your geraniums.

Recent arms control treaties include provisions for on-site inspection: for example, Russians inspecting U.S. civilian chemical plants out of concern for chemical weapons production.  It is not clear what will constitute a strategic point (or material, or facility) in a civilization spread across free space, and using molecular manufacturing systems to transform materials from one form to another, potentially with great speed.  I'd like to develop ideas that can handle the worst plausible cases, while hoping that problems are not actually so bad.

>...Even here I find it hard to imagine throwing away the key, though I
>could see requiring a high degree of unanimity to make changes.

Demanding enough unanimity is essentially equivalent to throwing away the key.  (By the way, do we count heads, MIPS, dollars, or guns?  Under what conditions is it permissible to vote a change that permits strong military forces to smash and plunder rich, peaceful societies?)

>But I don't understand why, in a
>nanotech era, a single power should be so much more terrified of
>breaking up into multilateral competition than they would be now.

We're contemplating a future in which multiple basic technologies (computation, materials, energy conversion, etc.) have recently advanced by orders of magnitude, in which weapon system performance and emergent opportunities for strategic surprise are radically different, and in which weapon production can be orders of magnitude faster than today.  Novel self-replicating systems are possible, including self-replicating weapons.  Machine intelligence and/or uploaded human minds may take the field as strategic actors, with radically increased speed of thought and different vulnerabilities, values, and goals.

Imagine that you are a reasonably cautious statesman with the ability to keep this technological potential under more or less centralized control.  Now, someone suggests dispersing this technological potential among multiple competing military forces (human or otherwise).  Wouldn't our statesman consider this to be a form of Russian roulette with the human race at the muzzle end of the gun, unless the proposal included some remarkably reliable safeguards?
You may recall that the nuclear arms race had people a little nervous, despite the rather tame and predictable properties of nuclear weapons.

>But the publicly
>visible efforts by Eric and the Foresight Institute have largely
>ignored key policy issues like estimating the speed or scope of a
>nanotech transition.

Indeed.  After one has been ridiculed for suggesting that 2 + 2 = 4, one may spend considerable time defending that point before publicly arguing that 10 x 10 x 10 = 1000, or daring to venture that 1000 >> 4.

>If I were to guess, I'd say Eric thinks that soon after replicators one
>could easily create many cubic meters of nanocomputers, and that within
>a few years such computers would naturally become advanced AIs, who
>could then build cubic kilometers of nanocomputers, and then the game
>is up.  I think that AI (and even huge nanocomputers) is much harder
>than this...

Not quite the right guess.  Computers won't "naturally become advanced AIs"; it will presumably take hard work.  The machines Robin is willing to contemplate would deliver on the rough order of 10**27 times the computational power of the machines on which AI research has been done in the past (a cubic kilometer of nanocomputers would presumably be spread over a large area in space to make power supply and cooling tractable -- using some typical numbers, Robin's suggested machine consumes about a millionth of the solar power output).  This makes the difference between trying to squeeze something like full human intelligence out of a machine with on the rough order of a millionth of the computational power of a human brain (the past AI effort) and trying to develop a moderately superhuman intelligence (at first) using machines with about 10**21 times the computational power of a human brain.  I suspect that this changes the problem quite fundamentally, and that past experience with AI research on workstations is a poor guide to the difficulty of AI on super-super-super-(etc.)-computers.  My bet is that we will get real machine intelligence rather quickly once we have a billionth of this computer capacity, but it is worth considering alternative scenarios.

Machine intelligence, however, has nothing to do with the basic argument for a fast military transition.  This rests instead on the ability of molecular manufacturing systems to make hardware that is far superior to today's military hardware (often by one or more orders of magnitude) and far cheaper (typically by several orders of magnitude).  Combine this with the ability to produce a billion copies of a system shortly after demonstrating one good prototype, and add to this the likely ability to translate preexisting designs into moderately-better, much-cheaper forms for molecular manufacturing.  The result is an ability to swamp a conventional opponent in a lightning, one-sided arms race.

These capabilities won't happen immediately after the development of a self-replicating system, but they seem likely to emerge soon after, by historical standards.  There are three reasons for this: (1) the historically-unprecedented availability of computer simulations and computer-aided design techniques, which will enable designs to precede tool development, (2) the sheer speed of experimentation possible with fast manufacturing processes, and (3) the ease with which a poor design based on (for example) diamond-fiber composites and nanocomputers can beat a good design based on conventional materials and devices.
In short, design and experimentation can be fast, and the competitive hurdles to be surmounted are low so long as one has a monopoly in the basic technology.  Finally, letting opponents compete in the game will be unpopular.  When I briefed a room full of flag and staff officers at the Pentagon last year, there was visible agreement with the proposition that the consequences of these technologies for military strategy and tactics are unpredictable, and that this gives a considerable incentive to avoid an arms race in the technology -- that is, to prevent the emergence of the technology outside the leading coalition.

In light of the above, I am persuaded that one must take seriously a range of scenarios in which there initially is a massive imbalance of power favoring a single coalition.  I am not quite ready to dismiss alternative scenarios, but they do not seem as likely.

>I think that AI (and even huge nanocomputers) is much harder
>than this,

Why are arrays of computers so hard to build, if one settles for long-range connectivity not better than a three-dimensional mesh?  Yes, this is a serious constraint for many problems, but physics is cruel.

>and therefore expect uploading well before AIs,

From a strategic perspective, how does an upload differ from an AI?  (To model the thinking of voters, politicians, etc., one cannot take the perspective of the uploads.)  With a roughly million-fold speedup [(millisecond synaptic firing) / (nanosecond gate switching)], after one day an uploaded person has experienced 3,000 subjective years, perhaps with enormous opportunities for self-modification.  It is then a smart software system about which one can (at present) say very little with confidence -- much like a hypothetical AI.

>slower more incremental growth of nanotech economies and armies

With a capital stock that can double in an hour (see Nanosystems 14.5, "Comparison to conventional manufacturing"), why expect slow growth?  This seems a reassuring but unlikely assumption, and it isn't the hard case to consider.

>and that there may
>never be other things that think millions of times faster than uploads.

This is plausible, but emphasizes the parallels between uploads and AIs.

>Eric asks "just what terribly-wrong outcome should we fear?".  As I
>said before, I fear a system trying to prevent too many useful actions
>it could not tell from potential coercion,

That is, a bad design; might we do better?

>it costing too much

Even a large resource cost might be tolerable, if it made the rest of the resources far more valuable (e.g., by eliminating the functional equivalent of military-support taxes and risks of war).  A good design might in fact consume few resources.

>and looking too ugly,

Ah, esthetics...

>it preventing us from using more familiar punishments
>to deter types of coercion the system doesn't cover,

Again, a bad design.

>and most of all the system being taken over by despots.

Again, a bad design.  How might one get a good design?  Let us assume that we are too stupid to solve the problem, and even too stupid to formulate the problem correctly (taking into account physics, the structure of the universe, uncertainties in both, technological cleverness, tradeoffs among constraints, freedoms, risks, everything that Robin and I have considered, and more).  Let us assume that we have over 10**21 brainpower to apply to the problem (see above), and that we have some ability to get that brainpower to pursue complex ends, perhaps by applying agoric-style market incentives in a competitive context.
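[A rough numerical check of the figures quoted above, sketched in Python.  The millisecond synapse time, nanosecond gate time, one-hour capital doubling time, and the 10**-6 / 10**21 brain-equivalent figures are taken as quoted; nothing else is assumed beyond simple arithmetic.]

    # Back-of-the-envelope check of the numbers quoted above.
    # All inputs are the figures given in the text, not independent estimates.

    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    synapse_time = 1e-3      # seconds per synaptic firing (as quoted)
    gate_time = 1e-9         # seconds per logic-gate switch (as quoted)
    speedup = synapse_time / gate_time                  # ~1e6-fold speedup

    subjective_years_per_day = 24 * 3600 * speedup / SECONDS_PER_YEAR
    print(f"speedup: {speedup:.0e}x")
    print(f"subjective years per wall-clock day: {subjective_years_per_day:,.0f}")
    # -> about 2,700, i.e. the "3,000 subjective years" order of magnitude.

    # Hardware ratio: past AI machines vs. the proposed nanocomputer array.
    past_ai_machine = 1e-6   # human-brain equivalents (as quoted)
    proposed_array = 1e21    # human-brain equivalents (as quoted)
    print(f"hardware ratio: {proposed_array / past_ai_machine:.0e}")  # ~1e27

    # Capital stock doubling every hour: cumulative growth factors.
    print(f"growth per day:  2**24  = {2 ** 24:,}x")
    print(f"growth per week: 2**168 ~ {2.0 ** 168:.1e}x")
    # Even allowing large inefficiencies, "slow growth" is not the default.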
To sketch an approach to an approach: one might first structure incentives (in an interactive system with many human information "buyers") such that 10**18 brainpower is applied to generating problem formulations that take into account every concern that a person might recognize as relevant, given that each such concern is spelled out or otherwise indicated in careful prose, animations, mathematical models, poetry, or whatever.  One might apply a further 10**18 brainpower to criticizing these problem formulations.  One might apply a further 10**18 brainpower to sorting among these formulations and criticisms to find the ones worth bringing to human attention.  One might apply yet another 10**18 brainpower to attempting to devise systems that perform well according to the more attractive problem formulations (and avoid performing unacceptably as judged by Robin's by-then-classic criteria), and another 10**18 brainpower to criticizing these proposals, for example, by finding plausible ways to break them.

This still leaves about 10**21 brainpower (about 10**30 MIPS) for other purposes, such as running physical simulations to model proposed systems or attacks on them, or running war game models at higher levels of abstraction.  One can even do many actual, physical experiments in a short time, using molecular manufacturing to make the devices.

After all this cogitation has churned for a million subjective years (i.e., one calendar year), with ongoing feedback from a million or more interested human beings, it might be that (with another 10**18 brainpower to help in the sorting) one can identify some remarkably attractive options for achieving genuinely desirable long-term goals.  These might not include stable, automated security arrangements, but it is hard to see why they wouldn't, unless through sheer dislike of automation, or of security, or through a preference for permanent instability at the foundations of civilization (note that what is stable may be some highly abstract rules).  Alternatively, it may turn out that proposals very much like Robin's appear viable after this amazingly more detailed scrutiny, at which point they would be very credible.

Is this a satisfying answer?  Probably not, in some visceral sense.  As someone who thinks for a living, I find the idea of solving future problems using future machine intelligences to be distasteful.  But this "solution" presents so many problems of its own that one could keep quite busy with it.

>Eric "would encourage Robin to present ideas for addressing issues of
>short-term and long-term military stability".  I focus on imagining the
>folks in some region trying to contract for defense services, and
>looking for good indicators that the folks they contract with won't
>enslave them or roll over should someone try to invade.  My best idea
>there is for them to look at betting markets on this question, where
>the market speculators are in distant places and so are not threatened
>by a bad local outcome.  This is not much help, though, if a single
>military power is the clear global military equilibrium.

Yes: "I'm sorry, but we won't contract to defend you because we'd surely lose.  Yes, that's how the military balance has tilted, and no, we weren't in charge, so don't blame us.  Many civilizations have fallen to invaders, often quite nasty ones.  Are there any messages you'd like us to record before they sweep through and destroy you and your friends?  They'll eat your libraries, you know."

Tell me more about military stability, please.
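[For concreteness, a tally of the brainpower budget sketched a few paragraphs above, in the same Python style.  The 10**18-sized allocations are as stated in the text; the task labels are paraphrases, and the 1.1e21 total is only an assumed stand-in for "over 10**21".]

    # Bookkeeping for the allocation sketched above.
    # Units: human-brain equivalents.  The 1e18-sized allocations are as
    # stated; 1.1e21 stands in for "over 10**21" and is only illustrative.
    total = 1.1e21

    allocations = {
        "generate problem formulations":             1e18,
        "criticize the formulations":                1e18,
        "sort formulations and criticisms":          1e18,
        "devise systems meeting the formulations":   1e18,
        "criticize / try to break those designs":    1e18,
    }

    spent = sum(allocations.values())
    print(f"directed effort: {spent:.0e}")                    # 5e+18
    print(f"left for simulation, war-gaming, and physical "
          f"experiments: {total - spent:.2e}")                # still ~1e+21
    # The directed effort is a rounding error on the total, which is why the
    # remainder can still be described as "about 10**21 brainpower".  (The
    # later "another 10**18 ... to help in the sorting" changes none of this.)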
You may have a solution, and if it would work and preserve the kind of world one would like to live in, then I'd probably prefer it to an ugly automated defensive system.  Spontaneous orders are (as you know from years of discussion) more in keeping with my political and esthetic preferences.  Spontaneous orders are, however, built on rules: economic orders are built on legal orders which are built on constitutional orders; everything is built on physical law.  It may be that a layer of a new kind, buried down at some level, would facilitate the sorts of spontaneous orders we prefer.

Eric Drexler

------------------------------

Date: Mon, 6 Sep 1993 22:13:05 -0400 (EDT)
From: Harry Shapiro
Subject: ExI: Extropy #11

I found Extropy on sale at St. Marks books, a cool NYC book store.

/hawk

--
Harry S. Hawk                                        habs@extropy.org
Electronic Communications Officer, Extropy Institute Inc.
The Extropians Mailing List, Since 1991

------------------------------

Date: Mon, 6 Sep 1993 20:59:07 -0500
From: "Phil G. Fraering"
Subject: Human Population

1. Sorry, folks, but I think I misspelled whatever I was trying to say with 'egregious.'

2. Well, that'll be another message.

pgf

------------------------------

Date: Mon, 6 Sep 1993 21:01:28 -0500
From: "Phil G. Fraering"
Subject: CHAT: Birthday!

For your information, today, September the Sixth, is my birthday.  I just thought I'd mention that.  (Please excuse me, I'm having line noise problems right now; it's part of that perverse law of the universe that means that extropians isn't going out while I have access to a good terminal on-campus).

Also, for the benefit of those who have always wondered: Yes, it is called Labor Day because it's the day my mum went into labor.  If you had the day off today, yes, you're welcome.

pgf

------------------------------

End of Extropians Digest V93 #249
*********************************