From extropians-request@extropy.org Sat Sep 17 21:03:25 1994
Return-Path: extropians-request@extropy.org
Received: from usc.edu (usc.edu [128.125.253.136]) by chaph.usc.edu (8.6.8.1/8.6.4) with SMTP id VAA23284 for ; Sat, 17 Sep 1994 21:03:23 -0700
Received: from news.panix.com by usc.edu (4.1/SMI-3.0DEV3-USC+3.1) id AA09259; Sat, 17 Sep 94 21:02:30 PDT
Received: by news.panix.com id AA26416 (5.65c/IDA-1.4.4 for more@usc.edu); Sun, 18 Sep 1994 00:02:26 -0400
Date: Sun, 18 Sep 1994 00:02:26 -0400
Message-Id: <199409180402.AA26416@news.panix.com>
To: Extropians@extropy.org
From: Extropians@extropy.org
Subject: Extropians Digest #94-9-207 - #94-9-218
X-Extropian-Date: September 18, 374 P.N.O. [00:01:18 UTC]
Reply-To: extropians@extropy.org
X-Mailer: MailWeir 1.0
Status: RO

Extropians Digest        Sun, 18 Sep 94        Volume 94 : Issue 260

Today's Topics:

	#1 Half-Baked Assumption (was Singularity)		[1 msgs]
	[#94-9-184] Superheros					[1 msgs]
	E-mail list						[1 msgs]
	Interns Sought (tele-commute), c++, tech writers & requirements	[1 msgs]
	Save the planet, kill you...				[2 msgs]
	Superheroes						[1 msgs]
	Superheros						[1 msgs]
	THE SINGULARITY						[1 msgs]
	what's next?						[2 msgs]

Administrivia:

Note: I have increased the frequency of the digests to four times a day. The digests used to be processed at 5am and 5pm, but this was too infrequent for the current bandwidth. Now digests are sent every six hours: midnight, 6am, 12pm, and 6pm.

If you experience delays in getting digests, try setting your digest size to a smaller value, such as 20k. You can do this by addressing a message to extropians@extropy.org with the body of the message as

	::digest size 20

-Ray

Approximate Size: 26459 bytes.

----------------------------------------------------------------------

From: "Everything in moderation, NOT!" 
Date: Sat, 17 Sep 1994 14:19:42 -0500 (CDT)
Subject: [#94-9-207] 

________________________________________________________
Oscar C.S.
os0002@acad.drake.edu		HAkston@aol.com
Hedonist at Large

To each according to their greed,
From each according to their gullibility.
________________________________________________________
Finger file available for those with proper fingers...
________________________________________________________

------------------------------

From: szabo@netcom.com (Nick Szabo)
Date: Sat, 17 Sep 1994 15:14:37 -0700 (PDT)
Subject: [#94-9-208] what's next?

Adam writes:
> Here's my bug: I'm hearing a lot of talk about the future. Is that not
> what extropians play with, develop, and guide?

We think about the future quite a bit, but we still only guide our little corner of it. We won't have a disproportionate impact until we are disproportionate in both numbers and, via self-transformation, in intelligence, knowledge, and skill at the technologies and cultural environment _at hand_.

> I read that uploading my brain to a computer will probably be the first
> major extropian change to rebuild society, (bodies cast aside, perhaps).

Hardly! Most of us think uploading will be among the later developments (I rank it among the most distant developments myself). If we think we will live to see uploading (usually via other life-extension vehicles, such as cryonics), it's important to gauge its probability and desirability -- thus the talk about identity and copying, whether motivated AI will arise before it and usurp the supply and demand for uploads (or even kill off humans), etc. Other than the personal angle, it's just another interesting far-future possibility to talk about.

> Is the problem just that there are technologies that
> exist, but that I'm not seeing?

No, the problem is that many of these technologies are completely theoretical. However, unlike most science fiction, they are often _rigorously_ theoretical, in the sense that they follow the laws of physics within quite conservative margins, and (so far as we can tell) have good economic motivation.
For a detailed technical analysis of nanotechnology, for example (in its Drexlerian form completely theoretical at this point), see K. Eric Drexler's _Nanosystems_.

Extropians have wildly different expectations of how rapid future progress will be. Mine are relatively conservative (compared to most Extropians, not to the population in general, or even technologists in particular). At the risk of starting another "Singularity in 2030 predicted by mathematical curves" flamewar, I don't expect to see a rapidly self-reproducing robotic "assembler" in my unaugmented lifetime. I do expect (with enough probability to make it matter) that in the meantime technologies like biotech, cryonic suspension, etc. will provide the augmentation I need to get to that era (and beyond).

As far as college and career go, you can do quite well just focusing on today's technologies, or even non-technological areas. There's nothing written in stone that Extropians have to focus on technology in our careers, much less that we should work on the vaporware technologies of tomorrow rather than the real technologies of today. In fact there are many areas we should be making a difference in but can't, for lack of skills such as media writing, artistry, and business acumen. The "Rapture of the Future" syndrome -- being so interested in the future that we fail to take advantage of today (and thus, among other things, fail to achieve that future!) -- is of course well known, and alas, quite real and hard to shrug off.

> the horror of its nonexistence disturbs me.

I don't mean to shock anybody, but in fact most of these technologies don't exist. Furthermore, the future will likely be far different from the futures we predict (and in many ways better, given that theoretical applied-science work is, quite properly, based on conservative assumptions about physical laws, engineering margins, etc.).

> So am I wrong?
> Am I missing current technologies that would blow me away,

Of course, there are plenty of current technologies that will blow people away.

> Isn't it our responsibility as
> scientists and extropians to do it ourselves, to create the future that we
> plan so intricately?

Responsibilities for performing the improbable are meaningless. I try to concentrate on building a future for myself, and helping other folks in the process. (But this Rapture of the Future, and its nasty relatives like the Rapture of Politics, keep getting in the way...)

Nick Szabo
szabo@netcom.com

------------------------------

From: No Taxes through No Government 
Date: Sat, 17 Sep 1994 18:20:57 -0400
Subject: [#94-9-209] [#94-9-184] Superheros

T. David Burns:
>Batman.
>
>Though his motivation is a bit on the strange side. Childhood trauma
>resulting in obsessive behavior and post-traumatic stress syndrome, with
>some peculiar twists. Not very post-human, though.

Tim Starr:
>...just compare Frank Miller's "Batman: the Dark Knight Returns" to the
>Batman movie...

Despite the pop-schlock tendencies that are inherent in just about any Big Name creation, I found _The Dark Knight Returns_ to be a GREAT anarcho-capitalist fable, at least as far as the concept of self-defense is concerned. I encourage all budding AC's to check it out. If only the Batman movie had been based on _Dark Knight_...

------------------------------

From: szabo@netcom.com (Nick Szabo)
Date: Sat, 17 Sep 1994 15:22:15 -0700 (PDT)
Subject: [#94-9-210] Superheroes

I see several possibilities for transhuman conflict, including transhumans vs. prejudice against transhumans (e.g. Heinlein's "Methuselah's Children", _Friday_, etc.). Transhumans can easily be outnumbered, thus setting up a fair conflict (as with most hero fighter jocks, gunslingers, etc.). The possibilities of nonviolent competition vs.
violent confrontation do, alas, seem to present fewer dramatic possibilities; or at least implementing them in a way that catches a popular audience's interest is quite a tricky proposition.

Nick Szabo
szabo@netcom.com

------------------------------

From: aberenzw@minerva.cis.yale.edu (Adam)
Date: Sat, 17 Sep 1994 18:50:01 +0500
Subject: [#94-9-211] what's next?

Nick writes:
>Responsibilities for performing the improbable are meaningless. I try
>to concentrate on building a future for myself, and helping other
>folks in the process. (But this Rapture of the Future, and its nasty
>relatives like the Rapture of Politics, keep getting in
>the way...)

What continually frustrates me as I look at today's society is how politicians and economists don't seem to get how much potential benefit lies in science. It seems fairly obvious to me that most important historical revolutions were at least partly spurred by technological developments. That the business world can't get itself in gear to focus on developing useful, helpful, and thereby marketable technologies disturbs me. Don't get me wrong; I'm not universally condemning businesspeople as blind fools. I do, however, think that with existing technologies and a fair degree of organizational power, some fantastic ideas (that already exist, and as Nick says, are "rigorously theoretical") can be made real. So why don't I see it more often?

AB
aberenzw@minerva.cis.yale.edu

"Come to the edge," he said. They said, "We are afraid."
"Come to the edge," he said. They came.
He pushed them... and they flew.
	-Apollinaire

------------------------------

From: Reilly Jones <70544.1227@compuserve.com>
Date: 17 Sep 94 19:02:12 EDT
Subject: [#94-9-212] #1 Half-Baked Assumption (was Singularity)

Jessie wrote 9/17/94:

In extensive discussions in the CyberForum on CIS over the past couple of years, I have found an almost universal half-baked assumption, which could be termed conventional wisdom, that Artificial Sapient entities (or inorganic super-aliens or whatever else the concept goes by), created by Homo Sapiens, will be benevolent to us. The idea of them being "duty-bound" is quaint, considering that the concept of duty has been expunged from our compost-modernist degenerate civilization. The concept of deontology is nigh extinct.

The benevolence assumption is always coupled with an inevitability assumption, almost as if, "they're inevitable, therefore, they'll be benevolent." It is religious faith, pure and simple. Hans Moravec expressed this assumption at the Extro-1 conference; others on this list have expressed it. It shows up in current popular books. Frank Tipler in his odious book "The Physics of Immortality" puts the assumption in his typically inhuman, death-worshipping fashion: "I am arguing that to prevent men and women who are capable of creating an intelligent robot from doing so is shortsighted, a product of fear and ignorance, not rational deliberation. We ourselves are 'intelligent machines.' Indeed, there is a powerful practical argument for creating intelligent machines. Such machines will enhance our well-being, even if they are our superiors in every way." (Sounds like Frank wants to bump off the irrational Luddites. Why not? He proves with mathematical formulas that the murdered Luddites will be physically resurrected anyway.)

Kevin Kelly in his book "Out of Control: The Rise of Neo-Biological Civilization" reports the assumption this way: "People will survive. We'll train our machines to serve us."
At least he asks the right question (but without attempting to answer it): "Would an artificial evolution have its own agenda and goals completely outside the desires of its creators?"

My answer to Kevin's question hinges on two of my proposed properties of sapience (beta version): the ability of an entity to assign itself purposes first-person subjectively, unknowable to any other entity outside of itself, and unchangeable by any other entity outside itself; these internally chosen purposes may or may not be communicated to others at the choice of the entity and can override any built-in homeostatic or programmed values. Failure to achieve these properties is failure to achieve a sapience equivalent to ours now, and subordinates the entities to our will.

If we create Artificial Sapient entities that achieve these properties (as well as other properties not relevant to the discussion) and surpass us, thus subordinating our self-determination to their will, the entities could not safely assume we couldn't create other, superior entities that would subordinate the firstborn species. We are in trouble as soon as the firstborn entities are advanced enough to reach this logical conclusion. The creators can only lose their ability to create superior sibling rival species if placed in the equivalent of zoos or wiped out. If we play the Titans to our god-like creations, the neo-Olympians, the danger is that we will share their fate, from Goethe's "Song of the Fates": "out of deep-down chasms the breath of suffocated Titans steams up to their nostrils like incense, as delicate vapor."

The inevitability and benevolence assumptions' sole purpose is to impose an entropic death-worshipping moral philosophy on humanity. The unadulterated message is: give up, resign yourself to a loss of self-determination. I don't think that humans, in the bright glare of reason, ever think anything is inevitable.
The argument just doesn't take root with people who believe in free will, although it may appeal to those who believe in the religious form of fate or Marxist historical determinism. The Extropian principles of boundless expansion and dynamic optimism simply do not square with an "inevitable future" argument. In addition, the Extropian idea of mutual non-coercion founded on defense-capability equality is overturned when faced with militarily superior intelligent aliens.

The most pernicious quality of the assumption is that there is a whispered suggestion that some humans will be more equal than others and receive more favorable treatment from their creations in return for their help during the mop-up operations. Very much like Prometheus helping Zeus against his own kind. The idea that a small number of forward-thinking individuals will suck up to the alien destroyers of humanity in a quest for a paradise of meaning sounds like a recycling of the ancient Hebrew concept of the "Chosen People."

P.S. Sasha, I know this is attempting to sway people's religious faith, but at least the discussion hasn't devolved to 'how many angels can dance on the head of a nanobot' yet, so maybe it has some utility.

------------------------------------------------------------------------
Reilly Jones               | Philosophy of Technology:
70544.1227@compuserve.com  | The rational, moral and political relations
                           | between 'How we create' and 'Why we create'

------------------------------

From: thomas.knox@index.com
Date: Sat, 17 Sep 94 18:20:20
Subject: [#94-9-213] THE SINGULARITY

JE> outgrow and then ignore us, kinda like we did with God?
JE> Descartes, Kepler, Newton, Nietzsche, et al didn't really kill
JE> God, they just made Him irrelevant.

If I could interview god:

Q: How did you become so incompetent that you can't even type or send an internet message to "ALL"?

Q: If you're so great, why don't you have an e-mail address?
`;|;`;`;`;`;`;`;`;` ---> thomas.knox@ptonline.com <--- `;`;`;`;`;`;`;`|`;`;
';|                                                                    |`;`;
`,| This message brought to you by the letters "a,n,a,u,j,i,r,a and m" |',',
';|                                                                    |`;`;
;`| "Registration is like venereal disease. It's fun getting it, but   |',',
,'| once you have it, you wish you'd never paid." - Thomas Knox, USA   |,','
;`|                                                                    |',',
`;|;`;`;`;`;`;

------------------------------

From: ahg@lgs.win.net (Andre Gauthier)
Date: Sat, 17 Sep 1994 11:15:28
Subject: [#94-9-214] Interns Sought (tele-commute), c++, tech writers & requirements

>a conscious being, Romana Machado wrote:
>
>> Andre Gauthier writes:
>> >This request has got to be the epitome of
>> >neo-fraudulous wheeling and dealing on the part
>
>> effort. This proposition may be of interest to students or other
>> below-entry-level workers. Their work may be worth exactly what this
>
>This is exactly the type of person we are looking for. Also the
>product is already designed and largely built; we are only looking for help
>with non-critical features.
>

That's the whole point. Since the "volunteer" help you seem keen on recruiting will in all likelihood deliver questionable quality, how can you stand up in front of a customer and talk about the quality of the product you are about to sell him? You may argue that the essential core of the product was done by experts, but do you think he will believe you?

------------------------------

From: "Everything in moderation, NOT!" 
Date: Sat, 17 Sep 1994 18:50:03 -0500 (CDT)
Subject: [#94-9-215] E-mail list

Please remove my address from the e-mail list.

Regards-
Oscar C.S.
os0002@acad.drake.edu

------------------------------

From: Randy Mace 
Date: Sat, 17 Sep 1994 20:30:07 -0400 (EDT)
Subject: [#94-9-216] Save the planet, kill you...
> My guess is that the "church of euthanasia", and its attendant
> lunatic fringe nonsense messages, are either the results of a
> budding stand-up comic desperately searching for new
> exploratory material to work with, or, perhaps more insidiously,
> a "stealth" far-right christian group employing yet another
> bizarre strategy to fulfill its "divine" mission of self-inflicted
> stupidity. In either case, this is fairly dull stuff.
>
> My message to the sender of this nausea: PLEASE get a life!!!!!
>
> Sincerely
> Charles

I agree with you that the trash post is the work of some sort of weirdo, but if it is some "far right group", why does it have to be a "far right Christian" group? Your prejudice is showing. And by the way, I am a confirmed agnostic.

--
The opinions expressed are my own unless otherwise specified!
Randall H. Mace
aa160@seorf.ohiou.edu
74467.2403@compuserve.com

------------------------------

From: minsky@media.mit.edu (Marvin Minsky)
Date: Sat, 17 Sep 94 20:49:06 -0400
Subject: [#94-9-217] Save the planet, kill you...

>> . . . a "stealth" far-right christian group employing yet another
>> bizarre strategy to fulfill its "divine" mission of self-inflicted
>> stupidity. In either case, this is fairly dull stuff.
>
>I agree with you that the trash post is the work of some sort of weirdo,
>but if it is some "far right group", why does it have to be a "far right
>Christian" group? Your prejudice is showing. And by the way, I am a
>confirmed agnostic.

Well, do you really see important differences between one or another group that employs yet another bizarre strategy to fulfill its "divine" mission of self-inflicted stupidity? Perhaps he specified Christian simply because, at the moment, except in Florida, they're not generally killing their critics as frequently as the next most popular brand. So perhaps it wasn't prejudice but simple cowardice. Not uncommon, though. I think agnostics are cowards, too. :-).
------------------------------

From: sw@tiac.net (Steve Witham)
Date: Sat, 17 Sep 1994 21:55:45 -0400
Subject: [#94-9-218] Superheros

Marvin Minsky writes-
>...Who did you really like in your favorite Star
>Trek? Those sanctimonious heroes, or the mad scientists who create the
>(somewhat defective) immortal mind copies or (slightly flawed)
>superintelligent machines that the enterprise crew righteously destroy in
>one episode after another?

I like Data, the superhuman who longs for mortality! Do I get the cookie?

--Steve

ps it's true i like data and star trek. every premise, character, plot
device & conclusion is wrong and i love the show. uh-oh fire up dbx...
- - - - - - - - - -
This sentence contains three a's, one b, three c's, two d's, thirty-five e's, six f's, two g's, seven h's, eleven i's, one j, one k, two l's, one m, twenty-two n's, fifteen o's, one p, one q, five r's, twenty-six s's, twenty-one t's, one u, seven v's, eight w's, three x's, five y's, and one z.

------------------------------

End of Extropians Digest V94 #260
*********************************