From extropians-request@extropy.org Wed Nov 3 13:40:39 1993
Return-Path: 
Received: from usc.edu by chaph.usc.edu (4.1/SMI-4.1+ucs-3.0) id AA09115; Wed, 3 Nov 93 13:40:33 PST
Errors-To: Extropians-Request@gnu.ai.mit.edu
Received: from apple-gunkies.gnu.ai.mit.edu by usc.edu (4.1/SMI-3.0DEV3-USC+3.1) id AA17588; Wed, 3 Nov 93 13:40:30 PST
Errors-To: Extropians-Request@gnu.ai.mit.edu
Received: by apple-gunkies.gnu.ai.mit.edu (5.65/4.0) id ; Wed, 3 Nov 93 16:30:13 -0500
Received: from news.panix.com by apple-gunkies.gnu.ai.mit.edu (5.65/4.0) with SMTP id ; Wed, 3 Nov 93 15:43:20 -0500
Received: by news.panix.com id AA21783 (5.65c/IDA-1.4.4 for exi-remail@apple-gunkies.gnu.ai.mit.edu); Wed, 3 Nov 1993 15:43:13 -0500
Date: Wed, 3 Nov 1993 15:43:13 -0500
Message-Id: <199311032043.AA21783@news.panix.com>
To: Extropians@extropy.org
From: Extropians@extropy.org
Subject: Extropians Digest
X-Extropian-Date: November 3, 373 P.N.O. [20:42:53 UTC]
Reply-To: extropians@extropy.org
Errors-To: Extropians-Request@gnu.ai.mit.edu
Status: RO

Extropians Digest        Wed, 3 Nov 93        Volume 93 : Issue 306

Today's Topics:
    BOOKS: Palgrave Set                                                  [1 msgs]
    Bet 5000                                                             [2 msgs]
    FOOD:Crosspollination at U of Wisconsin-was Beating the Stock Market [1 msgs]
    Future Science                                                       [3 msgs]
    HEx: Thin trading metric                                             [1 msgs]
    I heart my Corvair                                                   [1 msgs]
    META: Further list development.                                      [1 msgs]
    META: Message rating proposal.                                       [3 msgs]
    POLI: Jimmy Blake wins B'ham city council seat                       [1 msgs]
    POLI: Jimmy Blake wins B'ham city council seat                       [1 msgs]
    Why I'm no fundy (was: MOVIE: "Johnny Mnemo                          [1 msgs]
    unsubscribe                                                          [1 msgs]

Administrivia:
    No admin msg.

Approximate Size: 55827 bytes.

----------------------------------------------------------------------

Date: Wed, 3 Nov 1993 02:39:22 -0500
From: Alexander Chislenko
Subject: META: Message rating proposal.

I just sent a 34-K message with my suggestions on a message rating system to the list; you can retrieve it with the command:

    ::resend #81

Below I am including the top page of the message.

Suggestions on Message Rating.
==============================

PREFACE:

The following text describes some features of the message rating agent that, IMO, addresses all recently expressed concerns; it:

 - is very easy to use (even simpler than Tim's, in some respects);
 - provides personal incentives for participation to *every* user;
 - rewards users for the quality of their postings and ratings.

The text consists of two parts:

Part I. My early article describing the idea in a general way. Skip it if you have already read it, or think you have a good enough idea of the subject. Most of this text was written for news.futures in summer 1992, well before the current list software or HEX were implemented; I apologize for any terminological inconsistencies I may have left here; I really want to post it now, before the newly revived interest in message rating fades again. This part describes more features than I would like to see in the first implementation, though it still omits some important parts of a fully functional system (I will not spend my time describing them until I see enough interest in its implementation to make such discussion worthwhile).

Part II. Practical suggestions for the structure and interface of a minimal functioning system. This part is not a spec, or even a technical description of software. Its purpose is to outline the first version of the rating agent, and to give some understanding of why I have chosen this set of features, and what we may do next.
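Sasha's full 34-K proposal is not reproduced in this digest, so the following is only an illustrative sketch of the kind of bookkeeping such a rating agent might do, in the spirit of the preface above: per-user ratings of a message plus a small credit reward for rating quality. The record layout, the 0-10 scale, and the closeness-to-consensus reward rule are assumptions made for illustration here, not Sasha's actual design.

# Illustrative only: the data layout and reward rule below are assumptions,
# not Sasha's design. The sketch shows one way a rating agent could store
# per-user ratings and credit raters whose scores track the eventual consensus.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class MessageRatings:
    msg_id: str
    ratings: dict = field(default_factory=dict)   # user -> score in [0, 10]

    def rate(self, user, score):
        self.ratings[user] = max(0.0, min(10.0, float(score)))

    def consensus(self):
        return mean(self.ratings.values()) if self.ratings else 0.0

def rating_rewards(msg, budget=1.0):
    """Split a small credit budget among raters, favoring those closest to consensus."""
    if not msg.ratings:
        return {}
    c = msg.consensus()
    weights = {u: 1.0 / (1.0 + abs(s - c)) for u, s in msg.ratings.items()}
    total = sum(weights.values())
    return {u: budget * w / total for u, w in weights.items()}

if __name__ == "__main__":
    m = MessageRatings("digest-306-post-81")
    m.rate("tim", 7); m.rate("ray", 8); m.rate("harry", 3)
    print(m.consensus(), rating_rewards(m))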
------------------------------------------------------------------- I use the occasion to thank Harry and Ray whose efforts brought to life the current generation of list software, and built the foundation for further enhancements, including those described below. ------------------------------------------------------------------------------ | Alexander Chislenko | sasha@cs.umb.edu | Cambridge, MA | (617) 864-3382 | ------------------------------------------------------------------------------ ------------------------------ Date: Wed, 3 Nov 93 00:32:46 -0800 From: plaz@netcom.com (Geoff Dale) Subject: I heart my Corvair Andy Wilson said in reply to Phil Fraering: > Date: Fri, 29 Oct 1993 18:21:23 -0500 > From: "Phil G. Fraering" > > >In fact, it turns out that the Corvair was a perfectly safe car > >for the time, and they are now collected by some folks. > >They are collected because of their unique design, with a rear-engine >rear-drive configuration like a Porsche 911 or VW Beetle, which is >strange for a U.S. manufacturer. They were remarkably fast because of >the combination of high horsepower and light weight. They are most >certainly not collected because of their safety. As an actual Corvair owner ('64 Convertible), I'd like to point out that Ralph Nader's original objection to the "unsafe at any speed" Corvair had to do with a poorly designed suspension. That suspension was last used in '63. The '64 to '69 Corvairs are as safe as any of the cars of the time. Handle better than most, too. The interesting anecdote below does little to change my opinion of the car. >My father, who builds prototype vehicles for GM, once had a Corvair >on which he installed a stock supercharger, and the car was so >fast that he considered it too dangerous, and took the supercharger >off. He never thought his Porsche or a '63 'vette he used to have, >or even a souped-up VW Beetle he built, were too dangerous. This is >a funny story in our family because he told my mother that he was >installing the supercharger in order "to get better gas mileage", >and he had to tell her what it was really for in order to justify >the effort in taking it back off. Perhaps your father would feel the same about the Shelby Cobra (the fastest accelerating car on the road, ever). If you over-power a car, it becomes dangerous in the wrong hands anyway. Perhaps your father felt too tempted to really let the car rip. I couldn't say. All I know is: They can have my Corvair keys when they pry them from my cold dead hands. ;-) _______________________________________________________________________ Geoff Dale -- insert standard disclaimers here -- plaz@netcom.com "We are the shock troops of reality." - Voice of the Friends (Wild Palms) ------------------------------ Date: Wed, 3 Nov 93 3:40:25 EST From: rjc@gnu.ai.mit.edu Subject: Bet 5000 James A. Donald writes: > The quantum system will not always go into the solution > state. It merely has a high probability, a probability > reasonably close to one, of being in an eigenstate of the > truth operators corresponding to the desired solution. Say > 0.8 if you want a definite target for acceptable > performance. Which should make it as useful as a classical system finding a "near optimum" solution (which differs from the "best" solution by a large amount according to your hypothesis) to the living organism which depends on the right answers in the jungle. 
If I recall Mike's original remark about quantum computers, it was that they needed too much error correction to be time efficient (i.e., impractically slow). Like a lot of quantum magic, there always seems to be a "gotcha" which prevents the desired trick from working (FTL communication being the usual trick).

-- Ray Cromwell         | Engineering is the implementation of science; --
-- rjc@gnu.ai.mit.edu   | politics is the implementation of faith.      --

------------------------------

Date: Wed, 3 Nov 93 1:06:01 PST
From: tcmay@netcom.com (Timothy C. May)
Subject: META: Message rating proposal.

I just took a first look at Sasha's 38-page proposal for a message/thread/author rating system. I plan to read it, someday, at a more leisurely pace. Perhaps we can arrange a series of seminars and courses on how to understand this system and its myriad options.

I mean no disrespect to my Russian friend here, as it looks very impressively powerful and complicated. Multitudinous powers and options for rating individual posts, setting defaults, and on and on. But I fear it is vastly too complex, and that fewer than a dozen people on the list will take the time to even reach a novice level. So, what's the point?

If the ultimate goal is a kind of Perl-like system for developing agents for mailing lists and newsgroups in general, especially for eventual use outside the Extropians, then perhaps the effort and complexity is justified (though I am leery of this kind of complexity). But if the goal is to somehow make the reading of our own list a bit easier, I cannot see the benefits of this system ever catching up to the effort needed to read about it, learn the commands, and master it, let alone to pull ahead and be a major time-saver. (Let me point out that my own simple-minded scheme, a simple scalar vote, was not intended to provide filtering of this sort, but only to allow a crude "best of" ranking.)

I am reminded of Arthur C. Clarke's wonderful story "Superiority," in which overly sophisticated weapons systems ultimately lose out to more basic and quickly deployed primitive weapons. (I'm tempted to skip this piece and instead write a Klaus! piece patterned after "Superiority," but I suppose I've already made my points here.)

Considering that my own spot poll of Bay Area Extropians reveals that few of them are using anything more than simple ::excludes (and a substantial fraction I talk to are not comfortable even using that), I really wonder how many will take the time to learn how Sasha's commands work and how the payment system works (something about spending credits on one's own postings... I'll have to read it more closely, someday, but it seems brain-damaged, as it will cause those of us who happen to post a lot to spend our credits... we've brought all this up before).

I wish him well, but I really wish more of these schemes would get discussed early on... I sense that there's a lot of behind-the-scenes planning going on. It's fine with me, as it doesn't take any of my time directly, but I wonder if it's the best way to evolve the List. (Of course, I raised similar kvetches with the "List software" as Harry and Ray were planning it and then writing it, and it turned out well. Or at least the one main command, "::exclude," turned out especially well. So maybe Sasha's systems will be similarly successful.)

Good luck! And I'll try to figure it all out someday.

--Tim May

--
..........................................................................
Timothy C. May          | Crypto Anarchy: encryption, digital money,
tcmay@netcom.com        | anonymous networks, digital pseudonyms, zero
408-688-5409            | knowledge, reputations, information markets,
W.A.S.T.E.: Aptos, CA   | black markets, collapse of governments.
Higher Power: 2^756839  | Public Key: PGP and MailSafe available.
Note: I put time and money into writing this posting. I hope you enjoy it.

------------------------------

Date: Wed, 3 Nov 93 1:56:37 PST
From: szabo@netcom.com (Nick Szabo)
Subject: Future Science

I'm gratified by the excellent comments on this thread.

Ray Cromwell:
> In comparing GP to Darwinian evolution I think you overlook one important
> thing -- computation time.

True, for now. For GP-style sexual crossover we have (very roughly)

    1 billion years * 1 generation/yr * 1e12 organisms = 1e21 fitness cases

for the evolution of life. With fitness cases of 1 million instructions and 1% of civilization's projected computing power in 2013 (1e15 IPS), over the course of four years we get (4e16/1e6)*4e6 = 1e16 fitness cases. It will take 17 more doublings of civilization's computational speed (34 more years?) to give us life-fitness-case power. (Though it's likely there are a couple orders of magnitude worth of errors in these wild guesstimates; more accurate figures appreciated.)

But life's amino acid language is so different from the language of science that we might do better to compare with the last 10,000 years of memetic evolution. One interesting measure, perhaps more apropos of the "days to the singularity" thread than specifically GP, is the arithmetic capability of all civilizations on the planet. Here are some _very_ rough guesstimates, better figures welcome. Peak values, in terms of primitive integer arithmetic instructions per second:

    Sumerians+Egyptians: 100 priests * 1 IPS                       =   100 IPS
    Greeks/Persians/Phoenicians: 1,000 math-literate folks * 1 IPS = 1,000 IPS
    10th century China: 10,000 abaci * 10 IPS                      =   1e5 IPS
    18th century Europe (log tables, primitive calculators,
        1e6 math-literate people * 1 IPS)                          =   3e6 IPS
    1920 (100,000 cash machines * 100 IPS,
        1e7 math-literate people * 1 IPS)                          =   2e7 IPS
    1993 (100 million PCs * 1 MIPS)                                =  1e14 IPS
    2013 (assuming doubling every 2 years)                         =  1e17 IPS

It would also be interesting to measure the sheer information content of civilizations over time (interpolating for lost manuscripts and the like). Massive explosion with the printing press, but not in terms of mutual information at first, since the first publications largely served to spread existing memes wider. In general, we might develop a "civilization computational complexity map" with three dimensions:

          |               /
          |              /
          |             /
    time  |            /   Kolmogorov complexity (resources needed to transmit
          |           /       knowledge, oral & written)
          |          /
          |         /
          ================  space (brain memory needed)

Right now we are manufacturing 100,000 times the arithmetic capability of Greek civilization every second! Furthermore, most of the valuable scientific/engineering knowledge that the Greeks left us can now be implemented on computer -- even the patterns of their architecture have now been captured in a concise grammar from which endless varieties of Ionic columns, etc. can be generated.

This is relevant to GP to the extent that GP is based on a capability to do massive amounts of Platonic computational primitives that are completely unavailable, or done vastly more slowly even when combined in parallel, by the amino acids. Amino acids can't compute sums, and computers can't simulate amino acids. (At least, neither one can approach the ability of the other by dozens of orders of magnitude in these areas.)
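Szabo's figures above are explicitly back-of-the-envelope, but the two headline numbers (about 1e21 fitness cases for biological evolution, and roughly 17 doublings before GP catches up) are easy to recompute. The short sketch below simply restates his own rough constants; nothing in it is new data, and the two-year doubling time is the assumption he states.

import math

# Szabo's rough numbers, restated so the arithmetic is explicit.
# These are his guesstimates, not measured quantities.
YEARS_OF_EVOLUTION   = 1e9    # ~1 billion years
GENERATIONS_PER_YEAR = 1      # ~1 generation/yr
ORGANISMS            = 1e12   # ~1e12 organisms alive at a time

bio_fitness_cases = YEARS_OF_EVOLUTION * GENERATIONS_PER_YEAR * ORGANISMS
print(f"evolution: ~{bio_fitness_cases:.0e} fitness cases")        # ~1e21

# GP side: his figure of ~1e16 fitness cases from a four-year run on 1% of
# civilization's projected 2013 computing power, at 1e6 instructions per case.
gp_fitness_cases = 1e16

# Doublings needed to close the gap, at his assumed doubling time of 2 years.
doublings = math.log2(bio_fitness_cases / gp_fitness_cases)
print(f"doublings needed: ~{doublings:.0f} "
      f"(~{2 * doublings:.0f} years at one doubling per 2 years)")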
Civilization's technology, largely based on Platonic designs, has far outstripped life in most areas (locomotion, flight, computation, echolocation, electronics, etc. etc.) and done many things of which life is incapable (drill mile-deep holes, explore outer space, etc. etc.), while remaining behind in just a few (e.g. self-replication). I suspect GP, standing on the shoulders of much of our current Platonic knowledge, will also vastly outstrip protein evolution in most areas, and perhaps conquer remaining areas like self-replicating machines.

> I do not think GP
> running on anything less than a TeraOP computer will beat human scientists.

Again, it depends on the domain of application. For finding complex nonlinear functions to fit massive amounts of data, GP will beat naked scientists by dozens of orders of magnitude, and will probably beat scientists armed with today's statistical analysis packages by a narrower but still wide margin. For simplifying that model and getting it published, we still might do it the old-fashioned way (albeit computers can greatly improve this area in other ways: typesetting, on-line journals, software for learning & experimentation like Mathematica, etc.)

> How would a GP physicist know when to halt anyway?

There are several rules of thumb for this, just as for today's scientists. One is to stop when it gets stuck at a good solution, and go work on something else for a while, until we accumulate enough additional mutual information to have another go at it. Until scientists get comfortable with letting GP run loose like this, they will likely use it for very narrow purposes, as a fancy statistical regression tool.

> fields like Quantum Gravity where there is no experiment to "test the
> fitness")

In this case, the mutual information of the theory with other, testable theories might provide a good fitness function, as would its logical and mathematical consistency with testable theories.

> If it were discovering the laws of motion, would it stop
> at Newton/Galileo or go on to Einstein?

Good question! What data is sufficient to demonstrate Einsteinian relativity? Mercury's orbit, Michelson-Morley, etc. come to mind. Note that both of these are "anomalous" findings, which is why I suggested feeding the solar data (including the neutrino flux which falsifies current theories), GRO data (unexplained homogeneous gamma-ray burst sources), and other modern anomalous data to GP along with current, relevant, verified formal models as primitives ("standing on the shoulders of giants"), and letting it fly. We already know that simple orbital trajectory data is sufficient for GP to discover Kepler's Laws. This is practically the only effort at GP science so far; the technique is new, and the founders and early practitioners are mostly interested in robotics, stock market investing, and other areas far away from the hard sciences.

Nick Szabo
szabo@netcom.com
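The Kepler remark above is the easiest of these claims to make concrete. The toy below is only a stand-in for real genetic programming -- a mutation-only search over a single exponent rather than evolved program trees -- but it shows the shape of the exercise: given rounded textbook orbital data (semi-major axis in AU, period in years), the search recovers the 3/2 power law of Kepler's third law. The data values and search parameters are illustrative choices, not taken from any published GP run.

import math
import random

# Rounded textbook orbital data: semi-major axis a (AU) and period T (years).
PLANETS = {
    "Mercury": (0.387, 0.241), "Venus": (0.723, 0.615), "Earth": (1.000, 1.000),
    "Mars":    (1.524, 1.881), "Jupiter": (5.203, 11.86), "Saturn": (9.537, 29.45),
}

def fitness(p):
    """Mean squared error, in log space, of the candidate model T = a**p."""
    return sum((math.log(T) - p * math.log(a)) ** 2
               for a, T in PLANETS.values()) / len(PLANETS)

def evolve(generations=200, pop_size=20, seed=0):
    """Toy, mutation-only evolutionary search over the exponent p in T = a**p."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 3.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 4]          # keep the fittest quarter
        pop = survivors + [p + rng.gauss(0, 0.05) for p in survivors for _ in range(3)]
    return min(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(f"best exponent ~ {best:.3f}  (Kepler's third law: T**2 ~ a**3, i.e. p = 1.5)")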
------------------------------

Date: Wed, 3 Nov 93 5:43:27 EST
From: rjc@gnu.ai.mit.edu
Subject: META: Message rating proposal.

re: Tim's comments

Besides filtering agents, the list software was designed to make other features easier:

 * cryptographic additions (signature checking, moderation, encrypted list)
 * hypertext (www/html, gopher, wais)
 * mail-2-news and news-2-mail
 * controlled distribution and access (a major problem with UseNet is that there is no way to control who gets your news messages and no way to prevent someone from posting)
 * information market (buy/sell messages with authentication, money being cpu/bandwidth credits, thornes, or real money, etc.)
 * easy/remote moderation (majordomo does this, but majordomo has no real power)
 * thread control and user moderation (this was in the original spec but I scrapped it for now in favor of getting a working list running. A lot of the code is finished, merely commented out)
 * message search and retrieval

Filtering agents are a major part of the list philosophy, and I originally envisioned these:

 * standard exclude/include agent
 * ranking/rating agent
 * reputation-based market agent
 * e-money based agent: buy/sell posts on a market, subscribe to a stock ("threads")
 * genetic algorithm/programming or neural net based filter
 * private human moderator/filter (PPLs are a core extropian philosophy. Instead of Harry banning posters, you hire a person to be your "justice system", to read threads and exclude certain subjects from your view. This form of filtering would work much better than any software-based algorithm, at least until PerlAI gets here)

This is all based on the fact that the list software is really a sort of mini list operating system, and agents are "daemons" which examine incoming messages for commands from their subscribers. Thus, many things are possible. However, there is one general limitation: every list agent gets to look at your message directly, so there is no "piping" behavior where the output of one agent is sent to the next. This imposes the limitation that only one agent may actually deliver posts to you; the others are only allowed to process commands. Otherwise, you get duplicate messages. (However, if that is what you wish, it will work; there is no software check which prevents you from subscribing to two agents or more.)

-Ray

-- Ray Cromwell         | Engineering is the implementation of science; --
-- rjc@gnu.ai.mit.edu   | politics is the implementation of faith.      --
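Ray's one-delivery-agent rule above is easy to picture in code. The sketch below only illustrates the constraint he describes (every agent sees each incoming message and may handle its own ::commands, but exactly one agent per subscriber should actually deliver, or duplicates result); the class and function names are invented for the example and are not the real list software.

# Illustrative sketch of the rule described above, not the actual list code:
# every agent inspects an incoming message, but only "delivery" agents pass
# posts on to the subscriber, and more than one of those means duplicates.

class Agent:
    def __init__(self, name, delivers):
        self.name = name
        self.delivers = delivers

    def handle(self, message, subscriber):
        if message.startswith("::"):
            return [f"{self.name}: processed command {message!r} for {subscriber}"]
        if self.delivers:
            return [f"{self.name}: delivered post to {subscriber}"]
        return []

def dispatch(agents, message, subscriber):
    """Every agent sees the message; duplicates arise if more than one delivers."""
    deliverers = [a for a in agents if a.delivers]
    if len(deliverers) > 1:
        print(f"warning: {subscriber} subscribes to {len(deliverers)} delivery agents; "
              "expect duplicate copies")
    results = []
    for agent in agents:
        results.extend(agent.handle(message, subscriber))
    return results

if __name__ == "__main__":
    agents = [Agent("exclude-agent", delivers=True), Agent("rating-agent", delivers=False)]
    print(dispatch(agents, "::exclude thread Bet 5000", "tim"))
    print(dispatch(agents, "Just an ordinary post body...", "tim"))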
------------------------------

Date: Wed, 3 Nov 1993 07:37:22 -0500
From: habs@panix.com (Harry S. Hawk)
Subject: META: Further list development.

Phil wrote:
> Nice ideas. Why are you against a list ->newsgroup agent?
> Charlie talked about how hypertext might be better; then again, ...
> all that would really need to be done is to beef up the security.

1) The list, as it stands, is and will remain private, and we will never put 100% of it into a public archive.

2) There is a need for older posts of merit to be archived; also, some members of the list would allow their posts to be archived.

3) IF we implement archives (via NNTP or HTTP, etc.), we will first implement commands that will allow users to decide if All or None of their posts can be archived, as well as allow ad hoc archive|non-archive choices (::archive on|off, ::www yes|no, etc.).

4) IF we do such an archive, it would only be One-Way.

5) I personally don't have any desire for an NNTP gateway, but have a very Strong interest in a WWW archive and am currently looking for a site to host us.

/hawk

--
Harry S. Hawk - Extropian
habs@extropy.org
In Service to Extropians since 1991

------------------------------

Date: Wed, 3 Nov 1993 07:48:53 -0500
From: pcm@world.std.com (Peter C McCluskey)
Subject: FOOD:Crosspollination at U of Wisconsin-was Beating the Stock Market

My primary source of unusual foods has been to go outdoors and find/catch them myself. Some of the more interesting foods that I've gotten this way are:

    Shadberry (aka Juneberry or Serviceberry) (Amelanchier canadensis)
    Rose Petal Jam
    Sumach lemonade (Rhus typhina)
    Day Lily tubers (Hemerocallis fulva)
    Ground nut (Apios americana)
    Sweet Flag (Acorus calamus)
    Milkweed (Asclepias syriaca)
    sea urchin
    Woodchuck

For a thorough but dry list of edible plants, read _Sturtevant's Edible Plants of the World_, ed. by U.P. Hedrick, Dover, 1972. For a more colorful and beginner-oriented approach, read Euell Gibbons's _Stalking the Wild Asparagus_, _Stalking the Blue-Eyed Scallop_, and _Stalking the Healthful Herbs_.

Other foods that I've found through more conventional means:

    Whale
    Reindeer
    Elk brain
    Goat (curried)
    Cloudberry (multer in Scandinavian) (Rubus chamaemorus)

----------------------------------------------------------------------------
Peter McCluskey >>> pcm@world.std.com >> pcm@macgreg.com >> pcm@cs.brown.edu
----------------------------------------------------------------------------

------------------------------

Date: Wed, 3 Nov 1993 08:25:10 -0500
From: susan.farrell@gtri.gatech.edu
Subject: unsubscribe

Unsubscribe susan.farrell@casbah.gatech.edu

------------------------------

Date: Wed, 3 Nov 1993 09:04:22 -0500
From: ddf2@postoffice.mail.cornell.edu (David Friedman)
Subject: BOOKS: Palgrave Set

> Sorry, I won't be reviewing the 14 volume Palgrave set because Laissez Faire
> has sold out and will not be getting any more. ...
> Fred

I am puzzled. The (hardcover) Palgrave that I am familiar with is four volumes -- I know because I have a copy, having written two of the articles in it. Is this a many-volume paperback version, or is "14" a typo for "4"?

David Friedman
Cornell Law School
DDF2@Cornell.Edu

------------------------------

Date: Wed, 3 Nov 1993 08:23:42 -0600 (CST)
From: derek@cs.wisc.edu (Derek Zahn)
Subject: Future Science

Nick Szabo is to be congratulated for his very interesting thread on automated science.

> For GP-style sexual crossover we have (very roughly)
> 1 billion years*1 generation/yr*1e12 organisms = 1e21 fitness cases
> for evolution of life.

I find this fairly reasonable.

> With fitness cases of 1 million instructions

This, however, is certainly unreasonable. For fitness cases to be evaluated in such a short period of time, the fitness function would have to be extremely trivial. Imagine a simulation of something with the structural complexity of a bacterium (surely a lower bound) -- a complex of thousands of parts interacting iteratively with a diverse environment. I can't see how it could be evaluated in anything close to 1e6 instructions. Further, if we want the "organisms" to be adaptable, as we certainly do, we can look to current algorithms for learning for guidance as to how much work it would take. Lots. If we want to evolve designs for complex objects to act in the real world, this gets even worse, barring some breakthrough in the simulation of physical phenomena such as airflow and stress.
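Derek's objection above -- that 1e6 instructions only buys an extremely trivial fitness evaluation -- can be put in rough numbers. The cost model and all four constants below are invented round figures for illustration, not measurements; the point is only that even a very crude many-part simulation overruns a 1e6-instruction budget.

# Rough cost model for one fitness evaluation that simulates a many-part "organism":
# parts * interactions-per-part * timesteps * instructions-per-interaction.
# All constants are invented round numbers, for illustration only.

def fitness_case_cost(parts, interactions_per_part, timesteps, instr_per_interaction):
    return parts * interactions_per_part * timesteps * instr_per_interaction

budget = 1e6   # the assumed instructions per fitness case being questioned here

# Even a very crude "bacterium": thousands of parts, a handful of interactions each,
# simulated over thousands of timesteps, at ~10 instructions per interaction.
cost = fitness_case_cost(parts=1e3, interactions_per_part=5,
                         timesteps=1e3, instr_per_interaction=10)
print(f"crude estimate: ~{cost:.0e} instructions per case "
      f"(~{cost / budget:.0f}x over a 1e6-instruction budget)")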
> Sumerians+Egyptians: 100 priests*1 IPS = 100 IPS

Well, this is kind of misleading, I think. It implies that my own PC (10 MIPS, conservatively) should be able to recapitulate 1000 years of ancient scientific discovery in just a few days. If properly spoon-fed the data and relevant mathematical constructs, I could see that; but discovering relevant data and developing the mathematical tools is the hard part, not just performing correlation.

> Right now we are manufacturing 100,000 times the
> arithmetic capability of Greek civilization every second!
> Furthermore, most of the valuable scientific/engineering
> knowledge that the Greeks left us can now be implemented on
> computer -- even the patterns of their architecture have now
> been captured in a concise grammar from which endless varieties
> of Ionic columns, etc. can be generated.

Yes, but such implementation requires the very conceptual innovations that made that knowledge possible. The largest challenge, I believe, is one of representation. We can't just give it a bunch of data from a telescope and ask for an explanation. The physical nature of the telescope is of primary importance.

Here's a challenge for you, Nick: pick a single GP-style experiment you foresee as promising, and we can try to imagine the GP system that could grind away on it. I'd suggest the development of a theory of superconductivity, but that's probably too complex for a first thought experiment.

derek

------------------------------

Date: Wed, 03 Nov 93 15:32:28 GMT
From: price@price.demon.co.uk (Michael Clive Price)
Subject: Bet 5000

James Donald declares, to me:

> You are a liar, a thief, and a cheat.

Thank you, Donald, for your calm, cool and objective appraisal. I see you use the same impeccable standards to judge other people as you use in sifting scientific evidence.

I presume Donald does not wish to trade with a liar, a thief, and a cheat, so here is my disproof of Donald's quantum computer design, in which he claims to perform non-polynomial calculations in polynomial time.

Donald's computer "works" by starting with the system in a superposition of all the possible bit-strings, equally weighted, all with an equal starting energy. There is a set of constraints which the desired bit-string satisfies. By adjusting the hamiltonian, he associates an energy rise with each possible bit-string proportional to the number of constraints violated by each particular string. The desired bit-string, by definition, violates no constraints and therefore has the lowest energy - the initial energy. All the other bit-strings have a higher energy. Donald's assertion is that the system stays in the ground or lowest energy state at all times (provided the hamiltonian is adjusted slowly), hence that the system selects the correct bit-string.

This is not so. There is a simple equation (see equations 20.77 & 20.83 in vol 3 of the Feynman Lectures, or any standard QM textbook) for calculating the change with time of the average expectation of any operator. <A> denotes the average value of some operator or parameter, A. According to this equation:

    d<A>/dt = <dA/dt> + <[H,A]>/ihbar

The term [H,A] = HA - AH is the commutator of A with the hamiltonian, H. I am interested in <H>, that is, the average value of the energy operator or hamiltonian. Since [H,H] = 0,

    d<H>/dt = <dH/dt>

dH/dt has a rather simple form in Donald's design. Since

    H(t) = a*C*cnorm(t) - (N0+N1+N2 .... +Nm)

where C is the number of constraints violated, a is a constant, and

    cnorm(t) = integral of exp(-t*t/2) from -oo to t

we get

    dH/dt = a*C*exp(-t*t/2)

and therefore

    d<H>/dt = <dH/dt> = a*<C>*exp(-t*t/2)

where <C> is the average number of constraints violated by the initial bit-strings, which, in turn, is very close to the total number of constraints present. Integrating over time gives us the total rise in the expectation of energy for the system. It works out to be:

    delta<H> = a*<C>*sqrt(2*pi)

Therefore the system does not stay in its lowest energy state, as Donald has asserted and requires. Adjusting the hamiltonian has, inevitably, injected energy into the system and split the ground state degeneracy. The chance of the system selecting the correct bit-string is only 1 in 2^m, where m is the string length in bits. In other words, no better than chance. This is at variance with Donald's one definite criterion for success: that it would select the answer with greater than 0.8 confidence.

Q.E.D. Donald owes Price 5000 UK pounds.

Mike Price
price@price.demon.co.uk

PS Donald's crackpot quantum computer ideas bear no relation to Seth Lloyd's or David Deutsch's, who are both very careful scientists. Their designs are consistent with such trivia as QM and energy conservation. Oddly enough, they don't claim to perform non-polynomial computations in polynomial time. :-)
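Price's last step integrates d<H>/dt = a*<C>*exp(-t*t/2) over all time. As a purely numerical cross-check of that single step (not of the rest of the argument), the small script below confirms that the integral of exp(-t*t/2) over the whole line is sqrt(2*pi), so the accumulated energy rise a*<C>*sqrt(2*pi) is non-zero; the sample values of a and <C> are arbitrary.

import math

# Numerically integrate exp(-t^2/2) over a wide window as a stand-in for (-oo, +oo),
# using a plain midpoint rule so no extra libraries are needed.
def integral_exp_gaussian(lo=-50.0, hi=50.0, steps=200_000):
    dt = (hi - lo) / steps
    return sum(math.exp(-((lo + (i + 0.5) * dt) ** 2) / 2.0) for i in range(steps)) * dt

integral = integral_exp_gaussian()
print(f"integral of exp(-t^2/2) dt = {integral:.6f}")
print(f"sqrt(2*pi)                 = {math.sqrt(2.0 * math.pi):.6f}")

# With arbitrary sample values for the constant a and the average violated-constraint
# count <C>, the total energy rise delta<H> = a*<C>*sqrt(2*pi) is clearly non-zero.
a, avg_C = 0.1, 100.0
print(f"delta<H> = a*<C>*sqrt(2*pi) = {a * avg_C * integral:.3f}")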
------------------------------

Date: Wed, 3 Nov 93 9:34:39 CST
From: jeff@frodo.b30.ingr.com (Jeffrey Adam Johnson)
Subject: POLI: Jimmy Blake wins B'ham city council seat

[ My apologies if this information has already been posted. ]

Dr. Jimmy Blake, chairman of the Alabama Libertarian Party, won his runoff-election bid for Birmingham City Council, District 3.

He made it!

The votes were:
    Jimmy Blake        2906
    Virginia Volker    2332

Many thanks to all those who contributed time and money to his campaign.

======================================================================
Jeffrey Adam Johnson          Internet: jajohnso@ingr.com
("I speak only for myself.")
======================================================================

------------------------------

Date: Wed, 03 Nov 1993 11:23:48 -0500
From: "Perry E. Metzger"
Subject: POLI: Jimmy Blake wins B'ham city council seat

Jeffrey Adam Johnson says:
> Dr. Jimmy Blake, chairman of the Alabama Libertarian Party, won
> his runoff-election bid for Birmingham City Council, District 3.
>
> He made it !

Congratulations to Dr. Blake and the ALP.

Perry

------------------------------

Date: Wed, 3 Nov 93 13:51:13 EST
From: Brian.Hawthorne@east.sun.com (Brian Hawthorne - SunSelect Strategic Marketing)
Subject: HEx: Thin trading metric

So everyone can see just how thin this market is, I am going to be adding a new metric to the Nightly Market Report: shares traded in the last 24 hours. It will be mostly 0's, I'm afraid... I'm hoping the new software (coming Real Soon Now) will perk things up a bit.

------------------------------

Date: Wed, 3 Nov 1993 10:19:08 +0000 (GMT)
From: Charlie Stross
Subject: Why I'm no fundy (was: MOVIE: "Johnny Mnemo

Craig Pression wrote ...

>In <9311021421.aa26112@ruddles.sco.com>, Charlie Stross writes:
>|> >From: Harry Shapiro
>|>
>|> >a conscious being, W. Scott Meeks wrote:
>|> >>[...] I get the impression from what I've heard about
>|> >> Sterling that he's possibly somewhat extropian as well.
>|>
>|> >Sterling is basically a socialist, but on the annarchist side of
>|> >the house, imho.
>[...]
>|> Er, that also describes me ... ;-)

>Yes, and it lowers your "EC Index", just as it lowers Bruce's, which
>is what Hawk was saying.
As long as you don't start "debates on the >basics", though, we look the other way for list purposes; but if you >wrote up that, ah, underripe political philosophy in a book, it wouldn't >make the ExI Recommended Reading list, anymore than a political book >by Bruce Sterling would. Yeah? Then I'm in good-ish company, IMO. Actually, my mail was intended to go just to Harry, but I screwed up (forgot to change the header) so it went to the list. But I stand by it. Please remember, though, that I come from a different cultural background to you, email address notwithstanding -- whoever said that the United States and the UK were "two countries divided by a common language" got it right on the money. "Socialist" has quite different connotations in the UK, and in most of Europe, from the US, and this makes discussions of it, er, 'interesting'. I'm not going to go into the basics of socialism v. capitalism. Flogging a dead horse is not one of my favourite pastimes. However, I'd like to poke a sharp stick at some fundamentalist viewpoints that I feel are held too inflexibly. I attribute the difference in attitude to socialism between the US and Europe to a radical difference in historical background. A friend of mine from California opined that the USA has two founding fathers, and politics there is a battle between the two: Thomas Jefferson on one side, and Cotton Mather on the other. Neither of them had much to do with the anarchist tradition of Godwin, Proudhon, et al; neither of them had much truck with the political mainstream of Europe, either. Talking of a political founding father in Europe is nonsensical; there have been influential thinkers, but nobody ever defined a clean break. Even Marx was working within an older tradition. If you read around the history of anarchism in Europe, you'll find that an interesting breakpoint came around 1870, when the proto-socialists (and communists) split with the anarchists over the issue of how to run things once the governing aristocracies had been overthrown. The socialists wanted to set up an ideal order in which everyone would have equal opportunities; the communists wanted to impose a dictatorship of the proletariat (they hadn't formulated the obscene idea of the 'vanguard party' back then), while the anarchists just wanted to do away with all authority and return to a primal edenic existence. Naive? Yes. Utopian? That, too. A failure? Yes. So why pay attention to the scrap-heap of history? Well, it underlines something I've noted on this list, and in face-to-face meetings with libertarians and anarchists (of the European persuasion); a certain idealistic tendency to gloss over the nitty-gritty of how things work and how people think. In a nutshell, both traditions seem to over-simplify human relations. The European anarchist tradition was more-communist-than-communist, with a goal of abolishing governments, money, the market -- everything that divided people -- so that all could live in a state of primal harmony. (I hear you say: "yeah, right.") The US anarchist tradition is more-capitalist-than-capitalist, having the goal of abolishing everything except the market -- so that all can live in a state of primal acquisitive freedom. My gut feeling is that both philosophies, reduced to the level of a fundamentalist creed, are wrong: people are both competitive *and* cooperative, depending on their circumstances, and any system that assumes people will behave at one extreme all the time is out of touch with reality. 
(Note: The above paragraph is my personal opinion. Your mileage may, indeed almost certainly WILL, vary. If you want to flame me, please don't clutter up the list; take it to private mail.) We live in a monstrously complicated society, that is becoming more complex very rapidly. Not only are the technologies getting more complex, but so are the social mores surrounding them and the ways we interact through them are becoming more convoluted. So are the ways in which we are supposed to relate to the rest of society in general. It's probably not overstating things to say that a modern, developed society is so complex that nobody can handle _all_ their personal interactions adequately on their own. I pay an accountant to file my tax returns; I pay an insurance broker to insure me; I 'pay' (compulsorily) for the government's security service (the Police) to protect me. It makes sense to pay other people to do things I _could_ in principle do for myself -- because there aren't enough hours in the week for me to do all of them. The market is of course a very efficient mechanism for mediating all sorts of social exchange, including the provision of basic services. But it's still a mediational mechanism that takes some of the input resources just to keep it running. The idea the socialists had was to replace it with something more efficient. Trouble was, their idea of a more efficient system turned out to be less efficient under most circumstances. The sensible ones moderated their policies into what is now the mainstream of politics in Europe: the not-sensible ones tried to impose central planning mechanisms and failed miserably. The evil ones imposed central planning, shot anyone who didn't agree, and still failed miserably. So it goes. However: the market is an instantiation of a bidding-based evolutionary algorithm. Just because the socialist attempts to build a better mousetrap failed doesn't necessarily mean that the _idea_ of inventing a better mousetrap is invalid. Find a resource-distribution algorithm that's faster or has lower overheads than the market algorithm, and that can work in non-linear systems, and what have you got? Anyway, this is why I am not a fundamentalist capitalist. Capitalism is just a mechanism for mediating human exchanges. Other mechanisms (mostly less efficient) exist: maybe _more_ efficient ones also exist but haven't been found yet. To paraphrase Winston Churchill, the market system isn't good; it's merely the least bad solution we know about for handling highly-complex unstable systems. Question: is the complexity of society increasing on a similar exponential curve to our information processing capability? If so, is the information-processing ability of the market going to keep pace with the social system it's mediating? Go figure ... -- Charlie -------------------------------------------------------------------------------- Charlie Stross is charless@scol.sco.com, charlie@antipope.demon.co.uk ------------------------------ Date: Wed, 3 Nov 93 11:09:02 PST From: tcmay@netcom.com (Timothy C. May) Subject: Future Science Nick Szabo's genetic programming auto-poster generates: > New thread: how will science be done in the future? I submit that > the next 20 years will drastically alter the face of science. The > engines for this revolution are brewing in computer science labs today. > Furthermore, the coming revolution will increase the speed of many kinds > of science by orders of magnitude and revolutionize our technological > base in the process. 
> Most of the skills of most of today's scientists
> will be rendered obsolete, at a vastly greater speed than blue
> collar workers, for example, will be replaced by robots.

I agree that the new tools will change things, but I suspect the change in the nature of science will not be as large as Nick thinks. In particular, I contend that _science_ has actually been _slowing down_ in the last couple of generations. If so, then the new tools Nick talks about could speed things back up again, but maybe not.

How could science be slowing down when we have so many new gadgets, so many new journals, so many bits of information being generated and published? What about lasers, DNA science, microprocessors, etc.?

There was a thread more than a year ago about how many "basis vectors" of knowledge we've gone through, in rough terms. A basis vector in the sense of a fundamental theory or viewpoint that is a building block for later theories, elaborations, technologies, etc. I argue that in physics, for example, we are about "80% done." To be sure, some amazing new discoveries will be made, perhaps even completely rewriting our interpretation of current theories. But there are few things out there now that defy interpretation, at least not in the way that a century ago there were some obvious phenomena that had no real explanation.

(There are unexplained astronomical phenomena, like x-ray bursters, about which we know little. These may be "mundane" entities like rotating neutron stars, or they may be "new science" entities like crossing cosmic strings... who knows? And there are high-temp superconductors, about which theories are currently deficient (in more ways than one, if you get my pun). New science is still possible... I just claimed it was slowing down, not that it has already stopped.)

The exploitation of the new scientific theories of the past century has of course exploded. Electronics, biotech, the computer business, the materials science area, aircraft, etc. These fertile areas involve the combination of many diverse bits into ever-more-complex parts, an area that I agree with Nick will be ripe for using GP techniques in.

In the first few decades of this century we saw some amazing scientific discoveries. (And the technologies our grandparents saw introduced -- light bulbs, cars, airplanes, penicillin, radios, television, etc. -- pretty much dwarfed anything I can recall seeing introduced in my lifetime, except of course for the computer and DNA and perhaps things like lasers).

- fundamental theories of quantum mechanics (later elaborated, some would say fundamentally rewritten), explanation of emission spectra, of nuclear properties (sufficient to build bombs and reactors, i.e., quite sufficient)
- a basic understanding of the galactic structure of the universe (previously unknown), the expansion of the universe, the main sequence of stars
- relativity, special and general
- a theory of computation (Turing, Post, Church, Godel)
- evolution and genetics (though the c. 1950 discovery of the form of DNA was of course crucial)
- and on and on

Now my point is not to argue that the first half of the 20th century was richer in terms of scientific discoveries, though I think it clearly was, but to argue that the huge increases in the numbers of "scientists" in the second half of the century have certainly not produced an acceleration in the rate of scientific discovery. Here's what I think we have seen:

  "rate of science"
   ^
   |
   |        **
   |      *    *
   |     *      *   ?
   |    *
   |   *
   |  *
   | *
   |*
   |_____________________> time
     1800     1900     2000

Of course, another curve, rate of technology vs. time, would show a much sharper, and still continuing, growth. Basic discoveries are combined and cross-bred (even without GP) and the resultant products are sold, generate more profits, etc.

As someone else also noted, many of the items Nick goes on to list (in his article) are actually not new science, even loosely interpreted, but technological applications. (There's a fine line between science and technology, but I contend that many of these things listed by Nick are not related to fundamental understandings of the nature of reality, which is what I think science basically means.) Technology will likely change dramatically with the new tools Nick is talking about, but I'm not at all sure science itself will be so easily mechanized (or augmented).

In particular, I am skeptical that major new theories will emerge by GP-style crunching. This is reminiscent of the enthusiasm in the late 1950s for Newell-Simon-style "theorem provers" that would start with a bunch of basic axioms or theories and just "generate" all the true statements. Clearly, there are many reasons why this will not work. Whether classifier systems can in fact generate "interesting" and "important" results remains to be seen... I've seen no such results reported so far. And the combinatorial explosion of "true, but trivial" results will dwarf even the computational power of 30-50 years ago.

(Not to sound dismissive of Nick's idea -- and my lengthy reply should say I take it seriously -- but any talk of counting on the doublings of computer power every few years to solve some presently intractable problem is probably already going down the wrong path. I won't elaborate now on why this is probably so.)

But I agree that change of some sort will come. Recall from the AIT thread of several months back that Chaitin argues that mathematics itself will become more experimental and more subject to computer exploration of complex domains. A very recent article in "Scientific American" dealt with the same subject.

Enough for now.

--Tim May

--
..........................................................................
Timothy C. May          | Crypto Anarchy: encryption, digital money,
tcmay@netcom.com        | anonymous networks, digital pseudonyms, zero
408-688-5409            | knowledge, reputations, information markets,
W.A.S.T.E.: Aptos, CA   | black markets, collapse of governments.
Higher Power: 2^756839  | Public Key: PGP and MailSafe available.
Note: I put time and money into writing this posting. I hope you enjoy it.

------------------------------

End of Extropians Digest V93 #306
*********************************