From extropians-request@extropy.org Fri Oct 21 09:02:15 1994
Return-Path: extropians-request@extropy.org
Received: from usc.edu (usc.edu [128.125.253.136]) by chaph.usc.edu (8.6.8.1/8.6.4) with SMTP id JAA05782 for ; Fri, 21 Oct 1994 09:02:14 -0700
Received: from news.panix.com by usc.edu (4.1/SMI-3.0DEV3-USC+3.1) id AA00857; Fri, 21 Oct 94 09:02:11 PDT
Received: (from exi@localhost) by news.panix.com (8.6.9/8.6.9) id MAA03799; Fri, 21 Oct 1994 12:01:58 -0400
Date: Fri, 21 Oct 1994 12:01:58 -0400
Message-Id: <199410211601.MAA03799@news.panix.com>
To: Extropians@extropy.org
From: Extropians@extropy.org
Subject: Extropians Digest #94-10-380 - #94-10-389
X-Extropian-Date: October 21, 374 P.N.O. [12:01:23 UTC]
Reply-To: extropians@extropy.org
X-Mailer: MailWeir 1.0
Status: RO

Extropians Digest       Fri, 21 Oct 94       Volume 94 : Issue 293

Today's Topics:
	AI development acceleration (was: Nanotechnology and uncertain ...  [1 msgs]
	Astrology                                                           [1 msgs]
	Astrology: Devil's Advocate                                         [1 msgs]
	BOOK: _Regional Advantage_, by Annalee Saxenian                     [1 msgs]
	Challenge to uploaders                                              [1 msgs]
	PSYCH: Self-Esteem?                                                 [2 msgs]
	REproduction                                                        [1 msgs]
	Tolerance and nonsense.                                             [2 msgs]

Administrivia:

Note: I have increased the frequency of the digests to four times a day.
The digests used to be processed at 5am and 5pm, but this was too
infrequent for the current bandwidth. Now digests are sent every six
hours: midnight, 6am, noon, and 6pm. If you experience delays in getting
digests, try setting your digest size to something smaller, such as 20k.
You can do this by addressing a message to extropians@extropy.org with
the body of the message as

	::digest size 20

-Ray

Approximate Size: 28570 bytes.

----------------------------------------------------------------------

From: davisd@nimitz.ee.washington.edu
Date: Thu, 20 Oct 94 17:48:49 -0700
Subject: [#94-10-380] PSYCH: Self-Esteem?

Romana writes:
> I do treat myself as an object when I examine, criticize, and try to
> improve myself in the real physical world. (Your comment makes me want
> to examine my point of view for a flaw, however.)

It makes sense to objectively judge your capabilities, to increase your
power and skill at making the world what you would have it be. Years ago,
I was almost constantly sick for a couple of years. I was not *bad*; I
just wasn't *effective* in making my life enjoyable. Now I take care of
my health, and I feel better for it.

> My attitude is something like: "I feel good about myself because of
> what I make of myself, and how I deal with challenges to my being." I
> suppose I am more of a Nietzschean egoist.

I might feel good about having the power to do certain things. For
example, I feel good about the change I've made in my health. Or I might
lose weight, and feel good about that. Feeling good about me appears
mighty unspecific to me.

> A more radical individualist might say something like, "I feel good
> about myself because I am me", and be unconcerned with
> self-transformation or personal power.

I don't feel good or bad about me. I feel good or bad about lots of
things, but I am neither an outcome to feel satisfied about nor an object
which I need to fight to keep or avoid. I am the guy doing the feeling.

Buy Buy
-- Dan Davis

------------------------------

From: davisd@nimitz.ee.washington.edu
Date: Thu, 20 Oct 94 18:39:06 -0700
Subject: [#94-10-381] PSYCH: Self-Esteem?

> From: William Wiser
> Subject: PSYCH: Self-Esteem?
>
> Self-esteem is a mix of thoughts and chemicals and ultimately comes
> from an understanding of your capabilities and the rules of the game
> (plus a little prozac for some of us). The best measure I can think of
> at the moment is to ask the question "Am I having fun yet?"

This is a spin on self-esteem I am more comfortable with. It doesn't ask
whether I deserve to be happy; it asks whether I am. Of course, it has
lost much of the flavor of esteem and turned more toward simple
enjoyment.

> but I see no reason to expect less than
> total mastery in the game of life

As the fellows at Schlitz once said:
Go for the gusto, go for the best.
You know that life's too short to settle for less.

> Will Wiser sbrooks@scf.usc.edu
> Third party candidate for Emperor of Earth
> (Just kidding, I'm not hanging around).

Too bad. I might have appointed you. Someday I may need someone to run
Earth for me.

Buy Buy
-- Dan Davis

------------------------------

From: price@price.demon.co.uk (Michael Clive Price)
Date: Thu, 20 Oct 1994 23:57:56 GMT
Subject: [#94-10-382] Astrology: Devil's Advocate

Harvey Newstrom writes:
> Whenever something seems "impossible", I like to develop a wild
> theory that might explain it, just to keep me from being too smug

Nah, I prefer being a smug, self-satisfied bastard :-)

Michael Price    price@price.demon.co.uk

------------------------------

From: nancc@netcom.com (Nancie Clark)
Date: Thu, 20 Oct 1994 20:00:14 -0700 (PDT)
Subject: [#94-10-383] REproduction

A retrothought re reproduction:

The desire or the choice to reproduce and to give birth can and does
affect both men and women in diverse areas of life. We reproduce
ourselves constantly by stating our convictions or our impressions. Each
and every time we communicate, we are sending out signals of ourselves -
dupes.

The creative process has often been compared to giving birth - and, as
thematically depicted by the abstract expressionists, to both the ecstasy
and the pain. Each time I create I feel that I am in perpetual
reproduction. After the sudden rush of conceptualization passes and the
frustration of hard work and labor sets in, I am tired albeit fulfilled.

Having children is both selfish and delightful. Children bring tremendous
joy and fulfillment not only to their biological parents, but also to the
collective parent - the extended family. Some individuals have an innate,
inherent need or even curiosity to experience learning and relearning
"through the eye of the child." Some of us hold onto the ego of adulthood
and have no interest. Some lack the patience to deal with the constant
interruptions of a child. Some of us are naturally maternal/paternal and
love to fondle and cuddle - easiest expressed through and given to a
child.

Is child-bearing in itself antiquated? The physical pregnancy, I believe,
needs to be updated to a more current technology such as ectogenesis (I
think this is the correct term), or gestation outside the womb; it will
probably be safer for women and will alleviate the nine months of
discomfort and the hours of frustration and labor. When we begin to
"share pregnancy" by way of fascinating innovations such as mosaic birth,
transgenesis, or hybrid birth, we can become more creative in reproducing
physically. Why not put our best features forward? Why not reproduce the
most stimulating, attractive and creative aspects of ourselves?

Anyway, it's a very personal and individual choice, whatever the reason.
Nancie Clark

------------------------------

From: fcp@nuance.com (Craig Presson)
Date: Wed, 19 Oct 1994 10:12:43 -0500
Subject: [#94-10-384] Tolerance and nonsense.

At 06:04 PM 10/20/94 -0400, L. Todd Masco wrote:
[...]
>Frankly, I don't think tolerance is the problem. In fact, I'd rather
>have more tolerance.
[...]

I think we're using that word in two senses here. As liber*s, we have a
lot of use for tolerance of other people's behavior ("being an anarchist
means putting up with a lot of shit you don't like") and political views.
However, when you put on your academic robe, you take on a responsibility
to be critical and skeptical and to show your students how to be both.
You _tolerate_ all viewpoints, but you subject them all to analysis.

>How are those who don't think for themselves to distinguish between
>the claims of authoritative Astrologers and authoritative AI researchers?

By learning to think for themselves, of course - the first step in that
direction being to copy other people who are known to do it well.

>Why can't Johnny think for himself? Because he was rewarded for
>regurgitative "learning" and punished by authorities and peers for
>voicing or otherwise acting upon independent, critical thoughts.

This is certainly a valid point, and it won't do to condemn Minsky for
doing his part to fix the problem.

\\ fcp@nuance.com (Craig Presson) CPresson@aol.com\
-- WWW: http://www.nuance.com/~fcp/ -----------------\
-- President & Principal, T4 Computer Security ------>
-- P.O. Box 18271, Huntsville, AL 35804 -------------/
// (205) 880-7692 Voice, -7691 FAX -----------------/

------------------------------

From: Paul Elliott
Date: Fri, 21 Oct 94 3:09:45 +1800
Subject: [#94-10-385] Tolerance and nonsense.

> I think that when talking about tolerance, it's important to consider
> the failure modes of having too much or too little tolerance... I think
> it helps lend a valuable sense of perspective to things.

The problem is not too much or too little tolerance, but rather a faulty
definition of tolerance. Some people think that tolerance is the view
that all assertions are equal and that all ways of life are equally good.
True tolerance is not this, but is simply the resolution not to resolve
disputes about assertions or ways of life with appeals to violence.

The false view of tolerance (above) destroys human potential because it
short-circuits the search for truth. It causes people with differing
views to plaster over disputes with phony tolerance, rather than seek
evidence or arguments to settle issues. True tolerance allows one to
ruthlessly attempt to destroy false ideas with evidence or argument, but
it renounces violent solutions.

-- Paul Elliott
Telephone: 1-713-781-4543        Paul.Elliott@hrnowl.lonestar.org
Address: 3987 South Gessner #224, Houston, Texas 77063

------------------------------

From: "Peter C. McCluskey"
Date: Thu, 20 Oct 1994 22:22:01 -0700
Subject: [#94-10-386] BOOK: _Regional Advantage_, by Annalee Saxenian

This book describes some differences between the high-tech cultures of
Massachusetts' Route 128 area and Silicon Valley which appear to explain
why the sales of Silicon Valley companies substantially surpassed those
of Route 128 companies. While Route 128 companies adopted some of the
informal hacker-style culture that Silicon Valley did, they kept many of
the organizational policies of traditional industrial firms.
Some of the differences:

Massachusetts companies retained a fairly hierarchical structure, with
important decisions centralized in top management, while Silicon Valley
companies had divisions which functioned much more like autonomous units.

There has been much more cooperation between companies in Silicon Valley.
The boundaries between companies are blurred enough that it is almost
possible to think of Silicon Valley as one big company. There are reports
"that in the early days of the industry it was not uncommon for
production engineers to call their friends at nearby competing firms for
assistance when quartz tubes broke or they ran out of chemicals". In
Massachusetts, by contrast, it is uncommon for competitors to discuss
business at all.

Massachusetts companies were much slower to adopt open systems.

Massachusetts companies remained vertically integrated, which meant that
a start-up had trouble finding a market in which to buy components that
it couldn't afford to make itself.

In Massachusetts, the normal career track was to stay at the same company
for decades. In Silicon Valley, widespread acceptance of frequent job
changes made start-ups easier and spread knowledge more widely.

Curiously, I looked in 3 bookstores in the Boston area for this book and
didn't find it. I have noticed it in 2 of the 3 bookstores I have been in
since moving to Silicon Valley, without deliberately looking for it.

Another book worth mentioning is Jim Rogers' _Investment Biker_, tales of
his round-the-world motorcycle trip. As a book about travel it is fairly
interesting (he didn't limit himself to places where the roads were
mapped). The comparisons between governments are much more informed and
capitalist than can be found in newspapers (as of 1990, China was very
capitalist, while the Soviet Union showed almost no hint of capitalism).

---------------------------------------------------------------
Peter McCluskey    | pcm@rahul.net     | Cardassia delenda est!
finger for PGP key | pcm@world.std.com | netcom delenda est!

------------------------------

From: "On the Internet nobody knows if you're a Turing Machine."
Date: Fri, 21 Oct 94 01:50:41 EDT
Subject: [#94-10-387] Astrology

Here's a simple astrology experiment: Send your date of birth (month/day)
in a message to the list, or along with your next message, in a form
something like:

birthday: Feb 8

Perhaps some sort of astrologically influenced pattern will show
itself--or not.

------------------------------

From: freeman@netcom.com (Jay Reynolds Freeman)
Date: Thu, 20 Oct 1994 23:39:00 -0700
Subject: [#94-10-388] Challenge to uploaders

The First Extropian Apple is tempted to seed this thread with postings
designed to get to the core of the matter, but in-ciders have threatened
to peel and quarter him if he should be so rotten...

------------------------------

From: solman@MIT.EDU
Date: Fri, 21 Oct 1994 05:21:19 EDT
Subject: [#94-10-389] AI development acceleration (was: Nanotechnology and uncertainty)

> Carl Feynman writes:
> >AI, as opposed to most other technologies, has a curious
> >self-accelerating effect. As soon as someone develops a machine that
> >is better, faster or cheaper at designing AI systems than is a human
> >designer, they can use that machine to develop the next generation of
> >such machines, which leads to a self-accelerating cycle culminating in
> >the creation of godlike intellects, that are to us as we are to the
> >beasts that perish.
> >
> >Imagine, for example, that a design team of 100 people spends two
> >years designing the first slightly superhuman AI. They start work in
> >January 2010, and in January 2012 they have a machine that is as good
> >at designing hardware and software as they are, but does it twice as
> >fast. The humans build a hundred of these Mark I gizmos, hook them
> >together, and they design the Mark II, which is four times as fast as
> >a human. The Mark II is available in January 2013, and it designs the
> >Mark III, available in August. By New Year's Eve 2013, they're up to
> >the Mark VII, with 16 KB (kilobrain) capacity, and new generations are
> >emerging every few hours.
> >
> >... Of course, the development rate cannot become infinite; eventually
> >it will be limited by some resource other than the cleverness of
> >design. ... Note that this scenario is independent of the level of
> >nanotechnology available at the time of the development of AI.
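The quoted schedule is just a geometric series, and it is easy to check
numerically. Here is a minimal sketch of that arithmetic (my own check,
not Carl's; it takes the quote's assumptions of a fixed two team-years of
design work per generation, each Mark designing twice as fast as whatever
designed it, and my added simplification that build time is negligible):

    # Back-of-the-envelope check of the schedule quoted above.
    # Assumptions: each generation needs a fixed 2 team-years of design
    # work, and each Mark designs twice as fast as its own designer.

    DESIGN_EFFORT = 2.0   # team-years of work per generation
    year = 2010.0         # the human team starts in January 2010
    speed = 1.0           # design speed, relative to the human team

    for mark in range(1, 8):              # Mark 1 (I) through Mark 7 (VII)
        year += DESIGN_EFFORT / speed     # calendar time this generation takes
        print("Mark %d ready around %.2f" % (mark, year))
        speed *= 2.0                      # the new Mark designs twice as fast

This reproduces the quoted dates to within a month or two (Mark I in
2012.00, Mark II in 2013.00, Mark III in mid-2013, Mark VII by late
December 2013). It also shows why the quote hedges about resource limits:
the series 2 + 1 + 1/2 + ... sums to 4 team-years, so under these
assumptions even an infinite sequence of generations would complete by
January 2014 unless some resource other than design cleverness binds
first.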
> This accelerating AI scenario has been repeated so often that it has
> become standard wisdom. But as I said last December when we discussed
> Vinge's Whole Earth Review article (winter '93), it is not at all
> obvious that things would go this way.
>
> The scenario above depends crucially on the idea that there is a fixed
> design cost for "intelligence" (here 200 person-years), at which any
> intelligence can design something which is twice as efficient as
> itself - can do the same design work with only half the inputs
> (including time). In economic terms, this is like saying that the
> "production function" of intelligence has dramatic "increasing returns
> to scale".
>
> Now I'll grant that this may be possible (at least over some range),
> but you should also grant that intelligence may be much harder to
> produce than this. Maybe designing intelligence is so damn hard that
> smarter creatures than us find it even harder to improve on
> themselves, i.e., harder than we find it to improve on ourselves. We
> don't really understand intelligence well enough to say one way or the
> other, and the overoptimism of previous estimates of rates of AI
> progress is not especially encouraging.

[I've just installed a new mail filter and this four-month-old post came
up. Since the author suggested that this issue is of frequent interest,
I'll comment on it now anyway.]

I would hypothesize, based on human behavior, that as the AIs become more
powerful, the development time before the next-generation (2x) AI will
decrease _faster_ than the power of the AIs increases.

If you look at humans working in a group, you notice that once a certain
group size is reached, communication becomes the limiting factor. This
happens in conventional computers too, but in conventional computers the
communications process is bandwidth-limited. Typically there exists some
fixed method of representing data, and the rate of communication is
limited primarily by the bandwidth between the location of that
representation and the computer.

In humans the limiting process seems to be one of knowledge acquisition.
The information I need from the other people working on my project is
unlikely to be of a clearly defined form. Very often, I won't even be
aware that I need a piece of information unless the team member who
generated it tells me. It thus becomes necessary for me and the person
who has the knowledge to go through a complex and time-consuming process
of transferring ideas until each of us possesses enough of the other's
knowledge base to identify the project-critical ideas and transfer them.
The time bound here is not one due to bandwidth, but one due to human
understanding (or, alternatively, human computation).

If I get humans that are twice as intelligent (but keep the number the
same... let's assume that a project creating the first true AI would be
given top priority and would be staffed to saturation), I can expect a
speed-up of more than two times, because not only do the humans think
faster, but ideas which were previously spread over multiple humans are
now carried out by a single human (or at least fewer humans), thus
obviating (or decreasing) the need for knowledge transfer.

Now admittedly I have just made a large number of minimally justified
assumptions, but I tend to trust this result because it appears to work
in real life. If the members of team A individually work x times faster
than the members of team B and both have roughly similar communication
skills, team A will usually work substantially more than x times faster
than team B.
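A toy model makes the superlinear claim concrete. The sketch below is my
own illustration, not anything from the original posts, and unlike the
argument above it lets the team shrink rather than holding headcount
fixed - but it captures the same effect, namely that smarter workers hold
more of the project in one head, so fewer pairwise knowledge transfers
are needed. All the numbers (WORK, CAPACITY, TRANSFER) are arbitrary:

    # Toy model of why a team of s-times-faster workers can finish much
    # more than s times sooner.  A project of WORK units is split among
    # workers who each hold CAPACITY*s units when s times as capable;
    # every pair of workers must do one knowledge transfer, and each
    # transfer also runs s times faster.

    WORK = 1000.0      # total design work (arbitrary units)
    CAPACITY = 10.0    # work one baseline worker can hold in their head
    TRANSFER = 5.0     # time per pairwise knowledge transfer at baseline

    def project_time(s):
        """Completion time with workers each s times as capable."""
        n = max(1, round(WORK / (CAPACITY * s)))  # fewer, smarter workers
        thinking = WORK / (n * s)                 # design work, in parallel
        links = n * (n - 1) / 2                   # pairwise transfers needed
        talking = TRANSFER * links / s            # transfers also speed up
        return thinking + talking

    base = project_time(1.0)
    for s in (1.0, 2.0, 4.0):
        t = project_time(s)
        print("s=%.0f: time=%8.1f  speedup=%5.1fx" % (s, t, base / t))

In this model doubling s gives roughly an eightfold speedup: halving the
headcount cuts the number of pairwise links by about four while each
transfer also runs twice as fast, and the communication term dominates
the thinking term. It is only a sketch, but it shows how a superlinear
group speedup can fall out of the knowledge-transfer bottleneck just
described.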
Now if the AIs that we create share with humans our apparently holistic
method of knowledge representation, it seems reasonable that the
productivity of groups of AIs might also increase at a substantially
faster rate than that of the individual members of those groups. In such
a case, even if the difficulty of creating an AI twice as smart as its
creators increases substantially as the creators get smarter, it is still
very reasonable to expect the creation time to decrease.

[Of course, in claiming that the speed of invention is limited by the
rate of knowledge transfer between the inventors, I left myself open to
the very real problem that it might first be necessary to "raise" the AIs
until they are smart enough to participate in the creation of new AIs.
Unless it becomes possible to scale the "stimuli" given to AIs as they
are "growing up" at the same rate as the AIs increase in power, the time
between generations of AI could become dominated by child-rearing.]

Cheers,

Jason W. Solinsky

[BTW, why are we assuming that AIs are going to want to obsolete
themselves anyway? What incentive could we offer AIs to do this?]

------------------------------

End of Extropians Digest V94 #293
*********************************