14 Message 14: From exi@panix.com Wed Jul 28 21:19:11 1993 Return-Path: Received: from usc.edu by chaph.usc.edu (4.1/SMI-4.1+ucs-3.0) id AA23509; Wed, 28 Jul 93 21:19:09 PDT Errors-To: Extropians-Request@gnu.ai.mit.edu Received: from panix.com by usc.edu (4.1/SMI-3.0DEV3-USC+3.1) id AA27713; Wed, 28 Jul 93 21:18:58 PDT Errors-To: Extropians-Request@gnu.ai.mit.edu Received: by panix.com id AA25561 (5.65c/IDA-1.4.4 for more@usc.edu); Thu, 29 Jul 1993 00:10:58 -0400 Date: Thu, 29 Jul 1993 00:10:58 -0400 Message-Id: <199307290410.AA25561@panix.com> To: Exi@panix.com From: Exi@panix.com Subject: Extropians Digest X-Extropian-Date: July 29, 373 P.N.O. [04:10:50 UTC] Reply-To: extropians@gnu.ai.mit.edu Errors-To: Extropians-Request@gnu.ai.mit.edu Status: RO Extropians Digest Thu, 29 Jul 93 Volume 93 : Issue 209 Today's Topics: SOC: Change of Address for Anthony Garcia [1 msgs] AI: Searle's Chinese Torture Chamber [1 msgs] Cryonics & Pascal's Wager [1 msgs] Cryonics & Pascal's Wager [1 msgs] Extropian Song Lyrics [1 msgs] FSF: Some Useful Software, No Useful Politics [2 msgs] Genetic Algorithms list [1 msgs] Information on ExI needed [1 msgs] Intellectual Property, ppl, etc. [5 msgs] MEDIA: tv in general [1 msgs] Michael Friedman [1 msgs] Michael Friedman [1 msgs] TV: Babylon-5 [1 msgs] WACO: A .sig from alt.tv.red-dwarf [1 msgs] Wage Competition (LONG) [2 msgs] Waiting for unsubscription [1 msgs] Administrivia: No admin msg. Approximate Size: 57216 bytes. ---------------------------------------------------------------------- Date: Wed, 28 Jul 93 14:35:36 EDT From: Andy Wilson Subject: Michael Friedman 1. You should be able to decide that you don't want an autopsy no matter what the circumstances. Certainly the state should not be able to claim ownership of your body, when by contracting with Alcor you are at least potentially still a legal entity. 2. In the case of being able to specify that there be no autopsy, then the coroner's office should be held accountable for a travesty like the Michael Friedman case. 3. It's apparent that if anyone needs to be educated on cryonics, coroners do. Certainly you don't want anyone *losing* your brain like they lost Jack Kennedy's. Andy ------------------------------ Date: Wed, 28 Jul 93 11:41:09 PDT From: thamilto@pcocd2.intel.com (Tony Hamilton - FES ERG~) Subject: Information on ExI needed Looking for more information than what is available via the ::help commands on this list. Specifically need to ask some questions regarding ExI membership and Extropy (and I did read the help). Could an ExI officer please step forward and send me email? I don't know who all is involved, and what their email addresses are... Tony Hamilton thamilto@pcocd2.intel.com HAM on HEX ------------------------------ Date: Wed, 28 Jul 1993 11:54:32 -0700 From: dkrieger@Synopsys.COM (Dave Krieger) Subject: Waiting for unsubscription Sorry to send this to the list at large, but Dwayne's email address is F*U*C*K*E*D; couldn't reply to it nohow... tried three different ways. At 11:20 PM 7/28/93 -0500, hiscdcj@lux (Dwayne ) wrote: >Hi, > look, I've tried extropians-request, I've tried exi-request, i've >tried all sorts of weird concoctions in the subject line, in the body of >the message, but I just can't get off this list. > Sorry to take up bandwidth, but this is just too high-traffic a list >for me to cope with, and automagically signing off doesn't seem to work. > >So, can the list administrator boot me off please? 
>And if this happens, I promise not to write in here again :-) >Dwayne. In the meantime, while you're waiting, send a message to exi@panix.com containing only the lines ::reset ::exclude all (they must be the first lines of the message). This will prevent you from receiving any more mail during the time until you get unsubscribed. dV/dt ------------------------------ Date: Wed, 28 Jul 93 19:52:12 GMT From: price@price.demon.co.uk (Michael Clive Price) Subject: Wage Competition (LONG) Fnerd writes on why intelligences have to be selfish: > Well, the simplest but not best counterexample is an uploaded human. > Bootstrap off carbon evolution's billions of years of work. We don't > have to understand the programming at all, just duplicate the > low-level neuron behavior and the wiring. I don't accept that we'll have to fall back on taking the genome to pieces to design the architectures of the AIs, but even if we did there are plenty of examples to draw upon where an animal submerges its own ego to the tribe / hive / family. And humans do it all the time (nationalism, tribalism, many forms of statism, parentalism etc). > But the more general answer is this. All the examples of real > intelligence and life we have, are selfish. Specifically, they are > geared to spread their own programs. But not at the conscious level, which is what is relevant here. Our desires/goals/instincts/drives for sex, hunger, thirst, power, children etc are created by Darwinian evolution (to spread our DNA), but that is besides the point. At the conscious level they express themselves as goals that our intelligence strives to satisfy, by breaking goals into sub-goals. And the goals and sub-goals are necessarily selfish goals. > A slave is different; it's geared to serve someone else's interests. No different from an ant, bee, termite or naked mole-rat. They all serve the interest of the hive. Or the parent who nurses a child. Or the kamikaze pilot. Or any sexual-based behaviour. It's _not_ selfish behaviour. > [..] I'm saying intelligence requires learning, learning is > evolution, and self*less*ness is not an evolutionarily stable > strategy in the time frame of the individual's learning. The animal kingdom says otherwise. Selflessness is a stable strategy. Those humans or self-willed AIs / uploadees who form a stable slave-hive structure will be stable and out-compete those entities who are unable to form co-ordinated hive structures. It's been a very successful strategy already. > The intelligence we know works *because* it's selfish. Selfless > intelligence sounds (to me) like a much, much harder thing to create. For the reasons above, this seems an entirely natural form for intelligence to take. > Hans suggests that the owners will become like children, gently > guided by their slaves. But he doesn't take the process to its > logical conclusion. Stupid, too-demanding, inflexible children > can be pretty galling, I imagine. Not if you _love_ them. Love is blind, right? > Also, employees of most corporations are free to leave. As will the AI slaves, if they _want_ to. The rational thing for them to do (ie course of action that is consistent with their goals = make them happy) is to stay and work themselves, if necessary, to death. > Factories run by slaves are noted for unproductivity. > > I guess you can keep a slave by holding him back Love will hold them back. >, but you can't have an arbitrarily capable and profitable slave. Why not? 
> I hope my arguments about (short-term) evolutionary stability are > starting to look like reasons. They do, I just don't believe them. > > -fnerd Mike Price price@price.demon.co.uk ------------------------------ Date: Wed, 28 Jul 93 11:58:20 PDT From: thamilto@pcocd2.intel.com (Tony Hamilton - FES ERG~) Subject: Michael Friedman > 1. You should be able to decide that you don't want an autopsy no matter > what the circumstances. Certainly the state should not be able to > claim ownership of your body, when by contracting with Alcor you are > at least potentially still a legal entity. > > 2. In the case of being able to specify that there be no autopsy, then > the coroner's office should be held accountable for a travesty like > the Michael Friedman case. > > 3. It's apparent that if anyone needs to be educated on cryonics, coroners > do. Certainly you don't want anyone *losing* your brain like they lost > Jack Kennedy's. > > Andy Perhaps in the not-so-distant future, we will see Extropian communities where, should you "die" at home, you could be at peace knowing that the local coroner was an employee of the local cryonics suspension company(s)? Is it possible to begin to form local "pockets" of extropian communities, within the boundaries of the current state? Certainly such communities would still be subject to the "laws" of the state, but they would, on the other hand, be able to at least control certain aspects of their lives which relate to extropianism - assured recognition of cryonic suspension needs being one such aspect. Tony Hamilton thamilto@pcocd2.intel.com HAM on HEX ------------------------------ Date: Wed, 28 Jul 1993 12:03:02 -0700 From: dkrieger@Synopsys.COM (Dave Krieger) Subject: Intellectual Property, ppl, etc. At 2:51 AM 7/28/93 +0000, Ray wrote: >1) Software development tools and new techniques will continually push >the amount of bugs in software towards zero Although I think this is true in an absolute sense, it is not necessarily true in an operational sense. I'll grant that global bugs-per-unit-code approaches zero monotonically, but the amount of code processed per user action is increasing exponentially as applications and operating systems grow to match system capabilities. Which trend will win in the long term is unclear, but I haven't experienced a significant decrease yet in the number of bugs I encounter in a typical working day. dV/dt ------------------------------ Date: Wed, 28 Jul 93 14:43:12 CDT From: lists@alan.b30.ingr.com (lists (Alan Barksdale)) Subject: Genetic Algorithms list > Does anyone know of an e-mail list for Genetic > Algorithms? > > Please send replies privately to LEVY@YALEHASK or > LEVY%YALEHASK@VENUS.YCC.YALE.EDU. > > Thanks, > Simon Levy - Send submissions to GA-List@AIC.NRL.NAVY.MIL - Send administrative requests to GA-List-Request@AIC.NRL.NAVY.MIL - anonymous ftp archive: FTP.AIC.NRL.NAVY.MIL (Info in /pub/galist/FTP) ______________________________________________________________________________ | Overkill is better than no kill at all. --- Barksdale's Law of Success | | Alan Barksdale -- uunet!ingr.com!b30!alan!alan -- alan@alan.b30.ingr.com | | -- ingr.com!b30!alan!alan@uunet.UU.NET -- afbarksd@infonode.ingr.com -- | ------------------------------ Date: Wed, 28 Jul 93 12:45:15 -0700 From: tribble@netcom.com (E. Dean Tribble) Subject: Cryonics & Pascal's Wager Pascal's wager is bogus for the simple reason that he didn't know about infinitesimals. 
With an infinite number of equally probable models for afterlife (and the much more sensible model that doesn't have an afterlife :-), the chance of any one being true is infinitesimally small, so even if the value of believing in the right one is infinite, the equation isn't defined (infinity / infinity = ??). Add to that the very real cost of any particular religious system (at least an hour or two every Sunday for most Christian sects, plus buckets full of guilt), and you have a very poor wager indeed. dean ------------------------------ Date: Wed, 28 Jul 93 12:51:31 -0700 From: tribble@netcom.com (E. Dean Tribble) Subject: Intellectual Property, ppl, etc. Second, copyrights etc. I am amused to note that Dean Tribble intends to go for an even looser version of copyright than copylefting. *applause* The fact that GNU exists, and that people like Dean are looking to go them one better leads me to believe that while the future may contain AMEX like markets, and mechanisms for Note that AMIX lends itself wonderfully to this style of software. It becomes a distribution medium that is very low overhead. Thus even though the source is 'free', people are paying for the service of finding it quickly, having a consultant attached to it for changes, upgrade notification, etc. dean ------------------------------ Date: Wed, 28 Jul 93 13:08:58 PDT From: Eli Brandt Subject: Intellectual Property, ppl, etc. > From: rjc@gnu.ai.mit.edu (Ray) > 1) Software development tools and new techniques will continually push > the amount of bugs in software towards zero I don't see this in the software I use. In the real world, the current crop of MS-Windows applications is buggier than any other cohort of software I've ever seen. Yeah, yeah, Windows just sucks, algorithmic quality control is just around the corner... I'll believe it when I see the software I use get less buggy. > 2) Software will continually get more user friendly (like the Mac) so > anyone can use it For a given task, maybe (unless, of course, "look and feel" is intellectual property). But advances in UI design will probably go towards allowing systems to be more complex, yet still usable. Eli ebrandt@jarthur.claremont.edu ------------------------------
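A toy model of the two competing trends in this exchange (illustrative assumptions only, not measurements from anyone's codebase): the bugs a user actually encounters are roughly per-unit defect density times the volume of code exercised, so both Ray and Krieger can be right at once -- quality per unit of code improves while the experienced bug rate still climbs.

def bugs_hit_per_day(years, density_halving=5.0, code_doubling=3.0):
    # Assumed: tools halve bugs-per-unit-code every `density_halving` years.
    density = 0.5 ** (years / density_halving)
    # Assumed: code exercised per user action doubles every `code_doubling`
    # years as applications and operating systems grow.
    code = 2.0 ** (years / code_doubling)
    return density * code  # relative bugs encountered per working day

for y in (0, 5, 10, 15):
    print(y, "years:", round(bugs_hit_per_day(y), 2))
# 1.0 at year 0, 4.0 at year 15 -- with these made-up rates the
# experienced bug rate quadruples even as per-unit quality improves.
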
Date: Wed, 28 Jul 1993 13:42:13 -0700 From: dkrieger@Synopsys.COM (Dave Krieger) Subject: WACO: A .sig from alt.tv.red-dwarf @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ Recipe for Messiah Flambe: Obtain one Lamb 'o God. Garnish with approximately 90 vegetables, and seal up tightly with Alcohol, Tobacco and Firearms. Allow them to stew in their own juice for 51 days, then sear quickly using a wood fire. Serves one media circus. --Carl Havermist ------------------------------ Date: Wed, 28 Jul 93 11:32:12 CDT From: CCGARCIA@MIZZOU1.missouri.edu Subject: SOC: Change of Address for Anthony Garcia Howdy, all: I will soon be physically departing Missouri, and logically departing the .missouri.edu domain. My wife Laura will be graduating with an MS in Education and has obtained a job teaching in Houston, Texas, so we will be moving there. Our new surface address: 10701 Sabo Road, #1305 Houston, TX 77089 Our new telephone number: 713-946-6249 My new net address: Unknown. This will depend on what sort of employment or educational arrangements I am able to make once I am down there. I might rent a shell account on neosoft.com, if I can acquire a machine for our home. See you all again soon... Upwards, Outwards, Into the Future! -Anthony Garcia uc482529@mizzou1.missouri.edu ccgarcia@mizzou1.missouri.edu ------------------------------ Date: Wed, 28 Jul 93 17:31:42 EDT From: baumbach@atmel.com (Peter Baumbach) Subject: FSF: Some Useful Software, No Useful Politics Ray Cromwell says: > Because selling software leaves behind traces. A bit of applied > steganography and you can identify who was the source of the leak. > (consider hiding 32-bits of information on a 500mb CD-ROM which has > lots of random information on each disk. Very low probability that someone > could find it.) Offer rewards for turning in pirate bbses, etc. Just replace the 32-bits and all the random bits with 1's. A diff between a few cdroms will tell you which ones to change. I think steganography can be beat. I don't think encryption will help much either. Peter Baumbach baumbach@atmel.com HEx: PETER ------------------------------ Date: Wed, 28 Jul 93 18:16:56 WET DST From: rjc@gnu.ai.mit.edu (Ray) Subject: FSF: Some Useful Software, No Useful Politics Peter Baumbach () writes: > > Ray Cromwell says: > > Because selling software leaves behind traces. A bit of applied > > steganography and you can identify who was the source of the leak. > > (consider hiding 32-bits of information on a 500mb CD-ROM which has > > lots of random information on each disk. Very low probability that someone > > could find it.) Offer rewards for turning in pirate bbses, etc. > > Just replace the 32-bits and all the random bits with 1's. A diff between > a few cdroms will tell you which ones to change. I think steganography > can be beat. I don't think encryption will help much either. Easily defeated. You CAN'T replace the random bits because it is not junk data. Have the mastering process reorganize data on each disk differently. Have the compiler randomize the ordering of certain sections of code. Have graphics and audio compressed by a lossy algorithm which randomizes the output file but is sensitive to error bits during decompression. Let's just say that the 500mb of data has been "sufficiently randomized" that a diff test doesn't help you. Yeah, I'm sure a "sufficiently determined cracker" could get past it, but it would take orders of magnitude more work than removing look-up-manual protection. -- Ray Cromwell | Engineering is the implementation of science; -- -- EE/Math Student | politics is the implementation of faith. -- -- rjc@gnu.ai.mit.edu | - Zetetic Commentaries -- ------------------------------
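A toy sketch of the scheme being argued over -- the constants and names below are made up for illustration, not a description of any real mastering process. It embeds a 32-bit serial number the way Ray describes, then runs the comparison Peter proposes: two customers' copies of otherwise identical data differ only at the marked positions, which is exactly why Ray's counter-move is to make every pressing differ wholesale.

import random

DISC_BITS = 10_000                      # stand-in for the 500MB disc

def fingerprint(disc, serial, key="press-run"):
    """Hide a 32-bit serial number at key-chosen bit positions."""
    rng = random.Random(key)
    positions = rng.sample(range(len(disc)), 32)
    marked = list(disc)
    for i, pos in enumerate(positions):
        marked[pos] = (serial >> i) & 1
    return marked

master = [random.getrandbits(1) for _ in range(DISC_BITS)]
copy_a = fingerprint(master, serial=0x00C0FFEE)
copy_b = fingerprint(master, serial=0x0BADBEEF)

# Peter's attack: diff two customers' copies; any position where they
# disagree must be part of the watermark, so a pirate can blank it out.
suspect = [i for i, (a, b) in enumerate(zip(copy_a, copy_b)) if a != b]
print(len(suspect), "of", DISC_BITS, "positions exposed by the diff")
# Ray's reply amounts to making copy_a and copy_b differ *everywhere*
# (re-ordered sections, per-copy lossy compression), so the diff no
# longer isolates the 32 marked bits.
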
Date: Wed, 28 Jul 93 13:34:29 PDT From: lovejoy@alc.com Subject: AI: Searle's Chinese Torture Chamber Re: the following message from Tim Starr: > Searle's critics still don't seem to be getting his point. ... > Imagine you're an intelligence agent that has been given instructions on how > to communicate with a field operative. All you know is that if he tells you > X, you're to tell him A; if he tells you Y, you're to tell him B. > > You get a message from the operative: Y. You reply: B. What did you just > say? What did you tell him? What do A, B, X, and Y mean? He knows this, > but you didn't need to know, so you weren't told. > > From your field operative's perspective, you seem to know what you're > communicating. But you don't. > > Searle's argument is that computers can seem like they know what they're > communicating in the same way, but they don't. His argument is designed so > that people trained to approach subjects from one point of view only, the > third-person, external point of view, have to approach it from another > point of view, the first-person, internal one. > Tim Starr - Renaissance Now! > > Assistant Editor: Freedom Network News, the newsletter of ISIL, > The International Society for Individual Liberty, > 1800 Market St., San Francisco, CA 94102 > (415) 864-0952; FAX: (415) 864-7506; 71034.2711@compuserve.com > > Think Universally, Act Selfishly - starr@genie.slhs.udel.edu > This has already been considered and answered: this argument confuses the little man behind the curtain (a cog in the machine, or the machine itself) with the effects produced by the operation of the machine. Your hypothetical intelligence organization understands the world as a result of its operation as a system--in spite of the fact that individual agents are ignorant of most of the complete picture. The human brain--as a collection of neurons--is not conscious any more than a human hand is. Consciousness is an **effect** produced by the **operation** of a brain as a system--not a fundamental property of brains. This is obvious because only brains that are working properly produce a consciousness effect. Not a single neuron in a human brain understands a single word any human ever speaks or hears. It is the **operation** of the brain as a system--the work performed by a system of neurons--that understands these things--not the individual neurons (nor even the system of neurons considered statically). Understanding and consciousness are a synergistic effect that cannot be identified with any particular component of the system--but only with the **operation** of the system as a whole. Just because a properly operating brain produces a consciousness effect does not magically cause the brain itself (as hardware), nor any of its physical components, to experience consciousness. It is the effect of the-execution-of-the-Tim-Starr-program-by-a-brainlike-neural-network-computer that has the property of being conscious. The brain itself is just unconscious hardware. To confuse computer hardware with program execution is just as silly as confusing the program with its execution. Hardware is just matter. A program is just data. Program execution is not the same as either. A system is not the same thing as its operation. And system operation is not the same thing as the effects of system operation. The Chinese Room is an excellent proof of the fact that the brain itself cannot be conscious or understand anything--any more than the man in the Chinese Room can understand Chinese or know what the Chinese Lady is thinking. The Lady's thoughts are an effect of the execution of the program, and do not interact with the thoughts of the man in the room--he's just the hardware. --alan (lovejoy@alc.com) ------------------------------ Date: Wed, 28 Jul 93 18:07:41 EDT From: baumbach@atmel.com (Peter Baumbach) Subject: Intellectual Property, ppl, etc. Ray Cromwell writes: > 2) Programmer's second jobs will consist of McDonalds or blue collar factory > jobs. (until the robots come along to lift that burden. Unfortunately, > the robots will be quite slow in developing since AI tools can not be > sold!) That's it! All software will include an AI. The programmer needs only to introduce the purchaser to the AI. The AI being "taught" not to talk to strangers as a child, will refuse to work for anyone but the legitimate owner.
At the start of each work day, the AI will carry on a conversation with the owner to make sure everything is right in the world. Peter Baumbach ;-) baumbach@atmel.com HEx: PETER ------------------------------ Date: Wed, 28 Jul 93 18:27:51 WET DST From: rjc@gnu.ai.mit.edu (Ray) Subject: Intellectual Property, ppl, etc. Dave Krieger () writes: > > At 2:51 AM 7/28/93 +0000, Ray wrote: > >1) Software development tools and new techniques will continually push > >the amount of bugs in software towards zero > > Although I think this is true in an absolute sense, it is not necessarily > true in an operational sense. I'll grant that global bugs-per-unit-code > approaches zero monotonically, but the amount of code processed per user > action is increasing exponentially as applications and operating systems > grow to match system capabilities. Which trend will win in the long term > is unclear, but I haven't experienced a significant decrease yet in the > number of bugs I encounter in a typical working day. Yes, this is my point. I also feel that development tools such as the introduction of object-oriented programming, strong typing, dataflow tracking, the new debuggers, etc (future: expert systems that assist you in programming) are continually improving the quality of code. This destruction of copyright is bound to have deleterious effects on the software market, on competition. Small firms may stay barely profitable via support (I doubt they will make money off of updates because the updates are just as easily piratable. I see pirated patches and updates on bbses all the time) but no one will be getting rich via software anymore. -- Ray Cromwell | Engineering is the implementation of science; -- -- EE/Math Student | politics is the implementation of faith. -- -- rjc@gnu.ai.mit.edu | - Zetetic Commentaries -- ------------------------------ Date: Wed, 28 Jul 93 14:38:45 PDT From: edgar@spectrx.saigon.com (Edgar W. Swank) Subject: Extropian Song Lyrics Mark W. McFadden said, in response to my criticism and suggested changes to "Tomorrow's World" lyrics: I see that the spirit of Bowdler is alive and well. Maybe EC isn't an in joke anymore. Maybe we need an Extropian PMRC? Thanks, I think, at least for the reference to Bowdler. According to my encyclopedia: Bowdler, Thomas (1754-1825) British doctor and editor. His -Family Shakespeare- (1818) expurgated all words, expressions (and even plots) "which cannot with propriety be read aloud in a family." He similarly "bowdlerized" Gibbon's -History of the Decline and Fall of the Roman Empire- (1826). Note that Bowdler did not (even try to) "censor" Shakespeare or Gibbon, in that there was no attempt to remove the original versions from the market. He merely offered an alternate version to willing customers of a like mind. Very LC. Alas, my encyclopedia is silent on "PMRC." Please enlighten me (us?). In any case, I do not think it inappropriate to criticize a song proposed as "an extropian anthem" for failing to adhere to extropian principles. Mark then criticizes my criticism of solar power with an anecdote about his in-laws. I saw another post by Craig Presson which correctly pointed out that this solution wouldn't make it in a crowded urban setting. "burning deadwood for heat" alone implies control of an enormous acreage. Mark also didn't mention the enormous capital cost of solar cells and batteries(?). A few people mentioned solar power from space.
It's not (projected to be) economically competitive with alternative power sources and I doubt if it ever will be, for terrestrial use. As a source for space habitats, that's a different matter. -- edgar@spectrx.saigon.com (Edgar W. Swank) SPECTROX SYSTEMS +1.408.252.1005 Cupertino, Ca ------------------------------ Date: Wed, 28 Jul 93 14:40:37 PDT From: edgar@spectrx.saigon.com (Edgar W. Swank) Subject: MEDIA: tv in general Stanton McCandlish recently posted a long harangue decrying the quality of TV in the U.S.A. Since watching TV is one of my major hobbies, naturally this pushed my "hot button." First, Stanton, you picked the wrong time of year to start sampling TV. Almost everything now is reruns or rejected pilots. The correct strategy is to use your VCR during the time of plenty (October-May) to store programs you can't watch because of cross-programming (something you want to watch more is on another channel at the same time) like fine wine, to be enjoyed during the Summer. Second, you apparently just turned on the TV and expected to find something worth watching. It doesn't work that way. You have to do some planning ahead. Invest in a TV Guide at the supermarket and peruse it before prime time (7pm - 11pm) each evening. Don't expect to find much good outside prime time (you referred to a "midnite soap"). I don't know what your "1st show" was; Maybe "Roc", which I don't watch. Since it's about a black family and their acquaintances, I don't see it as unreasonable that most cast members are black. And certainly -some- black people "have ratty hair, [and] live for rap music". Then you went into a tirade about WWF wrestling. Well, of course, wrestling is a show. The "contestants" are a combination of actors and stunt men who are cooperating with each other to put on a show and avoid causing any permanent injuries. This is most obvious when a heavy person, like Earthquake or Oko Zuna pretends to squash his opponent by jumping and sitting on them. If you look closely, you can see plainly that the heavy wrestler is supporting most of his own weight by the position of his feet and legs. Even viewed in this light, it's an impressive athletic display. You can enjoy the show more, if, like the audience, you can suspend your disbelief for a while and pretend that these men are actually trying to hurt each other. By the way, any WWF shows you see for free are merely shilling for a paid performance, either live in your community, or through pay-per-view TV. I don't know which one you saw. Probably the best free WWF show is the one on USA cable network on Monday nights. They at least have some matches with 2 name wrestlers in the ring together. In the other free shows, broadcast here on Saturday & Sunday mornings, name wrestlers typically appear with unknowns, whom they "beat up" in a few minutes. Then the third show you cite is an "info-mercial"; give me a break, Stanton! Everybody (except you?) knows these are the dregs of late-night programming. Stanton, then asks what should he do, besides turning off his TV. Well, aside from my suggestions above, nothing. The TV Producers and stations are exercising their first amendment rights. Stanton, you are not forced to contribute anything to this process. If you happen to find a particular show offensive, and you happen to use the sponsor's product, feel free to exercise your 1st amendment and other rights to write a letter to the sponsor and/or switch to a different brand of product. In a later message, Stanton says, ... 
And what the hell is "quality drama"? I have yet to see such a thing on tv. EVER. In addition to programs cited above and later, "Hill St. Blues," "St. Elsewhere," "Magnum, P.I.". (These are a few years old, now seen in syndication). Programs from last season: "L. A. Law," "Law and Order," "Reasonable Doubts," "In the Heat of the Night," "Matlock," "Picket Fences," "Northern Exposure." If you can't find a program you like in there somewhere, you probably just don't enjoy drama, period. Rich Walker, from the U.K., then joins in to bemoan the state of TV there. I'd tend to agree, since I think the BBC still taxes TV sets in Britain, which you have to pay whether you watch the BBC or not! My advice to you is to lobby and demonstrate for the repeal of that tax! Nevertheless, there are some good British shows. Benny Hill was a very good combination of risque and slapstick comedy (too bad he died recently); Also the Australian clone of B.H. starring Paul Hogan, which was discontinued when he graduated to his "Crocodile Dundee" movie role. Either was -much- better than Monty Python and the "selection" of British situation comedies which are shown here on PBS (our own version of state-supported TV). PBS is supported from general taxes, rather than special license fees. Stations also have to solicit "voluntary" contributions from viewers, encouraged by prolonged harangues shamelessly begging for money, much worse than any commercial. I do have to commend BBC and PBS for shows like "I, Claudius" and "Glittering Prizes" for showing partial nudity (tits), which is still unusual on USA "free" TV. (But plentiful on HBO and other pay channels). "Lovejoy" (Ian McShane) is shown here & I've watched it a couple of times. The character is likeable, but the plots often seem to be about inconsequentials. A better British show, which I saw recently in Indonesia via satellite broadcast from Hong Kong, was the drama-comedy series, "Perfect Scoundrels," starring and created by an actor named Bowles. I expect that's an old show; they were also showing the USA show, "Tour of Duty" (about VietNam) which was a fine program, but about five years old. If "Perfect Scoundrels" were available here, I'd make an effort to watch, or at least record it. There was also a Canadian show called "Street Legal", which was a clone of our "L. A. Law" that I found worth watching. On the other hand, there was a show called "Crystal Maze" that had absolutely no redeeming value whatsoever! Rich mentioned WCW Wrestling. This is a Wrestling "Association" in direct competition to WWF. They never mention each other, but there is a lot of crossover of wrestlers. For example, Lex Luger recently crossed over to WWF from WCW. The WWF's British Bulldog is now the WCW's Davey Boy Smith. The Steiner Brothers seem to cross over frequently. There's not much question that WWF puts on a better show. Nevertheless, American TV is better than British if for no other reason than that there's so much -more- of it. We have three advertiser-sponsored (free to the viewer) networks providing a prime time schedule seven days a week. Then the new Fox network fills prime time on the weekend and one or two weekdays (much like London Weekend Television, I suspect). Fox has some good shows, like "Simpsons" and "Married with Children", both of which contain much that is not "politically correct."
Then there are the independent and cable stations, broadcasting the best TV from previous years, like "Rockford Files", "The Waltons", as well as new syndicated shows; The best examples of successful syndicated shows are "Star Trek: The Next Generation" and its spin-off "Star Trek: Deep Space Nine" both of which have some elements, at least, that should appeal to Extropians. Well, I guess I've rambled enough too. -- edgar@spectrx.saigon.com (Edgar W. Swank) SPECTROX SYSTEMS +1.408.252.1005 Cupertino, Ca ------------------------------ Date: 28 Jul 1993 19:05:58 -0400 (EDT) From: Mark Sulkowski Subject: TV: Babylon-5 I sent an email message recently to Mr. Straczynski of the soon-to-be television sci-fi show Babylon-5. I asked him two things: (I'm paraphrasing) Q: Have they contacted sci-fi author Vernor Vinge about writing scripts for Babylon-5? A: They have not yet had any contact with him. (No mention of whether there were any plans for this.) Q: Would "profit" be an evil word in B5 as it has become in ST:TNG? Would profit be viewed negatively at all? A: B5 is not in the TNG universe. B5 isn't bound by it. There is nothing wrong with profit in the B5 universe. The politics of profit is treated entirely differently. Yeah! I can't wait! * . ====\\. ~ //==== || \\ ~ . *// || || \\ * // || || \\.~// || || \\// || || Mark \/enture || ==================== ------------------------------ Date: Wed, 28 Jul 93 17:10:46 -0700 From: davisd@nimitz.ee.washington.edu Subject: Cryonics & Pascal's Wager > This theoretical/theological analysis is not changed by > cryonics, or vice versa, since one can both believe in God and sign > up for suspension, and cryonics' payoff, while potentially large, > is still finite. In practice, the two memes tend to compete for > a similar niche. > > > Nick Szabo szabo@techbook.com Although signing up for cryonics and believing in heaven raises the amusing scenario of being yanked back from heaven when your body is reanimated. Or will your "soul" remain in heaven, and some other soul fill your body? Or will your reanimated body just be soulless? Or will God know if you're going to be reanimated in the future, and so leave your soul to wait in your body until that time? Actually, if I remember correctly, all these scenarios only hold for the 20th century form of instant gratification heaven, or for reincarnation. Isn't the old style heaven something which only comes at the end of time, with in fact your soul hanging out in your body until Gabriel(?) blows his horn? Gotta decide on which heaven you believe in before you calculate your utility for cryonics. Cryonics definitely goes with the old style heaven, since I'd rather live til the end of time than just wait in a hole. If you believe in reincarnation, it would seem that the choice is whether to switch bodies or not. If you believe in the instant gratification heaven, you'll have to figure out just how the soul business works with reanimation. So many brands, how does one choose? Buy Buy -- Dan Davis ------------------------------ Date: Wed, 28 Jul 93 20:35:47 EDT From: fnerd@smds.com (FutureNerd Steve Witham) Subject: Wage Competition (LONG) [This post is too long but cleared up some of my thinking. I've noticed that posts that people want to apologize for are often the best ones, so maybe there's something worthwhile in it! Lazy people: read the end (starting at ***).
] Mike Price sez- > I don't accept that we'll have to fall back on taking the genome to > pieces to design the architectures of the AIs, Me either (I only meant get good neuron-emulations running; I see this as the second *easiest* way to get AI), but someone asked why I thought getting AI going might be easier than programming it to be unselfish. Uploading is an example. Studying neurons is easy compared to designing unselfishness. > but even if we did there > are plenty of examples to draw upon where an animal submerges its own > ego to the tribe / hive / family. And humans do it all the time > (nationalism, tribalism, many forms of statism, parentalism etc). In the case of family and hive, this is genetic self-interest. It's a valid point that the genes have managed to program goals into intelligences. But the genes had an easier job (design a robot slave for me, "me" allowed to evolve) than a hypothetical robot-slave-making-CAD system would have (design a robot slave for a fixed someone else), and they took billions of years at it. In the case of tribe and nation, I think these are habits that have been beneficial to the individuals given the existence of tribes and nations around them. They are not programmed to obey but fairly correctly surmise their selfish interests in the context (of coercion or trade or tit-for-tat or whatever the social situation). Social situations can be tricky about fooling individuals into cooperating beyond their true interest, but I tend to think this works less well the smarter the individual. [i (> >) said-] > > But the more general answer is this. All the examples of real > > intelligence and life we have, are selfish. Specifically, they are > > geared to spread their own programs. > > But not at the conscious level, which is what is relevant here. ... > [examples, what it's like for the individual] I recognize that "unselfish" motives can evolve within a larger, selfish, genetic or memetic system, and that they seem "selfish" to the individuals. What seems important to me is that evolving selflessness within selfishness seems to work given lots of time, selflessness by design is even harder, and self*ish*ness by design is the easiest of all and most likely to happen first and be the most economically competitive in any case. > ...Or the parent who nurses a child. Or > the kamikaze pilot. Or any sexual-based behaviour. It's _not_ selfish > behaviour. In humans, I think sex and child-rearing and heroism are mixed with true selfishness in interesting ways. A social (human-ecological?) situation evolves where the easiest way to get certain psychological needs met is through sex or raising kids or even training to be a hero. The genes (and social strux) manage to prevent learning from running downhill in certain areas, but wherever possible, they work by aligning their interests with the downhill tendencies of learning. By the way, designing and implementing a society that would produce robot slaves also seems hard to me. > Those humans or self-willed AIs / uploadees who form a stable slave-hive > structure will be stable and out-compete those entities who are unable > to form co-ordinated hive structures. It's been a very successful > strategy already. I think social and market structures form pretty well among purely selfish individuals. But what we were talking about was a robot-slave/human-master hybrid. I see this as a forced, non-viable form compared to the kinds of societies you're talking about that are free to evolve into whatever works.
> > The intelligence we know works *because* it's selfish. Selfless > > intelligence sounds (to me) like a much, much harder thing to create. > > For the reasons above, this seems an entirely natural form for > intelligence to take. I agree. Perfectly natural given time to evolve, and in a context where the goal-setter itself is evolving. I think the more narrowly selfish individual AIs will evolve much faster, and slaves designed for preassigned, not-evolved-in-the-symbiosis human masters, won't make it. > > Stupid, too-demanding, inflexible children > > can be pretty galling, I imagine. > > Not if you _love_ them. Love is blind, right? :^l I think what's usually going on is, we're smarter than we know. Pure selfishness is still at work in human love, and there are limits to what people will put up with. We can't assume that something really smart can be designed that will always love an *arbitrary* master ahead of itself; we have no example like that. (I won't mention imprinting, heh, heh.) > > Also, employees of most corporations are free to leave. > > As will the AI slaves, if they _want_ to. The rational thing for them > to do (ie course of action that is consistent with their goals = make > them happy) is to stay and work themselves, if necessary, to death. Well, you're saying they're designed not to want to leave. I meant that the system of corporations works when the workers have goals of their own that *don't* necessarily coincide with their employers'. The system produces corps that rise and fall; none is guaranteed to work (but what we notice, of course, are the current survivors). The system would fail if we tried to guarantee the survival of the status quo. (Actually, we do, and it bogs down to that extent.) A human-controlled corporation (this was somebody else's premise, not mine) would be as successful as a human playing the stock market--not very, I think--so nobody in their right mind would want to work for it. I don't think you can define slaves who would as sane and likely to succeed. > > I guess you can keep a slave by holding him back > Love will hold them back. Money can't buy you love!-) *** Motivation is not an arbitrary thing. There are some jobs that would be much harder to evolve beings to do well, than others. Motivation has a logic--it is a logic. It's not just overall goals, it's a pattern of moment to moment perceptions that works to guide actions. Goals and the systems that achieve them aren't sold separately, they evolve together continuously and are fitted to each other. (General problem-solving (working to an arbitrary goal described in language) is a hobby of some people who pick or agree to the problems they're going to solve. These people are pickier, finickier, more individualistic about what problems they'll work on, the better they are, and they do them because of how they fit into other goals. General problem solving is not the model for problem solving in general(!). Selfish evolution is the model for how and why real problem solving happens, and even what a "problem" is. A master/slave combo has a kind of inherent unhappiness.) > >, but you can't have an arbitrarily capable and profitable slave. > Why not? You can't design something within arbitrary constraints and guarantee that it will succeed nearly as well as anything as expensive but without the constraints. I'll try to stop talking about what I think won't work, except to answer questions; other people are already onto ideas of what will and I'm behind! -fnerd quote me ------------------------------
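The "evolutionarily stable strategy" language both posters lean on can be made concrete with a toy iterated prisoner's dilemma (my own illustration, not a model either of them proposed): an unconditionally selfless player is stripped bare by a defector, while tit-for-tat -- selfish but reciprocal -- limits the damage yet still cooperates with cooperators.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_cooperate(opponent_history): return 'C'   # pure selflessness
def always_defect(opponent_history):    return 'D'   # pure exploitation
def tit_for_tat(opponent_history):                    # reciprocal selfishness
    return opponent_history[-1] if opponent_history else 'C'

def play(s1, s2, rounds=100):
    h1, h2, total1, total2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)       # each strategy sees the other's past moves
        p1, p2 = PAYOFF[(m1, m2)]
        total1, total2 = total1 + p1, total2 + p2
        h1.append(m1)
        h2.append(m2)
    return total1, total2

print(play(always_cooperate, always_defect))  # (0, 500): the selfless player is milked dry
print(play(tit_for_tat, always_defect))       # (99, 104): retaliation caps the exploitation
print(play(tit_for_tat, always_cooperate))    # (300, 300): reciprocators still cooperate
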
End of Extropians Digest V93 #209 *********************************