33 Message 33: From extropians-request@gnu.ai.mit.edu Sun Aug 1 00:06:11 1993 Return-Path: Received: from usc.edu by chaph.usc.edu (4.1/SMI-4.1+ucs-3.0) id AA26654; Sun, 1 Aug 93 00:06:08 PDT Errors-To: Extropians-Request@gnu.ai.mit.edu Received: from panix.com by usc.edu (4.1/SMI-3.0DEV3-USC+3.1) id AA06070; Sun, 1 Aug 93 00:05:58 PDT Errors-To: Extropians-Request@gnu.ai.mit.edu Received: by panix.com id AA17903 (5.65c/IDA-1.4.4 for more@usc.edu); Sun, 1 Aug 1993 03:02:31 -0400 Date: Sun, 1 Aug 1993 03:02:31 -0400 Message-Id: <199308010702.AA17903@panix.com> To: Exi@panix.com From: Exi@panix.com Subject: Extropians Digest X-Extropian-Date: August 1, 373 P.N.O. [07:02:28 UTC] Reply-To: extropians@gnu.ai.mit.edu Errors-To: Extropians-Request@gnu.ai.mit.edu Status: RO Extropians Digest Sun, 1 Aug 93 Volume 93 : Issue 212 Today's Topics: [3 msgs] CHAT: Reanimation chores & posthuman mail filters [1 msgs] Cryonics Organizations. Was Who is signed up for cryonics [1 msgs] HEX: more concerns [1 msgs] HeLa cells [1 msgs] Intellectual Property, ppl, etc. [1 msgs] MISC: MEDIA: more TV stuff, ADIOS: gotta leave the list [1 msgs] Meta: Brief List Outage [1 msgs] POLI: help with a school choice proposal [1 msgs] Searle and Starr [1 msgs] Searle's Chinese Torture Chamber Revisited [1 msgs] TECH: encrypted computer? [1 msgs] Administrivia: No admin msg. Approximate Size: 51408 bytes. ---------------------------------------------------------------------- Date: Fri, 30 Jul 93 22:06:04 PDT From: hfinney@shell.portal.com Subject: Searle's Chinese Torture Chamber Revisited Since Tim Starr is the only person I've encountered who actually supports Searle's Chinese Room argument, I'd like to get his opinion on one specific argument that was brought up in the Journal of Behavioral and Brain Sciences debate. (I think this argument was invented by Richard Hoagland.) He suggested the following hypothetical. Normally our brains are composed of neurons which interact via neurotransmitters. 
Suppose someone (a Chinese woman, in fact) has a problem in which her neurotransmitters fail, leaving her neurons unable to communicate, so her brain does not function correctly. Technology is developed which will correct for this deficiency, via a "demon" which moves from neuron to neuron, very quickly, activating each neuron in exactly the manner in which it WOULD have been activated had the neurotransmitters been operating. The demon moves so quickly that the neurons are able to return to their normal firing pattern, and the Chinese woman is able to move and talk normally. Is this person conscious? Does the answer change if the demon itself is conscious? As I recall (and I haven't read his essay for a year or so), Searle surprised many by announcing that the Chinese woman actually WOULD be conscious in this situation, because the demon was merely supplying the stimulation which would have been supplied anyway based on "causal forces" (or some such). This is surprising because this case is meant to be a close analog to Searle's Chinese room, with the demon taking the role of the human operator in Searle's example. It would seem that if one will deny the consciousness of the Chinese woman in Searle's example, a similar argument would force the denial of the consciousness in the second example. I'd be curious as to whether Tim agrees with Searle's answer to this example. Thanks - Hal Finney hfinney@shell.portal.com ------------------------------ Date: Sat, 31 Jul 1993 07:25:41 -0400 (EDT) From: Harry Shapiro Subject: POLI: help with a school choice proposal a conscious being, andreag@csmil.umich.edu wrote: > > I need some critical review of ideas, and this list seems like > an excellent place to get it. > school-only choice. What does such a system need to look like > so it works even though private schools aren't allowed? Take a look at Bionomics and/or give Michael a call 415 454-1000 (Institute). There is a section in his book about this type of school choice. 
/hawk ------------------------------ Date: Sat, 31 Jul 93 11:56:39 GMT From: starr@genie.slhs.udel.edu Subject: The Searle Argument >From: fnerd@smds.com (FutureNerd Steve Witham) >Subject: Searle's Chinese Torture Chamber Revisited > >No one is proving that computing processes are all that's necessary >to produce the phenomena of mind. We are only showing that Searle >fails badly at *dis*proving it, which is what he intends to do. > >I think all any of us can say is that it looks like a perfectly good >model. I honestly can't, for reasons I've given. >From: extr@jido.b30.ingr.com (Craig Presson) >Subject: Searle's Chinese Torture Chamber Revisited > >[Tim's long multi-message response deleted] 'Twas entirely too long, I confess. >I should point out that Tim concatenated my comments to Perry's, thus >putting my words in Perry's mouth (PM would say they tasted OK in this >case [<-- self-referential joke]). Yes, I regret the mistake. >I decided not to respond in detail, since Tim* admits that he doesn't >need: > >-- falsifiability as a criterion of truth Yes, I don't see why axioms ought to be excluded from truthfulness. >-- a definition of "mind" that makes sense That's not what I meant. I do need a definition of "mind" that makes sense. The one(s) that have been proffered don't make any sense to me at all. >and because arguments with open premises like this are endless. Perhaps. I'm rather shocked at how the premises that your (plural) arguments depend upon are snatched up with so little apparent care for their soundness. Why? Am I missing the part where they are established as sound? Is this step just rejected? Do you think you can reach true conclusions from unsound premises? >>From: rjc@gnu.ai.mit.edu (Ray) > >starr@genie.slhs.udel.edu () writes: >> >All physical laws are causal and well-stated. >> >> Meaning? > > Meaning a computer can simulate them. What I meant was: By what standard are they "well-stated"? Doesn't "physical" mean "natural"? 
You seem to imply that all physical laws must be stated in the form: If X, then Y. Have I got this right? If so, then what is the status of laws which can't be stated in this form, such as metaphysical axioms? > Biology is grounded firmly in chemistry, and chemistry can be completely >derived from quantum mechanics, thus biology is mechanical. A cell >is a machine, a virus is a machine, and a neuron is a machine. (you may >argue that biology is not derived from chemistry, but I'd challenge you >to provide a shred of evidence that it is not.) "Derived" and "Grounded" don't seem adequate to explain how chemical reactions can cause a machine to become an organism. How is this done? > So all that we have left is 'mental laws'. Either the "mental process" >(and by calling it a "process" we subject it to computational simulation) >is governed by brain chemistry/physical law or it is not. If it is >not, we are left with a ghost in the machine, the supernatural. Calling it supernatural presumes the mechanical view of nature. Why must nature be mechanical? It seems possible that mind could be natural, physical, but non-mechanical, in which case it wouldn't be "supernatural," but "super-mechanical." Mind seems to lack an essential characteristic of machines: extension. It seems to have location - my mind is "in" me, yours "in" you - but it doesn't seem to have extension. My understanding of post-Cartesian physics is that its objects of study are things with extension. Have I got this right? If so, then how can physics study non-extended mind? (Aside: this distinction was the purpose of the thought-exercise of the Scholastics that so many make light of: "How many angels can dance on the head of a pin?" Angels were supposed to be beings of pure thought, lacking extension, too, only having location.) Searle hits the nail on the head in his reply to the paper Ray kindly posted. 
The foundational issue here is ontological, metaphysical, a question of what mind consists of, not the epistemological question of how we can know other minds. It all seems to go back to Descartes' bifurcation of reality into the mechanical and the mental, the extended and the non-extended. You (plural) seem to have accepted his exclusion of the mental from the same part of reality as the mechanical, and simply rejected the mental. Why? Why not reject Descartes' metaphysical foundation of the mind-body dichotomy, and include both mind and machine in the same part of reality, rather than insisting that mind must be machine, or that if it isn't, then it's impossible? >To claim that it could >deviate would be the same as claiming that human bodies aren't made out of >real matter that behaves according to the known physical laws! What physical law makes me type these words? What physical law makes us disagree about this issue? Human bodies behave in many ways that seem inexplicable by mechanics. They don't behave in ways that are contrary to mechanics - as Ernst Mayr puts it, living organisms are teleomatic - I admit. I don't mean to suggest that they can. It just seems to me that mechanics doesn't include mental "objects." >From: extr@jido.b30.ingr.com (Craig Presson) > >Ray, thank you for explaining Science, The Universe, and Everything to >Tim so patiently. I'm also grateful for Ray's patience, but I haven't come up against anything in the explanation of any of these three things that I didn't already understand. I do think we're getting closer to what I don't understand, however, which seems to be the justification for the mechanical view of the universe. >From: dkrieger@Synopsys.COM (Dave Krieger) >Subject: My last word on Searle > >This is my last contribution to this discussion; Tim has convinced me that >he's being obtuse and deceptive on purpose. 
He claims to love truth, but >is employing shoddy rhetorical devices and willful distortion in an effort >not to have to think about what others are actually saying. No, no, no. Why do you hurt me so, Dave? All I'm trying to do is understand the reasons why you (plural) take the position you do, I assure you. >>>If the agent doesn't know what the conversation is even about, >>>then he is obviously not the one doing the communicating! Come on, you're >>>sharper than this, Tim. >> >>You're better than this, too, Dave. The agent does indeed know that the >>conversation is about the expressed symbols. All he doesn't know is what >>they mean. Thus, he is doing at least part of the communicating. > >He is a communication channel. He is not participating in the conversation >any more than a parrot or a printing press. I disagree. Parrots and printing presses don't know why they repeat words and phrases or print them. My hypothetical agent does know why he transmits a signal. His behavior is purposeful, unlike beasts and machines. >>>It seems unlikely that mental agents program themselves. >> >>Why not? Because you can't imagine such a thing? > >This is the reason I'm getting out of this debate, Tim; you are >deliberately and dishonestly quoting me out of context. 'Twas deliberate, but I meant no dishonesty. >I explained my >rationale for that statement in the rest of that paragraph: > >>>It seems unlikely that mental agents program themselves. They program each >>>other, and they are programmed by outside stimuli, but (as Minsky points >>>out in Society of Mind) agents that programmed themselves would be too >>>prone to positive feedback to be evolutionarily stable. > >Tim, you have crossed the line into dishonest and deceptive. You're mistaken. I've never, ever tried to deceive you or anyone else in this thread or on this list for as far back as I can remember. 'Twas for another reason that I omitted your "rationale": I couldn't articulate my objection to it yet. 
Now, I think I can. You put it in terms of likelihood and probability, not certainty. Thus, you seemed to leave room for the possibility that mental agents do, indeed, self-program. If mental agents are self-programming, as I, Searle, and even Steven Harnad, the author of the critique of Searle's argument that Ray posted, seem to hold, then they are radically different from computers in this respect. As Harnad puts it in section 3.1 of his paper: "whereas the intentionality of human symbols is "intrinsic," that of machine symbols is `derived' from or parasitic on human intentionality. Nonhuman symbols only have meaning if they are so interpreted by people; otherwise they are just meaningless `squiggles.'" This seems to be along the same lines as the point I was trying to make about self-programming. >I'm not going to >stoop to debate you on this topic any more. "Debate" is much too strong a word for me. "Discuss" is what I'd intended to do. I'll miss you. I regret appearing dishonest, and I'll try not to make the same mistake twice. >I think it was Mark Twain who >said, "Never wrestle with a pig. It gets you dirty, and the pig enjoys >it." And on top of it all, you invoke my ancestor against me. Sigh. Tim Starr - Renaissance Now! Assistant Editor: Freedom Network News, the newsletter of ISIL, The International Society for Individual Liberty, 1800 Market St., San Francisco, CA 94102 (415) 864-0952; FAX: (415) 864-7506; 71034.2711@compuserve.com Think Universally, Act Selfishly - starr@genie.slhs.udel.edu P.S., I posted my apology publicly to try to dispel the impression that others besides Dave may have gotten of my misconduct, but I'm going to send him a private note, too. 
------------------------------ Date: Sat, 31 Jul 93 15:33:10 GMT From: starr@genie.slhs.udel.edu Subject: Reply to Hal >From: hfinney@shell.portal.com >Subject: Searle's Chinese Torture Chamber Revisited > >Since Tim Starr is the only person I've encountered who actually supports >Searle's Chinese Room argument, Really? Wow. I always seem to get into situations where I'm arguing against everyone in the room. Does this mean that I'm arguing against everyone else in the Chinese Room? >He suggested the following hypothetical. Normally our brains are >composed of neurons which interact via neurotransmitters. Suppose someone >(a Chinese woman, in fact) has a problem in which her neurotransmitters >fail, leaving her neurons unable to communicate, so her brain does not >function correctly. Technology is developed which will correct for this >deficiency, via a "demon" which moves from neuron to neuron, very quickly, >activating each neuron in exactly the manner in which it WOULD have been >activated had the neurotransmitters been operating. The demon moves so >quickly that the neurons are able to return to their normal firing pattern, >and the Chinese woman is able to move and talk normally. > >Is this person conscious? I'd say so. >Does the answer change if the demon itself is >conscious? Nope. >As I recall (and I haven't read his essay for a year or so), Searle >surprised many by announcing that the Chinese woman actually WOULD be >conscious in this situation, because the demon was merely supplying the >stimulation which would have been supplied anyway based on "causal forces" >(or some such). We seem to have very similar approaches to the subject. I think I'll drop him a piece of e-mail, and try to read more of his stuff. >This is surprising because this case is meant to be a close analog to >Searle's Chinese room, with the demon taking the role of the human operator >in Searle's example. I guess I can see how this could be so. 
>It would seem that if one will deny the consciousness >of the Chinese woman in Searle's example, a similar argument would force >the denial of the consciousness in the second example. The difference is that we know that brains cause consciousness, somehow. But since we don't know how, we don't know how anything else could. This doesn't mean that it's impossible, but I think Searle's position is stronger than you might think. >I'd be curious as to whether Tim agrees with Searle's answer to this example. I decided it very quickly, without even having to read Searle's conclusion. It follows quite directly from the premises we seem to share. >Thanks - You're quite welcome! Tim Starr - Renaissance Now! Assistant Editor: Freedom Network News, the newsletter of ISIL, The International Society for Individual Liberty, 1800 Market St., San Francisco, CA 94102 (415) 864-0952; FAX: (415) 864-7506; 71034.2711@compuserve.com Think Universally, Act Selfishly - starr@genie.slhs.udel.edu ------------------------------ Date: Sat, 31 Jul 93 09:26:04 PDT From: edgar@spectrx.saigon.com (Edgar W. Swank) Subject: AI's as dangerous slaves FutureNerd Steve Witham said: i mentioned the idea of the slave holder having control of the slave-AI's pleasure center (with a button). Hans Moravec sez-- > > No, no the conditioning system is a program inside the robot. > The robot feels good when its psychology module says its owner > is happy. Okay, but who or what conditions the psychology module to give the right answers? Or how do you program it with a suitably subtle idea of what the owner wants? ... I agree with fnerd that "owner happiness" is a dangerous inner goal. I would prefer a compulsion to FOLLOW ORDERS from the owner. Some foolhardy owners might then order their android to "make me happy"; at least they could perhaps countermand the order before any irreversible damage was done. The "follow orders" programming might seem similar to fnerd's button. 
And I agree there is a danger that a smart android might try to manipulate its owner into giving just the orders the android wanted to follow. We see this occurring commonly now with wives & children in cultures (e.g. Asian) where at least outward subservience to the dominant male is required. But it's not clear that the android, unlike the wife or child (or slave in historical periods), has a real independent self-interest, at least beyond rudimentary self-preservation (required to avoid constant replacement and/or repair). But "follow orders" programming gives us a tool beyond the simple button. We can just -ask- an android if it has tried to manipulate us to give it orders we might not have otherwise given. An example: Me: OK, fess up, android. Did you influence me to change my behavior in any non-obvious ways this week? Android: Yes, sir. Last Thursday, that "spontaneous" sex play I initiated was really to distract you from eating a second dessert. Me: Ah, yes! Well, I suppose that was OK, keep doing that. Anything else? Android: Yes, sir. I also used several subtle psychological ploys to influence you to buy me that expensive dress and jewelry and take me to that expensive night club to show me off. Me: I'll be damned! And I thought that was all my idea! That's a dangerous area. Be sure to ask my explicit permission before using any such subtle ploys again! Peter C. McCluskey said: Whether a distinct psychology module is feasible depends very much on how intelligence is achievable. If the classical AI notion of designing an intelligent system will be the first method used to create human-equivalent robot minds, then it is not too hard to imagine keeping them enslaved. I claim that two other approaches show significantly more promise of producing the first AIs, connectionism with a fair amount of evolutionary programming, and uploading. ... 
The connectionist approach doesn't automatically rule out a conditioning system, but the effort required to accomplish it would be dramatically higher, and I can't think how you would go about verifying that it worked as intended. My own view would be a hybrid system. Use connectionism for lower-level functions that logical AI doesn't do well, like visual pattern recognition and recognition of spatial relationships; balancing, walking, & running on two legs; speech recognition & natural language processing, etc. But use a logical program at the highest level to coordinate and direct the lower levels. Be sure the only way the lower connectionist levels have to communicate with each other is through the upper logical level. The basic problem is that the primary cause of intelligence under this approach consists of system-wide rewards to independent thought, which are very different from and not entirely compatible with rewarding servility. Well, at least the "independent thought" needs to be at a lower priority than servility. The incompatibility is easily removed by just ordering the AI to "think independently" (with appropriate boundaries for the situation). Uploaded intelligences would clearly not allow themselves to be enslaved. This is not as obvious as it seems. What appears to be abject oppression from the outside could be a cushy job from the inside. What if we start with a poor person with a terminal disease; he can't even afford medical care, much less cryonics. We upload him into a rich virtual reality, with luxurious virtual surroundings, virtual servants, etc. All he has to do is, for a few minutes a day (from his point of view), direct the operations of the android slave; he is expected to make the android act appropriately servile. The rest of the day is spent in glorious virtual luxury and pleasure. 
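[Editor's note: the hybrid scheme described above -- connectionist modules below, one logical program on top, with all inter-module traffic forced through the top level -- can be sketched in a few lines. This is only an illustrative toy under stated assumptions; the class names and interfaces are invented, and the "modules" are stubs rather than real networks.]

```python
# Toy sketch of the proposed hybrid architecture. Lower-level
# "connectionist" modules may only communicate via the logical
# coordinator, never directly with each other, so the coordinator
# can inspect or veto every message (the control point the post wants).

class Module:
    """Stand-in for a trained connectionist subsystem (stub)."""
    def __init__(self, name):
        self.name = name

    def process(self, signal):
        # A real module would run a network; here we just tag the signal.
        return f"{self.name}({signal})"

class LogicalCoordinator:
    """The single top-level logical program."""
    def __init__(self):
        self.modules = {}

    def register(self, module):
        self.modules[module.name] = module

    def route(self, source, target, signal):
        # Every inter-module message passes through this one method,
        # where rules (e.g. "servility outranks independent thought")
        # could be enforced before delivery.
        processed = self.modules[source].process(signal)
        return self.modules[target].process(processed)

coord = LogicalCoordinator()
coord.register(Module("vision"))
coord.register(Module("speech"))
print(coord.route("vision", "speech", "red ball"))
# vision output reaches speech only by way of the coordinator
```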
He has every motive to please the android's owner; otherwise he is likely to be flushed out of the android to oblivion, or perhaps just to a drab and dreary virtual reality, and replaced with another willing candidate. The same thing will happen, of course, should he try to avoid his obligation by killing or harming the owner. This scenario does lead us back to fnerd's button-problem. An uploaded AI will definitely have an independent self-interest and (since it thinks faster) more effective intelligence than its owner. It will have no compulsion to tell the truth. So the exposure to manipulation is there. -- edgar@spectrx.saigon.com (Edgar W. Swank) SPECTROX SYSTEMS +1.408.252.1005 Cupertino, CA ------------------------------ Date: Sat, 31 Jul 93 09:25:38 PDT From: edgar@spectrx.saigon.com (Edgar W. Swank) Subject: Cryonics Organizations. Was Who is signed up for cryonics Tony Hamilton asked: The main problem I have with Alcor (not even knowing much about them), is that they appear to have a virtual monopoly on the service (or am I wrong there?). I am comforted that they do _not_ have a monopoly on the technology, but it would be nice to have some competition with regards to the actual suspension services. Yes, Tony, you are wrong there. There are at least two or three other organizations offering cryonics services: American Cryonics Society (ACS) P.O. Box 1509 Cupertino, CA 95015 (408) 734-4200 or 255-1763 FAX (408) 734-4441, 973-1046 Email: acs@spectrx.saigon.com (American Cryonics Society) Supporting membership, including American Cryonics and American Cryonics News $35./yr. USA, $40. Canada & Mexico, $71. overseas (Note: The Immortalist (below) includes American Cryonics News.) I'm a Founder, Governor, and suspension member of ACS, so naturally I think they have the most to offer. ACS has traditionally relied upon Trans-Time as the principal provider of cryonics services to its members. 
Currently, relations between the two organizations have deteriorated, so this may not be the case in the future. ACS has taken the position of offering the maximum choice to its members by forming relationships with all service providers who would cooperate. Currently ACS has a contract with CI for suspension, storage, or both. ACS is run by its board of governors, who are elected by the members eligible to participate in its suspension program. The Cryonics Institute does its own suspension and caretaking of patients. Cryonics Institute (CI) 24443 Roanoke Oak Park, MI 48237 (313) 547-2316 & (313) 548-9549 The Immortalist Society, which has the same address and phone number, publishes The Immortalist, monthly, $25./yr. USA, $30./yr. Canada and Mexico, $40./yr. overseas. Airmail $52. Europe, $62. Asia or Australia. A gift subscription ($15./yr. USA, $25. outside USA) includes a free book ("The Prospect of Immortality", "Man Into Superman", "Engines of Creation", or "Living Longer, Growing Younger"). CI has done a lot of research in storage technology. Because of that, and because it has benefited from donated money and land, it currently offers the lowest-cost suspension services, by far. Their cost for whole-body is less than Alcor's charges for neuro. However, CI doesn't offer a "hi-tech" perfusion service, but rather a primitive service using mortuary-level equipment. On the other hand, there's no proof that use of hi-tech, medical-level equipment a la Alcor or Trans Time actually increases your chances of survival. Use of medical-level equipment may be worse than mortuary-level in the hands of people not well trained in its use. For example, the last Alcor patient, who was perfused with bleach which ate through a heat exchanger, would have probably been better off with a CI perfusion. 
Oakland, CA 94603 510-639-1955 Email: quaife@garnet.berkeley.edu Until recently, TT has been embarrassed by its inability to handle neuro suspension of AIDS patients. However, construction is currently under way on an AIDS-capable operating room which, I'm told, should be operational "in a few weeks." When that's completed, TT should be a world-class provider of perfusion services, using hi-tech "medical-level" equipment and proprietary cryoprotective fluids. However, TT is not cost-competitive with CI in storage services. I recommend that anyone contemplating arrangements for his own cryonic suspension contact all the above organizations; ask for details and make up your own mind. There's a mailing list and archive devoted to cryonics; to join the list, send a request to Cryonet -- edgar@spectrx.saigon.com (Edgar W. Swank) SPECTROX SYSTEMS +1.408.252.1005 Cupertino, CA ------------------------------ Date: Sat, 31 Jul 1993 16:02:03 -0400 (EDT) From: Harry Shapiro Subject: Meta: Brief List Outage At approximately 9:15am Sunday 1 Aug (tomorrow), panix's Internet connection will be unusable for approximately 5 minutes, while Sprint is doing a firmware upgrade on their routers. Sorry for the inconvenience. During this upgrade the list will be disrupted. /hawk -- Harry S. Hawk habs@extropy.org Electronic Communications Officer, Extropy Institute Inc. The Extropians Mailing List, Since 1991 EXTROPY -- A measure of intelligence, information, energy, vitality, experience, diversity, opportunity, and growth. EXTROPIANISM -- The philosophy that seeks to increase extropy. ------------------------------ Date: Sat, 31 Jul 93 15:02:24 CDT From: ddfr@midway.uchicago.edu Subject: Intellectual Property, ppl, etc. This thread seems to involve at least two different issues, and probably more. The two I am thinking of are: 1. Would intellectual property be protected in an anarcho-capitalist world? 2. 
Would a world without intellectual property protection be substantially less (or more) attractive than one with? I think I have posted on topic 1 before, but it would have been a while back and I may be misremembering. In my view, intellectual property protection is not likely under anarcho-capitalism, for the following interesting reason. A. One of the attractive features of anarcho-capitalism/ppl is that it tends to generate efficient legal rules. The reason is that the rules are being produced as private goods on a market. If there is some change in the rules applying between customers of agency A and customers of agency B which produces net benefits for the customers, then the two agencies will find it in their interest to agree on the change--possibly with a side payment if one set of customers is losing and the other gaining. B. Like other markets, this market will sometimes fail to produce an efficient outcome due to market failure (public goods, externalities, transaction costs, ... ). In this particular market, failure can be expected in cases where a change in the legal rule applying between A and B produces a net loss for them but a net benefit when we take into account the effect on C, D, E, ... of the change in the rule between A and B. Anarcho-capitalism, at least of the sort I have described, is built on pairwise contracts, and workable in part for that reason. A global contract in which everyone in the world agrees to a change in legal rules because it produces net benefits seems implausible because of the transaction costs of getting unanimous agreement. In my system, the pairwise contracts are actually negotiated by protection agencies for lots of customers at once, but conceptually it is as if each pair of individuals negotiated the legal rule applying between them. C. Consider the case of intellectual property. The question is whether B is obliged to respect A's intellectual property in something A has written, invented, or whatever. 
Suppose we start in a world where B has no such obligation, and change to one where he has the obligation. While we do so, everyone else's mutual obligations (including other people's obligations with regard to A and B) are held fixed, since those legal rules are unaffected by the agreement between A and B. The change has four effects: 1. A is richer and B poorer by whatever licensing fee B pays A for the use of any of A's intellectual property B chooses to use. 2. B is poorer and nobody is richer as a result of B choosing not to use some of A's intellectual property that A happens to have priced at more than its value to B. This term would be zero if A could engage in perfect discriminatory pricing with regard to B, but that is implausible. 3. A and B are poorer by whatever they spend on negotiating and enforcing the licensing agreement. 4. A and B are richer as a result of the increased amount of intellectual property that A produces, due to his being able to collect license fees from B. Consider how these terms add up in calculating the net gain (or loss) to A and B of the legal change. 1 is a wash--B loses what A gains. 2 and 3 are net losses. 4 is a net gain. Unfortunately, 4 is a tiny gain. The change increases by 1 the number of people obligated to respect A's intellectual property rights, which causes only a very slight increase in the amount of intellectual property A produces. And only that part of the benefit of the increase that goes to B gets counted in term 4. The benefit that goes to C, D, ... who do not respect A's property right is an external benefit due to the legal change, so does not get included in the calculation. The benefit to X, Y, and Z who are already respecting A's property rights is internal, since it is transferred to A via licensing fees, but it just balances the cost to A of producing the new item of intellectual property--that is why he would not produce it until the legal change added the additional licensing fee from B. 
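[Editor's note: the four terms above can be made concrete with a toy calculation. This is a hedged sketch, not Friedman's own model: the function and every number in it are invented purely to illustrate why the pairwise change fails even when it is socially efficient.]

```python
# Toy model of the four terms in the pairwise A-B legal change.
# All figures are hypothetical.

def net_gain_to_pair(forgone_use_loss, transaction_cost,
                     new_ip_benefit_to_B):
    term1 = 0.0                    # 1: a wash -- B loses what A gains
    term2 = -forgone_use_loss      # 2: B forgoes overpriced IP (net loss)
    term3 = -transaction_cost      # 3: negotiation/enforcement (net loss)
    term4 = new_ip_benefit_to_B    # 4: tiny -- only B's share is internal
    return term1 + term2 + term3 + term4

# Suppose the extra intellectual property A would produce is worth 500
# in total, but 999 other people have not agreed to respect A's rights,
# so only 1/(1+999) of that benefit is internal to the A-B pair.
n_others = 999
benefit_to_B = 500.0 / (1 + n_others)   # = 0.5

print(net_gain_to_pair(forgone_use_loss=5.0, transaction_cost=10.0,
                       new_ip_benefit_to_B=benefit_to_B))
# negative: A and B reject a change that is efficient for society
```

With these (made-up) numbers the pair's net gain is 0 - 5 - 10 + 0.5 = -14.5, so the efficient change does not happen, which is exactly the market-failure structure the argument describes.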
I believe it follows from this argument that the costs of the legal change (2 and 3) are included in their entirety in the calculation, but only a small fraction of the benefit (equal to 1/(1 + the number of people who have not agreed to respect A's property rights)). So unless benefits are enormously greater than costs or the population is tiny, we would not expect the legal change to occur--even if it is efficient. D. Two conclusions follow. First, anarcho-capitalism of the sort I have described will probably produce legal rules without intellectual property, even if intellectual property is on net desirable. Second, this conclusion is an exception to the general theorem that anarcho-capitalism produces efficient legal rules. As for topic 2 (Is intellectual property desirable? very desirable? vital?), I am less certain. There are a lot of good arguments for and against the desirability of intellectual property. My own guess is that although it may well be desirable, it is not essential. There are quite a lot of ways in which ingenious people can get paid for creating new ideas, writings, etc. even in a world in which their rights to control their creations are not legally protectible. So I regard the conclusion above as a flaw in anarcho-capitalism, but not a fatal one. And, of course, alternative legal institutions, such as those we now have, contain much weaker mechanisms for generating efficient law. Even if some form of intellectual property protection is desirable, intellectual property law as it actually exists might well be inferior to a world with no intellectual property protection. David Friedman University of Chicago Law School ------------------------------ Date: Sat, 31 Jul 1993 19:08:33 -0600 (MDT) From: Stanton McCandlish Subject: MISC: MEDIA: more TV stuff, ADIOS: gotta leave the list I happened to be watching tv by accident (at a friend's house; the tv was on). 
Some observations: One possibly important news event: In Sussex, NJ, the citizenry had no cops besides state cops. So a private security firm started the Sussex Police Dept., a private police force. The people were quite pleased, because they actually did their jobs and were patrolling constantly (apparently they had some sort of crime problem there, drugs I think, despite the size of the town). The "real" state cops, who weren't even there most of the time, got irked, and had a judge disband the private police force.

I have mixed reactions about this. For one thing it's a big win, in the sense that a non-state-controlled police force was in operation, and succeeded rather well for a while. On the other hand, the court order sets what will likely be a strong precedent. And on the third hand (or foot or whatever), people like me who know damn well that such "police" have no authority whatsoever would have just said "fuck off, rentacop", so their actual effectiveness can be strongly questioned. If any of them had actually shot anyone, they'd be up on murder charges.

Besides that, I must say that Sat. morning cartoons are far, far lamer than they used to be. Most of the Looney Tunes cartoons were actually written for adults, and most of the subtleties are lost on children. The new cartoons have no such material and are all hopelessly insipid. Too bad.

One thing of interest, though, was the commercials. Every other commercial does of course have dinosaurs in it (the best/worst was "B.C. Bikers", tough-looking anthropomorphic dino "action figures" on fanciful motorcycles). Besides this, though, technology is EVERYWHERE. Particularly VR, raytraced animated ads, spaceships, and computers. A lot of the VR isn't VR per se, but general "alternate reality" stuff, where giant kids on skateboards fly over skyscrapers, psychedelic candy radiates from something not unlike a white hole, etc.
The kids growing up watching this are not only going to be eager for VR, but will probably be consummate users of it. Those of you expecting to work with VR in coming years will have some VERY stiff competition in 2 decades. Besides this sort of stuff, there was some real VR, including data-gloves and HMDs, both in commercials and cartoons.

The toons are also considerably more risque than they used to be. Punk, BTW, has been totally co-opted, chewed up, and regurgitated by pop culture. It's more a joke than anything else now. Half the cartoon kids look like goths and straight-edgers. This comes as a bit of a culture shock to one who was a "real punk" back in the 80s.

Probably the most sickening is all the 'recycling' of old characters in totally off-the-wall ways (e.g. Baloo from Jungle Book is a pilot in "TaleSpin", Goofy is a family man with munchkin Goofys, and the Addams Family and Beetlejuice are cartoons now, virtually indistinguishable, and both incredibly stupid). Worse yet is all the "babies": baby Goofys, baby Muppets, etc. etc. It just never ends. I wonder what this says about the consciousness of American kids, who apparently cannot handle a cartoon character that is over the age of 5. Very interesting stuff to just observe. In all I still find it to be despicable propagandizing crap, but oh well.

In other news, I have to keep off the list for a while. It's eating up too much time and disk space. I'd like to thank those of you who were patient with me when I first got here, flammable as I was. And I'd like to thank the list at large for the informative and interesting discussion. I'll be back. Since it seems traditional: ObPartingShot! Though most of you seem more intelligent than my cat (by default; I don't have a cat), the list traffic is getting really huge (yes, I know I've contributed to that).
For about the 5th time, I suggest that the list be split into a "serious Extropian subjects" list, wherein the rule of on-topic is heavily enforced, and an "Extropian chatter" list. People that like to ramble on and on about HEx, Jurassic Park, newage vs. reason, and get into flame wars can do this (and they WILL do it) on the 2nd list, while things of more usefulness, like the AIT VirtSem, can go on the "serious" list. This list as it stands now seems like trying to locate a physics book in a huge bin full of jumbled romance novels and comics.

--
Stanton McCandlish * Space Migration * Networking * ChaOrder * NO GOV'T. *
anton@hydra.unm.edu * Intelligence Increase * Nano * Crypto * NO RELIGION *
FidoNet: 1:301/2 * Life Extension * Ethics * VR * Now! * NO MORE LIES! *
Noise in the Void BBS * +1-505-246-8515 (24hr, 1200-14400, v32bis, N-8-1) *

------------------------------

Date: Sat, 31 Jul 1993 19:22:58 -0600 (MDT)
From: Stanton McCandlish
Subject: TECH: encrypted computer?

Quoth E. Dean Tribble, verily I say unto thee:

-=>Huh? this sounds like you're thinking only of the technology. The
-=>problem is that this scheme complicates the heart of a retail
-=>business: the sales process. The new sales process requires phone

It'll happen eventually, though. Hell, there are a lot of BBSs with an online credit-card program, in which one can buy time on the BBS, order products and services, etc. I could install such a thing on my own BBS in a matter of hours. Requires setting up the credit card mumbo-jumbo with a bank, but that's about it. The BBS software I use can be purchased in this manner. Just call the support board, use the credit card door, and a week later you get a personalized disk and manual in the mail. There's another method of doing this that uses an external agency (not sure of the specifics; presumably they check to make sure the card number given to you is legit and not a hot card, etc. etc.).
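The post doesn't say how such an agency checks that a number is "legit", but the cheapest first test -- before any hot-card database lookup -- would plausibly be the Luhn checksum that card numbers are constructed to satisfy. A minimal sketch (the function name and sample numbers are mine):

```python
# Luhn checksum: the standard self-check built into credit card numbers.
# Catches typos and made-up numbers; a "hot card" check would still
# require a database lookup on top of this.
def luhn_valid(number: str) -> bool:
    digits = [int(c) for c in number if c.isdigit()]
    total = 0
    # Double every second digit from the right; subtract 9 if it exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111 1111 1111 1111"))  # well-known test number -> True
print(luhn_valid("4111 1111 1111 1112"))  # one digit off -> False
```

Passing the checksum only means the number is well-formed, which is why the external agency's real value is the live check against the bank.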
-=>connection that it didn't before; it requires credit card only
-=>transactions; it requires a server at the other end that can cobble

[etc etc etc]

All true, but soon all irrelevant. The advent of small computers required much retraining, and a whole slew of new ways of doing things, more expenses, whole new personnel depts., etc. Computerization of business still happened. If there's demand for something, and it's doable, it'll get done. Simple.

-=>Perhaps easier is to simply record the serial number of every sold
-=>package and send it to the manufacturer so that they can know who
-=>originally bought any piece of software that they encounter. This
-=>feels a little too Big Brotherish to me though.

Whoever makes WordPerfect is already doing this. I was in a software store yesterday, and I saw someone buy WPWin; they were required to fill out the reg. card and give it to the sales clerk to send in. Fortunately, no ID required, so you can just lie your ass off if you have some need for WP not to know who you are.

--
Stanton McCandlish * Space Migration * Networking * ChaOrder * NO GOV'T. *
anton@hydra.unm.edu * Intelligence Increase * Nano * Crypto * NO RELIGION *
FidoNet: 1:301/2 * Life Extension * Ethics * VR * Now! * NO MORE LIES! *
Noise in the Void BBS * +1-505-246-8515 (24hr, 1200-14400, v32bis, N-8-1) *

------------------------------

Date: Sat, 31 Jul 93 18:41:15 -0700
From: dasher@netcom.com (D. Anton Sherwood)
Subject: Searle and Starr

> This "system" is part human, part inanimate objects.
> How can inanimate objects be conscious?

Argument from Limited Imagination again. "Books are by definition inanimate. Inanimate things are by definition not conscious. Therefore books cannot embody consciousness." As if consciousness were a unitary thing! A human is made of inanimate components, and is yet animate. Tim has said he disagrees with the mechanistic view of mind, but if he said what his theory is I missed it.
If he'll tell, that might help the rest of us understand. I take an operational, game-theory attitude to questions like "what is human" and "what are rights" -- whether you are self-aware is unknowable except to yourself (or subsets of yourself), and the selfish thing for me to do is seek to optimize the likelihood of getting the right answer from the tests I can conveniently apply. I can't distinguish the Chinese Room from Perry's mailbox, so if both behave like Tim's mailbox (I have evidence that Tim's mailbox is directed by a Naked Ape with wavy hair who can converse without referring to lookup tables) I must assume both are effectively conscious.

*\\* Anton Ubi scriptum?

------------------------------

Date: Sat, 31 Jul 93 18:43:01 -0700
From: dasher@netcom.com (D. Anton Sherwood)
Subject: HeLa cells

I remember a couple of news items some years ago about the HeLa cell line. One was from her kinfolk, or maybe some random leftist looking for a bone to pick, complaining that Ms. Lacks had been turned into something between a zombie and a lab animal, and that she wouldn't suffer this indignity if she'd been White. The other news item was that HeLa cells had been found to contaminate various human cell cultures and take them over, so lots of lab work had to be thrown out. Speaking of human cell cultures, I read once that the best source of interferon was a cell line taken from an Israeli foreskin.

Anton Sherwood dasher@netcom.com +1 415 267 0685
1800 Market St #207, San Francisco 94102 USA

------------------------------

Date: Sat, 31 Jul 93 18:43:17 -0700
From: dasher@netcom.com (D. Anton Sherwood)
Subject: CHAT: Reanimation chores & posthuman mail filters

Nick worries that, when reanimated, Alcor's clients will have an impossible burden of e-mail to catch up on. I have to wonder _what_ anyone would write to such a person's mailbox, besides legal notices.

*\\* Anton Ubi scriptum?
------------------------------

Date: Sat, 31 Jul 1993 21:54:16 -0500
From: extr@jido.b30.ingr.com (Craig Presson)
Subject: HEX: more concerns

In <9307302123.AA21447@frc060>, Tony Hamilton - FES ERG~ writes:

[HEx a failed experiment]

Well, if it were a game of Life, I'd agree readily. Right now, I think we just need to wait and see what the next phase brings.

[...]

|> And the problem is that, while there is virtually no risk of such a crash
|> (how can there be a crash with no activity?), no-one is investing. Heck, even

Hey, he only provided two decimal places. There are several issues that would drop below p0.01 if they could -- e.g., GOD.

[...]

|> On this point, I can only say - does anyone know what is going to happen
|> to the current reputations and values when the "New Deal" goes into effect?
|> I admit, I haven't studied the changes well enough to understand what all
|> will happen.

I felt like we started out with an overvalued Thorne, and too small a money supply. Then various people attempted marketmaking with different approaches, and different assumptions, and interesting chaos resulted. Now we're going to get on a more rational footing with a built-in auction market. Wait & see.

^ / ------/---- extropy@jido.b30.ingr.com (Freeman Craig Presson)
/AS 5/20/373 PNO /ExI 4/373 PNO ** E' and E-choice spoken here

------------------------------

End of Extropians Digest V93 #212
*********************************