Message 14: From exi@panix.com Tue Jul 27 18:27:56 1993
Return-Path:
Received: from usc.edu by chaph.usc.edu (4.1/SMI-4.1+ucs-3.0) id AA15145; Tue, 27 Jul 93 18:27:53 PDT
Errors-To: Extropians-Request@gnu.ai.mit.edu
Received: from panix.com by usc.edu (4.1/SMI-3.0DEV3-USC+3.1) id AA29477; Tue, 27 Jul 93 18:27:35 PDT
Errors-To: Extropians-Request@gnu.ai.mit.edu
Received: by panix.com id AA17538 (5.65c/IDA-1.4.4 for more@usc.edu); Tue, 27 Jul 1993 21:24:51 -0400
Date: Tue, 27 Jul 1993 21:24:51 -0400
Message-Id: <199307280124.AA17538@panix.com>
To: Exi@panix.com
From: Exi@panix.com
Subject: Extropians Digest
X-Extropian-Date: July 28, 373 P.N.O. [01:23:28 UTC]
Reply-To: extropians@gnu.ai.mit.edu
Errors-To: Extropians-Request@gnu.ai.mit.edu
Status: R

Extropians Digest         Wed, 28 Jul 93       Volume 93 : Issue 208

Today's Topics:
    FSF: InfoProp, etc. [1 msgs]
    FSF: InfoProp, etc. [1 msgs]
    FSF: Some Useful Software, No Useful Politics [4 msgs]
    FSF: Some Useful Software, No Useful Politics [1 msgs]
    FWD: INFO: Electronic Journal on Virtual Culture [1 msgs]
    Robot Slaves. was What are big upcoming problems? [1 msgs]
    The Age of Robots [1 msgs]
    Wage Competition (LONG) [1 msgs]
    Who is signed up for cryonics [3 msgs]
    Why Rich Folks Work [1 msgs]
    techno-unemployment (yet again and forevermore) [1 msgs]

Administrivia:
    No admin msg.

Approximate Size: 54260 bytes.

----------------------------------------------------------------------

Date: Tue, 27 Jul 93 17:11:00 EDT
From: eisrael@suneast.east.sun.com (Elias Israel - SunSelect Engineering)
Subject: FSF: Some Useful Software, No Useful Politics

> Why not just have them sign a restrictive contract with you? Why do
> you need this artificial notion of "intellectual property"?

Look, here's the argument spelled out: if you could get a contract to
exchange it in a free market, it must be property. If what you're
exchanging is an idea, it's *intellectual property.*

Why is it that every time I use the phrase "intellectual property," you
think I'm talking about the law?

Elias Israel
eisrael@east.sun.com
HEx: E

------------------------------

Date: Tue, 27 Jul 93 14:16:15 PDT
From: thamilto@pcocd2.intel.com (Tony Hamilton - FES ERG~)
Subject: FSF: Some Useful Software, No Useful Politics

Perry writes:

> I think you miss the point, Tony.
>
> We will consider contract enforcement in a moment. Right now, on the
> difference between patent and copyright, consider this -- it's grossly
> unlikely that I could come up with the precise text of "The Silicon
> Man" by accident, but it is very likely that I could come up with the
> compression algorithm used by "compress". Trade secrets, which are more
> or less the sort of protection you are mentioning (that is, you agree
> not to disclose my invention), are a perfectly plausible mechanism.
> What we are talking about is PATENTS -- that is, a document that gives
> me the right to restrict your use of an idea, regardless of how you
> got that idea.
>
> Now, on the question of the use of force, the point is that
> libertarian principles prohibit only the initiation of non-consensual
> force. If I've signed a contract agreeing that if I fail to obey you
> can do X to me, well, it's not non-consensual any more.
>
> A key difference between an operation and being knifed is consent, you
> know.

Perry, I think you missed _my_ point. Regardless of the differences
between copyright and patent (which I was not discussing), they can be
_enforced_ no differently. I specifically stated I wasn't arguing for
or against copyrights and patents.
My point is, it doesn't matter which concepts seem appropriate or right;
the only thing that matters is how they are enforced, if they are to be
used at all. I agree with everything you say about consensual force vs.
non-consensual force. Doesn't matter. You can draw up a contract for
copyrights or patent rights just the same, and in either case you have
to be _able_ to enforce it if you are serious about it. I never intended
to argue any point concerning the rights or wrongs of such contracts.

> > Today, there is the semblance of such an agreement, that being the law, but
> > of course it is enforced by use of force and coercion by the state.
>
> The law of our nation is NOT a contract. See Lysander Spooner's "No
> Treason VI: The Constitution of No Authority" which is available by
> anonymous FTP from the libernet archive on think.com.

Hey, I said "semblance of such an agreement". Does that equal
"contract"? I'm fully aware of, and support, the idea that our
constitution and laws are not a contract. My point was that it exists.
The government couldn't care less that all of our signatures are not on
the law -- they have the ability to enforce their laws in many cases --
so what does it matter to them if we don't agree with them?

If the government were a business in an Extropian world, it would have
collapsed due to reputation alone. Who would possibly trust a business
that has never honored any contract it ever made? (And in this case I
_do_ refer to contracts, not laws.) But in Reality(tm) (gads - can't
believe I just did that - who came up with that anyway?), the US
government just happens to be the most competitive kid on the block:
_relatively_ more freedom, land, defense, standard of living, and so
forth than most other countries. So I still stand by my assertion that
we can either overthrow the govt., or leave. In their own way,
Extropianism and other philosophies should in fact overthrow it given
time. Once the world is Extropian, or something similar, then we can
leave everything to competition to ensure that squelchers (even
governments, co-ops, or the like) don't get away with things.

Back to copyright and patent: I don't think either concept will
survive. To make either work (I forgot to include trade secrets), it
has to be unpopular with the community/market at large to engage in
infringement practices. Given the trends towards more and more public
information, I don't think anyone will be able to own words or ideas
unless they are kept secret.

Tony Hamilton
thamilto@pcocd2.intel.com
HAM on HEX

(btw: where is this libernet archive? Is think.com the entire domain?)

------------------------------

Date: Tue, 27 Jul 93 17:29:00 EDT
From: eisrael@suneast.east.sun.com (Elias Israel - SunSelect Engineering)
Subject: FSF: InfoProp, etc.

> A trust established by Patrick Henry continues to this day.
> Many land estates homesteaded generations ago are owned by
> heirs of the same family now. Real "property rights" don't
> exist or evaporate because of arbitrary statutes.

A valid point. However, you, like Perry, assume that I propose to
retain the current patent law. I do not.

Someone invented the XOR cursor and should be able to sell the idea.
Organizations like the FSF believe he should have had to give it away.
Tell me how that could arise in a PPL, or admit that you're talking
about something entirely different.

I will admit that my comment about the XOR cursor patent running out
was not completely thought out, merely an off-the-cuff fix for an
obviously stupid patent.
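[A note from your editor: for readers who never ran into it, the patent
being argued over here covers the exclusive-or cursor trick. Below is a
minimal sketch of the idea in C, assuming a hypothetical byte-per-pixel
framebuffer "fb" and ignoring clipping; an illustration only, not
anyone's actual patented code:

    #define CURSOR_W 16
    #define CURSOR_H 16

    static unsigned char fb[480][640];  /* hypothetical 8-bit framebuffer */

    /*
     * XOR the cursor mask into the framebuffer at (x, y).  Because
     * a ^ m ^ m == a, calling this a second time with the same
     * arguments erases the cursor and restores the pixels underneath,
     * so no saved-under buffer is needed.
     */
    void xor_cursor(int x, int y, unsigned char mask[CURSOR_H][CURSOR_W])
    {
        int i, j;

        for (i = 0; i < CURSOR_H; i++)
            for (j = 0; j < CURSOR_W; j++)
                fb[y + i][x + j] ^= mask[i][j];
    }

The whole technique is the one "^=" line: draw once to show the cursor,
draw again to erase it. -Ed.]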
> I worked very hard to shovel a ton of elephant dung onto
> your front yard, and I deserve a reward. Pay up!

You're confusing invention with labour.

Elias Israel
eisrael@east.sun.com
HEx: E

------------------------------

Date: Tue, 27 Jul 93 17:32:51 EDT
From: fnerd@smds.com (FutureNerd Steve Witham)
Subject: Wage Competition (LONG)

dkrieger@synopsys.com (Dave Krieger) sez-

> Grrrr. Steve, I dislike your writing style, because you heap multiple
> implicit assumptions into every sentence.

Sorry to irritate. I'm doing this at work and trying to address one
point at a time. Personally I like this style of interaction. Thanks
for questioning my assumptions; that was the idea.

By the way, a little context: my goal is to try to figure out a way for
humans to survive past the advent of "true AI." Trying to enslave them
seems like hanging on by fingernails over an abyss -- a real losing
proposition. There may be other saving graces, other ways, but right
now we're arguing whether we can stay viable by *designing* AIs to
serve us.

> You're postulating that we would need only a few decades to create them
> from scratch, but then require four billion years to shape their behavior?
> Explain to me why this isn't silly. We build in their motivational
> structure from the start, or we don't build them at all.

Well, the simplest but not best counterexample is an uploaded human:
bootstrap off carbon evolution's billions of years of work. We don't
have to understand the programming at all, just duplicate the low-level
neuron behavior and the wiring.

But the more general answer is this: all the examples of real
intelligence and life we have are selfish. Specifically, they are
geared to spread their own programs. A slave is different; it's geared
to serve someone else's interests. Sure, you'll say, selfishness is
what you'd expect out of evolution, but this is different, this is
design. But I'm saying intelligence requires learning, learning is
evolution, and self*less*ness is not an evolutionarily stable strategy
in the time frame of the individual's learning. The intelligence we
know works *because* it's selfish. Selfless intelligence sounds (to me)
like a much, much harder thing to create. (Hans gives examples of
mothers and worker ants. Both of these work mainly to spread copies of
their programs.)

I'm one of those weirdos who thinks that learning is the key to taking
AI to the big time. You know, the critical-mass sort of model. So in a
couple more years or decades, someone's going to get the right formula
in a big enough computational vat and kablooie! The thing "explodes,"
building most of its orders of magnitude of complexity itself. As with
uploads, we still only have to understand enough to lay the groundwork;
selfish evolution does the main work again.

> Is your libido (one component of your motivational structure) under the
> control of your problem-solving intelligence? That is, can you, at will,
> re-orient yourself sexually for rational reasons? This includes not only
> gender preferences but age, race, species, and activities preferences...
> for example, can you reprogram yourself to enjoy getting it from a dolphin?
> Then you are yourself an example of a robot whose motivational structure
  ^^^^
I think you mean "If not," right?

> is separate from his problem-solving intellect.

I mixed this one up badly. I wanted to say that emotions and intellect
are integrated, that you can't have them separate. Instead I quoted
Bertrand Russell to the effect that emotions control thoughts.
I wanted that to mean that thinking can't work without emotion, but
obviously my confusion on the matter was showing. I believe that
emotions and thought are integrated at all but possibly the lowest
levels, and this integrated system needs to continuously reprogram
itself at all levels in order to work.

What I want and what I want to want, sexually, are pretty close right
now; I have no good *reason* to want to want dolphins. I have noticed
evolution of my libido (feelings about age, race, gender, species and
activities, to take some variables from your question) consistent with
my beliefs and desires as things went along.

You're right that the genes have an incredible influence here. All I
can say is to repeat that they took four billion years to arrange it,
and I *guess* that that would be hard for us to duplicate to arbitrary
ends.

> >Yes; the simplest would be a button under control of the owner.
>
> No, no, fnerd; read what was written: _detectors_ that generate _internal

[Hans corrected me on this too; see my response to him also.]

> rewards_. What you want is mechanisms that do _not_ require the owner's
> active input (but can be guided by it), and that are independent of the
> robot's control, but that are capable of evaluating the results of the
> robot's actions and administering reward/punishment accordingly.
> (Examples: Give the robot a jolt if he monkeys with his own pleasure/pain
> circuitry.

Very interesting safeguard, once you have this circuitry set up.

> Give the robot a pleasurable jolt every time the master makes
> another 100,000 thornes.)

Also very interesting, but it could make the "owner" slave to his
holdings.

> Presumably we can give such mechanisms access to
> the content of the robot's thoughts as well; then it can make the robot
> feel blissful and content when contemplating the master's pleasure, and
> anxious and distraught when contemplating his displeasure (and, most
> importantly, agonized and nauseated when the robot contemplates pushing his
> own buttons!).

I think reward systems, both simple and complex, both preprogrammed and
user-controlled, become a game for the learning part of the system,
which is necessary and necessarily selfish, to overcome. Have you ever
changed your mind -- come to a sudden understanding -- of what someone
*really* wanted?

> Sure, but the owner is the tail that wags the dog, because 1) although the
> slave has some input to the owner in the form of its behaviors, it does not
> directly control reward/punishment of the owner;

The slave is the magic genie that is the key to the wonderful lifestyle
the owner enjoys. How dependent and complacent might the owner become?

> 2) the slave is not the
> sole source of input to the owner... the owner can check other sources of
> information to verify that the slave is carrying out his wishes;

Just part of the game, I suspect. Even humans can be clever
manipulators.

> 3) what's
> wrong with being a small subsystem, if you're the subsystem that sets the
> goals for the entire system?

The owner/reward system becomes a sort of complicated sex organ at
best, to be surgically altered if it doesn't suit. Hans suggests that
the owners will become like children, gently guided by their slaves.
But he doesn't take the process to its logical conclusion. Stupid,
too-demanding, inflexible children can be pretty galling, I imagine.

> The Board of Directors of a typical big
> corporation is an infinitesimal fraction of the total bulk of the company,
> but it is the portion that sets the agenda.
But an arbitrary Board of Directors can't necessarily guide the company
into profitability. Also, employees of most corporations are free to
leave.

Factories run by slaves are noted for low productivity. I guess you can
keep a slave by holding him back, but you can't have an arbitrarily
capable and profitable slave. So you can have a dumbed-down, hobbled
slave -- and fail to compete with the independent AIs in the economy.

> You are assuming the motivation of freedom in order to prove it. The slave
> would have to _already want to assume control_ in order to carry out such a
> scheme. How does a robot get to such a state, from an initial state in
> which the goal is to please the master, not to take control from the
> master?

By the nature of learning. I think it's too hard to put a system in a
*stable* state of unselfishness if it can learn, which it needs to. It
will seek freedom the way a government agency quickly evolves to
maintain itself at its clients' expense, independent of its founders'
feedback. An evolving system in an unstable situation moves in wider
and wider circles till it finds a stable one.

> The slave does not have control over the things he does for the owner. He
> has a simple binary choice -- he can do what the owner wants, or he can do
> something else. ...

He has control over how. If he didn't, he wouldn't be necessary. By
the way, saying "he" brings up moral questions for some people...

> >Also, what are the effects of the contradictory, irrational patterns
> >of wishes of the owner on this giant slave brain?
>
> A sufficiently complex slave brain will develop heuristics for predicting
> the owner's wishes, including multiple contingency plans and
> multiple-choice menus of activities and delectations, so that at least one
> correct option will (the robot hopes) always be available. If the slave is
> truly superintelligent, he will be able to develop highly elaborate fuzzy
> models for what is or isn't likely to please the master under a particular
> set of conditions (time of day or week, weather conditions, master's
> emotional set, etc.).

Have you ever had a boss or mate who was maddeningly inconsistent?
Higher intelligence does not mean the ability to predict what a lower
intelligence will do to sufficient accuracy. That is a different kind
of problem from simply increasing processing and memory by a couple of
orders of magnitude. But as Hans notes (interesting point), possibly
chaos is much easier to *control* than to predict.

...

> This paragraph makes me think you have failed to grasp the structure of
> Hans' argument;

You're right, I did.

> he is enumerating the possible cases of motivational
> structures: first, a conditionable emotional intelligence; second, an
> axiomatic rational intelligence. For reasons that have to do with Gödel's
> incompleteness theorem, I too am uninclined to believe that strictly
> axiomatic systems will ever be useful for much of anything. If they are,
> they're a trivial case; make the first three axioms Asimov's Laws.

It's hard to make predictions about a situation one considers
impossible. I think Asimov's Laws can't be defined in a way (and to a
level of detail) that is both flexible enough and immune to being
gotten around by a learning system.

...

> >> [Hans:] The intelligence acts to achieve a-priori goals.
>
> You and Hans are saying the same thing here... the robot's thought and
> action will always follow the dictates of the robot's emotions.

I'm saying they're mostly *not* a-priori; they evolve.
> You seem
> to think that the robot, _a priori_, wants to push his own buttons, but
> there's no reason to think that.

I hope my arguments about (short-term) evolutionary stability are
starting to look like reasons.

-fnerd
quote me

------------------------------

Date: Tue, 27 Jul 93 15:02:13 PDT
From: Robin Hanson
Subject: Why Rich Folks Work

Jay Prime Positive asks:

>How can you compare T which is in units of time (seconds for instance)
>to S which is presumably measured in utility units (whatever they
>are)? (And what does it mean to add utility to time?)

Since S only appears in my equations in the form S+L, it would be
natural to assume it is expressed in the same units as L, such as time.
I don't see how the units matter, though; the equations should say it
all.

Hans Moravec writes (in the "The Age of Robots" thread):

>> Why would someone in high demand work more than they wanted to?
>
>Your utility formula implied predetermined returns, but if someone's
>work is very valuable to someone else, the rewards (I'll make it worth
>your while) (and "politicking": the company, the patient, the country NEEDS
>your 150% effort) may be escalated until they succumb by working too hard.
>A lot of doctors, executives and engineers I know work way too hard
>and long for their health, marriages, etc. Of course, some of us are lazy,
>and manage to maneuver ourselves into cushy academic jobs, at less pay ...

The expression I gave, U[L, T-L, P[L+S,K]], seems general enough to
include the effects of peer pressure, guilt, etc. But it sounds like
you are concluding these folks are not "rational" enough to know what
is good for them, and of course simple economic models don't handle
irrationality well. I would tend to defer more, and assume they know
how much they like working; the med students I knew knew what they were
getting into. But there is room for disagreement here.

>As I argued earlier, civilization works by artificially pushing the
>envelope of human adaptability, making us work much harder than is
>"natural", making us permanently stressed, just as if we had a
>resource shortage. In fat times, tribal villages don't wage war:
>it's not in their advantage to do so. In lean times, it often is.

This argument seems quite suspect to me. If most people hate working so
much, why don't they work less and lower their standard of living? I
tried to give an alternative explanation for why fat tribes don't work
much.

Robin Hanson

P.S. I'll be gone till Monday & pick up the threads then.

------------------------------

Date: Tue, 27 Jul 1993 14:51:30 -0800
From: lefty@apple.com (Lefty)
Subject: Who is signed up for cryonics

Todd Perlmutter wonders:

>Does a person own the rights to his DNA? (I think there was actually
>a case in court about this where a doctor grew some kind of culture from a
>Cancer Tumor taken out of some woman).

This is true. A woman named Henrietta Lacks died of, I believe,
cervical cancer. One of her doctors used samples of her cancer cells to
produce the first "immortal" culture, now known as "HeLa cells". Her
family sued to recover the proceeds, but I don't know how the case
turned out.

--
Lefty (lefty@apple.com)
C:.M:.C:., D:.O:.D:.

------------------------------

Date: Tue, 27 Jul 1993 14:32:33 -0800
From: lefty@apple.com (Lefty)
Subject: Who is signed up for cryonics

Ray says:

>Lefty () writes:
>>
>> Perry claims:
>> >1) At liquid nitrogen temperatures, you can probably hang out for
>> > 15000 years or more without any significant degradation.
>>
>> Significant degradation of _what_?
>> It doesn't seem that large scale
>> structures are really well-preserved using extreme cold, certainly not with
>> current technology. Freezing and thawing appear to cause massive
>> disruption of structure on a macroscopic scale. You _may_ save the DNA;
>> whether you'll save the neuron pathways is highly questionable.
>
> Don't you have that backwards? Microcrystallization fractures cell
>features but leaves them pretty much where they are -- ripe for
>extrapolative reconstruction. Since the brain is pretty redundant, even a loss
>of 10-20% of the brain cells should leave enough information to construct a
>viable being that retains memories. I've watched documentaries on
>hemispherectomies -- the removal of ~50% of the brain of young children
>(even old ones). There's always some motor and memory loss, but the
>personality is retained.

I had assumed that you folks were shooting a tad higher than "viable
being that retains memories". Apologies if I was mistaken.

Be careful about extrapolating from one thing to another, unrelated
thing, Ray. Your documentary-watching would seem to have little bearing
on the issue at hand. We're not talking about young children here. Nor
do I view the experiments with dogs, baboons, earthworms, etc., to be
especially convincing. None of the higher animals were taken down to
anything close to liquid nitrogen temperatures. None, in point of fact,
were taken below freezing.

> In the first paragraph, you make an assertion, so where are your
>references or research?

Umm, Ray, it says as much in the sci.cryonics FAQ. Do you claim that
freezing does no damage?

>> >3) The information needed to reconstruct a functioning brain from
>> > what's frozen seems very likely to all be there.
>>
>> Really? What information would that be, precisely?
>
> The information stored in the brain. Dualism has no scientific validity
>at all unless you're a religious nut.

Ah, that clears it up. The information needed to reconstruct a
functioning brain is the information stored in the brain. Thanks, Ray!

Surely you can do better than that. Circular reasoning and begging the
question have no religious validity at all unless you're a scientismatic
nut.

>> >I don't accept proof by vigorous assertion.
>>
>> Unless, apparently, it's "proof" of a conclusion to which you happen to
>> subscribe.
>
>He has already accepted the risks and put his money on it. You say
>it won't work; the burden of proof is on you if you want to convince
>him.

I'm not trying to convince anybody, Ray. Well, maybe myself, but
without any notable degree of success.

You've gone right back to Pascal's Wager, Ray.

By the way, I _do_ admire the little Medical Alert tags. Do you imagine
that your average $14,000-per-annum EMT is going to pay a substantial
degree of attention to 'em? Or do you suppose they're more likely to
tag you, bag you, and ask questions later?

--
Lefty (lefty@apple.com)
C:.M:.C:., D:.O:.D:.

------------------------------

Date: Tue, 27 Jul 93 17:40:03 EDT
From: baumbach@atmel.com (Peter Baumbach)
Subject: FSF: Some Useful Software, No Useful Politics

Ray Cromwell writes:

> Just to add to this. The PPL code on this list already enforces a
> private copyright mechanism. Pandit Singh was the first person to break
> it. Like I said in another message, future software copyright is likely
> to be enforced via tit-for-tat. Right now, software is largely
> self-contained, but in the future, when everything is networked and
> information is more distributed, software copyright can be enforced by
> ostracization.
There > is no need for state coercion. Suppose a third party read some of the illegally(PPL) copied messages. They are not a subscriber to this list. They did not agree to the PPL. Does this mean they can copy the message and do with it what they wish? Someone remind me. What is the best source to read up on Privately Produced Law? Peter Baumbach baumbach@atmel.com HEX: PETER (selling for p.01) ------------------------------ Date: Tue, 27 Jul 1993 18:06:48 -0400 From: "Perry E. Metzger" Subject: FSF: InfoProp, etc. Elias Israel - SunSelect Engineering says: > > A trust established by Patrick Henry continues to this day. > > Many land estates homesteaded generations ago are owned by > > heirs of the same family now. Real "property rights" don't > > exist or evaporate because of arbitrary statutes. > > A valid point. However, you, like Perry, assume that I propose to > retain the current patent law. I do not. > > Someone invented the XOR cursor and should be able to sell the idea. Certainly -- he can use something like trade secret protection if he feels like it. However, this is VERY different from the patent notion, which gives you ownership of an idea regardless of how someone else got that idea. Lets remember how patents came about. The British monarchy, deciding that they needed money, started selling monopolies. These did not just cover all rights to use ideas that someone had thought of but all rights to provide things like ferry service from point A to point B, all rights to do banking in a region, etc. Now, one can think of the exclusive right to run a taxi company in Edgewater, NJ as a kind of property -- but is it the kind of property we wish to encourage the existance of? Is it a legitimate form of property? If we wish to see the capacity to sell ideas, well, contract law is sufficient for that without adding on the notion of patents. This has the "defect" to the patent holder that others might independantly derive their idea, and that their idea must be held as a trade secret -- however, I see nothing wrong with this. In fact, I think its a good thing. If your idea is truly something hard to come up with, then you can keep control indefinately. If its obvious, then others will rediscover it and you will lose control -- which is desirable, since you don't deserve the right to impede all progress simply because you got to the patent office with the idea about xor cursors first. > > I worked very hard to shovel a ton of elephant dung onto > > your front yard, and I deserve a reward. Pay up! > > You're confusing invention with labour. No, I believe you are confusing coming up with an invention first with the right to control an invention for some number of years. Perry ------------------------------ Date: Tue, 27 Jul 93 18:44:10 EDT From: fnerd@smds.com (FutureNerd Steve Witham) Subject: FSF: Some Useful Software, No Useful Politics Elias Israel eisrael@east.sun.com sez- > The basic charter of FSF states that software ought to be free and that > intellectual property is a sham, solely because that software can be > copied without destroying the original. (At base, this is what the > claim rests on. The appeals to economics to be found in the FSF > literature are uneducated claptrap.) I'm a libertarian. I believe in property rights. I don't believe "intellectual property" qualifies as true property. Just a data point. 
-fnerd
quote me freely

------------------------------

Date: Tue, 27 Jul 93 18:15:11 EDT
From: fnerd@smds.com (FutureNerd Steve Witham)
Subject: techno-unemployment (yet again and forevermore)

> I've mentioned about three times now that all that matters is
> comparative advantage and not actual productivity -- it makes no
> difference if the machine can outpace you provided you and the
> machine have different relative rates of production. Is no one
> listening?
>
> Perry

I've been listening, but I'm intimidated by the possible
term-of-artness of "comparative advantage." So, off with timidity...

As I see it, the problem is productivity as a function of cost of
living. If I can only make too-limited value out of the matter and
energy it takes to sustain me (compared to what others could do with
it), then I can't afford to eat. This is obviously true for
non-uploads, but maybe even true for uploads -- the virtual circuit
patterns might still be very inefficient.

-fnerd
quote me

------------------------------

Date: Tue, 27 Jul 93 18:40:28 EDT
From: fnerd@smds.com (FutureNerd Steve Witham)
Subject: Robot Slaves. was What are big upcoming problems?

edgar@spectrx.saigon.com (Edgar W. Swank) asks-

> Why would we even -make- an AI (except for uploading) that we could
> not control?

Because it's there. Or by accident. Because there's no other way to
make AI.

> (Maybe you could use artificial selection to produce robots with
> Edward Teller syndrome (i.e., smart slaves occur among humans, why
> not robots?), but I doubt it would work reliably.)
>
> Was Edward Teller supposed to be an example of an "intelligent slave"? On
> what basis do you insult this great intellectual figure? Because he
> holds different opinions from you?

Well, the thing is, he isn't *en*slaved; he *wants* to work for the
military superiority of the U.S. I think his devotion is wrong to the
point of being sick, but I have to admit it happens. Calling him a
slave is an insult on a different axis, though. I only meant a
strangely devoted genius of the kind we're debating. But also, I
imagine Teller is ideologically motivated. Producing someone who was
as devoted to an individual, less-smart person would be harder.

> I agree that human slaves have never been very reliable.
>
> Later Hans Moravec (!) indicated his disagreement with fnerd and
> posted a long article that I thought dealt with the subject
> thoroughly. Any response, fnerd?

I found Hans' chapter wonderful. I just don't believe in the
legally/programmatically/forcibly protected haven on Earth for humans.
I think there has to be a more fundamental basis for it or it won't
work. I've responded to some of the points in other posts.

-fnerd
quote me

------------------------------

Date: Tue, 27 Jul 93 18:53:55 WET DST
From: rjc@gnu.ai.mit.edu (Ray)
Subject: Who is signed up for cryonics

Lefty () writes:

> I had assumed that you folks were shooting a tad higher than "viable being
> that retains memories". Apologies if I was mistaken.
>
> Be careful about extrapolating from one thing to another, unrelated thing,
> Ray. Your documentary-watching would seem to have little bearing on the
> issue at hand. We're not talking about young children here.

My only point is that even with permanent information loss (irreparable
sections), many people still survive. Children survive much better
because of brain-cell plasticity; perhaps this can be artificially
induced in adults.

> Nor do I view
> the experiments with dogs, baboons, earthworms, etc., to be especially
> convincing.
> None of the higher animals were taken down to anything close
> to liquid nitrogen temperatures. None, in point of fact, were taken
> below freezing.

Many frogs and arctic beavers do this all the time (go below freezing).
In fact, frogs have a habit of getting themselves frozen solid.

> > In the first paragraph, you make an assertion, so where are your
> >references or research?
>
> Umm, Ray, it says as much in the sci.cryonics FAQ. Do you claim that
> freezing does no damage?

No, I am not saying that freezing doesn't do damage. I am saying that
most of it will be reparable, and that those bits that are permanently
lost will not be enough to kill the patient. If I wake up 300 years
later and the only problem is that I can't remember my childhood
clearly, I will consider the procedure a success.

> >> >3) The information needed to reconstruct a functioning brain from
> >> > what's frozen seems very likely to all be there.
> >>
> >> Really? What information would that be, precisely?
> >
> > The information stored in the brain. Dualism has no scientific validity
> >at all unless you're a religious nut.
>
> Ah, that clears it up. The information needed to reconstruct a functioning
> brain is the information stored in the brain. Thanks, Ray!

Just clearing it up for those who are confused about where the seat of
consciousness is. A surprising majority of people seem to think it is
contained in a non-existent form called the "soul".

> Surely you can do better than that. Circular reasoning and begging the
> question have no religious validity at all unless you're a scientismatic
> nut.

Of course they have religious validity; see: logical proof by Bible
reference.

> >> >I don't accept proof by vigorous assertion.
> >>
> >> Unless, apparently, it's "proof" of a conclusion to which you happen to
> >> subscribe.
> >
> >He has already accepted the risks and put his money on it. You say
> >it won't work; the burden of proof is on you if you want to convince
> >him.
>
> I'm not trying to convince anybody, Ray. Well, maybe myself, but without
> any notable degree of success.
>
> You've gone right back to Pascal's Wager, Ray.
>
> By the way, I _do_ admire the little Medical Alert tags. Do you imagine
> that your average $14,000-per-annum EMT is going to pay a substantial
> degree of attention to 'em? Or do you suppose they're more likely to tag
> you, bag you, and ask questions later?

At least there's a higher probability than not having them at all. When
they are not present, the default assumption is that you'd like to sail
into oblivion if your body temporarily shuts down. How many of the
current people in storage were put there because of accidents, and how
many were frozen because of slow progressive diseases like heart
disease, AIDS, and cancer?

-- Ray Cromwell        | Engineering is the implementation of science; --
-- EE/Math Student     | politics is the implementation of faith.      --
-- rjc@gnu.ai.mit.edu  |               - Zetetic Commentaries          --

------------------------------

Date: Tue, 27 Jul 93 15:58:56 PDT
From: Robin Hanson
Subject: The Age of Robots

I grant Moravec the importance of SF, and his uses of the words
"politics" and "useful". I also grant possibly weak tendencies for
wealthy folk to live farther apart, for smarter actors to avoid war,
and for nanotech to localize production. Judging from his last replies,
Moravec does not seem to be arguing for more than weak tendencies.
>> If people take my advice to diversify their labor assets, a large
>> increase in the relative productivity of capital to labor need not
>> result in large wealth inequalities.
>
>You can choose your friends to be forward-looking go-getters, but there
>are many others not properly equipped or motivated.

I also grant that many may ignore warnings to diversify their assets,
and many may thereby fall into poverty. But surely the percentage of
such folks, and how much they are perceived to have been given
opportunity and fair warning to avoid this fate, will influence the
pressure for strong socialism.

>I think in the easiest future, the robots simply take over as soon as they
>can, and biology is a dead duck. The one I've outlined holds the fort
>a little longer, in a limited, historically and environmentally unique
>place, through some deft social-robotic engineering -- but only for
>chapter 4. I don't think the wimpy welfare system of earth could make
>much headway against the wild life outside, but has a well-enough
>equipped and organized platform to defend the planet, temporarily.

I see a more mixed future where good and bad things happen side by
side, and in no particular order. Biology should die slowly, more from
lack of interest, and in diverse locations. Slow, dumb, but RICH,
biofolk would live all over the place, in space and on earth, and
gradually be nudged aside as the wealth of uploads and freed robots
(someone's bound to make a few) grows faster. Biofolk's wealth, and the
power that comes from it, should be respected for the same reason most
wealth is: out of fear of losing one's own if rules of property are
abandoned, and fear of the consequences from those whom the wealthy
have hired to enforce their claim. Sure, there will be wars and
socialism in times and places, but as often in space as on earth, and
among bio and non-bio folks alike.

Thus I see much less need for, and great possible harm from, the kinds
of rules you proposed for life on earth; I think biofolk would last
longer and happier without them. And I see much more room for law and
peace in space than your vision described; the reasons we have laws and
respect property and arrange for group defense against aggression would
seem to apply also to the entities you imagine. Of course, by then they
may be smart enough to see the advantages of finer-grained private law
:-).

Robin Hanson

P.S. I'm gone till Monday.

------------------------------

Date: Tue, 27 Jul 1993 18:59:18 -0400
From: "Perry E. Metzger"
Subject: FSF: Some Useful Software, No Useful Politics

Tony Hamilton - FES ERG~ says:

> Perry, I think you missed _my_ point. Regardless of the differences between
> copyright and patent (which I was not discussing), they can be _enforced_
> no differently. I specifically stated I wasn't arguing for or against
> copyrights and patents. My point is, it doesn't matter which concepts seem
> appropriate or right - the only thing that matters is how they are enforced,
> if they are to be used at all. I agree with everything you say about
> consensual force vs. non-consensual force. Doesn't matter. You can draw
> up a contract for copyrights or patent rights just the same,

No, I can't. To enforce a patent, I have to be able to initiate force
against THIRD PARTIES, that is, people who were not party to the
initial contract. A patent does not merely restrict people I tell about
my invention from using it -- it also restricts people who merely come
up with the idea on their own.
Contracts to create "patents" are an absurdity -- imagine if you and a
friend signed a contract agreeing that you had full ownership of Garry
Trudeau, and didn't even bother to ask him, and then tried to enforce
it. Trade secret protection for inventions makes some small sense --
but patents are, pardon the phrase, patently unlibertarian.

Perry

------------------------------

Date: Tue, 27 Jul 93 21:40:42 GMT
From: whitaker@eternity.demon.co.uk (Russell Earl Whitaker)
Subject: FWD: INFO: Electronic Journal on Virtual Culture

This article was forwarded to you by whitaker@eternity.demon.co.uk
(Russell Earl Whitaker):

--------------------------------- cut here -----------------------------

Xref: demon rec.mag:351 alt.zines:1275
Newsgroups: rec.mag,alt.zines
Path: eternity.demon.co.uk!demon!news!uunet!spool.mu.edu!torn!nott!cunews!freenet.carleton.ca!Freenet.carleton.ca!ae446
From: ae446@Freenet.carleton.ca (Nigel Allen)
Subject: Call for Authors, EJVC: Electronic Journal on Virtual Culture
Message-ID:
Sender: news@freenet.carleton.ca (News Administrator)
Reply-To: ae446@Freenet.carleton.ca (Nigel Allen)
Organization: The National Capital Freenet, Ottawa
Date: Tue, 27 Jul 1993 05:39:54 GMT
Lines: 73

(forwarded from comp.dcom.telecom)

From: DKOVACS@Kentvm.Kent.edu (Diane Kovacs)
Subject: Call for Authors, EJVC: Electronic Journal on Virtual Culture

The _Electronic Journal on Virtual Culture_ (EJVC), a refereed
scholarly journal, is now accepting submissions for its Fall 1993 and
Spring 1994 issues. The EJVC is a refereed scholarly journal that
fosters, encourages, advances and communicates scholarly thought on
virtual culture. Virtual culture is computer-mediated experience,
behavior, action, interaction and thought, including electronic
conferences, electronic journals, networked information systems, the
construction and visualization of models of reality, and global
connectivity.

EDITORIAL GUIDELINES FOR AUTHORS

FORM AND STYLE

1. Use a recognized standard form and style, preferably the APA
   Publication Manual published by the American Psychological
   Association, as modified by the following requirements.
2. Do not have any line that exceeds 60 characters in length.
3. Do not use any figure or diagram.
4. Do not have more than 1000 lines in any article.
5. Do not submit any draft in any format other than ASCII.

SUBMISSION

An article may be submitted at any time to the EJVC for peer review,
with the understanding that peer review requires time. Acknowledgement
of the arrival of any article shall be made within 24 hours of arrival.
Notification of acceptance or rejection shall be sent to authors within
30 days of the arrival of the submission. Submissions are acceptable
only by electronic mail or send/file. Submissions may be made to either
the Editor-in-Chief or the Co-Editor.

EDITOR-IN-CHIEF                        CO-EDITOR
Ermel Stepp                            Diane Kovacs
Marshall University                    Kent State University
BITNET: M034050@Marshall               BITNET: DKOVACS@Kentvm
Internet: M034050@Marshall.WVNET.edu   Internet: DKOVACS@Kentvm.Kent.edu

SUBSCRIPTION

To subscribe to the EJVC, send electronic mail to LISTSERV@KENTVM or
LISTSERV@KENTVM.KENT.EDU, including a blank subject line and the sole
line of text:

   subscribe EJVC-L Yourfirstname Yourlastname

VAX/VMS may require that the sole line be within quotes to register
names in other than uppercase.

EJVC ANONYMOUS FTP

Information about the EJVC and issues of the EJVC may be retrieved by
anonymous FTP to byrd.mu.wvnet.edu in subdirectory /pub/ejvc.
--
Nigel Allen, Toronto, Ontario, Canada
ae446@freenet.carleton.ca

--------------------------------- cut here -----------------------------

------------------------------

End of Extropians Digest V93 #208
*********************************
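[Editor's appendix: the exchange between Perry and fnerd under
"techno-unemployment" above turns on the textbook notion of comparative
advantage. A worked example in C, with numbers invented purely for
illustration:

    #include <stdio.h>

    int main(void)
    {
        /* Output per hour.  The machine beats the human at BOTH goods. */
        double machine_widgets = 10.0, machine_food = 10.0;
        double human_widgets   = 1.0,  human_food   = 2.0;

        /* Opportunity cost of one unit of food, measured in widgets
           forgone.  Comparative advantage depends only on these
           ratios, not on who is absolutely faster. */
        double machine_cost = machine_widgets / machine_food;  /* 1.0 */
        double human_cost   = human_widgets / human_food;      /* 0.5 */

        printf("machine forgoes %.2f widgets per unit of food\n",
               machine_cost);
        printf("human forgoes   %.2f widgets per unit of food\n",
               human_cost);

        if (human_cost < machine_cost)
            printf("the human has the comparative advantage in food:\n"
                   "both gain if the machine specializes in widgets\n"
                   "and trades for the human's food.\n");
        return 0;
    }

This is Perry's point: gains from trade survive being outpaced
absolutely, so long as the relative rates of production differ. fnerd's
worry is the separate question of whether the human's share of those
gains covers his cost of living. -Ed.]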