From extropians-request@extropy.org Thu Aug 19 01:35:14 1993
Return-Path:
Received: from usc.edu by chaph.usc.edu (4.1/SMI-4.1+ucs-3.0) id AA18255; Thu, 19 Aug 93 01:35:11 PDT
Errors-To: Extropians-Request@gnu.ai.mit.edu
Received: from news.panix.com by usc.edu (4.1/SMI-3.0DEV3-USC+3.1) id AA19353; Thu, 19 Aug 93 01:35:03 PDT
Errors-To: Extropians-Request@gnu.ai.mit.edu
Received: by news.panix.com id AA05205 (5.65c/IDA-1.4.4 for more@usc.edu); Thu, 19 Aug 1993 04:32:15 -0400
Date: Thu, 19 Aug 1993 04:32:15 -0400
Message-Id: <199308190832.AA05205@news.panix.com>
To: Extropians@extropy.org
From: Extropians@extropy.org
Subject: Extropians Digest
X-Extropian-Date: August 19, 373 P.N.O. [08:32:06 UTC]
Reply-To: extropians@extropy.org
Errors-To: Extropians-Request@gnu.ai.mit.edu
Status: RO

Extropians Digest    Thu, 19 Aug 93    Volume 93 : Issue 230

Today's Topics:
                                                 [1 msgs]
    AI/PHIL: The Old Consciousness Canard        [1 msgs]
    AI: slaves, selfishness, evolution           [6 msgs]
    AI: slaves, selfishness, evolution           [1 msgs]
    APTITUDES: Aptitudes and Exercise            [2 msgs]
    Another neato .sig quote from netnews...     [1 msgs]
    HUMOR: Intercoastal phrase book              [2 msgs]
    HUMOR: Tabloid Libertaria                    [2 msgs]
    META: Carrier Detect Needed!                 [3 msgs]
    help                                         [1 msgs]

Administrivia:
    No admin msg.

Approximate Size: 51769 bytes.

----------------------------------------------------------------------

Date: Wed, 18 Aug 93 16:50:09 EDT
From: afbarr@Athena.MIT.EDU
Subject: AI: slaves, selfishness, evolution

will someone please take me off this mailing list?

thanks in advance,
aaron

--------
Aaron Barr                  518 Beacon St.
afbarr@athena.mit.edu       Boston, MA 02215
                            (617)536-1300 ext. 155

------------------------------

Date: Wed, 18 Aug 1993 17:16:17 -0400 (EDT)
From: Elizabeth Schwartz
Subject: META: Carrier Detect Needed!

I haven't done any excludes yet... in fact, I haven't gotten any responses to my ::help requests yet. I sent one to the old list address and one to the new.

Still, the list does seem to be a WHOLE LOT quieter!

------------------------------

Date: Wed, 18 Aug 1993 17:16:41 -0400 (EDT)
From: Elizabeth Schwartz
Subject: META: Carrier Detect Needed!

------------------------------

Date: Wed, 18 Aug 93 17:26 EDT
From: kqb@whscad1.att.com
Subject: APTITUDES: Aptitudes and Exercise

When I build up my muscles, I feel a greater need to exercise than I did before. If I then become sedentary for too many days, those muscles will practically scream at me: "Exercise me! Exercise me!" While this kind of feedback does have its advantages - it helps keep me in good shape - I have seen more extreme examples that make me wonder. When I see jogaholics driven by their metabolism to run through rain, snow, sleet, and hail (unlike some of our postal carriers), I feel sad for them. Poor junkies.

Not only muscles need exercise, though. I think that I have finally discovered a disadvantage to being intelligent. Big brains need plenty of mental exercise. If they do not receive their quota of mental stimulation - due to a time-consuming yet brain-deadening job or simply due to overexposure to TV - those brains express great aggravation and frustration. It doesn't necessarily matter what practical effect derives from that mental stimulation, though. If a crossword puzzle provides the necessary cranial workout, then there is no need to write a novel or fly to the moon.
I think that the ancient Chinese emperors understood this; they made certain that the more intelligent of their subjects diverted their attention to abstruse scholarly studies, and thus did not instead focus their attention on activities that could threaten the emperor's power.

According to researchers at the Johnson O'Connor Foundation, ALL your aptitudes need exercise. For example, if you have an aptitude for playing a musical instrument, and do not manage to include playing music in your life, you will feel unsatisfied with your life. Something important will be missing. If you do not have an aptitude for music, though, then you can happily proceed through life without it. (For us extropians who seek to expand our mental capabilities, my conclusion is that we will feel a strong *need* to use those capabilities. Once we attain them, they cannot sit idle.)

Lately I have noticed that I am getting "itchy" and restless where I am at. The cause could range anywhere from jock itch to underutilization of some of my aptitudes. I just checked and I don't seem to have jock itch, though, so my preliminary diagnosis is leaning the other way. What I want to know from you multi-talented, big brained Extropians is how do I best discover my aptitudes? What information do you have on the Johnson O'Connor Foundation? (Is their testing worth their approx. $500 fee?) What other alternatives do you suggest? Thanks!

Kevin Q. Brown
kqb@whscad1.att.com

------------------------------

Date: Wed, 18 Aug 1993 14:35:48 -0700
From: dkrieger@Synopsys.COM (Dave Krieger)
Subject: HUMOR: Intercoastal phrase book

Just got this from a friend

------------------------------------------------------------------------

Here's a handy guide for those of you who have to deal with vendors, customers, or other divisions on the opposite coast.

EAST COAST                      WEST COAST
------------                    ------------
absolutely not                  maybe
yes                             maybe
action item by Feb 12 for Joe   Joe's working on the problem
bozo                            subcontractor
brawl                           design review
dictator                        facilitator
do it and do it now             can you sign up for this program?
do it right or you're fired     I'm confident that you'll get it done
f*ck off                        trust me
follow the spec                 is there a spec?
get out of my office            let's get a consensus on this one
he's a jerk                     he hasn't signed up for our plan
he's a subordinate              he's a team player
I'll cover your ass             consider me your resource
ignore him, he's new            I'm bringing him up to speed
local bar                       offsite facility
meet me in the parking lot      let's take this offline
oh shit                         thanks for bringing that to my attention
overdesigned                    robust
punch his lights out            constructive confrontation
shut the f*ck up                let me share this with you
that's totally incompetent      let me build on that point
unemployed                      consulting
over budget                     on schedule
under budget                    we haven't started yet
we finished early               ...(no translation available)...
we're done                      how do you feel about that?
what's your problem?            I certainly understand your feelings
where's the spec?               what's a spec?
what's the schedule?            what's our game plan?
your plan sucks                 let me share my feelings on this plan
that's not my f*cking job       we don't have the resources

------------------------------

Date: Wed, 18 Aug 1993 17:37:51 -0400 (EDT)
From: Elizabeth Schwartz
Subject: help

------------------------------

Date: Wed, 18 Aug 93 14:49:19 PDT
From: thamilto@pcocd2.intel.com (Tony Hamilton - FES ERG~)
Subject: APTITUDES: Aptitudes and Exercise

> Lately I have noticed that I am getting "itchy" and restless where
> I am at.
> The cause could range anywhere from jock itch to
> underutilization of some of my aptitudes. I just checked and I don't
> seem to have jock itch, though, so my preliminary diagnosis is leaning
> the other way. What I want to know from you multi-talented, big brained
> Extropians is how do I best discover my aptitudes? What information
> do you have on the Johnson O'Connor Foundation? (Is their testing
> worth their approx. $500 fee?) What other alternatives do you suggest?
> Thanks!

I don't think you can truly discover your aptitudes without spending time doing so. I believe that if you take time out to do everything, be everywhere, and experience all, when you are young, you'll probably have exposed yourself to those things you want to do, or are good at, and will therefore be able to more intently focus on those things later. How would you _know_ when you've found an aptitude? Well, if you do everything you can think of, whatever you end up doing again is probably something you're good at. I don't think there needs to be a major, conscious effort to figure it out as long as you've had that exposure to many things. If you instead watch a lot of TV and don't try different things, you'll probably never know.

I can't comment on the Foundation, but my instinct tells me that you get what you pay for. If you feel $500 is enough to resolve all your aptitudes and interests, to determine your course in life, then so be it. I don't.

Tony Hamilton
thamilto@pcocd2.intel.com
HAM on HEx

------------------------------

Date: Wed, 18 Aug 93 15:04:10 PDT
From: Robert Brooks
Subject: AI: slaves, selfishness, evolution

>
> > Well, evolution seems to disagree with you. I think it is highly unlikely
> > that consciousness emerged as an accidental by-product of natural
> > selection; rather, consciousness itself has survival value and was thus
> > selected for.
>
> Hi Robert. Your last sentence is a little confusing. Nevertheless, the concept

I'm simply proposing that consciousness is not a by-product of something else, or an "epiphenomenon", but rather _directly_ enhances survival, and so is subject to being optimized by Darwinian natural selection. (I also believe the other requirement for an evolvable trait, continuity, is met. That is, there is no fine dividing line between "conscious" and "not conscious"; rather, it is something that exists in various degrees.)

You seem to hold that we can make unconscious machines that are just as useful to us as conscious ones would be. I claim that the fact that we ourselves are conscious calls this into question. If true, then why would a useless, expensive trait have evolved? As I indicated, a possible answer is that consciousness _is_ just an epiphenomenon, and doesn't require substantially more hardware (neurons) or software (experience/development time) than other traits which are directly useful (e.g. memory, sensory resolution, etc.). I just don't find this very plausible--apparently you do.

> Now this is interesting. Above you state that consciousness is not a by-product
> of natural selection, but here you talk of nature _making_ us conscious. I
> know you probably have a clear model of this in your head, but I hope I'm
> not alone in being confused by how you are presenting it.
>

Again, I'm simply saying that I believe consciousness evolved directly, rather than being a side-effect of other traits.
Robert
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~  Robert Brooks                                ~
~  Hewlett Packard Company                      ~
~  rb@hprpcd.rose.hp.com                        ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

------------------------------

Date: Wed, 18 Aug 93 15:25:22 PDT
From: thamilto@pcocd2.intel.com (Tony Hamilton - FES ERG~)
Subject: HUMOR: Intercoastal phrase book

>
> Just got this from a friend
>
> ------------------------------------------------------------------------
>
> Here's a handy guide for those of you who have to deal with
> vendors, customers, or other divisions on the opposite coast.

Not far off, not at all. Here it is again with an extra Intel column (wider than 80 chars...)

> EAST COAST                    WEST COAST                                   INTEL
> ------------                  ------------                                 -------
> absolutely not                maybe                                        I don't have the resources
> yes                           maybe                                        I'll get back to you
> action item by Feb 12 for Joe Joe's working on the problem                 AR (action required) for Feb 12, 11:45 am
> bozo                          subcontractor                                Vendor
> brawl                         design review                                Constructive confrontation
> dictator                      facilitator                                  Manager
> do it and do it now           can you sign up for this program?            You brought it up, so you fix it
> do it right or you're fired   I'm confident that you'll get it done        I'm empowering you to do the job
> f*ck off                      trust me                                     You do your job, I'll do mine
> follow the spec               is there a spec?                             Which spec covers this?
> get out of my office          let's get a consensus on this one            I respect your opinion, but...
> he's a jerk                   he hasn't signed up for our plan             Just take his name off the signature list
> he's a subordinate            he's a team player                           He's an individual contributor
> I'll cover your ass           consider me your resource                    It's your problem, you handle it
> ignore him, he's new          I'm bringing him up to speed                 Throw him in the fire, he'll do fine
> local bar                     offsite facility                             Cafeteria
> meet me in the parking lot    let's take this offline                      Let's do a 1-on-1
> oh shit                       thanks for bringing that to my attention     F*ck
> overdesigned                  robust                                       Adequate
> punch his lights out          constructive confrontation                   Go to his manager
> shut the f*ck up              let me share this with you                   Allow me to criticize
> that's totally incompetent    let me build on that point                   Uh-huh. Anyway...
> unemployed                    consulting                                   Screwed
> over budget                   on schedule                                  Well planned
> under budget                  we haven't started yet                       What budget?
> we finished early             ...(no translation available)...             Pats-on-the-back
> we're done                    how do you feel about that?                  This is actually just phase I
> what's your problem?          I certainly understand your feelings         You're just going to have to face facts
> where's the spec?             what's a spec?                               Better go write a spec
> what's the schedule?          what's our game plan?                        Show me your Gantt
> your plan sucks               let me share my feelings on this plan        What is your fall-back position?
> that's not my f*cking job     we don't have the resources                  That project is "below the line" right now

Tony Hamilton
thamilto@pcocd2.intel.com
HAM on HEx

------------------------------

Date: Wed, 18 Aug 93 18:57 EDT
From: kqb@whscad1.att.com
Subject: OOPS! - mistakenly forwarded message

I noticed that last night a cryonics mailing list mailblast from me managed to find its way to the Extropians mailing list. The inadvertent forwarding has been fixed, so you now don't have to ::exclude me for that reason. (But perhaps you will for another reason!)

Kevin Q. Brown
INTERNET   kqb@whscad1.att.com  or  kevin_q_brown@att.com

------------------------------

Date: Wed, 18 Aug 93 16:00:28 PDT
From: thamilto@pcocd2.intel.com (Tony Hamilton - FES ERG~)
Subject: AI: slaves, selfishness, evolution

> I'm simply proposing that consciousness is not a by-product of something
> else, or an "epiphenomenon", but rather _directly_ enhances survival, and
> so is subject to being optimized by Darwinian natural selection. (I also
> believe the other requirement for an evolvable trait, continuity, is met.
> That is, there is no fine dividing line between "conscious" and "not
> conscious"; rather, it is something that exists in various degrees.)

Okay, I think on this we agree somewhat. See below.

> You seem to hold that we can make unconscious machines that are just as
> useful to us as conscious ones would be. I claim that the fact that we
> ourselves are conscious calls this into question. If true, then why would
> a useless, expensive trait have evolved?

This would assume that Darwinian evolution always produces the _best_ possible results. But more to the point, rather than comparing unconscious machines with conscious ones under the assumption that a conscious one is most desired, I am challenging that assumption. The underlying premise in your argument is that a conscious machine is necessary, or adequate. I am saying we don't know that.

You also seem to be assuming that what we need is something _better_ than us. I would argue that the perfect slaves, or agents working for us, need to be _different_ than us, not better. I would argue that, as conscious beings, we have the capacity to be as powerful as any other conscious being, including a conscious supercomputer. It's just a matter of technology. Which is more beneficial: to make a computer conscious, and deal with all the issues that have been discussed in doing so, the least of which being your personal safety, or to make a conscious being, _you_, capable of carrying out processing as well as a computer? In the former case, you can only guess at the outcome. In the latter, you can be sure that, no matter what the results, they are controllable. Your own mind cannot decide to do anything against your nature. That's a paradox, because any actions taken _are_ a result of your nature. Your own mind can't walk away and jeopardize the investment you've made in improving it.

This is why I say that if we need slaves at all (and I don't think we will) they won't necessarily need to be conscious. More likely, slaves of any kind would only be beneficial insofar as they extend your own resources. In other words, you can't be in two places at the same time. But if these agents are conscious, well, then you get into all the problems discussed. So I say, why bother?

> As I indicated, a possible answer is that consciousness _is_ just an
> epiphenomenon, and doesn't require substantially more hardware (neurons)
> or software (experience/development time) than other traits which are
> directly useful (e.g. memory, sensory resolution, etc.). I just don't
> find this very plausible--apparently you do.

Well, you then see consciousness as an added feature with an enormously larger set of specs. I do not. I see it as a particular combination of other features, a particular application of them (memory, senses, who knows what else).

> > Now this is interesting. Above you state that consciousness is not a by-product
> > of natural selection, but here you talk of nature _making_ us conscious. I
> > know you probably have a clear model of this in your head, but I hope I'm
> > not alone in being confused by how you are presenting it.
> >
> Again, I'm simply saying that I believe consciousness evolved directly,
> rather than being a side-effect of other traits.

Ok, then at least we know what we disagree on :-) (and this _is_ important, because it is senseless to provide argument when you don't even understand where the other person stands on an issue.)

But on that issue, I do feel that while there are different levels of consciousness today, I don't believe you would find a lower _average_ level of consciousness as you go back in time. Instead, I feel that at some point, consciousness came to be within a _very_ short time (say a thousand years), ramped up quickly, and has been at its current level for at least the last 3000 years. Of course, this is how I feel about the _natural_ evolution of consciousness. I think that now, we will see a deliberate evolution in this area, a la transhumanist efforts (a word which I don't like, but use for convenience's sake on this list).

Ultimately, we're just going to disagree on this point. That's alright, though. We're here to discuss, not preach, right? By the way, how's the weather _way_ out there, oh, 20 miles from here? ;-)

Tony Hamilton
thamilto@pcocd2.intel.com
HAM on HEx

------------------------------

Date: Wed, 18 Aug 93 19:22 EDT
From: kqb@whscad1.att.com
Subject: HUMOR: Tabloid Libertaria

[ This message was originally posted July 30, but quickly found its way to the bit bucket instead of the Extropians list, so now I am attempting again to inflict it upon all of you. - KQB ]

The Fri. July 30 issue of the Wall Street Journal included the front page article:

    Winning Formula
    Corpses, Blood and Sex Put Miami TV Station at Top of News Heap

For me the take-home message from this article is that popular media feature a lot of tragedies, violence, and sex because that is what people want to see and hear. That is what sells. When I saw Jurassic Park I wondered why the special effects were so good, but the plot had so many stupidities. Then a scary answer came to me... because Spielberg knows how to make a popular movie. The stupidity isn't an accident; it's intentional, because that is what the market wants.

The most popular radio station in Sussex County, NJ (where I live) is a station my wife and I call "car crash radio". Their news programmers are determined to bring us the best news in local car crashes. If they are unlucky and nobody got killed in Sussex County lately, they'll report on car crashes in neighboring counties. Give the people what they want!

When I read Libernet this morning, and saw the usual sad messages from Libertarians trying to "spread the word" with better, more logical arguments, I realized that libertarianism doesn't sell well because it's lacking the essential ingredients: violence, tragedy, and sex. So here is my solution:

    Tabloid Libertaria

You'll find it next to the National Enquirer and its cousins at the grocery checkout counter. You'll see movie stars, ugly scars, mutilation, amputation, defenestration, cremation, inflation, car crashes, plane crashes, train crashes, computer crashes, psycho murderers, presidents, rapists, rapees, Kennedys ... your complete daily quota of modern culture, ... with a twist.
- The article on the psycho murderer, who gunned down the entire village of Totally Unarmed, NY (including a little old lady and each of her 250 cats), will point out that if the local government had allowed the citizens to keep and bear arms, the psycho murderer would have been put out of business at his first shot. The article also will show the *huge* guns of Bubba Joe Creach and his kin from Armed-to-the-Teeth, Georgia, who stopped a similar psycho murderer without even firing a shot, just by brandishing their awesome weapons.

- Each issue will feature another innocent victim of the War on Drugs, showing in gory detail the dead body of the latest homeowner raided and killed due to an erroneous drug bust, or the luxury property stolen by the local gestapo under the forfeiture laws, or the psycho murderers who are set free to make room in the crowded prisons for those unfortunates who were caught ingesting substances of unapproved molecular configuration.

- The articles featuring the disfigurement, morbidity, and mortality that accompany all-too-many diseases will point out loudly when other countries have effective treatments that the FDA has not allowed into the USA. ("If this man lived in the USA, the FDA would kill him.")

I think it's a winner. Give the public their violence, their sleaze, and their tragedy, and then give them the spin you want them to hear.

Kevin Q. Brown
INTERNET   kqb@whscad1.att.com  or  kevin_q_brown@att.com

PS: How about a Waco Watch column? Hold a contest on which unconventional group will get "Waco'd" next. Winners get an all-expenses-paid trip to view the remains.

------------------------------

Date: Wed, 18 Aug 1993 16:57:56 -0700
From: dkrieger@Synopsys.COM (Dave Krieger)
Subject: HUMOR: Tabloid Libertaria

At 7:22 PM 8/18/93 -0400, kqb@whscad1.att.com wrote:
>PS: How about a Waco Watch column? Hold a contest on which unconventional
>    group will get "Waco'd" next. Winners get an all-expenses-paid trip
>    to view the remains.

On the northbound side of I-5 in mid-California (around Kettleman City, I think) there is a billboard saying:

    REMEMBER WEAVER-WACO
    You May Be Next
    KDNO 98.5   Chuck Harder

I'm assuming that Chuck Harder is some talk-radio personality in those latitudes who shares some of our attitudes. Who has more info on this person?

dV/dt

------------------------------

Date: Wed, 18 Aug 93 20:18:39 -0400
From: pavel@PARK.BU.EDU (Paul Cisek)
Subject: AI: slaves, selfishness, evolution

Tony Hamilton writes (#93-8-592):
>
>... I do feel that while there are different levels of
>consciousness today, I don't believe you would find a lower _average_
>level of consciousness as you go back in time. Instead, I feel that at some
>point, consciousness came to be within a _very_ short time (say a thousand
>years), ramped up quickly, and has been at its current level for at least
>the last 3000 years. ...
>

This is a fairly common opinion, ranging from Jaynes' breakdown-of-the-bicameral-mind theory (consciousness began in ancient Greece) to Horace Barlow's theory that consciousness is a social phenomenon. Although I admit it's attractive, I don't believe it myself. I think much of the problem is that we have an introspectively biased viewpoint... Let me explain.

Let's suppose that consciousness emerges during evolution as a scheme of extending, through a progressive abstraction process, simple orienting behaviors to something that resembles goal seeking.
This may involve the perception of self as an ego with high level needs - but we're still talking simple creatures, fish perhaps. (A `high level' need might be locomotion toward a target, instantiated in terms of a motor program of fin movement. Mating behaviors are abstractions upon locomotion, perception, etc.) As the niche becomes more complex, so must the behavior of the organism within it. This behavior requires higher levels of abstraction, until by the time we get to monkeys we have some impressive intelligence and a strong sense of `consciousness'... because it is required by evolution within a primate niche.

Now humans come along and invent language. Let's remember now that language in its crudest form is not unique to humans; it's merely a means of orienting the behavior of others. Humans, however, have been so forced to elaborate on this technique that their language has become an extremely high level abstraction of behavioral concepts. They are `conscious' of their leg configuration, and their movement through space, and their linguistic thought "I am walking". But because the linguistic thought is so powerful and so clear, it is what defines consciousness to them. Thus the invention of language does create `consciousness' if you draw the line between conscious/not at that level of abstraction only achievable by language. And thus Barlow is right: it's a purely social phenomenon. But the larger issue, unbiased by our perceptions, is that this `linguistic' consciousness is an extension of a much older, though not as lucid, `behavioral' consciousness...

So in short I agree with Robert Brooks on this: consciousness evolved over a long time because it is a useful means of keeping increasingly abstract behaviors serving the goals of the gene. Whether memes such as religious fanaticism or the love of a foster parent actually help or hurt the gene's goals is a secondary issue; nature leaves holes for its own exploitation. Consider parasites as an example of such exploitation, only here the selfish self-replicators are other genes, while in the fanatic example they are memes.

As to unselfish slaves, I don't see why problem solving would be impossible within non-self-centered goals. Remember that the final judge of what's a good behavior, in all the intelligences we're familiar with, was whether the behavior helps to propagate its gene. Thus I'm not surprised that all our evidence from nature suggests falsely that intelligence must be selfish.

Paul

------------------------------

Date: Wed, 18 Aug 93 17:24:23 PDT
From: szabo@netcom.com (Nick Szabo)
Subject: AI: slaves, selfishness, evolution

Robert Brooks:
> I'm simply proposing that consciousness is not a by-product of something
> else, or an "epiphenomenon", but rather _directly_ enhances survival, and
> so is subject to being optimized by Darwinian natural selection.

I'd say consciousness has to have both:

* evolved out of very uncommon, "logically deep" building blocks. Otherwise why don't other animals have it? Or maybe some do -- we lack a good working definition. But here we are using it in relation to a set of rights which we restrict to humans (the right not to be a slave). Perhaps "consciousness" is inseparable from the ability to evolve a set of mutual obligations called "rights".

* enhanced fitness, or is inseparable from a structure that enhanced fitness. This brings up an interesting question -- what if the tables turn and consciousness becomes selected against?
If consciousness goes along with intelligence this might actually be happening -- cf. the statistics for education level vs. fertility in developed countries. What happens when we evolve a successor species to humans that _lacks_ consciousness? Do humans have a right to enslave them? A more concrete example: do we enslave Down's Syndrome patients in institutions right now, or are they free to choose where to live?

Also, is consciousness absolute or relative? Will a Jupiter-sized brain consider humans to be conscious?

Nick Szabo
szabo@netcom.com

------------------------------

Date: Wed, 18 Aug 93 17:58:57 PDT
From: Robert Brooks
Subject: AI: slaves, selfishness, evolution

I fully agree with the argument that we shouldn't try to make conscious slaves, because of the danger they pose. However, I'm not so sure we shouldn't try to make conscious machines and consider them as having equal moral status. While there are risks (i.e. that we'll create beings we can't compete with, a la "Pigs in Cyberspace"), there are also benefits. And unless the risk/benefit analysis is clearly negative to _anyone_ with the smarts to accomplish such a thing, it will be done, eventually, and so we had better be prepared for it.

In summary, a conscious machine would be much more productive than an unconscious one for general-purpose application, but would make a lousy slave. Rather, it would have to be more like a child, employee, or colleague.

>
> Well, you then see consciousness as an added feature with an enormously
> larger set of specs. I do not. I see it as a particular combination of other
> features, a particular application of them (memory, senses, who knows what
> else).
>

How, then, do you explain the vastly longer developmental stage of a human as compared to a chimp?

> By the way, how's the
> weather _way_ out there, oh, 20 miles from here? ;-)
>

Great! So when are we having the Sacramento Area Extropians Lunch #2?

Robert
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~  Robert Brooks                                ~
~  Hewlett Packard Company                      ~
~  rb@hprpcd.rose.hp.com                        ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

------------------------------

Date: Wed, 18 Aug 93 21:23:42 WET DST
From: rjc@gnu.ai.mit.edu (Ray)
Subject: Another neato .sig quote from netnews...

Lefty () writes:
> "No matter how long you sit crossed-legged like that," responds the master,
> "you will never make a Buddha out of yourself."

Unless you're an enlightened monk:

"Ah, but master, you are trying to make a mirror out of that tile using bulk matter manufacturing techniques. A little application of advanced AI and nanotech, and not only could I make myself into Buddha, you could make Buddha out of that tile."

    -Ray, Extropian Zen Master

"What is the sound of one Asteroid brain hand clapping?"
"None. Sound doesn't travel in space."

-- Ray Cromwell         | Engineering is the implementation of science; --
-- EE/Math Student      | politics is the implementation of faith.      --
-- rjc@gnu.ai.mit.edu   |       - Zetetic Commentaries                  --

------------------------------

Date: Wed, 18 Aug 1993 20:37:00 -0500
From: "Phil G. Fraering"
Subject: AI/PHIL: The Old Consciousness Canard

Well, I have all my ducks in a row for excluding threads, and I excluded the previous "AI: slaves, selfishness, evolution" thread line, so let's keep with relevant titles, okay, people?

Okay: I have one comment about whoever that guy is who thinks consciousness is younger than the Iliad: how do you account for cultures out of the mainstream? Were all the Australian Aborigines unconscious? American Natives?
A lot of these were sorta cut off from everyone else, and even if there were pre-Columbian contacts between the old and new worlds, does consciousness spread faster than the common cold or something once it gets loose?

(Since no one else has mentioned it, the Eskimos had _minimal_ contact with anyone until modern times; and yes, I'm familiar w/ the nature and extent of the Norse contacts, which seemed to hit a pretty hard language and cultural barrier. I think it was found that their consciousness upon contact was no different than ours.)

"If you prick us, do we not... leak?"

pgf

------------------------------

Date: Wed, 18 Aug 93 22:19:06 -0400
From: pcm@cs.brown.edu (Peter C. McCluskey)
Subject: AI: slaves, selfishness, evolution

price@price.demon.co.uk (Michael Clive Price) writes in #93-8-579:
>and Peter McCluskey asks in #93-8-559,
>> The more intelligent an entity is, the more phenomena it should be
>> able to alter. What do you expect to prevent it from altering its
>> primary goals?
>
>Its own will prevents alteration to primary goals, just as ours do.
>For instance if I were redesigning myself I would make _damned_ sure I
>retained survival and curiosity drives (amongst others). Why? Because
>they reflect my present goals. I suspect Peter and most extropians
>would do the same.

If I can retain my present goals at no cost, I presume I will do so. But preventing change to those goals is not a primary goal of mine.

>Let's turn the question around: by what criteria (read: what value
>goals) would an intelligence select a new set of primary goals? Answer:
>with reference to its current set - it doesn't have any other means of
>evaluation. It follows, almost by definition, that if an intelligence
>wishes to alter a goal, that goal is _not_ a primary goal. (If primary goals
>are in conflict then there must be some means of resolution, i.e. some
>goals are more primary than others - or else go insane.)

Why couldn't the means of resolution amount to something as arbitrary as random choice? Presumably the creators of an AI will attempt to do something better, but it is not obvious their results will be completely predictable or coherently thought out. And I think unforeseen byproducts of subgoals are a more likely cause.

>> I submit that this problem is typical of intelligences of
>> approximately human power.
>
>Agreed. But I have hope that greater intelligences will not do such
>stupid things.

I agree that in the long run it will be possible to produce intelligent slaves. I deny that the initial AIs will be great enough to avoid lots of misjudgements.

>> If, as I expect, it needs something more like a neural network
>> which is trained to maximize a function which approximates the concept
>> of "follow the intentions the master is trying to communicate via
>> command X", then the correctness of that network will only be known
>> for a subset of all possible inputs.
>
>Why? That's not how traditional programs are tested (i.e. on all the
>inputs), yet we can (sometimes) have high confidence in them. Neural

I have consistently found that testing traditional programs is not sufficient to make them reliable. I only have high confidence that programs I write will be predictable if I can understand the source code well enough to know why it ought to work and what test cases are important. With a neural net trained to forecast the stock market, I can't come close to that (although I'm starting to consider investing some of my money based on its results).
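To make the point concrete, here is a toy numpy sketch; the data, the stand-in `goal' function, and the two-layer net are all invented for illustration and come from no real forecasting system. A net fit by gradient descent agrees with its target on inputs like the ones it was trained and checked on, while its behavior outside that region is simply whatever fell out of training:

  import numpy as np

  rng = np.random.default_rng(0)

  def goal(x):
      # Stand-in for the function the net is supposed to approximate.
      return np.sin(x)

  # Training data covers only a narrow slice of the input space.
  X = rng.uniform(-1.0, 1.0, size=(200, 1))
  Y = goal(X)

  # One hidden layer, fit by plain gradient descent on squared error.
  W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
  W2 = rng.normal(0.0, 1.0, (16, 1)); b2 = np.zeros(1)

  for _ in range(5000):
      H = np.tanh(X @ W1 + b1)              # forward pass
      P = H @ W2 + b2
      dP = 2.0 * (P - Y) / len(X)           # gradient of mean squared error
      dW2 = H.T @ dP;  db2 = dP.sum(0)
      dH = (dP @ W2.T) * (1.0 - H ** 2)
      dW1 = X.T @ dH;  db1 = dH.sum(0)
      for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
          p -= 0.1 * g                      # in-place parameter update

  def net(x):
      return np.tanh(x @ W1 + b1) @ W2 + b2

  # On an input like those we trained and tested on, the net looks correct;
  # on an input outside that region, nothing constrains its behavior.
  for v in (0.5, 3.0):
      x = np.array([[v]])
      print(v, "net:", round(net(x).item(), 3), "goal:", round(goal(x).item(), 3))

Nothing in the source code of such a net tells you why it ought to work, so the only confidence you get is the confidence your test inputs buy you.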
>networks will probably be susceptible to modular analysis and testing.

Everything I know about neural networks and artificial intelligence suggests that the ability to understand the goal "obey orders" will require enormous complexity compared to existing networks, and that the knowledge will not be stored in a very modular fashion (at least with anything resembling current neural net techniques). Analyzing any but the smallest neural nets in use today is substantially harder than getting them to be useful.

-----------------------------------------------------------------------------
Peter McCluskey >> pcm@cs.brown.edu       >> Essentia non sunt multiplicanda praeter
pcm@macgreg.com (new work address)        >> necessitatum.  -- William of Ockham
-----------------------------------------------------------------------------

------------------------------

Date: Wed, 18 Aug 93 22:24:35 WET DST
From: rjc@gnu.ai.mit.edu (Ray)
Subject: META: Carrier Detect Needed!

Elizabeth Schwartz () writes:
>
> I haven't done any excludes yet... in fact, I haven't gotten any
> responses to my ::help requests yet. I sent one to the old list
> address and one to the new.
> Still, the list does seem to be a WHOLE LOT quieter!

Try it now. There was a problem in the list database. Your email address was wrong so it wasn't recognizing you as a list member and ignored your commands.

-Ray

-- Ray Cromwell         | Engineering is the implementation of science; --
-- EE/Math Student      | politics is the implementation of faith.      --
-- rjc@gnu.ai.mit.edu   |       - Zetetic Commentaries                  --

------------------------------

End of Extropians Digest V93 #230
*********************************