From exi@panix.com Fri Jul 30 11:49:19 1993
Return-Path:
Received: from usc.edu by chaph.usc.edu (4.1/SMI-4.1+ucs-3.0) id AA03705; Fri, 30 Jul 93 11:49:14 PDT
Errors-To: Extropians-Request@gnu.ai.mit.edu
Received: from panix.com by usc.edu (4.1/SMI-3.0DEV3-USC+3.1) id AA29749; Fri, 30 Jul 93 11:48:53 PDT
Errors-To: Extropians-Request@gnu.ai.mit.edu
Received: by panix.com id AA28384 (5.65c/IDA-1.4.4 for more@usc.edu); Fri, 30 Jul 1993 14:42:20 -0400
Date: Fri, 30 Jul 1993 14:42:20 -0400
Message-Id: <199307301842.AA28384@panix.com>
To: Exi@panix.com
From: Exi@panix.com
Subject: Extropians Digest
X-Extropian-Date: July 30, 373 P.N.O. [18:42:09 UTC]
Reply-To: extropians@gnu.ai.mit.edu
Errors-To: Extropians-Request@gnu.ai.mit.edu
Status: RO

Extropians Digest        Fri, 30 Jul 93        Volume 93 : Issue 210

Today's Topics:

                                                   [3 msgs]
    Address change                                 [1 msgs]
    Confidence measures for Hawthorne Exchange     [1 msgs]
    EXTROPY INSTITUTE: No More Sleep               [1 msgs]
    Egad!                                          [1 msgs]
    Meta: New Software - Who Is on it.             [1 msgs]
    Nightly Market Report                          [1 msgs]
    Party: Aug 28 - my plans                       [1 msgs]
    Searle's Chinese Torture Chamber Revisited     [1 msgs]
    Wage Competition                               [1 msgs]
    Who is signed up for cryonics?                 [1 msgs]
    unsubscribe me                                 [4 msgs]

Administrivia:
    No admin msg.

Approximate Size: 52594 bytes.

----------------------------------------------------------------------

Date: Thu, 29 Jul 1993 17:56:34 -0600 (MDT)
From: J. Michael Diehl
Subject: unsubscribe me

I've sent mail to the request address but that didn't work, so will someone please unsubscribe me.

========================+==========================================+
J. Michael Diehl ;^)    | Have you hugged a Hetero........Lately?  |
mdiehl@triton.unm.edu   | "I'm just looking for the opportunity to |
mike.diehl@fido.org help| be Politically Incorrect!"     +=========+
al945@cwns9.ins.cwru.edu| Is Big Brother in your phone?  | PGP KEY |
(505) 299-2282 (voice)  | If you don't know, ask me.     |Available|
========================+================================+=========+
PGP Key = 7C06F1 = A6 27 E1 1D 5F B2 F2 F1 12 E7 53 2D 85 A2 10 5D
This message is protected by 18 USC 2511 and 18 USC 2703.  Monitoring
by anyone other than the recipient is absolutely forbidden by US Law

------------------------------

Date: Thu, 29 Jul 1993 20:55:19 -0400 (EDT)
From: Harry Shapiro
Subject: Address change

a conscious being, Geoff Dale wrote:
>
> My e-mail address is changing to:
>
> plaz@netcom.com
>
> Could you move my Exi-Bay subscription over to that address?
>
> I would also like to change over to the main list (new software) from summary.
>
Are you on the "new" digest or the "old" one?

/hawk
--
Harry S. Hawk                                    habs@extropy.org
Electronic Communications Officer, Extropy Institute Inc.
The Extropians Mailing List, Since 1991
EXTROPY -- A measure of intelligence, information, energy, vitality, experience, diversity, opportunity, and growth.
EXTROPIANISM -- The philosophy that seeks to increase extropy.

------------------------------

Date: Thu, 29 Jul 1993 17:39:18 -0800
From: lefty@apple.com (Lefty)
Subject:

--
Lefty (lefty@apple.com)
C:.M:.C:., D:.O:.D:.

------------------------------

Date: Thu, 29 Jul 1993 21:52:47 -0400 (EDT)
From: Harry Shapiro
Subject: unsubscribe me

a conscious being, J. Michael Diehl wrote:
>
> I've sent mail to the request address but that didn't work, so will someone
> please unsubscribe me.

I can't find you on any list.  Can you give me a few more clues?

/hawk
--
Harry S. Hawk                                    habs@extropy.org
Electronic Communications Officer, Extropy Institute Inc.
The Extropians Mailing List, Since 1991
EXTROPY -- A measure of intelligence, information, energy, vitality, experience, diversity, opportunity, and growth.
EXTROPIANISM -- The philosophy that seeks to increase extropy.

------------------------------

Date: Thu, 29 Jul 1993 21:22:01 -0500
From: extr@jido.b30.ingr.com (Craig Presson)
Subject: Confidence measures for Hawthorne Exchange

CAN YOU SAY SMALL SAMPLE SIZE? I KNEW YOU COULD.

     ^
    /
------/---- extropy@jido.b30.ingr.com (Freeman Craig Presson)
  /AS 5/20/373 PNO          Pinhead discussions to order
/ExI 4/373 PNO              ** E' and E-choice spoken here

------------------------------

Date: Thu, 29 Jul 1993 21:24:40 -0500
From: extr@jido.b30.ingr.com (Craig Presson)
Subject: Who is signed up for cryonics?

In <14750@price.demon.co.uk>, Michael Clive Price writes:
|> special Extropian invitation to the
|>
|> MID-SUMMER UK CRYO-FEAST !!!!

Solves the storage problem for sure. Vegans need not RSVP.

-- Freeman Craig the BBQ chief

------------------------------

Date: Thu, 29 Jul 1993 21:26:51 -0500
From: extr@jido.b30.ingr.com (Craig Presson)
Subject: unsubscribe me

In <9307292356.AA29418@vesta.unm.edu>, J. Michael Diehl writes:
|> I've sent mail to the request address but that didn't work, so will someone
|> please unsubscribe me.
|>
|> ========================+==========================================+
|> J. Michael Diehl ;^)    | Have you hugged a Hetero........Lately?  |
|> mdiehl@triton.unm.edu   | "I'm just looking for the opportunity to |
|> mike.diehl@fido.org help| be Politically Incorrect!"     +=========+
|> al945@cwns9.ins.cwru.edu| Is Big Brother in your phone?  | PGP KEY |
|> (505) 299-2282 (voice)  | If you don't know, ask me.     |Available|
|> ========================+================================+=========+
|> PGP Key = 7C06F1 = A6 27 E1 1D 5F B2 F2 F1 12 E7 53 2D 85 A2 10 5D
|> This message is protected by 18 USC 2511 and 18 USC 2703. Monitoring
|> by anyone other than the recipient is absolutely forbidden by US Law

No, absolutely not, until you post a summary of the relevant U.S. Code sections.

-- cP

------------------------------

Date: Thu, 29 Jul 1993 22:02:52 -0500
From: extr@jido.b30.ingr.com (Craig Presson)
Subject: EXTROPY INSTITUTE: No More Sleep

OK, this is the end. Stop screwing around with this sleep business. It's 10:00 and the net is dead; all you have to do is stay up like I do ...

     ^
    /
------/---- extropy@jido.b30.ingr.com (Freeman Craig Presson)
  /AS 5/20/373 PNO
/ExI 4/373 PNO              ** E' and E-choice spoken here

------------------------------

Date: Thu, 29 Jul 93 22:23:34 CDT
From: UC482529@MIZZOU1.missouri.edu
Subject:

------------------------------

Date: Fri, 30 Jul 93 00:10:03 EDT
From: The Hawthorne Exchange
Subject: Nightly Market Report

The Hawthorne Exchange - HEx
Nightly Market Report

For more information on HEx, send email to HEx@sea.east.sun.com with the Subject info.
---------------------------------------------------------------
News Summary as of: Fri Jul 30 00:10:02 EDT 1993

Newly Registered Reputations:
    GOD       Supreme Being

New Share Issues:
    Symbol    Shares Issued
    MMORE     10000
    GOD       10000

Share Splits:
    (None)

---------------------------------------------------------------
Market Summary as of: Fri Jul 30 00:00:03 EDT 1993

                                      Total        Shares
Symbol      Bid      Ask     Last    Issued   Outstanding   Market Value
1000        .10      .20      .10     10000          2000         200.00
110           -      .10        -     10000             -              -
150           -      .10        -     10000             -              -
1E6           -      .10        -     10000             -              -
1E9           -      .10        -     10000             -              -
200         .10      .20      .10     10000          2000         200.00
80            -      .10        -     10000             -              -
90            -      .20      .10     10000          2000         200.00
ACS           -      .15      .50     10000          1124         562.00
AI            -      .50      .20     10000          1000         200.00
ALCOR      2.00     3.80     2.00     10000          3031        6062.00
ALTINST       -      .15      .10     10000          1500         150.00
ANTO          -        -        -         -             -              -
BIOPR         -      .20      .10     10000          1500         150.00
BLAIR         -    30.00    50.00     10000            25        1250.00
CHAITN        -      .05        -     10000             -              -
CYPHP       .15      .20        -     10000             -              -
DEREK         -      .42     1.00    100000          8220        8220.00
DRXLR      1.00     2.00     2.00     10000          2246        4492.00
DVDT          -     1.55     1.55     10000         10000       15500.00
E             -      .70      .60     10000          5487        3292.20
ESR           -        -        -         -             -              -
EXI        1.00     1.25     1.25     10000          3000        3750.00
FCP           -      .50        -     80000          4320              -
GHG         .01      .30      .01     10000          6755          67.55
GOBEL       .01      .30     1.00     10000           767         767.00
GOD           -      .10        -     10000             -              -
H           .75        -        -     30000         18750              -
HAM           -      .20      .10     10000          3000         300.00
HEINLN      .30      .50        -     10000             -              -
HEX      100.00   125.00   100.00     10000          3268      326800.00
HFINN      2.00    10.00      .75     10000          1005         753.75
IMMFR       .25      .80      .49     10000          1401         686.49
JFREE         -      .15      .10     10000          3000         300.00
JPP         .25      .40      .25     10000          2510         627.50
LEARY         -      .20      .20     10000           100          20.00
LEF           -      .15      .30     10000          1526         457.80
LEFTY         -      .30      .15     10000          1951         292.65
LIST        .40      .50      .50     10000          5000        2500.00
LP            -      .09        -     10000             -              -
LSOFT       .58      .60      .58     10000          7050        4089.00
LURKR         -      .09        -    100000             -              -
MARCR         -        -        -         -             -              -
MED21         -      .08        -     10000             -              -
MLINK         -      .09      .02   1000000          2602          52.04
MMORE         -      .10        -     10000             -              -
MORE        .75     1.60      .75     10000          3000        2250.00
MWM         .15      .15     1.50     10000          1260        1890.00
N         20.00    25.00    25.00     10000            98        2450.00
NEWTON        -      .20        -     10000             -              -
NSS           -      .05        -     10000             -              -
OCEAN       .10      .12      .10     10000          1500         150.00
P         20.00    25.00    25.00   1000000            66        1650.00
PETER         -      .01     1.00  10000000           600         600.00
PLANET        -      .10      .05     10000          1500          75.00
PPL         .10      .25      .10     10000           400          40.00
PRICE         -     4.00     2.00  10000000          1410        2820.00
R           .49     2.80      .99     10000          5100        5049.00
RAND          -      .06        -     10000             -              -
RJC        1.00   999.00      .60     10000          5100        3060.00
ROMA          -        -        -         -             -              -
RWHIT         -        -        -         -             -              -
SGP           -        -        -         -             -              -
SHAWN         -     1.00        -     10000             -              -
SSI           -      .05        -     10000             -              -
TCMAY       .38      .75      .75     10000          4000        3000.00
TIM        1.00     2.00     1.00     10000           100         100.00
TRANS         -      .05      .40     10000          1511         604.40
VINGE       .20      .50      .20     10000          1000         200.00
WILKEN     1.00    10.00    10.00     10000           101        1010.00
-----------------------------------------------------------------------------
Total                                                          406890.38

------------------------------

Date: Fri, 30 Jul 93 00:02:05 EDT
From: fnerd@smds.com (FutureNerd Steve Witham)
Subject: Searle's Chinese Torture Chamber Revisited

I've taken my replies to Tim Starr on Searle to private email. Just wanted to point out one thing.

No one is proving that computing processes are all that's necessary to produce the phenomena of mind. We are only showing that Searle fails badly at *dis*proving it, which is what he intends to do. I think all any of us can say is that it looks like a perfectly good model.

The argument for why a good enough computer running a good enough program is all that's necessary to have a mind is based on simplicity-- intelligence seems like the result of something analogous to a running program-- we see no evidence *even from inside* to contradict that explanation--why shouldn't it be enough?
Searle sets out looking for a contradiction or inadequacy and fails to find one.

-fnerd
quote me

------------------------------

Date: Fri, 30 Jul 93 00:32:00 GMT
From: price@price.demon.co.uk (Michael Clive Price)
Subject: Wage Competition

Fnerd, in his last post, made some interesting points which I am not going to try to answer individually. Confusion is creeping in over the use of the term "selfish". If we can agree about what selfishness is then more progress can be made on the original topic.

I claim that many evolutionary goals are unselfish, whereas Fnerd sees them as extensions of the concept of "self". An example is parentalism. I claim that parentalism is an unselfish goal, whereas Fnerd says it's selfish because parents come to view their children as extensions of themselves (perhaps I misunderstand Fnerd, but this is what he seems to be saying - please correct me if I'm wrong).

What's selfish and what isn't is important here because Fnerd expects only selfish goals to be evolutionarily stable, or to be most compatible with learning over the lifetime of the slave / neural network or whatever, whereas I don't see anything special about selfish or pseudo-selfish goals (e.g. self-preservation or parentalism) as opposed to non-selfish goals (climbing Mount Everest, travelling to Andromeda, reading the novels of Ayn Rand or obeying the orders of your Master to death). To my mind any set of goals is equally impressionable on a blank neural net, irrespective of their selfishness.

The interesting question is, why are so many of our non-selfish goals interpretable as "selfish"? I think the reason for this is that we have a number of distinct primary goals creating tensions or cognitive dissonance. One goal is always self-preservation, by which I mean the preservation of the individual, not of their genes. Another goal is the preservation of one's children. Clearly the goals of self-preservation and offspring-preservation can be diametrically opposed (e.g. a mother throws herself in front of a speeding train to save her child). This cognitive dissonance can be relieved if children are viewed as extensions of oneself. Once such a patently false, delusional mind-shift has taken place, both goals are satisfied by the nurturing of one's offspring. That's why so much behaviour is viewed as selfish by redefining our own egos - we're trying to reconcile irreconcilable goals, one of which is always self-preservation.

I maintain that a slave with a single goal (it doesn't matter what) will be free of such cognitive dissonance. A slave can be happily imprinted with a specific master to love, cherish and obey. Just make sure you don't issue contradictory orders!

Mike Price                      price@price.demon.co.uk

------------------------------

Date: Fri, 30 Jul 93 08:51:28 BST
From: ecl6gum@sun.leeds.ac.uk
Subject: unsubscribe me

I'm having trouble unsubscribing from this list - would the list Administrator please unsubscribe me (sending a message to extropians-request@edu.mit.ai.gnu doesn't seem to work).

Thanks,
Gurm

------------------------------

Date: Fri, 30 Jul 1993 07:30:43 -0400 (EDT)
From: Harry Shapiro
Subject: Party: Aug 28 - my plans

I have completed my travel arrangements to the West Coast. I will be arriving in San Fran on the night of Aug 26th (Thurs). I am planning on driving down to Mark's "area" that night, or early the next day. I leave for NYC on Aug 31 from San Fran at 10pm.

/hawk
--
Harry S. Hawk                                    habs@extropy.org
Electronic Communications Officer, Extropy Institute Inc.
The Extropians Mailing List, Since 1991
EXTROPY -- A measure of intelligence, information, energy, vitality, experience, diversity, opportunity, and growth.
EXTROPIANISM -- The philosophy that seeks to increase extropy.

------------------------------

Date: Fri, 30 Jul 93 11:43:52 GMT
From: starr@genie.slhs.udel.edu
Subject: Replies to Searle's Extropian Critics

I've clearly bitten off quite a mouthful to chew in questioning the apparent list consensus in rejection of Searle's challenge to AI, but I asked for it, wanted it, and am grateful for the help I've been getting in gaining understanding of this subject.

The papers Ray provided are too long for me to read online, so I'm going to print them in hardcopy, read 'em, and perhaps reply to the list if there seems to be any interest. Prior to then, and at the risk of redundancy, I'm going to reply to some of the feedback I've gotten thus far:

>From: rjc@gnu.ai.mit.edu (Ray)
> I still disagree with Searle totally. He agrees that the brain is based
>on physical laws, that alone is enough to justify the computationalist
>viewpoint.

Well, I guess I'm naive enough to question the meaning of this premise, too.

>All physical laws are causal and well-stated.

Meaning?

>Any mathematical
>statement can be transformed into a computer program and simulated.

I take it, then, that you treat "mathematical statements" as synonyms for physical laws?

>In
>fact, I would go further and say that everything is simulable and everything
>is computation.

This answers one of my questions.

> The only ways to avoid my argument are to claim that there are some
>physical laws that we will never discover or that the brain is the most
>compact (non-compressible) form of intelligence (anything that can
>simulate the brain would be just as slow/complex), or that the mind
>is not governed by physical laws at all (ghost in the machine).

Another way out is to simply not accept the assumption that mechanics exhaust the category of physical laws. Thus, the mind could be consistent with physical laws without being mechanical. I don't see how this would entail the impossibility of discovering these laws.

Since I've been accused of employing a bad argument along the lines of "I can't imagine it, therefore it's impossible," you ought to realize that this excludes anyone who denies the possibility of non-mechanical physical laws that are discoverable. I, for one, can imagine at least three kinds of physical laws: mechanical, biological, and mental. I don't see why there must be only one kind.

>starr@genie.slhs.udel.edu () writes:
>> >It is the effect of
>> >the-execution-of-the-Tim-Star-program-by-a-brainlike-neural-network-computer
>> >that has the property of being conscious. The brain itself is just
>> >unconscious hardware.
>>
>> Easier claimed than shown.
>
> You should apply the same standards to Searle. He claims that you could
>build his model of the Chinese room that can pass the turing test, and then
>assuming that conclusion, proceeds to debunk it as an argument against
>all formulations of Strong AI. At best, his argument can be considered
>a straw man.

I do apply them to Searle. Searle seems to have shown that computation is insufficient for mental functioning.

Tim Starr - Renaissance Now!
Assistant Editor: Freedom Network News, the newsletter of ISIL,
The International Society for Individual Liberty,
1800 Market St., San Francisco, CA 94102
(415) 864-0952; FAX: (415) 864-7506; 71034.2711@compuserve.com

Think Universally, Act Selfishly - starr@genie.slhs.udel.edu

>From: dkrieger@Synopsys.COM (Dave Krieger)
>Subject: Searle's Chinese Torture Chamber
>
>>I don't understand
>>why anything beyond the agent should be considered to be the one being
>>communicated with.
>
>Because the agent (CPU, Searle's Demon) doesn't know what "he" (the system)
>is saying. The agent is not the system.

The agent knows that he's saying X or Y, that he's expressing a variable symbol. The agent doesn't know what they mean, that's all.

>If the agent doesn't know what the conversation is even about,
>then he is obviously not the one doing the communicating! Come on, you're
>sharper than this, Tim.

You're better than this, too, Dave. The agent does indeed know that the conversation is about the expressed symbols. All he doesn't know is what they mean. Thus, he is doing at least part of the communicating.

>>Mental agents get input on their
>>own. They also program themselves. Computers do neither.
>
>It seems unlikely that mental agents program themselves.

Why not? Because you can't imagine such a thing?

>Similarly, mental agents do not get input on their own. Only certain
>subsystems of the brain (e.g., the visual cortex) get input from the
>outside world; all other systems of the brain receive only input that has
>been filtered by these "perceiving" agents. You don't propose that your
>vocabulary center can bypass the visual cortex and grab direct access to
>the optic nerve, do you?

I don't propose to speak of mental input in terms of neurophysiology at all, since I don't accept that brains (or, more inclusively, nervous systems) are minds. You seem to be reasoning from the premise that sense-awareness is computation of sensations, a.k.a. sensationalism. David Kelley criticizes this in "The Evidence of the Senses," although I'd have to refresh my memory of how he does so in order to articulate it. (BTW, he taught Cognitive Science at Vassar; while I don't mean to imply that this means that his conclusions are correct, I do mean to say that his methods seem to be.)

Many of Dave's claims deductively follow from questionable premises, so, having questioned the premises, I'll skip replying to as many of the deduced claims as possible.

>>>Consider this version of the experiment: we put me and Searle in a room.
>>>You see me through a window; Searle is hidden behind a curtain. You ask
>>>me a question; I frown and turn off the outside intercom. Searle answers.
>>>I turn the intercom back on and answer as if I had done the thinking.
>>>Do I know what Searle was thinking? Of course not. So what?
>>
>>Indeed, so what? You aren't conscious of the meaning of what he said,
>>either, unless you can think what he thought.
>
>I think you missed the point, Tim. In this version, Searle represents the
>set of lookup tables. If Searle is intelligent, then so is the set of
>lookup tables.

But Searle's intelligence is independent of his ability to play the role of lookup tables. If all we had to go on was this role-play, we'd be unable to tell whether he was intelligent at all.

>>I made my example simple because of Occam's
>>Razor. Why make things more complex than need be?
>
>Because your example isn't complex enough to carry on a conversation, which
>the Chinese Room is able to do.
It can, too, carry on a simple conversation.

>Complexity is central to the discussion,
>Tim; no one is arguing that minds can be simple.

Complexity may be central to the discussion from the mechanistic viewpoint, but it isn't foundational to the discussion at all, and what I'm trying to do is question the things that are foundational to it. (Anti-foundationalists are referred to Kelley's critique of their position in "Evidence of the Senses.")

Dave issued the following definition of "intelligence":

>"able to interact with its environment in a manner
>indistinguishable from a human mind."

I'll have to sleep on that. For now, I'll note that this is once again a definition that only takes the third-person point of view into account.

>>How can systems that are part biological,
>>part mechanical, have minds? What about my mind?
>
>Tim, here you are employing "Argument by Incredulity": "I can't imagine
>such a thing, therefore it can't be true." This is a form of assuming your
>conclusion. Whether or not systems that are part (or all) mechanical is
>the question under discussion. Saying, in incredulous tones, "How can such
>things be?" does not prove they can not be.

I'm not arguing, I'm questioning. I don't pretend that I've proved anything by doing so, and I didn't intend an incredulous tone, but an inquisitive, curious one, as I've intended for this whole thread.

>>Why do you and Steve take mathematics to be paradigmatic of thought? Do you
>>think all thought reducible to the performance of mathematical operations? If
>>so, why?
>
>As a matter of fact, I do, but that's irrelevant to the argument.

I think it's highly relevant.

>Mathematics is a knowledge domain with which most of the subscribers to the
>list are familiar.

All the proponents of the claim that computation exhausts mental activity that I've ever encountered seem to be more familiar with math than I. All the proponents of creationism seem to be more familiar with the Bible than I, too. Familiarity with math doesn't mean that thought can be reduced to math any more than familiarity with the Bible means that creationism is true.

Nor did I intend to imply that thought can be reduced to anything else. I intended to imply that, possibly, thought is irreducible. I'm not sure if I think this is the case or not, but I don't know why it ought to be excluded from consideration, either.

>>>Searle argues that it would be possible (in principle) to implement a
>>>Chinese room that is indistinguishable from a mind,
>>
>>Au contraire. It is quite clearly distinguishable from a mind - from the
>>first-person point of view. It is only indistinguishable from a mind from the
>>third-person point of view.
>
>This statement is not subject to disproof, since the first-person point of
>view is accessible only to the first person -- the mind that is carrying on
>the conversation -- which is demonstrably _not_ the agent who is
>manipulating the lookup tables.

It is in anticipation of precisely this sort of objection that I brought up the possibility of telepathy. In principle (especially if you think that minds are merely complex machines), it is indeed possible to falsify my claim. Besides which, I don't take falsifiability to be a necessary condition for truth, anyways. Why should I?

>No, Tim. The person isn't "intelligent", because he's not privy to the
>contents of the conversation.

This claim merely reveals one of the problems with your definition of intelligence, Dave. How can ignorance of the contents of a conversation rob one of intelligence?
Once someone's intelligence is thus lost, how can it be regained? In order for the person to be intelligent, he must be privy to conversational contents, but in order to be privy to them, he must be intelligent!

>>>Neither is the speech center of your brain itself an entire
>>>mind... but the system formed from it, plus the other components of the
>>>nervous system, is.
>>
>>This begs the question of whether brains are minds. I don't think this is so,
>>either - and Searle argues against it in an earlier chapter of "Minds, Brains,
>>and Science." Why are brains minds?
>
>Very well, Tim, we'll say the whole body is needed to constitute the mind.

Then amputees would be mindless.

>Or are
>you postulating an animating soul?

I'm not postulating anything of the sort. I'd like to know why you rule it out, though. Specifically, I'd like to know why you rule out an hypothetical soul that is consistent with non-mechanical physical law.

>From: "Perry E. Metzger"
>Subject: AI: Searle's Chinese Torture Chamber
>
>starr@genie.slhs.udel.edu says:
>> Searle's critics still don't seem to be getting his point. Maybe it has
>> something to do with the fact that most of them seem to have learned all
>> they know about it from a secondary source, Douglas Hofstadter, rather
>> than from reading Searle himself.
>
>Hofstadter republished Searle's ENTIRE essay in "The Mind's I". I
>would hardly call that reliance on a secondary source.

I didn't know. Haven't read Hofstadter. Thanks for correcting me.

>I also quite familiar with his essay, having read it about a half
>dozen times and having spoken on it.

Perry, your understanding of it may be perfect, but your explanation of your objections to it isn't, if your purpose is to communicate them to me.

>Overall, I find his argument incomprehensable -- were it true, pocket
>calculators wouldn't actually be giving us the sum of 2 and 2 -- they
>would merely be somehow cleverly SIMULATING calculation without
>actually DOING it -- which is patently absurd.

Why is this so absurd? Because you can't imagine it? It doesn't seem so absurd to me.

>Were Searle's argument right, there would be no reason to expect that
>HUMANS were self aware, either, since the neurons in your brain aren't
>self aware.

There's no reason to expect that minds are composed of neurons in the brain, now, is there?

>Searle's argument against AI is at least as any
>religious argument I've seen.

At least as what? And what does Searle's argument have in common with "religious argument" that renders it false?

>|> I've yet to read any on this list that seem to understand him very well at
>|> all, much less "perfectly."
>
>I said on comp.ai, Tim. Did you think this was comp.ai? Where does the
>requirement to understand and refute Searle on Extropians come from?
>In fact, topics like this caused the comp.ai split, and are still
>raving on on comp.ai.philosophy.

I wasn't issuing any such requirement, but merely trying to explain myself with regards to present company.

>|> >There, he said it. There are two perfectly good refutations of the
>|> >Chinese Room gedankenexperiment -- 1, it isn't good enought to pass a
>|> >Turing Test _anyway_, and 2, it doesn't exhaust the possibilities of
>|> >_systems_ which include symbolic language processing.
>|>
>|> These refutations are no good at all. The first one begs the question of
>|> how passing a Turing test can make something a mind, and the second one begs
>|> the question of how a symbolic language processing system can be a mind.
>
>So you claim that there could be conscious AIs that could not pass
>the TT?

No. Not at all. I just don't see how passing this test can bestow self-awareness, that's all. I don't think we know enough about self-awareness to say what does cause it. Perhaps something would have to be self-aware to pass, but its self-awareness would have to be caused by something else.

>Your second statement doesn't pass my parser. No one claimed that "a
>symbolic language processing system can be a mind" (or its negation)
>that I'm aware of.

I didn't understand your second statement the first time around. I think I do now. As I understand him, Searle's not arguing that it's impossible for a system that consists of symbolic language-processing capability and something else to constitute a mind. He's arguing that the capability alone must be insufficient to do so.

>and since I believe that all processes
>can in principle be simulated

Why? Because you can't imagine how things could be otherwise?

>I suspect that a hybrid
>system using parallel components of nervous-system-like complexity for
>sensors, controlled by a hierarchy of planning and symbolizing agents,
>can demonstrate a human level of competence at any reasonable human
>task you care to name.

Actually, I do, too. I just doubt whether these machines can be self-aware.

>However,
>people who divert attention from science by dithering about whether
>AIs can "have minds" or "be conscious" are not, to my way of thinking,
>doing real work.

I don't think that wondering about these things can only result in such harm. It could quite easily result in some help. For example, if it were conclusively decided that no Turing test-passing machine could be self-aware, and that self-awareness were a necessary condition for a being to have rights, then a lot of time would be saved because we wouldn't have any reason to wonder about the rights of machines, and could get on to "real work."

I also resent and reject the implication that study of the philosophy of mind is somehow not "science" or "real work" in itself. Why not? Because you can't imagine how it could be otherwise?

>To quote the comp.ai FAQ:
>
>    Every so often, somebody posts an inflammatory message, such as
>        Will computers every really think?
>        AI hasn't done anything worthwhile.

Even if computers can't think, it doesn't follow that AI is worthless.

>|> >There are similar problems with Dreyfus's and Penrose's arguments
>|>
>|> Haven't read them.
>
>Then don't tell us to re-read Searle.

When people contradict him, I will tell them to read him, whether they have already or not. I reject this kind of attack and resent it, too. The implication is that only people who've read everything ever written about a subject are qualified to speak on it. People who've only read a little bit about a subject can quite easily speak the truth about it.

In fact, if everything that's ever been written about a subject somehow prevents people from studying an area which bears the fruit of scientific discovery, then it's a positive good to encourage people who haven't been so prevented to study and speak about it! I seem to recall reading of such events in history the other day, but I can't recall the particulars.

I noticed the same sort of attitude towards James Donald when he was criticizing nanotech. Maybe he hadn't done as much homework as the rest of you, but that doesn't mean he couldn't have been right.
>You said before that Searle accepts a mechanistic view of mind as a
>process carried on by brains (I prefer to say nervous systems to keep
>from forgetting the hard problems of perception and actuation, i.e.,
>robotics).

I said nothing about whether Searle's view is mechanistic. I don't know. Why don't you ask him? I don't see why brain processes have to be mechanistic. Why can't they be something else?

>In what sense
>are you criticizing the philosophy of mind?

I'm trying to bring out and question the underlying assumptions of the people I've seen criticizing Searle's argument on this list.

>What does "having a mind"
>_mean_ to you, anyway?

I don't know! I'm trying to figure this out.

>From: lovejoy@alc.com
>The man in the Chinese Room is not analogous to a
>mind,

But he undeniably does have a mind.

>so of course he does not experience whatever mind may be produced
>by the operation of the Chinese Room.

This doesn't even follow. He could experience this alleged mind in some way, surely. Why not? Can't you imagine otherwise, either?

>> >The brain itself is just
>> >unconscious hardware.
>>
>>Easier claimed than shown.
>
>But you yourself argue that it is the mind that is conscious, not the brain!

Yes, but I don't see why brains must be hardware.

>Searle's argument is basically that the Chinese Room is a machine, not a mind.
>
>But the humain brain is also just a machine, and not a mind.

Unproven.

>Searle has not
>shown any fundamental **DIFFERENCE** between the two cases!

No one's shown any fundamental similarity, either!

>From: pavel@PARK.BU.EDU (Paul Cisek)
>
>Most theories of consciousness are as irrelevant as Searle's, they are based
>solely upon premises derived from introspective data and pure logic, a recipe
>for absurdity.

That's funny, praxeology doesn't seem that absurd.

>I don't care if Searle's system of
>semantics is self-consistent or not - it doesn't apply to anything but itself.

Why not?

>If we could only admit to ourselves, just for a moment, that we know nothing
>about "consciousness", we could discard our misleading taxonomy and go on.

I'd say that we know very little. Not nothing. Something, but not enough to settle this argument.

>We could devote our energies to studying the systems where it emerges, instead
>of analysing endless "what if" scenarios...

Indeed! However, it seems to me like a lot of those studying the systems are limited by unneeded and unwarranted assumptions.

Tim Starr - Renaissance Now!

Assistant Editor: Freedom Network News, the newsletter of ISIL,
The International Society for Individual Liberty,
1800 Market St., San Francisco, CA 94102
(415) 864-0952; FAX: (415) 864-7506; 71034.2711@compuserve.com

Think Universally, Act Selfishly - starr@genie.slhs.udel.edu

------------------------------

Date: Fri, 30 Jul 93 09:31:45 -0400
From: merritt@macro.bu.edu (Sean Merritt)
Subject: Egad!

From: lefty@apple.com (Lefty)
> alt.philosophy.objectivism--isn't that where they've been arguing that
> either quantum mechanics or non-Euclidean geometry is some sort of plot?

Exactly. BTW, objectivists (some at least) don't seem to accept QM.

> I don't think that Dave is liable to find it much of an improvement. The
> discussions don't seem to be of a vastly higher order, and people
> demonstrably have worse (read "no") senses of humor.

That was my point! alt.soc.anarchy is extremely low traffic with a larger fraction being x-posts on unrelated topics. The other two groups have huge volume and extremely low signal to noise ratios (N >> S).
-sjm

------------------------------

Date: Fri, 30 Jul 1993 10:50:11 -0400 (EDT)
From: Harry Shapiro
Subject: Meta: New Software - Who Is on it.

The following people are on the new software. Starting about Aug. 9th, I will be adding the remainder of you to the new software. If you are on the following list, please read the text at the end of this post. It is for people who are 1) on the new software and 2) who haven't used it yet.

909@delphi.com
75120.731@compuserve.com
0005152975@mcimail.com
a.mcbride@axion.bt.co.uk
aboyko@vnet.ibm.com
ae736@yfn.ysu.edu
ajw@think.com
andreag@csmil.umich.edu
anton@hydra.unm.edu
ashall@magnus.acs.ohio-state.edu
aubuchon@pangea.stanford.edu
bangell@cs.utah.edu
betsys@ra.cs.umb.edu
bhartung@cwis.unomaha.edu
bhawthorne@east.sun.com
bill@kean.ucs.mun.ca
blade@mindvox.phantom.com
bob_g@eris.demon.co.uk
boerlage@cs.ubc.ca
c576653@mizzou1.missouri.edu
caadams@triton.unm.edu
cappello%cs@hub.ucsb.edu
carlf@media.mit.edu
cmt@engr.latech.edu
dag@graphics.rent.com
dasher@netcom.com
dave_good@gateway.qm.apple.com
ddfr@midway.uchicago.edu
desilets@sj.ate.slb.com
dfrissell@attmail.com
dkrieger@synopsys.com
eric.fogleman@analog.com
eric.marsh@eng.sun.com
eric@synopsys.com
extropia@eternity.demon.co.uk
gemsee@leo.unm.edu
graps@galileo.arc.nasa.gov
habs@panix.com
hal@alumni.cco.caltech.edu
hanson@ptolemy.arc.nasa.gov
hhuang@athena.mit.edu
inems%nyuccvm.bitnet@mitvma.mit.edu
jamie@netcom.com
jcostello@pomona.claremont.edu
jeff_loomis.escp10@xerox.com
jpp@markv.com
jwales%iubvm.bitnet@mitvma.mit.edu
ken.lang@f.gp.cs.cmu.edu
kl62%maristb.bitnet@mitvma.mit.edu
levy%lenny@venus.ycc.yale.edu
lubkin@apollo.hp.com
mackler@world.std.com
marc.ringuette@gs80.sp.cs.cmu.edu
mark_muhlestein@novell.com
mary.morris@eng.sun.com
michels@uncmvs.oit.unc.edu
mike@highlite.gotham.com
moravec@think.com
more@usc.edu
pat_farrell@mail.amsinc.com
perry@gnu.ai.mit.edu
perry@potlatch.hacktic.nl
pgf@srl03.cacs.usl.edu
phoenix@ugcs.caltech.edu
plaz@netcom.com
raybugs@gnu.ai.mit.edu
rens@imsi.com
rjc@geech.gnu.ai.mit.edu
romana@apple.com
sasha@cs.umb.edu
simon.mcclenahan@mel.dit.csiro.au
slippery@netcom.com
smo@gnu.ai.mit.edu
starr@genie.slhs.udel.edu
thamilto@pcocd2.intel.com
tribble@netcom.com
tsf@cs.cmu.edu
x91007@pitvax.xx.rmit.edu.au
zia@world.std.com

Welcome to the new Extropians Mailing list. Our new list software has many new features, including a built-in help system. The following should give you enough information to get started.

To get started do the following:

1) Send a message to the list with the following starting on the FIRST line of the main body of the message:

   ::help
   ::help index
   ::help exclude
   ::stat

2) You don't have to type that; it is just my way of denoting end-of-file.

3) You will get a message back from the list providing some basic instructions on the help system, on excluding posts, and the "status" of your set-up.

4) These help messages already reflect some user feedback. They could use some more; your comments are welcome.

5) Any message addressed to the list can contain one or more commands.

6) The first command in any message must start on the FIRST line of the message.

7) Each command must start on a "new-line."

8) Each command starts with "::".

9) After processing all the valid commands in a message, any remaining text is discarded (see the sketch below).

                                NOTE

Our new software is database driven. It has several levels of security, and many of its commands depend on knowing who is posting, etc.
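To make items 5-9 concrete, here is a minimal sketch, in Python, of one plausible way a list server could pull the "::" commands out of a message body. The function name extract_commands and the exact treatment of messages that mix commands with other text are illustrative assumptions for this sketch, not the actual list software.

    # Sketch only: one plausible reading of the "::" command rules above.
    # Assumes commands are recognized only if the first line of the body is
    # itself a command (rule 6); every "::" line is then collected (rules
    # 7-8), and all other text in the message is discarded (rule 9).

    def extract_commands(body: str) -> list[str]:
        lines = body.splitlines()
        if not lines or not lines[0].lstrip().startswith("::"):
            return []                    # no leading command: a normal post
        commands = []
        for line in lines:
            stripped = line.lstrip()
            if stripped.startswith("::"):
                commands.append(stripped[2:].strip())   # e.g. "help index"
        return commands                  # remaining text is simply dropped

    # Example: the message body from item 1 above would yield
    #   extract_commands("::help\n::help index\n::help exclude\n::stat\n")
    #   -> ['help', 'help index', 'help exclude', 'stat']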
Since some of you post from more than one machine or site, we need to know all addresses or sites you post from. Some sites have a common 'root' address, but posts end up being sent from any number of 'stem' machines (e.g., the root is nyu.edu, with stems like acf2 or acf4). Our software can accommodate all such set-ups, but we need to know them in advance.

We have delayed turning on our security feature until the cut-over to the new list is complete. However, once we have moved everyone to Panix, you may not be able to post if you don't let us know where you post from. If the address where you receive your mail is the same as the address you post from, don't worry; you don't need to do anything at this point. Otherwise, please send me a list of where you post from and I will take care of the rest.

I would like to thank Ray Cromwell for the hundreds of hours of his own time he invested in crafting this new software. He has designed the code based on my "vision." All of its elegance is due to his hard work; any chunkiness is due to my lack of vision.

I would like to thank the numerous Extropians who, at meetings, gatherings, and via e-mail, helped discuss and test this software. I would also like to thank the board of The Extropy Institute for allowing me a free hand in creating this software.

Enjoy,

/hawk
--
Harry S. Hawk                                    habs@extropy.org
Electronic Communications Officer, Extropy Institute Inc.
The Extropians Mailing List, Since 1991
EXTROPY -- A measure of intelligence, information, energy, vitality, experience, diversity, opportunity, and growth.
EXTROPIANISM -- The philosophy that seeks to increase extropy.

------------------------------

End of Extropians Digest V93 #210
*********************************