From extropians-request@extropy.org Mon Oct 24 09:02:21 1994
Return-Path: extropians-request@extropy.org
Received: from usc.edu (usc.edu [128.125.253.136]) by chaph.usc.edu (8.6.8.1/8.6.4) with SMTP id JAA20879 for ; Mon, 24 Oct 1994 09:02:20 -0700
Received: from news.panix.com by usc.edu (4.1/SMI-3.0DEV3-USC+3.1) id AA11503; Mon, 24 Oct 94 09:02:16 PDT
Received: (from exi@localhost) by news.panix.com (8.6.9/8.6.9) id MAA12528; Mon, 24 Oct 1994 12:01:39 -0400
Date: Mon, 24 Oct 1994 12:01:39 -0400
Message-Id: <199410241601.MAA12528@news.panix.com>
To: Extropians@extropy.org
From: Extropians@extropy.org
Subject: Extropians Digest #94-10-468 - #94-10-477
X-Extropian-Date: October 24, 374 P.N.O. [12:01:13 UTC]
Reply-To: extropians@extropy.org
X-Mailer: MailWeir 1.0
Status: RO

Extropians Digest            Mon, 24 Oct 94       Volume 94 : Issue 296

Today's Topics:

  Argument by Analogy (Uploads)                                [1 msgs]
  Brain backup proposal                                        [2 msgs]
  FWD: Terra Libra stuff...                                    [1 msgs]
  learning from ghost tutors                                   [1 msgs]
  Libertarians grabbing a state                                [1 msgs]
  Marketing AI                                                 [1 msgs]
  Reply to challenged uploaders                                [1 msgs]
  Searle on AI                                                 [1 msgs]
  Smart Drug / Life Extension Newsletter                       [1 msgs]

Administrivia:

Note: I have increased the frequency of the digests to four times a
day. The digests used to be processed at 5am and 5pm, but this was too
infrequent for the current bandwidth. Now digests are sent every six
hours: midnight, 6am, noon, and 6pm.

If you experience delays in getting digests, try setting your digest
size smaller, such as 20k. You can do this by addressing a message to
extropians@extropy.org with the body of the message as

::digest size 20

-Ray

Approximate Size: 26159 bytes.

----------------------------------------------------------------------

From: maureen.johnson@gonzo.com (Maureen Johnson)
Date: Sun, 23 Oct 94 13:17:00 +0000
Subject: [#94-10-468] FWD: Terra Libra stuff...

Please update my address in the mailing list to
Maureen.johnson.gonzo.mailrun@lightspeed.com.

Thanks

------------------------------

From: "Peter C. McCluskey"
Date: Sun, 23 Oct 1994 19:28:25 -0700
Subject: [#94-10-469] Libertarians grabbing a state

>state. Are libertarians inadequately motivated? Are there enough
>libertarians willing to uproot themselves to attempt to take over a state?

The benefit to each individual who moves solely for this reason is
tiny, which makes it hard to start such a movement in a state small
enough that it could be taken over. Libertarians are moving to
Silicon Valley for somewhat different reasons; we might be able to
take over a county there in another decade or two, but there isn't
enough power at that level to justify the effort.
--
---------------------------------------------------------------
Peter McCluskey     | pcm@rahul.net     | Cardassia delenda est!
finger for PGP key  | pcm@world.std.com | netcom delenda est!

------------------------------

From: "Peter C. McCluskey"
Date: Sun, 23 Oct 1994 19:28:55 -0700
Subject: [#94-10-470] Marketing AI

fhapgood@world.std.com (Fred Hapgood) writes in X-Message-Number: #94-10-399:
>programmatic content. Even the term 'learning' is looking
>problematical -- lots of people working in AI today refuse to
>even participate in discussions over what learning means. They
>call it 'the L-word' and make the sign of the cross if you try to
>bring it up.

Gee, when I took a course in Machine Learning at Brown 2 years ago, I
didn't notice any serious problems with the following definition:
"changes in [a] system that ... enable [it] to do the same task or
tasks drawn from the same population more efficiently and more
effectively the next time". - Herbert Simon in "Why should machines
learn?" in _Machine Learning: An artificial intelligence approach_.

I guess we were too busy figuring out how to implement a concept that
we all seemed to agree on to notice the shortcomings of the
definition. I wonder who the alleged AI people you are talking about
are.
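To make that definition concrete, here is a minimal (and admittedly
degenerate) sketch in Python of a system that satisfies it: an
internal change -- a cache -- lets it do the same task more
efficiently the next time. The class and the stand-in task are
invented for illustration; they come from neither Simon nor the
course.

    # Minimal illustration of Simon's definition of learning: an
    # internal change (the cache) makes the same task faster the next
    # time. Invented example; names are not from Simon's paper.

    class CachingSolver:
        def __init__(self, solve_fn):
            self.solve_fn = solve_fn  # the expensive underlying task
            self.memory = {}          # the "changes in the system"

        def solve(self, task):
            if task in self.memory:        # seen before: fast path
                return self.memory[task]
            result = self.solve_fn(task)   # first time: the hard way
            self.memory[task] = result     # the learning step
            return result

    def smallest_factor(n):
        # stand-in for an expensive task
        return next(d for d in range(2, n + 1) if n % d == 0)

    solver = CachingSolver(smallest_factor)
    print(solver.solve(1000003))  # slow: computed from scratch
    print(solver.solve(1000003))  # fast: it has "learned" this task

Degenerate or not, it meets the quoted criterion exactly, which is
maybe the point: the definition is easy to implement and hard to
quarrel with.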
>I suggest we just concentrate on making a really good Go program.
>That's what *I* mean by superhuman intelligence.

Sounds like a pretty narrow meaning.
--
---------------------------------------------------------------
Peter McCluskey     | pcm@rahul.net     | Cardassia delenda est!
finger for PGP key  | pcm@world.std.com | netcom delenda est!

------------------------------

From: sw@tiac.net (Steve Witham)
Date: Sun, 23 Oct 1994 23:01:27 -0400
Subject: [#94-10-471] Argument by Analogy (Uploads)

Robin Hanson writes-
>
>Consider most any huge complex computer program, compiled to assembly
>code. This doesn't code for a sensitive growth process, but it is
>still pretty hard to improve it via low level modifications if one
>doesn't understand it at higher levels.

That's true of assembly code even more than of genes. But I was
thinking of small changes to analog parameters--relative speeds of
different types of synapses and different reactions at the synapses,
etc., things you would expect to vary for one reason or another
anyway, so we would have some adaptability already.

I think if you make modifications at a low level, and they're within
your short-term ability to adapt to, then in the longer term you
might learn to live with the change at a higher level (of course the
change could be bad, or even cause permanent damage). So personal
adaptation is another kind of higher-level understanding.

Another grace factor is that you could try something for a second and
then switch it back, or even have a "dead man's switch" to switch it
back for you. Or you could go back to a backup if it was really bad.
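The "dead man's switch" part is easy to make precise, by the way:
apply the change provisionally, and have a watchdog revert it unless
you keep confirming you are okay. A minimal sketch in Python -- the
parameter name, timeout, and API are all invented for illustration:

    import threading

    class DeadMansSwitch:
        # Apply a parameter change provisionally; revert it
        # automatically unless confirm() is called before the
        # timeout. (Hypothetical sketch.)

        def __init__(self, params, key, new_value, timeout_s):
            self.params = params
            self.key = key
            self.old_value = params[key]
            params[key] = new_value          # apply the change
            self.timer = threading.Timer(timeout_s, self.revert)
            self.timer.start()               # countdown to auto-revert

        def confirm(self):
            self.timer.cancel()              # subject says "I'm fine"

        def revert(self):
            self.params[self.key] = self.old_value

    # invented example parameter: relative synapse speed
    params = {"synapse_speed": 1.0}
    switch = DeadMansSwitch(params, "synapse_speed", 1.2, timeout_s=1.0)
    # If the change locks you up, you do nothing: it reverts by
    # itself. If it feels fine: switch.confirm()

The design point is that the default action is to undo, so a subject
too impaired to act is protected automatically.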
>But I don't mind you trying,
>on yourself of course, when you're an upload - maybe you'll show me
>wrong.

"I'm not going to try it. YOU try it."
"I'm not going to try it. Hey, I know, let's get Mikey to try it!"

Other people will try it on themselves or uploaded animals first, I'm
sure. "He likes it! Hey Mikey!"

(By the way, I have done "::exclude all" and I'll be away from my
email until Nov. 8. Have a good time, I will!)

--Steve
- - - - - - - - - -
There's more to aspire to than shoddily built tract homes and th'
internet!! --Griffy in Zippy

------------------------------

From: sw@tiac.net (Steve Witham)
Date: Sun, 23 Oct 1994 23:01:53 -0400
Subject: [#94-10-472] Reply to challenged uploaders

>>Just as not being afraid of ghosts is faith. Or belief in
>>negative numbers.
>
>Belief in negative numbers isn't faith, to me... let's take
>an airtrack. >plonk< I just dropped it on this virtual table.
>I've taken out my virtual marks-a-lot, marked a spot in the
>middle "zero." I put a virtual air-cart at that spot, give it
>a push towards the "1" I put on one end. It reaches the end and
>bounces back past the zero in the middle. Where is it, and
>why is faith involved? I mean, come on...

Er, that's what I was saying about consciousness. Someday we'll have
as clear an idea about it as we do about numbers, and believing that
running programs can have it won't be a matter of faith.

>>So you sayin' we oughta take next exit off info-way &
>>*think* with our *own* li'l brains?
>>  --a toadette in Zippy
>
>Yah, leave the info-way, tune in CNN 24 hours a day, or even
>better MTV, and learn to think for yourself.
>
>(Retch!)

Given what Zippy's about, I think the implication might have been: go
read a book. My picture of the "info-way" isn't the internet but
multimedia versions of CNN & MTV--the government-sponsored
misinterpretation of the internet.

--Steve
- - - - - - - - - -
There's more to aspire to than shoddily built tract homes and th'
internet!! --Griffy in Zippy

------------------------------

From: sw@tiac.net (Steve Witham)
Date: Sun, 23 Oct 1994 23:01:33 -0400
Subject: [#94-10-473] Brain backup proposal

Keith Lynch says-

>>The data will be temporarily segregated by probe number. For each
>>pair of probes -- all 5*10^23 ways to select two of 10^12 -- the
>>all-over correlation coefficient will be looked at.

I think it would be pretty quick to identify the probes that were on
the same neuron. That reduces the number of correlations to 5*10^21.

Paul Cisek says-

>10^23 is a pretty big number. I think losing all the connectivity
>information and appealing to correlation to reconstruct it is a big
>mistake. Let's consider what would be necessary to reconstruct the
>connectivity: For correlation to be at all informative, you'd like
>to get at least one neural firing per potential connection to be
>able to distinguish correlations due to a connection from those due
>to chance. That is, you'd need about 10^11 firings. Given an
>"average" firing rate of an "average" neuron of 50Hz, this will take
>about 60 years. And I would advise against inducing seizures to
>reduce this to, say, 10 years...

In general Paul's concern seems to be that Lynch's proposal doesn't
collect direct data on the brain and its layout. I think with a
reasonably complete model of human brains in general, the correlation
method might reconstruct a model brain that would have fired the same
neurons in the same pattern (or close to it) given the same input.
There are probably all sorts of things you could infer about the
cells, the synapses, neurotransmitters, and all the details Paul
mentioned--given enough data. It's a sort of network tomography.
Anyway, maybe a brain that would have acted pretty much the same is
close enough.

I'm not sure about Paul's method of calculating how much data is
needed. We know that the vast majority of potential connections have
a weight of zero, so I imagine (!) a method more specialized than
plain autocorrelation would sort out true evidence of connections
from statistical flukes. The main noise each synapse would have to
fight with would be the (<10^5) other synapses into the same neuron.
And if you inferred a connection from B-->C instead of A-->C because
B fired in a very similar pattern to A, maybe the mistake wouldn't be
so bad.

Put another way, in a year you would collect something like 10^9
firings per cell. With 10^5 synapses per cell, that's at least 10^4
bits of information per synapse.
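The round figures being traded here are easy to check. A quick sketch
in Python, using only the numbers quoted in this thread (all of them
rough estimates, not measurements):

    # Back-of-envelope check of the numbers in this thread.
    # Inputs are the round figures quoted above, not measured data.

    probes  = 1e12    # probes in Lynch's proposal
    neurons = 1e11    # rough human neuron count
    rate_hz = 50.0    # Cisek's "average" firing rate
    year_s  = 3.15e7  # seconds in a year

    print(probes * (probes - 1) / 2)    # ~5e23 probe pairs, as quoted
    print(neurons * (neurons - 1) / 2)  # ~5e21 pairs after merging
                                        # probes on the same neuron

    # Cisek's 60-year figure: ~1e11 firings needed, at 50 Hz
    print(1e11 / rate_hz / year_s)      # ~63 years

    # The per-synapse figure: firings per cell in one year, divided
    # over ~1e5 synapses per cell
    firings_per_year = rate_hz * year_s  # ~1.6e9, i.e. ~10^9
    print(firings_per_year / 1e5)        # ~1.6e4, i.e. ~10^4

So the arithmetic on both sides checks out; the dispute is over what
the quantities mean, not over the multiplication.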
(Btw, I am away till Nov. 8.)

--Steve
- - - - - - - - - -
There's more to aspire to than shoddily built tract homes and th'
internet!! --Griffy in Zippy

------------------------------

From: pavel@PARK.BU.EDU (Paul Cisek)
Date: Sun, 23 Oct 1994 23:42:50 -0400
Subject: [#94-10-474] Brain backup proposal

>From: sw@tiac.net (Steve Witham)
>X-Message-Number: #94-10-473
>
> ...

Again, the discussion conveniently avoids the murky stuff and we're
back to throwing meaningless numbers back and forth... Oh well,
self-delusion is a pleasant meme and it survives.

>In general Paul's concern seems to be that Lynch's proposal doesn't
>collect direct data on the brain and its layout.

Yes, the proposal ignores most of what the current consensus in
neuroscience would consider important for brain function.

>I'm not sure about Paul's method of calculating how much data is
>needed.

Please, it's not my proposal. Keith Lynch suggested that we can
discard the connectivity information and then reconstruct it through
correlation.

>We know that the vast majority of potential connections have a weight
>of zero, so I imagine (!) a method more specialized than plain
>autocorrelation would sort out true evidence of connections from
>statistical flukes. The main noise each synapse would have to fight
>with would be the (<10^5) other synapses into the same neuron. And if
>you inferred a connection from B-->C instead of A-->C because B fired
>in a very similar pattern to A, maybe the mistake wouldn't be so bad.

But if you've discarded all the connectivity information, then you're
stuck with checking every potential connection. The point of Keith
Lynch's proposal was that we don't need to get connectivity info
(which would be difficult) and can reconstruct it from correlation.
That proposal is subject to various criticisms, one of the least of
which is the huge number of spikes one would have to consider.
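To see what "checking every potential connection" means in practice,
here is a toy sketch in Python of the correlation step under
discussion. The spike trains are synthetic and the scoring rule is
invented for illustration; the point is that the pairwise loop at the
bottom is what blows up to ~5*10^23 pairs at 10^12 probes:

    import random

    def lag_score(pre, post, delay=1):
        # Toy score: fraction of spikes in `pre` followed `delay`
        # steps later by a spike in `post`. Invented for illustration.
        hits = sum(1 for t in range(len(pre) - delay)
                   if pre[t] and post[t + delay])
        return hits / (sum(pre) or 1)

    T = 10000   # time steps
    p = 0.005   # spike probability per step (~50 Hz at 10 kHz sampling)

    # Synthetic trains: neuron 0 drives neuron 1; neuron 2 is
    # independent of both.
    train0 = [random.random() < p for _ in range(T)]
    train1 = [(t > 0 and train0[t - 1]) or random.random() < p
              for t in range(T)]
    train2 = [random.random() < p for _ in range(T)]
    trains = [train0, train1, train2]

    # The expensive step: score every ordered pair of trains. With
    # three trains this is trivial; with 10^12 probes this loop is
    # the ~5*10^23-pair computation the thread is arguing about.
    for i in range(len(trains)):
        for j in range(len(trains)):
            if i != j:
                print(i, "->", j,
                      round(lag_score(trains[i], trains[j]), 3))

The 0->1 score comes out near 1.0 and the unrelated pairs near the
chance level p, which is the signal the proposal hopes to exploit --
but only by running the loop over every candidate pair.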
>Put another way, in a year you would collect something like 10^9
>firings per cell. With 10^5 synapses per cell, that's at least 10^4
>bits of information per synapse.

"Synapses per cell" is a meaningless quantity. It's like "money per
person" (not a useful number if you are trying to record an economy).
Why do you then apply division? What does the answer mean?

Paul

------------------------------

From: John Morgenthaler
Date: Sun, 23 Oct 1994 21:04:46 -0700 (PDT)
Subject: [#94-10-475] Smart Drug / Life Extension Newsletter

We would like to invite everyone to subscribe to our new online
newsletter called "Life Enhancement News". The newsletter covers the
latest in smart drug and life extension information. To subscribe,
send your request to smart@crl.com.

The editorial staff includes:

John Morgenthaler -- co-author of Smart Drugs & Nutrients, Smart
Drugs II: The Next Generation, and co-editor of Stop The FDA.

Ward Dean, MD -- co-author of Smart Drugs & Nutrients, Smart Drugs
II: The Next Generation, A Neuroendocrine Theory of Aging and
Degenerative Disease, and author of Biological Aging Measurement.

Please feel free to post this memo to other groups and lists where
appropriate.

------------------------------

From: Eric Watt Forste
Date: Mon, 24 Oct 94 00:41:46 -0700
Subject: [#94-10-476] Searle on AI

Tim--

I really don't have the time to involve myself in this current
discussion as I'd like to, but I just want to point out that the best
refutation of Searle's Chinese-Room argument I've seen lies on pages
39 through 44 of Paul M. Churchland's A NEUROCOMPUTATIONAL
PERSPECTIVE. I will not try to duplicate that argument here, but will
merely point out that while Searle successfully proves that the
Chinese room cannot have any "intrinsic intentionality", he does so
only by assigning such properties to "intrinsic intentionality" that
it cannot be had by any human being, any animal, or for that matter,
any other physical system in the universe. At other points in his
work, he seeks to show that the "intrinsic intentionality" he
ascribes to human beings has these properties by repeatedly claiming
that it's simply obvious. This is, of course, not much of an
argument. Unfortunately, he never offers any argument more
substantial than this to show that human beings really do display
the magical properties he claims for "intrinsic intentionality".

Eric Watt Forste || finger arkuat@c2.org || http://www.c2.org/~arkuat

------------------------------

From: Anders Sandberg
Date: Mon, 24 Oct 1994 12:39:14 +0100 (MET)
Subject: [#94-10-477] learning from ghost tutors

Marvin Minsky says:

> The trick is to hang around with your idols, try to anticipate how
> they'll solve each problem, or how they'll explain it. After working at
> this long enough, you acquire a passable downloaded copy.
>
> Sometimes I can even get my downloaded ghosts of McCulloch or Feynman to
> explain things to me. ...

I use a similar method to prepare arguments and anticipate
counterarguments for my views (and to keep myself amused). I have
created a fundamentalist preacher and a radical environmentalist
model which I use to argue with. It's often easier to create good
models of people whose views strongly disagree with yours (more
emotional "energy" for them to run on), especially if they are
dominated by simple ideologies (i.e. they are memebots).

I don't know how good this is from a memetic standpoint, since I'm
essentially allowing certain memes to run free in a portion of my
mind. On the other hand, their presence seems to produce an "immune
reaction", making me more aware of them and probably less likely to
be truly infected.

Creating personal models is quite an interesting project. Anybody who
has written a novel or really roleplayed knows how tenacious and
independent such models can be. They are also very useful tools for
learning to see things from different perspectives, something which
is important for true understanding. Besides, it's always fun to have
a few opponents (and supporters) to talk to in your head. As long as
the shrinks don't find out... :-)

-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
nv91-asa@hemul.nada.kth.se   http://www.nada.kth.se/~nv91-asa/main.html
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y

------------------------------

End of Extropians Digest V94 #296
*********************************