From greg at electricrain.com Sun Dec 2 20:05:21 2001 From: greg at electricrain.com (Gregory P . Smith) Date: Sat Dec 9 22:11:43 2006 Subject: [p2p-hackers] uServ Message-ID: <20011202200304.A22172@zot.electricrain.com>

It looks like these folks at IBM have done a good job of creating something that -normal users- want out of a p2p content system:

* very simple content publishing
* no special software needed to retrieve content from the system
* they implemented proxying (relaying) to allow peers behind a firewall to serve; finally, someone other than mojonation has done this.

http://www.almaden.ibm.com/cs/people/bayardo/userv/userv.html

Their reliance on dynamic DNS could pose more problems than they realize on the real internet, where DNS TTLs are disregarded far more widely than they are within an enterprise. It'll be interesting to see.

-g

From bram at gawth.com Sun Dec 2 20:37:20 2001 From: bram at gawth.com (Bram Cohen) Date: Sat Dec 9 22:11:43 2006 Subject: [p2p-hackers] uServ In-Reply-To: <20011202200304.A22172@zot.electricrain.com> Message-ID:

On Sun, 2 Dec 2001, Gregory P . Smith wrote:

> It looks like these folks at IBM have done a good job of creating
> something that -normal users- want out of a p2p content system

Indeed they have. Its interface is mostly one of 'publish this file', which returns a URL at which the file is now available, and the file continues to be up even if your own connection is sporadic. Of course, this same interface could be made available in a non-p2p way, but the p2p aspect is kinda neat.

> Their reliance on dynamic DNS could pose more problems than they realize
> on the real internet, where DNS TTLs are disregarded far more widely than
> they are within an enterprise. It'll be interesting to see.

Akamai uses tons of dynamic DNS and seems to handle those problems okay. I don't know how it works though.

-Bram Cohen

"Markets can remain irrational longer than you can remain solvent" -- John Maynard Keynes

From greg at electricrain.com Mon Dec 3 12:06:32 2001 From: greg at electricrain.com (Gregory P . Smith) Date: Sat Dec 9 22:11:43 2006 Subject: [p2p-hackers] uServ In-Reply-To: ; from bram@gawth.com on Sun, Dec 02, 2001 at 08:34:52PM -0800 References: <20011202200304.A22172@zot.electricrain.com> Message-ID: <20011203120313.A28508@zot.electricrain.com>

On Sun, Dec 02, 2001 at 08:34:52PM -0800, Bram Cohen wrote:
> On Sun, 2 Dec 2001, Gregory P . Smith wrote:
> > Their reliance on dynamic DNS could pose more problems than they realize
> > on the real internet, where DNS TTLs are disregarded far more widely than
> > they are within an enterprise. It'll be interesting to see.
>
> Akamai uses tons of dynamic DNS and seems to handle those problems okay. I
> don't know how it works though.

Yep, but Akamai has the expensive luxury of controlling all of the computers that do its hosting, and can put boxes in front of them to redirect connections destined for a failed computer to a live one. They only put things on the ISP & colo edge, not the end-user edge.

Greg

From bram at gawth.com Tue Dec 4 15:59:49 2001 From: bram at gawth.com (Bram Cohen) Date: Sat Dec 9 22:11:43 2006 Subject: [p2p-hackers] multi-source downloads Message-ID:

Here's a simple algorithm for doing multi-source downloads when you have resume capability -

To begin with, pre-allocate the entire file; you're going to be overwriting different sections of it.

When the first connection is made, start downloading from the beginning.
When the second connection is made, start downloading at the halfway point between where the first download currently is and the end.

When the third connection is made, pick the larger of the two needed sections and start downloading at its halfway point.

In general, when a new connection is made or a download catches up to the end of the file or the beginning of the next download section, restart that download at the midpoint of the largest section you still need. For sanity, have a maximum on the size of section you're willing to split.

This algorithm just does multi-source downloading; it doesn't do 'true' swarming, because peers don't make parts of the file available before they've gotten the whole thing. I think the only systems which do that are BitTorrent and Edonkey2000, both of which have their own protocol which breaks the file into fixed-size pieces.

-Bram Cohen

"Markets can remain irrational longer than you can remain solvent" -- John Maynard Keynes

From hal at finney.org Tue Dec 4 16:12:55 2001 From: hal at finney.org (hal@finney.org) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] multi-source downloads Message-ID: <200112050006.QAA25445@finney.org>

Bram writes:
> Here's a simple algorithm for doing multi-source downloads when you have
> resume capability -

That seems like a good algorithm. To be complete, you might want to consider a couple of additional issues, like: what do you do when a download of a section finishes? Probably keep the connection open and find a new section to download using the same algorithm? (Would the sending side be told when to stop sending, or would the receiving side just close the connection when it had enough?)

Also, what would you do if a download you initiated closes before you get all the data, or just slows down a whole lot? I find that happens quite a bit with Morpheus.

Thanks, Hal

From gwachob at wachob.com Tue Dec 4 16:21:52 2001 From: gwachob at wachob.com (Gabe Wachob) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] multi-source downloads In-Reply-To: Message-ID:

Bram- Your algorithm makes a lot of sense (though perhaps you'd tune it a little to have more than just 2 parts for each iteration? Maybe the number of divisions is a function of the size of the file?)

If I understand the algorithm, it seems to imply a download scheme that lets the receiving end say "stop sending me data" -- according to this algorithm, the receiving end will not know exactly how much data it will ultimately be receiving from a particular sender. Is this a correct understanding?

This implies some sort of protocol (cough, BEEP, cough) that uses some sort of chunking (send me 8k, send me 8k, send me 8k), or at least some sort of windowing, so that the receiving end can "turn off the pipe" when it starts to get data that has already been received. Just thoughts about implementation...

-Gabe

On Tue, 4 Dec 2001, Bram Cohen wrote:

> Here's a simple algorithm for doing multi-source downloads when you have
> resume capability -
>
> To begin with, pre-allocate the entire file; you're going to be
> overwriting different sections of it.
>
> When the first connection is made, start downloading from the beginning.
>
> When the second connection is made, start downloading at the halfway point
> between where the first download currently is and the end.
>
> When the third connection is made, pick the larger of the two needed
> sections and start downloading at its halfway point.
>
> In general, when a new connection is made or a download catches up to the
> end of the file or the beginning of the next download section, restart
> that download at the midpoint of the largest section you still need. For
> sanity, have a maximum on the size of section you're willing to split.
>
> This algorithm just does multi-source downloading; it doesn't do 'true'
> swarming, because peers don't make parts of the file available before
> they've gotten the whole thing. I think the only systems which do that are
> BitTorrent and Edonkey2000, both of which have their own protocol which
> breaks the file into fixed-size pieces.
>
> -Bram Cohen
>
> "Markets can remain irrational longer than you can remain solvent"
> -- John Maynard Keynes
>
> _______________________________________________
> p2p-hackers mailing list
> p2p-hackers@zgp.org
> http://zgp.org/mailman/listinfo/p2p-hackers
>

From bram at gawth.com Tue Dec 4 16:25:00 2001 From: bram at gawth.com (Bram Cohen) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] multi-source downloads In-Reply-To: <200112050006.QAA25445@finney.org> Message-ID:

On Tue, 4 Dec 2001 hal@finney.org wrote:

> Bram writes:
> > Here's a simple algorithm for doing multi-source downloads when you have
> > resume capability -
>
> That seems like a good algorithm. To be complete, you might want to
> consider a couple of additional issues, like: what do you do when a
> download of a section finishes? Probably keep the connection open and
> find a new section to download using the same algorithm?

Yes, that was my intention.

> (Would the sending side be told when to stop sending, or would the
> receiving side just close the connection when it had enough?)

If you're literally using HTTP, you have to drop the connection and start a new one. If you're using your own protocol, there are ways of making it reuse the same connection, although if you're gonna go the custom protocol route you should probably make it divide the file into coherent pieces which get requested individually. I can go on at length about how BitTorrent does that, although be forewarned it gets very complicated very fast.

> Also, what would you do if a download you initiated closes before you
> get all the data, or just slows down a whole lot?

If it slows down a whole lot, its open section will get mostly finished by much faster connections, since they'll finish whatever they're doing and bisect it. If it drops completely, you should restart a later download at its ending point, treating that section as equivalent to one twice as large, which gets bisected normally in terms of priority.

> I find that happens quite a bit with Morpheus.

Yeah, the tricky thing with swarming algorithms is dealing with all the unreliable peers.

-Bram Cohen

"Markets can remain irrational longer than you can remain solvent" -- John Maynard Keynes

From bram at gawth.com Tue Dec 4 16:38:50 2001 From: bram at gawth.com (Bram Cohen) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] multi-source downloads In-Reply-To: Message-ID:

On Tue, 4 Dec 2001, Gabe Wachob wrote:

> Your algorithm makes a lot of sense (though perhaps you'd tune it
> a little to have more than just 2 parts for each iteration? Maybe the
> number of divisions is a function of the size of the file?)

I think splitting in half works best in general.
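To make the bisection rule concrete, here's a minimal Python sketch of the section-picking step. This is an illustration under simplifying assumptions, not code from BitTorrent or any other system mentioned in this thread: still-needed byte ranges are tracked as an in-memory list of (start, end) pairs, and the names pick_section and MAX_SPLIT are made up for the example.

    # Sketch of the bisection rule: a new (or freed-up) connection takes
    # the midpoint of the largest still-needed section.  'needed' is a
    # list of (start, end) byte ranges no active download will cover.

    MAX_SPLIT = 64 * 1024  # sanity cap: never bisect a section smaller than this

    def pick_section(needed):
        """Assign a byte range to a connection, or return None if done."""
        if not needed:
            return None
        largest = max(needed, key=lambda s: s[1] - s[0])
        needed.remove(largest)
        start, end = largest
        if end - start <= MAX_SPLIT:
            return (start, end)      # too small to split: take it whole
        mid = (start + end) // 2
        needed.append((start, mid))  # the front half still has to be covered
        return (mid, end)            # the new download starts at the midpoint

For a 32k file whose first download has reached the 2k mark, needed is [(2048, 32768)] and pick_section returns (17408, 32768) -- the new download starts at 17k, which is the same arithmetic as the two-downloader example below.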
> If I understand the algorithm, it seems to imply a download scheme
> that lets the receiving end say "stop sending me data"

socket.close() is generally quite effective for that :-)

> according to this algorithm, the receiving end will not know exactly
> how much data it will ultimately be receiving from a particular
> sender. Is this a correct understanding?

That is correct. Consider the simple case of two downloaders on a 32k file. If the second download starts when the first one is at 2k, it will start at the 17k mark, but if it starts when the first download is at 4k, it will start at the 18k mark. The first download could finish far sooner if the second download finishes its section first and bisects the first one's active section.

In the cleanup phase, with lots of small remaining sections which won't be bisected, it probably makes sense to query for the particular section you need, since in that case you *do* know when the download will end, and there's no need to make a new connection when it does.

I can give more detailed examples if anybody wants them.

-Bram Cohen

"Markets can remain irrational longer than you can remain solvent" -- John Maynard Keynes

From gojomo at usa.net Thu Dec 6 12:21:25 2001 From: gojomo at usa.net (Gordon Mohr) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] multi-source downloads References: Message-ID: <011f01c17d2c$103d7260$c6efa4d8@golden>

I like this algorithm a lot -- but think another twist could help, too. (Bram and I discussed this briefly at the Bitzi party, but as I had a few drinks, I may not have made the case to him as well as I can now... :)

Let's consider the basic situation after two connections have opened: two downloads are each progressing towards hard-stop points. (Let's say further that the two downloads together manage to completely saturate your inbound link, so you don't bother to open additional connections.)

Inevitably, one download will reach its hard-stop point first. In order to maintain maximum throughput, a fresh request for part of the still-needed range will have to be issued. And no matter where you place that request, there's a chance that unpredictable download rates will require you to do this again, because again one connection will finish first.

If request overhead is in fact costly, or subject to complicated failure modes (e.g. remote nodes are enforcing slot limits, etc.), then I suggest that adding the capability to request a range IN REVERSE could offer additional benefits on top of Bram's scheme.

Here's how it would work:

First, start downloading the whole file, from any willing peer, back-to-front.

Second, start downloading the whole file, from any other peer, front-to-back.

Now, for the two-source case, you never need to reissue any requests. Wherever they meet, they meet, and you're done.

If, however, you want additional sources, repeatedly apply either...

The simple approach: Pick the largest remaining needed range and begin a download, from its midpoint, towards the slower of the two approaching frontiers.

The rocket-scientist approach: Pick the remaining needed range that is expected to take the longest to disappear. Making a reasonable assumption for the transfer rate of the next connection (for example, the average of the rates observed so far), pick a biased start point inside that needed range such that the new connection and the two existing downloads attacking the same range are all predicted to end at the same time.
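The rocket-scientist placement comes down to a little rate algebra: if the range [a, b) is being consumed from the left at rate_fwd and from the right at rate_rev, and the new connection is guessed to run at rate_new, everything finishes together after (b - a) / (rate_fwd + rate_new + rate_rev) seconds, and the new download should start where the forward frontier will be at that moment. Here's a minimal Python sketch under those assumptions; the function name, the signature, and the premise that rates stay constant are all illustrative.

    # Biased start point: choose x inside the needed range [a, b) so the
    # forward download (eating from a), the new connection (starting at x),
    # and the reverse download (eating from b) all finish at the same time.

    def biased_start(a, b, rate_fwd, rate_new, rate_rev):
        """Rates are bytes/sec; rate_new is only an estimate."""
        t = (b - a) / (rate_fwd + rate_new + rate_rev)  # predicted time to cover [a, b)
        return a + int(rate_fwd * t)  # forward frontier reaches this offset at time t

With a 3 MB needed range and all three rates guessed at 100 kB/s, the new download starts 1 MB in; if the new peer is expected to run twice as fast as the others, it starts a quarter of the way in and covers the middle half. When the guess is wrong, you fall back on the bisection rule: whoever finishes first attacks the largest remaining range.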
Of course, any time you have 3 or more connections, you again introduce the risk that a connection will reach a local end long before the whole process ends, and a follow-up request attacking a new range will become necessary. It seems intuitive, though, that this will happen less than half as often as in the forward-only scenario.

Considering the common case where Ranged HTTP GETs are the download substrate, it *is* somewhat costly to either (1) break a download prematurely via a close() (and perhaps lose one's spot in a download queue); or (2) adopt chunking overhead to allow mid-stream jumps to different ranges. So an "In-Reverse" option could prove helpful there -- and could be requested with a special header. If some peers don't support "In-Reverse", that's not fatal -- they'll just ignore the request and give you the range you wanted forwards. (So in fact you'd usually try to set up your reverse-gets first, so that if they inadvertently degenerate to forward-gets, you can adjust your subsequent requests accordingly.)

- Gojomo

____________________ Gordon Mohr, gojomo@ bitzi.com, Bitzi CTO _ http://bitzi.com _

----- Original Message ----- From: "Bram Cohen" To: Sent: Tuesday, December 04, 2001 3:56 PM Subject: [p2p-hackers] multi-source downloads

> Here's a simple algorithm for doing multi-source downloads when you have
> resume capability -
>
> To begin with, pre-allocate the entire file; you're going to be
> overwriting different sections of it.
>
> When the first connection is made, start downloading from the beginning.
>
> When the second connection is made, start downloading at the halfway point
> between where the first download currently is and the end.
>
> When the third connection is made, pick the larger of the two needed
> sections and start downloading at its halfway point.
>
> In general, when a new connection is made or a download catches up to the
> end of the file or the beginning of the next download section, restart
> that download at the midpoint of the largest section you still need. For
> sanity, have a maximum on the size of section you're willing to split.
>
> This algorithm just does multi-source downloading; it doesn't do 'true'
> swarming, because peers don't make parts of the file available before
> they've gotten the whole thing. I think the only systems which do that are
> BitTorrent and Edonkey2000, both of which have their own protocol which
> breaks the file into fixed-size pieces.
>
> -Bram Cohen
>
> "Markets can remain irrational longer than you can remain solvent"
> -- John Maynard Keynes
>
> _______________________________________________
> p2p-hackers mailing list
> p2p-hackers@zgp.org
> http://zgp.org/mailman/listinfo/p2p-hackers
>

From gojomo at usa.net Thu Dec 6 12:21:33 2001 From: gojomo at usa.net (Gordon Mohr) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] test Message-ID: <1a7801c17dda$77abb9d0$c6efa4d8@golden>

is this thing on?

From zooko at zooko.com Thu Dec 6 22:36:06 2001 From: zooko at zooko.com (Zooko) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] test In-Reply-To: Message from "Gordon Mohr" of "Wed, 05 Dec 2001 14:16:17 PST." <1a7801c17dda$77abb9d0$c6efa4d8@golden> References: <1a7801c17dda$77abb9d0$c6efa4d8@golden> Message-ID:

pong

but there is a problem at the server -- I can't log in to investigate.

--Z

> is this thing on?
> > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > From dmarti at zgp.ORG Fri Dec 7 10:53:01 2001 From: dmarti at zgp.ORG (Don Marti) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] Administrivia Message-ID: <20011207105223.A18375@zgp.org> If you have tried any action that requires admin approval on this list over the past couple of days, please do it again. Somebody's broken spamware sent Mailman 600-some posts requiring approval and broke the admin interface. So I blew away all the pending requests. (Zooko, it should be working for both of us now.) If you don't know if your mail went through, please check the archives: http://zgp.org/pipermail/p2p-hackers/2001-December/thread.html -- Don Marti What do we want? Free Dmitry! When do we want it? Now! http://zgp.org/~dmarti dmarti@zgp.org Free the web, burn all GIFs. KG6INA http://burnallgifs.org/ From bram at gawth.com Fri Dec 7 14:18:02 2001 From: bram at gawth.com (Bram Cohen) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] CFP: Codecon (deadline coming up!) Message-ID: The deadline for submissions is coming up in less than a month, so now's the time to send them in. CALL FOR PRESENTATIONS: CODECON 2002 http://www.codecon.org/ CodeCon 2002, scheduled for February 15, 16, and 17 in San Francisco, California, is the premier event in 2002 for the P2P, cypherpunk, and network/security application developer community. It is a workshop for developers of real-world applications that support individual liberties. During the first two days, our policy is "bring your own code"; while those not demonstrating software are welcome to attend, the focus is primarily on developer discussion. The final day of the workshop is intended to be more inclusive, consisting of public and press demonstrations, interviews, panels and a public session allowing a larger number of presenters to demonstrate their projects in a more informal setting. All presentations must be accompanied by functional applications, ideally open source. Presenters must be one of the active developers of the code in question. CodeCon strongly encourages presenters from non-commercial and academic backgrounds to attend for the purposes of collaboration and the sharing of knowledge by providing free registration to workshop presenters and highly-discounted registration to full-time students. Public session presenters and approved members of the press will receive free registration for the public session on Sunday. IMPORTANT DATES Submissions open: 1 October 2001 Final submission deadline: 1 January 2002 Final notification of acceptance: 15 January 2002 Conference begins: 15 February 2002 Public session and public demonstrations: 17 February 2002 Post-conference web-based proceedings: 15 March 2002 SUGGESTED TOPICS The focus of CodeCon is on running applications which: * use one or more of: cryptography, steganography, distributed network architectures, peer to peer communications, anonymity or pseudonymity * enhance individual power and liberty * can be discussed freely, either by virtue of being open source or having a published protocol, and preferably free of intellectual property restrictions * are generally useful, either directly to a large number of users, or as an example of technology applicable to a larger audience Examples of excellent presentations include Mixmaster remailers and extensions, OpenNap, Swarmcast, Mojo Nation, Magic Money, and OpenPGP applications. 
Novelty in technical approaches, security assumptions, and end-user functionality are excellent properties. Presentations about basic technologies, such as a new cipher or hash, non-interesting vulnerabilities in existing applications, or discussions of unimplemented protocols are better suited for other conferences. The guidelines for the CodeCon public session on Sunday are less stringent than the main workshop; presentations which are more tangential to CodeCon's focus may be accepted. FORMAT OF PRESENTATIONS (main workshop) Paper and Q&A ------------- For those most comfortable with a traditional conference format, we will accept papers up to 25 pages. We encourage HTML or plain ASCII submissions, but can accept PostScript, PDF, or LaTeX. We will distribute papers in advance of the conference, and will provide 30 or 60 minutes for discussion and Q&A, at the presenter's discretion. In exceptional cases, we will accept anonymous papers and conduct either a non-directed discussion or a Q&A session directed by proxy. All papers should be accompanied by source code or an application. When possible, we would prefer that the application be available for interactive use during the workshop, either on a presenter-provided demonstration machine or one of the conference kiosks. Additionally, during the paper presentation, some use of this demo must be made; it may be relatively brief, but a demonstration of the running application is essential. Interactive demo ---------------- In addition to the traditional conference paper format, we encourage highly interactive presentations. Throughout the event, we will have several kiosks and local servers available for demonstration purposes. We also strongly encourage presenters to bring their own hardware. Application demos can be up to 20 minutes, followed by a period of up to 40 minutes for Q&A, which can include demonstration of additional features of the application not covered in the main presentation. If desired by the presenter, we can distribute URLs of applications several days before the workshop to allow attendees to familiarize themselves with the basics of applications prior to the workshop sessions. Panel ----- In areas where multiple projects fall roughly in the same domain, the most efficient presentation may be a panel with one or more developers from each team. These developers may then individually demonstrate their applications, followed by discussion among the panel and Q&A with the other attendees as to differences in design goals, implementation, and other aspects of the systems. If we receive multiple submissions from related projects for papers or demos, we may suggest to the presenters that they combine into a panel. Additionally, presenters are free to submit jointly as a pre-selected panel. There is some flexibility in requirements and formats for presentations; please enquire if you would like to use an alternate form. FORMAT OF PRESENTATIONS (public session) On the afternoon of Sunday 17 February, we will set aside a substantial amount of time for 5 minute-or-less project public session presentations. Other events on this day, including panels and main presentations, will be targeted at members of the press and public, so brief presentations on Sunday will reach a wide audience. Presenters from the first two days who wish to make an additional public session presentation may do so. SUBMISSION DETAILS Presentations must be performed by one of the active developers on the project. That's the rule -- no code, no mike. 
Multiple people may be involved in a presentation. You do get in free if you're part of a presentation even if you don't speak during it, so creativity (within reason) is encouraged. The workshop language is English, for both presentations and papers. Ideally, demonstrations should be usable by attendees with 802.11b connected devices either via a web interface, or locally on Windows, UNIX-like, or MacOS platforms. Cross-platform applications are most desirable. Our venue may be 21+. If you are submitting and are under 21, please advise the program committee; we may consider alternate venues for one or more days of the event. If you have a specific day on which you would prefer to present, please advise us. Main workshop submissions should include in the plain-text body of email to submissions@codecon.org the following information: - Name of presenter - Name of others involved in project attending conference - Title of presentation - Brief summary of topic - URL or attachment of example code (must be received by the final submission deadline) - Brief project history - Brief summary of demo, or abstract of paper - Any other details considered relevant Public session submissions should include in the plain-text body of email to submissions@codecon.org the following information: - Name of presenter - Title of presentation - Brief summary of topic - URL or attachment with example code - Any other details PROGRAM COMMITTEE Bram Cohen, BitTorrent Dan Egnor, ofb.net Jered Floyd, Permabit Ian Grigg, Systemics Ryan Lackey, HavenCo Don Marti, LinuxJournal Guido Sanchez, New Hack City Len Sassaman, quickie.net Bill Stewart, AT&T Brandon Wiley, Freenet Jamie Zawinski, DNA Lounge COSTS Recognizing that many of the developers of the most interesting cypherpunk applications are unable to afford accommodations and other expenses in San Francisco, CodeCon will attempt to locate housing and otherwise assist with issues for presenters on a case-by-case basis. Please contact codecon-admin@codecon.org if your submission is accepted but you require assistance to attend. SPONSORSHIP If your organization is interested in sponsoring CodeCon, we would love to hear from you. In particular, we are looking for sponsors for social meals and parties on any of the three days of the conference, as well as sponsors of the conference as a whole, prizes or awards for quality presentations, and assistance with transportation or accommodation for presenters with limited resources. If you might be interested in sponsoring any of these aspects, please contact the conference organizers at codecon-admin@codecon.org. QUESTIONS If you have questions about CodeCon, or would like to contact the organizers, please mail codecon-admin@codecon.org. Please note this address is only for questions and administrative requests, and not for workshop presentation submissions. From burton at openprivacy.org Sat Dec 15 16:07:01 2001 From: burton at openprivacy.org (Kevin A. Burton - burtonator) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] CFP: Codecon (deadline coming up!) In-Reply-To: References: Message-ID: <87bsh0kozu.fsf@universe.yi.org> Bram Cohen writes: > The deadline for submissions is coming up in less than a month, so now's > the time to send them in. > > CALL FOR PRESENTATIONS: CODECON 2002 > http://www.codecon.org/ > > CodeCon 2002, scheduled for February 15, 16, and 17 in San Francisco, > California, is the premier event in 2002 for the P2P, cypherpunk, and > network/security application developer community. 
It is a workshop for
> developers of real-world applications that support individual
> liberties.

Hey Bram.

I *really* want to talk about OpenPrivacy, and specifically Reptile, at CodeCon.

http://reptile.openprivacy.org

Basically, Reptile is an Open Source P2P system (with JXTA, Freenet bindings, etc.) which includes a reputation framework and a content syndication framework. The goal is to build a distributed network of knowledge that can improve the way democracy works.

The timeframe for CodeCon is perfect, because I want to get 1.0 released around February. Maybe at the conference!

What do you want in an official proposal? Ideally I would like to avoid writing a paper at the current point in time, as I want to concentrate on some other areas. Can I just give you an abstract and then work up the official talk before the conference? Usually I like to work from an outline and draw up a KPresenter presentation. Your CFP requested papers of up to 25 pages in length.

BTW: really excited about CodeCon. It is a GREAT idea and it seems like it will be really fun.

BTW2: sorry about not sending this off the first time I saw the CFP.

Kevin

-- Kevin A. Burton ( burton@apache.org, burton@openprivacy.org, burtonator@acm.org ) Location - San Francisco, CA, Cell - 415.595.9965 Jabber - burtonator@jabber.org, Web - http://relativity.yi.org/

Microsoft VBScript compilation error '800a03e9' Out of memory ?

From burton at openprivacy.org Sat Dec 15 18:06:01 2001 From: burton at openprivacy.org (Kevin A. Burton - burtonator) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] CFP: Codecon (deadline coming up!) In-Reply-To: <87bsh0kozu.fsf@universe.yi.org> References: <87bsh0kozu.fsf@universe.yi.org> Message-ID: <874rmrj4vz.fsf@universe.yi.org>

burton@openprivacy.org (Kevin A. Burton - burtonator) writes:

> Bram Cohen writes:
>
> > The deadline for submissions is coming up in less than a month, so now's
> > the time to send them in.
> >
> > CALL FOR PRESENTATIONS: CODECON 2002
> > http://www.codecon.org/
> >
> > CodeCon 2002, scheduled for February 15, 16, and 17 in San Francisco,
> > California, is the premier event in 2002 for the P2P, cypherpunk, and
> > network/security application developer community. It is a workshop for
> > developers of real-world applications that support individual
> > liberties.
>
> Hey Bram.

Sorry... this wasn't intended for the list

-- Kevin A. Burton ( burton@apache.org, burton@openprivacy.org, burtonator@acm.org ) Location - San Francisco, CA, Cell - 415.595.9965 Jabber - burtonator@jabber.org, Web - http://relativity.yi.org/

I would rather live with a certain amount of private terrorism than with government totalitarianism. -- Harvey Silvergate