From Euseval at aol.com Tue Nov 2 18:00:54 2004 From: Euseval at aol.com (Euseval@aol.com) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Jetiants.tk 0.5.3. with edonkey support (anonymous File Sharing in edonkey style) Message-ID: <0D421D3D.536EAAE3.00175F91@aol.com> Read the description here http://groups.yahoo.com/group/jetiantsp2p/message/775 or at the homepage: http://www.jetiants.tk (at the end) From seberino at spawar.navy.mil Sat Nov 6 05:24:14 2004 From: seberino at spawar.navy.mil (seberino@spawar.navy.mil) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] MixMinion vs. onion routing & GNUnet question Message-ID: <20041106052414.GA31089@spawar.navy.mil> How does MixMinion protect against traffic analysis? Traffic analysis seems to be the reason I think onion routing is unacceptable for a widely used p2p system trying to protect from powerful adversaries like the Chinese government. GNUnet seems like a very good project. Probably the best I've seen. It is a modular framework so pieces can be borrowed and built upon at many levels. Would you agree? Do you know how MixMinion differs from GNUnet's protection against traffic analysis? Chris From Euseval at aol.com Sat Nov 6 09:23:59 2004 From: Euseval at aol.com (Euseval@aol.com) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] MixMinion vs. onion routing & GNUnet question Message-ID: <3953346F.1F1AEA64.00175F91@aol.com> There is a 4th alternative: a mixer already built into the p2p program, which is also able to take ed2k links: http://www.jetiants.tk This is at an advanced stage of development, while the mix layers haven't yet reached a stage where they have been integrated with any p2p application. In an e-mail of Sat, 6 Nov 2004, 6:24 CET, seberino@spawar.navy.mil writes: >How does MixMinion protect >against traffic analysis? Traffic analysis seems to be >the reason I think onion routing is unacceptable for >a widely used p2p system trying to protect from powerful >adversaries like the Chinese government. > >GNUnet seems like a very good project. Probably the >best I've seen. It is a modular framework so pieces can be >borrowed and built upon at many levels. > >Would you agree? Do you know how MixMinion differs >from GNUnet's protection against traffic analysis? > >Chris >_______________________________________________ >p2p-hackers mailing list >p2p-hackers@zgp.org >http://zgp.org/mailman/listinfo/p2p-hackers >_______________________________________________ >Here is a web page listing P2P Conferences: >http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > From fis at wiwi.hu-berlin.de Mon Nov 8 10:14:49 2004 From: fis at wiwi.hu-berlin.de (fis@wiwi.hu-berlin.de) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] MixMinion vs. onion routing & GNUnet question In-Reply-To: <20041106052414.GA31089@spawar.navy.mil> References: <20041106052414.GA31089@spawar.navy.mil> Message-ID: <16783.18201.699714.996917@gargle.gargle.HOWL> seberino@spawar.navy.mil writes: > From: seberino@spawar.navy.mil > Date: Fri, 5 Nov 2004 21:24:14 -0800 > Subject: [p2p-hackers] MixMinion vs. onion routing & GNUnet question > [...] > GNUnet seems like a very good project. Probably the > best I've seen. It is a modular framework so pieces can be > borrowed and built upon at many levels. 
These may be naive questions (I don't know GNUnet too well), but hopefully I am about to learn something: GNUnet tries to achieve at least three goals at the same time that are not perfectly understood and should rather be treated individually: - anonymity - censor resistance - high-performance document distribution What makes you believe the GNUnet solution for any of these aims can be factored out and used somewhere else? Also, don't the shortcomings of mix networks also apply to Freenet- / GNUnet-style anonymization schemes? In Freenet (at least in some ancient version that I once had a closer look at), I know security is even worse (though still not too bad in my eyes), because the packets don't all travel well-specified mix paths but take shortcuts. To put it more clearly: A network has "perfect anonymity" if any peer in that network can send and receive (variants: a - send only; b - receive only) packets without the contents of the packets being associated with its IP address by the adversary, and it has "high anonymity" if it has perfect anonymity in every transaction with high probability. Then I suspect that no matter what (existing) adversary model you pick, plugging a good mix network into your design on the transport layer gives you the highest anonymity possible. (And at a very good price, too: You can throw more resources at other design requirements, you get more mature anonymity technology, and you can profit from improvements in the field without changing your design at all.) Of course I'd need to define "good mix network" now. But perhaps somebody can already counter or confirm this as is? -matthias From paul at paulbaranowski.org Mon Nov 8 15:10:09 2004 From: paul at paulbaranowski.org (Paul Baranowski) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Anti-censorship Proxy Networks In-Reply-To: <16783.18201.699714.996917@gargle.gargle.HOWL> References: <20041106052414.GA31089@spawar.navy.mil> <16783.18201.699714.996917@gargle.gargle.HOWL> Message-ID: <418F8C51.9010405@paulbaranowski.org> An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20041108/b88eef49/attachment.htm From paul at paulbaranowski.org Mon Nov 8 15:20:53 2004 From: paul at paulbaranowski.org (Paul Baranowski) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Anti-censorship Proxy Networks (without the HTML this time - sorry!) Message-ID: <418F8ED5.7030207@paulbaranowski.org> First I want to thank everyone for posting such good papers on this mailing list - it has given me lots of good reading material! Now I have a chance to give back to the community...I've been researching the problem of web censorship and how to design a system to get around it. Initially I wanted to build a P2P mixnet so that the users would also have anonymity. It turns out that due to various attacks it isn't possible to build a "totally decentralized" P2P network - instead it looks more like a star where one server manages many proxy nodes. This is one example where p2p just isn't possible (I know, blasphemy on this mailing list!). Zooko encouraged me to write down my findings, and this is what I came up with: Not Too Few, Not Too Many: Enforcing Minimum Network Knowledge In Distributed Systems http://www.peek-a-booty.org/pbhtml/modules.php?name=Downloads&d_op=getit&lid=12 Comments are welcome. Abstract: Some distributed systems require that each node know as few other nodes as possible while still maintaining connectivity to the system. 
We define this state as "minimum network knowledge". In particular, this is a requirement for Internet censorship circumvention systems. We describe the constraints on such systems: 1) the Sybil attack, 2) the man-in-the-middle attack, and 3) the spidering attack. The resulting design requirements are thus: 1) An address receiver must discover addresses such that the network Node Arrival Rate <= Node Discovery Rate <= Node Departure Rate, 2) There must be a single centralized trusted address provider, 3) The address provider must uniquely identify address receivers, and 4) The discovery mechanism must involve reverse Turing tests (A.K.A. CAPTCHAs). The "minimum network knowledge" requirement also puts limits on the type of routing the network can perform. We describe a new attack, called the Boomerang attack, where it is possible to discover all the nodes in a network if the network uses mixnet routing. Two other well-known attacks limit the types of routing mechanisms: the distributed denial-of-service attack and the untraceable cracker attack. We describe three routing mechanisms that fit within the constraints: single, double, and triple-hop routing. Single-hop is a basic proxy setup, double-hop routing protects the user's data from snooping proxies, and triple hop hides proxy addresses from trusted exit nodes. From seberino at spawar.navy.mil Mon Nov 8 17:36:40 2004 From: seberino at spawar.navy.mil (seberino@spawar.navy.mil) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Anti-censorship Proxy Networks In-Reply-To: <418F8C51.9010405@paulbaranowski.org> References: <20041106052414.GA31089@spawar.navy.mil> <16783.18201.699714.996917@gargle.gargle.HOWL> <418F8C51.9010405@paulbaranowski.org> Message-ID: <20041108173640.GG28743@spawar.navy.mil> > It turns out that due to various attacks that it isnt possible > to build a "totally decentralized" P2P network. So what is Gnutella then? Do you mean to say *anonymous* decentralized p2p networks are impossible?? GNUnet is working *right now*. So are you really saying it isn't possible to build a *perfectly secure* anonymous p2p system? Are you claiming you've *proved* this or just that you found a new attack GNUnet has to work around? Chris From seberino at spawar.navy.mil Mon Nov 8 17:41:48 2004 From: seberino at spawar.navy.mil (seberino@spawar.navy.mil) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] MixMinion vs. onion routing & GNUnet question In-Reply-To: <16783.18201.699714.996917@gargle.gargle.HOWL> References: <20041106052414.GA31089@spawar.navy.mil> <16783.18201.699714.996917@gargle.gargle.HOWL> Message-ID: <20041108174148.GH28743@spawar.navy.mil> > These may be naive questions (I don't know GNUnet too well), but > hopefully I am about to learn something: GNUnet tries to achieve at > least three goals at the same time that are not perfectly understood > and should rather be treated individually: > > - anonymity > - censor resistance > - high-performance document distribution Performance is a secondary goal to the first 2 in GNUnet. The first 2 are related so I'm not sure how or why they need to be treated separately. > Also, don't the shortcomings of mix networks also apply to Freenet- / > GNUnet-style anonymization schemes? > I suspect that no matter what (existing) adversary > model you pick, plugging a good mix network into your design on the > transport layer gives you the highest anonymity possible. I don't know how GNUnet's architecture compares to mix networks. 
I *do* know that GNUnet attempts to protect against traffic analysis. If you think mix networks are better, they had better have good protection against traffic analysis. Can you point us to any good URLs or papers on how mix networks protect against traffic analysis? Chris From paul at paulbaranowski.org Mon Nov 8 17:50:23 2004 From: paul at paulbaranowski.org (Paul Baranowski) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Anti-censorship Proxy Networks In-Reply-To: <20041108173640.GG28743@spawar.navy.mil> References: <20041106052414.GA31089@spawar.navy.mil> <16783.18201.699714.996917@gargle.gargle.HOWL> <418F8C51.9010405@paulbaranowski.org> <20041108173640.GG28743@spawar.navy.mil> Message-ID: <418FB1DF.1030807@paulbaranowski.org> Oops, I might have worded that incorrectly. I meant that a totally decentralized anti-censorship proxy network is impossible. Note that this isn't file-sharing anti-censorship; it is to allow a censored user (a user behind a national firewall) access (e.g. with a web browser) to a censored computer (e.g. web site) on the internet. The problem I was pointing out with regard to mixnets is that you can't have an anonymous anti-censorship proxy network, in the sense that the proxies will know who they are talking to on the user side. Of course, a user could make themselves anonymous by using an anonymizing proxy within their censored environment to connect to the anti-censorship proxy, but they would have to trust that the anonymizing proxy is not controlled by the censors. - Paul seberino@spawar.navy.mil wrote: >> It turns out that due to various attacks it isn't possible >> to build a "totally decentralized" P2P network. > > > So what is Gnutella then? Do you mean to say *anonymous* > decentralized p2p networks are impossible?? GNUnet is working *right > now*. So are you really saying it isn't possible to build > a *perfectly secure* anonymous p2p system? Are you claiming > you've *proved* this or just that you found a new attack GNUnet > has to work around? > > Chris > From jdd at dixons.org Mon Nov 8 17:54:57 2004 From: jdd at dixons.org (Jim Dixon) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Anti-censorship Proxy Networks In-Reply-To: <20041108173640.GG28743@spawar.navy.mil> References: <20041106052414.GA31089@spawar.navy.mil> <16783.18201.699714.996917@gargle.gargle.HOWL> <418F8C51.9010405@paulbaranowski.org> <20041108173640.GG28743@spawar.navy.mil> Message-ID: On Mon, 8 Nov 2004 seberino@spawar.navy.mil wrote: > > It turns out that due to various attacks it isn't possible > > to build a "totally decentralized" P2P network. > > So what is Gnutella then? Do you mean to say *anonymous* > decentralized p2p networks are impossible?? GNUnet is working *right > now*. So are you really saying it isn't possible to build > a *perfectly secure* anonymous p2p system? Are you claiming > you've *proved* this or just that you found a new attack GNUnet > has to work around? If an anonymous decentralized p2p network has N members and if the adversary has perfect knowledge of message traffic (the standard assumption), then a participant's anonymity is at best 1/N. This is not perfect anonymity, whatever N might be. It certainly is far from perfect for small N, especially for anyone sending or receiving a number of 'anonymous' messages. In other words, perfect security in an anonymous p2p system of any type is impossible. Pretty good security may be possible if you are careful. 
-- Jim Dixon jdd@dixons.org tel +44 117 982 0786 mobile +44 797 373 7881 http://jxcl.sourceforge.net Java unit test coverage http://xlattice.sourceforge.net p2p communications infrastructure From Euseval at aol.com Mon Nov 8 17:50:23 2004 From: Euseval at aol.com (Euseval@aol.com) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Re: anon-layer comparison Message-ID: <45A63C67.603E1EFC.00175F91@aol.com> jetiants http://www.jetiants.tk Gnu-net http://www.ovmj.org/GNUnet/ I2p http://www.i2p.net/ Tor http://freehaven.net/tor/ These may be naive questions (I don't know GNUnet too well), but > hopefully I am about to learn something: GNUnet tries to achieve at > least three goals at the same time that are not perfectly understood > and should rather be treated individually: > > - anonymity > - censor resistance > - high-performance document distribution Performance is a secondary goal to the first 2 in GNUnet. The first 2 are related so I'm not sure how or why they need to be treated separately. > Also, don't the shortcomings of mix networks also apply to Freenet- / > GNUnet-style anonymization schemes? > I suspect that no matter what (existing) adversary > model you pick, plugging a good mix network into your design on the > transport layer gives you the highest anonymity possible. I don't know how GNUnet's architecture compares to mix networks. I *do* know that GNUnet attempts to protect against traffic analysis. If you think mix networks are better, they better have good protection against traffic analysis. Can you point us to any good URLs or papers on how mix networks protect against traffic analysis? Chris From mccoy at mad-scientist.com Mon Nov 8 21:30:18 2004 From: mccoy at mad-scientist.com (Jim McCoy) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Anti-censorship Proxy Networks In-Reply-To: References: <20041106052414.GA31089@spawar.navy.mil> <16783.18201.699714.996917@gargle.gargle.HOWL> <418F8C51.9010405@paulbaranowski.org> <20041108173640.GG28743@spawar.navy.mil> Message-ID: <6327087C-31CD-11D9-944B-000A95BD758E@mad-scientist.com> On Nov 8, 2004, at 9:54 AM, Jim Dixon wrote: > On Mon, 8 Nov 2004 seberino@spawar.navy.mil wrote: > >>> It turns out that due to various attacks that it isnt possible >>> to build a "totally decentralized" P2P network. >> >> So what is Gnutella then? Do you mean to say *anonymous* >> decentralized p2p networks are impossible?? GNUnet is working *right >> now*. I think the point is that it is not anonymous. I think maybe the problem here is that you might be thinking of this in terms of an either/or situation (e.g. either the system takes steps, no matter how useless they may end up being, to protect user anonymity or they do not) when most of us tend to see the question of how "anonymous" a system is in terms of various shades of grey. Anonymous, decentralized, operational: pick two. > If an anonymous decentralized p2p network has N members and if the > adversary has perfect knowledge of message traffic (the standard > assumption), then a participant's anonymity is at best 1/N. And this is only for a passive adversary. An active adversary (e.g. one that can use DDoS attacks to take out specific nodes whenever it wants to) can make life even more unpleasant. Jim From mccoy at mad-scientist.com Mon Nov 8 21:36:36 2004 From: mccoy at mad-scientist.com (Jim McCoy) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] MixMinion vs. 
onion routing & GNUnet question In-Reply-To: <20041108174148.GH28743@spawar.navy.mil> References: <20041106052414.GA31089@spawar.navy.mil> <16783.18201.699714.996917@gargle.gargle.HOWL> <20041108174148.GH28743@spawar.navy.mil> Message-ID: <444F3E96-31CE-11D9-944B-000A95BD758E@mad-scientist.com> On Nov 8, 2004, at 9:41 AM, seberino@spawar.navy.mil wrote: >> These may be naive questions (I don't know GNUnet too well), but >> hopefully I am about to learn something: GNUnet tries to achieve at >> least three goals at the same time that are not perfectly understood >> and should rather be treated individually: >> >> - anonymity >> - censor resistance >> - high-performance document distribution > > Performance is a secondary goal to the first 2 in GNUnet. The first > 2 are related so I'm not sure how or why they need to be treated > separately. The first two can be related, but do not have to be. An anonymous system is one where the actions of users can't be linked to the real person performing the action and is mostly a factor for reading/downloading data. Censorship resistance seems to be best applied to the security/anonymity of publication. If I can read data from a network without you being able to discover who I am then the system is anonymous. If I can publish data into the network and you cannot prevent me from doing so or prevent other users from accessing the data it is resistant to censorship. A data service can provide one feature and not provide the other (in fact, it is much easier to design a system which actually does only one of these two tasks.) Jim From lutianbo at software.ict.ac.cn Tue Nov 9 02:44:21 2004 From: lutianbo at software.ict.ac.cn (Lutianbo) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] permutation Message-ID: <003001c4c606$04ad5bb0$9402000a@ictltbo> Hi all, Would you please tell me some papers about uniform random permutation of an array? Thank you! Regards. ---Lu -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20041109/40f7f180/attachment.html From bryan.turner at pobox.com Tue Nov 9 17:31:24 2004 From: bryan.turner at pobox.com (Bryan Turner) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] permutation In-Reply-To: Message-ID: Lu, This problem is the same as the card-shuffling problem, you may have more luck locating information on card-shuffling than random array permutations. The basic algorithm goes: Given N items, randomly choose one (1..N). Place that item as the first in the permuted array. Of the remaining N-1 items, choose another, it becomes the second item.. and so on. The algorithm can be performed in O(N) time by using the Swap() function. Just keep a pointer to the current element of the array and move it down the array one step each iteration. Upon choosing the item for the current slot, swap the item which is currently in the slot with the newly selected one. Thus, you can move through the entire array in O(N) swaps, and zero additional space. Now, choose a random number generator which produces a uniform random distribution. Using various math functions you can reduce the output to the desired range (I prefer truncation, using multiple-rounds to find a number in range, thus ensuring the reduction math does not affect the distribution). Once you have a number within the range, the rest should be trivial. 
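For concreteness, here is a minimal Python sketch of the in-place, swap-based version of the algorithm described above (the classic Fisher-Yates / Knuth shuffle). It is an illustrative sketch only; the function name and the use of Python's random.randrange are assumptions of this example, not code taken from the thread:

    import random

    def fisher_yates_shuffle(items):
        """Uniformly shuffle `items` in place using O(N) swaps and O(1) extra space."""
        n = len(items)
        for i in range(n - 1):
            # Choose the element for slot i uniformly from the not-yet-placed tail items[i:].
            j = random.randrange(i, n)   # uniform integer in [i, n)
            items[i], items[j] = items[j], items[i]

    # Example usage:
    arr = list(range(10))
    fisher_yates_shuffle(arr)
    print(arr)

Python's standard library random.shuffle implements essentially this algorithm, so in practice it can be used directly.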
--Bryan bryan.turner@pobox.com -----Original Message----- From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org]On Behalf Of Lutianbo Sent: Monday, November 08, 2004 9:44 PM To: p2p-hackers@zgp.org Subject: [p2p-hackers] permutation Hi all, Would you please tell me some papers about uniform random permutation of an array? Thank you! Regards. ---Lu -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20041109/f1d21df0/attachment.htm From webmaster at software-x.org Sun Nov 7 17:35:04 2004 From: webmaster at software-x.org (RLWagner) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] permutation References: <003001c4c606$04ad5bb0$9402000a@ictltbo> Message-ID: <009601c4c4f0$20fe7a40$040aa8c0@softwaredpxjdv> Random Permutation of Index Array http://www.dfanning.com/code_tips/randperm.html http://www.owlnet.rice.edu/~elec428/rng/per1.html ----- Original Message ----- From: Lutianbo To: p2p-hackers@zgp.org Sent: Monday, November 08, 2004 8:44 PM Subject: [p2p-hackers] permutation Hi all, Would you please tell me some papers about uniform random permutation of an array? Thank you! Regards. ---Lu ------------------------------------------------------------------------------ _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20041107/f10bff91/attachment.html From mgp at ucla.edu Tue Nov 9 17:58:26 2004 From: mgp at ucla.edu (Michael Parker) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] permutation In-Reply-To: <003001c4c606$04ad5bb0$9402000a@ictltbo> References: <003001c4c606$04ad5bb0$9402000a@ictltbo> Message-ID: <41910542.5070700@ucla.edu> See http://c2.com/cgi/wiki?LinearShuffle I thought it was interesting that the following code does NOT do fair shuffling:

    for (int i = 0; i < NUM_ELEMENTS; ++i) {
        int other_index = random() % NUM_ELEMENTS;
        int temp = array[other_index];
        array[other_index] = array[i];
        array[i] = temp;
    }

Not exactly intuitive. - Michael Lutianbo wrote: > Hi all, > Would you please tell me some papers about uniform random permutation > of an array? > Thank you! > Regards. > ---Lu > >------------------------------------------------------------------------ > >_______________________________________________ >p2p-hackers mailing list >p2p-hackers@zgp.org >http://zgp.org/mailman/listinfo/p2p-hackers >_______________________________________________ >Here is a web page listing P2P Conferences: >http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > From mmukarrams at yahoo.com Wed Nov 10 04:12:17 2004 From: mmukarrams at yahoo.com (mmukarrams) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] permutation In-Reply-To: <009601c4c4f0$20fe7a40$040aa8c0@softwaredpxjdv> Message-ID: <20041110041251.1AFD73FDE7@capsicum.zgp.org> I recall an algo from Knuth. I hope I remember it correctly.

    generate an array of size N

    for i = 1 to N
    begin
        pick two random numbers, a and b, in [0..N-1]
        swap array[a] and array[b]
    end

you have your random permutation of size N. 
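Whether a procedure like this actually produces uniform permutations can be checked empirically by tallying how often each permutation of a tiny list appears over many trials. The following Python sketch (the helper name and the trial count are illustrative assumptions, not part of the thread) applies such a tally to the N-random-transpositions recipe above:

    import random
    from collections import Counter
    from math import factorial

    def random_transpositions(n):
        """Apply N swaps of two uniformly chosen positions, as in the recipe above."""
        arr = list(range(n))
        for _ in range(n):
            a = random.randrange(n)
            b = random.randrange(n)
            arr[a], arr[b] = arr[b], arr[a]
        return tuple(arr)

    trials = 600000
    counts = Counter(random_transpositions(3) for _ in range(trials))
    expected = trials / factorial(3)
    for perm, seen in sorted(counts.items()):
        print(perm, seen, "(uniform would be about %d)" % expected)

For a uniform shuffle each of the 3! = 6 permutations should appear close to trials/6 times; systematic deviations far beyond the sampling noise (roughly the square root of trials/6) indicate bias.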
-- Muhammad Mukarram Bin Tariq _____ From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org] On Behalf Of RLWagner Sent: Sunday, November 07, 2004 10:35 PM To: Peer-to-peer development. Subject: Re: [p2p-hackers] permutation Random Permutation of Index Array http://www.dfanning.com/code_tips/randperm.html http://www.owlnet.rice.edu/~elec428/rng/per1.html ----- Original Message ----- From: Lutianbo To: p2p-hackers@zgp.org Sent: Monday, November 08, 2004 8:44 PM Subject: [p2p-hackers] permutation Hi all, Would you please tell me some papers about uniform random permutation of an array? Thank you! Regards. ---Lu _____ _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20041110/74a5f4b1/attachment.htm From PaulLambert at AirgoNetworks.Com Wed Nov 10 22:34:37 2004 From: PaulLambert at AirgoNetworks.Com (Paul Lambert) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] RE: permutation Message-ID: <3FFBC907DD03A34CA4410C5C745DEB1204D9BC45@wnimail.WoodsideNet.Com> > Message: 1 > Date: Wed, 10 Nov 2004 09:12:17 +0500 > From: "mmukarrams" I recall an algo from Knuth. > I hope I remember it correctly. > > generate an array of size N > > for i = 1 to N > begin > pick two random numbers, a and b in [0..N-1] > swap array [a] and array [b] > end I prefer:

    def shuffle(array):
        """ Function to randomly permute a list """
        n = len(array)
        for i in range(n):
            j = int(random()*n)
            array[j], array[i] = array[i], array[j]

It's much better to only pick one of the index values randomly. Paul From bert at web2peer.com Thu Nov 11 00:19:09 2004 From: bert at web2peer.com (bert@web2peer.com) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] RE: permutation Message-ID: <20041111001909.E8A692F910@ws6-3.us4.outblaze.com> It's (typically) much better to use a solution that generates uniform random permutations (one in which each of the n! permutations may appear with equal probability). Both your solutions fail in this regard. See the earlier replies on this topic which already discuss or link to better solutions. Plus this is WAY off topic.... ----- Original Message ----- From: "Paul Lambert" To: Subject: [p2p-hackers] RE: permutation Date: Wed, 10 Nov 2004 14:34:37 -0800 > > > > Message: 1 > > Date: Wed, 10 Nov 2004 09:12:17 +0500 > > From: "mmukarrams" I recall an algo from Knuth. > > > I hope I remember it correctly. > > > > generate an array of size N > > > > for i = 1 to N > > begin > > pick two random numbers, a and b in [0..N-1] > > swap array [a] and array [b] > > end > > I prefer: > > def shuffle(array): > """ Function to randomly permute a list """ > n = len(array) > for i in range(n): > j = int(random()*n) > array[j], array[i] = array[i], array[j] > > It's much better to only pick one of the index values randomly. > > Paul > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > From hopper at omnifarious.org Thu Nov 11 16:48:39 2004 From: hopper at omnifarious.org (Eric M. 
Hopper) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] permutation In-Reply-To: <20041110041251.1AFD73FDE7@capsicum.zgp.org> References: <20041110041251.1AFD73FDE7@capsicum.zgp.org> Message-ID: <1100191719.1808.74.camel@monster.omnifarious.org> On Wed, 2004-11-10 at 09:12 +0500, mmukarrams wrote: > I recall an algo from Knuth. I hope I remember it correctly. > > generate an array of size N > > for i = 1 to N > begin > pick two random numbers, a and b in [0..N-1] > swap array [a] and array [b] > end > > you have your random permutation of size N. Nope. In Python, because I like real programming languages for stuff like this:

    def shuffle_list(l, random):
        """shuffle_list(l, random) -> None

        l is the list to be shuffled. It will be altered in the operation.
        random is a function producing uniformly distributed random numbers
        in the range [0, 1)."""
        for i in xrange(0, len(l) - 1):
            # Pick the swap target uniformly from the not-yet-fixed tail l[i:].
            swap_idx = i + int(random() * (len(l) - i))
            l[i], l[swap_idx] = l[swap_idx], l[i]

One problem with this is that most random number generators have sequences that are too small to actually generate all possible permutations. For a list of length 50, there are ≈ 2**214 permutations. If your random number generator has a period of 2**32, then you have no hope of generating the vast majority of those permutations. Have fun (if at all possible), -- The best we can hope for concerning the people at large is that they be properly armed. -- Alexander Hamilton -- Eric Hopper (hopper@omnifarious.org http://www.omnifarious.org/~hopper) -- -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 185 bytes Desc: This is a digitally signed message part Url : http://zgp.org/pipermail/p2p-hackers/attachments/20041111/dd813f19/attachment.pgp From seth.johnson at RealMeasures.dyndns.org Wed Nov 17 06:28:27 2004 From: seth.johnson at RealMeasures.dyndns.org (Seth Johnson) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Seth Johnson: Request for the P2P Workshop at the FTC Message-ID: <419AEF8B.F3EA6F17@RealMeasures.dyndns.org> Below is my request to participate in the FTC's Workshop on "P2P Filesharing," details of which may be found at: > http://www.ftc.gov/bcp/workshops/filesharing/index.htm > http://www.ftc.gov/os/2004/10/041015p2pfrn.pdf Seth --- Request to Participate Federal Trade Commission Peer to Peer Filesharing Workshop Including Comments and Recommendations Seth Johnson I have been a developer of database software since the 1980s, and presently offer professional consultancy services in information quality improvement. My clients have included the Illinois Department of Agriculture, Sony Music, Bertelsmann, and Affinity Health Plan. I am also an advocate and organizer in areas of information freedom. Working with New Yorkers for Fair Use and other groups, I have worked to promote the interests of innovation in information technology for many years, including such areas as patent policy at the World Wide Web Consortium, content control in the broadband Internet, the broadcast flag, software patents, and other issues. At the Internet Commons Congress in March 2004 (http://www.nyfairuse.org/icc), I worked with Daniel Berninger, New Yorkers for Fair Use and others to bring together advocates for many different areas so that they could better coordinate their activities and concerns. A matter of some concern regarding the FTC's workshop on "P2P filesharing technology" arises from its usage of the term "P2P" or "peer to peer." 
Observing that Napster's centralized data servers were a legal target, some Internet users declared that the use of a central server was unnecessary, because the decentralized architecture of the Internet was inherently not subject to the legal theory behind the charges levied against Napster. As a result, downloadable applications like KaZaA, Grokster and Gnutella took on the label "P2P" to distinguish them from Napster, when in fact the ability for any computer to directly communicate with any other is built into the Internet infrastructure. In addition, the facts that these applications allowed users to open up access to their directories, and that they presented lists of files which users could select to initiate transfers, have often obscured the fact that the applications themselves do not transfer the files, and that the ability to give other users access to local directories is a feature built into ordinary operating systems. This is why "P2P filesharing" is not an appropriate name to describe these applications. These applications simply provide the same function that Napster provided with a centralized server: the ability to find files on the Internet. They are decentralized search engines. They do not perform the file transfers and they do not themselves make peer to peer possible. They allow users to implement a search engine that is distributed across many machines, and the Internet itself does the rest. The description of "P2P filesharing applications" presented in this workshop's call for participation offers nothing to distinguish KaZaA, Grokster or Gnutella from the basic functions of the Internet and ordinary, generally used operating systems. It also makes no mention of the core functionality that these applications actually do provide: search and discovery of the locations of files. Sharing files among a group of users is a basic network capability that operating systems and networks already provide. Among the goals presented by the FTC for this workshop are learning about P2P, including how it works, and discussing self-regulatory, regulatory, and technological responses to a set of risks which the workshop associates with these consumer-friendly decentralized search engines. I suggest that the testimony of those who designed the Internet and those who exercise its basic functions as a matter of their daily productive lives, will provide a stronger framework for understanding the real nature of these risks. One name that should be recommended is David Reed, one of the original architects of the underlying infrastructure of the Internet. He is well-prepared to comment on the relationship between the architecture of the Internet and the capacities for innovation for which it provides. Another name that might be considered is Bram Cohen, the author of BitTorrent. A cursory survey of Sourceforge.net will show a great variety of projects whose authors can testify to their dependence on the peer to peer architecture of the Internet, and to the fact that accessing and distributing of files among peers is an unalterable component of their work. The participation of voices representing development projects such as these is a critical consideration for this workshop. Discussion of consumers' private interests should not be confused with copyright issues. Even greater risks ensue when discussions of filters, privacy, security, adware, viruses, exposure to undesirable material and impairments of computer function are mixed with copyright issues. 
The result of addressing copyright concerns in the manner of protecting private consumer interests can only be that both copyright and innovation will suffer. Technological developments that affect the capacity of individuals to publish, use, and develop new uses for information will often signal new issues for copyright policy, issues which touch on areas that are necessarily out of the scope of the FTC's mandate for rulemaking or promulgating norms. In particular, among the risks mentioned in the workshop's call for participation is that of exposure of end users to liability to charges of copyright infringement. Addressing this risk within the conceptual framework that the call for participation appears to exhibit, and in terms of the kinds of responses that it cites for consideration, can reasonably be expected to lead to a very limited understanding and an encouragement of prescriptive responses that are not well-advised. More fundamentally, addressing copyright issues within this conceptual framework will result in owners of computers and makers of applications losing their capacity to develop and make use of their computers and the communications infrastructure. It may be that the structure that the workshop will eventually take is to some extent exhibited in the questions presented in the call for participation and the way it contemplates certain risks with regard to consumers' use and understanding of the features of decentralized search applications. Inasmuch as this is true, it would be advisable to adjust the structure of the workshop to more precisely reflect the nature of the subject matter. The scope of the questions should also be expanded and adapted to admit a proper examination of the relationship of the risks to the nature of the technology and the interests of flexibility and innovation; and I would urge the FTC to adapt the conceptual framework and format of the workshop to reflect this purpose more greatly. Opportunity should be provided to describe the architecture of the Internet and how it fosters innovation, and to more precisely define the nature of the applications that are the focus of the workshop. The set of questions on uses of "P2P filesharing" technology should be expanded to admit testimony of those who develop Internet applications. The questions listed in the set addressing the impact of "P2P filesharing" on copyright holders would in fact warrant an extensive process of public inquiry in themselves. Many of these questions address areas that do not pertain specifically or solely to the consideration of the impact of peer to peer technology on copyright holders. The FTC would be well advised to report on the areas alluded to by these questions separately and extensively. The sets of questions addressing identification and disclosure of risks to consumers should be adapted so that the nature and source of the risks are not misconstrued, and so that a more encompassing understanding of the sources of the risks and of prospective solutions can be developed. The questions as a whole exhibit a narrow focus on a set of applications whose characteristics are not properly recognized and understood. The set of questions addressing technological solutions should be decoupled from a narrow focus on specific applications that provide decentralized search capabilities, and should be expanded to admit a broader analysis. 
The solutions currently identified in the call for participation do not appear to provide for a well-designed response to the full scope of risks and implications elicited by this workshop's areas of consideration. One major source of these risks that some will mention is the undue influence on the market and on copyright policymaking of interests such as market dominant software manufacturers, publishers and the recording and motion picture industries. Monopoly interests in the operating system arena in particular interfere severely with consumers' access to, understanding of and choices with respect to software that can provide far more robust protections than they generally make use of presently. I would greatly appreciate the opportunity to participate in this workshop as a panelist. I also offer to help in advising as to the structure of the workshop and appropriate participants. Above I have mentioned David Reed and Bram Cohen. Voices I can mention in particular as offering constructive and appropriate insight for this proceeding include the following. I mention them in many cases without specific knowledge of their interest in participating, or of their having actually requested to participate: Jay Sulzberger, New Yorkers for Fair Use, jays@panix.com Brett Wynkoop, Wynn Data Limited, wynkoop@wynn.com Michael Smith, LXNY, mesmith@panix.com Miles Nordin, Developer/Systems Administrator, carton@Ivy.NET Dan Berninger, Technology Analyst, dan@danielberninger.com Adam Kosmin, WindowsRefund.net, akosmin@windowsrefund.net Andrew Odlyzko can provide rigorous empirical analysis and data that are highly pertinent to the subject areas addressed by this workshop: Andrew Odlyzko, University of Minnesota, odlyzko@dtc.umn.edu The following are just a few people who can represent specific development projects: Kevin Marks, MediAgora, kmarks@mac.com Lucas Gonze, Webjay, lgonze@panix.com Bram Cohen, BitTorrent, bram@bitconjurer.org The following are good leading voices who would make significant contributions to this workshop: David Reed, SATN.org, dpreed@reed.com Bob Frankston, SATN.org, rmfxixB1@bobf.frankston.com David Isenberg, "The Stupid Network," isen@isen.com Richard Stallman, The GNU project, rms@gnu.org David Sugar, Free Software Foundation, dyfet@gnu.org Fred von Lohmann, Electronic Frontier Foundation, fred@eff.org Gigi Sohn, Public Knowledge, gbsohn@publicknowledge.org Robin Gross, IP Justice, robin@ipjustice.org Chris Hoofnagle, Electronic Privacy Information Clearinghouse, hoofnagle@epic.org Nelson Pavlosky, Free Culture, npavlos1@swarthmore.edu Thank you, Seth Johnson Committee for Independent Technology (SNIP Contact Information) From eugen at leitl.org Wed Nov 17 18:25:25 2004 From: eugen at leitl.org (Eugen Leitl) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] [IP] Amateur-to-Amateur (fwd from dave@farber.net) Message-ID: <20041117182525.GJ1457@leitl.org> ----- Forwarded message from David Farber ----- From: David Farber Date: Wed, 17 Nov 2004 12:54:44 -0500 To: Ip Subject: [IP] Amateur-to-Amateur X-Mailer: Apple Mail (2.619) Reply-To: dave@farber.net Begin forwarded message: From: Dan Hunter Date: November 17, 2004 11:56:01 AM EST To: dave@farber.net Subject: Amateur-to-Amateur Dave: The readers of IP might be interested in a new paper that Greg Lastowka and I recently released. It's about copyright, and we try to draw attention to the significance of amateur production of content as a counterweight to all the wailing and gnashing-of-teeth over filesharing. 
We suggest that amateur production is more significant than previously recognized, and that excessive focus on the protection of music industry and copyright incentives is socially retrograde. Paper available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=601808 Abstract follows. Comments (offlist) always welcome. best wishes Dan. ---- Title: Amateur-to-Amateur Authors: Dan Hunter (Wharton, U.Penn) & Greg Lastowka (Rutgers Law) Abstract: Copyright, it is commonly said, matters in society because it encourages the production of socially beneficial, culturally significant expressive content. However our focus on copyright's recent history blinds us to the social information practices which have always existed. In this article, we examine these social information practices, and query copyright's role within them. We posit a functional model of what is necessary for creative content to move from creator to user. These are the functions dealing with creation, selection, production, dissemination, promotion, sale, and use of expressive content. We demonstrate how centralized commercial control of information content has been the driving force behind copyright's expansion. However, all of the functions that copyright industries used to control are undergoing revolutionary decentralization and disintermediation. Different aspects of information technology, notably the digitization of information, widespread computer ownership, the rise of the Internet, and the development of social software, threaten the viability and desirability of centralized control over every one of the content functions. These functions are increasingly being performed by individuals and disorganized, distributed groups. This raises an issue for copyright as the main regulatory force in information practices, because copyright assumes a central control structure that no longer applies to creative content. We examine the normative implications of this shift for our information policy in this new post-copyright era. Most notably we conclude that copyright law needs to be adjusted in order to recognize the opportunity and desirability of decentralized content, and the expanded marketplace of ideas it promises. _________________________________________________ Dan Hunter Robert F. Irwin IV Term Assistant Professor of Legal Studies The Wharton School University of Pennsylvania 662 John M Huntsman Hall 3730 Walnut St Philadelphia PA 19104 USA ph: +1-215-573-7154 fx: +1-215-573-2006 Research at http://ssrn.com/author=243354 _________________________________________________ ------------------------------------- You are subscribed as eugen@leitl.org To manage your subscription, go to http://v2.listbox.com/member/?listname=ip Archives at: http://www.interesting-people.org/archives/interesting-people/ ----- End forwarded message ----- -- Eugen* Leitl leitl ______________________________________________________________ ICBM: 48.07078, 11.61144 http://www.leitl.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE http://moleculardevices.org http://nanomachines.net -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available Url : http://zgp.org/pipermail/p2p-hackers/attachments/20041117/c3a96d1b/attachment.pgp From ian at locut.us Thu Nov 18 16:03:31 2004 From: ian at locut.us (Ian Clarke) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer Message-ID: <64D04EB5-397B-11D9-A632-000D932C5880@locut.us> I am in the process of implementing a simple UDP data transfer algorithm in Java (or more precisely, replacing a braindead implementation with something slightly more respectable). The requirement is simple: get 256k from one node to another over UDP reliably. It should be "TCP friendly", i.e. its flow control shouldn't crowd out politer TCP traffic, and packets, for obvious reasons, should be around 1k in size. I have toyed with a variety of ideas, and done some research, but I wanted to see if anyone had any thoughts or advice on the simplest way I can implement something to meet these requirements (I will probably use a straightforward TCP-style windowed approach). Cheers, Ian. -- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ From gbildson at limepeer.com Thu Nov 18 16:29:40 2004 From: gbildson at limepeer.com (Greg Bildson) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <64D04EB5-397B-11D9-A632-000D932C5880@locut.us> Message-ID: LimeWire's udpconnect package does this more generally. i.e. It creates a standard reliable Input and Output stream in Java via UDP. It has a small configurable window and coexists well with TCP. It can also punch firewalls with a little external communication to kick off the connection on both ends. http://www.limewire.org/fisheye/viewrep/limecvs/core/com/limegroup/gnutella/ udpconnect I have thought about how to make this available separately from our codebase but don't have time to do it. You need to simply replace our UDPService with whatever handles your UDP sends and receives. You would also likely want to unwrap the data messages from our Gnutella messages (although it would work as is). Thanks -greg > -----Original Message----- > From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org]On > Behalf Of Ian Clarke > Sent: Thursday, November 18, 2004 11:04 AM > To: Peer-to-peer development. > Subject: [p2p-hackers] Simple reliable UDP data transfer > > > I am in the process of implementing a simple UDP data transfer > algorithm in Java (or more precisely, replacing a braindead > implementation with something slightly more respectable). > > The requirement is simple, get 256k from one node to another over UDP > reliably. It should be "TCP friendly", ie. its flow control shouldn't > crowd out politer TCP traffic, and packets, for obvious reasons, should > be around 1k in size. > > I have toyed with a variety of ideas, and done some research, but I > wanted to see if anyone had any thoughts or advice on the simplest way > I can implement something to meet these requirements (I will probably > use a straight-forward TCP-style windowed approach). > > Cheers, > > Ian. 
> > -- > Founder, The Freenet Project http://freenetproject.org/ > CEO, Cematics Ltd http://cematics.com/ > Personal Blog http://locut.us/~ian/blog/ > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From ian at locut.us Thu Nov 18 16:32:23 2004 From: ian at locut.us (Ian Clarke) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: References: Message-ID: <6CDD247C-397F-11D9-A632-000D932C5880@locut.us> Thanks Greg, I will take a look. Probably more likely that I would use it for ideas rather than using your actual code, since I have a pre-existing messaging layer that is somewhat unusual. Out of interest, do you address the fact that Thread.sleep() can't be relied on to sleep for less than 50ms, or is that an issue for you? All the best, Ian. On 18 Nov 2004, at 16:29, Greg Bildson wrote: > LimeWire's udpconnect package does this more generally. i.e. It > creates a > standard reliable Input and Output stream in Java via UDP. It has a > small > configurable window and coexists well with TCP. It can also punch > firewalls > with a little external communication to kick off the connection on both > ends. > > http://www.limewire.org/fisheye/viewrep/limecvs/core/com/limegroup/ > gnutella/ > udpconnect > > I have thought about how to make this available separately from our > codebase > but don't have time to do it. You need to simply replace our > UDPService > with your whatever handle's your UDP sends and receives. You would > also > likely want to unwrap the data messages from our Gnutella messages > (although > it would work as is). > > Thanks > -greg > > >> -----Original Message----- >> From: p2p-hackers-bounces@zgp.org >> [mailto:p2p-hackers-bounces@zgp.org]On >> Behalf Of Ian Clarke >> Sent: Thursday, November 18, 2004 11:04 AM >> To: Peer-to-peer development. >> Subject: [p2p-hackers] Simple reliable UDP data transfer >> >> >> I am in the process of implementing a simple UDP data transfer >> algorithm in Java (or more precisely, replacing a braindead >> implementation with something slightly more respectable). >> >> The requirement is simple, get 256k from one node to another over UDP >> reliably. It should be "TCP friendly", ie. its flow control shouldn't >> crowd out politer TCP traffic, and packets, for obvious reasons, >> should >> be around 1k in size. >> >> I have toyed with a variety of ideas, and done some research, but I >> wanted to see if anyone had any thoughts or advice on the simplest way >> I can implement something to meet these requirements (I will probably >> use a straight-forward TCP-style windowed approach). >> >> Cheers, >> >> Ian. 
>> >> -- >> Founder, The Freenet Project http://freenetproject.org/ >> CEO, Cematics Ltd http://cematics.com/ >> Personal Blog http://locut.us/~ian/blog/ >> >> _______________________________________________ >> p2p-hackers mailing list >> p2p-hackers@zgp.org >> http://zgp.org/mailman/listinfo/p2p-hackers >> _______________________________________________ >> Here is a web page listing P2P Conferences: >> http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > -- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ From gbildson at limepeer.com Thu Nov 18 16:49:22 2004 From: gbildson at limepeer.com (Greg Bildson) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <6CDD247C-397F-11D9-A632-000D932C5880@locut.us> Message-ID: I've certainly heard about the 50ms limit (on linux no?) but never seen it have a major effect. Then again, our transfers while fast could be faster so that could be one of the limiting factors. Thanks -greg > -----Original Message----- > From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org]On > Behalf Of Ian Clarke > Sent: Thursday, November 18, 2004 11:32 AM > To: Peer-to-peer development. > Subject: Re: [p2p-hackers] Simple reliable UDP data transfer > > > Thanks Greg, I will take a look. Probably more likely that I would use > it for ideas rather than using your actual code, since I have a > pre-existing messaging layer that is somewhat unusual. > > Out of interest, do you address the fact that Thread.sleep() can't be > relied on to sleep for less than 50ms, or is that an issue for you? > > All the best, > > Ian. > > On 18 Nov 2004, at 16:29, Greg Bildson wrote: > > > LimeWire's udpconnect package does this more generally. i.e. It > > creates a > > standard reliable Input and Output stream in Java via UDP. It has a > > small > > configurable window and coexists well with TCP. It can also punch > > firewalls > > with a little external communication to kick off the connection on both > > ends. > > > > http://www.limewire.org/fisheye/viewrep/limecvs/core/com/limegroup/ > > gnutella/ > > udpconnect > > > > I have thought about how to make this available separately from our > > codebase > > but don't have time to do it. You need to simply replace our > > UDPService > > with your whatever handle's your UDP sends and receives. You would > > also > > likely want to unwrap the data messages from our Gnutella messages > > (although > > it would work as is). > > > > Thanks > > -greg > > > > > >> -----Original Message----- > >> From: p2p-hackers-bounces@zgp.org > >> [mailto:p2p-hackers-bounces@zgp.org]On > >> Behalf Of Ian Clarke > >> Sent: Thursday, November 18, 2004 11:04 AM > >> To: Peer-to-peer development. > >> Subject: [p2p-hackers] Simple reliable UDP data transfer > >> > >> > >> I am in the process of implementing a simple UDP data transfer > >> algorithm in Java (or more precisely, replacing a braindead > >> implementation with something slightly more respectable). > >> > >> The requirement is simple, get 256k from one node to another over UDP > >> reliably. It should be "TCP friendly", ie. 
its flow control shouldn't > >> crowd out politer TCP traffic, and packets, for obvious reasons, > >> should > >> be around 1k in size. > >> > >> I have toyed with a variety of ideas, and done some research, but I > >> wanted to see if anyone had any thoughts or advice on the simplest way > >> I can implement something to meet these requirements (I will probably > >> use a straight-forward TCP-style windowed approach). > >> > >> Cheers, > >> > >> Ian. > >> > >> -- > >> Founder, The Freenet Project http://freenetproject.org/ > >> CEO, Cematics Ltd http://cematics.com/ > >> Personal Blog http://locut.us/~ian/blog/ >> >> _______________________________________________ >> p2p-hackers mailing list >> p2p-hackers@zgp.org >> http://zgp.org/mailman/listinfo/p2p-hackers >> _______________________________________________ >> Here is a web page listing P2P Conferences: >> http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > -- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From paul at ref.nmedia.net Thu Nov 18 23:07:02 2004 From: paul at ref.nmedia.net (Paul Campbell) Date: Sat Dec 9 22:12:43 2006 Subject: [paul@ref.nmedia.net: Re: [p2p-hackers] [IP] Amateur-to-Amateur (fwd from dave@farber.net) Message-ID: <20041118230702.GA24183@ref.nmedia.net> Although I found the concepts expressed in the article interesting as you waxed philosophically, you may want to dig a little deeper into past history. The original copyright law was embodied in the Statute of Anne, 1710. See http://www.copyrighthistory.com/anne.html At the time, that malicious miscreant Gutenberg made a printing press which allowed books to be mass-printed rather than hand transcribing them one page at a time. The printing press threatened to put the entire respectable scribe business out of business, so the Statute of Anne was conceived to shore up the scribing business, as a protectionist measure. In other words, it had nothing at all to do with "protecting intellectual property rights". It was conceived then, just as now, as a means to keep the publishing business afloat. And then, just as now, even with the law behind them, the scribing guild could not manage to forestall the inevitable progress of technology. As I see it, there are two questions here. First, there's an issue of compensating an artist for their work. It is a well known fact that the artists get paid very little from the record companies. They make much better money in concerts and such. Thus, I doubt it even matters to them if copyrights exist or not. Without copyright "protection", the artists would receive additional advertising which increases ticket sales for concerts, etc. The way that an artist needs protection is to prevent people from sneaking into a concert, not to prevent them from hearing the music! 
Second, there is the question of the value of a publication to the consumer of the work of art. Obviously there isn't a whole lot of value currently if people are willing to flaunt the current law and download media anyways. If anything, I can make a case that we are talking about two different issues: the issue of compensating an artist for their work, and the issue of compensating a publishing/distribution business. Since copyrights set up an exclusive distribution system, they also instantly create a black market. Bootlegs have existed well before the internet. Copyrights are a blatant abuse of a free market economy. However, I can also make a case for when distribution/publishing companies CAN make money even without copyright protection. People routinely buy "collectors editions" of media and pay an extra exhorbitant fee to obtain those even when one isle over, you can buy the same work of art for about half of the price. Similarly, we have a variety of internet examples. Winamp at one time was the #1 music player for PC's. It's a free program, but the company makes money through various other channels. More recently, "MusicMatch" is a commercial software program that does similar things, except that people are willing to pay for it because it has a better organization and music search function. With the amount of stuff out there, there's no reason that a subscription MP3 download service can't exist. The marketing model here is that the file quality is gauranteed to be good (no fakes or noisy copies) and that they offer a very good system for selecting music (reviews, comparisons, suggestions for similar songs and artists). And...absolutely none of these business models requires copyright protection. And none of them will fall on their faces due to black markets. Heck, MusicMatch is a great example. There's already a free software program out there that does virtually the same thing (Winamp) and yet people are buying MusicMatch. Your philosophical paper simply rephrases the issue using the modern equivalent of the printing press (aka P2P file sharing software) to point out the same old tired hypocrisy that exists in the publishing business. Copyrights do not protect the artist, contrary to popular belief. They simply try to stimy free market economics in a protectionist scheme to keep a few companies afloat which would otherwise be completely noncompetitive. It's no wonder that artists are starting their own publishing and distribution outfits in droves. It is protectionism at it's finest, that the very institutions that have been set up for the purpose of mass distribution of media of any form are also the same ones taking every draconian measure necessary to PREVENT distribution of media! From paul at ref.nmedia.net Thu Nov 18 23:33:57 2004 From: paul at ref.nmedia.net (Paul Campbell) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Structural question about key/value replication Message-ID: <20041118233357.GB24183@ref.nmedia.net> I was thinking a lot lately about how keys are replicated and about some of the storage load balancing algorithms that are out there. Chord CFS assigns responsibility for a key to a particular node. Then the next few successors to that node replicate the key to their caches (actually, the node broadcasts the keys it owns). But for the most part, those replicas are not really relied on. There are a bunch of load balancing schemes out there. 
But essentially, they rely on the idea that there is a particular spot on the DHT where the key is intended to be, and then either reactively or proactively spread the key/value pair out to neighboring nodes. I started writing code in Python to do just that, but then I had a slightly different idea. It closely matches the ideas in Naor/Wieder's DHT. In this model, keys remain at fixed locations on the ring. Nodes, however, can shift and move around the ring in order to closely match key space densities (which are not necessarily linear). But also, a node has an arc that it is responsible for. The arcs are allowed to be overlapping. This is where the idea differs from Chord. The length and endpoints of the arc are chosen such that each key/value pair has sufficient replica coverage and also such that load balancing is maintained. Routing also routes to the general area instead of targeting the specific point on the DHT, since obviously a few nodes left or right (depending on whether routing is bidirectional or not) of the appropriate spot will still reach a good copy. The typical load balancing algorithm that would have simply adjusted a node's ID in order to increase or reduce the arc size now manipulates the end points whenever a node synchronizes with its most distant neighbors (those nearest the end points of an arc). On detection of a node join/failure, it can simply unilaterally alter the arc appropriately without communication. The advantage is that there's no "master node" anymore. In fact, when a node fails (leaves the network), the only thing that happens is that any routing links to it get dropped. Replication and turnover happen pretty much automatically. So...whereas Chord has an explicitly designated "owner" in CFS which maintains the same semantics as the underlying Chord DHT, this system does away with those semantics in favor of a region of pretty much independent neighbors.
From lutianbo at software.ict.ac.cn Fri Nov 19 02:53:35 2004 From: lutianbo at software.ict.ac.cn (Lutianbo) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] EIGamal encryption Message-ID: <003501c4cde2$f82c6ae0$9402000a@ictltbo> Hi all, Let us recall the El-Gamal encryption scheme: p is an appropriate prime number (with a hard discrete logarithm problem), g is a generator of Zp, and a random, nonzero x < p-1 is the private key; the corresponding public key is y, where y = g^x mod p. A message m < p is encrypted in the following way. First a number k, 0 < k < p-1, is chosen uniformly at random. Then we put a = g^k mod p and b = m * y^k mod p. The pair (a, b) is a ciphertext of m. My question is: how to encrypt a message m using the private key x, and decrypt the corresponding ciphertext with the public key y, g, p. Would you please give me a hand? Thank you! Best regards, ---Tianbo Lu -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20041119/d27e9bd8/attachment.html
From hal at finney.org Fri Nov 19 04:36:19 2004 From: hal at finney.org (Hal Finney) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] EIGamal encryption Message-ID: <20041119043619.7EE6B57E2F@finney.org> > My question is: how to encrypt a message m using the private key x, > and decrypt the corresponding ciphertext with the public key y, g, p. Would > you please give me a hand? A good online reference for crypto is the Handbook of Applied Cryptography, http://www.cacr.math.uwaterloo.ca/hac/ . ElGamal keygen, encryption and decryption are discussed in section 8.4.
Hal Finney From paul at ref.nmedia.net Fri Nov 19 06:02:27 2004 From: paul at ref.nmedia.net (Paul Campbell) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] EIGamal encryption In-Reply-To: <003501c4cde2$f82c6ae0$9402000a@ictltbo> References: <003501c4cde2$f82c6ae0$9402000a@ictltbo> Message-ID: <20041119060227.GA29298@ref.nmedia.net> On Fri, Nov 19, 2004 at 10:53:35AM +0800, Lutianbo wrote: > Hi all, > > Let us recall El-Gamal encryption scheme: p is an appropriate prime number (with > > a hard discrete logarithm problem), g is a generator of Zp, a random, nonzero x < p???1 Almost correct. You are working in the group Zp, which means that the only numbers that exist are 0 through p-1. All calculations are done mod p...so if the result is equal to p (in real integer arithmetic), then it equals zero. If it equals p+1, then it will be a 1. Essentially, the modulus function means to do the arithmetic and then divide by p in integer arithmetic, keeping the remainder. So it doesn't make any sense to use numbers >=p since there's no such animal in the group. However, you are a little off. For ElGamal, it is necessary to maintain x < p-2, not p-1. > is the private key, the corresponding public key is y, where y = gx mod p. A message Actually, the public key is y, p, and g in your notation. And actually, the public key term y = g^x mod p. The private key is x as you said. > m < p is encrypted in the following way. First a number k, 0 < k < p ??? 1, is chosen > > uniformly at random. Then we put a = gk mod p and b = m ? yk mod p. The pair > > (a, b) is a ciphertext of m. Uhh..nope again. k must be < p-1. a = g^k mod p and b = m*(y^k) mod p. > My question is How to encrypt a message m using the private key x, and decrypt the corresponding ciphertext with the public key y,g,p. Would you please give me a hand? Uhh...that's a little confusing there. Let's try to clear this up. The basic idea behind ANY public key encryption algorithm is to make it possible for anyone to encrypt a message, while only those possessing the private key can do decryption. You phrased it backwards. Encryption is normally done with the public key (y, g, p in El Gamal). DECRYPTION is done with the corresponding private key. You clearly stated the ENCRYPTION process already (a = g^k mod p and b = m(y^k) mod p; m is the message and k is a random number). For decryption, use the private key x to calculate a^(p-1-x) mod p. This is equivalent to a^(-x), which is equivalent to g^(-xk). In other words, we can almost recover k in a usable form. Call this c. Second calculate m = cb mod p. There is no reason to generate a new g and p. They can be used as system-wide parameters. In which case the public keys become just y. There are other variations. There is no reason to use Zp. A much easier one to use computationally is the multiplicative group of the finite field F(2,m) of characteristic two...in this case, the underlying group arithmetic becomes just XOR operations which is pretty fast on any machine. Another variation (one that is near and dear to me) is to use points on an elliptic curve. This has the advantage that there are no good algorithms if you use a good curve. And the curve only has to be generated once for the entire system. Essentially, for instance, a 256 bit key has the same security as a 1024 bit key in RSA or Zp or F(2,m). The downside from a practical view is that you have an underlying field algebra system with an elliptic curve algebra system, on which you finally implement your system. 
Working in 3 math systems simultaneously gets very confusing pretty quickly (from experience). Obviously, you can trivially recover the public key from the private key (just use the calculate to recover the public key) if for some reason you needed to do that as a private key holder. So I'm not sure why you had a problem with "encrypting with the private key". Okay...now there is one other possible reason that I can see you'd want to "encrypt with the private key" as you said. That is if you are working with digital signatures. With digital signatures, the basic idea is to run the "decryption" backwards when signing a document. Since only the private key holder knows what the private key is (nobody else can "decrypt"), any one else can verify the signature by "encrypting" the signature using the public key. However, since ElGamal encryption uses a second random number, there is not as much of a direct encrypt/decrypt correspondence as there is with RSA. Here's the algorithm for signing: 1. Pick a random number k from 1 to p-2 with gcd(k, p-1)=1 2. Calculate a = x^k mod p. Calculate c = k^-1 mod (p-1). 3. Calculate b = c(h-ga) mod (p-1). 4. The signature is (a,b). h is a hash function. It should be a number from 0 to p-2. The reason for calling it a hash function is that normally, documents are much larger than the payload that ElGamal can handle (documents must have fewer bits than p-2). So normally, first run a secure hash algorithm on the document (SHA, or MD5), and then feed the secure hash to the signature algorithm. To verify, do this: Calculate v1=(y^a)(a^b) mod p Calculate v2=g^h mod p If v1=v2, the signature is verified. The DSA algorithm is just a special version of the ElGamal signature scheme. From lutianbo at software.ict.ac.cn Fri Nov 19 06:59:36 2004 From: lutianbo at software.ict.ac.cn (Lutianbo) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] EIGamal encryption References: <003501c4cde2$f82c6ae0$9402000a@ictltbo> <20041119060227.GA29298@ref.nmedia.net> Message-ID: <004601c4ce05$55273e10$9402000a@ictltbo> Dear Campbell, Thank you for your reply soon. I describe my question clearly as follows: We assume that Alice (A) want to send a message m to Bob (B) secretly. And B knows the public key of A, but Alice doesn't know Bob's public key. Fistly, we can use RSA encryption scheme. Alice encrypts m with its own RSA private key, and Bob can decrypts the corresponding ciphertext c with Alice's RSA public key. Now, I want to know whether we can use EIGamal encryption scheme. That's to say, Alice encrypts m with its own EIGamal private key, and Bob decrypts the corresponding ciphertext c with Alice's EIGamal public key. Please help me. Thank you! ----- Original Message ----- From: "Paul Campbell" To: "Peer-to-peer development." Sent: Friday, November 19, 2004 2:02 PM Subject: Re: [p2p-hackers] EIGamal encryption > On Fri, Nov 19, 2004 at 10:53:35AM +0800, Lutianbo wrote: > > Hi all, > > > > Let us recall El-Gamal encryption scheme: p is an appropriate prime number (with > > > > a hard discrete logarithm problem), g is a generator of Zp, a random, nonzero x < p???1 > > Almost correct. You are working in the group Zp, which means that the only > numbers that exist are 0 through p-1. All calculations are done mod p...so > if the result is equal to p (in real integer arithmetic), then it equals > zero. If it equals p+1, then it will be a 1. Essentially, the modulus > function means to do the arithmetic and then divide by p in integer > arithmetic, keeping the remainder. 
So it doesn't make any sense to use > numbers >=p since there's no such animal in the group. > > However, you are a little off. For ElGamal, it is necessary to maintain > x < p-2, not p-1. > > > is the private key, the corresponding public key is y, where y = gx mod p. A message > > Actually, the public key is y, p, and g in your notation. And actually, the > public key term y = g^x mod p. The private key is x as you said. > > > m < p is encrypted in the following way. First a number k, 0 < k < p ??? 1, is chosen > > > > uniformly at random. Then we put a = gk mod p and b = m ? yk mod p. The pair > > > > (a, b) is a ciphertext of m. > > Uhh..nope again. k must be < p-1. a = g^k mod p and b = m*(y^k) mod p. > > > My question is How to encrypt a message m using the private key x, and decrypt the corresponding ciphertext with the public key y,g,p. Would you please give me a hand? > > Uhh...that's a little confusing there. Let's try to clear this up. > > The basic idea behind ANY public key encryption algorithm is to make it > possible for anyone to encrypt a message, while only those possessing the > private key can do decryption. You phrased it backwards. Encryption is > normally done with the public key (y, g, p in El Gamal). DECRYPTION is > done with the corresponding private key. You clearly stated the > ENCRYPTION process already (a = g^k mod p and b = m(y^k) mod p; m is > the message and k is a random number). > > For decryption, use the private key x to calculate a^(p-1-x) mod p. This > is equivalent to a^(-x), which is equivalent to g^(-xk). In other words, > we can almost recover k in a usable form. Call this c. Second calculate > m = cb mod p. > > There is no reason to generate a new g and p. They can be used as > system-wide parameters. In which case the public keys become just > y. > > There are other variations. There is no reason to use Zp. A much easier > one to use computationally is the multiplicative group of the finite > field F(2,m) of characteristic two...in this case, the underlying group > arithmetic becomes just XOR operations which is pretty fast on any machine. > > Another variation (one that is near and dear to me) is to use points on > an elliptic curve. This has the advantage that there are no good algorithms > if you use a good curve. And the curve only has to be generated once for > the entire system. Essentially, for instance, a 256 bit key has the same > security as a 1024 bit key in RSA or Zp or F(2,m). The downside from a > practical view is that you have an underlying field algebra system with > an elliptic curve algebra system, on which you finally implement your > system. Working in 3 math systems simultaneously gets very confusing pretty > quickly (from experience). > > > Obviously, you can trivially recover the public key from the private key > (just use the calculate to recover the public key) if for some reason > you needed to do that as a private key holder. So I'm not sure why you had > a problem with "encrypting with the private key". > > Okay...now there is one other possible reason that I can see you'd want > to "encrypt with the private key" as you said. That is if you are working > with digital signatures. With digital signatures, the basic idea is to run > the "decryption" backwards when signing a document. Since only the private > key holder knows what the private key is (nobody else can "decrypt"), any > one else can verify the signature by "encrypting" the signature using the > public key. 
> > However, since ElGamal encryption uses a second random number, there is > not as much of a direct encrypt/decrypt correspondence as there is with > RSA. Here's the algorithm for signing: > 1. Pick a random number k from 1 to p-2 with gcd(k, p-1)=1 > 2. Calculate a = x^k mod p. Calculate c = k^-1 mod (p-1). > 3. Calculate b = c(h-ga) mod (p-1). > 4. The signature is (a,b). > h is a hash function. It should be a number from 0 to p-2. The reason for > calling it a hash function is that normally, documents are much larger than > the payload that ElGamal can handle (documents must have fewer bits than > p-2). So normally, first run a secure hash algorithm on the document (SHA, > or MD5), and then feed the secure hash to the signature algorithm. > > To verify, do this: > Calculate v1=(y^a)(a^b) mod p > Calculate v2=g^h mod p > If v1=v2, the signature is verified. > > The DSA algorithm is just a special version of the ElGamal signature scheme. > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > From lauri.pesonen at gmail.com Fri Nov 19 11:06:31 2004 From: lauri.pesonen at gmail.com (Lauri Pesonen) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] EIGamal encryption In-Reply-To: <004601c4ce05$55273e10$9402000a@ictltbo> References: <003501c4cde2$f82c6ae0$9402000a@ictltbo> <20041119060227.GA29298@ref.nmedia.net> <004601c4ce05$55273e10$9402000a@ictltbo> Message-ID: Hi Lutiambo, On Fri, 19 Nov 2004 14:59:36 +0800, Lutianbo wrote: > Dear Campbell, > > Thank you for your reply soon. I describe my question clearly as follows: > > We assume that Alice (A) want to send a message m to Bob (B) secretly. And B knows the public key of A, but Alice doesn't know Bob's public key. > > Fistly, we can use RSA encryption scheme. RSA suffers from the same problems that I mention below. > Alice encrypts m with its own RSA private key, and Bob can decrypts the corresponding ciphertext c with Alice's RSA public key. This doesn't make any sense from a confidentialtiy point of view since nayone who knows Alice's public key will be able to decrypt the message, not just Bob. This will, I guess, provide you with message integrity and authenticity, i.e. Bob will know that Alice sent them essage and that no one has tampered with it on transit, but the propoer way of doing that would be with digital signatures. What you need to do is set up a symmetric session key between Alice and Bob by doing a Diffie-Helman key exchange for example. The problem with DH is authenticating the parties so that you're no vulnerable to a man-in-the-middle attack. Alice's public key will enable Bob to authenticate Alice, but if Alice doesn't know Bob's public key, there is no way for Alice to authenticate Bob. > Now, I want to know whether we can use EIGamal encryption scheme. That's to say, Alice encrypts m with its own EIGamal private key, and Bob decrypts the corresponding ciphertext c with Alice's EIGamal public key. Please help me. Thank you! Public key encryption is really slow compared to symmetric encryption. If you want to encrypt a single message with a one-way link, like an email, you should use a public key encryption algorithm. 
But if you're planning on encrypting a stream of data or multiple messages over a two-way link, like a TCP connection, you should use public key algorithms for setting up a symmetric session key and then use symmetric algorithms (e.g. AES) for encrypting the data. In any case, both parties should know each other's public keys so that both parties can be authenticated. Otherwise you're vulnerable to a man-in-the-middle attack and the encryption is useless. If you're encrypting a stream of data, other stuff that you should be aware of are (this is not an exhaustive list): - separate session keys for both parties, i.e. data coming in is encrypted with a different key than data going out - do Encrypt-then-Authenticate, i.e. encrypt your packet and then append a MAC of the cipher text to your packet (HMAC, OMAC, ...) - The MAC keys must be independent of the encryption keys! Again, use one MAC key for outgoing and one for incoming packets - Use nonces / timestamps to protect against replay attacks - Pad your messages - ... You can create the encryption and the MAC keys from a single shared secret by hashing it with a secure hash algorithm (e.g. k1 = SHA1("key1" + shared_secret), key2 = ...). I'm sure there's a bunch of stuff that I'm forgetting here. You might want to look at IETF's GSS (language independent: RFC2743, java bindings: RFC2853). Java 1.4.2 (package org.ietf.jgss) comes with an implementation if you're using Java. Don't know about other languages. I haven't used it myself and I'm not very familiar with it, but it seems to me that you should be able to use any transport with it that you want. -- ! Lauri From cefn.hoile at bt.com Fri Nov 19 13:23:49 2004 From: cefn.hoile at bt.com (cefn.hoile@bt.com) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Structural question about key/value replication Message-ID: <21DA6754A9238B48B92F39637EF307FD05B19EEE@i2km41-ukdy.domain1.systemhost.net> This relates somewhat to the SWAN system I have described before. http://www.cefn.com/cefn/index.php?SwanPaper In SWAN, responsibility for subsections of the space is smeared across a number of independent nodes according to a probability function optimised according to Kleinberg's analysis on Navigable Small World networks http://www.cs.cornell.edu/home/kleinber/swn.d/swn.html [thanks to Erwin Bonsma] SWAN does not incorporate your other suggestion - a scheme for repopulating the identity space to further improve load balancing - although I am not sure it would be applicable to the SWAN case. In the current implementation, owing to the granularity of SWAN nodes (a new node is launched for each key/value pair) key density and node density have a different relationship than that found in Chord. Cefn http://cefn.com DIET Agents Project Team http://diet-agents.sourceforge.net -----Original Message----- From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org] On Behalf Of Paul Campbell Sent: 18 November 2004 23:34 To: p2p-hackers@zgp.org Subject: [p2p-hackers] Structural question about key/value replication I was thinking a lot lately about how keys are replicated and about some of the storage load balancing algorithms that are out there. Chord CFS assigns responsibility for a key to a particular node. Then the next few successors to that node replicate the key to their caches (actually, the node broadcasts the keys it owns). But for the most part, those replicas are not really relied on. There are a bunch of load balancing schemes out there. 
But essentially, they rely on the idea that there is a particular spot on the DHT where the key is intended to be, and then either reactively or proactively spread the key/value pair out to neighboring nodes. I started writing code in Python to do just that, but then I had a slightly different idea. It closely matches the ideas in Naor/Wieder's DHT. In this model, key's remain with fixed locations on the ring. Nodes however can shift and move around the ring in order to closely match key space densities (which are not necessarily linear). But also, a node has an arc that it is responsible for. The arcs are allowed to be overlapping. This is where the idea differs from Chord. The length and endpoints of the arc is chosen such that each key/value pair has sufficient replica coverage and also such that load balancing is maintained. Routing also routes to the general area instead of targetting the specific point on the DHT since obviously a few nodes left or right (depending on whether routing is bidirectional or not) of the appropriate spot will still reach a good copy. The typical load balancing algorithm that would have simply adjusted a node's ID in order to increase or reduce the arc size now manipulates the end points whenever a node synchronizes with it's most distant neighbors (those nearest end point of an arc). On detection of a node join/failure, it can simply unilaterally alter the arc appropriately without communication. The advantage is that there's no "master node" anymore. In fact, when a node fails (leaves the network), the only thing that happens is that any routing links to it get dropped. Replication and turnover happen pretty much automatically. So...whereas Chord has an explicitly designated "owner" in CFS which maintains the same semantics as the underlying Chord DHT, this system does away with those semantics in favor of a region of pretty much independent neighbors. _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From Bernard.Traversat at Sun.COM Fri Nov 19 16:34:02 2004 From: Bernard.Traversat at Sun.COM (Bernard Traversat) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <64D04EB5-397B-11D9-A632-000D932C5880@locut.us> References: <64D04EB5-397B-11D9-A632-000D932C5880@locut.us> Message-ID: <419E207A.90709@sun.com> Ian, You may want to look at JXTA (www.jxta.org). We have implemented an end-to-end reliable TCP/IP layer that can be slided on a number of physical transport links. Also, we have a security layer you can stack on top of this TCP/IP layer if you need to. Hth, B. Ian Clarke wrote: > I am in the process of implementing a simple UDP data transfer > algorithm in Java (or more precisely, replacing a braindead > implementation with something slightly more respectable). > > The requirement is simple, get 256k from one node to another over UDP > reliably. It should be "TCP friendly", ie. its flow control shouldn't > crowd out politer TCP traffic, and packets, for obvious reasons, > should be around 1k in size. 
> > I have toyed with a variety of ideas, and done some research, but I > wanted to see if anyone had any thoughts or advice on the simplest way > I can implement something to meet these requirements (I will probably > use a straight-forward TCP-style windowed approach). > > Cheers, > > Ian. > > -- > Founder, The Freenet Project http://freenetproject.org/ > CEO, Cematics Ltd http://cematics.com/ > Personal Blog http://locut.us/~ian/blog/ > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences -- "As Java implies platform independence, and XML implies language independence, then JXTA implies network independence." From paul at ref.nmedia.net Fri Nov 19 18:02:50 2004 From: paul at ref.nmedia.net (Paul Campbell) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] EIGamal encryption In-Reply-To: <004601c4ce05$55273e10$9402000a@ictltbo> References: <003501c4cde2$f82c6ae0$9402000a@ictltbo> <20041119060227.GA29298@ref.nmedia.net> <004601c4ce05$55273e10$9402000a@ictltbo> Message-ID: <20041119180250.GA30409@ref.nmedia.net> On Fri, Nov 19, 2004 at 02:59:36PM +0800, Lutianbo wrote: > Dear Campbell, > > Thank you for your reply soon. I describe my question clearly as follows: > > We assume that Alice (A) want to send a message m to Bob (B) secretly. And B knows the public key of A, but Alice doesn't know Bob's public key. > > Fistly, we can use RSA encryption scheme. > > Alice encrypts m with its own RSA private key, and Bob can decrypts the corresponding ciphertext c with Alice's RSA public key. > It's not sent secretly anymore. Alice encrypts m with it's private key. This means that ANY holder of the public key of Alice can decrypt the message. There is no secrecy here. Might as well send it in the clear. The only reason for "decrypting with a private key" is when you need to prove that you know the private key based on some publicly available information. Since the whole concept of public key cryptography is the idea that encryption is a public function while decryption is not, you are turning the whole thing on it's head. If your intention is simply to set up a private key system, then don't bother with the computational load of public key systems. The only way to send a message to Bob secretly is to obtain some piece of information from Bob that cannot be known to any other party. There are a variety of ways of doing it: 1. Bob sends a public key in the clear. 2. Bob sends a random bit string encrypted with Alice's public key to use as symmetric key (conventional cryptography). 3. Bob sends enough data to set up a two-part public key (Diffie Helman and derivatives). > Now, I want to know whether we can use EIGamal encryption scheme. That's > to say, Alice encrypts m with its own EIGamal private key, and Bob > decrypts the corresponding ciphertext c with Alice's EIGamal public key. > Please help me. Thank you! As I said...you can't do it even with RSA, and you can't do it with El Gamal. 
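For readers trying to pin down the mechanics being debated in this thread, the textbook scheme fits in a few lines. The following is a minimal Python sketch, not a production implementation: the prime is a toy-sized safe prime, messages are bare integers with no padding or hashing, and the modular inverse uses the three-argument pow() from Python 3.8+. It shows why encryption needs only the public value y while decryption needs x, and, for contrast with the idea of "encrypting with the private key", the standard ElGamal signature r = g^k mod p, s = k^(-1) * (h - x*r) mod (p-1), which anyone can verify with the public key by checking g^h = y^r * r^s (mod p).

import math
import secrets

# Toy parameters: p = 2*q + 1 is a safe prime, far too small for real use.
P = 2039
Q = (P - 1) // 2      # 1019, also prime

def find_generator(p, q):
    # For a safe prime, g generates the whole group iff g^2 != 1 and g^q != 1.
    for g in range(2, p):
        if pow(g, 2, p) != 1 and pow(g, q, p) != 1:
            return g

def keygen(p, g):
    x = 1 + secrets.randbelow(p - 2)          # private key, 1 <= x <= p-2
    return x, pow(g, x, p)                    # (private x, public y = g^x mod p)

def encrypt(m, y, p, g):
    # Anyone holding the *public* key y can encrypt.
    k = 1 + secrets.randbelow(p - 2)
    return pow(g, k, p), (m * pow(y, k, p)) % p   # ciphertext (a, b)

def decrypt(a, b, x, p):
    # Only the private-key holder can strip off y^k: a^(p-1-x) = g^(-x*k) mod p.
    return (b * pow(a, p - 1 - x, p)) % p

def sign(h, x, p, g):
    # Standard ElGamal signature: r = g^k mod p, s = k^-1 * (h - x*r) mod (p-1).
    while True:
        k = 1 + secrets.randbelow(p - 2)
        if math.gcd(k, p - 1) != 1:
            continue
        r = pow(g, k, p)
        s = (pow(k, -1, p - 1) * (h - x * r)) % (p - 1)   # pow(k, -1, n) needs Python 3.8+
        if s != 0:                                        # retry on the rare degenerate case
            return r, s

def verify(h, r, s, y, p, g):
    return 0 < r < p and pow(g, h, p) == (pow(y, r, p) * pow(r, s, p)) % p

if __name__ == "__main__":
    g = find_generator(P, Q)
    x, y = keygen(P, g)
    a, b = encrypt(1234, y, P, g)
    assert decrypt(a, b, x, P) == 1234
    r, s = sign(777, x, P, g)       # 777 stands in for a message hash reduced mod p-1
    assert verify(777, r, s, y, P, g)

Note that if Alice "encrypts" with her private x here, anyone holding y can undo it, which is exactly the loss of secrecy pointed out above; the signature routine is the legitimate use of that direction.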
From paul at ref.nmedia.net Fri Nov 19 18:24:03 2004 From: paul at ref.nmedia.net (Paul Campbell) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <419E207A.90709@sun.com> References: <64D04EB5-397B-11D9-A632-000D932C5880@locut.us> <419E207A.90709@sun.com> Message-ID: <20041119182403.GB30409@ref.nmedia.net> Ian Clarke wrote: >I am in the process of implementing a simple UDP data transfer >algorithm in Java (or more precisely, replacing a braindead >implementation with something slightly more respectable). > >The requirement is simple, get 256k from one node to another over UDP >reliably. It should be "TCP friendly", ie. its flow control shouldn't >crowd out politer TCP traffic, and packets, for obvious reasons, >should be around 1k in size. > >I have toyed with a variety of ideas, and done some research, but I >wanted to see if anyone had any thoughts or advice on the simplest way >I can implement something to meet these requirements (I will probably >use a straight-forward TCP-style windowed approach). UDP packets are limited based on the underlying networks. Most people assume that payloads should exceed about 500 bytes. TCP on the other hand is basically a stream session. But there's nothing wrong with setting up a session, squirting some data along it, and then tearing the session down again. That is exactly what HTTP does (although it does have provisions for holding a session open for a period of time and handling multiple objects on the same stream). In your case, you are sending a substantial amount of data. There are only three reasons that I can think you want to use UDP for this: 1. You're thinking in terms of a "unit of data" which on it's face, UDP appears to be the better candidate for. Obviously, HTTP is a somewhat evolved protocol for the same purpose. I've seen numerous P2P software programs that just "hijack" the HTTP protocol for that purpose. The code is already written and debugged...so just use it to send objects which aren't HTML. 2. You are truly uni-casting with no reverse stream of ACK's. In this case, it's a fire-and-forget protocol. "Reliable" doesn't make a lot of sense in this context. At best, you'd apply FEC (forward error correction) to avoid packet dropping. With no way to regulate flows, you can't be TCP-friendly anyways. 3. You've read all the stuff related to the overhead problems of TCP. It's true that TCP does have more overhead in the individual packet headers. You can read up on the difference in the airhook protocol (do a web search). However, there's also an issue that doesn't show up except when you start doing real-world testing. Because TCP i the dominant protocol (almost the exclusive protocol until P2P started showing up), operating system and libraries are highly optimized for TCP. On a dialup modem, this doesn't matter at all. But on a LAN or high speed internet connection, throughput will simply never be as high as with TCP, unless the protocol stacks are rewritten. Most of the world uses the BSD TCP/IP protocol stack (even Microsoft almost literally just recompiles it). So when it changes, then the situation may change. So in other words...why do it at all? For a 256K transfer, just use TCP directly, or for a programming convenience, just interface directly to an HTTP protocol stack. 
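To put the "just use TCP" suggestion in perspective, a one-shot length-prefixed transfer is only a handful of lines; the sketch below (function and variable names are illustrative) leaves packetization, retransmission and congestion control to the kernel's TCP stack, which is what makes it TCP-friendly by construction.

import socket
import struct

def send_block(host, port, payload):
    # Push one length-prefixed block (e.g. the 256K discussed above) over plain TCP.
    with socket.create_connection((host, port)) as s:
        s.sendall(struct.pack("!I", len(payload)) + payload)

def recv_block(listen_port):
    # Accept one connection and read one length-prefixed block.
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", listen_port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            size = struct.unpack("!I", _read_exact(conn, 4))[0]
            return _read_exact(conn, size)

def _read_exact(conn, n):
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed early")
        buf += chunk
    return buf

Usage is symmetric: one side calls recv_block(9000) while the other calls send_block(host, 9000, data), with 9000 being an arbitrary example port. It obviously does not cover the NAT-traversal case raised further down the thread, which is where the UDP approaches come back in.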
From jrydberg at gnu.org Fri Nov 19 18:57:36 2004 From: jrydberg at gnu.org (Johan Rydberg) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] EIGamal encryption In-Reply-To: <20041119180250.GA30409@ref.nmedia.net> (Paul Campbell's message of "Fri, 19 Nov 2004 10:02:50 -0800") References: <003501c4cde2$f82c6ae0$9402000a@ictltbo> <20041119060227.GA29298@ref.nmedia.net> <004601c4ce05$55273e10$9402000a@ictltbo> <20041119180250.GA30409@ref.nmedia.net> Message-ID: <87wtwh650f.fsf@gnu.org> Paul Campbell writes: >> Alice encrypts m with its own RSA private key, and Bob can >> decrypts the corresponding ciphertext c with Alice's RSA public key. > Alice encrypts m with it's private key. This means that ANY holder of the > public key of Alice can decrypt the message. There is no secrecy here. Might > as well send it in the clear. IIRC, one of the reasons that encrypting is a public function, and decrypting is private is that if you know the secret key you also know the public key. In the example above, why can't Alice simply encrypt ``m'' with Bob's public key (which she has fetched out-of-bound.)? ~j From ian at locut.us Fri Nov 19 20:27:04 2004 From: ian at locut.us (Ian Clarke) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <20041119182403.GB30409@ref.nmedia.net> References: <64D04EB5-397B-11D9-A632-000D932C5880@locut.us> <419E207A.90709@sun.com> <20041119182403.GB30409@ref.nmedia.net> Message-ID: <602F122E-3A69-11D9-A632-000D932C5880@locut.us> On 19 Nov 2004, at 18:24, Paul Campbell wrote: > In your case, you are sending a substantial amount of data. There are > only > three reasons that I can think you want to use UDP for this: 4. Because I want to be able to establish direct connections behind two peers both of which are behind NATs. This is possible with TCP, but it requires some very low-level TCP stack mangling, it is comparatively easy with UDP. Ian. -- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ From bryan.turner at pobox.com Fri Nov 19 21:50:27 2004 From: bryan.turner at pobox.com (Bryan Turner) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <20041119182403.GB30409@ref.nmedia.net> Message-ID: [Sorry for the length, I got to rambling a bit..] > In your case, you are sending a substantial amount of data. > There are only three reasons that I can think you want to use > UDP for this: Paul, Actually there's quite a few useful properties of UDP over TCP for Peer-to-Peer projects. Here are my favorite: - Firewalls (as Ian suggested) - UDP can accept any number of 'connections' over a single port. - Streaming, Datagram, and out-of-band traffic can be combined in the same protocol (ala RTSP). - FEC, Digital Fountains, etc.. - Multicast/Broadcast - Precise timing of packet transfers (for time-critical events such as voice, video, and games). Also, I would like to rebuke the typical view of the TCP-centric internet. This is not completely correct from the perspective of the internet routers (I work at Cisco, which may skew my view of the network toward in-situ devices). Internet core routers tend to track "flows", not Layer-3 connections. Flows are defined by IP headers (Layer 2); source & dest IP addresses. Some high-end products do perform Layer-3+ sniffing, but this is usually not done in the core of the network. 
TCP has a transfer "ceiling" based on its backoff algorithm and the throughput/latency of the intervening hops. Basically, when packets start dropping, TCP backs off to about 50% of its window. Thus you get a 'saw tooth' graph of the transfer rate, which is far from optimal. Higher throughput links have a worse graph than lower throughput links. Thus a single TCP transfer taking up an OC-48 would *never* achieve true OC-48 transfer speeds, while a TCP transfer over a T1 line would get pretty close. It is a common misconception that packets get 'lost' or 'go missing' in the internet. What is actually occurring is the routers, using various heuristics, simply drop the packet in order to keep the rest of the flows transferring at the highest possible rate. However, TCP is a very "greedy" protocol always increasing its window size until it eventually fills the router's buffers. At this point the router has no choice but to drop packets from that flow. Some research on solving this problem in the routers has been in progress within Cisco. There are also some academic groups working on client/server solutions (http://netlab.caltech.edu/FAST/). The general idea is to backoff only a little at first, and then increase the backoff over time until the transfer speed stabilizes. --------------------------------- Ian, Not to hijack the thread, here's my input on reliable-UDP.. My reliable UDP protocol is based on arbitrary-length messages which are broken up into packets and sent over UDP. Messages are ordered on the other side and delivered in order (per stream), but actual packet ordering may be arbitrary. This allows streams with small, quick packet transfers to intermix with longer packet streams with no visible latency. There are three basic layers, Message Order, Fragmentation, and Packet Ordering: - Message Order Layer assigns a monotonically increasing message number to this message as well as ordering the received messages coming from the other side. Messages are assumed to be reliable (so if msg 100 arrives before msg 99, we just hold delivery of both messages until 99 arrives). - Fragmentation Layer breaks the large messages into small messages and interleaves them in a short queue which is shared by all streams. The idea is to round-robin the streams so they always have one packet in the queue. This layer also re-combines fragments from the packet layer into full messages and passes them up the stack. It keeps a list of fragments for this. - Packet Order Layer assigns a monotonically increasing packet number to each packet sent to a unique destination (called a conversation). This includes a short ACK window from the other side also, so we know immediately when messages go missing. Reliability is achieved by keeping a buffer of all un-acked packets and intermixing them with the outgoing stream until an ACK is received. From your perspective, the 256k of data is just a "message" in this scheme, you just call the API function, pointing it at the data to be transferred. The message layer assigns it a number, then the fragmentation layer chops it up. The packet layer starts transferring it, and as packets are dropped, the ACK stream tells the sender to re-send some messages. On the other side, the packet layer ACKs blocks of messages and passes them up to the fragment layer. Fragment layer keeps the fragments around till it can build a full message (in your case, the whole message). Finally, the message layer delivers it to the app on the other side. 
"TCP Friendly" is just the heuristics related to the Packet Layer and how often it chooses to send packets. I have a separate module which tracks packet sends/ACKs and calculates throughput. This is queried at regular intervals by the Packet layer to adjust its behavior. There is no 'backoff' or 'window' as in TCP, instead the inter-packet send rate takes on the same behavior. Unfortunately this is not open source, but I hope my descriptions are enough to relate the general behaviors. Hope this helps, and feel free to ask questions. --Bryan bryan.turner@pobox.com From agthorr at barsoom.org Fri Nov 19 22:11:57 2004 From: agthorr at barsoom.org (Daniel Stutzbach) Date: Sat Dec 9 22:12:43 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: References: <20041119182403.GB30409@ref.nmedia.net> Message-ID: <20041119221156.GF19140@barsoom.org> On Fri, Nov 19, 2004 at 04:50:27PM -0500, Bryan Turner wrote: > TCP has a transfer "ceiling" based on its backoff algorithm and the > throughput/latency of the intervening hops. Basically, when packets start > dropping, TCP backs off to about 50% of its window. Thus you get a 'saw > tooth' graph of the transfer rate, which is far from optimal. Higher > throughput links have a worse graph than lower throughput links. Thus a > single TCP transfer taking up an OC-48 would *never* achieve true OC-48 > transfer speeds, while a TCP transfer over a T1 line would get pretty close. That's true when you've got a single TCP flow running over a really big pipe. When you've got a large number of flows, they do tend to collectively keep the pipe full if I'm not mistaken. -- Daniel Stutzbach Computer Science Ph.D Student http://www.barsoom.org/~agthorr University of Oregon From eugen at leitl.org Fri Nov 19 22:20:31 2004 From: eugen at leitl.org (Eugen Leitl) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <20041119221156.GF19140@barsoom.org> References: <20041119182403.GB30409@ref.nmedia.net> <20041119221156.GF19140@barsoom.org> Message-ID: <20041119222031.GT1457@leitl.org> On Fri, Nov 19, 2004 at 02:11:57PM -0800, Daniel Stutzbach wrote: > That's true when you've got a single TCP flow running over a really > big pipe. When you've got a large number of flows, they do tend to > collectively keep the pipe full if I'm not mistaken. Another thing is relativistic latency (e.g. over GEO links, and Interplanet especially). The link is reliable, vaccuum being the FIFO, just the ACK latency sucks. UDP could be useful here. -- Eugen* Leitl leitl ______________________________________________________________ ICBM: 48.07078, 11.61144 http://www.leitl.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE http://moleculardevices.org http://nanomachines.net -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available Url : http://zgp.org/pipermail/p2p-hackers/attachments/20041119/8101507d/attachment.pgp From clausen at gnu.org Fri Nov 19 22:57:31 2004 From: clausen at gnu.org (Andrew Clausen) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] EIGamal encryption In-Reply-To: <20041119180250.GA30409@ref.nmedia.net> References: <003501c4cde2$f82c6ae0$9402000a@ictltbo> <20041119060227.GA29298@ref.nmedia.net> <004601c4ce05$55273e10$9402000a@ictltbo> <20041119180250.GA30409@ref.nmedia.net> Message-ID: <20041119225731.GA2394@gnu.org> On Fri, Nov 19, 2004 at 10:02:50AM -0800, Paul Campbell wrote: > > Now, I want to know whether we can use EIGamal encryption scheme. That's > > to say, Alice encrypts m with its own EIGamal private key, and Bob > > decrypts the corresponding ciphertext c with Alice's EIGamal public key. > > Please help me. Thank you! > > As I said...you can't do it even with RSA, and you can't do it with > El Gamal. Every cipher has the property decrypt(encrypt(x)) = x. With RSA, we also have the property encrypt(decrypt(x)) = x. In RSA, everyone knows the modulus "n". Usually, we say that the private key is a number "a" and the public key is (n, b) such that ab = 1 (mod phi(n)). So, the (public, private) pair is ((n, b), a). Notice how symmetrical this all is? ((n, b), a) is a (public, private) pair if and only if ((n, a), b) is a (public, private) pair. This means that you can swap the roles of public and private keys. In the reversed roles, only the private key holder can encrypt, but everyone can decrypt. This is how you can use RSA for digital signatures. So, Tianbo's question is essentially: can the roles of private and public keys be reversed in a similar way for El Gamal? If ((p, g, y), x) is an El Gamal (public, private) pair, is there any cryptosystem in which, say ((p, g, x), y) is a (public, private) pair? Clearly, ((p, g, x), y) isn't going to work, because you can compute y from (p, g, x) easily. (This is different from RSA, where it is difficult to compute "a" from (n, b)). Perhaps there is another transformation of ((p, g, y), x) that would work. I think this is unlikely: I think both "p" and "g" have to remain public, which leave only "x" and "y" as candidates to be swapped. I can't think of any way to prove this formally. It's also worth noting that the El Gamal signature system doesn't allow the public key holder to infer the original document (or original hash) from the signature - if someone had found a way to invert El Gamal, then I think it would be presented in textbooks as another signature scheme. Interesting question, though :) Cheers, Andrew From em at em.no-ip.com Sat Nov 20 02:55:10 2004 From: em at em.no-ip.com (Enzo Michelangeli) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer References: <64D04EB5-397B-11D9-A632-000D932C5880@locut.us><419E207A.90709@sun.com> <20041119182403.GB30409@ref.nmedia.net> <602F122E-3A69-11D9-A632-000D932C5880@locut.us> Message-ID: <028f01c4ceac$60f6cba0$0200a8c0@em.noip.com> ----- Original Message ----- From: "Ian Clarke" To: "Peer-to-peer development." Sent: Saturday, November 20, 2004 4:27 AM [...] > 4. Because I want to be able to establish direct connections behind two > peers both of which are behind NATs. How do you do that, in the general case where the source port is translated unpredictably by the NATting device? 
BTW, I'm interested in the same matter (for my http://kadc.sourceforge.net/ ) and in the past I tried to find some opensource implementations in C, to no avail. Cisco has proposed a rUDP protocol, but there seems to be no free code for it. There are also lots of academic papers on alternatives to TCP: see e.g. http://www.evl.uic.edu/eric/atp/ . Another idea would be a sort of TCP-over-RTP, which was also discussed a while ago in the OGG-Vorbis community. I like the idea, despite the overhead, because there is a good security layer for RTP (SRTP), and one would leverage on it and get secure stream connections almost free. Apple has proposed something similar, described at http://developer.apple.com/documentation/QuickTime/QTSS/Concepts/chapter_2_section_13.html : the ACK info is sent in form of bitmaps contained in RTCP APP packets. Finally, Petar Maymounkov (co-inventor of Kademlia) has done some work in this area, in the context of www.rateless.com (which, to my understanding, uses FEC and unacknowledged UDP to achieve high throughput on lossy and high-latency channels - but probably in a TCP-unfriendly way). See e.g. http://www.rateless.com/rcx1.html and http://www.rateless.com/socket.html . Enzo From paul at ref.nmedia.net Sat Nov 20 03:49:51 2004 From: paul at ref.nmedia.net (Paul Campbell) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: References: <20041119182403.GB30409@ref.nmedia.net> Message-ID: <20041120034951.GB15713@ref.nmedia.net> On Fri, Nov 19, 2004 at 04:50:27PM -0500, Bryan Turner wrote: > Paul, > > Actually there's quite a few useful properties of UDP over TCP for > Peer-to-Peer projects. Here are my favorite: > > - Firewalls (as Ian suggested) > - UDP can accept any number of 'connections' over a single port. That is precisely the reason that I'm using it. Because I'm writing DHT code which has a substantial amount of "query/reply" packets to wildly different destinations. About the time that TCP gets done with the initial 3-way handshake, I'm already done. > - Streaming, Datagram, and out-of-band traffic can be combined in the same > protocol (ala RTSP). The same is true in TCP. If you look at a lot of high level TCP code (e.g. HTTP), it tends to look exactly the same way. In fact, the current extended HTTP includes provisions for multiple "streams" (for out of band functions and for encapsulating multiple object requests which typically happen in HTML). In fact, even the venerable telnet protocol has an "out of band" escape function, although it was never used for much beyond the basic functions in the RFC. > - FEC, Digital Fountains, etc.. > - Multicast/Broadcast > - Precise timing of packet transfers (for time-critical events such as > voice, video, and games). In these particular cases, I agree as well. The problem with TCP is that the reliability function gets in the way. The higher level protocol can deal with dropped packets in a cleaner way. Either dropped packets are not an issue (FEC deals with it directly) or else delaying the packet stream simply because it is waiting for the result of a NAK is unacceptable for real time data delivery. > Also, I would like to rebuke the typical view of the TCP-centric internet. > This is not completely correct from the perspective of the internet routers > (I work at Cisco, which may skew my view of the network toward in-situ > devices). Very true. 
The only place where I know routers pay any attention to the packet payload is when a router is being used as a firewall and it has to "sniff" out certain protocols (e.g. universities trying to block file sharing traffic). > It is a common misconception that packets get 'lost' or 'go missing' in the > internet. What is actually occurring is the routers, using various > heuristics, simply drop the packet in order to keep the rest of the flows > transferring at the highest possible rate. It's not a misconception. I'm very aware of this process simply because I had to make sure that the UDP protocol I am writing is "TCP friendly" (ie, follows the same level of greediness so that it doesn't get starved or act even more aggressively than TCP). In this particular case, the original post said that the goal was to unicast 256K work of data reliably. That's not very UDP-like at all. It is an ideal application of TCP. > Not to hijack the thread, here's my input on reliable-UDP.. > > My reliable UDP protocol is based on arbitrary-length messages which are > broken up into packets and sent over UDP. Messages are ordered on the other > side and delivered in order (per stream), but actual packet ordering may be > arbitrary. This allows streams with small, quick packet transfers to > intermix with longer packet streams with no visible latency. > > There are three basic layers, Message Order, Fragmentation, and Packet > Ordering: > > - Message Order Layer assigns a monotonically increasing message number to > this message as well as ordering the received messages coming from the other > side. Messages are assumed to be reliable (so if msg 100 arrives before msg > 99, we just hold delivery of both messages until 99 arrives). In my own protocol, I made the assumption that out-of-order delivery rarely ever happens (which is true in practice). Thus, I simply reject (NAK) out of order packets under the assumption that the packet was dropped rather than getting out of order. > - Fragmentation Layer breaks the large messages into small messages and > interleaves them in a short queue which is shared by all streams. The idea > is to round-robin the streams so they always have one packet in the queue. > This layer also re-combines fragments from the packet layer into full > messages and passes them up the stack. It keeps a list of fragments for > this. I am doing strictly RPC calls. Thus, everything is a message. I tag every packet with a nonce which is generated by the requester. Different "streams" (actually, RPC's) are tracked by nonce. The request and the reply are tagged with the same nonce. Even multiple-packet messages are assumed to be reasonably short so I don't even attempt to ACK individual packets. Either the entire request and response makes it, or else it is retransmitted. My goal is to support messages that are generally less than 1-2K in length. At the higher level, if there is the potential for an extremely long packet, I purposely could send it in multiple request/reply sequences to keep things under control (not a problem so far). I don't worry about "keeping packets in queue". There is just one queue, and it stores all outgoing RPC requests and replies. The transmit code simply sends one entire message right after the other. The receiving code maintains buffers of all incoming messages (subject to a timeout if the request takes too long to complete). As a request or reply completes, it is dispatched. 
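A stripped-down version of that request/reply bookkeeping, with invented names and sizes, might look like the sketch below: the reply doubles as the ACK, and anything still pending at its deadline is resent as a whole message.

import os
import time

class RpcTable:
    # Track outstanding requests by nonce; the reply is the only ACK.
    # A real implementation would cap retries and expire dead peers.
    def __init__(self, send, timeout=2.0):
        self.send = send              # callable taking (addr, datagram)
        self.timeout = timeout
        self.pending = {}             # nonce -> [addr, payload, deadline, callback]

    def request(self, addr, payload, callback):
        nonce = os.urandom(4)         # requester-generated tag shared by request and reply
        self.pending[nonce] = [addr, payload, time.time() + self.timeout, callback]
        self.send(addr, nonce + payload)
        return nonce

    def on_datagram(self, addr, datagram):
        nonce, body = datagram[:4], datagram[4:]
        entry = self.pending.pop(nonce, None)
        if entry is not None:
            entry[3](addr, body)      # reply to one of our requests; implicitly ACKs it

    def tick(self):
        # Called periodically: resend entire messages that have timed out.
        now = time.time()
        for nonce, entry in self.pending.items():
            if now >= entry[2]:
                entry[2] = now + self.timeout
                self.send(entry[0], nonce + entry[1])

Incoming requests from other nodes would be dispatched separately; only datagrams whose nonce matches something in pending are treated as replies.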
Overhead is just 5 bytes (4 byte nonce and 1 byte of combined flags and a 4 bit counter to handle messages that are up to 16 packets long). In this scheme, since every request must receive a reply at the higher level, I dispensed with a lower level ACK. Otherwise, you get the situation where you get a 4-way handshake; a request is sent followed by ACK's, and then the response is sent, followed by more ACK's. I know that obviously the ACK's interleave, but since a response performs the same function as an ACK, it eliminated layers and overhead. > - Packet Order Layer assigns a monotonically increasing packet number to > each packet sent to a unique destination (called a conversation). This > includes a short ACK window from the other side also, so we know immediately > when messages go missing. Reliability is achieved by keeping a buffer of > all un-acked packets and intermixing them with the outgoing stream until an > ACK is received. > > From your perspective, the 256k of data is just a "message" in this scheme, > you just call the API function, pointing it at the data to be transferred. > The message layer assigns it a number, then the fragmentation layer chops it > up. The packet layer starts transferring it, and as packets are dropped, > the ACK stream tells the sender to re-send some messages. > > On the other side, the packet layer ACKs blocks of messages and passes them > up to the fragment layer. Fragment layer keeps the fragments around till it > can build a full message (in your case, the whole message). Finally, the > message layer delivers it to the app on the other side. > > "TCP Friendly" is just the heuristics related to the Packet Layer and how > often it chooses to send packets. I have a separate module which tracks > packet sends/ACKs and calculates throughput. This is queried at regular > intervals by the Packet layer to adjust its behavior. There is no 'backoff' > or 'window' as in TCP, instead the inter-packet send rate takes on the same > behavior. I did the same thing. Since my streams are very short (not anything close to what TCP would need to do proper congestion control), I simply track the RTT's and the requests/responses by node. The messages (both requests and responses) sit in a queue before entering the transmit queue. The waiting queue time holds the request for a particular time before releasing it. In the future, I'm thinking about implementing "network coordinates" simply because I would like to have an RTT estimate to use for a node that has never been contacted before (when I have no prior data). As to actual implementation, I'm using the Twisted Matrix framework in Python. My queues are actually Python dictionaries. The key is always either the nonce or a (address, port, nonce) tuple converted to a string (because Python wants it that way). For timing, I simply use the event driven functions in Python directly. So there really isn't a "waiting to transmit" queue as such. I simply execute a "CallLater" function in the reactor code. This blends all the various protocol activities together but it seems to work even for good sized web servers that have been implemented in Twisted so I don't see where this is going to be an issue. The same event driven engine handles timeouts. 
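For concreteness, the 5-byte header and the string dictionary keys described above can be written out as follows; the exact bit layout (flags in the high nibble, 4-bit packet counter in the low nibble) is an assumption for illustration, not something specified in the message.

import struct

HEADER = struct.Struct("!4sB")   # 4-byte nonce + 1 combined flags/counter byte = 5 bytes

def pack_packet(nonce, flags, counter, payload):
    # Assumed layout: flags in the high nibble, 4-bit packet counter (0-15) in the low nibble.
    return HEADER.pack(nonce, ((flags & 0x0F) << 4) | (counter & 0x0F)) + payload

def unpack_packet(datagram):
    nonce, byte = HEADER.unpack_from(datagram)
    return nonce, byte >> 4, byte & 0x0F, datagram[HEADER.size:]

def table_key(addr, nonce):
    # A tuple would also work as a dict key; the string form simply mirrors
    # the (address, port, nonce)-to-string approach described above.
    return "%s:%d:%s" % (addr[0], addr[1], nonce.hex())

# With Twisted, a retransmission timer for a pending entry is then just
# reactor.callLater(delay, resend, table_key(addr, nonce)), where resend is
# whatever function re-queues the message.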
From paul at ref.nmedia.net Sat Nov 20 03:55:44 2004 From: paul at ref.nmedia.net (Paul Campbell) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: References: <20041119182403.GB30409@ref.nmedia.net> Message-ID: <20041120035544.GC15713@ref.nmedia.net> On Fri, Nov 19, 2004 at 04:50:27PM -0500, Bryan Turner wrote: > [Sorry for the length, I got to rambling a bit..] Ohh...I forgot one point to make. I wasn't accusing the routing hardware of being "TCP-centric". There's really very little at the router and AS level that pays any attention to anything other than the IP level of things. HOWEVER, what I was referring to as "TCP-centric" has nothing to do with the network hardware. It has everything to do with the end points, the user machines and the servers. Almost every operating system (Linux, BSD, MacOS X, and Windows for sure) use the BSD TCP/IP code base. Especially in BSD 4.4, the code base has been highly optimized to improve buffering and dispatching functions for TCP. The same optimizations have NOT been done for UDP. Thus, throughput can and sometimes does significantly bottleneck at the endpoints. The underlying switching architecture is not the culprit of the bottleneck. From coderman at peertech.org Sat Nov 20 04:09:14 2004 From: coderman at peertech.org (coderman) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <602F122E-3A69-11D9-A632-000D932C5880@locut.us> References: <64D04EB5-397B-11D9-A632-000D932C5880@locut.us> <419E207A.90709@sun.com> <20041119182403.GB30409@ref.nmedia.net> <602F122E-3A69-11D9-A632-000D932C5880@locut.us> Message-ID: <419EC36A.1090203@peertech.org> Ian Clarke wrote: > 4. Because I want to be able to establish direct connections behind > two peers both of which are behind NATs. > > This is possible with TCP, but it requires some very low-level TCP > stack mangling, it is comparatively easy with UDP. UPnP is worth trying first before resorting to more complicated methods. From paul at ref.nmedia.net Sat Nov 20 04:18:16 2004 From: paul at ref.nmedia.net (Paul Campbell) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] EIGamal encryption In-Reply-To: <20041119225731.GA2394@gnu.org> References: <003501c4cde2$f82c6ae0$9402000a@ictltbo> <20041119060227.GA29298@ref.nmedia.net> <004601c4ce05$55273e10$9402000a@ictltbo> <20041119180250.GA30409@ref.nmedia.net> <20041119225731.GA2394@gnu.org> Message-ID: <20041120041816.GD15713@ref.nmedia.net> On Sat, Nov 20, 2004 at 09:57:31AM +1100, Andrew Clausen wrote: > On Fri, Nov 19, 2004 at 10:02:50AM -0800, Paul Campbell wrote: > > > Now, I want to know whether we can use EIGamal encryption scheme. That's > > > to say, Alice encrypts m with its own EIGamal private key, and Bob > > > decrypts the corresponding ciphertext c with Alice's EIGamal public key. > > > Please help me. Thank you! > > > > As I said...you can't do it even with RSA, and you can't do it with > > El Gamal. > > Every cipher has the property decrypt(encrypt(x)) = x. > With RSA, we also have the property encrypt(decrypt(x)) = x. In the full context of my post, I was stating that you can't send a message secretly when it only requires your own public key to decrypt it, which is PUBLIC. So the inverted system of RSA or any other public key system provides no secrecy, without losing the assymetric nature of it. 
If you want to maintain secrecy, then you've just converted your computationally intensive public key system into a private key system...which is kind of pointless. There is an application that you mentioned though. If you do use the reversed system as a digital signature, nobody cares what the underyling nonce (the hash) value is. That's public knowledge. The point is to prove that you possess the private key. Of course as you mentioned, one of the curious things about the ElGamal digital signature scheme is that due to the darned random nonce, it's not even possible to recover the "original document" (the hash value). However, analogous to that, El Gamal has a "feature" that RSA doesn't have. The nonce is not recoverable by the private key holder. Thus, the sender can test a few different nonces to attempt to control the bits in the ciphertext. This can be used as a subliminal communication channel and is undetectable by the private key holder. There are ElGamal variations though that disrupt the subliminal channel as well. From paul at ref.nmedia.net Sat Nov 20 04:40:58 2004 From: paul at ref.nmedia.net (Paul Campbell) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <028f01c4ceac$60f6cba0$0200a8c0@em.noip.com> References: <20041119182403.GB30409@ref.nmedia.net> <602F122E-3A69-11D9-A632-000D932C5880@locut.us> <028f01c4ceac$60f6cba0$0200a8c0@em.noip.com> Message-ID: <20041120044058.GE15713@ref.nmedia.net> On Sat, Nov 20, 2004 at 10:55:10AM +0800, Enzo Michelangeli wrote: > ----- Original Message ----- > From: "Ian Clarke" > To: "Peer-to-peer development." > Sent: Saturday, November 20, 2004 4:27 AM > > [...] > > 4. Because I want to be able to establish direct connections behind two > > peers both of which are behind NATs. > > How do you do that, in the general case where the source port is > translated unpredictably by the NATting device? You can't. If the source port is unpredictably translated at both ends, it is impossible to make contact since it is not possible to detect which port to hammer externally. HOWEVER, a lot of NAT's don't translate ports unpredictably. So it is possible if only the addresses are translated to use a proxy. First send a packet to the proxy notifying the NAT'd machine that you wish to make contact and indicate the address and port. Then begin sending packets to that port. The NAT'd machine simultaneously attempts the same thing. The NAT recognizes a stream of packets in both directions and makes the assumption that the external packets are responses to an internally initiated conversation. This scenario can even work NAT to NAT, as long as port translation isn't going on. If address translation is going on, forget it! There are several documents discussing NAT busting out there. However, it boils down to either port translation is not done, the NAT is programmed to pass on traffic destined to certain ports onto certain internal machines & ports, or else it's not possible. > Finally, Petar Maymounkov (co-inventor of Kademlia) has done some work in > this area, in the context of www.rateless.com (which, to my understanding, > uses FEC and unacknowledged UDP to achieve high throughput on lossy and > high-latency channels - but probably in a TCP-unfriendly way). See e.g. > http://www.rateless.com/rcx1.html and > http://www.rateless.com/socket.html . You can do it in a TCP-friendly way. The key is to maintain the aggressiveness of the throughput algorithm in a max-min way. 
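One way to make "the same aggressiveness as TCP, no more and no less" concrete is to cap the sender's rate at roughly what a TCP flow would achieve for the measured RTT and loss rate. A rough sketch using the well-known steady-state TCP throughput approximation (Mathis et al.); the clamping and example numbers are illustrative, and this is not a complete congestion controller:

    from math import sqrt

    def tcp_friendly_rate(mss_bytes, rtt_seconds, loss_rate):
        # Approximate steady-state TCP throughput in bytes/sec:
        #   rate ~= MSS / (RTT * sqrt(2p/3))
        p = max(loss_rate, 1e-6)               # avoid division by zero on lossless paths
        return mss_bytes / (rtt_seconds * sqrt(2.0 * p / 3.0))

    # Example: 1460-byte segments, 100 ms RTT, 1% loss -> roughly 180 KB/s.
    cap = tcp_friendly_rate(1460, 0.100, 0.01)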
It is not TCP-friendly if you don't follow that rule. That doesn't mean that binary exponential backoff is the ONE TRUE WAY to achieve the same aggressiveness. From mgp at ucla.edu Sat Nov 20 07:57:38 2004 From: mgp at ucla.edu (Michael Parker) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <20041120034951.GB15713@ref.nmedia.net> References: <20041119182403.GB30409@ref.nmedia.net> <20041120034951.GB15713@ref.nmedia.net> Message-ID: <419EF8F2.70309@ucla.edu> I have a robust implementation of Vivaldi coded in Java, which should be easily translated to any other object-oriented programming language (I say OO because it uses the singleton and abstract factory patterns, although the former is easily removed from the implementation). It includes the 2D/3D/5D Euclidian models, and the height-vector model specified in their SIGCOMM paper. Let me know if it's of any interest to you -- if so, I can try to add some Javadoc to it and send it (although I'm pretty busy these days, I can't garuantee the Javadoc part). All I may ask in return is more description of your UDP-based protocol, since I'm writing my own DHT-based p2p app and will reach the network layer eventually, needing inspiration... - Michael Parker >In >the future, I'm thinking about implementing "network coordinates" simply >because I would like to have an RTT estimate to use for a node that has >never been contacted before (when I have no prior data). > > From ian at locut.us Sat Nov 20 11:02:09 2004 From: ian at locut.us (Ian Clarke) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <419EC36A.1090203@peertech.org> References: <64D04EB5-397B-11D9-A632-000D932C5880@locut.us> <419E207A.90709@sun.com> <20041119182403.GB30409@ref.nmedia.net> <602F122E-3A69-11D9-A632-000D932C5880@locut.us> <419EC36A.1090203@peertech.org> Message-ID: <9FCB502E-3AE3-11D9-A632-000D932C5880@locut.us> On 20 Nov 2004, at 04:09, coderman wrote: > Ian Clarke wrote: > >> 4. Because I want to be able to establish direct connections behind >> two peers both of which are behind NATs. >> >> This is possible with TCP, but it requires some very low-level TCP >> stack mangling, it is comparatively easy with UDP. > > UPnP is worth trying first before resorting to more complicated > methods. I don't consider this to be more complicated than UPnP, it is actually surprisingly simple. Also, since UPnP is still far from pervasive, I would need to implement this one way or the other anyhow. Ian. -- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ From ian at locut.us Sat Nov 20 11:07:50 2004 From: ian at locut.us (Ian Clarke) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <20041120044058.GE15713@ref.nmedia.net> References: <20041119182403.GB30409@ref.nmedia.net> <602F122E-3A69-11D9-A632-000D932C5880@locut.us> <028f01c4ceac$60f6cba0$0200a8c0@em.noip.com> <20041120044058.GE15713@ref.nmedia.net> Message-ID: <6ACD3024-3AE4-11D9-A632-000D932C5880@locut.us> On 20 Nov 2004, at 04:40, Paul Campbell wrote: > On Sat, Nov 20, 2004 at 10:55:10AM +0800, Enzo Michelangeli wrote: >> ----- Original Message ----- >> From: "Ian Clarke" >> To: "Peer-to-peer development." >> Sent: Saturday, November 20, 2004 4:27 AM >> >> [...] >>> 4. Because I want to be able to establish direct connections behind >>> two >>> peers both of which are behind NATs. 
>> >> How do you do that, in the general case where the source port is >> translated unpredictably by the NATting device? > > You can't. If the source port is unpredictably translated at both > ends, it > is impossible to make contact since it is not possible to detect which > port > to hammer externally. Well, that is not strictly true. In my application peers do not assume that their "external" source port will be the same as their "internal" port - they detect what is is as part of their assimilation into the network, at the same time that they determine their external IP address. It has been working very nicely even for NATs which mangle the source port on UDP packets. Ian. -- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ From ian at locut.us Sat Nov 20 11:33:45 2004 From: ian at locut.us (Ian Clarke) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: References: Message-ID: <0981F273-3AE8-11D9-A632-000D932C5880@locut.us> On 19 Nov 2004, at 21:50, Bryan Turner wrote: > [Sorry for the length, I got to rambling a bit..] Not at all, this is fascinating - and given that P2P applications are likely to form an increasing proportion of Internet traffic, it is essential that the protocols used account for the lessons of the past. > Not to hijack the thread, here's my input on reliable-UDP.. Interesting, although it seems like it might be overkill for what I need :-/ Here is what I am doing now, I was working on the assumption that I would need to revamp it, but after testing for a few days, I'm surprised by how well it is working, so perhaps that won't be necessary. Comments (and/or shrieks of abject horror) are welcome: The 256k block is split into 256 packets which are transmitted at regular intervals determined by our upstream bandwidth limit which we adjust on the fly (see below). If two or more blocks are being transmitted at once, then they share the available upstream bandwidth (ie. for 2 the interval for each is doubled). The transmitter will always transmit the lowest untransmitted block next. Once every few packets (I use 64) the sender sends a check message which contains a bit array indicating which packets it has sent. The receiver, on getting this, checks to see that the transmitted blocks correspond to what it has received, and if not, sends a message to the sender indicating which blocks are missing. The sender marks these as untransmitted and thus resends them at the next interval. On completion the sender sends a check message and awaits either a completed message from the receiver, or another retransmission request. These are resent periodically if sender or receiver don't hear from each-other to account for the possibility that they are dropped. Right now the upstream bandwidth limit is fixed, but if I decide to stick with this algorithm I will adjust it based on round-trip time. The way I do this is the question mark right now, but essentially I was thinking that I would set a RTT threshold designed to be triggered when the user's last-mile connection starts to buffer packets. 
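A sketch of one way such a threshold could be tracked: keep a smoothed RTT and its observed floor, and flag congestion once the smoothed value rises some margin above that floor. The margin and smoothing constant are illustrative guesses, not values from Dijjer:

    class RttThreshold:
        # Flags congestion when the smoothed RTT rises well above its observed floor,
        # i.e. when the last-mile queue appears to be filling.

        def __init__(self, margin=1.5, alpha=0.125):
            self.margin = margin      # trigger at 1.5x the baseline (illustrative)
            self.alpha = alpha        # EWMA weight, as in classic TCP srtt
            self.base = None          # lowest smoothed RTT seen so far
            self.srtt = None

        def sample(self, rtt):
            self.srtt = rtt if self.srtt is None else \
                (1 - self.alpha) * self.srtt + self.alpha * rtt
            self.base = self.srtt if self.base is None else min(self.base, self.srtt)
            return self.srtt > self.base * self.margin   # True -> time to back off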
At this point I use an additive increase multiplicative decrease approach using constants that achieve TCP friendliness as described in this paper (look around page 13): http://www.cs.utexas.edu/users/lam/Vita/Misc/YangLam00tr.pdf Specifically: public static final float PACKET_DROP_DECREASE_MULTIPLE = 0.875f; public static final float PACKET_TRANSMIT_INCREMENT = (4 * (1 - (PACKET_DROP_DECREASE_MULTIPLE * PACKET_DROP_DECREASE_MULTIPLE))) / 3; Anyway, that is the rough outline, comments are appreciated. Incidentally, the fruits of this labour will be open source, you can see what I have done so-far at http://dijjer.org/. You can browse the messy and so-far uncommented code here: http://cvs.sourceforge.net/viewcvs.py/dijjer/Dijjer/ (Don't worry, I will go through, tidy up, and comment it in due course). I'm trying to maintain a low profile until the code is reasonably stable, but that can create a chicken and egg issue because its hard to know that it is stable until it has been tested on a reasonably large scale. Ian. -- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ From paul at ref.nmedia.net Sat Nov 20 11:39:11 2004 From: paul at ref.nmedia.net (Paul Campbell) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <0981F273-3AE8-11D9-A632-000D932C5880@locut.us> References: <0981F273-3AE8-11D9-A632-000D932C5880@locut.us> Message-ID: <20041120113911.GB16150@ref.nmedia.net> On Sat, Nov 20, 2004 at 11:33:45AM +0000, Ian Clarke wrote: > Once every few packets (I use 64) the sender sends a check message > which contains a bit array indicating which packets it has sent. The > receiver, on getting this, checks to see that the transmitted blocks > correspond to what it has received, and if not, sends a message to the > sender indicating which blocks are missing. The sender marks these as > untransmitted and thus resends them at the next interval. On > completion the sender sends a check message and awaits either a > completed message from the receiver, or another retransmission request. > These are resent periodically if sender or receiver don't hear from > each-other to account for the possibility that they are dropped. Just reading through this, it seems that you could save a packet (and an RTT) if you just had the receiver mark it's own bit array and send that back. Obviously the receiver has to be able to identify the packet ordering somehow anyways. So if the receiver sends a bit array simultaneously marking what it has received on a periodic basis, the transmitter can monitor without having an extra wait cycle. It simply keeps a bit array which has 3 states: not sent, waiting for acknowledgement, and completed. Also, if you want to get more complicated, you could consider the IDA (information dispersal algorithm) which lets you incrementally (and by slight adjustment) send a mild amount of additional redundancy to overcome packet dropping by routers. You don't have to use the IDA scheme either. The recent LT, Raptor, and Online Codes provide similar functions to the IDA scheme. With all of these code systems, the idea is that it changes the receiver's point of view from "I need to receive packets 1-1000" to "I need to receive at least 1000 packets". The response to packet loss (aside from flow control) is to increase the amount of redundancy in order to overcome further losses. 
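The change of viewpoint is easy to see in code: with plain sequence numbers the receiver has to track exactly which indices are still missing, while with a rateless code (LT, Raptor, Online Codes) it only counts distinct encoded symbols until it has enough to decode. The sketch below omits the coding itself; a real fountain code needs roughly K(1+epsilon) symbols rather than exactly K:

    class IndexedReceiver:
        # "I need to receive packets 1..K": must report exactly which indices are missing.
        def __init__(self, k):
            self.missing = set(range(k))
        def receive(self, index, data):
            self.missing.discard(index)
        def done(self):
            return not self.missing

    class RatelessReceiver:
        # "I need to receive at least K packets": any K distinct encoded symbols will do
        # (the actual decoding step is omitted here).
        def __init__(self, k):
            self.k = k
            self.symbols = {}
        def receive(self, symbol_id, data):
            self.symbols[symbol_id] = data
        def done(self):
            return len(self.symbols) >= self.k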
> Right now the upstream bandwidth limit is fixed, but if I decide to > stick with this algorithm I will adjust it based on round-trip time. > The way I do this is the question mark right now, but essentially I was > thinking that I would set a RTT threshold designed to be triggered when > the user's last-mile connection starts to buffer packets. At this > point I use an additive increase multiplicative decrease approach using > constants that achieve TCP friendliness as described in this paper > (look around page 13): > > http://www.cs.utexas.edu/users/lam/Vita/Misc/YangLam00tr.pdf > > Specifically: > > public static final float PACKET_DROP_DECREASE_MULTIPLE = 0.875f; > public static final float PACKET_TRANSMIT_INCREMENT = (4 * (1 - > (PACKET_DROP_DECREASE_MULTIPLE * > PACKET_DROP_DECREASE_MULTIPLE))) / 3; That's all you got to do. Maintain the same aggressiveness when it comes to flow control. No more or less than TCP. From ian at locut.us Sat Nov 20 12:15:07 2004 From: ian at locut.us (Ian Clarke) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <20041120113911.GB16150@ref.nmedia.net> References: <0981F273-3AE8-11D9-A632-000D932C5880@locut.us> <20041120113911.GB16150@ref.nmedia.net> Message-ID: On 20 Nov 2004, at 11:39, Paul Campbell wrote: > On Sat, Nov 20, 2004 at 11:33:45AM +0000, Ian Clarke wrote: >> Once every few packets (I use 64) the sender sends a check message >> which contains a bit array indicating which packets it has sent. The >> receiver, on getting this, checks to see that the transmitted blocks >> correspond to what it has received, and if not, sends a message to the >> sender indicating which blocks are missing. The sender marks these as >> untransmitted and thus resends them at the next interval. On >> completion the sender sends a check message and awaits either a >> completed message from the receiver, or another retransmission >> request. >> These are resent periodically if sender or receiver don't hear from >> each-other to account for the possibility that they are dropped. > > Just reading through this, it seems that you could save a packet (and > an > RTT) if you just had the receiver mark it's own bit array and send that > back. Obviously the receiver has to be able to identify the packet > ordering > somehow anyways. So if the receiver sends a bit array simultaneously > marking what it has received on a periodic basis, the transmitter can > monitor without having an extra wait cycle. It simply keeps a bit array > which has 3 states: not sent, waiting for acknowledgement, and > completed. Well, one thing I was not clear on is that the transmitter does not stop transmitting when it sends a check packet, so there isn't really a wait cycle as such (if I understand what you are saying). The reason I do it this way is that if the receiver sent a list of packets it has received to the transmitter, packets sent between when the receiver sends the check packet, and the transmitter receives it, will be inappropriately marked as untransmitted by the transmitter, and then retransmitted. Your scheme does eliminate the need for a small packet, but may cause the unnecessary retransmission of the much larger data-carrying packets, so this would be a false economy. > Also, if you want to get more complicated, you could consider the > IDA (information dispersal algorithm) which lets you incrementally > (and by slight adjustment) send a mild amount of additional redundancy > to > overcome packet dropping by routers. 
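For reference, a compressed sketch of the sender side of the scheme as Ian describes it: transmit the lowest untransmitted packet on each tick, piggyback a check message listing what has been sent every 64 packets, and re-mark as untransmitted whatever the receiver reports missing. Framing and names are illustrative; the real code lives in the Dijjer repository linked above.

    class BlockSender:
        # Sender side of the check-message scheme (e.g. 256 x 1 KB packets per block).
        CHECK_EVERY = 64

        def __init__(self, packets):
            self.packets = packets                 # list of packet payloads
            self.sent = [False] * len(packets)
            self.since_check = 0

        def next_packet(self):
            # Called once per send interval (the interval enforces the bandwidth limit).
            for i, already_sent in enumerate(self.sent):
                if not already_sent:
                    self.sent[i] = True
                    self.since_check += 1
                    return ("data", i, self.packets[i])
            return ("check", list(self.sent))      # all sent: await 'completed' or a resend request

        def check_due(self):
            if self.since_check >= self.CHECK_EVERY:
                self.since_check = 0
                return ("check", list(self.sent))  # bit array of what has been sent so far
            return None

        def handle_missing(self, missing_indices):
            # The receiver compared our check bit array with what it actually got.
            for i in missing_indices:
                self.sent[i] = False               # picked up again by next_packet()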
You don't have to use the IDA > scheme > either. The recent LT, Raptor, and Online Codes provide similar > functions > to the IDA scheme. With all of these code systems, the idea is that it > changes the receiver's point of view from "I need to receive packets > 1-1000" > to "I need to receive at least 1000 packets". The response to packet > loss > (aside from flow control) is to increase the amount of redundancy in > order > to overcome further losses. I have actually used that approach in the past (using forward error correction). One issue is that, while is certainly reduces the need for check packets, it often lead to over-transmission of the data packets between when the receiver got the 1000th packet and the sender received the receivers notification that it was done. I was unconvinced that it was worth the additional complexity which is why I went for this simpler scheme on this occasion. Finally, a question for the list: What, if any, simplifications can be achieved given that, in the vast majority of cases, we know that the bottleneck will be the last-mile upstream bandwidth? I can't think of anything, but perhaps someone else can. Ian. -- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ From ian at locut.us Sat Nov 20 12:22:55 2004 From: ian at locut.us (Ian Clarke) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: References: Message-ID: On 18 Nov 2004, at 16:49, Greg Bildson wrote: > I've certainly heard about the 50ms limit (on linux no?) but never > seen it > have a major effect. Then again, our transfers while fast could be > faster > so that could be one of the limiting factors. In previous applications (and probably this one when I get around to it) I have tackled this issue by periodically skipping the sleep() such that the average transmission frequency is what i want it to be. Assuming that there is 1k per packet, if you don't do that, you are limited to about 20k per second, which is just about enough for most broadband connections. I can't recall which platforms are affected by this limitation, I think there is stuff in Java 1.5 that is more precise, but I am sticking with more established APIs for compatibility with open source JREs. Ian. -- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ From paul at ref.nmedia.net Sat Nov 20 23:44:41 2004 From: paul at ref.nmedia.net (Paul Campbell) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: References: <0981F273-3AE8-11D9-A632-000D932C5880@locut.us> <20041120113911.GB16150@ref.nmedia.net> Message-ID: <20041120234441.GA19705@ref.nmedia.net> On Sat, Nov 20, 2004 at 12:15:07PM +0000, Ian Clarke wrote: > Well, one thing I was not clear on is that the transmitter does not > stop transmitting when it sends a check packet, so there isn't really a > wait cycle as such (if I understand what you are saying). The reason I > do it this way is that if the receiver sent a list of packets it has > received to the transmitter, packets sent between when the receiver > sends the check packet, and the transmitter receives it, will be > inappropriately marked as untransmitted by the transmitter, and then > retransmitted. 
Your scheme does eliminate the need for a small packet, > but may cause the unnecessary retransmission of the much larger > data-carrying packets, so this would be a false economy. That can be cured too. Conceptually, the packets are organized in a circular linked list. At the beginning, the transmitter initializes a bit array of "sent" packets to all zeroes. The transmitter begins sending packets starting with packet #0, in order, all the way through the list. At the end, the transmitter wraps around the circle back to packet #0, Any packet with a corresponding "0" bit in the "sent" array is sent out. After that, the "sent" array is marked with a "1". The transmitter stops when it gets an "all ones" packet from the receiver or when it has an all-one's bit array (actually, it pauses and waits for a receiver response). Periodically, the receiver sends an updated "received" array. The transmitter first does a logical XOR with that array and the "sent" array. This leaves only bits set for packets which have been sent but not yet received. Now the transmitter counts backwards on this "lost" array from the pointer to the last sent packet, resetting every bit along the way, stopping at the first zero, and wrapping around the circular list. Then the transmitter XOR's this new result and stores it as the "sent" array. What happens this time is that all of the packets that were lost and just not yet received are reset. The one pathoogical exception to this situation is if the very last packet sent out before those packets that are pending was lost. But then you'd have a hard time detecting this versus a variety of other network problems anyways. Thus, the transmitter can accurately detect "sent but not received" packets and sends only those packets which have not yet been received and are not in flight. As I mentioned, there's the one pathological case left but that does require a "marker" from the transmitter (your transmitter summary/request packet is a marker). Or you can accept the slightly lower efficiency of a single pathological case that doesn't break the protocol (the packet just gets delayed until another receiver update cycle). From gbildson at limepeer.com Sun Nov 21 04:35:44 2004 From: gbildson at limepeer.com (gbildson@limepeer.com) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: References: Message-ID: <1101011744.41a01b201f730@cyrus.limewire.com> Hmmm. Well we have certainly achieved transfer rates of 70Kbytes/sec with data packet sizes of 500 bytes. However, you shouldn't use a constant delay between sends and in fact I send multiple packets consecutively with no delay. You need to detect how many you can send consecutively without > ~3% packet loss (particularly when you're sending fixed 500 byte data blocks). i.e. You need to vary both the delay and the consecutive sends before a delay for maximum throughput. The traffic is delivered in bursts. Thanks -greg Quoting Ian Clarke : > On 18 Nov 2004, at 16:49, Greg Bildson wrote: > > I've certainly heard about the 50ms limit (on linux no?) but never > > seen it > > have a major effect. Then again, our transfers while fast could be > > faster > > so that could be one of the limiting factors. > > In previous applications (and probably this one when I get around to > it) I have tackled this issue by periodically skipping the sleep() such > that the average transmission frequency is what i want it to be. 
> > Assuming that there is 1k per packet, if you don't do that, you are > limited to about 20k per second, which is just about enough for most > broadband connections. > > I can't recall which platforms are affected by this limitation, I think > there is stuff in Java 1.5 that is more precise, but I am sticking with > more established APIs for compatibility with open source JREs. > > Ian. > > -- > Founder, The Freenet Project http://freenetproject.org/ > CEO, Cematics Ltd http://cematics.com/ > Personal Blog http://locut.us/~ian/blog/ > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > From baford at mit.edu Sun Nov 21 18:29:49 2004 From: baford at mit.edu (Bryan Ford) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer Message-ID: <200411211329.49389.baford@mit.edu> Ian Clarke wrote: >On 19 Nov 2004, at 18:24, Paul Campbell wrote: >> In your case, you are sending a substantial amount of data. There are >> only >> three reasons that I can think you want to use UDP for this: > >4. Because I want to be able to establish direct connections behind two >peers both of which are behind NATs. > >This is possible with TCP, but it requires some very low-level TCP >stack mangling, it is comparatively easy with UDP. If by "low-level TCP stack mangling" you mean doing something that requires changing the kernel or otherwise doing stuff that typically requires special privileges, that's not entirely true. Basic "TCP hole punching" can be done in a fashion almost entirely identical to the way it's done in UDP, without the application requiring any TCP stack changes or other special privileges. The only downside is that, unsurprisingly, somewhat fewer existing NATs are already "TCP hole punching friendly" than are "UDP hole punching friendly" - about 60% versus 75% in some preliminary measurement results that I've included in the following draft paper on the topic: http://www.brynosaurus.com/pub/os/nat.pdf The difference between 60% and 75% is by no means insignificant, of course, but I think it's arguable whether the difference is really qualitative enough to justify the use of UDP for things that TCP would otherwise be good for: either way, hole punching works with "a lot of, but not all, existing NATs." Either way, the application faces a choice about what to do with misbehaved NATs: just not work at all, or forward traffic through well-known servers, or attempt more sophisticated and delicate tricks such as port number prediction or "low-level TCP stack mangling." In a subsequent message, Bryan Turner wrote: > Actually there's quite a few useful properties of UDP over TCP for >Peer-to-Peer projects. Here are my favorite: > >- Firewalls (as Ian suggested) See above. >- UDP can accept any number of 'connections' over a single port. TCP can do this too, using the SO_REUSEADDR (and SO_REUSEPORT on BSD) socket option that every mature TCP implementation supports. In fact, port reuse is fundamental to the basic, straightforward TCP hole punching algorithm I described in the above paper. >- Streaming, Datagram, and out-of-band traffic can be combined in the same >protocol (ala RTSP). Such things are also already routinely done over TCP, as Paul Campbell pointed out in another message. 
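To illustrate the port-reuse point above: the same local TCP port can back both a listening socket and an outbound connect, which is the basic building block of the hole punching described in the paper. A minimal sketch; the addresses are placeholders, whether the second bind is accepted varies by operating system, and real code has to race the outbound connect against an incoming accept and fall back gracefully:

    import socket

    LOCAL = ("0.0.0.0", 4000)             # placeholder local endpoint
    PEER_PUBLIC = ("203.0.113.7", 4000)   # peer's NAT-mapped endpoint, learned via a rendezvous

    def reusable_socket():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        if hasattr(socket, "SO_REUSEPORT"):        # BSD-style systems
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
        s.bind(LOCAL)
        return s

    listener = reusable_socket()
    listener.listen(1)                    # accept the peer's inbound attempt if it gets through

    dialer = reusable_socket()            # simultaneously dial out from the *same* local port
    dialer.settimeout(5)
    try:
        dialer.connect(PEER_PUBLIC)       # the outbound SYN also primes our own NAT mapping
    except OSError:
        pass                              # expected to fail on many NATs; retry or fall back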
I have no argument with the various other reasons that were pointed out for why UDP might be more appropriate than TCP; I just wanted to point out that a few of the reasons commonly perceived as "most important" may not be so important after all. Cheers, Bryan From neumann at lostwebsite.net Sun Nov 21 20:39:36 2004 From: neumann at lostwebsite.net (neumann@lostwebsite.net) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Questions on Plaxton Mesh Message-ID: <200411211539.36463.neumann@lostwebsite.net> Hi all, I'm writing a report about how P2P is evolving, using a set of P2P networks. Well, long story short, I need to describe the concept of the Plaxton mesh and how it's used in the Tapestry P2P network. I'm stumbling on one thing: the difference between primary neighbors and secondary neighbors. I've got the Plaxton paper right in front of me, but all that mathematical notation is giving me headaches! I would be happy if someone could explain to me, in a few words, how the two are different. I need something to check that my vague idea of it is not incorrect. I don't need to go into details. Thank you François-Denis Gonthier From ian at locut.us Sun Nov 21 21:41:13 2004 From: ian at locut.us (Ian Clarke) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <200411211329.49389.baford@mit.edu> References: <200411211329.49389.baford@mit.edu> Message-ID: <10E02909-3C06-11D9-A632-000D932C5880@locut.us> On 21 Nov 2004, at 18:29, Bryan Ford wrote: > Ian Clarke wrote: >> On 19 Nov 2004, at 18:24, Paul Campbell wrote: >>> In your case, you are sending a substantial amount of data. There are >>> only >>> three reasons that I can think you want to use UDP for this: >> >> 4. Because I want to be able to establish direct connections behind >> two >> peers both of which are behind NATs. >> >> This is possible with TCP, but it requires some very low-level TCP >> stack mangling, it is comparatively easy with UDP. > > If by "low-level TCP stack mangling" you mean doing something that > requires > changing the kernel or otherwise doing stuff that typically requires > special > privileges, that's not entirely true. Basic "TCP hole punching" can > be done > in a fashion almost entirely identical to the way it's done in UDP, > without > the application requiring any TCP stack changes or other special > privileges. To the best of my understanding it cannot be achieved from Java, which is my choice of implementation language, and which therefore rules it out for me. All the best, Ian. -- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ From em at em.no-ip.com Mon Nov 22 03:22:19 2004 From: em at em.no-ip.com (Enzo Michelangeli) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer References: <200411211329.49389.baford@mit.edu> Message-ID: <002f01c4d042$81b8b000$0200a8c0@em.noip.com> ----- Original Message ----- From: "Bryan Ford" To: Sent: Monday, November 22, 2004 2:29 AM Subject: Re: [p2p-hackers] Simple reliable UDP data transfer [...] > http://www.brynosaurus.com/pub/os/nat.pdf > > The difference between 60% and 75% is by no means insignificant, > of course, but I think it's arguable whether the difference is > really qualitative enough to justify the use of UDP for things > that TCP would otherwise be good for: either way, hole punching > works with "a lot of, but not all, existing NATs."
> Either way, the application faces a choice about what to do > with misbehaved NATs: just not work at all, or forward traffic > through well-known servers, or attempt more sophisticated and > delicate tricks such as port number prediction or "low-level > TCP stack mangling." I favour "using as proxy/reflector other peers advertising their non-NATted status and the amount of bandwidth they are willing to donate for that purpose" (as it's done, to my understanding, by Skype). This is a P2P variant of forwarding traffic through well-known servers. Enzo From paul at ref.nmedia.net Mon Nov 22 04:53:45 2004 From: paul at ref.nmedia.net (Paul Campbell) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <200411211329.49389.baford@mit.edu> References: <200411211329.49389.baford@mit.edu> Message-ID: <20041122045345.GA9372@ref.nmedia.net> As I understand it when it comes to "NAT busting", there are a variety of techniques. And the official term that I've seen used is "NAT penetration". First off, there are the easy ways out: 1. Use a proxy. The Circle (http://thecircle.org.au) does it this way. It contains a short shell script internally. If it can login to a remote (non-NAT'd) Unix box, it sends the shell script to act as a remote proxy. Simply, but blunt. Skype does the same thing. It simply uses non-NAT'd nodes as proxies (often without informing the user). I've seen similar code in other P2P software. The downside: higher load (on the proxies). I kind of like thecircle's technique (get your own proxy!) but it's not an "idiot proof" one. 2. Use any NAT-specific functions. These include: SOCKS UPnP Manually configure port forwarding for specific ports. 3. Figure out how the NAT translates addresses and then use that information. This is documented in RFC 3027. For instance, some NAT's use "best effort"; they try to use the same port number that was used internally. In this case, through the use of proxies to negotiate a communication, both NAT'd hosts initiate contact with each other directly by sending packets blindly to the appropriate port, and waiting for the port to open once both NAT tables are "primed" (the NAT assumes since traffic left for that address/port, then there is probably some returning as well). A slightly more complicated case is when NAT's simply increment a counter and use that as the port. In this case, first the ports have to be "aligned". Through a proxy, both NAT's have to discover the nearest identical port. The very next port that gets opened on one NAT will be the one desired. So the one that has to "catch up" uses "doomed packets"; packets with a time-to-live timer (TTL) set to "4", which means that they get pitched before arriving at the other end. This allows the internal computer to use the full LAN bandwidth to flood the NAT with packets until it aligns the port numbers. Then communication can be initiated as usual. There are also some suggestions to "bust the NAT" by overflowing it's translation buffers. However, these last techniques are getting to the point where the question is just how often this stuff comes up. Also, newer NAT's are starting to support the "best effort translation" concept mostly because those darned end users are demanding it so that their P2P software works correctly. 
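A minimal sketch of the "priming" step described above, assuming neither NAT translates ports: each side learns the other's public (address, port) through the proxy, then both blast UDP packets at that endpoint until something arrives, at which point each NAT treats the peer's traffic as a reply to an internally initiated conversation. Endpoints and retry counts are placeholders:

    import socket

    LOCAL_PORT = 40000                     # also the port the proxy saw us use
    PEER = ("198.51.100.23", 40000)        # peer's public endpoint, learned via the proxy

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", LOCAL_PORT))
    sock.settimeout(0.5)

    punched = False
    for _ in range(20):                    # both sides run this at roughly the same time
        sock.sendto(b"punch", PEER)        # outgoing packet primes our own NAT's table entry
        try:
            data, addr = sock.recvfrom(1500)
            if addr == PEER:
                punched = True             # the peer's packets are now passing our NAT
                break
        except socket.timeout:
            continue                       # keep hammering until the hole opens or we give up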
From ian at locut.us Mon Nov 22 11:28:50 2004 From: ian at locut.us (Ian Clarke) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <20041122045345.GA9372@ref.nmedia.net> References: <200411211329.49389.baford@mit.edu> <20041122045345.GA9372@ref.nmedia.net> Message-ID: On 22 Nov 2004, at 04:53, Paul Campbell wrote: > As I understand it when it comes to "NAT busting", there are a variety > of > techniques. And the official term that I've seen used is "NAT > penetration". I have discovered that to some non-techies, terms such as "NAT busting" and "NAT penetration" raise concerns that this technique might somehow reduce the effectiveness of NATs from a security point of view. It can be tricky to articulate why this is not the case :-/ > Skype does the same thing. It simply uses non-NAT'd nodes as proxies > (often > without informing the user). I've seen similar code in other P2P > software. IIRC Skype also does UDP NAT penetration, but will relay through non-NAT'd nodes if this fails. Ian. -- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ From bert at web2peer.com Mon Nov 22 16:15:06 2004 From: bert at web2peer.com (bert@web2peer.com) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer Message-ID: <20041122161506.A68712F8E6@ws6-3.us4.outblaze.com> ----- Original Message ----- From: "Enzo Michelangeli" To: "Peer-to-peer development." Subject: Re: [p2p-hackers] Simple reliable UDP data transfer Date: Mon, 22 Nov 2004 11:22:19 +0800 > I favour "using as proxy/reflector other peers advertising their > non-NATted status and the amount of bandwidth they are willing to > donate for that purpose" (as it's done, to my understanding, by Skype). > This is a P2P variant of forwarding traffic through well-known servers. But this should be a last resort. If you can easily NAT-bust then you remove one potential bottleneck and point of failure. Relaying requires both up- and downstreaming from the relay point, so your throughput is capped by the (typically slow) upstream rate of the relay node. That said, when you're trying to expose protocols that weren't designed with NAT-busting in mind (e.g. HTTP) then this becomes the only option. Bert P.S.: If you visit http://bayardo.youserv.net/ you'll be visiting a site which typically resides either behind my home NAT or behind my company firewall, and is relayed through a home DSL connection. (I'm currently experimenting with the feasibility of HTTP relaying over typical internet connections, so you are encouraged to click that link and help test it out.) From greg at electricrain.com Mon Nov 22 18:30:59 2004 From: greg at electricrain.com (Gregory P. Smith) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <419EC36A.1090203@peertech.org> References: <64D04EB5-397B-11D9-A632-000D932C5880@locut.us> <419E207A.90709@sun.com> <20041119182403.GB30409@ref.nmedia.net> <602F122E-3A69-11D9-A632-000D932C5880@locut.us> <419EC36A.1090203@peertech.org> Message-ID: <20041122183059.GD31980@zot.electricrain.com> On Fri, Nov 19, 2004 at 08:09:14PM -0800, coderman wrote: > Ian Clarke wrote: > > >4. Because I want to be able to establish direct connections behind > >two peers both of which are behind NATs. > > > >This is possible with TCP, but it requires some very low-level TCP > >stack mangling, it is comparatively easy with UDP.
> > UPnP is worth trying first before resorting to more complicated methods. UPnP is a security nightmare that should be turned off by default on all equipment (not that it is) due to the number of buggy implementations out there (by design; its way too complex). UPnP will never be found on large nat networks either, only tiny home nets. support it when it exists? fine. but don't rely on or promote upnp. From wesley at felter.org Mon Nov 22 20:41:57 2004 From: wesley at felter.org (Wes Felter) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <20041122183059.GD31980@zot.electricrain.com> References: <64D04EB5-397B-11D9-A632-000D932C5880@locut.us> <419E207A.90709@sun.com> <20041119182403.GB30409@ref.nmedia.net> <602F122E-3A69-11D9-A632-000D932C5880@locut.us> <419EC36A.1090203@peertech.org> <20041122183059.GD31980@zot.electricrain.com> Message-ID: <41A24F15.20803@felter.org> Gregory P. Smith wrote: > UPnP is a security nightmare that should be turned off by default on > all equipment (not that it is) due to the number of buggy > implementations out there (by design; its way too complex). Correct UPnP implementations (should any exist) should be disabled by default because other implementations are buggy? What are these bugs, anyway? I'd prefer to see UPnP on by default to hammer home the point that NATs are not firewalls. -- Wes Felter - wesley@felter.org - http://felter.org/wesley/ From baford at mit.edu Mon Nov 22 20:52:48 2004 From: baford at mit.edu (Bryan Ford) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer Message-ID: <200411221552.48825.baford@mit.edu> Ian Clarke wrote: >On 22 Nov 2004, at 04:53, Paul Campbell wrote: >> As I understand it when it comes to "NAT busting", there are a variety >> of >> techniques. And the official term that I've seen used is "NAT >> penetration". > >I have discovered that to some non-techies, terms such as "NAT busting" >and "NAT penetration" raise concerns that this technique might somehow >reduce the effectiveness of NATs from a security point of view. It can >be tricky to articulate why this is not the case :-/ For precisely this reason I prefer the more friendly-sounding term "NAT traversal", which is also technically more accurate. We're not "busting through" or "penetrating" a NAT's security barrier from the outside as an attacker, but rather working from the inside to set up completely legitimate communication paths that cross the NAT but do not (or at least should not) compromise the NAT's security in any way. Bryan From deepextacy at hotmail.com Tue Nov 23 13:43:58 2004 From: deepextacy at hotmail.com (JOSH GARDEN) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] stop... Message-ID: hello, I would like to ask you to please take me of your mail list as i would not like to recievce them anymore. Thanks josh. From alexsurf7 at yahoo.com Tue Nov 23 16:22:26 2004 From: alexsurf7 at yahoo.com (Alex) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] take me off list Message-ID: <20041123162226.28574.qmail@web41908.mail.yahoo.com> hello, I would like to ask you to please take me of your mail list as i would not like to recievce them anymore. Thanks --------------------------------- Do you Yahoo!? The all-new My Yahoo! – Get yours free! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://zgp.org/pipermail/p2p-hackers/attachments/20041123/ff5b1b2a/attachment.html From samnospam at bcgreen.com Wed Nov 24 04:17:21 2004 From: samnospam at bcgreen.com (Stephen Samuel (leave the email alone)) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer Message-ID: <41A40B50.6040501@bcgreen.com> > Right now the upstream bandwidth limit is fixed, but if I decide to > stick with this algorithm I will adjust it based on round-trip time. I don't understand why people link RTT with bandwidth... I don't think that that's a very good measure. As an example: A local user with a 56K modem might have an RTT of ~50ms and an available bandwidth of 30Kbps A transatlantic user with a high speed link may have an RTT of 200ms and available bandwidth of 1Megabit. Very little relationship between the two numbers. The difference that RTT makes is how big of a window you need to buffer to be able to maintain both bandwidth and retransmissability. (( worst case example would be an earth-mars link with the planned laser-based transmission hardware. This should give you bandwidth of >1megabit with an RTT of up to 1 hour.)) -- Stephen Samuel +1(604)876-0426 samnospam@bcgreen.com http://www.bcgreen.com/ Powerful committed communication. Transformation touching the jewel within each person and bringing it to light. From ian at locut.us Wed Nov 24 07:58:11 2004 From: ian at locut.us (Ian Clarke) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <41A40B50.6040501@bcgreen.com> References: <41A40B50.6040501@bcgreen.com> Message-ID: <96224E6C-3DEE-11D9-A632-000D932C5880@locut.us> On 24 Nov 2004, at 04:17, Stephen Samuel (leave the email alone) wrote: >> Right now the upstream bandwidth limit is fixed, but if I decide to >> stick with this algorithm I will adjust it based on round-trip time. > > I don't understand why people link RTT with bandwidth... I think the reason is that an increasing RTT indicates congestion along the path, since congestion on a link will cause buffering of packets before they start to be dropped. It is therefore a good way to anticipate when packets will start to be dropped. Ian. -- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ From dbarrett at quinthar.com Wed Nov 24 08:59:53 2004 From: dbarrett at quinthar.com (David Barrett) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Command line Win32 socket enumerator / killer Message-ID: <20041124085959.0BF2C3FD73@capsicum.zgp.org> Can anyone recommend a command-line Win32 application that both enumerates and kills sockets? Basically, I like the functionality offered by the GUI TCPView from Sysinternals. It shows all active sockets on the system, and allows you to right-click on any connection and kill it manually. This is great for testing the resilience of a networked application to flaky connections. Unfortunately, the command-line version of TCPView only supports socket enumeration -- it has no option to kill established sockets. I'm looking for a command-line option so I can automate testing of a networked application using standard scripting techniques. I see Sysinternals has graciously supplied some source-code that I'm sure I could paw through to make this functionality myself, but I'd really like to see if there's anything off the shelf I could use first. Any suggestions? 
-david From lyndon.samson at gmail.com Wed Nov 24 12:28:48 2004 From: lyndon.samson at gmail.com (Lyndon Samson) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Re: Simple reliable UDP data transfer In-Reply-To: References: Message-ID: You may find this interesting http://www.ietf.org/rfc/rfc3208.txt?number=3208 Its specific to multicast, but some of the concepts are applicable to building a reliable unicast over UDP -- Into RFID? www.rfidnewsupdate.com Simple, fast, news. From justin at chapweske.com Wed Nov 24 16:02:11 2004 From: justin at chapweske.com (Justin Chapweske) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Re: Simple reliable UDP data transfer In-Reply-To: References: Message-ID: <1101312132.20537.170.camel@bog> Some better resources are the RFCs put out by the Reliable Multicast Transport group at the IETF. The topic of TCP friendliness has been well-explored by this group: http://www.ietf.org/html.charters/rmt-charter.html Note that you don't need to use FEC in their protocols. You can get away with using a NOP encoding just fine. So you could simply use ALC/ LCT as a standard encapsulation layer. These are the protocols that Swarmcast was originally based off of, and our new WAN Transport XNE product implements the entire FLUTE/ALC/LCT stack, so I've had a very long relationship with these protocols and have found them to be very flexible and well-specified. On Wed, 2004-11-24 at 23:28 +1100, Lyndon Samson wrote: > You may find this interesting > > http://www.ietf.org/rfc/rfc3208.txt?number=3208 > > Its specific to multicast, but some of the concepts are applicable to > building a reliable unicast over UDP -Justin From amit.agr at gmail.com Wed Nov 24 16:18:32 2004 From: amit.agr at gmail.com (Amit Agrawal) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] new to p2p hackers Message-ID: <24e1dca604112408182be4222b@mail.gmail.com> hi I have just joined the p2p hackers mailing list.I know about the basics of peer to peer and how some popular p2p systems work. It would be great if someone can point me to resources on p2p about the prevelant problems and issues to be taken into consideration while designing these systems. What are the various compromisies one has to make etc. Any help would be greatly appreciated. amit -- ( ) 3-| | !-| c | From gbildson at limepeer.com Wed Nov 24 17:30:33 2004 From: gbildson at limepeer.com (Greg Bildson) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Simple reliable UDP data transfer In-Reply-To: <41A40B50.6040501@bcgreen.com> Message-ID: In my extreme hatchet job attempt at congestion control - along with various other measures, I make use of not the RTT itself but the increase in RTT above a lower bound average. This seems to be a useful precursor to some cases of packet loss. Thanks -greg > -----Original Message----- > From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org]On > Behalf Of Stephen Samuel (leave the email alone) > Sent: Tuesday, November 23, 2004 11:17 PM > To: p2p-hackers@zgp.org > Subject: [p2p-hackers] Simple reliable UDP data transfer > > > > Right now the upstream bandwidth limit is fixed, but if I decide to > > stick with this algorithm I will adjust it based on round-trip time. > > I don't understand why people link RTT with bandwidth... I don't think > that that's a very good measure. 
As an example: > A local user with a 56K modem might have an RTT of ~50ms and an > available bandwidth of 30Kbps > A transatlantic user with a high speed link may have an RTT of 200ms > and available bandwidth of 1Megabit. > > Very little relationship between the two numbers. The difference > that RTT makes is how big of a window you need to buffer to be > able to maintain both bandwidth and retransmissability. > > (( worst case example would be an earth-mars link with the planned > laser-based transmission hardware. This should give you bandwidth > of >1megabit with an RTT of up to 1 hour.)) > > -- > Stephen Samuel +1(604)876-0426 samnospam@bcgreen.com > http://www.bcgreen.com/ > Powerful committed communication. Transformation touching > the jewel within each person and bringing it to light. > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From mfreed at cs.nyu.edu Wed Nov 24 23:12:33 2004 From: mfreed at cs.nyu.edu (Michael J. Freedman) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Re: Simple reliable UDP data transfer In-Reply-To: References: Message-ID: You also might wish to check out the Datagram Congestion Control Protocol (DCCP), which basically is UDP + congestion control. http://www.icir.org/kohler/dccp/ The site includes some IETF drafts. Although I guess the initial problem Ian posed was UDP + reliability, so this may not be of interest. Still, if p2p protocols continue to be TCP unfriendly, I think we'll certainly see more pushback from network operators to ensure congestion-controlled protocols. --mike On Wed, 24 Nov 2004, Lyndon Samson wrote: > Date: Wed, 24 Nov 2004 23:28:48 +1100 > From: Lyndon Samson > To: p2p-hackers@zgp.org > Subject: [p2p-hackers] Re: Simple reliable UDP data transfer > > You may find this interesting > > http://www.ietf.org/rfc/rfc3208.txt?number=3208 > > Its specific to multicast, but some of the concepts are applicable to > building a reliable unicast over UDP ----- "Not all those who wander are lost." www.michaelfreedman.org From amit.agr at gmail.com Thu Nov 25 14:17:43 2004 From: amit.agr at gmail.com (Amit Agrawal) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Re: Simple reliable UDP data transfer In-Reply-To: References: Message-ID: <24e1dca60411250617218c43ec@mail.gmail.com> --i am resending it ..think the previous message bounced hi I have just joined the p2p hackers mailing list.I know about the basics of peer to peer and how some popular p2p systems work. It would be great if someone can point me to resources on p2p about the prevelant problems and issues to be taken into consideration while designing these systems. What are the various compromisies one has to make etc. Any help would be greatly appreciated. amit -- ( ) 3-| | !-| c | From ian at locut.us Thu Nov 25 15:18:07 2004 From: ian at locut.us (Ian Clarke) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Dijjer design docs now online Message-ID: <35C5CE98-3EF5-11D9-A632-000D932C5880@locut.us> Hi, as some of you may have noticed, Dijjer, the P2P HTTP cache I am working on received rather more attention than I had intended at this early stage of development (ie. a Slashdot story). 
Anyway, in my efforts to capitalise on this attention, I have been feverishly working on some informal documents describing various aspects of its architecture which are available on this web page: http://dijjer.org/index.php?page=development I think Dijjer may well prove to be a useful platform for trying out a variety of ideas in P2P, and there is plenty of stuff to do, so I am hopeful that some people on this list might be interested in contributing to the project. If you are interested, please feel free to join our Development mailing list and introduce yourself: http://lists.sourceforge.net/lists/listinfo/dijjer-devel Kind regards, Ian. -- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ From krnelson at gmail.com Fri Nov 26 20:03:28 2004 From: krnelson at gmail.com (Keith) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Re: Simple reliable UDP data transfer In-Reply-To: References: Message-ID: The File Service Protocol (FSP) provides reliable UDP data transfer: http://fsp.sourceforge.net/ "Maximum FSP speed is by design lower than maximum speed of TCP based protocols because it has only 1 packet in the network... "Design of FSP protocol makes impossible to send more than 1 packet into network. This is nice method for bandwidth protection." http://cvs.sourceforge.net/viewcvs.py/fsp/fsp/INFO?rev=1.3 -- Keith On Wed, 24 Nov 2004 18:12:33 -0500 (EST), Michael J. Freedman wrote: > You also might wish to check out the Datagram Congestion Control Protocol > (DCCP), which basically is UDP + congestion control. > > http://www.icir.org/kohler/dccp/ > > The site includes some IETF drafts. > > Although I guess the initial problem Ian posed was UDP + reliability, so > this may not be of interest. Still, if p2p protocols continue to be TCP > unfriendly, I think we'll certainly see more pushback from network > operators to ensure congestion-controlled protocols. > > --mike > > On Wed, 24 Nov 2004, Lyndon Samson wrote: > > > Date: Wed, 24 Nov 2004 23:28:48 +1100 > > From: Lyndon Samson > > To: p2p-hackers@zgp.org > > Subject: [p2p-hackers] Re: Simple reliable UDP data transfer > > > > You may find this interesting > > > > http://www.ietf.org/rfc/rfc3208.txt?number=3208 > > > > Its specific to multicast, but some of the concepts are applicable to > > building a reliable unicast over UDP > > ----- > "Not all those who wander are lost." www.michaelfreedman.org > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > From dbarrett at quinthar.com Sat Nov 27 01:45:25 2004 From: dbarrett at quinthar.com (David Barrett) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Why UDP and not TCP? In-Reply-To: Message-ID: <20041127014535.9512F3FD08@capsicum.zgp.org> We've had a long-ranging discussion on how to overcome UDP's inherently unreliable nature, but I'm confused: what overwhelming benefits do you see to UDP that can't be found in TCP? 
Elsewhere, I've heard the general arguments: 1) UDP is faster (ie, lower latency) 2) UDP is more efficient (ie, lower bandwidth) 3) UDP is easier (ie, no TCP shutdown issues) 4) UDP is more scalable (ie, no inbound connection limits) However, it seems these arguments are only really true if in the application: (from http://www.atlasindia.com/multicast.htm) - Messages require no acknowledgement - Messages between hosts are sporadic or irregular - Reliability is implemented at the process level. Reliable file transfer (the impetus for our discussion, I think) doesn't seem to be a good match for the above criteria. Indeed, it would seem to me that in this situation: 1) Latency is less important than throughput 2) TCP/UDP are similarly efficient because the payload will likely dwarf any packet overhead 3) A custom reliability layer in software is harder than a standardized, worldwide, off-the-shelf reliability layer implemented in hardware 4) The user will run out of bandwidth faster than simultaneous TCP inbound connections. At least, that's what my view tells me. What am I missing? Is there another angle to the UDP/TCP protocol selection that I'm not seeing? I've seen mention of congestion -- does UDP somehow help resolve this? Alternatively, do you find yourself forced to use UDP against your will? I really don't want to start a religious war, but I would like to know what holes exist in my reasoning above. Thanks! -david From travis at redswoosh.net Sat Nov 27 02:14:16 2004 From: travis at redswoosh.net (Travis Kalanick) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Why UDP and not TCP? In-Reply-To: <20041127014535.9512F3FD08@capsicum.zgp.org> Message-ID: <200411270214.iAR2EiaL001452@be9.noc0.redswoosh.com> David, The main reason P2P is moving toward reliable-flow-controlled-UDP is that UDP allows for widely available straight forward techniques to route around NATs in NAT-to-NAT file delivery scenarios. I believe this was covered in the thread, but it may be such common knowledge by now that we only refer to it implicitly. Mangling TCP to implement similar traversal techniques is a substantially more difficult task. Though not impossible at all, it's a tricky bit of hacking you'll need to do to make it work. Travis -----Original Message----- From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org] On Behalf Of David Barrett Sent: Friday, November 26, 2004 5:45 PM To: P2P Hackers Subject: [p2p-hackers] Why UDP and not TCP? We've had a long-ranging discussion on how to overcome UDP's inherently unreliable nature, but I'm confused: what overwhelming benefits do you see to UDP that can't be found in TCP? Elsewhere, I've heard the general arguments: 1) UDP is faster (ie, lower latency) 2) UDP is more efficient (ie, lower bandwidth) 3) UDP is easier (ie, no TCP shutdown issues) 4) UDP is more scalable (ie, no inbound connection limits) However, it seems these arguments are only really true if in the application: (from http://www.atlasindia.com/multicast.htm) - Messages require no acknowledgement - Messages between hosts are sporadic or irregular - Reliability is implemented at the process level. Reliable file transfer (the impetus for our discussion, I think) doesn't seem to be a good match for the above criteria. 
Indeed, it would seem to me that in this situation: 1) Latency is less important than throughput 2) TCP/UDP are similarly efficient because the payload will likely dwarf any packet overhead 3) A custom reliability layer in software is harder than a standardized, worldwide, off-the-shelf reliability layer implemented in hardware 4) The user will run out of bandwidth faster than simultaneous TCP inbound connections. At least, that's what my view tells me. What am I missing? Is there another angle to the UDP/TCP protocol selection that I'm not seeing? I've seen mention of congestion -- does UDP somehow help resolve this? Alternatively, do you find yourself forced to use UDP against your will? I really don't want to start a religious war, but I would like to know what holes exist in my reasoning above. Thanks! -david _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From bryan.turner at pobox.com Sat Nov 27 02:27:48 2004 From: bryan.turner at pobox.com (Bryan Turner) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Why UDP and not TCP? References: <20041127014535.9512F3FD08@capsicum.zgp.org> Message-ID: <005f01c4d428$b04ca590$6601a8c0@aspen> David, Continuing on the topic of UDP vs. TCP.. > 1) Latency is less important than throughput > 2) TCP/UDP are similarly efficient because the payload will likely dwarf any > packet overhead > 3) A custom reliability layer in software is harder than a standardized, > worldwide, off-the-shelf reliability layer implemented in hardware > 4) The user will run out of bandwidth faster than simultaneous TCP inbound > connections. 1) True, for the case we're examining I believe latency is not the issue. 2) True also. Overhead is minimal, and a reliability layer in UDP would be roughly equivalent to TCP overhead. 3) TCP is not implemented in hardware, it is software on every stack that I am aware of. Some amount of 'sniffing' occurs at the hardware level, but this is never to the level of the entire protocol. I've heard whispers of "TCP offloading engines", which are described as TCP-on-a-chip, but these have all turned out to be separate processors for the TCP stack (ie: software running on some other peripheral). Also, this would only exist on high-end routers/servers, a far cry from the desktop PCs most of the P2P community is targeting. None the less, TCP is standard and does run reliably on the current infrastructure - but a software reliability UDP layer would behave no worse than the current TCP stack on 99% of the client machines. 4) Now for the meat of my argument.. TCP as it is used by most software today is greedy and uncooperative to the system. Case in point: I tried to run seeds for 40+ bit torrent files in an attempt to 'give back to the community'. It was impossible to keep *ANY* of the seeds running because bit torrent wanted to run 4+ separate TCP connections per file, totally swamping my NAT box (Linksys router). With a cooperative protocol, there is no limit to the number of simultaneous connections, as they can all be handled off one port. No one builds their TCP applications (or servers) in this manner, and I'd be willing to bet that if you sat down to write a TCP application, you would end up writing one with the same uncooperative behavior - the protocol and APIs lend themselves to this style. 
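To make the "one port, many peers" style concrete, here is a minimal, illustrative sketch in Python of demultiplexing many logical peers over a single UDP socket. The 4-byte session-id framing and the per-session bookkeeping are invented for illustration; they are not taken from BitTorrent or any other application discussed in this thread.

# Minimal sketch: many logical "connections" multiplexed over one UDP socket.
# The wire format (4-byte big-endian session id + payload) and the handler
# behaviour are invented for illustration only.

import socket
import struct

class OnePortNode:
    def __init__(self, bind_addr=("0.0.0.0", 9000)):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(bind_addr)
        # Per-session state, keyed by (remote address, session id).
        self.sessions = {}

    def send(self, peer_addr, session_id, payload):
        # Every outgoing datagram carries its session id, so the remote
        # side can demux it without needing a separate port per peer.
        self.sock.sendto(struct.pack("!I", session_id) + payload, peer_addr)

    def serve_once(self):
        data, peer_addr = self.sock.recvfrom(65535)
        if len(data) < 4:
            return  # ignore malformed datagrams
        session_id = struct.unpack("!I", data[:4])[0]
        key = (peer_addr, session_id)
        state = self.sessions.setdefault(key, {"received": 0})
        state["received"] += len(data) - 4
        # Application logic for this logical connection would go here.
        print(key, "has received", state["received"], "bytes so far")

if __name__ == "__main__":
    node = OnePortNode()
    while True:
        node.serve_once()

Because everything arrives on one local port, a consumer NAT box typically has to track only one UDP mapping for this application (or, on stricter NATs, one per remote endpoint) instead of one table entry per TCP connection.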
I won't argue that UDP is better, but since you already have to think about all of this when writing a protocol over UDP, it becomes almost trivial to add in a little more cooperative spirit. Perhaps someone could write a tutorial on TCP which shows how to enact a community spirit with respect to the computer's resources (or NAT box in this case). $0.02 --Bryan bryan.turner@pobox.com ----- Original Message ----- From: "David Barrett" To: "P2P Hackers" Sent: Friday, November 26, 2004 8:45 PM Subject: [p2p-hackers] Why UDP and not TCP? > We've had a long-ranging discussion on how to overcome UDP's inherently > unreliable nature, but I'm confused: what overwhelming benefits do you see > to UDP that can't be found in TCP? > > Elsewhere, I've heard the general arguments: > > 1) UDP is faster (ie, lower latency) > 2) UDP is more efficient (ie, lower bandwidth) > 3) UDP is easier (ie, no TCP shutdown issues) > 4) UDP is more scalable (ie, no inbound connection limits) > > However, it seems these arguments are only really true if in the > application: (from http://www.atlasindia.com/multicast.htm) > > - Messages require no acknowledgement > - Messages between hosts are sporadic or irregular > - Reliability is implemented at the process level. > > Reliable file transfer (the impetus for our discussion, I think) doesn't > seem to be a good match for the above criteria. Indeed, it would seem to me > that in this situation: > > 1) Latency is less important than throughput > 2) TCP/UDP are similarly efficient because the payload will likely dwarf any > packet overhead > 3) A custom reliability layer in software is harder than a standardized, > worldwide, off-the-shelf reliability layer implemented in hardware > 4) The user will run out of bandwidth faster than simultaneous TCP inbound > connections. > > At least, that's what my view tells me. What am I missing? Is there > another angle to the UDP/TCP protocol selection that I'm not seeing? I've > seen mention of congestion -- does UDP somehow help resolve this? > Alternatively, do you find yourself forced to use UDP against your will? > > I really don't want to start a religious war, but I would like to know what > holes exist in my reasoning above. Thanks! > > -david > From dbarrett at quinthar.com Sat Nov 27 02:33:20 2004 From: dbarrett at quinthar.com (David Barrett) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Why UDP and not TCP? In-Reply-To: <200411270214.iAR2EiaL001452@be9.noc0.redswoosh.com> Message-ID: <20041127023325.C0F5B3FC2C@capsicum.zgp.org> Ah, now I see. I didn't put 1+1 together to figure out why reliable UDP was suddenly so important. Thanks. > -----Original Message----- > From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org] On > Behalf Of Travis Kalanick > Sent: Friday, November 26, 2004 7:14 PM > To: 'Peer-to-peer development.' > Subject: RE: [p2p-hackers] Why UDP and not TCP? > > David, > > The main reason P2P is moving toward reliable-flow-controlled-UDP is that > UDP allows for widely available straight forward techniques to route > around > NATs in NAT-to-NAT file delivery scenarios. > > I believe this was covered in the thread, but it may be such common > knowledge by now that we only refer to it implicitly. > > Mangling TCP to implement similar traversal techniques is a substantially > more difficult task. Though not impossible at all, it's a tricky bit of > hacking you'll need to do to make it work. 
> > Travis > > -----Original Message----- > From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org] On > Behalf Of David Barrett > Sent: Friday, November 26, 2004 5:45 PM > To: P2P Hackers > Subject: [p2p-hackers] Why UDP and not TCP? > > We've had a long-ranging discussion on how to overcome UDP's inherently > unreliable nature, but I'm confused: what overwhelming benefits do you see > to UDP that can't be found in TCP? > > Elsewhere, I've heard the general arguments: > > 1) UDP is faster (ie, lower latency) > 2) UDP is more efficient (ie, lower bandwidth) > 3) UDP is easier (ie, no TCP shutdown issues) > 4) UDP is more scalable (ie, no inbound connection limits) > > However, it seems these arguments are only really true if in the > application: (from http://www.atlasindia.com/multicast.htm) > > - Messages require no acknowledgement > - Messages between hosts are sporadic or irregular > - Reliability is implemented at the process level. > > Reliable file transfer (the impetus for our discussion, I think) doesn't > seem to be a good match for the above criteria. Indeed, it would seem to > me > that in this situation: > > 1) Latency is less important than throughput > 2) TCP/UDP are similarly efficient because the payload will likely dwarf > any > packet overhead > 3) A custom reliability layer in software is harder than a standardized, > worldwide, off-the-shelf reliability layer implemented in hardware > 4) The user will run out of bandwidth faster than simultaneous TCP inbound > connections. > > At least, that's what my view tells me. What am I missing? Is there > another angle to the UDP/TCP protocol selection that I'm not seeing? I've > seen mention of congestion -- does UDP somehow help resolve this? > Alternatively, do you find yourself forced to use UDP against your will? > > I really don't want to start a religious war, but I would like to know > what > holes exist in my reasoning above. Thanks! > > -david > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From dbarrett at quinthar.com Sat Nov 27 03:10:52 2004 From: dbarrett at quinthar.com (David Barrett) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Why UDP and not TCP? In-Reply-To: <005f01c4d428$b04ca590$6601a8c0@aspen> Message-ID: <20041127031059.B0D933FD26@capsicum.zgp.org> > -----Original Message----- > From: Bryan Turner > Subject: Re: [p2p-hackers] Why UDP and not TCP? > 3) TCP is not implemented in hardware, it is software on every stack that > I am aware of. Oops, my mistake. Thanks for the correction. > 4) Now for the meat of my argument.. TCP as it is used by most software > today is greedy and uncooperative to the system. Well, I do agree with you here, at least partially. Yes, most applications aren't written with "conservation of resources" in mind (though the BitTorrent example is particularly surprising, and egregious). But that's hardly TCP's fault, even if TCP does encourage it.
However, the same accusation could be leveled against UDP for encouraging "unnecessarily chatty, poll-ridden, pessimistically-retransmitting protocols" that might be avoided with a stateful, session-based TCP connection. Again, there's no reason a UDP "session" couldn't be as stateful as a TCP one, but the protocol and APIs lend themselves to this style. Regardless, I think we both agree that ultimately it's not the protocol, but the programmer who's to blame for wasteful resource consumption. > And I'd be > willing to bet that if you sat down to write a TCP application, you would > end up writing one with the same uncooperative behavior - the protocol and > APIs lend themselves to this style. Heh, I'll happily take that bet, as I'm writing a "resource-friendly" application right now. The way I get around massive numbers of sockets is to multiplex "virtual connections" over a single socket. Granted, I use a ton of "virtual connections", but they're really cheap to mux/demux. -david From carllos at lia.ufc.br Sat Nov 27 03:56:26 2004 From: carllos at lia.ufc.br (Carlos Eduardo Araujo Vieira) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Fwd: Call for Papers SBRC 2005 Message-ID: <60588.201.9.15.190.1101527786.squirrel@webmail.lia.ufc.br> Please forward this message to mailing lists of interest. Apologies if you receive this message more than once. CALL FOR PAPERS SBRC 2005 ---- Presentation The Brazilian Symposium on Computer Networks (SBRC) is an annual event promoted by the Brazilian Computing Society (SBC) through its Network and Distributed Systems Committee, and by the National Laboratory on Computer Networks (LARC). Over the years, the SBRC has become the most important national scientific event on Computer Networks and Distributed Systems, besides being one of the most prestigious computer science events in Brazil. The importance of the SBRC can be observed through its paper submission rate, the high quality of the submitted works, and the increasing number of attendees. The 23rd SBRC, which will take place in Fortaleza - CE, from May 09 to 13, 2005, is organized by the Computer Science Department of the Federal University of Ceará (UFC) and the State University of Ceará (UECE). The main purpose of the SBRC is to offer a debating and meeting environment for the academic community, and it counts on the participation of business and governmental entities that work in the symposium areas. Currently, the activities of the symposium are spread over five days and include: technical sessions, tutorials, short courses, workshops, panels and discussions, in addition to business expositions. Important Dates 15/12/2004 - Paper submission deadline 14/03/2005 - Notification to authors 28/03/2005 - Camera-ready due Topics Authors are invited to submit original full papers reporting research, experiments, design and development results. Each submitted paper will be reviewed by at least three experts.
The major topics of interest include (but are not limited to): - Addressing & location management - Digital TV - Distributed Algorithms - Fault Tolerance - Grid Computing - Mobile Agents - Middleware - Multimedia distributed systems - Real-time distributed systems - Security - Specification, validation, verification, and implementation of protocols and distributed systems - Web Services - Ad hoc and sensor networks - Active networks and VPNs (Virtual Private Networks) - Delay Tolerant Networking - MPLS - Multicast - Network applications & services - Network management and operation - NGN (Next Generation Networks) - Optical networks - Peer-to-peer (P2P) and overlay networks - Performance, Scalability, and Reliability - Quality of service (QoS) and Service Level Agreement (SLA) - Routing and switching - Traffic engineering, measurement and monitoring - Wireless Communication and Mobility Contacts José Augusto Suruagy Monteiro (suruagy@unifacs.br) or José Neuman de Sousa (neuman@lia.ufc.br) – Program Committee Co-chairs. For more information please visit: http://www.sbrc2005.ufc.br/english From roberto at dellapasqua.com Sun Nov 28 14:25:48 2004 From: roberto at dellapasqua.com (Roberto Della Pasqua) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] I need a super expert P2P In-Reply-To: Message-ID: <41568C9D01BC6EE1@vsmtp4.tin.it> (added by postmaster@virgilio.it) Kind developers, please forgive this possibly off-topic message. I need a strong, high-end coder (or more than one; we can build a team) for a P2P DHT project in the Delphi language. The project has humanitarian objectives. The job is done remotely through a VPN/Terminal server. A high reward is available. If any of you are interested, please write to my email, roberto at dellapasqua dot com, or ICQ 164672275. Resumes showing high-end skills are welcome. Thank you very much. Roberto Della Pasqua From lutianbo at software.ict.ac.cn Mon Nov 29 03:01:44 2004 From: lutianbo at software.ict.ac.cn (Lutianbo) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] residue class Message-ID: <001601c4d5bf$c2684660$9402000a@ictltbo> Hi all, Would you please point me to some material about residue classes in group theory? Thank you! Best regards. Tianbo Lu -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20041129/820c3986/attachment.htm From wesley at felter.org Tue Nov 30 01:58:25 2004 From: wesley at felter.org (Wes Felter) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Why UDP and not TCP? In-Reply-To: <005f01c4d428$b04ca590$6601a8c0@aspen> References: <20041127014535.9512F3FD08@capsicum.zgp.org> <005f01c4d428$b04ca590$6601a8c0@aspen> Message-ID: <523331B0-4273-11D9-AEDE-000393A581BE@felter.org> On Nov 26, 2004, at 8:27 PM, Bryan Turner wrote: > 4) Now for the meat of my argument.. TCP as it is used by most > software > today is greedy and uncooperative to the system. Case in point: I > tried to > run seeds for 40+ bit torrent files in an attempt to 'give back to the > community'. It was impossible to keep *ANY* of the seeds running > because > bit torrent wanted to run 4+ separate TCP connections per file, totally > swamping my NAT box (Linksys router). > > With a cooperative protocol, there is no limit to the number of > simultaneous connections, as they can all be handled off one port. It sounds like you have a cheap NAT.
Whether you're using TCP or UDP, you probably have some kind of "connection" or "session" abstraction, which means somewhere in memory you have state for each connection, and you have a hash table to keep track of it, and you demux the incoming packets by looking up some header fields into this hash table. The overhead is more or less the same either way. Wes Felter - wesley@felter.org - http://felter.org/wesley/ From Digitalgruvmoves at aol.com Tue Nov 30 02:09:52 2004 From: Digitalgruvmoves at aol.com (Digitalgruvmoves@aol.com) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Why UDP and not TCP? Message-ID: What's a good new p2p file-sharing program to download and use? LimeWire just started going nuts. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20041129/dbde68e0/attachment.html From gbildson at limepeer.com Tue Nov 30 21:11:40 2004 From: gbildson at limepeer.com (Greg Bildson) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Why UDP and not TCP? In-Reply-To: Message-ID: If you believe that there are problems with LimeWire, you should submit them to bugs@limewire and they will be looked into promptly. If you have not already, you should also upgrade to version 4.2.3 to get rid of some potential startup issues with old GWebcaches. LimeWire is a "good new" p2p application - check out that firewall-to-firewall transfer in the new version. ;) Thanks -greg -----Original Message----- From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org]On Behalf Of Digitalgruvmoves@aol.com Sent: Monday, November 29, 2004 9:10 PM To: p2p-hackers@zgp.org Subject: Re: [p2p-hackers] Why UDP and not TCP? Whats a good new p2p filesharing download to use? Limeware just started acting nuts. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20041130/95aabe03/attachment.htm From dbarrett at quinthar.com Tue Nov 30 23:28:02 2004 From: dbarrett at quinthar.com (David Barrett) Date: Sat Dec 9 22:12:44 2006 Subject: [p2p-hackers] Why UDP and not TCP? In-Reply-To: Message-ID: <20041130232809.5CBB23FCF7@capsicum.zgp.org> How does the firewall-to-firewall portion of LimeWire work? Does it use un-firewalled clients as relay servers? It doesn't sound like it, but I thought that was the only solution that truly works in all situations. The "features history" page mentions this on the entry for 8.12.2004: "Firewall to Firewall transfers allows two people behind firewalls to connect directly to each other and transfer data. This makes use of UDP, and a third party to coordinate the initial messaging. Normally, firewalled users would only be able to download from other hosts who are not firewalled, which is of course severely limited. With firewall to firewall transfers, firewalled users can now access the full 100% of hosts." This implies that something like the NAT-to-NAT trick works with firewalls also. I'm a little shaky on how UDP works with firewalls: do both clients initiate a conversation with a third party, and does the third party then hand back the IP/port information of the pre-established outbound connections? How does this work if the firewall simply blocks all UDP traffic? However, the website is either out of date or there's more to the story, because the FAQ says: http://www.limewire.com/english/content/faq.shtml#fir1 "Q: What if I'm behind a firewall?
A: LimeWire will work when a user is behind certain types of firewalls, but will not work behind certain other types. If you are behind a firewall, you will not be able to download anything from a user that's also behind a firewall. In general, if you can connect (you will see your "connection status" in the lower left hand corner of the application) using LimeWire, you should be able to download and upload files, but LimeWire will not work if you have either a web-only proxy or a SOCKS proxy." What's the full story? -david _____ From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org] On Behalf Of Greg Bildson Sent: Tuesday, November 30, 2004 2:12 PM To: Peer-to-peer development. Subject: RE: [p2p-hackers] Why UDP and not TCP? If you believe that there are problems with LimeWire, you should submit them to bugs@limewire and they will be looked into promptly. If you have not already, you should also upgrade to version 4.2.3 to get rid of some potential startup issues with old GWebcaches. LimeWire is a "good new" p2p application - check out that firewall-to-firewall transfer in the new version. ;) Thanks -greg -----Original Message----- From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org]On Behalf Of Digitalgruvmoves@aol.com Sent: Monday, November 29, 2004 9:10 PM To: p2p-hackers@zgp.org Subject: Re: [p2p-hackers] Why UDP and not TCP? Whats a good new p2p filesharing download to use? Limeware just started acting nuts. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20041130/567003be/attachment.html
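On the firewall-to-firewall question above: the generic technique behind this kind of feature is usually UDP "hole punching". Both firewalled or NATed peers keep an outbound UDP exchange open with a reachable third party, the third party tells each peer the public IP/port it observed for the other, and both peers then send datagrams directly to those observed endpoints, which most NATs and stateful firewalls will accept because an outbound mapping already exists. Whether LimeWire's implementation works exactly this way is not established in this thread, so the Python sketch below shows only the generic rendezvous step; the rendezvous hostname, port, and message format are invented for illustration.

# Generic UDP hole-punching sketch (illustrative only; this is NOT
# LimeWire's actual protocol).  A public rendezvous host pairs up two
# firewalled peers and tells each one the public (IP, port) it observed
# for the other; the peers then send datagrams directly to each other.

import socket

RENDEZVOUS = ("rendezvous.example.org", 7000)   # hypothetical public host

def rendezvous_server(port=7000):
    """Run on the un-firewalled third party: pair up peers two at a time."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    waiting = None
    while True:
        _, addr = sock.recvfrom(64)          # any datagram means "register me"
        if waiting is None:
            waiting = addr
        else:
            # Tell each peer the public endpoint observed for the other.
            sock.sendto(("%s %d" % waiting).encode(), addr)
            sock.sendto(("%s %d" % addr).encode(), waiting)
            waiting = None

def punch_and_greet():
    """Run on each firewalled peer."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"register", RENDEZVOUS)     # outbound packet opens our mapping
    data, _ = sock.recvfrom(64)              # reply: the peer's public endpoint
    host, port = data.decode().split()
    peer = (host, int(port))
    for _ in range(5):                       # a few tries, in case the first punch is dropped
        sock.sendto(b"hello through the firewall", peer)
    print(sock.recvfrom(1500))               # if both sides punch, this arrives directly

if __name__ == "__main__":
    punch_and_greet()                        # the third party runs rendezvous_server() instead

If a firewall drops all UDP, or a symmetric NAT assigns a different public port for each destination, this simple punch fails, which is consistent with the FAQ's caveat that some configurations still cannot transfer to each other.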