From lemonobrien at yahoo.com Sat Apr 1 21:33:18 2006 From: lemonobrien at yahoo.com (Lemon Obrien) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: Message-ID: <20060401213318.94694.qmail@web53604.mail.yahoo.com> i have files being exchanged/downloaded using udp (last resort if tcp fails); now, with udp you know you'll need to do error correction if a packet is missing; i do this by sending a 're-send' message; but, as some of you know...you can not flood the network with thousands of resends...especially if you relay messages to the destination node...so, my question is; what is a good average mean time to keep sending 're-sends'...i have it working with 500 milliseconds and 1000...i want fast? Also, if no data is received, how long should i wait till i determine the connection is no longer valid? thanks lemon You don't get no juice unless you squeeze Lemon Obrien, the Third. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20060401/6fdca00d/attachment.html From agthorr at cs.uoregon.edu Sat Apr 1 22:03:48 2006 From: agthorr at cs.uoregon.edu (Daniel Stutzbach) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <20060401213318.94694.qmail@web53604.mail.yahoo.com> References: <20060401213318.94694.qmail@web53604.mail.yahoo.com> Message-ID: <20060401220348.GA3384@cs.uoregon.edu> On Sat, Apr 01, 2006 at 01:33:18PM -0800, Lemon Obrien wrote: > udp you know you'll need to do error correction if a packet is > missing; i do this by sending a 're-send' message; but, as some of > you know...you can not flod the network with thousands of > resends...especially if you relay messages to the destination > node...so, my question is; what is a good average mean time to keep > sending 're-sends'...i have it working with 500 > milli-seconds and > 1000...i want fast? 
This is a complex topic and TCP does a lot behind the scenes to get it right. You need a dynamic mechanism to balance: - Sending packets as quickly as possible - But not so fast that they cause congestion and heavy packet loss Rather than starting from scratch, I recommend studying the way TCP handles this problem. Here are some good references: Van Jacobson, "Congestion Avoidance and Control", SIGCOMM, 1988 http://ee.lbl.gov/papers/congavoid.pdf ^-- Van Jacobson encountered the problem you describe in the original implementation of BSD's TCP and fixed it. RFC 3782: The NewReno Modification to TCP's Fast Recovery Algorithm http://www.faqs.org/rfcs/rfc3782.html ^-- Other researchers made a small tweak that gives a significant performance boost W. Richard Stevens, TCP/IP Illustrated, Volumes 1 and 2. ^-- Volume 2 walks through the BSD TCP source code and explains how it all works. The SACK extension to TCP may also be useful (especially if your application does not need in-order delivery), but it's best to get everything else working first before you worry too much about that. -- Daniel Stutzbach Computer Science Ph.D. Student http://www.barsoom.org/~agthorr University of Oregon From bob.harris.spamcontrol at gmail.com Sat Apr 1 22:55:35 2006 From: bob.harris.spamcontrol at gmail.com (Bob Harris) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <20060401213318.94694.qmail@web53604.mail.yahoo.com> References: <20060401213318.94694.qmail@web53604.mail.yahoo.com> Message-ID: You want to look into network coding. Coding transforms data packets into encoded packets, from which, if you have enough, you can recover the original data packets. So there is no notion of resends with coding. You simply encode and blast your file. When the receiver has received N+epsilon, it'll be able to decipher all N. With coding, there is no need for a back channel from the receiver-to-sender for resends. 
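[Archive editor's note: the encode-and-blast idea above can be sketched with a toy random linear code over GF(2) — each coded packet is the XOR of a random subset of the N source blocks, and the receiver Gaussian-eliminates once roughly N+epsilon packets have arrived. Real systems use proper fountain codes such as LT or Raptor; everything below, including the function names, is an illustrative sketch, not anyone's actual protocol.]

```python
import random

def encode(blocks, seed):
    """Emit one coded packet: the XOR of a random nonempty subset of the
    source blocks, tagged with a coefficient bitmask naming that subset."""
    rng = random.Random(seed)
    mask = 0
    while mask == 0:
        mask = rng.getrandbits(len(blocks))
    out = bytearray(len(blocks[0]))
    for i, block in enumerate(blocks):
        if mask >> i & 1:
            for j, b in enumerate(block):
                out[j] ^= b
    return mask, bytes(out)

def decode(coded, n, size):
    """Recover the n source blocks by Gaussian elimination over GF(2).
    Returns None until enough linearly independent packets have arrived."""
    pivots = {}  # pivot bit index -> (mask, payload)
    for mask, payload in coded:
        payload = bytearray(payload)
        while mask:
            p = (mask & -mask).bit_length() - 1    # lowest set coefficient
            if p not in pivots:
                pivots[p] = (mask, payload)
                break
            pmask, ppay = pivots[p]
            mask ^= pmask                          # eliminate against pivot row
            for j in range(size):
                payload[j] ^= ppay[j]
    if len(pivots) < n:
        return None
    blocks = [None] * n
    for p in sorted(pivots, reverse=True):         # back-substitute
        mask, payload = pivots[p]
        for q in range(p + 1, n):
            if mask >> q & 1:
                for j in range(size):
                    payload[j] ^= blocks[q][j]
        blocks[p] = payload
    return [bytes(b) for b in blocks]
```

Note there is indeed no resend path here — the sender just keeps emitting coded packets until the receiver reports success, which is the back channel Matthew's follow-up insists must still carry congestion feedback.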
Bob On 4/1/06, Lemon Obrien wrote: > > i have files being exchanged/downloaded using udp (last resort if tcp > fails); now, with > udp you know you'll need to do error correction if a packet is missing; i > do this by sending a 're-send' message; but, as some of you know...you can > not flod the network with thousands of resends...especially if you relay > messages to the destination node...so, my question is; what is a good > average mean time to keep sending 're-sends'...i have it working with 500 > milli-seconds and 1000...i want fast? > > Also, if no data is recieved, how long should i wait till i determine the > connection is no longer valid? > > thanks > lemon > > > You don't get no juice unless you squeeze > Lemon Obrien, the Third. > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20060401/846a5f0c/attachment.htm From matthew at matthew.at Sat Apr 1 23:12:31 2006 From: matthew at matthew.at (Matthew Kaufman) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: Message-ID: <028201c655e1$ccc59060$02c7cac6@matthewdesk> Bob Harris: > With coding, there is no need for a back channel from the receiver-to-sender for > resends. Forward error correction has its place, but it is no excuse for eliminating the feedback necessary to perform proper congestion control. 
There are numerous reasons why protocols which fail to perform congestion control (including RTP, as used for VOIP) are a bad idea for both the individual user (end-link saturation, excess queueing, impact on the congestion management of parallel TCP flows, routers which drop or de-prioritize nonconforming flows, etc.) and the Internet as a whole (router queueing, congestion collapse, etc.). TCP or protocols with TCP-friendly congestion management are mandatory for bulk transfer of data. TCP is the easy answer. Reimplementing TCP on UDP or using TFRC on UDP is the not-so-easy answer. My personal (albeit biased) suggestion is to use amicima's MFP, which gets you congestion controlled delivery for both reliable *and* unreliable flows, among many other features. Matthew Kaufman matthew@matthew.at http://www.amicima.com From dbarrett at quinthar.com Sat Apr 1 23:24:29 2006 From: dbarrett at quinthar.com (David Barrett) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <20060401220348.GA3384@cs.uoregon.edu> Message-ID: <20060401232437.5F6853FCF8@capsicum.zgp.org> I entirely agree with Daniel and Matthew here. TCP does an amazingly good job on this. I've abstracted the TCP algorithm into a simple class here: http://svn.iglance.com/svn/trunk/iglance/client/GTCP.h http://svn.iglance.com/svn/trunk/iglance/client/GTCP.cpp Basically, use GTCPServer to regulate the send speed and process acknowledgements, and GTCPClient to construct and maintain the bit-vector sent back with acknowledgements. It's not "real" TCP for a wide range of reasons, but it might give you a start on constructing your own UDP congestion-control mechanism. -david > -----Original Message----- > From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org] On > Behalf Of Daniel Stutzbach > Sent: Saturday, April 01, 2006 2:04 PM > To: Peer-to-peer development. > Subject: Re: [p2p-hackers] Hard question.... 
> > On Sat, Apr 01, 2006 at 01:33:18PM -0800, Lemon Obrien wrote: > > udp you know you'll need to do error correction if a packet is > > missing; i do this by sending a 're-send' message; but, as some of > > you know...you can not flod the network with thousands of > > resends...especially if you relay messages to the destination > > node...so, my question is; what is a good average mean time to keep > > sending 're-sends'...i have it working with 500 milli-seconds and > > 1000...i want fast? > > This is a complex topic and TCP does a lot behind the scenes to get it > right. You need a dynamic mechanism to balance: > - Sending packets as quickly as possible > - But not so fast that they cause congestion and heavy packet loss > > Rather than starting from scratch, I recommend studying the > way TCP handles this problem. Here are some good references: > > Van Jacobson, "Congestion Avoidance and Control", SIGCOMM, 1988 > http://ee.lbl.gov/papers/congavoid.pdf > > ^-- Van Jacobson encountered the problem you describe in the original > implementation of BSD's TCP and fixed it. > > RFC 3782: The NewReno Modification to TCP's Fast Recovery Algorithm > http://www.faqs.org/rfcs/rfc3782.html > > ^-- Other researchers make a small tweak giving a significant > performance boost > > W. Richard Stevens, TCP IP Illustrated, Volumes 1 and 2. > > ^-- Volume 2 walks through the BSD TCP source code and explains how it > all works. > > The SACK extension to TCP may also be useful (especially if your > application does not need in-order delivery), but it's best to get > everything else working first before you worry too much about that. 
> > -- > Daniel Stutzbach Computer Science Ph.D Student > http://www.barsoom.org/~agthorr University of Oregon > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From bob.harris.spamcontrol at gmail.com Sat Apr 1 23:26:44 2006 From: bob.harris.spamcontrol at gmail.com (Bob Harris) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <028201c655e1$ccc59060$02c7cac6@matthewdesk> References: <028201c655e1$ccc59060$02c7cac6@matthewdesk> Message-ID: On 4/1/06, Matthew Kaufman wrote: > > Forward error correction has its place, but it is no excuse for > eliminating > the feedback necessary to perform proper congestion control. I agree. I suggested getting rid of resends and selective acks via coding, not ditching congestion control altogether. Coding can get rid of ack packets that often carry just a tiny bit of information. And it solves his problem of what resend timeout parameters to pick. We know too little about Lemon's system to jump to any conclusions about congestion control - for all I know, it's a point-to-multipoint system with built-in throttling. My personal (albeit biased) suggestion is to use amicima's MFP, which gets > you congestion controlled delivery for both reliable *and* unreliable > flows, > among many other features. Sounds cool, does it work on Linux? Bob. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20060401/69551240/attachment.html From matthew at matthew.at Sat Apr 1 23:33:20 2006 From: matthew at matthew.at (Matthew Kaufman) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... 
In-Reply-To: Message-ID: <029c01c655e4$ac640ba0$02c7cac6@matthewdesk> Bob Harris: > Sounds cool, does it work on Linux? Yes. See our website at www.amicima.com for more... overview of the protocol: http://www.amicima.com/technology/mfp.html protocol documentation: http://www.amicima.com/developers/documentation.html reference implementation: http://www.amicima.com/developers/downloads.html The reference implementation is written in ANSI C and builds and runs on Linux, FreeBSD, Mac OS X, Solaris, and Win32 (that we know of). Matthew Kaufman matthew@matthew.at http://www.amicima.com From bob.harris.spamcontrol at gmail.com Sat Apr 1 23:54:43 2006 From: bob.harris.spamcontrol at gmail.com (Bob Harris) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <20060401232437.5F6853FCF8@capsicum.zgp.org> References: <20060401220348.GA3384@cs.uoregon.edu> <20060401232437.5F6853FCF8@capsicum.zgp.org> Message-ID: David, Matthew and Daniel, While I agree that TCP flow control is good and all, I worry a bit about the TCP high-horse and the many newbies who misunderstand it. Without implicating anyone, it's worth pointing out that TCP is not sacrosanct, it does not provide immunity from congestion, and it does not guarantee fair bandwidth sharing at the host level. I can create hundreds of TCP (or TCP-like) flows in parallel, easily consume more than my fair share of bandwidth, and easily create congestion at the routers by closing and creating TCP connections (slow start, anyone?). Many p2p apps do exactly that: open many connections to many other hosts. In fact, I'm cranky at the moment because some idiot's p2p download is consuming all the bandwidth at my current wireless hotspot. Maybe what we need is to extend the TCP ideas from the flow level to the host-level (and either embed them deep into the OS or enforce them via traffic shaping). 
That said, it's better to use a protocol with built-in congestion control than without, and it's better to adopt TCP's flow control than either nothing or something untested at large. Bob. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20060401/2c8e93f9/attachment.htm From dbarrett at quinthar.com Sun Apr 2 00:01:01 2006 From: dbarrett at quinthar.com (David Barrett) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: Message-ID: <20060402000111.D28233FCF8@capsicum.zgp.org> Totally agree that TCP isn't the final word. Just suggesting that using or replicating it is the fastest way to create an application that is generally friendly to other TCP streams, and is generally good at doing congestion control over the real-world internet. -david _____ From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org] On Behalf Of Bob Harris Sent: Saturday, April 01, 2006 3:55 PM To: Peer-to-peer development. Subject: Re: [p2p-hackers] Hard question.... David, Matthew and Daniel, While I agree that TCP flow control is good and all, I worry a bit about the TCP high-horse and the many newbies who misunderstand it. Without implicating anyone, it's worth pointing out that TCP is not sacrosanct, it does not provide immunity from congestion, and it does not guarantee fair bandwidth sharing at the host level. I can create hundreds of TCP (or TCP-like) flows in parallel, easily consume more than my fair share of bandwidth, and easily create congestion at the routers by closing and creating TCP connections (slow start, anyone?). Many p2p apps do exactly that: open many connections to many other hosts. In fact, I'm cranky at the moment because some idiot's p2p download is consuming all the bandwidth at my current wireless hotspot. 
Maybe what we need is to extend the TCP ideas from the flow level to the host-level (and either embed them deep into the OS or enforce them via traffic shaping). That said, it's better to use a protocol with built-in congestion control than without, and it's better to adopt TCP's flow control than either nothing or something untested at large. Bob. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20060401/53abeda4/attachment.html From coderman at gmail.com Sun Apr 2 00:48:30 2006 From: coderman at gmail.com (coderman) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: References: <20060401220348.GA3384@cs.uoregon.edu> <20060401232437.5F6853FCF8@capsicum.zgp.org> Message-ID: <4ef5fec60604011648k6b001504h8beb20e21fc50b51@mail.gmail.com> On 4/1/06, Bob Harris wrote: > ... > I can create hundreds of TCP (or TCP-like) flows in parallel, easily consume > more > than my fair share of bandwidth, and easily create congestion at the routers > by > closing and creating TCP connections (slow start, anyone?). Many p2p apps do > exactly that: open many connections to many other hosts. > > In fact, I'm cranky at the moment because some idiot's p2p download > is consuming > all the bandwidth at my current wireless hotspot. Maybe what we need is to > extend the TCP > ideas from the flow level to the host-level (and either embed them deep into > the OS > or enforce them via traffic shaping). traffic shaping is an excellent idea and something i encourage and use routinely. implementing policy at the host/endpoint level is much better than trying to kludge it within an application (throttling TCP sockets in userspace, etc) that has a very limited view of network capability and status. 
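[Archive editor's note: the host-level shaping coderman endorses is classically built on a token bucket. A minimal sketch follows — the rate and burst figures are arbitrary, and a real deployment would live in the kernel's shaper (e.g. Linux tc) rather than in the application, for exactly the visibility reasons he gives.]

```python
import time

class TokenBucket:
    """Cap average send rate at `rate` bytes/sec while allowing
    bursts of up to `burst` bytes."""

    def __init__(self, rate, burst, now=None):
        self.rate = float(rate)
        self.burst = float(burst)
        self.tokens = float(burst)                  # start with a full bucket
        self.stamp = time.monotonic() if now is None else now

    def allow(self, nbytes, now=None):
        """Return True (and spend tokens) if nbytes may be sent now."""
        now = time.monotonic() if now is None else now
        # Refill at `rate` tokens/sec, capped at the bucket size.
        self.tokens = min(self.burst, self.tokens + (now - self.stamp) * self.rate)
        self.stamp = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```

Usage: call `allow(len(packet))` before each UDP send and delay or drop on False.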
regarding UDP: the reliable multicast charter has done a lot of work to couple congestion avoidance and reliable transmission for datagram transport without the full overhead of a TCP like mechanism. my personal preference when using UDP to many endpoints (although i admit i've focused mostly on signalling/control channels with UDP) is to limit overall throughput to a fixed fraction of available bandwidth. this way TCP and other transports can negotiate session capacity within the remaining bandwidth. From dbarrett at quinthar.com Sun Apr 2 01:00:07 2006 From: dbarrett at quinthar.com (David Barrett) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <4ef5fec60604011648k6b001504h8beb20e21fc50b51@mail.gmail.com> Message-ID: <20060402010011.656EE3FCF8@capsicum.zgp.org> > -----Original Message----- > From: coderman > Subject: Re: [p2p-hackers] Hard question.... > > my personal preference when using UDP to many endpoints (although i > admit i've focused mostly on signalling/control channels with UDP) is > to limit overall throughput to a fixed fraction of available > bandwidth. this way TCP and other transports can negotiate session > capacity within the remaining bandwidth. Incidentally, how are you measuring "available bandwidth"? -david From coderman at gmail.com Sun Apr 2 01:20:28 2006 From: coderman at gmail.com (coderman) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <20060402010011.656EE3FCF8@capsicum.zgp.org> References: <4ef5fec60604011648k6b001504h8beb20e21fc50b51@mail.gmail.com> <20060402010011.656EE3FCF8@capsicum.zgp.org> Message-ID: <4ef5fec60604011720p1f0c8c85p55188f4eee695767@mail.gmail.com> On 4/1/06, David Barrett wrote: > ... > Incidentally, how are you measuring "available bandwidth"? right now i pass the buck and let the user pick a suitable limit. if excessive loss is detected continuously the stack can cut by half or exit with error. 
i'm still looking for better ways to do this; ideally it would be tied to kernel level shaping and based on a historical view of channel capacity. From dbarrett at quinthar.com Sun Apr 2 01:42:48 2006 From: dbarrett at quinthar.com (David Barrett) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <4ef5fec60604011720p1f0c8c85p55188f4eee695767@mail.gmail.com> Message-ID: <20060402014250.6AB2F3FCA5@capsicum.zgp.org> > -----Original Message----- > From: coderman > Sent: Saturday, April 01, 2006 5:20 PM > To: Peer-to-peer development. > Subject: Re: [p2p-hackers] Hard question.... > > On 4/1/06, David Barrett wrote: > > ... > > Incidentally, how are you measuring "available bandwidth"? > > right now i pass the buck and let the user pick a suitable limit. if > excessive loss is detected continuously the stack can cut by half or > exit with error. > > i'm still looking for better ways to do this; ideally it would be tied > to kernel level shaping and based on a historical view of channel > capacity. Got it. Has anyone else had good experience trying to measure this automatically in the real world? -david From gbildson at limepeer.com Sun Apr 2 02:14:27 2006 From: gbildson at limepeer.com (gbildson@limepeer.com) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <20060402014250.6AB2F3FCA5@capsicum.zgp.org> References: <20060402014250.6AB2F3FCA5@capsicum.zgp.org> Message-ID: <1143944067.442f338314d9e@cyrus.limewire.com> I've missed part of this conversation but here is my two cents on this specific question - just keep increasing the amount of data that you are sending in bursts and the speed of those bursts until you achieve a certain target error rate. i.e. 2% or whatever. After bumping up against failures, you should be able to get a sense of an optimal rate. Be sensitive to TCP congestion at the same time. I back off if the round trip time starts spiking. 
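[Archive editor's note: greg's probing loop — ramp the send rate until loss approaches a target, back off when loss or RTT spikes — is an AIMD controller. A sketch under assumed constants (the 2% target is his; the step sizes and the 1.5x RTT-spike threshold are illustrative, not from any post):]

```python
class RateProbe:
    """AIMD-style rate search: additive increase while loss stays under
    a target, multiplicative decrease on excess loss or an RTT spike."""

    def __init__(self, rate=32_000, step=8_000, target_loss=0.02):
        self.rate = rate              # bytes/sec currently allowed
        self.step = step              # additive increase per feedback round
        self.target_loss = target_loss
        self.base_rtt = None          # lowest RTT observed so far

    def on_feedback(self, loss_rate, rtt):
        """Feed one round of measured loss rate and RTT; returns new rate."""
        self.base_rtt = rtt if self.base_rtt is None else min(self.base_rtt, rtt)
        if loss_rate > self.target_loss or rtt > 1.5 * self.base_rtt:
            self.rate = max(self.step, self.rate // 2)   # back off
        else:
            self.rate += self.step                       # probe upward
        return self.rate
```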
Thanks -greg Quoting David Barrett : > > -----Original Message----- > > From: coderman > > Sent: Saturday, April 01, 2006 5:20 PM > > To: Peer-to-peer development. > > Subject: Re: [p2p-hackers] Hard question.... > > > > On 4/1/06, David Barrett wrote: > > > ... > > > Incidentally, how are you measuring "available bandwidth"? > > > > right now i pass the buck and let the user pick a suitable limit. if > > excessive loss is detected continuously the stack can cut by half or > > exit with error. > > > > i'm still looking for better ways to do this; ideally it would be tied > > to kernel level shaping and based on a historical view of channel > > capacity. > > Got it. Has anyone else had good experience trying to measure this > automatically in the real world? > > -david > > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > From dbarrett at quinthar.com Sun Apr 2 02:28:25 2006 From: dbarrett at quinthar.com (David Barrett) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <1143944067.442f338314d9e@cyrus.limewire.com> Message-ID: <20060402022833.63D493FCA5@capsicum.zgp.org> That makes sense, but it's a bit of a catch-22: In order to not saturate the connection you need to know what's available. But to know what's available, you need to saturate the connection. I'm curious if there's another way. -david > -----Original Message----- > From: gbildson@limepeer.com [mailto:gbildson@limepeer.com] > Sent: Saturday, April 01, 2006 6:14 PM > To: Peer-to-peer development.; David Barrett > Cc: 'Peer-to-peer development.' > Subject: RE: [p2p-hackers] Hard question.... 
> > I've missed part of this conversation but here is my two cents on this > specific > question - just keep increasing the amount of data that you are sending > in > bursts and the speed of those bursts until you achieve a certain target > error > rate. i.e. 2% or whatever. After bumping up against failures, you should > be > able to get a sense of an optimal rate. Be sensitive to TCP congestion at > the > same time. I back off if the round trip time starts spiking. > > Thanks > -greg > > > Quoting David Barrett : > > > -----Original Message----- > > > From: coderman > > > Sent: Saturday, April 01, 2006 5:20 PM > > > To: Peer-to-peer development. > > > Subject: Re: [p2p-hackers] Hard question.... > > > > > > On 4/1/06, David Barrett wrote: > > > > ... > > > > Incidentally, how are you measuring "available bandwidth"? > > > > > > right now i pass the buck and let the user pick a suitable limit. if > > > excessive loss is detected continuously the stack can cut by half or > > > exit with error. > > > > > > i'm still looking for better ways to do this; ideally it would be tied > > > to kernel level shaping and based on a historical view of channel > > > capacity. > > > > Got it. Has anyone else had good experience trying to measure this > > automatically in the real world? > > > > -david > > > > > > _______________________________________________ > > p2p-hackers mailing list > > p2p-hackers@zgp.org > > http://zgp.org/mailman/listinfo/p2p-hackers > > _______________________________________________ > > Here is a web page listing P2P Conferences: > > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > > From gbildson at limepeer.com Sun Apr 2 02:52:31 2006 From: gbildson at limepeer.com (gbildson@limepeer.com) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... 
In-Reply-To: <20060402022833.63D493FCA5@capsicum.zgp.org> References: <20060402022833.63D493FCA5@capsicum.zgp.org> Message-ID: <1143946351.442f3c6f44dea@cyrus.limewire.com> I'm gonna go with _no_. The amount of time it is saturated is just a blip. Thanks -greg Quoting David Barrett : > That makes sense, but it's a bit of a catch-22: > > In order to not saturate the connection you need to know what's available. > But to know what's available, you need to saturate the connection. > > I'm curious if there's another way. > > -david > > > -----Original Message----- > > From: gbildson@limepeer.com [mailto:gbildson@limepeer.com] > > Sent: Saturday, April 01, 2006 6:14 PM > > To: Peer-to-peer development.; David Barrett > > Cc: 'Peer-to-peer development.' > > Subject: RE: [p2p-hackers] Hard question.... > > > > I've missed part of this conversation but here is my two cents on this > > specific > > question - just keep increasing the amount of data that you are sending > > in > > bursts and the speed of those bursts until you achieve a certain target > > error > > rate. i.e. 2% or whatever. After bumping up against failures, you should > > be > > able to get a sense of an optimal rate. Be sensitive to TCP congestion at > > the > > same time. I back off if the round trip time starts spiking. > > > > Thanks > > -greg > > > > > > Quoting David Barrett : > > > > -----Original Message----- > > > > From: coderman > > > > Sent: Saturday, April 01, 2006 5:20 PM > > > > To: Peer-to-peer development. > > > > Subject: Re: [p2p-hackers] Hard question.... > > > > > > > > On 4/1/06, David Barrett wrote: > > > > > ... > > > > > Incidentally, how are you measuring "available bandwidth"? > > > > > > > > right now i pass the buck and let the user pick a suitable limit. if > > > > excessive loss is detected continuously the stack can cut by half or > > > > exit with error. 
> > > > > > > > i'm still looking for better ways to do this; ideally it would be tied > > > > to kernel level shaping and based on a historical view of channel > > > > capacity. > > > > > > Got it. Has anyone else had good experience trying to measure this > > > automatically in the real world? > > > > > > -david > > > > > > > > > _______________________________________________ > > > p2p-hackers mailing list > > > p2p-hackers@zgp.org > > > http://zgp.org/mailman/listinfo/p2p-hackers > > > _______________________________________________ > > > Here is a web page listing P2P Conferences: > > > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > > > > > > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > From matthew at matthew.at Sun Apr 2 03:38:47 2006 From: matthew at matthew.at (Matthew Kaufman) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: Message-ID: <02b501c65606$f6235120$02c7cac6@matthewdesk> Bob Harris: > While I agree that TCP flow control is good and all, I worry a bit about > the TCP high-horse and the many newbies who misunderstand it. I worry more about the newbies who don't understand how much a host's TCP implementation is doing for them, and go off and naively implement UDP-based bulk transfer or streaming protocols. TCP handles RTT calculation (including a good try at not getting it wrong in the face of extra retransmissions), retransmission timing, a sliding window (instead of lock-step wait-for-ack), flow control against the receiver's buffer *and* a good attempt at congestion control when loss is detected. 
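[Archive editor's note: the "RTT calculation" and "retransmission timing" items Matthew lists are themselves subtle. The standard scheme — from the Jacobson paper cited earlier in the thread, with constants later codified in RFC 6298 — looks roughly like this sketch:]

```python
class RtoEstimator:
    """Jacobson/Karels retransmission timeout: smoothed RTT plus four
    times the mean deviation, with exponential backoff on timeout
    (gains and clamps per RFC 6298)."""
    K, ALPHA, BETA = 4, 1 / 8, 1 / 4

    def __init__(self):
        self.srtt = None
        self.rttvar = None
        self.rto = 1.0                # conservative initial RTO, seconds

    def sample(self, rtt):
        """Fold one RTT measurement (seconds) into the estimate."""
        if self.srtt is None:
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
        self.rto = max(0.2, self.srtt + self.K * self.rttvar)  # floor the RTO
        return self.rto

    def backoff(self):
        """Double the RTO after a retransmission timeout, capped."""
        self.rto = min(self.rto * 2, 60.0)
        return self.rto
```

One wrinkle the sketch omits is Karn's rule: samples from retransmitted segments must be discarded, which is the "not getting it wrong in the face of extra retransmissions" point above.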
> Without > implicating anyone, it's worth pointing out that TCP is not sacrosanct, it > does not provide immunity from congestion, and it does not guarantee > fair bandwidth sharing at the host level. True enough. TCP is also showing its age... Even with window scaling, large delay*bandwidth isn't well-tolerated. The AIMD algorithm isn't sufficient for large delay*bandwidth either, especially if there's slight nonzero loss. TCP doesn't have selective acknowledgements by default, and the rarely-implemented SACK specification actually allows the receiver to renege on acknowledgement claims, which causes transmit buffer issues when lots of data is in-flight. And most TCP implementations don't properly implement the specification with regard to capping the max RTO, so brief link outages take much longer to recover from than they should. And there's *still* the SYN flood problem, session hijacking potential, and everything else that's been discovered over the years. > I can create hundreds of TCP (or TCP-like) flows in parallel, easily consume more > than my fair share of bandwidth, and easily create congestion at the routers by > closing and creating TCP connections (slow start, anyone?). Many p2p apps do > exactly that: open many connections to many other hosts. Sure. You could also use TCP to saturate your link simply by issuing millions of simultaneous new connections. But TCP is the standard for how a bulk transfer flow should behave in the face of loss. There are traffic shaping devices that can detect flows that fail to back off *like TCP does* in the presence of loss, and they will severely penalize such flows. That's a great reason to use a TCP-like or TCP-friendly algorithm for congestion control. Another great reason is to look at the behavior of a TCP flow if a parallel flow takes more or less than what TCP would, given the same average loss and same RTT... What you'll discover is that TCP operates essentially on a knife-edge... 
Take a little more, and you'll drown out the TCP flows. Be a little more timid, and TCP will take most of the available bandwidth. The amicima MFP implementation knows about this and uses it to its advantage when using priority to adapt congestion response, but not knowing at all and naively ignoring the situation will make users unhappy when they start trying to do two things at once. > In fact, I'm cranky at the moment because some idiot's p2p download is consuming > all the bandwidth at my current wireless hotspot. Maybe what we need is to extend the TCP > ideas from the flow level to the host-level (and either embed them deep into the OS > or enforce them via traffic shaping). amicima's MFP does share congestion state between all flows that travel between a given pair of hosts, which results in much better behavior in the case where you have multiple parallel file transfers... In addition, there's flow prioritization, which allows a higher priority flow (eg., a VOIP flow) to get first dibs on the available bandwidth, rather than simply taking its chances. And MFP also shares received priority data with all the other hosts it is talking to, so that if A is sending high priority data to C, and B is sending low priority data to C, B knows to be more aggressive in backing off if it detects loss so as to leave inbound room at C for the flow from A. Obviously if you throw TCP flows into the mix, you don't get all the benefits, but you do still get *tested* TCP-friendly performance (some of the TCP-friendly rate control algorithms are actually quite poor in real life, due to excess time constant in their feedback loop or other subtle flaws). Getting congestion control to work properly in MFP took the majority of our development time. We tried several alternative approaches... TFRC-like algorithms, explicit loss reporting vs. deriving loss from acknowledgements, token-bucket rate shaping vs. data-in-flight control. The theory says that a whole lot of things will work. 
In practice, there's only a few that operate correctly in real life, and there's a lot of tricks (eg., how we calculate RTT) that improve performance more than you might expect at first glance. Knowing now how many programmer hours it took to get it to even a passable state, I wouldn't recommend the exercise to anyone. > That said, it's better to use a protocol with built-in congestion control than without Absolutely. In fact, for bulk transfer or streaming media, developers should consider congestion control *mandatory* for proper behavior on the Internet, and yes, that *should* include RTP VOIP flows too. > and it's better to adopt TCP's flow control than either nothing or something untested at large. TCP's flow control (and of course there's several flavors... Reno, Vegas, etc.) is both a good start, and what almost all the other traffic is using... So you either need to emulate it, or come up with something that interoperates fairly when the majority of the other parallel flows *are* TCP. And if you don't know how to do that correctly, or don't have the time to implement *and test* it, you should just use TCP or some other protocol stack that has solved the problem already. Matthew Kaufman matthew@matthew.at http://www.amicima.com From matthew at matthew.at Sun Apr 2 03:45:32 2006 From: matthew at matthew.at (Matthew Kaufman) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <20060402014250.6AB2F3FCA5@capsicum.zgp.org> Message-ID: <02bf01c65607$e783c450$02c7cac6@matthewdesk> David Barrett: > Has anyone else had good experience trying to > measure this automatically in the real world? If you can accurately measure RTT, and accurately compute a transmit window from observed loss the same way that TCP does, then you have all the raw data necessary to know what the available bandwidth for a TCP-friendly flow was over the last RTT. 
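[Archive editor's note: the back-of-the-envelope version of this estimate is the well-known simplified TCP throughput relation, rate ≈ MSS / (RTT · sqrt(2p/3)), the same steady-state model TFRC-style senders pace against. The sketch below shows both views; a real TFRC implementation uses the fuller equation from its specification.]

```python
from math import sqrt

def tcp_friendly_rate(mss, rtt, loss):
    """Simplified TCP throughput model (bytes/sec): the rate a single
    TCP flow would sustain at the given RTT and loss probability."""
    if loss <= 0:
        raise ValueError("model is only defined for nonzero loss")
    return mss / (rtt * sqrt(2 * loss / 3))

def rate_from_window(cwnd_bytes, srtt):
    """Equivalent window view: whatever congestion window the loss
    history justifies, divided by the smoothed RTT, is the bandwidth
    that was available to the flow over the last RTT."""
    return cwnd_bytes / srtt
```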
And since you should be TCP-friendly, that bandwidth is in fact the "available bandwidth", even if slightly more (or MUCH more, if the loss isn't from congestion) actually exists. And since you can't predict the future, and overall congestion and loss varies much more rapidly than you might expect, you can't do any better than knowing the past. Within one RTT, if you do it right... Worse, if you're trying to calculate it from longer-term observations like algorithm-based TFRC does. Matthew Kaufman matthew@matthew.at http://www.amicima.com From matthew at matthew.at Sun Apr 2 03:47:53 2006 From: matthew at matthew.at (Matthew Kaufman) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <1143946351.442f3c6f44dea@cyrus.limewire.com> Message-ID: <02c901c65608$3b9b2bf0$02c7cac6@matthewdesk> gbildson@limepeer.com: > I'm gonna go with _no_. The amount of time it is saturated > is just a blip. I'm going with "no" as well. And note that my previous note about computing bandwidth based on accurate RTT and accurate window calculation assumes that you have enough pending data that you are actively driving the window to maximum size and probing for loss. If not, then you'll just calculate your (lower) actual consumed bandwidth. Matthew Kaufman matthew@matthew.at http://www.amicima.com From ap at hamachi.cc Sun Apr 2 04:41:35 2006 From: ap at hamachi.cc (Alex Pankratov) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <1143944067.442f338314d9e@cyrus.limewire.com> References: <20060402014250.6AB2F3FCA5@capsicum.zgp.org> <1143944067.442f338314d9e@cyrus.limewire.com> Message-ID: <442F55FF.2080107@hamachi.cc> gbildson@limepeer.com wrote: > I've missed part of this conversation but here is my two cents on this specific > question - just keep increasing the amount of data that you are sending in > bursts and the speed of those bursts until you achieve a certain target error > rate. i.e. 2% or whatever. 
After bumping up against failures, you should be > able to get a sense of an optimal rate. Be sensitive to TCP congestion at the > same time. I back off if the round trip time starts spiking. I want to second the RTT-based congestion avoidance approach. Given that it is *the* idea behind TCP/Vegas, it is nothing new, but the nice thing about it is that it works very well for consumer Internet connections. The reason being that their bandwidth is typically capped by queuing traffic shapers (as opposed to actual hardware limits). So once the shaper starts queuing packets, it can be detected by a sender by looking at RTT going up. It can also be detected by the recipient and thus allow for a faster (pre-)congestion detection. This however requires both sides to first synchronize their clocks, and it's really worth doing only if the link has very large latency. Alex From lemonobrien at yahoo.com Sun Apr 2 04:56:42 2006 From: lemonobrien at yahoo.com (Lemon Obrien) Date: Sat Dec 9 22:13:12 2006 Subject: The Lazy Susan...RE: [p2p-hackers] Hard question.... In-Reply-To: <028201c655e1$ccc59060$02c7cac6@matthewdesk> Message-ID: <20060402045642.57952.qmail@web53603.mail.yahoo.com> I'm passive-aggressive so my algorithms tend to be...X sends 'stream-file' to Y, Y sends chunks of 'file-data' sequenced and sessioned back to X. X stores all 'file-data' as they appear and sends 'resend' when a sequence is missing; Y calculates according to sequence; and sends 'file-data' to X. Y sends EOF. and X can send close closing the session streaming on Y. data is read in sequence as a stream. messages are relayed...i call it lazy cause a resend is only sent when a sequence is determined to be missing. I believe tcp does an 'ack' for each node it traverses. i can calculate a running mean to some elapsed total. Matthew Kaufman wrote: Bob Harris: > With coding, there is no need for a back channel from the receiver-to-sender for > resends.
Forward error correction has its place, but it is no excuse for eliminating the feedback necessary to perform proper congestion control. There are numerous reasons why protocols which fail to perform congestion control (including RTP, as used for VOIP) are a bad idea for both the individual user (end-link saturation, excess queueing, impact on the congestion management of parallel TCP flows, routers which drop or de-prioritize nonconforming flows, etc.) and the Internet as a whole (router queueing, congestion collapse, etc.). TCP or protocols with TCP-friendly congestion management are mandatory for bulk transfer of data. TCP is the easy answer. Reimplementing TCP on UDP or using TFRC on UDP is the not-so-easy answer. My personal (albeit biased) suggestion is to use amicima's MFP, which gets you congestion-controlled delivery for both reliable *and* unreliable flows, among many other features. Matthew Kaufman matthew@matthew.at http://www.amicima.com _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences You don't get no juice unless you squeeze Lemon Obrien, the Third. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20060401/85cf4ddd/attachment.htm From mgp at ucla.edu Sun Apr 2 05:03:34 2006 From: mgp at ucla.edu (Michael Parker) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <20060402014250.6AB2F3FCA5@capsicum.zgp.org> References: <20060402014250.6AB2F3FCA5@capsicum.zgp.org> Message-ID: <20060401210334.yqiwac39gksoccks@mail.ucla.edu> The network research lab at my school has a tool called CapProbe that allows fast and accurate capacity estimation.
I haven't used it personally, but I know it relies on measuring the dispersion of packet pairs. The paper from SIGCOMM 2004 is at: http://www.cs.ucla.edu/NRL/CapProbe/files/04_SIGCOMM_CapProbe.pdf The software can be downloaded from http://www.cs.ucla.edu/NRL/CapProbe. Both kernel-level and user-level versions are available for Linux. - Mike Quoting David Barrett : >> -----Original Message----- >> From: coderman >> Sent: Saturday, April 01, 2006 5:20 PM >> To: Peer-to-peer development. >> Subject: Re: [p2p-hackers] Hard question.... >> >> On 4/1/06, David Barrett wrote: >> > ... >> > Incidentally, how are you measuring "available bandwidth"? >> >> right now i pass the buck and let the user pick a suitable limit. if >> excessive loss is detected continuously the stack can cut by half or >> exit with error. >> >> i'm still looking for better ways to do this; ideally it would be tied >> to kernel level shaping and based on a historical view of channel >> capacity. > > Got it. Has anyone else had good experience trying to measure this > automatically in the real world? > > -david > > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > From agthorr at cs.uoregon.edu Sun Apr 2 05:07:51 2006 From: agthorr at cs.uoregon.edu (Daniel Stutzbach) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <20060402022833.63D493FCA5@capsicum.zgp.org> References: <1143944067.442f338314d9e@cyrus.limewire.com> <20060402022833.63D493FCA5@capsicum.zgp.org> Message-ID: <20060402050750.GA620@cs.uoregon.edu> Saturation is the goal. Wasted bandwidth is bad. 
Super-saturation, where your application is blasting everything else into oblivion, is what you need to avoid (and you should specifically test how your app competes with TCP). In practice you typically oscillate around the saturation point, sometimes leaving a little bandwidth unused, sometimes super-saturating and causing packets to queue (increasing RTT and eventually packet loss). You can't really avoid this because the saturation point moves depending on what other applications are doing. On Sat, Apr 01, 2006 at 06:28:25PM -0800, David Barrett wrote: > That makes sense, but it's a bit of a catch-22: > > In order to not saturate the connection you need to know what's available. > But to know what's available, you need to saturate the connection. > > I'm curious if there's another way. > > -david > > > From: gbildson@limepeer.com [mailto:gbildson@limepeer.com] > > Sent: Saturday, April 01, 2006 6:14 PM > > To: Peer-to-peer development.; David Barrett > > Cc: 'Peer-to-peer development.' > > Subject: RE: [p2p-hackers] Hard question.... > > > > I've missed part of this conversation but here is my two cents on this > > specific > > question - just keep increasing the amount of data that you are sending > > in > > bursts and the speed of those bursts until you achieve a certain target > > error > > rate. i.e. 2% or whatever. After bumping up against failures, you should > > be > > able to get a sense of an optimal rate. Be sensitive to TCP congestion at > > the > > same time. I back off if the round trip time starts spiking. > > > > Thanks > > -greg > > > > > > Quoting David Barrett : > > > > From: coderman > > > > Sent: Saturday, April 01, 2006 5:20 PM > > > > To: Peer-to-peer development. > > > > Subject: Re: [p2p-hackers] Hard question.... > > > > > > > > On 4/1/06, David Barrett wrote: > > > > > ... > > > > > Incidentally, how are you measuring "available bandwidth"? > > > > > > > > right now i pass the buck and let the user pick a suitable limit.
if > > > > excessive loss is detected continuously the stack can cut by half or > > > > exit with error. > > > > > > > > i'm still looking for better ways to do this; ideally it would be tied > > > > to kernel level shaping and based on a historical view of channel > > > > capacity. > > > > > > Got it. Has anyone else had good experience trying to measure this > > > automatically in the real world? > > > > > > -david > > > > > > > > > _______________________________________________ > > > p2p-hackers mailing list > > > p2p-hackers@zgp.org > > > http://zgp.org/mailman/listinfo/p2p-hackers > > > _______________________________________________ > > > Here is a web page listing P2P Conferences: > > > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > > > > > > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > -- Daniel Stutzbach Computer Science Ph.D Student http://www.barsoom.org/~agthorr University of Oregon From lemonobrien at yahoo.com Sun Apr 2 05:57:56 2006 From: lemonobrien at yahoo.com (Lemon Obrien) Date: Sat Dec 9 22:13:12 2006 Subject: Hard Question...Re: The Lazy Susan...RE: [p2p-hackers] Hard question.... In-Reply-To: <20060402045642.57952.qmail@web53603.mail.yahoo.com> Message-ID: <20060402055756.90257.qmail@web53610.mail.yahoo.com> given my algorithm...timestamp the 'stream-file' message; or when sent to save space...and for each 'file-data' packet received calculate the running average of ( now - timestamp ) / # sequences received. Can't resend anything unless we've received something...how long to wait; if the connection goes down it should end rather quickly...responsively?
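One well-tested refinement of that running average is TCP's own estimator (Jacobson's algorithm, from the paper Daniel cited earlier in the thread): track both a smoothed RTT and its mean deviation, and retransmit at srtt + 4*rttvar rather than at a fixed 500 or 1000 ms. A sketch with invented names; the gains (1/8 and 1/4) are TCP's standard ones:

```java
// Jacobson-style RTT estimation, as TCP does it. The class is a sketch
// with invented names; the gains (1/8 and 1/4) and the 4x deviation
// multiplier are the standard TCP values.
public class RttEstimator {
    private double srtt = -1.0;  // smoothed round-trip time, seconds
    private double rttvar = 0.0; // smoothed mean deviation of the RTT

    /** Feed one (now - timestamp) sample per acknowledged packet. */
    public void addSample(double rtt) {
        if (srtt < 0) {
            srtt = rtt;          // first sample initializes the estimator
            rttvar = rtt / 2.0;
        } else {
            rttvar = 0.75 * rttvar + 0.25 * Math.abs(srtt - rtt);
            srtt = 0.875 * srtt + 0.125 * rtt;
        }
    }

    /** How long to wait before the next 're-send' of a missing sequence. */
    public double retransmitTimeout() {
        return srtt + 4.0 * rttvar;
    }

    public static void main(String[] args) {
        RttEstimator e = new RttEstimator();
        e.addSample(0.2); // 200 ms sample
        e.addSample(0.3); // 300 ms sample
        System.out.println(e.retransmitTimeout()); // a bit over 0.6 s
    }
}
```

For the second question (when to give up): the usual recipe is to double the timeout on every unanswered re-send and declare the connection dead after some fixed number of consecutive timeouts. TCP waits minutes; a P2P relay can reasonably give up after five or six.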
Lemon Obrien wrote: I'm passive aggresive so my algorythms tend to be...X sends 'stream-file' to Y, Y sends chunks of 'file-data' sequenced and sessioned back to X. X stores all 'file-data' as they appear and sends 'resend' when a sequence is missing; Y calulates according to sequence; and sends 'file-data' to X. Y sends EOF. and X can send close closing the session streaming on Y. data is read in sequence as a stream. messages are relayed...i call it lazy cause a resend is only sent when a sequence is determined to be missing. I believe tcp does an 'ack' for each node it traverses. i can calculate a runing-mean to some elasped-total. Matthew Kaufman wrote: Bob Harris: > With coding, there is no need for a back channel from the receiver-to-sender for > resends. Forward error correction has its place, but it is no excuse for eliminating the feedback necessary to perform proper congestion control. There are numerous reasons why protocols which fail to perform congestion control (including RTP, as used for VOIP) are a bad idea for both the individual user (end-link saturation, excess queueing, impact on the congestion management of parallel TCP flows, routers which drop or de-prioritize nonconforming flows, etc.) and the Internet as a whole (router queueing, congestion collapse, etc.). TCP or protocols with TCP-friendly congestion management are mandatory for bulk transfer of data. TCP is the easy answer. Reimplementing TCP on UDP or using TFRC on UDP is the not-so-easy answer. My personal (albeit biased) suggestion is to use amicima's MFP, which gets you congestion controlled delivery for both reliable *and* unreliable flows, among many other features. 
Matthew Kaufman matthew@matthew.at http://www.amicima.com _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences You don't get no juice unless you squeeze Lemon Obrien, the Third._______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences You don't get no juice unless you squeeze Lemon Obrien, the Third. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20060401/298b958e/attachment.html From lln at it.uu.se Sun Apr 2 09:03:07 2006 From: lln at it.uu.se (=?ISO-8859-1?Q?Lars-=C5ke_Larzon?=) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <442F55FF.2080107@hamachi.cc> References: <20060402014250.6AB2F3FCA5@capsicum.zgp.org> <1143944067.442f338314d9e@cyrus.limewire.com> <442F55FF.2080107@hamachi.cc> Message-ID: <710D6EE9-9315-497C-B3BC-0FC3F71C4286@it.uu.se> 2 apr 2006 kl. 06.41 skrev Alex Pankratov: > > > gbildson@limepeer.com wrote: >> I've missed part of this conversation but here is my two cents on >> this specific >> question - just keep increasing the amount of data that you are >> sending in >> bursts and the speed of those bursts until you achieve a certain >> target error >> rate. i.e. 2% or whatever. After bumping up against failures, >> you should be >> able to get a sense of an optimal rate. Be sensitive to TCP >> congestion at the >> same time. I back off if the round trip time starts spiking. > > I want to second RTT-based congestion avoidance approach. 
Given > that it is *the* idea behind TCP/Vegas, it is nothing new, but the nice thing > about it is that it works very well for consumer Internet connections. > RTT spikes can occur for many reasons other than congestion, especially if you have links that insist on in-order frame delivery in your path. So, being too sensitive to RTT spikes can actually give you quite poor performance. It is also quite common that RTT variations occur on much shorter timescales than you are able to detect them on. So, when you detect the spike, it may be long gone and your reaction might be more or less meaningless. Regarding TCP/Vegas, it requires a quite precise clock to operate properly if I remember it correctly. There are many newer, simpler schemes that also look at RTT variations without the need for such precision. TCP Westwood is one of them, but there are others as well. /Lars-Åke -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4030 bytes Desc: not available Url : http://zgp.org/pipermail/p2p-hackers/attachments/20060402/162b27d0/smime.bin From m.rogers at cs.ucl.ac.uk Sun Apr 2 12:06:12 2006 From: m.rogers at cs.ucl.ac.uk (Michael Rogers) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <4ef5fec60604011648k6b001504h8beb20e21fc50b51@mail.gmail.com> References: <20060401220348.GA3384@cs.uoregon.edu> <20060401232437.5F6853FCF8@capsicum.zgp.org> <4ef5fec60604011648k6b001504h8beb20e21fc50b51@mail.gmail.com> Message-ID: <442FBE34.7030702@cs.ucl.ac.uk> By the way, the RFC for DCCP was just published: http://www.rfc-editor.org/rfc/rfc4340.txt "It may be useful to think of DCCP as TCP minus bytestream semantics and reliability, or as UDP plus congestion control, handshakes, and acknowledgements."
Cheers, Michael From dbarrett at quinthar.com Sun Apr 2 21:34:43 2006 From: dbarrett at quinthar.com (David Barrett) Date: Sat Dec 9 22:13:12 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <20060402050750.GA620@cs.uoregon.edu> Message-ID: <20060402213447.16FAC3FCFB@capsicum.zgp.org> > -----Original Message----- > From: Daniel Stutzbach > Sent: Saturday, April 01, 2006 9:08 PM > To: 'Peer-to-peer development.' > Subject: Re: [p2p-hackers] Hard question.... > > Saturation is the goal. Wasted bandwidth is bad. Well, yes, but I'm asking "how do you measure how much bandwidth is currently being wasted by other applications, and then only use that amount"? -david From osokin at osokin.com Sun Apr 2 22:47:01 2006 From: osokin at osokin.com (Serguei Osokine) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <20060402213447.16FAC3FCFB@capsicum.zgp.org> Message-ID: On Sunday, April 02, 2006 David Barrett wrote: > ..."how do you measure how much bandwidth is currently being wasted > by other applications, and then only use that amount"? My understanding is that the only realistic approach is to make your ramping up less aggressive than other streams (which basically means "less aggressive than TCP"), and that will automatically make your traffic utilize the full bandwidth in the absence of other data transfers, and shrink back when other transport (mail, Web, etc) use the connection. Basically, the idea is that you don't have to measure; you just use whatever's left. I believe Dijjer might be doing something like that with its GAIMD approach, though I'm not sure. I mean, in principle GAIMD should allow you to achieve that, but I don't know whether Dijjer actually does use it this way, or just tries to exactly simulate the normal TCP ramping up, in which case it simply does not kill the concurrent TCP streams, but does not shrink back in their presence. Their description of being "TCP friendly" is a bit ambiguous on that. 
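For reference, GAIMD just parameterizes TCP's additive-increase/multiplicative-decrease: grow the window by alpha segments per loss-free round trip, and cut it to beta times its size on loss (Reno is alpha=1, beta=1/2; picking a gentler pair is what makes a flow yield to competing TCP). A sketch, with names of my own invention:

```java
// Sketch of GAIMD (generalized AIMD): grow the window by alpha segments
// per loss-free round trip, cut it to beta times its size on loss.
// TCP Reno corresponds to alpha=1, beta=0.5. All names here are invented.
public class GaimdWindow {
    private final double alpha; // additive increase, segments per RTT
    private final double beta;  // multiplicative decrease factor on loss
    private double cwnd;        // congestion window, in segments

    public GaimdWindow(double alpha, double beta, double initialCwnd) {
        this.alpha = alpha;
        this.beta = beta;
        this.cwnd = initialCwnd;
    }

    /** Call after each round trip that completes without loss. */
    public void onRttWithoutLoss() {
        cwnd += alpha;
    }

    /** Call when loss is detected; never shrink below one segment. */
    public void onLoss() {
        cwnd = Math.max(1.0, cwnd * beta);
    }

    public double cwnd() {
        return cwnd;
    }

    public static void main(String[] args) {
        // A gentler pair than Reno's: ramps up slower, backs off less violently.
        GaimdWindow w = new GaimdWindow(0.5, 0.7, 10.0);
        w.onRttWithoutLoss(); // 10.5 segments
        w.onLoss();           // 7.35 segments
        System.out.println(w.cwnd());
    }
}
```

If I remember the GAIMD analysis right, pairs satisfying roughly alpha = 4(1 - beta^2)/3 compete fairly with TCP (Reno's alpha=1, beta=1/2 sits exactly on that line), and choosing alpha below it is one way to get the "use only what's left over" behavior described above.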
And of course, Amicima might be doing something similar, but I'm not sure whether it is GAIMD-based or not. Again, their site is ambiguous here, and all the details seem to be mostly in the code - if there is any document that clearly describes their backoff and ramping up strategies, I must have missed it. Matthew, do you have such a description somewhere? Best wishes - S.Osokine. 2 Apr 2006. -----Original Message----- From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org]On Behalf Of David Barrett Sent: Sunday, April 02, 2006 2:35 PM To: 'Peer-to-peer development.' Subject: RE: [p2p-hackers] Hard question.... > -----Original Message----- > From: Daniel Stutzbach > Sent: Saturday, April 01, 2006 9:08 PM > To: 'Peer-to-peer development.' > Subject: Re: [p2p-hackers] Hard question.... > > Saturation is the goal. Wasted bandwidth is bad. Well, yes, but I'm asking "how do you measure how much bandwidth is currently being wasted by other applications, and then only use that amount"? -david _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From jrydberg at gnu.org Sun Apr 2 23:53:55 2006 From: jrydberg at gnu.org (Johan Rydberg) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <029c01c655e4$ac640ba0$02c7cac6@matthewdesk> (Matthew Kaufman's message of "Sat, 1 Apr 2006 15:33:20 -0800") References: <029c01c655e4$ac640ba0$02c7cac6@matthewdesk> Message-ID: <874q1bzg8c.fsf@night.trouble.net> "Matthew Kaufman" writes: > Bob Harris: >> Sounds cool, does it work on Linux? > > Yes. See our website at www.amicima.com for more... 
> > overview of the protocol: http://www.amicima.com/technology/mfp.html > protocol documentation: http://www.amicima.com/developers/documentation.html > reference implementation: http://www.amicima.com/developers/downloads.html MFP Operation documentation has been coming soon for a long time. Any chance we will ever see it? ~j From agthorr at cs.uoregon.edu Mon Apr 3 03:49:33 2006 From: agthorr at cs.uoregon.edu (Daniel Stutzbach) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <20060402213447.16FAC3FCFB@capsicum.zgp.org> References: <20060402050750.GA620@cs.uoregon.edu> <20060402213447.16FAC3FCFB@capsicum.zgp.org> Message-ID: <20060403034932.GA3496@cs.uoregon.edu> On Sun, Apr 02, 2006 at 02:34:43PM -0700, David Barrett wrote: > > Saturation is the goal. Wasted bandwidth is bad. > > Well, yes, but I'm asking "how do you measure how much bandwidth is > currently being wasted by other applications, and then only use that > amount"? That is not actually the right question, because the answer is typically "None or very, very little". TCP's goal is to achieve saturation and not leave unused capacity. If you try to measure the "available bandwidth" that TCP leaves behind, you're not going to have much to work with. Your app need to grab its "fair share", which means implementing a congestion control policy that plays nicely with TCP. -- Daniel Stutzbach Computer Science Ph.D Student http://www.barsoom.org/~agthorr University of Oregon From dbarrett at quinthar.com Mon Apr 3 05:58:05 2006 From: dbarrett at quinthar.com (David Barrett) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <20060403034932.GA3496@cs.uoregon.edu> Message-ID: <20060403055830.093563FD4C@capsicum.zgp.org> > -----Original Message----- > From: Daniel Stutzbach > Sent: Sunday, April 02, 2006 8:50 PM > To: 'Peer-to-peer development.' > Subject: Re: [p2p-hackers] Hard question.... 
> > On Sun, Apr 02, 2006 at 02:34:43PM -0700, David Barrett wrote: > > > Saturation is the goal. Wasted bandwidth is bad. > > > > Well, yes, but I'm asking "how do you measure how much bandwidth is > > currently being wasted by other applications, and then only use that > > amount"? > > That is not actually the right question, because the answer is > typically "None or very, very little". TCP's goal is to achieve > saturation and not leave unused capacity. If you try to measure the > "available bandwidth" that TCP leaves behind, you're not going to have > much to work with. Um... most connections aren't saturated 24x7. Like, I have a 6Mbps connection and sometimes I'm just using AIM. In this situation, I'd like to measure that 5.9Mbps is free. Any clever ideas on how to accomplish this? -david From matthew at matthew.at Mon Apr 3 06:09:07 2006 From: matthew at matthew.at (Matthew Kaufman) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <20060403055830.093563FD4C@capsicum.zgp.org> Message-ID: <034101c656e5$20c7eab0$02c7cac6@matthewdesk> David Barrett: > Um... most connections aren't saturated 24x7. Like, I have a > 6Mbps connection and sometimes I'm just using AIM. In this > situation, I'd like to measure that 5.9Mbps is free. 5.9Mbps is free to where? I'll bet that 5.9 Mbps isn't even free to the first IP hop you see, much of the time. What really matters is how much bandwidth is available between you *and the source or sink you are trying to communicate with* Matthew Kaufman matthew@matthew.at http://www.amicima.com From lln at it.uu.se Mon Apr 3 06:22:20 2006 From: lln at it.uu.se (=?ISO-8859-1?Q?Lars-=C5ke_Larzon?=) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <20060403055830.093563FD4C@capsicum.zgp.org> References: <20060403055830.093563FD4C@capsicum.zgp.org> Message-ID: <107A358C-822E-4977-9D56-1D922D736326@it.uu.se> > Um... most connections aren't saturated 24x7. 
Like, I have a 6Mbps > connection and sometimes I'm just using AIM. In this situation, > I'd like to > measure that 5.9Mbps is free. Any clever ideas on how to > accomplish this? > Well, if you are absolutely sure that your own connection always is the path bottleneck, you could simply keep track of your observed peak capacities, calculate an estimated capacity X and assume that X - your current load should be available. Keeping that estimate X on the lower end would give you a good enough approximation. This can be a good strategy for low-bandwidth access networks that then won't have to ramp up as slowly as TCP slowstart dictates. But then, if you have a 6Mbps connection to the Internet, how often is that the actual bottleneck? /Lars-Åke -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4030 bytes Desc: not available Url : http://zgp.org/pipermail/p2p-hackers/attachments/20060403/8edab43c/smime.bin From bob.harris.spamcontrol at gmail.com Mon Apr 3 06:30:11 2006 From: bob.harris.spamcontrol at gmail.com (Bob Harris) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <034101c656e5$20c7eab0$02c7cac6@matthewdesk> References: <20060403055830.093563FD4C@capsicum.zgp.org> <034101c656e5$20c7eab0$02c7cac6@matthewdesk> Message-ID: Two reminders: (1) you gotta keep in mind where the bottlenecks are, and (2) network usage is bursty. So (1): TCP flows will achieve "max-min fair share" of the bandwidth, i.e. they will saturate a link to the maximum capacity of the bottleneck between the source and the sink. Suppose you have:

              ------------ D ------ F
A ---- B ---- C -----<
              ------------ E

Suppose DF is the bottleneck, and AB has 6 Mb bandwidth. Flow A-F might consume somewhere between 0.5 DF and 1 DF on AB. The AE will have plenty of bandwidth left over. On (2): Suppose there is an AF flow, with the bottleneck link at AB@6Mb.
The flow will not be consuming bandwidth constantly - there will be bursts of activity. AIM may not have anything to send most of the time. When it does, it will likely slow-start to bottleneck capacity pretty quickly. Another flow, say AE, should get 5.9 Mb by bursting to 6 Mb when the link is free, and throttling to 3Mb when there is competition. So achieving 5.9 depends on the "over time" behavior of the protocol as opposed to how it shares the bandwidth "over space." So that's two different scenarios where there would be unused capacity on the link. I think I summarized David's scenario accurately. Cheers, Bob. On 4/3/06, Matthew Kaufman wrote: > > David Barrett: > > Um... most connections aren't saturated 24x7. Like, I have a > > 6Mbps connection and sometimes I'm just using AIM. In this > > situation, I'd like to measure that 5.9Mbps is free. > > 5.9Mbps is free to where? > > I'll bet that 5.9 Mbps isn't even free to the first IP hop you see, much > of > the time. > > What really matters is how much bandwidth is available between you *and > the > source or sink you are trying to communicate with* > > Matthew Kaufman > matthew@matthew.at > http://www.amicima.com > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20060403/ec3576b6/attachment.htm From dbarrett at quinthar.com Mon Apr 3 06:38:13 2006 From: dbarrett at quinthar.com (David Barrett) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Hard question... 
In-Reply-To: <107A358C-822E-4977-9D56-1D922D736326@it.uu.se> Message-ID: <20060403063816.779003FD1E@capsicum.zgp.org> > -----Original Message----- > From: Lars-?ke Larzon > Subject: Re: [p2p-hackers] Hard question.... > > > Um... most connections aren't saturated 24x7. Like, I have a 6Mbps > > connection and sometimes I'm just using AIM. In this situation, > > I'd like to > > measure that 5.9Mbps is free. Any clever ideas on how to > > accomplish this? > > > > Well, if you are absolutely sure that your own connection always is > the path bottleneck, you could simply keep track of your observed > peak capacities, calculate an estimated capacity X and assume that X- > your current load should be available. Keeping that estimate X on the > lower end would give you a good enough approximation. This can be a > good strategy for low-bandwidth access networks that then won't have > to ramp up as slowly as TCP slowstart dictates. Excellent, this is precisely the sort of answer I'm looking to get. The challenge with this strategy (as I see it) is resolving the "shared LAN" problem: how does client A measure bottleneck utilization if client B is also behind the same bottleneck? Recall, the point is for a client to say "I want to use X% of excess bottleneck capacity". This means knowing the "total capacity" and "current utilization". A client could monitor its peak usage and infer that the total capacity is at least this. And it can measure its current usage and infer the current utilization is at least this. But if there's another client behind the same bottleneck, it's only seeing a piece of the picture. Now, as Serguei suggested, perhaps the whole attempt is moot and it's just better to have a very conservative TCP-like stream, and trust that it'll only grow to use excess capacity. And as Bob mentioned, bursty traffic complicates this analysis. And clearly, as Lars said this only matters if you're absolutely sure your own connection is the bottleneck. 
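The strategy Lars describes is short enough to write out (every name below is invented for illustration): remember the best throughput you have ever observed, stay a safety margin under it, and subtract your own current load.

```java
// Sketch of the peak-tracking estimate described above; all names are
// made up for illustration. Note that a second client behind the same
// bottleneck never shows up in observe(), which is exactly the
// shared-LAN problem: this estimate is an upper bound at best.
public class PeakCapacityEstimator {
    private static final double SAFETY = 0.9; // keep the estimate on the lower end
    private double peakBps = 0.0;             // best throughput ever observed

    /** Feed periodic throughput measurements, in bits per second. */
    public void observe(double throughputBps) {
        peakBps = Math.max(peakBps, throughputBps);
    }

    /** Estimated spare capacity: (conservative peak) minus our current load. */
    public double availableBps(double currentLoadBps) {
        return Math.max(0.0, SAFETY * peakBps - currentLoadBps);
    }

    public static void main(String[] args) {
        PeakCapacityEstimator est = new PeakCapacityEstimator();
        est.observe(6.0e6); // once saw roughly 6 Mbit/s
        est.observe(4.0e6);
        System.out.println(est.availableBps(0.1e6)); // AIM-sized load leaves ~5.3 Mbit/s
    }
}
```

And of course the whole thing only holds under Lars's precondition: your own access link must be the path bottleneck, and only this one client may be behind it.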
But I'm curious if there are any other clever techniques out there (like Michael's suggestion of CapProbe) that might be used to identify bottleneck capacity and current utilization, irrespective of who you might contact on the other side of the bottleneck, and irrespective of how many clients are on this side of the bottleneck. -david From mgp at ucla.edu Mon Apr 3 08:36:18 2006 From: mgp at ucla.edu (Michael Parker) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Hard question... In-Reply-To: <20060403063816.779003FD1E@capsicum.zgp.org> References: <20060403063816.779003FD1E@capsicum.zgp.org> Message-ID: <20060403013618.znos6qdigokc4kwo@mail.ucla.edu> From what I understand (which isn't much), CapProbe allows estimating the total end-to-end bandwidth of a link, which is necessarily the narrow link when there is no cross traffic inducing queueing delays. What it sounds like you want is something that measures available bandwidth instead, in which case I think you should look at something like Spruce: http://project-iris.net/irisbib/papers/spruce:imc03/paper.pdf If I recall correctly, however, Spruce can take a while to run, while CapProbe is fairly quick (on a LAN, only a hundredth of a second... on a WAN, obviously more, but still reasonable). The two are, I think, complementary tools. If you could instead rephrase "I want to use X% of excess bottleneck capacity" as "I want to use X bps of the link without backing down anyone else's stream", then I imagine you could simply ramp up a TCP-like stream and monitor packet loss to infer whether you were backing down someone else's stream.
>> > >> >> Well, if you are absolutely sure that your own connection always is >> the path bottleneck, you could simply keep track of your observed >> peak capacities, calculate an estimated capacity X and assume that X- >> your current load should be available. Keeping that estimate X on the >> lower end would give you a good enough approximation. This can be a >> good strategy for low-bandwidth access networks that then won't have >> to ramp up as slowly as TCP slowstart dictates. > > Excellent, this is precisely the sort of answer I'm looking to get. > > The challenge with this strategy (as I see it) is resolving the "shared LAN" > problem: how does client A measure bottleneck utilization if client B is > also behind the same bottleneck? > > Recall, the point is for a client to say "I want to use X% of excess > bottleneck capacity". This means knowing the "total capacity" and "current > utilization". > > A client could monitor its peak usage and infer that the total capacity is > at least this. And it can measure its current usage and infer the current > utilization is at least this. But if there's another client behind the same > bottleneck, it's only seeing a piece of the picture. > > Now, as Serguei suggested, perhaps the whole attempt is moot and it's just > better to have a very conservative TCP-like stream, and trust that it'll > only grow to use excess capacity. And as Bob mentioned, bursty traffic > complicates this analysis. And clearly, as Lars said this only matters if > you're absolutely sure your own connection is the bottleneck. > > But I'm curious if there are any other clever techniques out there (like > Michael's suggestion of CapProbe) that might be used to identify bottleneck > capacity and current utilization, irrespective of who you might contact on > the other side of the bottleneck, and irrespective of how many clients are > on this side of the bottleneck. 
> > -david > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > From dcarboni at gmail.com Mon Apr 3 17:03:19 2006 From: dcarboni at gmail.com (Davide "dada" Carboni) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Socks and Java Message-ID: <71b79fa90604031003x6bcee728yd7e987648fd76f46@mail.gmail.com> Hi, I'm trying to play with SOCKS in Java and I found it very easy to force connections to go via a SOCKS server. For instance, the CONNECT operation is just a matter of a few lines of code:

import java.net.*; // InetSocketAddress, Proxy, URL, URLConnection

// point at the SOCKS server
SocketAddress addr = new InetSocketAddress("socks.mydomain.com", 1080);
Proxy proxy = new Proxy(Proxy.Type.SOCKS, addr);
URL url = new URL("ftp://ftp.gnu.org/README");
// this connection is tunneled through the SOCKS proxy
URLConnection conn = url.openConnection(proxy);

What I cannot understand is how to use SOCKS also for binding an application behind a firewall to a given port. In other words, how do I use the BIND operation in SOCKS? I know that with jsocks.sourceforge.net it is possible to instantiate a special SocksServerSocket to socksify server sockets, but what I'm asking here is whether it is also possible with the standard Java API. TIA Bye. -- Prima il 30% poi Barbolomeo. -- http://people.crs4.it/dcarboni From agthorr at cs.uoregon.edu Mon Apr 3 18:13:36 2006 From: agthorr at cs.uoregon.edu (Daniel Stutzbach) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <20060403055830.093563FD4C@capsicum.zgp.org> References: <20060403034932.GA3496@cs.uoregon.edu> <20060403055830.093563FD4C@capsicum.zgp.org> Message-ID: <20060403181335.GA2500@cs.uoregon.edu> On Sun, Apr 02, 2006 at 10:58:05PM -0700, David Barrett wrote: > > From: Daniel Stutzbach > > That is not actually the right question, because the answer is > > typically "None or very, very little".
TCP's goal is to achieve > > saturation and not leave unused capacity. If you try to measure the > > "available bandwidth" that TCP leaves behind, you're not going to have > > much to work with. > > Um... most connections aren't saturated 24x7. Well, sure, but anytime there's another bulk transfer going on in the background, "available bandwidth" measurements are not very useful. That's all I'm saying ;) > Like, I have a 6Mbps connection and sometimes I'm just using AIM. > In this situation, I'd like to measure that 5.9Mbps is free. Any > clever ideas on how to accomplish this? There are a bunch of heuristic techniques to estimate the available bandwidth, but I'm not an expert on them. Search for "available bandwidth estimation" and "packet pair". -- Daniel Stutzbach Computer Science Ph.D Student http://www.barsoom.org/~agthorr University of Oregon From dbarrett at quinthar.com Mon Apr 3 18:17:53 2006 From: dbarrett at quinthar.com (David Barrett) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Hard question.... In-Reply-To: <20060403181335.GA2500@cs.uoregon.edu> Message-ID: <20060403181759.6769E3FD45@capsicum.zgp.org> > -----Original Message----- > From: Daniel Stutzbach > Subject: Re: [p2p-hackers] Hard question.... > > There are a bunch of heuristic techniques to estimate the available > bandwidth, but I'm not an expert on them. Search for "available > bandwidth estimation" and "packet pair". Ah, good, thank you for the pointers. -david From matthew at matthew.at Mon Apr 3 19:06:57 2006 From: matthew at matthew.at (Matthew Kaufman) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Hard question... In-Reply-To: <20060403063816.779003FD1E@capsicum.zgp.org> Message-ID: <036601c65751$ca7d13c0$02c7cac6@matthewdesk> David Barrett: > The challenge with this strategy (as I see it) is resolving > the "shared LAN" > problem: how does client A measure bottleneck utilization if > client B is also behind the same bottleneck?
The related challenge (which I alluded to in my last email) is that you often have NO IDEA where the actual bottleneck is, much less what is behind the bottleneck with you. In the case of a 6 Mbps DSL connection, a likely bottleneck is the first ATM hop upstream of the DSLAM, which includes all the other people who have "6 Mbps connections" attached to your DSLAM or cluster of DSLAMs at a given CO, and the other likely bottleneck is the final ATM link into the access router... what you see as the first IP hop when you traceroute, which includes hundreds to tens of thousands of customers. Neither one of those is the "6 Mbps" connection to your house, which is much less likely to be the actual bottleneck encountered by incoming IP traffic. For that matter, in most P2P cases with asymmetric connectivity, the real bottleneck is the sum of the upstream bandwidth available to the senders. At the IP layer, the best you could do is try to measure the instantaneously available bandwidth between you and the first IP hop (likely to be several ATM hops and maybe some MPLS too, these days) outside your LAN, but that depends on being able to get the access router (which is often heavily loaded) to respond in a way that is useful (remembering that ICMP responses will take a lower-priority path than actual traffic would inside that router), and it would still require that you briefly saturate the connection in order to see what happens. And then even that measurement is already one RTT old by the time you have the data. Remember: just because something is easy to measure (bandwidth utilization on your downlink in the single-host-on-a-DSL-line case, for instance) doesn't mean that the measurement has any value. Perhaps this is a case where a statement of the actual problem that one is trying to solve would be helpful.
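To make the "packet pair" idea mentioned earlier in the thread concrete: two packets sent back-to-back leave the bottleneck link spaced by (packet size / capacity), so the capacity can be estimated from the observed inter-arrival gap. The sketch below shows only the arithmetic; the class and method names are invented for illustration, and a real probe would also have to send the packets, timestamp arrivals, and deal with all the measurement caveats discussed above:

```java
import java.util.List;

// Packet-pair sketch: capacity ~= packet size / observed inter-arrival gap.
// Names and structure are illustrative, not any particular tool's API.
public class PacketPairSketch {
    // packet size in bytes, gap in seconds -> capacity estimate in bits/sec
    public static double capacityBps(int packetSizeBytes, double gapSeconds) {
        if (gapSeconds <= 0) {
            throw new IllegalArgumentException("gap must be positive");
        }
        return (packetSizeBytes * 8.0) / gapSeconds;
    }

    // Lars's strategy: available bandwidth ~= observed peak capacity minus
    // current load, floored at zero and best kept on the conservative side.
    public static double availableBps(double peakBps, double currentLoadBps) {
        return Math.max(0.0, peakBps - currentLoadBps);
    }

    // Real tools filter many samples (e.g. take the minimum gap) to reduce
    // the cross-traffic queueing noise a single pair is subject to.
    public static double filteredCapacityBps(int packetSizeBytes, List<Double> gaps) {
        double best = Double.MAX_VALUE;
        for (double g : gaps) {
            best = Math.min(best, g);
        }
        return capacityBps(packetSizeBytes, best);
    }
}
```

For example, a 1500-byte pair arriving 2 ms apart suggests roughly a 6 Mbps bottleneck; given the ATM and MPLS hops below IP discussed above, no single sample should be trusted.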
Matthew Kaufman matthew@matthew.at http://www.amicima.com From dcarboni at gmail.com Tue Apr 4 17:52:32 2006 From: dcarboni at gmail.com (Davide "dada" Carboni) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] ANN: tutorial on NAT and P2P Message-ID: <71b79fa90604041052o427cfcd0q1dcd51e335616168@mail.gmail.com> Hi, I've prepared a slide-based course on NAT traversal. You can find it at http://p2p-mentor.berlios.de/ Any comments are welcome. Bye. -- Prima il 30% poi Barbolomeo. -- http://people.crs4.it/dcarboni From ardagna at dti.unimi.it Mon Apr 10 08:54:13 2006 From: ardagna at dti.unimi.it (Claudio Agostino Ardagna) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] CFP: 2ND INTERNATIONAL WORKSHOP ON SECURITY AND TRUST MANAGEMENT (STM'06) Message-ID: <028f01c65c7c$5737e9a0$1e00000a@Berlino> [Apologies if you receive multiple copies of this message] CALL FOR PAPERS *********************************************************************************************** 2ND INTERNATIONAL WORKSHOP ON SECURITY AND TRUST MANAGEMENT (STM'06) Hamburg, Germany - September 20, 2006 (in conjunction with ESORICS 2006) http://www.hec.unil.ch/STM06/ *********************************************************************************************** STM (Security and Trust Management) is a recently established working group of ERCIM (the European Research Consortium for Informatics and Mathematics). STM 2006 is the second workshop in this series, and has the following aims: - to investigate the foundations and applications of security and trust in ICT - to study the deep interplay between trust management and common security issues such as confidentiality, integrity and availability - to identify and promote new areas of research connected with security management, e.g. dynamic and mobile coalition management (e.g., P2P, MANETs, Web/GRID services) - to identify and promote new areas of research connected with trust management, e.g.
reputation, recommendation, collaboration etc - to provide a platform for presenting and discussing emerging ideas and trends Topics of interest include but are not limited to: - semantics and computational models for security and trust - security and trust management architectures, mechanisms and policies - networked systems security - privacy and anonymity - identity management - ICT for securing digital as well as physical assets - cryptography The primary focus is on high-quality original unpublished research, case studies, and implementation experiences. We encourage submissions discussing the application and deployment of security technologies in practice. Paper submissions. Submitted papers must not substantially overlap papers that have been published or that are simultaneously submitted to a journal or a conference with proceedings. Papers must have authors' affiliation and contact information on the first page. Papers are limited to 12 pages in ENTCS style format (using the generic template). Excessively long papers will be returned without review. Accepted papers will be published in a post-workshop ENTCS volume. To submit a paper, please visit http://www.easychair.org/STM06/ . For more information contact stm06@dti.unimi.it Papers must be received by the deadline of May 15, 2006. IMPORTANT DATES Paper submission due: May 15, 2006 Acceptance notification: June 26, 2006 Final Papers due: August 20, 2006 GENERAL CHAIRS Solange Ghernaouti Hélie Univ. Lausanne, CH email: sgh@unil.ch Ulrich Ultes-Nitsche Univ. Fribourg, CH email: uun@unifr.ch PROGRAM CO-CHAIRS Sandro Etalle University of Twente, NL email: sandro.etalle@utwente.nl Pierangela Samarati Universita' di Milano - Italy email: samarati@dti.unimi.it PUBLICATION CHAIR Sara Foresti Universita' di Milano - Italy email: foresti@dti.unimi.it PUBLICITY CHAIR Claudio A.
Ardagna Universita' di Milano - Italy email: ardagna@dti.unimi.it PROGRAM COMMITTEE: Vijay Atluri, Rutgers Univ., USA Joris Claessens, Microsoft EMIC, DE Sabrina De Capitani di Vimercati, Univ. Milano, IT Theo Dimitrakos, British Telecom, UK María Isabel González Vasco, Univ. Rey Juan Carlos, SP Stefanos Gritzalis, Univ. of Aegean, GR Peter Herrmann, NTNU, NO Valerie Issarny, INRIA, FR Guenter Karjoth, IBM Research, CH Antonio Lioy, Politecnico di Torino, IT Javier Lopez, Univ. Malaga, SP Fabio Martinelli, IIT-CNR, IT Sjouke Mauw, Technical Univ. Eindhoven, NL Daniel Olmedilla, L3S, GR Babak Sadighi, SICS, SE Luca Vigano', ETH Zurich, CH Will Winsborough, Univ. Texas at S. Antonio, USA Ting Yu, North Carolina State Univ., USA Alec Yasinsac, Florida State Univ., USA This call for papers and additional information about the conference can be found at http://www.hec.unil.ch/STM06 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20060410/ad0b09d8/attachment.html From czigola at elte.hu Tue Apr 11 20:24:56 2006 From: czigola at elte.hu (Czigola Gabor) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Looking for papers about general resource sharing over p2p In-Reply-To: <71b79fa90604041052o427cfcd0q1dcd51e335616168@mail.gmail.com> References: <71b79fa90604041052o427cfcd0q1dcd51e335616168@mail.gmail.com> Message-ID: Hello! I've been watching the list for weeks, but this is my first letter. I'm involved in a small project, and we want to specify and implement software for sharing resources (disk space, later CPU time) over a p2p network. My preconceptions are: - It should be scalable. It's not designed to (but should be able to) work as a world-wide file/music/video sharing network. It should work as well in a home LAN as in a WAN. - Each network should be individually extensible on the software side (e.g. for authentication, or for mounting the overall shared disk space, etc.) - Security, of course!
- It's not going to be ONE network. But you should be able to create your own. My first guess was to take a public/private key pair, let one key be the ID of the network (like an IP address), and use the other as the ID of the network master. The network master should be able to scale his network, set security behavior, quotas, or whatever. How can it be used: - The terabytes of unused disk space in office computers could be pooled, and those computers could work at the same time as an NFS server, without affecting the normal behavior of the involved computers (much). - A file server without a server - A framework for using unused resources So, my question now is not "How to do it?" - figuring that out is my (our) job (though it is not impossible that I will ask more questions later) - but: I'm looking for papers, documentation etc. on other p2p networks with similar goals. I'm sure that I'm not the first one with this idea. Thanks! -- Czigola, Gabor (ps: I'm not a native English speaker; if something looks funny in the text, it's not a typo, it's a deficit of my knowledge.) From lists at user-land.org Tue Apr 11 20:58:53 2006 From: lists at user-land.org (Philippe Landau) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Bounty for Open Source Trust System Message-ID: <443C188D.7060805@user-land.org> We would like to offer 3000 Euro for an Open Source P2P Trust Community System. It should allow participants to easily build communities of peers they trust, and share comments about peers and other entities. Comments should be categorised into the trust levels needed to view them. Data and communication will be encrypted and distributed. Rapid implementation is key; later improvements will be paid for additionally. The project is non-commercial; the aim is to enable family functions on a global level. Details are open for discussion, input is welcome.
Kind regards Philippe From bob.harris.spamcontrol at gmail.com Tue Apr 11 21:24:31 2006 From: bob.harris.spamcontrol at gmail.com (Bob Harris) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Bounty for Open Source Trust System In-Reply-To: <443C188D.7060805@user-land.org> References: <443C188D.7060805@user-land.org> Message-ID: Give the bounty to the Credence folks, they already did this. And from what I understand it's backwards compatible with Gnutella. A trust overlay on top of a p2p overlay. Bob On 4/11/06, Philippe Landau wrote: > > We would like to offer 3000 Euro for an > Open Source P2P Trust Community System. > > It should allow participants to easily build communities > of peers they trust, and share comments about peers > and other entities. Comments should be categorised > into trust levels needed to view them. > Data and communication will be encrypted and distributed. > Rapid implementation is key, later improvements > will be paid for additionally. > > The project is non-commercial, > the aim to enable family functions on a global level. > Details are open for discussion, input is welcome. > > Kind regards Philippe > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://zgp.org/pipermail/p2p-hackers/attachments/20060411/b5c2d15d/attachment.htm From coderman at gmail.com Tue Apr 11 21:31:54 2006 From: coderman at gmail.com (coderman) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Bounty for Open Source Trust System In-Reply-To: References: <443C188D.7060805@user-land.org> Message-ID: <4ef5fec60604111431m4cb99f8ancb69330f7f6a076a@mail.gmail.com> On 4/11/06, Bob Harris wrote: > Give the bounty to the Credence folks, they already did this. And from what > I understand it's backwards compatible with Gnutella. A trust overlay > on top of a p2p overlay. credence requires explicit user feedback and provides a very one dimensional view of reputation. the drawbacks to this approach (while still much better than nothing) are well documented. i'm much more fond of implicit feedback based on user interaction with the resources they obtain (see feedbackfs in the archives) which gives a richer view of reputation between peers. (for example, grouping you with peers who provide not only honest meta data, but also relevant resources based on your preferences / history) From egs+p2phackers at cs.cornell.edu Tue Apr 11 21:00:32 2006 From: egs+p2phackers at cs.cornell.edu (Emin Gun Sirer) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Bounty for Open Source Trust System In-Reply-To: <4ef5fec60604111431m4cb99f8ancb69330f7f6a076a@mail.gmail.com> References: <443C188D.7060805@user-land.org> <4ef5fec60604111431m4cb99f8ancb69330f7f6a076a@mail.gmail.com> Message-ID: <1144789232.19426.13.camel@dhcp98-175.cs.cornell.edu> Let me interject two factoids to make sure no myths are propagated: - The current Credence implementation uses explicit feedback. There is no reason why you couldn't use implicit indications of trust, if your application had such indicators. It turns out that there are no such good implicit indicators in p2p filesharing - sharing a file is not a good indicator that the user would vouch for that file. 
Our paper has the details. - Credence computes a "very multidimensional" trust metric for each participant. Unlike Google's global page rank, Credence conceptually computes a separate trust metric for each peer from the point of view of every other peer. So X might rank high and be trustworthy for Y, but not for Z. Best, Gun (& Kevin). On Tue, 2006-04-11 at 14:31 -0700, coderman wrote: > On 4/11/06, Bob Harris wrote: > > Give the bounty to the Credence folks, they already did this. And from what > > I understand it's backwards compatible with Gnutella. A trust overlay > > on top of a p2p overlay. > > credence requires explicit user feedback and provides a very one > dimensional view of reputation. the drawbacks to this approach (while > still much better than nothing) are well documented. > > i'm much more fond of implicit feedback based on user interaction with > the resources they obtain (see feedbackfs in the archives) which gives > a richer view of reputation between peers. (for example, grouping you > with peers who provide not only honest meta data, but also relevant > resources based on your preferences / history) > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > From coderman at gmail.com Wed Apr 12 00:04:20 2006 From: coderman at gmail.com (coderman) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Bounty for Open Source Trust System In-Reply-To: <1144789232.19426.13.camel@dhcp98-175.cs.cornell.edu> References: <443C188D.7060805@user-land.org> <4ef5fec60604111431m4cb99f8ancb69330f7f6a076a@mail.gmail.com> <1144789232.19426.13.camel@dhcp98-175.cs.cornell.edu> Message-ID: <4ef5fec60604111704o38037fa2o78fa0a3dc2cc5989@mail.gmail.com> On 4/11/06, Emin Gun Sirer wrote: > Let me interject two factoids 
to make sure no myths are propagated: > > - The current Credence implementation uses explicit feedback. There is > no reason why you couldn't use implicit indications of trust, if > your application had such indicators. thank you for pointing this out. do you know offhand how easy/difficult it would be to extend the feedback mechanism to support arbitrary qualifiers? > It turns out that there are > no such good implicit indicators in p2p filesharing - sharing a file > is not a good indicator that the user would vouch for that file. Our > paper has the details. indeed. this is a hard problem and the good solutions are very invasive and carry significant privacy concerns (for example, feedbackfs monitors what files you open, how far you read into them, if you copied them, deleted them, read them end to end many times, etc. these actions are used to build implicit feedback (positive or negative) associated with distinct file based resources. the privacy concerns of such "file system profiling" should not be understated and is why i've been detoured into strong security for so long) > - Credence computes a "very multidimensional" trust metric for each > participant. Unlike Google's global page rank, Credence conceptually > computes a separate trust metric for each peer from the point of > view of every other peer. So X might rank high and be trustworthy > for Y, but not for Z. i should have clarified; what i meant by one dimensional is that the explicit feedback is used to indicate whether the meta data / names associated are accurate or not. while this is computed individually for each peer you communicate with (an excellent decision, btw) it is still a single aspect ("trustworthy meta data: yes / no || positive / negative") of peer reputation. i mean to highlight this limited aspect only because what most people want is "relevant" resources, and not necessarily "accurate meta data", although the two often intersect. thanks again for the clarification. 
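The implicit-feedback approach described above (watching opens, read depth, copies, deletions) amounts to folding observed interactions into a signed score. The following toy rule only illustrates the shape of such a mapping; the event names and weights are invented here, not feedbackfs's actual model:

```java
import java.util.List;

// Toy implicit-feedback scorer: fold observed file interactions into a
// score in [-1, 1]. Positive events suggest the user values the file;
// quick deletion suggests the opposite. Weights are arbitrary examples.
public class ImplicitScoreSketch {
    public enum Event { OPENED, READ_FULLY, COPIED, REPLAYED, DELETED_QUICKLY }

    public static double score(List<Event> events) {
        double s = 0.0;
        for (Event e : events) {
            switch (e) {
                case OPENED:          s += 0.1; break;
                case READ_FULLY:      s += 0.4; break;
                case COPIED:          s += 0.3; break;
                case REPLAYED:        s += 0.3; break;
                case DELETED_QUICKLY: s -= 0.8; break;
            }
        }
        return Math.max(-1.0, Math.min(1.0, s)); // clamp to [-1, 1]
    }
}
```

Even this toy makes the privacy point plain: the input is a trace of everything the user did with the file.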
From kwalsh at cs.cornell.edu Wed Apr 12 04:15:24 2006 From: kwalsh at cs.cornell.edu (Kevin Walsh) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Bounty for Open Source Trust System In-Reply-To: <4ef5fec60604111431m4cb99f8ancb69330f7f6a076a@mail.gmail.com> References: <443C188D.7060805@user-land.org> <4ef5fec60604111431m4cb99f8ancb69330f7f6a076a@mail.gmail.com> Message-ID: I certainly agree that there are drawbacks to Credence's approach, as any. And our papers try to discuss some of them. But I don't think that is what you meant by "well documented". If you could send me (or the listserv) some pointers or references, I'd be happy to see them. We are always interested in potential improvements, weaknesses, or other approaches to similar problems of trust in p2p networks. -Kevin On Tue, 11 Apr 2006, coderman wrote: > credence requires explicit user feedback and provides a very one > dimensional view of reputation. the drawbacks to this approach (while > still much better than nothing) are well documented. From kwalsh at cs.cornell.edu Wed Apr 12 04:19:10 2006 From: kwalsh at cs.cornell.edu (Kevin Walsh) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Bounty for Open Source Trust System In-Reply-To: <4ef5fec60604111704o38037fa2o78fa0a3dc2cc5989@mail.gmail.com> References: <443C188D.7060805@user-land.org> <4ef5fec60604111431m4cb99f8ancb69330f7f6a076a@mail.gmail.com> <1144789232.19426.13.camel@dhcp98-175.cs.cornell.edu> <4ef5fec60604111704o38037fa2o78fa0a3dc2cc5989@mail.gmail.com> Message-ID: On Tue, 11 Apr 2006, coderman wrote: > On 4/11/06, Emin Gun Sirer wrote: > > Let me interject two factoids to make sure no myths are propagated: > > > > - The current Credence implementation uses explicit feedback. There is > > no reason why you couldn't use implicit indications of trust, if > > your application had such indicators. > > thank you for pointing this out. 
do you know offhand how > easy/difficult it would be to extend the feedback mechanism to support > arbitrary qualifiers? One of the earliest wish-list items was to allow more specific voting. Maybe I am out of touch, but I was pretty surprised at how many people wanted to be able to say things like "The file name is bogus, but the bitrate, artist, and file type are all correct." Our most recent release has a pretty general framework already in place to handle arbitrary statements of this sort. The user interface can now generate statements about file types, bitrates, and file names, and I don't see any reason not to add other things too. The main issue is trying to keep the GUI simple, and being careful about the schema. Details are in our NSDI paper due out in a few weeks. > > It turns out that there are no such good implicit > > indicators in p2p filesharing - sharing a file > > is not a good indicator that the user would vouch for that file. Our > > paper has the details. > > indeed. this is a hard problem and the good solutions are very > invasive and carry significant privacy concerns (for example, > feedbackfs monitors what files you open, how far you read into them, > if you copied them, deleted them, read them end to end many times, > etc. these actions are used to build implicit feedback (positive or > negative) associated with distinct file based resources. the privacy > concerns of such "file system profiling" should not be understated and > is why i've been detoured into strong security for so long) Exactly right. I had thought of trying to extract info from Windows Media Player (which lets users rate items from 1 star to 5 stars, and also has implicit automatic ratings based on usage), or adding a similar feature to LimeWire's player. I didn't like the privacy implications, and I expect others wouldn't either. But that is not to say that implicit metrics are always bad, especially in other domains. They just seem to be for file sharing.
>> - Credence computes a "very multidimensional" trust metric for each [snip] > i mean to highlight this limited aspect only because what most people > want is "relevant" resources, and not necessarily "accurate meta > data", although the two often intersect. I guess the problem would be to define "relevant". In the existing networks, queries tend to be short, vague, and have no context. Since it is not at all obvious what the user is looking for in the first place, it would be kind of hard to decide what is "relevant" in the file sharing world. I'm not too sure what Philippe's bounty is looking for, though. He doesn't mention files, or sharing, but does mention "family functions on a global level". Can someone clue me in to what that is? -Kevin From coderman at gmail.com Wed Apr 12 17:43:16 2006 From: coderman at gmail.com (coderman) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Bounty for Open Source Trust System In-Reply-To: References: <443C188D.7060805@user-land.org> <4ef5fec60604111431m4cb99f8ancb69330f7f6a076a@mail.gmail.com> Message-ID: <4ef5fec60604121043t8df3420m966b4cd9707f4b60@mail.gmail.com> On 4/11/06, Kevin Walsh wrote: > ... If you could send me (or the listserv) > some pointers or references, I'd be happy to see them. We are always > interested in potential improvements, weaknesses, or other approaches to > similar problems of trust in p2p networks. sure, most of these are related to web services or agent systems but the concepts are generally applicable. 
(this is a link dump from some old bookmarks; i'd appreciate any new research / papers / projects that might be useful) Implicit Feedback for Recommender Systems (1998) http://citeseer.ist.psu.edu/oard98implicit.html Dynamic Information Filtering (2001) http://citeseer.ist.psu.edu/baudisch01dynamic.html Implicit Rating and Filtering (1998) http://citeseer.ist.psu.edu/nichols98implicit.html Implicit Interest Indicators (2001) http://citeseer.ist.psu.edu/claypool01implicit.html Emergent Properties of Referral Systems (2003) http://citeseer.ist.psu.edu/yolum03emergent.html User Interactions with Everyday Applications as Context for Just-in-time Information Access (2000) http://citeseer.ist.psu.edu/budzik00user.html From coderman at gmail.com Wed Apr 12 18:00:24 2006 From: coderman at gmail.com (coderman) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Bounty for Open Source Trust System In-Reply-To: References: <443C188D.7060805@user-land.org> <4ef5fec60604111431m4cb99f8ancb69330f7f6a076a@mail.gmail.com> <1144789232.19426.13.camel@dhcp98-175.cs.cornell.edu> <4ef5fec60604111704o38037fa2o78fa0a3dc2cc5989@mail.gmail.com> Message-ID: <4ef5fec60604121100w53aed9e4xe348db94b7b5128e@mail.gmail.com> On 4/11/06, Kevin Walsh wrote: > ... > One of the earliest wish-list items was to allow more specific voting. > Maybe I am out of touch, but I was pretty surprised at how many people > wanted to be able to say things like "The file name is bogus, but the > bitrate, artist, and file type are all correct." Our most recent release > has a pretty general framework already in place to handle aribtrary > statements of this sort. that seems reasonable; my wife frequently comes across music that has the wrong artist or title but sounds good and is worth keeping. in such a situation it would be nice to vote/rate a subset of the meta data individually, so the correct parts can be propagated while the incorrect bits are deprecated and replaced with accurate details. 
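A per-attribute vote of that sort could be modeled as a small map from field name to verdict, so the accurate fields can propagate while the bogus ones are deprecated. This is only an illustrative sketch; the field names and the tallying rule are invented here, not Credence's actual statement schema:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a per-attribute vote: each metadata field gets its own
// verdict instead of one yes/no on the whole file. Field names and the
// net-score rule are invented for illustration.
public class AttributeVoteSketch {
    // field name -> +1 (accurate) or -1 (bogus)
    private final Map<String, Integer> verdicts = new HashMap<>();

    public AttributeVoteSketch rate(String field, boolean accurate) {
        verdicts.put(field, accurate ? 1 : -1);
        return this;
    }

    // Verdict for one field; 0 means the voter said nothing about it.
    public int verdictFor(String field) {
        return verdicts.getOrDefault(field, 0);
    }

    // Net score across rated fields, e.g. to rank the file overall even
    // when individual fields disagree.
    public int net() {
        int sum = 0;
        for (int v : verdicts.values()) sum += v;
        return sum;
    }
}
```

The wrong-artist-but-worth-keeping case would then be something like rate("artist", false).rate("filename", true): a net-neutral file whose artist tag is flagged for replacement while the rest survives.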
> The user interface can now generate statements about file types, bitrates, > and file names, and I don't see any reason not to add other things too. > The main issue is trying to keep the GUI simple, and being careful about > the schema. Details are in our nsdi paper due out in a few weeks. excellent; can you post an update here when it is available? the user interface issues are usually the crux of the problem, although a good interface can make explicit feedback useful and commonly used. (you understand my fondness of implicit metrics, good UI is not my forte :) > I guess the problem would be to define "relevant". In the existing > networks, queries tend to be short, vague, and have no context. Since it > is not at all obvious what the user is looking for in the first place, it > would be kind of hard to decide what is "relevant" in the file sharing > world. true. i should clarify that feedbackfs tracks user ID and program path (/bin/ls, /usr/bin/firefox, etc) so the relevance of a resource can vary greatly depending on the application and user. i tend to think of recommendation and relevance in a richer context where you have sequences of resources with detailed implicit metrics attached in distinct domains of usage (music, web documents, video, etc). this is a long term goal and different than simple keyword based searching where accurate meta data alone can provide relevant results in most cases. > I'm not too sure what Philippe's bounty is looking for, though. He doesn't > mention files, or sharing, but does mention "family functions on a global > level". Can someone clue me in to what that is? i'd like some more explanation as well. trust and reputation covers a lot of techniques and concepts. 
:) From kwalsh at cs.cornell.edu Wed Apr 12 19:08:07 2006 From: kwalsh at cs.cornell.edu (Kevin Walsh) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Bounty for Open Source Trust System In-Reply-To: <4ef5fec60604121100w53aed9e4xe348db94b7b5128e@mail.gmail.com> References: <443C188D.7060805@user-land.org> <4ef5fec60604111431m4cb99f8ancb69330f7f6a076a@mail.gmail.com> <1144789232.19426.13.camel@dhcp98-175.cs.cornell.edu> <4ef5fec60604111704o38037fa2o78fa0a3dc2cc5989@mail.gmail.com> <4ef5fec60604121100w53aed9e4xe348db94b7b5128e@mail.gmail.com> Message-ID: On Wed, 12 Apr 2006, coderman wrote: >> The user interface can now generate statements about file types, bitrates, >> and file names, and I don't see any reason not to add other things too. >> The main issue is trying to keep the GUI simple, and being careful about >> the schema. Details are in our nsdi paper due out in a few weeks. > > excellent; can you post an update here when it is available? Consider it done. The update has been available for a few weeks at http://www.cs.cornell.edu/People/egs/credence, and recently made available at http://sourceforge.net/projects/credence as well. From kwalsh at cs.cornell.edu Wed Apr 12 19:11:22 2006 From: kwalsh at cs.cornell.edu (Kevin Walsh) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Bounty for Open Source Trust System In-Reply-To: <4ef5fec60604121043t8df3420m966b4cd9707f4b60@mail.gmail.com> References: <443C188D.7060805@user-land.org> <4ef5fec60604111431m4cb99f8ancb69330f7f6a076a@mail.gmail.com> <4ef5fec60604121043t8df3420m966b4cd9707f4b60@mail.gmail.com> Message-ID: Thanks for the list coderman. I will take a look at these in some more detail. Just to clarify one point, Credence strives hard to NOT be a recommender system or referral system. I know the problems such systems have, and we definitely put some thought into making credence not fall into the same traps. 
But I will be interested to see if some of those papers have something relevant to Credence's model. -Kevin On Wed, 12 Apr 2006, coderman wrote: > On 4/11/06, Kevin Walsh wrote: >> ... If you could send me (or the listserv) >> some pointers or references, I'd be happy to see them. We are always >> interested in potential improvements, weaknesses, or other approaches to >> similar problems of trust in p2p networks. > > sure, most of these are related to web services or agent systems but > the concepts are generally applicable. (this is a link dump from some > old bookmarks; i'd appreciate any new research / papers / projects > that might be useful) > > Implicit Feedback for Recommender Systems (1998) > http://citeseer.ist.psu.edu/oard98implicit.html > > Dynamic Information Filtering (2001) > http://citeseer.ist.psu.edu/baudisch01dynamic.html > > Implicit Rating and Filtering (1998) > http://citeseer.ist.psu.edu/nichols98implicit.html > > Implicit Interest Indicators (2001) > http://citeseer.ist.psu.edu/claypool01implicit.html > > Emergent Properties of Referral Systems (2003) > http://citeseer.ist.psu.edu/yolum03emergent.html > > User Interactions with Everyday Applications as Context for > Just-in-time Information Access (2000) > http://citeseer.ist.psu.edu/budzik00user.html > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > From coderman at gmail.com Wed Apr 12 19:32:03 2006 From: coderman at gmail.com (coderman) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Bounty for Open Source Trust System In-Reply-To: References: <443C188D.7060805@user-land.org> <4ef5fec60604111431m4cb99f8ancb69330f7f6a076a@mail.gmail.com> <4ef5fec60604121043t8df3420m966b4cd9707f4b60@mail.gmail.com> Message-ID: 
<4ef5fec60604121232i4129b5fbn715eec5f5844a2c3@mail.gmail.com> On 4/12/06, Kevin Walsh wrote: > ... Just to clarify one point, Credence strives hard to NOT be a > recommender system or referral system. I know the problems such systems > have, and we definitely put some thought into making credence not fall > into the same traps. But I will be interested to see if some of those > papers have something relevant to Credence's model. i think you'll find there is a lot more similarity than expected. the main differences i've encountered seem to be pull vs. push and user interaction. the metrics and techniques used inside are often applicable to a wide variety of applications. (that is, the process that leads to recommendation is just as easily tied to a positive reputation. the feedback / metrics are often applicable to both) when you mentioned the update, did you mean code and not the paper? i see reference to the NSDI paper here: http://www.cs.cornell.edu/People/egs/credence/paper.html but no link. (do they require no prior publication?) thanks again for the explanations and pointers. From kwalsh at cs.cornell.edu Wed Apr 12 20:33:25 2006 From: kwalsh at cs.cornell.edu (Kevin Walsh) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Bounty for Open Source Trust System In-Reply-To: <4ef5fec60604121232i4129b5fbn715eec5f5844a2c3@mail.gmail.com> References: <443C188D.7060805@user-land.org> <4ef5fec60604111431m4cb99f8ancb69330f7f6a076a@mail.gmail.com> <4ef5fec60604121043t8df3420m966b4cd9707f4b60@mail.gmail.com> <4ef5fec60604121232i4129b5fbn715eec5f5844a2c3@mail.gmail.com> Message-ID: On Wed, 12 Apr 2006, coderman wrote: > when you mentioned the update, did you mean code and not the paper? Both. > i see reference to the NSDI paper here: > http://www.cs.cornell.edu/People/egs/credence/paper.html but no link. > (do they require no prior publication?) There is a link at http://www.cs.cornell.edu/~kwalsh/ (just added a few hours ago). 
The main page (the url you give) is updated as well, but the new link won't get propagated to that page for several more hours. Regards, Kevin From bcg at utas.edu.au Thu Apr 13 05:51:22 2006 From: bcg at utas.edu.au (Brad Goldsmith) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] ANN: CompTorrent: Applying BitTorrent Techniques to Distributed Computing Message-ID: <5b1c45972215fa13f79e6bad04b450fd@utas.edu.au> Hi All, Just a plug about an idea that I am pursuing as part of my research in P2P. The abstract: "This paper describes 'CompTorrent', a general purpose distributed computing platform that uses techniques derived from the popular BitTorrent file sharing protocol. The result is a grid swarm that requires only the creation and seed hosting of a comptorrent file, which contains the algorithm code and data set metrics, to facilitate the computing exercise. Peers need only obtain this comptorrent file and then join the swarm using the CompTorrent application. This paper describes the protocol, discusses its operation and provides directions for current and future research." It's a departmental technical report and is available here if anyone is interested: http://eprints.comp.utas.edu.au:81/archive/00000270/ Cheers, Brad --- Brad Goldsmith School of Computing University of Tasmania, Tasmania, Australia Homepage: http://www.comp.utas.edu.au/users/bcg/ Office: Launceston Campus, Computing Building, V-177 Telephone: (03) 6324 3389 International: +61-3-6324 3389 Facsimile: (03) 6324 3368 International: +61-3-6324 3368 From markm at cs.jhu.edu Sun Apr 16 03:06:30 2006 From: markm at cs.jhu.edu (Mark S. Miller) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] A dissertation on the rationale, philosophy, and goals of E and related systems Message-ID: <4441B4B6.2010006@cs.jhu.edu> Apologies for the wide distribution, but elements of this dissertation are germane to each of these lists.
Feedback appreciated, but please reply to me or on an appropriate list, rather than using "Reply all". The copyright notice is interim, until I figure out what open license I want on this. Robust Composition: Towards a Unified Approach to Access Control and Concurrency Control When separately written programs are composed so that they may cooperate, they may instead destructively interfere in unanticipated ways. These hazards limit the scale and functionality of the software systems we can successfully compose. This dissertation presents a framework for enabling those interactions between components needed for the cooperation we intend, while minimizing the hazards of destructive interference. Great progress on the composition problem has been made within the object paradigm, chiefly in the context of sequential, single-machine programming among benign components. We show how to extend this success to support robust composition of concurrent and potentially malicious components distributed over potentially malicious machines. We present E, a distributed, persistent, secure programming language, and CapDesk, a virus-safe desktop built in E, as embodiments of the techniques we explain. My dissertation at Johns Hopkins University, found at http://www.erights.org/talks/thesis/index.html Advisor: Jonathan S. Shapiro. Readers: Scott Smith, Yair Amir. -- Cheers, --MarkM From arun.kumar at kerika.com Wed Apr 19 20:09:22 2006 From: arun.kumar at kerika.com (Arun Kumar) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Kerika: a new p2p collaboration system Message-ID: <444698F2.4030103@kerika.com> Hi folks, We are making Kerika available to the public as part of a wide beta testing program. We like to call it the smarter alternative to email for sharing documents, ideas and projects within distributed teams. 
The application is written in Java and is supported for Windows and Macs at present; we will likely add Linux support in the future (we know it runs on Linux; just don't have the resources to properly test that platform.) Kerika uses the JXTA platform for its p2p connectivity. It isn't pure p2p; we added what we call a "server assist": if you update a project or document that you are sharing with some buddies, some of whom happen to be offline at the moment, a "storage peer" at our data center stands in for your missing buddies. When your buddies come back online, they can get any updates that they might otherwise have missed. I would welcome feedback from everyone. You can check out the software at www.kerika.com, in particular take a look at some of the many Flash demos at http://www.kerika.com/flash_demos.html. Kerika is free at the moment; in the future it may be offered as a subscription service. Thanks, Arun Kumar From lists at user-land.org Fri Apr 21 07:18:55 2006 From: lists at user-land.org (Philippe Landau) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Bounty for Open Source Trust System In-Reply-To: <443C188D.7060805@user-land.org> References: <443C188D.7060805@user-land.org> Message-ID: <4448875F.4070002@user-land.org> Good morning. Why is it that i received no reply from someone interested in doing something like this ? Do all programmers just earn so much more now ? Or do you think it is a bad idea ? Or is the audience of this list just tiny ? I now pay 200 Euro to the one finding a programmer who is inspired by the following idea and able to pull it off. Kind regards Philippe -- Philippe Landau wrote: > We would like to offer 3000 Euro for an > Open Source P2P Trust Community System. > > It should allow participants to easily build communities > of peers they trust, and share comments about peers > and other entities. Comments should be categorised > into trust levels needed to view them. 
> Data and communication will be encrypted and distributed. > Rapid implementation is key, later improvements > will be paid for additionally. > > The project is non-commercial, > the aim to enable family functions on a global level. > Details are open for discussion, input is welcome. > > Kind regards Philippe > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > From solipsis at pitrou.net Fri Apr 21 11:10:18 2006 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Bounty for Open Source Trust System In-Reply-To: <4448875F.4070002@user-land.org> References: <443C188D.7060805@user-land.org> <4448875F.4070002@user-land.org> Message-ID: <1145617818.5720.9.camel@fsol> Hi Philippe, On Friday, 21 April 2006 at 09:18 +0200, Philippe Landau wrote: > Good morning. > > Why is it that i received > no reply from someone interested in doing something like this ? > > Do all programmers just earn so much more now ? > Or do you think it is a bad idea ? > Or is the audience of this list just tiny ? Perhaps it is because: - you don't say precisely what the project is - you don't say who "we" is - you don't say if there is a design ready to implement, or if doing the design is part of the "bounty" The third question is especially important. If there is no precise design ready, I don't think you can expect someone to do the whole thing for 3000 euro (which can pay between two weeks and one month of work, depending on the person - perhaps two months for a student). Regards Antoine.
From karlanmitchell at comcast.net Fri Apr 21 21:58:12 2006 From: karlanmitchell at comcast.net (Karlan Mitchell) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Re: Bounty for Open Source Trust System (Philippe Landau) Message-ID: <44495574.6030008@comcast.net> Tell us a little about yourself: where you're coming from, and where you're going? We need specifics for anything. Also, 200 Euro is not that much money for such a job; however, you pay what you can, so I understand. Trust systems involve a lot of real world testing. -------------- next part -------------- An embedded message was scrubbed... From: Karlan Mitchell Subject: Re: Bounty for Open Source Trust System (Philippe Landau) Date: Fri, 21 Apr 2006 12:42:26 -0700 Size: 717 Url: http://zgp.org/pipermail/p2p-hackers/attachments/20060421/a87a0d6c/BountyforOpenSourceTrustSystemPhilippeLandau.mht From coderman at gmail.com Fri Apr 21 22:59:35 2006 From: coderman at gmail.com (coderman) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Bounty for Open Source Trust System In-Reply-To: <4448875F.4070002@user-land.org> References: <443C188D.7060805@user-land.org> <4448875F.4070002@user-land.org> Message-ID: <4ef5fec60604211559o7c127296g6ee52ae80a972ce4@mail.gmail.com> On 4/21/06, Philippe Landau wrote: > ... > Why is it that i received > no reply from someone interested in doing something like this ? virtue is its own reward! a less flippant reply: echoing Antoine's remarks, please specify the following: - what are the precise technical requirements, sufficient to easily determine whether a given application/system meets them or not. the mention of communities, peers and trust in a few sentences is insufficient. i'd expect a detailed description of mandatory requirements to take a number of pages at least. - who will be judging the merit / acceptance of submissions? you? a committee of experts? the public vote? etc.
- what are the time constraints of entry: when are initial submissions due, when are final packaged releases due, when is judging completed, when are winner(s) announced and how will funds be transferred? - what are the IP / copyright restrictions / requirements associated with entry. is any open source license acceptable. BSD license? do implementers have to give up copyright ownership to accept the prize? answering these questions will go a long way toward making your challenge more legitimate / acceptable. From lemonobrien at yahoo.com Sat Apr 22 00:35:33 2006 From: lemonobrien at yahoo.com (Lemon Obrien) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Bounty for Open Source Trust System In-Reply-To: <4ef5fec60604211559o7c127296g6ee52ae80a972ce4@mail.gmail.com> Message-ID: <20060422003534.82472.qmail@web53606.mail.yahoo.com> give me 250k and i'll consider. coderman wrote: On 4/21/06, Philippe Landau wrote: > ... > Why is it that i received > no reply from someone interested in doing something like this ? virtue is its own reward! a less flippant reply: echoing Antoine's remarks, please specify the following: - what are the precise technical requirements, sufficient to easily determine whether a given application/system meets them or not. the mention of communities, peers and trust in a few sentences is insufficient. i'd expect a detailed description of mandatory requirements to take a number of pages at least. - who will be judging the merit / acceptance of submissions? you? a committee of experts? the public vote? etc. - what are the time constraints of entry: when are initial submissions due, when are final packaged releases due, when is judging completed, when are winner(s) announced and how will funds be transferred? - what are the IP / copyright restrictions / requirements associated with entry. is any open source license acceptable. BSD license? do implementers have to give up copyright ownership to accept the prize? 
answering these questions will go a long way toward making your challenge more legitimate / acceptable. _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences You don't get no juice unless you squeeze Lemon Obrien, the Third. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20060421/4dc342f5/attachment.htm From eunsoo at research.panasonic.com Mon Apr 24 15:30:39 2006 From: eunsoo at research.panasonic.com (Eunsoo Shim) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] definitions of super node (peer) and ordinary node (peer) Message-ID: <444CEF1F.7040807@research.panasonic.com> Hi, What would be good definitions of super node (peer) and ordinary node (peer)? The definitions should not be specific to Skype or Kazaa but for more general cases. Your input would be appreciated. Thanks. Eunsoo From lemonobrien at yahoo.com Mon Apr 24 18:20:15 2006 From: lemonobrien at yahoo.com (Lemon Obrien) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] definitions of super node (peer) and ordinary node (peer) In-Reply-To: <444CEF1F.7040807@research.panasonic.com> Message-ID: <20060424182015.8794.qmail@web53615.mail.yahoo.com> firewall, accepts unsolicited connections/messages Eunsoo Shim wrote: Hi, What would be good definitions of super node (peer) and ordinary node (peer)? The definitions should not be specific to Skype or Kazaa but for more general cases. Your input would be appreciated. Thanks.
Eunsoo _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences You don't get no juice unless you squeeze Lemon Obrien, the Third. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20060424/a43e6ca0/attachment.html From eunsoo at research.panasonic.com Mon Apr 24 19:22:53 2006 From: eunsoo at research.panasonic.com (Eunsoo Shim) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] definitions of super node (peer) and ordinary node (peer) In-Reply-To: <20060424182015.8794.qmail@web53615.mail.yahoo.com> References: <20060424182015.8794.qmail@web53615.mail.yahoo.com> Message-ID: <444D258D.2090707@research.panasonic.com> Thanks for the input. What do you mean by "firewall"? Eunsoo Lemon Obrien wrote: > firewall, accepts unsolicited connections/messages > > */Eunsoo Shim /* wrote: > > Hi, > > What would be good definitions of super node (peer) and ordinary > node (peer)? > The definitions should not be specific to Skype or Kazaa but for more > general cases. > Your input would be appreciated. > Thanks. > > Eunsoo > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > > > > You don't get no juice unless you squeeze > Lemon Obrien, the Third.
> >------------------------------------------------------------------------ > >_______________________________________________ >p2p-hackers mailing list >p2p-hackers@zgp.org >http://zgp.org/mailman/listinfo/p2p-hackers >_______________________________________________ >Here is a web page listing P2P Conferences: >http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > From lemonobrien at yahoo.com Mon Apr 24 19:56:11 2006 From: lemonobrien at yahoo.com (Lemon Obrien) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] definitions of super node (peer) and ordinary node (peer) In-Reply-To: <444D258D.2090707@research.panasonic.com> Message-ID: <20060424195612.89462.qmail@web53614.mail.yahoo.com> if the node is behind a firewall, it most likely will not (unless the user chooses otherwise) be able to accept unsolicited connections/messages from unknown peers. Eunsoo Shim wrote: Thanks for the input. What do you mean by "firewall"? Eunsoo Lemon Obrien wrote: > firewall, accepts unsolicited connections/messages > > */Eunsoo Shim /* wrote: > > Hi, > > What would be good definitions of super node (peer) and ordinary > node (peer)? > The definitions should not be specific to Skype or Kazaa but for more > general cases. > Your input would be appreciated. > Thanks. > > Eunsoo > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > > > > You don't get no juice unless you squeeze > Lemon Obrien, the Third.
> >------------------------------------------------------------------------ > >_______________________________________________ >p2p-hackers mailing list >p2p-hackers@zgp.org >http://zgp.org/mailman/listinfo/p2p-hackers >_______________________________________________ >Here is a web page listing P2P Conferences: >http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences You don't get no juice unless you squeeze Lemon Obrien, the Third. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20060424/d30182fa/attachment.htm From ap at hamachi.cc Mon Apr 24 18:56:07 2006 From: ap at hamachi.cc (Alex Pankratov) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Real-world UPnP stats Message-ID: <444D1F47.1070905@hamachi.cc> We've recently added UPnP support to our client software and now I have some server-side stats, and they are most interesting. Check this out - Roughly half of all clients that reported success talking to their 'routers' and establishing TCP/UDP port mappings were still inaccessible from the outside via their mapped ports. Our UPnP code is written from scratch, so if the client says that ports are mapped, there was in fact a 200 response to the respective SOAP request from the router. I was expecting some degree of failures due to double NAT'ing, additional firewalling, etc .. but 50% ?
Alex From matthew at matthew.at Mon Apr 24 22:09:36 2006 From: matthew at matthew.at (Matthew Kaufman) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] definitions of super node (peer) and ordinary node (peer) In-Reply-To: <20060424182015.8794.qmail@web53615.mail.yahoo.com> Message-ID: <035201c667eb$c8fa4c20$0202fea9@matthewdesk> Having the concept of "super node" implies a hierarchical network. I'd say that a "super node" is one that is selected (could be self-election based on node capabilities, could be chosen by a central arbiter, could be up to the user, could be some combination of the above) to take on more than its own equal fraction of the computation, storage, or routing tasks in the distributed network. A typical required capability is "accessible without restriction from arbitrary other nodes" (so that it may be used as a rendezvous point, for instance), and another typical metric is "has been up for a while, and is expected to stay up for a while", but those aren't the only ways to make the decision. An "ordinary node" in a flat network is just like any other node, and in a hierarchical network containing "super nodes" is one that has not been selected to be such a "super node". Matthew Kaufman matthew@matthew.at http://www.amicima.com _____ From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org] On Behalf Of Lemon Obrien Sent: Monday, April 24, 2006 11:20 AM To: Peer-to-peer development. Subject: Re: [p2p-hackers] definitions of super node (peer) and ordinary node (peer) firewall, accepts unsolicited connections/messages Eunsoo Shim wrote: Hi, What would be good definitions of super node (peer) and ordinary node (peer)? The definitions should not be specific to Skype or Kazaa but for more general cases. Your input would be appreciated. Thanks.
Eunsoo _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences You don't get no juice unless you squeeze Lemon Obrien, the Third. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20060424/c94467b4/attachment.html From dbarrett at quinthar.com Mon Apr 24 23:38:25 2006 From: dbarrett at quinthar.com (David Barrett) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Real-world UPnP stats In-Reply-To: <444D1F47.1070905@hamachi.cc> Message-ID: <20060424235039.1C25A3FC27@capsicum.zgp.org> Yikes, that's pretty bad. Did you also capture which fraction of clients report that UPnP works at all? We use UPnP but aren't currently tuned to capture this information, though I'll plan to in the future. When building your UPnP layer, did you find that the various routers were faithful to the spec, or was there a lot of tweaking for specific NAT vendors? -david > -----Original Message----- > From: Alex Pankratov > Sent: Monday, April 24, 2006 11:56 AM > To: Peer-to-peer development. > Subject: [p2p-hackers] Real-world UPnP stats > > We've recently added UPnP support to our client software and > now I got some server-side stats and they are most interesting. > > Check this out - > > Roughly a half of all clients that reported success talking to > their 'routers' and establishing TCP/UDP port mappings were > still inaccessible from an outside via their mapped ports. > > Our UPnP code is written from scratch, so if the client says that > ports are mapped, there was in fact a 200 response for respective > SOAP request from the router. > > I was expecting some degree of failures due to double NAT'ing, > additional firewalling, etc .. but 50% ? 
> > Anyone care to comment or compare this to their own numbers ? > > Alex > > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From coderman at gmail.com Tue Apr 25 17:59:54 2006 From: coderman at gmail.com (coderman) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Real-world UPnP stats In-Reply-To: <444D1F47.1070905@hamachi.cc> References: <444D1F47.1070905@hamachi.cc> Message-ID: <4ef5fec60604251059qd03f23dpa42f106dec343b55@mail.gmail.com> On 4/24/06, Alex Pankratov wrote: > ... > Roughly a half of all clients that reported success talking to > their 'routers' and establishing TCP/UDP port mappings were > still inaccessible from an outside via their mapped ports. > ... > Anyone care to comment or compare this to their own numbers ? what ports? many ISP's block port 80/443 for residential customers. did you try ports >1024? From ap at hamachi.cc Tue Apr 25 19:19:09 2006 From: ap at hamachi.cc (Alex Pankratov) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Real-world UPnP stats In-Reply-To: <4ef5fec60604251059qd03f23dpa42f106dec343b55@mail.gmail.com> References: <444D1F47.1070905@hamachi.cc> <4ef5fec60604251059qd03f23dpa42f106dec343b55@mail.gmail.com> Message-ID: <444E762D.1050209@hamachi.cc> coderman wrote: > On 4/24/06, Alex Pankratov wrote: >> ... >> Roughly a half of all clients that reported success talking to >> their 'routers' and establishing TCP/UDP port mappings were >> still inaccessible from an outside via their mapped ports. >> ... >> Anyone care to comment or compare this to their own numbers ? > > what ports? many ISP's block port 80/443 for residential customers. > > did you try ports >1024? They are all dynamic, ie >1024. 
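[Editor's aside: for readers unfamiliar with the call under discussion, AddPortMapping is a SOAP action POSTed to the router's WANIPConnection control URL. A minimal sketch of building that request follows; the control URL, addresses, and port values are hypothetical examples, since a real client first discovers the router's control URL via SSDP and the device description XML. A 200 OK to this POST only means the router accepted the mapping; as the numbers in this thread show, external reachability still has to be verified end-to-end.]

```python
# Sketch of a UPnP IGD AddPortMapping SOAP request (WANIPConnection:1).
# Values below (ports, internal client IP) are illustrative only.

SERVICE = "urn:schemas-upnp-org:service:WANIPConnection:1"

def build_add_port_mapping(ext_port, proto, int_port, int_client,
                           description="p2p-app", lease=0):
    """Return (headers, body) for an AddPortMapping SOAP POST.

    The caller would POST `body` with `headers` to the control URL
    learned from the router's device description document.
    """
    body = f"""<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
    s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:AddPortMapping xmlns:u="{SERVICE}">
      <NewRemoteHost></NewRemoteHost>
      <NewExternalPort>{ext_port}</NewExternalPort>
      <NewProtocol>{proto}</NewProtocol>
      <NewInternalPort>{int_port}</NewInternalPort>
      <NewInternalClient>{int_client}</NewInternalClient>
      <NewEnabled>1</NewEnabled>
      <NewPortMappingDescription>{description}</NewPortMappingDescription>
      <NewLeaseDuration>{lease}</NewLeaseDuration>
    </u:AddPortMapping>
  </s:Body>
</s:Envelope>"""
    headers = {
        "Content-Type": 'text/xml; charset="utf-8"',
        # The SOAPAction header selects the method; the quotes are required.
        "SOAPAction": f'"{SERVICE}#AddPortMapping"',
    }
    return headers, body

headers, body = build_add_port_mapping(6881, "TCP", 6881, "192.168.1.10")
```

Even when this exchange succeeds, a separate probe from a host outside the NAT is the only reliable confirmation that the mapped port actually works.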
One other thing that I should've mentioned is that there are plenty of clients for which only the TCP mapping or only the UDP mapping fails, not both. From ap at hamachi.cc Tue Apr 25 19:24:34 2006 From: ap at hamachi.cc (Alex Pankratov) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Real-world UPnP stats In-Reply-To: <20060424235039.1C25A3FC27@capsicum.zgp.org> References: <20060424235039.1C25A3FC27@capsicum.zgp.org> Message-ID: <444E7772.5060901@hamachi.cc> David Barrett wrote: > Yikes, that's pretty bad. Did you also capture which fraction of clients > report that UPnP works at all? Not sure I follow. How would you define these clients ? > We use UPnP but aren't currently tuned to > capture this information, though I'll plan to in the future. > > When building your UPnP layer, did you find that the various routers were > faithful to the spec, or was there a lot of tweaking for specific NAT > vendors? No, not much tweaking at all .. One thing that was evident is a difference in response time. For example some Linksys models may take up to 5 seconds to respond to an AddPortMapping request. From dbarrett at quinthar.com Wed Apr 26 03:09:21 2006 From: dbarrett at quinthar.com (David Barrett) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Real-world UPnP stats In-Reply-To: <444E7772.5060901@hamachi.cc> Message-ID: <20060426030927.9E2D13FC35@capsicum.zgp.org> > -----Original Message----- > From: Alex Pankratov > > David Barrett wrote: > > Yikes, that's pretty bad. Did you also capture which fraction of > clients > > report that UPnP works at all? > > Not sure I follow. How would you define these clients ? Sorry, I was unclear. I mean, "what fraction of NAT'd clients have NATs that respond 200OK to your UPnP SOAP attempt?" I'm just curious how widely UPnP is deployed.
-david From auto43348 at hushmail.com Wed Apr 26 19:20:27 2006 From: auto43348 at hushmail.com (auto43348@hushmail.com) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Real-world UPnP stats Message-ID: <200604261920.k3QJKTE4016379@mailserver2.hushmail.com> Many people (myself included) specifically turn UPnP off. It seems like a pretty open security hole, even though I could see how it would make p2p apps work a lot easier. rearden >Date: Tue, 25 Apr 2006 20:09:21 -0700 >From: "David Barrett" >Subject: RE: [p2p-hackers] Real-world UPnP stats >To: "'Peer-to-peer development.'" >Message-ID: <20060426030927.9E2D13FC35@capsicum.zgp.org> >Content-Type: text/plain; charset="us-ascii" > >> -----Original Message----- >> From: Alex Pankratov >> >> David Barrett wrote: >> > Yikes, that's pretty bad. Did you also capture which fraction >of >> clients >> > report that UPnP works at all? >> >> Not sure I follow. How would you define these clients ? > >Sorry, I was unclear. I mean, "what fraction of NAT'd clients >have NATs >that respond 200OK to your UPnP SOAP attempt?" > >I'm just curious how widely UPnP is deployed. > >-david > From ap at hamachi.cc Wed Apr 26 19:50:40 2006 From: ap at hamachi.cc (Alex Pankratov) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Real-world UPnP stats In-Reply-To: <20060426030927.9E2D13FC35@capsicum.zgp.org> References: <20060426030927.9E2D13FC35@capsicum.zgp.org> Message-ID: <444FCF10.8020908@hamachi.cc> David Barrett wrote: >> -----Original Message----- >> From: Alex Pankratov >> >> David Barrett wrote: >>> Yikes, that's pretty bad. Did you also capture which fraction of >> clients >>> report that UPnP works at all? >> Not sure I follow. How would you define these clients ? > > Sorry, I was unclear.
I mean, "what fraction of NAT'd clients have NATs > that respond 200OK to your UPnP SOAP attempt?" > > I'm just curious how widely UPnP is deployed. Ah. That'd be an interesting statistic to have indeed, but - no - I don't have it at the moment. From mfreed at cs.nyu.edu Fri Apr 28 15:56:05 2006 From: mfreed at cs.nyu.edu (Michael J Freedman) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Announcing the 'illuminati' measurement project: Please help! Message-ID: Hi, We'd like to invite people to check out a new network and Web measurement project that we've started, aimed to better understand the extent, type, and location of NATs, DNS resolvers, and proxies with respect to clients. http://illuminati.coralcdn.org/ We're looking for people to contribute to our measurement efforts by inserting just a couple lines of HTML onto any popular web pages that they might run. Much like in SETI@Home, you can register a unique team before you do so, and we'll track your team's individual contribution: http://illuminati.coralcdn.org/teams/ More discussion about our goals and techniques can be found at the website, as well as some preliminary statistics from our measurements, which just passed 1 million unique hosts. Thanks, Mike Freedman Martin Casado ----- www.michaelfreedman.org www.coralcdn.org From travis at redswoosh.net Sat Apr 29 09:15:20 2006 From: travis at redswoosh.net (Travis Kalanick) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Please help me update my address book Message-ID: <1146302120.4588.6028.sendUpdate@mx.plaxo.com> Skipped content of type multipart/alternative-------------- next part -------------- A non-text attachment was scrubbed...
Name: Travis Kalanick.vcf Type: text/x-vcard Size: 378 bytes Desc: not available Url : http://zgp.org/pipermail/p2p-hackers/attachments/20060429/f13cced5/TravisKalanick.vcf From gojomo at bitzi.com Sat Apr 29 18:13:10 2006 From: gojomo at bitzi.com (Gordon Mohr (@ Bitzi)) Date: Sat Dec 9 22:13:16 2006 Subject: [p2p-hackers] Please help me update my address book In-Reply-To: <1146302120.4588.6028.sendUpdate@mx.plaxo.com> References: <1146302120.4588.6028.sendUpdate@mx.plaxo.com> Message-ID: <4453ACB6.1020201@bitzi.com> Ah, like the turning of the leaves, the Travis Kalanick Plaxo message (and followup apology) is part of the wonderful and eternal cycle of life here at p2p-hackers. :) - Gordon Travis Kalanick wrote: > Peer-to-peer, > > I'm updating my address book. Please take a moment to update your latest > contact information. I use Plaxo to manage my personal address book. > > Thanks for your help, and please don't hesitate to use this as an excuse > to send me a note. > > Travis Kalanick > travis@redswoosh.net > Red Swoosh, Inc. > > P.S. I've attached my current information in a vcard. If you get Plaxo > too, we'll stay in touch automatically. > > ------------------------------------------------------------------------ > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences