From matthew at where.matthew.at Mon Jan 3 16:28:34 2005 From: matthew at where.matthew.at (Matthew Kaufman) Date: Sat Dec 9 22:12:49 2006 Subject: [p2p-hackers] hello fellow p2p hackers... Message-ID: <200412312349.iBVNnuB53155@where.matthew.at> Just a note of introduction... Some of you may know me as the founder of some SF/Monterey Bay Area ISPs (scruz-net, Tycho Networks) or as the network architect responsible for designing and implementing DSL.net's IP backbone, but for the last 8 months I've actually been doing p2p hacking instead of building ISPs. What we've been building is outlined a bit at www.amicima.com Now, I know what you're thinking... the source code isn't freely available to download from there. That's because I haven't found a good way to pay my mortgage with open source... so until I can find someone to pay me for my development time, so that I can pay my mortgage, it'll have to stay proprietary, in the hope that I can make a bit from licensing or the app we're building on top of what we've done so far. On the other hand, I do believe in sharing information learned, and so I've joined this list to be a contributor to discussions like "how do I do reliable UDP and make it TCP-friendly", because "how" is a lot different than "thousands of hours of implementation", and the Internet wouldn't be where it is today if nobody shared the "hows" and "whys". In short, "hello" and hope everyone has a good 2005... 
Matthew Kaufman matthew@matthew.at

From eugen at leitl.org Fri Jan 7 09:17:48 2005 From: eugen at leitl.org (Eugen Leitl) Date: Sat Dec 9 22:12:49 2006 Subject: [p2p-hackers] [FoRK] Hamachi "mediated" peer-to-peer sounds interesting (fwd from meltsner@gmail.com) Message-ID: <20050107091748.GI9221@leitl.org>

----- Forwarded message from Ken Meltsner -----

From: Ken Meltsner Date: Thu, 6 Jan 2005 16:32:36 -0600 To: FoRK Subject: [FoRK] Hamachi "mediated" peer-to-peer sounds interesting Reply-To: meltsner@alum.mit.edu

Basically, a way to get around NAT and other router issues for a peer-to-peer system, mostly seamlessly integrated as a special network driver. Systems connect to a back-end server which relays traffic between peers on named private networks. Sort of P2P meets VPN -- if they added HTTPS tunneling, it would run through nearly any corporate firewall/proxy server. No magic, as far as I can tell, but apparently a decent piece of work. I like the named private network capability in principle.

Ken Meltsner

Excerpt from http://www.hamachi.cc/security showing a sound approach (I think) to security, including public key exchange:

The Framework

A Hamachi system is composed of backend servers and end-node peer clients. Server nodes track clients' locations and provide the mediation services required for establishing direct peer-to-peer tunnels between client nodes. When the client is activated, it establishes a TCP connection to one of the mediation servers and starts speaking the Hamachi protocol to log itself in and synchronize with other clients. The rest of the document deals with the security provisions of this protocol, which ensure both privacy and authentication of client-server and client-client communications.

Client Identity

A Hamachi client is identified by its Hamachi network address. The address is assigned the first time the client connects to the mediation servers, and it stays the same for as long as the client's account exists in the system.
The client also generates an RSA key pair, which is used for authentication purposes during the login sequence. The public key is passed to the server once - during the first connection, when creating a new account. To perform a regular login, the client submits its identity and uses its private key to sign the server's challenge as described below. The server verifies the signature, and this authenticates the client.

Server Identity

Each Hamachi server owns an RSA keypair. The public key is distributed with the client's installation package and thus is known to the client prior to the first contact. When the client connects to the server, it announces which identity it expects the server to have. If the server has the requested identity, the login sequence commences. In the last message of this sequence the server sends a signature of the client's data, and this confirms the server's identity to the client.

Message Security

The first thing that happens after the client connects to the server is a key exchange. This exchange produces keying material used for encrypting and authenticating all other protocol messages. Messages are encrypted with a symmetric cipher and authenticated with a MAC. Every message is also uniquely numbered to prevent replay attacks.

Crypto Suite

The crypto suite specifies the exact algorithms and their parameters used for performing key exchange, key derivation and message encryption. The default crypto suite is defined as follows:

DH group - 2048-bit MODP group from RFC 3526
Message encryption - AES-256-CBC using ESP-style padding
Message authentication - 96-bit version of HMAC-SHA1

Protocol Details

HELO

The client connects to the server and sends a HELO message:

HELO CryptoSuite ServerKfp Ni Gi

CryptoSuite is 1 for the default crypto suite, ServerKfp is an OpenSSH-style fingerprint of the expected server public key, and Ni and Gi are the client's 1024-bit nonce and public DH exponent.
If the server has a public key that matches ServerKfp, it replies with:

HELO OK Nr Gr

where Nr and Gr are the server's nonce and public DH exponent.

KEYMAT

At this point both the server and the client can compute the shared DH secret and generate keying material as follows:

KEYMAT = T1 | T2 | T3 | ...
T1 = prf (K, Ni | Nr | 0x01)
T2 = prf (K, T1 | Ni | Nr | 0x02)
T3 = prf (K, T2 | Ni | Nr | 0x03)
...

where K is the shared DH secret, and prf is HMAC-SHA1.

All subsequent protocol messages are encrypted with the Ke key and authenticated using the Ka key. Ke and Ka are taken from KEYMAT. With the default crypto suite, Ke is the first 256 bits of KEYMAT and Ka is the next 160 bits.

Message Protection

Prior to encrypting a protocol message, the sender pads it to the cipher block size (16 bytes with the default crypto suite) using ESP padding. The message is then encrypted and prepended with a message ID, which is a monotonically increasing 32-bit number. As the last step, an HMAC is generated over the whole message (ID and encrypted data) and appended at the end, and the message is sent out. The message protection scheme above is consistent with those employed by TLS and IKE/IPsec.

AUTH

The client logs into the system by sending an AUTH message:

AUTH Identity Signature(Ni | Nr | Gi | Gr, Kp_cli)

where Identity is the client's 32-bit Hamachi address and Signature is the concatenation of the nonces and public DH exponents signed with the client's private key. The server uses the Identity to locate the client's account, obtains its public key and verifies the signature. If the signature is correct, the server replies with:

AUTH OK Signature(Nr | Ni | Gr | Gi, Kp_srv)

where Signature is created using the server's private key that matches ServerKfp from the HELO message.

Peer to peer traffic

When two Hamachi clients start talking to each other, they employ the same message protection as when talking to the server. Currently clients do not perform a key exchange of their own; instead they use keying material provided by the server.
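As an aside for implementers: the KEYMAT expansion quoted above is just an HMAC-SHA1 chain, and can be sketched in a few lines of Python. This is a reader's sketch based solely on the excerpt, not Hamachi's actual code; the key and nonce values at the bottom are made up for illustration (in the real protocol K is the shared DH secret and Ni/Nr are the 1024-bit nonces from the HELO exchange).

```python
import hmac
import hashlib

def prf(k: bytes, data: bytes) -> bytes:
    # prf is HMAC-SHA1 per the default crypto suite
    return hmac.new(k, data, hashlib.sha1).digest()

def keymat(K: bytes, Ni: bytes, Nr: bytes, nbytes: int) -> bytes:
    # KEYMAT = T1 | T2 | T3 | ...  where
    #   T1 = prf(K, Ni | Nr | 0x01)
    #   Tn = prf(K, Tn-1 | Ni | Nr | n)
    out, t, n = b"", b"", 1
    while len(out) < nbytes:
        t = prf(K, t + Ni + Nr + bytes([n]))
        out += t
        n += 1
    return out[:nbytes]

# Ke = first 256 bits (AES-256 key), Ka = next 160 bits (HMAC-SHA1 key).
material = keymat(b"shared-dh-secret", b"nonce-i", b"nonce-r", 32 + 20)
Ke, Ka = material[:32], material[32:]
```

Note that T1 falls out of the same loop as the later blocks because the running value t starts out empty.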
This keying mechanism is used on a temporary basis and will only be available during beta testing. The production release will have clients obtaining KEYMAT through their own key exchange, using each other's RSA keys for authentication.

_______________________________________________ FoRK mailing list http://xent.com/mailman/listinfo/fork

----- End forwarded message -----

-- Eugen* Leitl leitl ______________________________________________________________ ICBM: 48.07078, 11.61144 http://www.leitl.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE http://moleculardevices.org http://nanomachines.net

-------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available Url : http://zgp.org/pipermail/p2p-hackers/attachments/20050107/adff8509/attachment.pgp

From wesley at felter.org Fri Jan 7 20:16:56 2005 From: wesley at felter.org (Wes Felter) Date: Sat Dec 9 22:12:49 2006 Subject: [p2p-hackers] [FoRK] Hamachi "mediated" peer-to-peer sounds interesting (fwd from meltsner@gmail.com) In-Reply-To: <20050107091748.GI9221@leitl.org> References: <20050107091748.GI9221@leitl.org> Message-ID: <41DEEE38.1000703@felter.org>

Eugen Leitl wrote:
> ----- Forwarded message from Ken Meltsner -----
> Basically, a way to get around NAT and other router issues for a
> peer-to-peer system, mostly seamlessly integrated as a special network
> driver. Systems connect to a back end server which relays traffic
> between peers on named private networks. Sort of P2P meets VPN -- if
> they added HTTPS tunneling, it would run through nearly any corporate
> firewall/proxy server.

(The subject line is a little misleading; almost all P2P now uses mediated firewall traversal, so that's hardly noteworthy.) Hamachi is interesting because it appears to provide NAT/firewall traversal for arbitrary unmodified applications; the downside is that a Hamachi node can only talk to other Hamachi nodes.
Teredo provides NAT traversal for any IPv6 application, with the advantage that Teredo nodes are full peers on the IPv6 Internet (such as it is). I wonder why they didn't use IPsec. A simple GUI to set up P2P IPsec groups might be interesting. If Hamachi really uses the 5.0.0.0/8 address block, that's a little naughty, since IANA has not assigned that block: http://www.iana.org/assignments/ipv4-address-space

-- Wes Felter - wesley@felter.org - http://felter.org/wesley/

From adam at cypherspace.org Fri Jan 7 20:34:32 2005 From: adam at cypherspace.org (Adam Back) Date: Sat Dec 9 22:12:49 2006 Subject: [p2p-hackers] Re: Hamachi "mediated" peer-to-peer sounds interesting (fwd from meltsner@gmail.com) In-Reply-To: <20050107091748.GI9221@leitl.org> References: <20050107091748.GI9221@leitl.org> Message-ID: <20050107203432.GA14959@bitchcake.off.net>

Ken Meltsner wrote:
> Basically, a way to get around NAT and other router issues for a
> peer-to-peer system, mostly seamlessly integrated as a special network
> driver. Systems connect to a back end server which relays traffic
> between peers on named private networks. Sort of P2P meets VPN -- if
> they added HTTPS tunneling, it would run through nearly any corporate
> firewall/proxy server.

Well, if they really relayed traffic between peers on their back-end server, their pipe would be saturated. (Think Kazaa or BitTorrent over Hamachi.) I hope they actually use the server just for mediation, and send the traffic directly between peers. Unfortunately the documentation is rather light, so it's difficult to tell what it does in this regard. I've cc'd Alex Pankratov, who is the author (I presume). However, maybe this beta version is not complete in that regard. Some other things, such as the server-mediated key exchange, are obviously not shippable grade (the server knows all symmetric keys!)
Adam

From zubin_madon at yahoo.com Fri Jan 7 22:43:34 2005 From: zubin_madon at yahoo.com (zubin madon) Date: Sat Dec 9 22:12:49 2006 Subject: [p2p-hackers] (no subject) Message-ID: <20050107224335.40827.qmail@web30309.mail.mud.yahoo.com>

We are an early stage p2p startup. We are looking for some solid developers/architects who know p2p well and are great coders. If you are interested, please send an email to me: zubin_madon@yahoo.com

__________________________________ Do you Yahoo!? Yahoo! Mail - You care about security. So do we. http://promotions.yahoo.com/new_mail

From a.cyberdemon at gmail.com Mon Jan 10 23:42:08 2005 From: a.cyberdemon at gmail.com (Cyber Demon) Date: Sat Dec 9 22:12:49 2006 Subject: [p2p-hackers] Simple lightweight DHT Message-ID: <5899e51005011015425f56e284@mail.gmail.com>

I am trying to use a DHT as part of a decentralised system, and I have found many examples of this such as Bamboo and Tangle. I am looking for the bare bones of a distributed hash table - a very simple example that shows the concepts behind DHTs, helping me to design a specific hash table for my needs. Things that would help, for example, would be papers that focus on implementing DHTs, or explanations, designs, and even pointers to specific components within applications that may help my understanding and design efforts.

I also wonder if it is a large task to implement a simple DHT - are we talking a couple of days to get something basic working? I hope I am not just looking for the easy way out, but I don't want to spend the same amount of time attempting to use Bamboo or another system when I could have put together something very specific to my needs, to be built upon later.

Finally, are many of you guys working on mobile, ad-hoc projects? This is my area of interest, and I thought I'd ask who else is looking at such projects.

Thanks. Peter

From srhea at cs.berkeley.edu Tue Jan 11 00:00:10 2005 From: srhea at cs.berkeley.edu (Sean C.
Rhea) Date: Sat Dec 9 22:12:49 2006 Subject: [p2p-hackers] Simple lightweight DHT In-Reply-To: <5899e51005011015425f56e284@mail.gmail.com> References: <5899e51005011015425f56e284@mail.gmail.com> Message-ID: On Jan 10, 2005, at 3:42 PM, Cyber Demon wrote: > I also wonder if it is a large task to implement a simple dht, are we > talking a couple of days to get something basic working? I hope I am > not just looking for the easy way out, but I dont want to spend the > same amount of time attempting to use Bamboo or another system, when I > could have put together something very specific to my needs, to be > built upon later. I wrote the Bamboo router and got it working in about a week, although I had the experience and code base from writing Tapestry (another DHT) before that, so I had a bit of a head start. It took me another year to get Bamboo to perform as well as it does today. I believe the Chord people had a similar experience. So you COULD write your own DHT pretty quickly, but why WOULD you? I definitely believe you might want to due to some inherent feature of the Bamboo implementation that annoys you (it's written in Java, it uses UDP, I don't know). But the code base itself is designed to be reused across many applications. At Berkeley, it's been used in the PIER, OceanStore, OpenDHT, and multiple class projects. There's even a tutorial on how to use it. And in the case that you don't like Java, you could try the MIT Chord implementation, which is also pretty good and is written in C++. Both DHTs are available under open source licenses. My recommendation, in short, is to find an existing DHT you can live with and extend it. It will save you a lot of time. Sean -- Boredom is always counterrevolutionary. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: PGP.sig Type: application/pgp-signature Size: 186 bytes Desc: This is a digitally signed message part Url : http://zgp.org/pipermail/p2p-hackers/attachments/20050110/ef1cc577/PGP.pgp

From travis at redswoosh.net Tue Jan 11 01:57:49 2005 From: travis at redswoosh.net (Travis Kalanick) Date: Sat Dec 9 22:12:49 2006 Subject: [p2p-hackers] UDP Keep-alive In-Reply-To: <01b401c4ee0f$54e30220$0200a8c0@em.noip.com> Message-ID: <200501110159.j0B1xbaL003218@be9.noc0.redswoosh.com>

Anybody have a good idea how long the average NAT/Gateway keeps alive a UDP connection?

From pjkirner at comcast.net Tue Jan 11 02:11:14 2005 From: pjkirner at comcast.net (PJ Kirner) Date: Sat Dec 9 22:12:49 2006 Subject: [p2p-hackers] UDP Keep-alive In-Reply-To: <200501110159.j0B1xbaL003218@be9.noc0.redswoosh.com> References: <200501110159.j0B1xbaL003218@be9.noc0.redswoosh.com> Message-ID: <41E335C2.7040906@comcast.net>

Travis - This is not authoritative, but this might be a start. It does describe different types of NAT behaviors: http://www.ietf.org/internet-drafts/draft-ietf-behave-nat-00.txt It is from the new IETF "BEHAVE" WG: http://www.ietf.org/html.charters/behave-charter.html After ignoring NATs for years, the IETF is now trying to make some suggestions on how an ideal NAT should behave. (I'm sure the pun is intended!) PJ

Travis Kalanick wrote:
>Anybody have a good idea how long the average NAT/Gateway keeps alive a UDP
>connection?
> >
>_______________________________________________
>p2p-hackers mailing list
>p2p-hackers@zgp.org
>http://zgp.org/mailman/listinfo/p2p-hackers
>_______________________________________________
>Here is a web page listing P2P Conferences:
>http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences

From ap at hamachi.cc Tue Jan 11 06:42:46 2005 From: ap at hamachi.cc (Alex Pankratov) Date: Sat Dec 9 22:12:49 2006 Subject: [p2p-hackers] UDP Keep-alive In-Reply-To: <200501110159.j0B1xbaL003218@be9.noc0.redswoosh.com> References: <200501110159.j0B1xbaL003218@be9.noc0.redswoosh.com> Message-ID: <41E37566.3070709@hamachi.cc>

Anywhere between 20 seconds and an hour. A rather typical behaviour is to start with a few minutes and, once there is traffic in the reverse direction, increase the timeout by an order of magnitude. Alex

Travis Kalanick wrote:
> Anybody have a good idea how long the average NAT/Gateway keeps alive a UDP
> connection?
>
> _______________________________________________
> p2p-hackers mailing list
> p2p-hackers@zgp.org
> http://zgp.org/mailman/listinfo/p2p-hackers
> _______________________________________________
> Here is a web page listing P2P Conferences:
> http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences

From zooko at zooko.com Tue Jan 11 10:29:27 2005 From: zooko at zooko.com (Zooko O'Whielacronx) Date: Sat Dec 9 22:12:49 2006 Subject: [p2p-hackers] Simple lightweight DHT In-Reply-To: References: <5899e51005011015425f56e284@mail.gmail.com> Message-ID:

On 2005, Jan 10, at 20:00, Sean C. Rhea wrote:
> I wrote the Bamboo router and got it working in about a week, although
> I had the experience and code base from writing Tapestry (another DHT)
> before that, so I had a bit of a head start.
>
> It took me another year to get Bamboo to perform as well as it does
> today. I believe the Chord people had a similar experience.

What a fascinating story!
What things did you have to learn and invent during the course of that year to improve the performance of Bamboo?

Regards, Zooko

From jrydberg at gnu.org Tue Jan 11 14:45:11 2005 From: jrydberg at gnu.org (Johan Rydberg) Date: Sat Dec 9 22:12:49 2006 Subject: [p2p-hackers] Simple lightweight DHT In-Reply-To: (Sean C. Rhea's message of "Mon, 10 Jan 2005 16:00:10 -0800") References: <5899e51005011015425f56e284@mail.gmail.com> Message-ID: <873bx8rpmw.fsf@gnu.org>

"Sean C. Rhea" writes:
> And in the case that you don't like Java, you could try the MIT Chord
> implementation, which is also pretty good and is written in C++.
>
> Both DHTs are available under open source licenses.

You could also take a look at my little library [1]. It does not implement a DHT, but instead provides a key-based routing API that is modeled after [2]. The implementation is more or less a reorganization of the Chord implementation used by i3, with a few tweaks here and there (more to come in the future). It is written in C and released under the GPL. To test the API I've written a few test programs, one of which is a really simple DHT (not yet finished, though). See test-3.c in src/. But as I said, these are just test programs; my plan is to implement a "real" DHT using the library. Please note that I've only been working on it for a few weeks, and it's still a moving target, so the API will most likely change in the weeks and months to come. brgds, Johan

[1] http://savannah.nongnu.org/cgi-bin/viewcvs/peerfs/
[2] Towards a Common API for Structured Peer-to-Peer Overlays. http://www.project-iris.net/irisbib/papers/iptps:apis/paper.pdf

From mgp at ucla.edu Tue Jan 11 17:32:37 2005 From: mgp at ucla.edu (Michael Parker) Date: Sat Dec 9 22:12:49 2006 Subject: [p2p-hackers] Simple lightweight DHT In-Reply-To: References: <5899e51005011015425f56e284@mail.gmail.com> Message-ID: <41E40DB5.8020709@ucla.edu>

I was about to ask a similar question...
You're a seasoned peer-to-peer developer -- for those of us who are developers just starting out, and perhaps trying to invent our own new, novel topologies and systems, what are the hardest things to 'get right' in a peer-to-peer system? I've read the paper "Designing a DHT for Low Latency and High Throughput" [1] by the Chord group at MIT. It seems to sum up pretty well what their challenges were, and what practices they found were best. I know there are some other people out on this mailing list who could answer this too (Clarke, Freedman... if you feel that your name should be on this list, just respond). Sorry to try and drag you into the spotlight, but you definitely have a captive audience ;) - Michael Parker [1] http://citeseer.ist.psu.edu/dabek04designing.html Zooko O'Whielacronx wrote: > On 2005, Jan 10, at 20:00, Sean C. Rhea wrote: > >> I wrote the Bamboo router and got it working in about a week, >> although I had the experience and code base from writing Tapestry >> (another DHT) before that, so I had a bit of a head start. >> >> It took me another year to get Bamboo to perform as well as it does >> today. I believe the Chord people had a similar experience. > > > What a fascinating story! What things did you have to learn and > invent during the course of that year to improve the performance of > Bamboo? 
> > Regards, > > Zooko > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > From ian at locut.us Tue Jan 11 19:17:05 2005 From: ian at locut.us (Ian Clarke) Date: Sat Dec 9 22:12:49 2006 Subject: [p2p-hackers] Altnet goes after p2p networks with obvious patent Message-ID: <6169CDE3-6405-11D9-AA32-000D932C5880@locut.us> http://p2pnet.net/story/3512 It seems that Altnet is finally going after file sharing networks with its laughably obvious patent on requesting files by a hash of the file's contents (fortunately Freenet's developers are predominantly European, and thus are largely immune to this). IIRC this patent was filed in 1997. I think it is very important that those attacked challenge this patent head-on, either by claiming it is invalid due to being obvious, or finding prior art. I vaguely recall the last time I researched this that there was prior art from as early as 1990, I think it was Project Xanadu (http://xanadu.com/). Can anyone provide specific pointers to good examples of prior art? If Altnet succeeds in extorting any money out of these P2P companies it will only serve to encourage them to attack others. Ian. -- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ From gbildson at limepeer.com Tue Jan 11 20:20:08 2005 From: gbildson at limepeer.com (Greg Bildson) Date: Sat Dec 9 22:12:49 2006 Subject: [p2p-hackers] Altnet goes after p2p networks with obvious patent In-Reply-To: <6169CDE3-6405-11D9-AA32-000D932C5880@locut.us> Message-ID: Just to be clear on the patents involved, there are two. One was filed in 1997. The other was filed in 1999. 
This is a newer one of some kind (1999): http://tinyurl.com/5vzf4 This is the 1997 one: http://tinyurl.com/6zcu7 Any information about these and prior art would be greatly appreciated. Thanks -greg Greg Bildson CTO, COO, Lime Wire LLC (212) 219-6047 http://www.limewire.com http://www.limewire.org http://www.magnetmix.com > -----Original Message----- > From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org]On > Behalf Of Ian Clarke > Sent: Tuesday, January 11, 2005 2:17 PM > To: Peer-to-peer development. > Subject: [p2p-hackers] Altnet goes after p2p networks with obvious > patent > > > http://p2pnet.net/story/3512 > > It seems that Altnet is finally going after file sharing networks with > its laughably obvious patent on requesting files by a hash of the > file's contents (fortunately Freenet's developers are predominantly > European, and thus are largely immune to this). > > IIRC this patent was filed in 1997. I think it is very important that > those attacked challenge this patent head-on, either by claiming it is > invalid due to being obvious, or finding prior art. > > I vaguely recall the last time I researched this that there was prior > art from as early as 1990, I think it was Project Xanadu > (http://xanadu.com/). > > Can anyone provide specific pointers to good examples of prior art? If > Altnet succeeds in extorting any money out of these P2P companies it > will only serve to encourage them to attack others. > > Ian. 
> > -- > Founder, The Freenet Project http://freenetproject.org/ > CEO, Cematics Ltd http://cematics.com/ > Personal Blog http://locut.us/~ian/blog/ > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences

From eugen at leitl.org Tue Jan 11 20:36:39 2005 From: eugen at leitl.org (Eugen Leitl) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Decentralize BitTorrent with Kenosis Message-ID: <20050111203639.GS9221@leitl.org>

Link: http://slashdot.org/article.pl?sid=05/01/11/1625205 Posted by: CmdrTaco, on 2005-01-11 18:02:00 from the do-you-peer-what-he-is-saying dept.

[1]UnderScan writes "Eric Ries, writer/programmer/CTO, authored an article '[2]Kenosis and the World Free Web' at Freshmeat [Owned by Slashdot's Parent OSTG]. [3]Kenosis is described as a 'fully-distributed peer-to-peer RPC system built on top of XMLRPC.' He has combined his Kenosis with BitTorrent & removed the need for a centralized tracker. He states: 'To demonstrate Kenosis's suitability for these new applications, we have used it to improve upon another peer-to-peer filesharing application that Just Works: BitTorrent. BitTorrent does one thing incredibly well. Using a centralized "tracker," BitTorrent manages efficient distribution of data that is in high demand. We have extended BitTorrent, using Kenosis, to eliminate this dependence on a centralized tracker.' See also the [4]Kenosis README for details on using Kenosis-enabled BitTorrent."

References 1. mailto:jjp6893NO@SPAMnetscape.net 2. http://freshmeat.net/articles/view/1440/ 3. http://kenosis.sourceforge.net/ 4. http://kenosis.sourceforge.net/README_KENOSIS.txt
----- End forwarded message ----- -- Eugen* Leitl leitl ______________________________________________________________ ICBM: 48.07078, 11.61144 http://www.leitl.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE http://moleculardevices.org http://nanomachines.net

-------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available Url : http://zgp.org/pipermail/p2p-hackers/attachments/20050111/bd7b9d62/attachment.pgp

From coderman at peertech.org Tue Jan 11 21:01:33 2005 From: coderman at peertech.org (coderman) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] UDP Keep-alive In-Reply-To: <200501110159.j0B1xbaL003218@be9.noc0.redswoosh.com> References: <200501110159.j0B1xbaL003218@be9.noc0.redswoosh.com> Message-ID: <41E43EAD.30900@peertech.org>

Travis Kalanick wrote:
>Anybody have a good idea how long the average NAT/Gateway keeps alive a UDP
>connection?

From my personal experience: (meaning yours may vary :)

Most NATs appear to give a 1-5 minute timeout since the last packet seen. It would be nice to find a market summary of various NAT behaviors. Anyone know of such a thing?

The timeout is trivial to avoid with loose UDP NAT (simply send / recv a packet from any peer within the few minute window). For symmetric NAT it is more of a pain. Each session must send traffic within the timeout window, which raises overall communication significantly for large numbers of logical NAT-UDP connections.
Regards,

From eugen at leitl.org Tue Jan 11 21:13:48 2005 From: eugen at leitl.org (Eugen Leitl) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] UDP Keep-alive In-Reply-To: <41E43EAD.30900@peertech.org> References: <200501110159.j0B1xbaL003218@be9.noc0.redswoosh.com> <41E43EAD.30900@peertech.org> Message-ID: <20050111211348.GY9221@leitl.org>

On Tue, Jan 11, 2005 at 01:01:33PM -0800, coderman wrote:
> Most NAT's appear to give a 1-5 minute timeout since last
> packet seen. It would be nice to find a market summary of
> various NAT behaviors. Anyone know of such a thing?

Here's a data point: I've spent the better part of the day trying to remove default NAT (60 sec TCP, 180 sec UDP) idle connection decay from a Draytek Vigor 2900G with the latest firmware. To no avail, had to send email to support (probably, /dev/null). Proprietary NAT boxes are evil, period.

Which reminds me (since I've got my IPv6 subnet approved a few days ago): what's the p2p application situation for IPv6? Can anyone give a brief summary?

> The timeout is trivial to avoid with loose UDP NAT (simply
> send / recv a packet from any peer within the few minute
> window). For symmetric NAT it is more of a pain. Each
> session must send traffic within the timeout window which
> raises overall communication significantly for large numbers
> of logical NAT-UDP connections.

-- Eugen* Leitl leitl ______________________________________________________________ ICBM: 48.07078, 11.61144 http://www.leitl.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE http://moleculardevices.org http://nanomachines.net

-------------- next part -------------- A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available Url : http://zgp.org/pipermail/p2p-hackers/attachments/20050111/47efff35/attachment.pgp

From adam at cypherspace.org Tue Jan 11 21:17:41 2005 From: adam at cypherspace.org (Adam Back) Date: Sat Dec 9 22:12:50 2006 Subject: some prior art pointers (Re: [p2p-hackers] Altnet goes after p2p networks with obvious patent) In-Reply-To: <6169CDE3-6405-11D9-AA32-000D932C5880@locut.us> References: <6169CDE3-6405-11D9-AA32-000D932C5880@locut.us> Message-ID: <20050111211741.GA7992@bitchcake.off.net>

I released version 0.01 of the "eternity server" on 1st May 1997 [1]. It uses SHA1 hashes of URLs (rather than bodies) because eternity URLs are intended to be persistent. The bodies are signed, the signature involves a hash, and the first publication includes a signature and the author's key. Not having read the Altnet patent I don't know if this helps or not.

But I think Eric Hughes (cypherpunks co-founder) had thought of document or URL hash based routing, as he described it to me (in email) after I released the eternity server alpha. I believe he gave some talks on his "Universal Piracy Service" around 1996/1997. This had similar observations to eternity (anonymity + censor-resistant broadcast channel = censor resistant publishing system). Zooko mentioned in [2] that Eric did in fact file a patent (#6,122,372) in 1997 that included the idea of using a hash of a message as an ID of that message. This matches my recollection of what Eric told me of his plans back in 97.

Also there is WAX and related work on secure online electronic books for healthcare, which involves document hashes and signatures for secured hypertext links to non-mutable content [3]. I believe WAX itself was published in 1996 or earlier.
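The idea at issue throughout this thread - using a hash of a document's bytes as the identifier by which it is requested - takes only a few lines to illustrate. This is a generic sketch of content addressing (the in-memory store, function names, and sample bytes are invented for the example), not the scheme claimed in any particular patent or system mentioned here.

```python
import hashlib

def content_id(data: bytes) -> str:
    # The identifier is derived solely from the bytes themselves, so any
    # node holding the same bytes derives the same ID independently.
    return hashlib.sha1(data).hexdigest()

# A trivial stand-in for a network of nodes: one shared dict.
store = {}

def publish(data: bytes) -> str:
    cid = content_id(data)
    store[cid] = data
    return cid

def fetch(cid: str) -> bytes:
    data = store[cid]
    # Self-certifying: the fetcher can verify it received the right bytes
    # by rehashing, without trusting whoever served them.
    assert content_id(data) == cid
    return data

doc_id = publish(b"example document body")
```

The self-certifying property (rehash what you received and compare) is what makes hash-based requests attractive for untrusted p2p retrieval.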
Anyway, that should give those who care about patents some things to dig into :-) Adam

[1] sci.crypt announce http://groups-beta.google.com/group/sci.crypt/browse_thread/thread/fc99cc9e62a17b80/9d31aebfcff358c6?q=eternity+service&_done=%2Fgroups%3Fq%3Deternity+service%26qt_s%3DSearch+Groups%26&_doneTitle=Back+to+Search&&d#9d31aebfcff358c6
[2] http://zgp.org/pipermail/p2p-hackers/2004-March/001754.html
[3] http://citeseer.ist.psu.edu/7494.html Secure Books: Protecting the Distribution of Knowledge. Ross J. Anderson, Václav Matyáš Jr., Fabien A. P. Petitcolas, Iain E. Buchan, Rudolf Hanka. IWSP: International Workshop on Security Protocols, LNCS.

On Tue, Jan 11, 2005 at 07:17:05PM +0000, Ian Clarke wrote:
> http://p2pnet.net/story/3512
>
> It seems that Altnet is finally going after file sharing networks with
> its laughably obvious patent on requesting files by a hash of the
> file's contents (fortunately Freenet's developers are predominantly
> European, and thus are largely immune to this).
>
> IIRC this patent was filed in 1997. I think it is very important that
> those attacked challenge this patent head-on, either by claiming it is
> invalid due to being obvious, or finding prior art.
>
> I vaguely recall the last time I researched this that there was prior
> art from as early as 1990, I think it was Project Xanadu
> (http://xanadu.com/).
>
> Can anyone provide specific pointers to good examples of prior art? If
> Altnet succeeds in extorting any money out of these P2P companies it
> will only serve to encourage them to attack others.
>
> Ian.
> > -- > Founder, The Freenet Project http://freenetproject.org/ > CEO, Cematics Ltd http://cematics.com/ > Personal Blog http://locut.us/~ian/blog/ > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From markm at cs.jhu.edu Tue Jan 11 21:32:15 2005 From: markm at cs.jhu.edu (Mark Miller) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Altnet goes after p2p networks with obvious patent In-Reply-To: <6169CDE3-6405-11D9-AA32-000D932C5880@locut.us> References: <6169CDE3-6405-11D9-AA32-000D932C5880@locut.us> Message-ID: <41E445DF.5040107@cs.jhu.edu> Ian Clarke wrote: > I vaguely recall the last time I researched this that there was prior > art from as early as 1990, I think it was Project Xanadu > (http://xanadu.com/). Yes. http://zgp.org/pipermail/p2p-hackers/2004-March/subject.html#1753 especially http://zgp.org/pipermail/p2p-hackers/2004-March/001751.html -- Text by me above is hereby placed in the public domain Cheers, --MarkM From ian at locut.us Tue Jan 11 21:37:43 2005 From: ian at locut.us (Ian Clarke) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Altnet goes after p2p networks with obvious patent In-Reply-To: <41E445DF.5040107@cs.jhu.edu> References: <6169CDE3-6405-11D9-AA32-000D932C5880@locut.us> <41E445DF.5040107@cs.jhu.edu> Message-ID: <06BF7C22-6419-11D9-AA32-000D932C5880@locut.us> On 11 Jan 2005, at 21:32, Mark Miller wrote: > Ian Clarke wrote: >> I vaguely recall the last time I researched this that there was prior >> art from as early as 1990, I think it was Project Xanadu >> (http://xanadu.com/). > Yes. 
> http://zgp.org/pipermail/p2p-hackers/2004-March/subject.html#1753 > especially > http://zgp.org/pipermail/p2p-hackers/2004-March/001751.html I wonder what it takes to meet the standard for prior art for patents - and whether any of this would meet that standard? Ian. -- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ From travis at redswoosh.net Tue Jan 11 21:42:49 2005 From: travis at redswoosh.net (Travis Kalanick) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] UDP Keep-alive In-Reply-To: <20050111211348.GY9221@leitl.org> Message-ID: <200501112144.j0BLiaaL008479@be9.noc0.redswoosh.com> Thanks for the input! Here's something I found: http://www.tomax7.com/mcse/cisco_ipcommands.htm Cisco gear does a 300 sec default timeout on NAT translation tables for UDP connections. -----Original Message----- From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org] On Behalf Of Eugen Leitl Sent: Tuesday, January 11, 2005 1:14 PM To: Peer-to-peer development. Subject: Re: [p2p-hackers] UDP Keep-alive On Tue, Jan 11, 2005 at 01:01:33PM -0800, coderman wrote: > Most NAT's appear to give a 1-5 minute timeout since last > packet seen. It would be nice to find a market summary of > various NAT behaviors. Anyone know of such a thing? Here's a data point: I've spent the better part of the day trying to remove default NAT (60 sec TCP, 180 sec UDP) idle connection decay from a Draytek Vigor 2900G with latest firmware. To no avail, had to send email to support (probably, /dev/null). Proprietary NAT boxes are evil, period. Which reminds me (since I've got my IPv6 subnet approved a few days ago): what's p2p application situation for IPv6? Can anyone give a brief summary? > The timeout is trivial to avoid with loose UDP NAT (simply > send / recv a packet from any peer within the few minute > window). For symmetric NAT it is more of a pain. 
Each > session must send traffic within the timeout window which > raises overall communication significantly for large numbers > of logical NAT-UDP connections. -- Eugen* Leitl leitl ______________________________________________________________ ICBM: 48.07078, 11.61144 http://www.leitl.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE http://moleculardevices.org http://nanomachines.net From chsimps at glue.umd.edu Tue Jan 11 21:56:26 2005 From: chsimps at glue.umd.edu (Charles Simpson) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Altnet goes after p2p networks with obvious patent In-Reply-To: <06BF7C22-6419-11D9-AA32-000D932C5880@locut.us> References: <6169CDE3-6405-11D9-AA32-000D932C5880@locut.us> <41E445DF.5040107@cs.jhu.edu> <06BF7C22-6419-11D9-AA32-000D932C5880@locut.us> Message-ID: On Jan 11, 2005, at 4:37 PM, Ian Clarke wrote: > On 11 Jan 2005, at 21:32, Mark Miller wrote: >> Ian Clarke wrote: >>> I vaguely recall the last time I researched this that there was >>> prior art from as early as 1990, I think it was Project Xanadu >>> (http://xanadu.com/). >> Yes. >> http://zgp.org/pipermail/p2p-hackers/2004-March/subject.html#1753 >> especially >> http://zgp.org/pipermail/p2p-hackers/2004-March/001751.html > > I wonder what it takes to meet the standard for prior art for patents > - and whether any of this would meet that standard? > > Ian. > In the United States, prior art is defined in US Code Title 35, 102 (http://www.law.cornell.edu/uscode/html/uscode35/usc_sec_35_00000102----000-.html). I don't know whether any of those meet the standards for prior art, as I don't think any of it was patented - which rules out (b). The standards for "known or used by" are sketchy in part (a), and as I am not a lawyer, I really don't know whether these qualify under those criteria.
Regards, Charles From markm at cs.jhu.edu Tue Jan 11 22:02:27 2005 From: markm at cs.jhu.edu (Mark Miller) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Altnet goes after p2p networks with obvious patent In-Reply-To: <06BF7C22-6419-11D9-AA32-000D932C5880@locut.us> References: <6169CDE3-6405-11D9-AA32-000D932C5880@locut.us> <41E445DF.5040107@cs.jhu.edu> <06BF7C22-6419-11D9-AA32-000D932C5880@locut.us> Message-ID: <41E44CF3.2050107@cs.jhu.edu> Ian Clarke wrote: > On 11 Jan 2005, at 21:32, Mark Miller wrote: >> http://zgp.org/pipermail/p2p-hackers/2004-March/subject.html#1753 >> especially >> http://zgp.org/pipermail/p2p-hackers/2004-March/001751.html > > I wonder what it takes to meet the standard for prior art for patents - > and whether any of this would meet that standard? IANAL. Is there an intellectual property lawyer in the house? Btw, in case it's relevant: I've served as an expert witness in a patent case in the past, and would be happy to do so again. If anyone's interested, please contact me privately. -- Text by me above is hereby placed in the public domain Cheers, --MarkM From srhea at cs.berkeley.edu Tue Jan 11 22:35:06 2005 From: srhea at cs.berkeley.edu (Sean C. Rhea) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Simple lightweight DHT In-Reply-To: References: <5899e51005011015425f56e284@mail.gmail.com> Message-ID: <0B2C36FE-6421-11D9-AA02-000A95AC8464@cs.berkeley.edu> On Jan 11, 2005, at 2:29 AM, Zooko O'Whielacronx wrote: > What a fascinating story! What things did you have to learn and > invent during the course of that year to improve the performance of > Bamboo? Both versions of the Bamboo paper (the technical report and the USENIX paper) cover some of the story. They're available here: http://oceanstore.cs.berkeley.edu/publications/papers/abstracts/bamboo-tr.html They don't cover all of the mistakes we made, but they cover what we came to decide were the most important things.
I also recommend the Chord paper from NSDI that Michael Parker pointed out: http://citeseer.ist.psu.edu/dabek04designing.html Oh, yeah: also see the quote below. :) Sean -- The unavoidable price of reliability is simplicity. -- C.A.R. Hoare -------------- next part -------------- A non-text attachment was scrubbed... Name: PGP.sig Type: application/pgp-signature Size: 186 bytes Desc: This is a digitally signed message part Url : http://zgp.org/pipermail/p2p-hackers/attachments/20050111/87665926/PGP.pgp From ian at locut.us Tue Jan 11 22:38:33 2005 From: ian at locut.us (Ian Clarke) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Altnet goes after p2p networks with obvious patent In-Reply-To: <06BF7C22-6419-11D9-AA32-000D932C5880@locut.us> References: <6169CDE3-6405-11D9-AA32-000D932C5880@locut.us> <41E445DF.5040107@cs.jhu.edu> <06BF7C22-6419-11D9-AA32-000D932C5880@locut.us> Message-ID: <866A7763-6421-11D9-AA32-000D932C5880@locut.us> On 11 Jan 2005, at 21:37, Ian Clarke wrote: > On 11 Jan 2005, at 21:32, Mark Miller wrote: >> Ian Clarke wrote: >>> I vaguely recall the last time I researched this that there was >>> prior art from as early as 1990, I think it was Project Xanadu >>> (http://xanadu.com/). >> Yes. >> http://zgp.org/pipermail/p2p-hackers/2004-March/subject.html#1753 >> especially >> http://zgp.org/pipermail/p2p-hackers/2004-March/001751.html > > I wonder what it takes to meet the standard for prior art for patents > - and whether any of this would meet that standard? Here is some information that may be useful: http://www.tms.org/pubs/journals/JOM/matters/matters-9106.html Ian. -- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ From srhea at cs.berkeley.edu Tue Jan 11 23:29:29 2005 From: srhea at cs.berkeley.edu (Sean C. 
Rhea) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Altnet goes after p2p networks with obvious patent In-Reply-To: References: Message-ID: On Jan 11, 2005, at 12:20 PM, Greg Bildson wrote: > Any information about these and prior art would be greatly appreciated. "The HTTP Distribution and Replication Protocol" uses what it calls "Content Identifiers" to name data; they're MD5 hashes over the documents' contents. It's a W3C document from August 1997, two months before the first of the two Altnet patents was filed: http://www.w3.org/TR/NOTE-drp-19970825 It also uses the term "Content Based Addressing", which also sounds like prior art to me. And you can't argue that it wasn't "known", being a W3C technical report. Sean -- But to say that the race is the metaphor for the life is to miss the point. The race is everything. It obliterates whatever isn't racing. Life is the metaphor for the race. -- Donald Antrim -------------- next part -------------- A non-text attachment was scrubbed... Name: PGP.sig Type: application/pgp-signature Size: 186 bytes Desc: This is a digitally signed message part Url : http://zgp.org/pipermail/p2p-hackers/attachments/20050111/dc93d710/PGP.pgp From mccoy at mad-scientist.com Wed Jan 12 04:32:59 2005 From: mccoy at mad-scientist.com (Jim McCoy) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Altnet goes after p2p networks with obvious patent In-Reply-To: <6169CDE3-6405-11D9-AA32-000D932C5880@locut.us> References: <6169CDE3-6405-11D9-AA32-000D932C5880@locut.us> Message-ID: <09E96204-6453-11D9-A789-000A95BD758E@mad-scientist.com> FWIW, at Electric Communities we were using hashes for identifying objects within a distributed network by early 1995 or so. It was for identifying endpoints (object instances) and communication peers within the network, as well as to determine the uniqueness of an object (and the lineage probably ties back to Xanadu as well...waaaay too many ex-Xanadu people at that company...)
Hits claims 2, 3, 4, 10, 11, 12, 15 and 17 of the 1997 claim by Altnet. Jim On Jan 11, 2005, at 11:17 AM, Ian Clarke wrote: > http://p2pnet.net/story/3512 > > It seems that Altnet is finally going after file sharing networks with > its laughably obvious patent on requesting files by a hash of the > file's contents (fortunately Freenet's developers are predominantly > European, and thus are largely immune to this). > > IIRC this patent was filed in 1997. I think it is very important that > those attacked challenge this patent head-on, either by claiming it is > invalid due to being obvious, or finding prior art. > > I vaguely recall the last time I researched this that there was prior > art from as early as 1990, I think it was Project Xanadu > (http://xanadu.com/). > > Can anyone provide specific pointers to good examples of prior art? > If Altnet succeeds in extorting any money out of these P2P companies > it will only serve to encourage them to attack others. > > Ian. From eugen at leitl.org Wed Jan 12 13:19:01 2005 From: eugen at leitl.org (Eugen Leitl) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] UDP Keep-alive In-Reply-To: <20050111211348.GY9221@leitl.org> References: <200501110159.j0B1xbaL003218@be9.noc0.redswoosh.com> <41E43EAD.30900@peertech.org> <20050111211348.GY9221@leitl.org> Message-ID: <20050112131901.GA15767@leitl.org> On Tue, Jan 11, 2005 at 10:13:48PM +0100, Eugen Leitl wrote: > On Tue, Jan 11, 2005 at 01:01:33PM -0800, coderman wrote: > > > Most NAT's appear to give a 1-5 minute timeout since last > > packet seen. It would be nice to find a market summary of > > various NAT behaviors. Anyone know of such a thing? > > Here's a data point: I've spent the better part of the day trying to remove > default NAT (60 sec TCP, 180 sec UDP) idle connection decay from a Draytek > Vigor 2900G with latest firmware. > > To no avail, had to send email to support (probably, /dev/null). > Proprietary NAT boxes are evil, period. 
I take that back, at least partly. Here's what Draytek support told me: " Thanks for your e-mail. Actually, 60 seconds idle time of TCP is for uncomplete TCP connection, as we know, TCP connection has 3-way handshakes menchanism, when the TCP +SYN send out, a session established, if the 3-way handshakes fail to complete, router will delete it after 60 seconds idle. But if the TCP 3-way handshakes is completed, the session should be 'persistent', it is only removed if it idles for 24 hours. For your scenario, +when your browser call the cgi-bin, it's certainly after tcp connection established, so there should be no problems. Do you mean you have encountered problems when you try to call the cgi-bin? Or you haven't meet problem but just want to prevent potential issue? We'll look forward your further news. " I'm reasonably certain it's a firmware bug, or maybe a misconfiguration. For time being I've switched over to Linksys WRT54G with Alchemy-6.0-RC5a v3.01.3.8sv, and the problem has gone away. > Which reminds me (since I've got my IPv6 subnet approved a few days ago): > what's p2p application situation for IPv6? Can anyone give a brief summary? -- Eugen* Leitl leitl ______________________________________________________________ ICBM: 48.07078, 11.61144 http://www.leitl.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE http://moleculardevices.org http://nanomachines.net -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available Url : http://zgp.org/pipermail/p2p-hackers/attachments/20050112/96d7b3fb/attachment.pgp From ian at locut.us Wed Jan 12 20:22:41 2005 From: ian at locut.us (Ian Clarke) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Altnet goes after p2p networks with obvious patent In-Reply-To: <6169CDE3-6405-11D9-AA32-000D932C5880@locut.us> References: <6169CDE3-6405-11D9-AA32-000D932C5880@locut.us> Message-ID: There is a pretty good article on the Washington Post website about Altnet's latest bid to be the least popular players in the peer-to-peer space: http://www.washingtonpost.com/wp-dyn/articles/A3396-2005Jan12.html One point of interest to me was that in the article an Altnet lawyer claims that at least one of these patents was previously upheld in court: "But Hadley said a federal jury has already upheld the validity of at least one of the patents. In 2000, one of the original patent holders -- a San Francisco firm called Digital Island Inc. -- sued Web content manager Akamai Technologies, claiming that the company was violating the hashing patent. Akamai prevailed in the ensuing trial, when a jury decided that the company was not using the patented technology, but the same panel concluded that the patent itself was valid, Hadley said." Clearly I am not a lawyer (and everything I know about IP law I wish I didn't need to know), but I am not sure why a jury, having decided that Akamai doesn't use the patented technology, would bother to comment on whether the patent was valid. Was the validity of the patent even contested? I have found what may be this ruling at: http://pacer.mad.uscourts.gov/dc/opinions/zobel/pdf/cable%20v%20akamai%20revised.pdf If this is, indeed, the relevant document (and it probably isn't), then it makes no comment as to the validity of the 5,978,791 patent; it merely points out that Akamai didn't use it in the first place.
Perhaps someone could comment on whether Hadley's claim is accurate - and what it means? All the best, Ian. -- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ From sdaswani at gmail.com Thu Jan 13 01:10:19 2005 From: sdaswani at gmail.com (Susheel Daswani) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Re: Altnet goes after p2p networks with obvious patent Message-ID: <1cd056b905011217105c878084@mail.gmail.com> The whole Altnet fiasco is highly unfortunate (and displays what is wrong with software patents), and I'm sure that if it came down to it the patent would be found invalid. In the short term, the best way to attack the problem might be to 'fight fire with fire' - does anyone hold any patent that they could assert against Altnet (If not, file quickly ;))? If so, that person could throw an infringement claim in Altnet's way, and then they could barter widespread use of Altnet's hash patent via a cross-licensing agreement. Big companies do this all the time, of course. Susheel From eugen at leitl.org Thu Jan 13 10:48:53 2005 From: eugen at leitl.org (Eugen Leitl) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] [i2p] Distributed Search Engine (fwd from tkaitchuck@comcast.net) Message-ID: <20050113104853.GN9221@leitl.org> ----- Forwarded message from Tom Kaitchuck ----- From: Tom Kaitchuck Date: Thu, 13 Jan 2005 00:51:51 -0600 To: i2p@i2p.net Subject: [i2p] Distributed Search Engine User-Agent: KMail/1.7.2 For those of you that do not know, I am currently working on building a distributed search engine for I2P. While it is still in an alpha state, it is approaching the point where it could use some wider testing. It is now in cvs under the module khksearch. I was planning to hold off on releasing it until I fixed a bug preventing servers from joining in mid operation, but it has proved elusive enough, that I think more eyeballs may help. 
One thing that some of you may be interested in, even if you don't care about the search engine itself, is that to make it work with I2P I took the streaming library for Java and put it into a wrapper class that imitates java.net, so all one has to do is take the wrapper code, put it in the class path, and in your Java program replace "import java.net.*" with "import search.connection.*", and your app is instantly ported to I2P. (Assuming it is fairly simplistic and only has one socket server per JVM instance. But this could easily be improved upon if anyone is interested.) There is still lots to do, not all of which requires huge technical skill (code cleanup, better instructions, startup scripts for Windows and other JVMs). Also the existing AWT interface needs to be converted into an applet so that it can run within a webpage. The biggest thing that remains to be done is implementing the ranking code; I plan to do this next. As far as the license goes, it will be a free software license that permits modification and public access to the source (probably LGPL or similar). However, all of the scripts and all of the code for the wrapper were written by me, and are public domain. So if you are interested in helping out, or would just like to play with it, check it out. _______________________________________________ i2p mailing list i2p@i2p.net http://i2p.dnsalias.net/mailman/listinfo/i2p ----- End forwarded message ----- -- Eugen* Leitl leitl ______________________________________________________________ ICBM: 48.07078, 11.61144 http://www.leitl.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE http://moleculardevices.org http://nanomachines.net -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available Url : http://zgp.org/pipermail/p2p-hackers/attachments/20050113/6e29f322/attachment.pgp From bryan.turner at pobox.com Fri Jan 14 01:34:13 2005 From: bryan.turner at pobox.com (Bryan Turner) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Generalizing BitTorrent.. References: <20050113104853.GN9221@leitl.org> Message-ID: <004d01c4f9d9$27b7c170$6901a8c0@aspen> Hello p2p-hackers, I've been thinking about BitTorrent recently and had some ideas on improving the protocol. I'm sure others have had similar thoughts, and I'm interested to hear your opinions. The basic idea is best described with a real-world example. There are a number of "Full-MAME" torrents, one for each version of MAME (for those who don't know, MAME is an arcade emulator and the torrents contain the ROMs for the arcade games). As MAME is updated frequently, there is a string of these torrents on the net. Each one is very large (10 GB), and contains 95% of the EXACT SAME data from the previous torrent. The other 5% is a small amount of changes, and some new content. If a user is running the v0.7 torrent and has become a Seed, he serves ONLY the v0.7 peers. When v0.8 is released, his Seed status is essentially useless to the peers in the v0.8 crowd, even though he has > 90% of the same data. And again, when v0.9 is released, the same problem. It seems like there should be an extension to the protocol to allow for this type of 'shared data' among torrents. Let me take this concept a bit further. You could think of the v0.7, v0.8, and v0.9 torrents as being three separate torrents - or you could think of them as ONE torrent, with overlapping pieces. If there were a tracker that tracked the "meta-torrent" of all three versions, it would be valuable to ALL peers interested in ANY of the versions. 
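The amount of "shared data" between two torrent versions can be estimated by intersecting their sets of piece hashes. A toy sketch (real torrents use 256 KiB+ pieces, and overlap is only detectable when piece boundaries happen to align across versions — one of the practical obstacles to the meta-torrent idea):

```python
import hashlib

PIECE_SIZE = 4  # tiny for illustration; real torrents use 256 KiB or more

def piece_hashes(blob):
    """Hash fixed-size pieces, BitTorrent-style, into a set of digests."""
    return {hashlib.sha1(blob[i:i + PIECE_SIZE]).hexdigest()
            for i in range(0, len(blob), PIECE_SIZE)}

v07 = b"AAAABBBBCCCCDDDD"   # stand-in for the old release
v08 = b"AAAABBBBCCCCEEEE"   # new release: only the last piece changed

shared = piece_hashes(v07) & piece_hashes(v08)
print(len(shared))  # 3 of 4 pieces overlap: a v0.7 seed still helps v0.8 peers
```

A meta-tracker could use exactly this intersection to decide which peers are worth introducing to each other across sub-torrents.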
Taken to the extreme, if you were to gather up other (non MAME) torrents in the same manner and glue them all together, the resulting meta-torrent (and tracker) would be valuable to any client interested in any data tracked by the meta-torrent tracker. Likewise, clients could exchange with other clients for any piece of data in the meta-torrent shared between the clients. For example, if Client A is interested in Fedora 3, some MP3s, and MAME v0.7, while Client B is interested in Fedora 2, some MP3s, and MAME v0.8, then their shared meta-torrent is all the data shared between Fedora 2 & 3, the mutually-interesting MP3s, and the data shared between MAME v0.7 & v0.8. Conceptually, nothing has changed; we're just aggregating the set of files being exchanged, and allowing torrents to overlap wherever it is natural. Many BitTorrent client applications already allow a similar selection process: they allow the user to choose some subset of data to transfer from a complete torrent. The proposed changes are essentially the same idea, except that the client software automatically selects the pieces to download/ignore from the meta-torrent based on the sub-torrent files that the user has selected. I have a few ideas on how to implement this as well, but I don't want to waste bandwidth if the group isn't interested. Thanks for reading! --Bryan bryan.turner@pobox.com From bryan.turner at pobox.com Fri Jan 14 02:31:37 2005 From: bryan.turner at pobox.com (Bryan Turner) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Simple lightweight DHT References: <5899e51005011015425f56e284@mail.gmail.com> <41E40DB5.8020709@ucla.edu> Message-ID: <007001c4f9e1$2c2cae20$6901a8c0@aspen> Zooko/Michael, I've implemented four DHTs to date (Chord, Kademlia, Continuous-Discrete, and a proprietary one). I'm not sure there's much of a 'story' to it, but here are some excerpts of my trials...
Chord [1] Conceptually, Chord is very simple and that is one of the factors motivating our decision to build it. We had a basic Chord running between three nodes in about a month. This included a 256-bit keyspace (SHA-256), recursive lookup, standard finger & successor tables, and a very simplistic stabilize protocol. After scaling to a grid of 72 machines, we had some difficulty maintaining the stability of the ring. Our node loss event was updating only the predecessor and successor (to save broadcasting to all fingers, etc). This, and the basic stabilization protocol (trade successor lists every 10 sec with predecessor) did not keep the ring stable. We adjusted it to broadcast the node loss and fixed the problem - I still hold that this should not be necessary for stability. We also had routing problems during churn. It seems that some nodes would 'skip over' the destination node, sending the lookup further around the ring than necessary. This would cause lookups to run around the ring multiple times before landing at the right spot. I think we finally tracked this down to new nodes joining, and somewhere in the process of updating the finger tables there was a period where routes were allowed to jump too far in the ring. We eventually fixed this too. Kademlia [2] By this time I was recognizing some of the same problems that the Kademlia group had noted with Chord; non bidirectional links, uni-directional routing, static stabilization rate, and a host of problems if you threw a knife switch between two halves of the network. The switchover to Kademlia from Chord was almost trivial, about two weeks of work. All of the structures are essentially the same, requiring only an extension of the finger table to hold multiple entries and timestamps. Lookup, routing, etc.. was already written to be flexible so our metric was the only thing that changed. We did not implement the alpha parameter, nor any of the system-level features like 24-hour lifespans, etc. 
These did not fit the project. Kademlia introduced no additional problems, although it was much more difficult to explain how data got grouped together & replicated when tech-savvy customers started probing. Kademlia worked fine for quite some time, and it was the protocol for which our UDP stack was built (see a previous posts for that design). Voronoi Diagrams [3,4,5] At some point, I realized Kademlia still had not solved the knife-switch problem. This is very important to our project, as the installations are known to be geographically diverse and the WAN connections to remote sites are flaky at best. Also, there are large differences in node's computational and memory resources which were not being considered. After reading all the papers under the Continuous-Discrete chain [3,4,5], I must say it blew my mind. Unfortunately we were not in a position to use or implement all of their ideas. Also, these papers are very math heavy and took awhile to digest. The switch from Kademlia to 1D Continuous-Discrete is essentially the same as from Chord to Kademlia. The protocols are almost exactly the same, but Continuous-Discrete is significantly more flexible. For instance, there is no restriction to 2^k link table, the algorithm works just fine with 2, or any number, or a different number per node, so long as you always maintain the neighbor links. Fault-tolerance is a trivial extension to the protocol where nodes 'overlap' each other in the ID space, and this can be an arbitrary overlap unlike Kademlia/Chord. Kademlia's alpha parameter is also mapped to an arbitrarily wide path through the ID space. It is also stable without broadcasting node loss events using a simple suspicion event (lookups are not routed to suspected nodes). Basically take any fixed, rigid features of Chord/Kademlia and make them flexible, you get Continuous-Discrete. 
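The Chord-vs-Kademlia contrast in the account above — Chord's ring distance is directed, so links are not bidirectional, while Kademlia's XOR metric is symmetric — is easy to see in a toy keyspace. A hypothetical sketch, not code from either paper:

```python
NODE_ID_BITS = 8  # tiny keyspace for illustration; real DHTs use 160+ bits

def chord_distance(a, b):
    """Chord: clockwise distance around the ring -- not symmetric."""
    return (b - a) % (1 << NODE_ID_BITS)

def kademlia_distance(a, b):
    """Kademlia: XOR metric -- symmetric, so every link is usable both ways."""
    return a ^ b

a, b = 0x10, 0xF0
print(chord_distance(a, b), chord_distance(b, a))        # differ: 224 vs 32
print(kademlia_distance(a, b), kademlia_distance(b, a))  # equal: 224 and 224
```

The symmetry is what lets a Kademlia node refresh its routing table from incoming traffic for free, which is one reason the switchover described above was so cheap.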
I've been happy with this design for about a year, generalizing some parts of the system and finally fixing the knife-switch problem using a proprietary protocol built into the now-upgraded 2D Voronoi DHT system. Since our application only uses the DHT as a session set-up, we are not too worried about the lookup times and stabilization times, as long as each lookup does eventually succeed (or properly notify with a failure). To date, our system has been tested with 50 nodes on a dedicated grid at full speed, saturating 100 Mbps links for over 36 hours. This was a fairly small test, with only 50 keys per node, to test collision and caching. Total development time has been about 2 years, although less than 6 months of that is in tuning/hacking the DHT protocols. --Bryan bryan.turner@pobox.com References: [1] http://www.pdos.lcs.mit.edu/papers/chord:sigcomm01/chord_sigcomm.pdf [2] http://citeseer.ist.psu.edu/529075.html [3] http://www.wisdom.weizmann.ac.il/~naor/PAPERS/dh.pdf [4] http://iptps03.cs.berkeley.edu/final-papers/simple_fault_tolerant.pdf [5] http://citeseer.ist.psu.edu/562059.html ----- Original Message ----- From: "Michael Parker" To: "Peer-to-peer development." Sent: Tuesday, January 11, 2005 12:32 PM Subject: Re: [p2p-hackers] Simple lightweight DHT > I was about to ask a similar question... You're a seasoned peer-to-peer > developer -- for those of us who are developers just starting out, and > perhaps trying to invent our own new, novel topologies and systems, what > are the hardest things to 'get right' in a peer-to-peer system? > > I've read the paper "Designing a DHT for Low Latency and High > Throughput" [1] by the Chord group at MIT. It seems to sum up pretty > well what their challenges were, and what practices they found were best. > > I know there are some other people out on this mailing list who could > answer this too (Clarke, Freedman... if you feel that your name should > be on this list, just respond). 
Sorry to try and drag you into the > spotlight, but you definitely have a captive audience ;) > > - Michael Parker > > [1] http://citeseer.ist.psu.edu/dabek04designing.html > > > Zooko O'Whielacronx wrote: > > > On 2005, Jan 10, at 20:00, Sean C. Rhea wrote: > > > >> I wrote the Bamboo router and got it working in about a week, > >> although I had the experience and code base from writing Tapestry > >> (another DHT) before that, so I had a bit of a head start. > >> > >> It took me another year to get Bamboo to perform as well as it does > >> today. I believe the Chord people had a similar experience. > > > > > > What a fascinating story! What things did you have to learn and > > invent during the course of that year to improve the performance of > > Bamboo? > > > > Regards, > > > > Zooko > > > > _______________________________________________ > > p2p-hackers mailing list > > p2p-hackers@zgp.org > > http://zgp.org/mailman/listinfo/p2p-hackers > > _______________________________________________ > > Here is a web page listing P2P Conferences: > > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From mllist at vaste.mine.nu Fri Jan 14 02:40:22 2005 From: mllist at vaste.mine.nu (Vaste) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Generalizing BitTorrent.. In-Reply-To: <004d01c4f9d9$27b7c170$6901a8c0@aspen> References: <20050113104853.GN9221@leitl.org> <004d01c4f9d9$27b7c170$6901a8c0@aspen> Message-ID: <41E73116.9080400@vaste.mine.nu> Bryan Turner wrote: > Hello p2p-hackers, > > I've been thinking about BitTorrent recently and had some ideas on > improving the protocol. 
I'm sure others have had similar thoughts, > and I'm interested to hear your opinions. > > [situation where e.g. some peers are interested in A, some in B and > most in C] I'd say the problem boils down to finding trading partners with common interests. (How to do this decentralized?) The Fedora guys share an interest in Fedora data and would likely benefit from being introduced to each other. They still share the interest in MAME the rest of the MAME-swarm has, but these two peers in particular have high common interest (how to measure? MB of common interest?). But, how does this affect the randomness of BitTorrent's network? Let's say there's 50 or so Fedora&MAME-guys. Then these might be happy to connect only to each other (e.g. they've found 30 peers that are "better" than the rest of the MAME-swarm) and an isolated island might be formed. This is obviously counterproductive, e.g. if there's no MAME-seed in the island, then no MAME data will ever spread there. But if one does "normal" random selection (ignoring common interest) the Fedora-guys might never find each other at all, if the MAME-swarm is sufficiently large. So what to do? /Vaste From cefn.hoile at bt.com Fri Jan 14 11:45:06 2005 From: cefn.hoile at bt.com (cefn.hoile@bt.com) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Generalizing BitTorrent.. Message-ID: <21DA6754A9238B48B92F39637EF307FD05B1A02D@i2km41-ukdy.domain1.systemhost.net> There could be a big difference between the systems you would use for the first case (incremental releases to the same file structure), as opposed to the latter cases (where different distributions sharing files need to be able to seamlessly interoperate in order to benefit from whatever optimisations are possible). The first case could be solved by a convention to distribute the first release as a self-contained filesystem, and all future releases as diff patches. The latter cases you describe are more interesting in their consequences though.
There may be a way in which you could build up the latter case from conventions too, although encoding these conventions in a tool would be advantageous. Perhaps a series of file hashes could be chained together to define a release of an individual file (release 1.0, patch 2.0, patch 3.0, patch 4.0), and distributions (filesystems made up out of multiple files) in a similar way. I guess I am suggesting that you might be able to do this without changing the Bittorrent protocol. Perhaps you could suggest how protocol changes would improve efficiency over a scheme based on conventions similar to those described above. This could also be related to the following project... http://www.pdos.lcs.mit.edu/ivy/ ...which uses publishing of changelogs to maintain a distributed versioned filesystem. Cefn http://cefn.com -----Original Message----- From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org] On Behalf Of Bryan Turner Sent: 14 January 2005 01:34 To: Peer-to-peer development. Subject: [p2p-hackers] Generalizing BitTorrent.. Hello p2p-hackers, I've been thinking about BitTorrent recently and had some ideas on improving the protocol. I'm sure others have had similar thoughts, and I'm interested to hear your opinions. The basic idea is best described with a real-world example. There are a number of "Full-MAME" torrents, one for each version of MAME (for those who don't know, MAME is an arcade emulator and the torrents contain the ROMs for the arcade games). As MAME is updated frequently, there is a string of these torrents on the net. Each one is very large (10 GB), and contains 95% of the EXACT SAME data from the previous torrent. The other 5% is a small amount of changes, and some new content. If a user is running the v0.7 torrent and has become a Seed, he serves ONLY the v0.7 peers. When v0.8 is released, his Seed status is essentially useless to the peers in the v0.8 crowd, even though he has > 90% of the same data. 
And again, when v0.9 is released, the same problem. It seems like there should be an extension to the protocol to allow for this type of 'shared data' among torrents. Let me take this concept a bit further. You could think of the v0.7, v0.8, and v0.9 torrents as being three separate torrents - or you could think of them as ONE torrent, with overlapping pieces. If there were a tracker that tracked the "meta-torrent" of all three versions, it would be valuable to ALL peers interested in ANY of the versions. Taken to the extreme, if you were to gather up other (non MAME) torrents in the same manner and glue them all together, the resulting meta-torrent (and tracker) would be valuable to any client interested in any data tracked by the meta-torrent tracker. Likewise, clients could exchange with other clients for any piece of data in the meta-torrent shared between the clients. For example, if Client A is interested in Fedora 3, some MP3s, and MAME v0.7, while Client B is interested in Fedora 2, some MP3s, and MAME v0.8, then their shared meta-torrent is all the data shared between Fedora 2 & 3, the mutually-interesting MP3s, and the data shared between MAME v0.7 & v0.8. Conceptually, nothing has changed; we're just aggregating the set of files being exchanged, and allowing torrents to overlap wherever it is natural. Many BitTorrent client applications already allow a similar selection process: they allow the user to choose some subset of data to transfer from a complete torrent. The proposed changes are essentially the same idea, except that the client software automatically selects the pieces to download/ignore from the meta-torrent based on the sub-torrent files that the user has selected. I have a few ideas on how to implement this as well, but I don't want to waste bandwidth if the group isn't interested. Thanks for reading!
--Bryan bryan.turner@pobox.com _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From bryan.turner at pobox.com Fri Jan 14 17:29:01 2005 From: bryan.turner at pobox.com (Bryan Turner) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Generalizing BitTorrent.. In-Reply-To: <21DA6754A9238B48B92F39637EF307FD05B1A02D@i2km41-ukdy.domain1.systemhost.net> Message-ID: <200501141729.j0EHT2W0029655@rtp-core-1.cisco.com> I apologize to Vaste for missing his earlier post on this topic (http://zgp.org/pipermail/p2p-hackers/2004-December/002290.html). We're definitely thinking along the same lines. Cefn: > I guess I am suggesting that you might be able to do this without > changing the Bittorrent protocol. Perhaps you could suggest how > protocol changes would improve efficiency over a scheme based on > conventions similar to those described above. > I believe the protocol needs to change for several reasons: 1. The Bit Torrent protocol exchanges 'piece lists' between clients as a bit vector for a particular torrent. This bit vector (and later 'update' messages) require that each client have the exact same torrent as the clients they are connected to. It is impossible to generalize this to piece lists for meta-torrents without changing the protocol. 2. Trackers track torrents by their torrent IDs, and gather statistics in aggregate for a specific torrent. This information in its current form can't be generalized for meta-torrents. 3. The protocol assumes only one torrent is being transferred at a time (new versions and advanced clients remove this restriction, but not intelligently). 
If a user is interested in several files, the client should seek out the clients which are interested in the SAME files, and keep them preferentially over other clients. This is not possible using the current set of messages in Bit Torrent. -------------------------- Since there seems to be some interest, here's how I would change things.. First, the torrent file should be thought of as a catalog of pieces to retrieve, and instructions on how to paste them together into a collection. The trackers and clients don't care about the final glue-up, they simply locate and exchange pieces based on their piece ID. Second, piece IDs should be their Content Hash. This guarantees that overlapping catalogs of pieces will share the same pool of peers. Since peers are looking for pieces by content hash, they don't care which torrent their trading partner is interested in, only that the data is the same. Third, trackers would be generalized to track pieces instead of torrents. Clients register their interest in a piece at the tracker, and the tracker returns a bucket of peers who are interested in the same piece (instead of peers interested in the same torrent). Fourth, the piece list exchange needs some way to exchange all the piece IDs that the client is interested in. I propose Bloom Filters, and one of the optimized set reconciliation protocols in the literature. Thus, when two peers meet, they calculate their shared interest. This could even be done using 'fuzzy' math to get an approximation of the shared interest. If it is high enough, they could complete the full exchange. Content Hashes lead naturally into.. DHTs. So trackers could build a P2P network (Chord, Pastry, etc..) where the keys are piece IDs and the value returned by the DHT is a random list of other peers looking for that piece.
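The fourth point above, the Bloom-filter interest exchange, could be sketched roughly like this. This is hypothetical illustration code, not from any existing client: it assumes piece IDs are the SHA-1 content hashes proposed above, and the filter size and hash count are arbitrary choices.

```python
# Sketch of a Bloom-filter piece-interest exchange (hypothetical, not from
# any real BitTorrent client). Each peer summarizes the set of piece
# content-hashes it wants in a small fixed-size filter; two peers can then
# estimate their overlap before doing a full piece-list exchange.
import hashlib

FILTER_BITS = 1024  # arbitrary demo size: a 128-byte filter
NUM_HASHES = 3      # arbitrary number of hash functions

def _bit_for(i, piece_id):
    # Derive bit index i from the piece's content hash.
    h = hashlib.sha1(b"%d:" % i + piece_id).digest()
    return int.from_bytes(h[:4], "big") % FILTER_BITS

def bloom_add(filt, piece_id):
    for i in range(NUM_HASHES):
        bit = _bit_for(i, piece_id)
        filt[bit // 8] |= 1 << (bit % 8)

def bloom_contains(filt, piece_id):
    # No false negatives; occasional false positives.
    return all(filt[_bit_for(i, piece_id) // 8] & (1 << (_bit_for(i, piece_id) % 8))
               for i in range(NUM_HASHES))

def make_filter(piece_ids):
    filt = bytearray(FILTER_BITS // 8)
    for pid in piece_ids:
        bloom_add(filt, pid)
    return filt

def estimated_shared_interest(my_pieces, their_filter):
    # Count how many of my wanted pieces appear in the peer's filter.
    return sum(1 for pid in my_pieces if bloom_contains(their_filter, pid))
```

A peer would send its 128-byte filter instead of a full piece list. Because Bloom filters never produce false negatives, the estimate can only overcount, so a high score is a safe trigger for the full set-reconciliation exchange.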
Client software bootstraps by selecting a sample of pieces from all the torrents you are interested in, performs lookups to gather a list of potential peers, then filters for the peers with similar interests. As your interests change, or as clients come & go, you can re-register your interest in pieces, or look up another bucket of peers from the DHT to trade with. 'Seed' peers register their interest in all of the pieces they are seeding, and are naturally found by the peers who perform lookups for those pieces. Finally, the peers could also be Trackers simply by joining the DHT and taking some of the load of handling lookup requests, thus following the eXeem model of distributing trackers across all the peers. I hope that came out legibly.. --Bryan bryan.turner@pobox.com From srhea at cs.berkeley.edu Fri Jan 14 17:49:38 2005 From: srhea at cs.berkeley.edu (Sean C. Rhea) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Fwd: Prior art for the AltNet patent... Message-ID: I think this should be archived on the list. Sean Begin forwarded message: > From: Ethan Miller > Date: January 14, 2005 9:12:14 AM PST > To: ian@locut.us, srhea@cs.berkeley.edu > Cc: "Ethan L. Miller" > Subject: Prior art for the AltNet patent... > > For some of the claims (including claim 1), you might want to look at: > > Jeff Hollingsworth and Ethan Miller, "Using Content-Derived Names for > Configuration Management," 1997 Symposium on Software Reusability (SSR > '97), Boston, MA, May 1997, pages 104-109. > > > Publication date is May 1997, so it definitely predates the patent. > It's not the only paper, either. Merkle's hash trees are from the > 1980's. > > ethan -- Moments alone in the mirror have a playwright inside them; fragments of destiny fly out of those moments and into your eyes. -- Deb Margolin -------------- next part -------------- A non-text attachment was scrubbed... 
Name: PGP.sig Type: application/pgp-signature Size: 186 bytes Desc: This is a digitally signed message part Url : http://zgp.org/pipermail/p2p-hackers/attachments/20050114/b692fd65/PGP.pgp From greg at electricrain.com Sat Jan 15 08:02:54 2005 From: greg at electricrain.com (Gregory P. Smith) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Generalizing BitTorrent.. In-Reply-To: <004d01c4f9d9$27b7c170$6901a8c0@aspen> References: <20050113104853.GN9221@leitl.org> <004d01c4f9d9$27b7c170$6901a8c0@aspen> Message-ID: <20050115080254.GW12820@zot.electricrain.com> > > The basic idea is best described with a real-world example. There are a > number of "Full-MAME" torrents, one for each version of MAME (for those who > don't know, MAME is an arcade emulator and the torrents contain the ROMs for > the arcade games). As MAME is updated frequently, there is a string of > these torrents on the net. Each one is very large (10 GB), and contains 95% > of the EXACT SAME data from the previous torrent. The other 5% is a small > amount of changes, and some new content. > > If a user is running the v0.7 torrent and has become a Seed, he serves > ONLY the v0.7 peers. When v0.8 is released, his Seed status is essentially > useless to the peers in the v0.8 crowd, even though he has > 90% of the same > data. And again, when v0.9 is released, the same problem. It seems like > there should be an extension to the protocol to allow for this type of > 'shared data' among torrents. The flaw in this logic is that to aggregate common data across different instances of content in a system you need to be able to locate and identify the common data portions. In a typical tarball of a new version of something where only 5% of the files have updated -most- of the hashes of the fixed-size pieces are likely to change; certainly -way- more than 5% anyways. Why? Because the common data has shifted around or in the case of compressed streams of data (.tar.bz2) the entire stream will be different.
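The shifting problem Greg describes is easy to demonstrate with a toy example (hypothetical code with made-up sizes, just for illustration):

```python
# Demonstrates why fixed-size piece hashes break under insertion: a small
# edit near the front of a file shifts every following byte, so almost
# every downstream piece hash changes even though the bulk of the data
# is identical.
import hashlib

PIECE_SIZE = 1024  # made-up piece size for the demo

def piece_hashes(data):
    return [hashlib.sha1(data[i:i + PIECE_SIZE]).hexdigest()
            for i in range(0, len(data), PIECE_SIZE)]

old = bytes(range(256)) * 400           # ~100 KB standing in for "release n"
new = old[:100] + b"PATCH" + old[100:]  # 5 bytes inserted near the start

old_h, new_h = piece_hashes(old), piece_hashes(new)
unchanged = sum(1 for a, b in zip(old_h, new_h) if a == b)
# Every piece after the insertion point is shifted by 5 bytes, and the
# edit lands inside piece 0, so no piece hash survives at all.
```

Per-file chunking (as the MAME torrents effectively do, discussed later in the thread) or a content-defined chunking scheme with a rolling hash, as in rsync, sidesteps this by not tying chunk boundaries to absolute offsets.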
To get any benefit from this, the content would need to be extremely carefully packaged: no zips, no tars, no compression, etc. That alone could destroy the benefit. Others have already mentioned an alternate general solution that applies to -any- distribution method (the Linux kernel is distributed this way): updates that share data should be published as binary diffs against the previous version. Downloading n+1 becomes a recursive "download n and the n->n+1 diff" operation. What you're really desiring is for peers to integrate the diff knowledge so that it doesn't need to be done manually and so that it automatically decides when the base+sum(diffs against base) warrants just issuing a new base to distribute for future diffs to start from. (fwiw, some version control systems make many of the same decisions as to how they store versions of data internally) -greg From adam at cypherspace.org Sat Jan 15 11:12:59 2005 From: adam at cypherspace.org (Adam Back) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Generalizing BitTorrent.. In-Reply-To: <20050115080254.GW12820@zot.electricrain.com> References: <20050113104853.GN9221@leitl.org> <004d01c4f9d9$27b7c170$6901a8c0@aspen> <20050115080254.GW12820@zot.electricrain.com> Message-ID: <20050115111259.GA28255@bitchcake.off.net> What about rsync? Perhaps you could just start from what a given peer has, rsync that to n different peers. Presuming the n peers have the same starting point. Gregory Smith wrote: > -most- of the hashes of the fixed-size pieces are likely to change; > certainly -way- more than 5% anyways. Why? Because the common data > has shifted around or in the case of compressed streams of data > (.tar.bz2) the entire stream will be different. To get any benefit > from this, the content would need to be extremely carefully packaged: > no zips, no tars, no compression, etc. That alone could destroy the > benefit.
Yah so you should gzip / bzip the chunks on download, that way the binary diff (rsync) gets to see the diffs. btw you can think of rsync as an interactive compression algorithm discovering and fetching the diffs between what the client has and what the server has. > updates that share data should be published as binary diffs against > the previous version. Downloading n+1 becomes a recursive "download > n and the n->n+1 diff" operation. btw What binary diff does the kernel distribution use? I don't see a binary diff package installed but maybe I'm missing it. Are you talking about the source diffs? (These are not binary). Adam From sdaswani at gmail.com Sun Jan 16 02:21:38 2005 From: sdaswani at gmail.com (Susheel Daswani) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Altnet goes after p2p networks with obvious patent Message-ID: <1cd056b905011518213af08b36@mail.gmail.com> "Clearly I am not a lawyer (and everything I know about IP law I wish I didn't need to know), but I am not sure why a jury, having decided that Akamai doesn't use the patented technology, would bother to comment on whether the patent was valid. Was the validity of the patent even contested?" I'm researching the case, but the jury may have made a special verdict, whereby they are posed several questions and they rule on each in turn (Federal Rules of Civil Procedure 49(a)). So it is very possible that they could find the patent valid but Akamai non-infringing. More to follow. That brief you sent seemed to be in regards to the trial court case which was superseded by an appellate case (I think). 
Susheel From ian at locut.us Sun Jan 16 11:48:17 2005 From: ian at locut.us (Ian Clarke) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Altnet goes after p2p networks with obvious patent In-Reply-To: <1cd056b905011518213af08b36@mail.gmail.com> References: <1cd056b905011518213af08b36@mail.gmail.com> Message-ID: <835EC3B0-67B4-11D9-AA32-000D932C5880@locut.us> Since then I spoke to a lawyer, and he pointed out that juries rule on matters of fact, not matters of law, and the validity of this patent would be a matter of law - making it very unlikely that their claim is accurate. Ian. On 16 Jan 2005, at 02:21, Susheel Daswani wrote: > "Clearly I am not a lawyer (and everything I know about IP law I wish I > didn't need to know), but I am not sure why a jury, having decided that > Akamai doesn't use the patented technology, would bother to comment on > whether the patent was valid. Was the validity of the patent even > contested?" > > I'm researching the case, but the jury may have made a special > verdict, whereby they are posed several questions and they rule on > each in turn (Federal Rules of Civil Procedure 49(a)). So it is very > possible that they could find the patent valid but Akamai > non-infringing. > > More to follow. That brief you sent seemed to be in regards to the > trial court case which was superseded by an appellate case (I think). 
> Susheel > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > -- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ From sdaswani at gmail.com Sun Jan 16 18:36:53 2005 From: sdaswani at gmail.com (Susheel Daswani) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Altnet goes after p2p networks with obvious patent In-Reply-To: <835EC3B0-67B4-11D9-AA32-000D932C5880@locut.us> References: <1cd056b905011518213af08b36@mail.gmail.com> <835EC3B0-67B4-11D9-AA32-000D932C5880@locut.us> Message-ID: <1cd056b90501161036249d6cfc@mail.gmail.com> Well definitely defer to his advice - I'm only 1/6th a lawyer, and maybe not even that much if you have to count bar passage :). But at least I don't charge ;). That makes sense though - juries are factfinders, so if all the facts regarding patent validity were uncontested by both parties, the ruling would be a matter of law for a judge. I've been looking at the case Akamai Techs. v. Cable & Wireless Internet Servs., 344 F.3d 1186. The patents mentioned in this case are Patent No. 6,108,703 and Patent No. 6,185,598. Neither of those match the patent that Altnet is alleging infringed, right? I'm not sure if the patent numbering system is funky or whatever. Also, there was definitely a jury decision at the trial court level, but I'm not sure regarding what. I'm going to read this case fully soon and then I'll report back. I am just afraid it isn't relevant. 
Sush From shiner_chen at yahoo.com.cn Tue Jan 18 02:50:48 2005 From: shiner_chen at yahoo.com.cn (shiner chen) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] reliable file transfer over UDP Message-ID: <20050118025048.59895.qmail@web15508.mail.cnb.yahoo.com> I want to implement reliable file transfer over UDP. At the same time, it should work through NAT; that is, two peers behind different NATs should be able to transfer files to each other. Can you send the code to me, if you have code for that? my email: shiner_chen@yahoo.com.cn thanks! Shiner Chen 17th Jan 2005 --------------------------------- Do You Yahoo!? Sign up for world-class free Yahoo! Mail -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://zgp.org/pipermail/p2p-hackers/attachments/20050118/05ded879/attachment.html From ardagna at dti.unimi.it Tue Jan 18 08:57:22 2005 From: ardagna at dti.unimi.it (Claudio Agostino Ardagna) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] CFP: 10th European Symposium on Research in Computer Security Message-ID: <00b701c4fd3b$b9175b70$0b00000a@Berlino> [Apologies if you receive multiple copies of this message] CALL FOR PAPERS ESORICS 2005 10TH EUROPEAN SYMPOSIUM ON RESEARCH IN COMPUTER SECURITY Milan, Italy - September 14-16, 2005 Organized by University of Milan http://esorics05.dti.unimi.it/ *********************************************************************** SUBMISSION OF PAPERS: MARCH 25, 2005 *********************************************************************** Papers offering novel research contributions in any aspect of computer security are solicited for submission to the Tenth European Symposium on Research in Computer Security (ESORICS 2005). Organized in a series of European countries, ESORICS is confirmed as the European research event in computer security. The symposium started in 1990 and has been held on alternate years in different European countries and attracts an international audience from both the academic and industrial communities. From 2002 it has been held yearly. The Symposium has established itself as one of the premier international gatherings on information assurance. 
Papers may present theory, technique, applications, or practical experience on topics including: - access control - accountability - anonymity - applied cryptography - authentication - covert channels - cryptographic protocols - cybercrime - data and application security - data integrity - denial of service attacks - dependability - digital rights management - firewalls - formal methods in security - identity management - inference control - information dissemination control - information flow control - information warfare - intellectual property protection - intrusion tolerance - language-based security - network security - non-interference - peer-to-peer security - privacy-enhancing technology - pseudonymity - secure electronic commerce - security administration - security as quality of service - security evaluation - security management - security models - security requirements engineering - security verification - smartcards - steganography - subliminal channels - survivability - system security - transaction management - trust models and trust management policies - trustworthy user devices The primary focus is on high-quality original unpublished research, case studies and implementation experiences. We encourage submissions of papers discussing industrial research and development. Proceedings will be published by Springer-Verlag in the Lecture Notes in Computer Science series. INSTRUCTIONS FOR PAPER SUBMISSIONS Submitted papers must not substantially overlap papers that have been published or that are simultaneously submitted to a journal or a conference with proceedings. Papers should be at most 15 pages excluding the bibliography and well-marked appendices (using 11-point font), and at most 20 pages total. Committee members are not required to read the appendices, and so the paper should be intelligible without them. 
To submit a paper, send to esorics05@dti.unimi.it a plain ASCII text email containing the title and abstract of your paper, the authors' names, email and postal addresses, phone and fax numbers, and identification of the contact author. To the same message, attach your submission (as a MIME attachment) in PDF or portable postscript format. Do NOT send files formatted for word processing packages (e.g., Microsoft Word or WordPerfect files). Submissions not meeting these guidelines risk rejection without consideration of their merits. Submissions must be received by March 25, 2005 in order to be considered. Notification of acceptance or rejection will be sent to authors by May 30, 2005. Authors of accepted papers must be prepared to sign a copyright statement and must guarantee that their paper will be presented at the conference. Authors of accepted papers must follow the Springer Information for Authors' guidelines for the preparation of the manuscript and use the templates provided there. GENERAL CHAIR Pierangela Samarati University of Milan email: samarati@dti.unimi.it PROGRAM CHAIRS Sabrina De Capitani di Vimercati University of Milan email: decapita@dti.unimi.it Paul Syverson Naval Research Laboratory url: www.syverson.org PUBLICATION CHAIR Dieter Gollman TU Hamburg-Harburg email: diego@tuhh.de PUBLICITY CHAIR Claudio A. 
Ardagna University of Milan, Italy email: ardagna@dti.unimi.it IMPORTANT DATES Paper Submission due: March 25, 2005 Notification: May 30, 2005 Final papers due: June 30, 2005 PROGRAM COMMITTEE Rakesh Agrawal, IBM Almaden Research Center, USA Gerard Allwein, Naval Research Laboratory, USA Ross Anderson, University of Cambridge, UK Vijay Atluri, Rutgers University, USA Michael Backes, IBM Zurich Research Laboratory, Switzerland Jan Camenisch, IBM Zurich Research Laboratory, Switzerland David Chadwick, University of Kent, UK Marc Dacier, Institut Eurécom, France George Danezis, University of Cambridge, UK Simon Foley, University College, Ireland Sushil Jajodia, George Mason University, USA Dogan Kesdogan, RWTH Aachen, Informatik IV, Germany Peng Liu, The Pennsylvania State University, USA Javier Lopez, University of Malaga, Spain Heiko Mantel, ETH-Zentrum, Switzerland Nick Mathewson, The Free Haven Project, USA Patrick McDaniel, The Pennsylvania State University, USA Peng Ning, NC State University, USA Peter Ryan, University of Newcastle upon Tyne, UK Kazue Sako, NEC Corporation, Japan Pierangela Samarati, University of Milan, Italy Mariemma I. Yague, University of Malaga, Spain Vanessa Teague, University of Melbourne, Australia (Not yet completed) -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20050118/ae874e6b/attachment.htm From bryan.turner at pobox.com Wed Jan 19 15:19:22 2005 From: bryan.turner at pobox.com (Bryan Turner) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Generalizing BitTorrent.. Message-ID: <200501191519.j0JFJQoA019503@rtp-core-2.cisco.com> Greg, Actually, this is not a problem for torrents. Bram was very inventive with the torrent file format and allows torrent builders to include a ragged hierarchy. In the case of the MAME torrents, the torrent file is literally 95% the same. 
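The "catalog" view of a torrent that Bryan describes can be sketched as follows. This is illustration code only, not a parser for real bencoded .torrent files: it just shows how fixed-size pieces run in order across the file list, so each piece maps to byte ranges of one or more files (the "glue-up").

```python
# Hypothetical sketch of the torrent glue-up: given the in-order file
# list and a fixed piece size, compute which byte ranges of which files
# each piece covers. File names and sizes below are made up.
PIECE_LEN = 4  # tiny piece size so the example stays readable

def piece_map(files, piece_len=PIECE_LEN):
    """files: list of (name, length) in torrent order.
    Returns {piece_index: [(name, start, end), ...]}."""
    mapping = {}
    offset = 0  # absolute offset into the concatenated file stream
    for name, length in files:
        pos = 0
        while pos < length:
            idx = (offset + pos) // piece_len
            # How many bytes fit before this piece's boundary ends.
            take = min(length - pos, piece_len - (offset + pos) % piece_len)
            mapping.setdefault(idx, []).append((name, pos, pos + take))
            pos += take
        offset += length
    return mapping
```

For files [("a.rom", 6), ("b.rom", 5)] and 4-byte pieces, piece 1 straddles both files, which is exactly why a whole-directory torrent shares pieces across versions only when file offsets line up.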
The ragged hierarchy (including directories and file names, but not flags to my knowledge) is included in the torrent, followed by a list of fixed-size chunks assigned to each file in-order. The last chunk of each file may be smaller than the fixed size. In effect, a torrent file is already a catalog of how to glue the pieces together! The main difference is in what you're looking for - a piece or an entire torrent. I propose looking for each piece separately, while Bit Torrent searches for each torrent as a whole. Good torrents tend to individually gzip the files in-place, then export the entire directory as a torrent (rather than as a tar/gzip archive). This is how the MAME torrents are designed and it works incredibly smoothly. Only the changes are downloaded between versions, just like a base + diffs model. I believe the method I proposed is more general, because it includes the base + diffs model as well as ragged shared-hierarchy systems that have nothing else in common. For instance, a source distribution of a large open-source project. In order to distribute the entire project in one lump, it may include significant common files from other open-source projects. This leads to many torrents each sharing the common libraries. Peers looking for Project A trade common files with peers looking for Project B, and also with peers looking only for the common library. Seeds of one project are also seeds of all the others which include common functionality. --Bryan bryan.turner@pobox.com From jcea at argo.es Wed Jan 19 16:11:37 2005 From: jcea at argo.es (Jesus Cea) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Generalizing BitTorrent.. In-Reply-To: <200501191519.j0JFJQoA019503@rtp-core-2.cisco.com> References: <200501191519.j0JFJQoA019503@rtp-core-2.cisco.com> Message-ID: <41EE86B9.10606@argo.es> Bryan Turner wrote: > In effect, a torrent file is already a catalog of how to glue the > pieces together! The main difference is in what you're looking for - a > piece or an entire torrent. I propose looking for each piece separately, > while Bit Torrent searches for each torrent as a whole. Exactly. An implementation problem is that a lot of BT clients keep an "open file descriptor" active for each file in the torrent, reaching easily the OS limit. 
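One way around that limit, sketched here hypothetically (not taken from any actual BitTorrent client), is to cap the number of open descriptors and reopen files lazily, evicting the least recently used:

```python
# Hypothetical LRU file-descriptor cache: keep at most `limit` files
# open, closing the least recently used one when a new file is needed.
from collections import OrderedDict

class FDCache:
    def __init__(self, limit=32):
        self.limit = limit
        self.open_files = OrderedDict()  # path -> file object, LRU order

    def get(self, path, mode="rb"):
        f = self.open_files.pop(path, None)
        if f is None:
            if len(self.open_files) >= self.limit:
                # Evict the least recently used descriptor.
                _, oldest = self.open_files.popitem(last=False)
                oldest.close()
            f = open(path, mode)
        self.open_files[path] = f  # (re)insert as most recently used
        return f

    def close_all(self):
        for f in self.open_files.values():
            f.close()
        self.open_files.clear()
```

A client would route every file read through `cache.get(path)` instead of holding one descriptor per file in the torrent, so a 3000-file torrent needs only a handful of descriptors at any moment.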
They should keep only a small number of file descriptors around, perhaps reusing them in an LRU or random way. Braindead clients... :-) -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea@argo.es http://www.argo.es/~jcea/ _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ _/_/_/_/_/ PGP Key Available at KeyServ _/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "Love is putting your happiness in the happiness of another" - Leibniz From eugen at leitl.org Wed Jan 19 19:35:56 2005 From: eugen at leitl.org (Eugen Leitl) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] [IP] CA State bill could cripple P2P (fwd from dave@farber.net) Message-ID: <20050119193555.GF9221@leitl.org> ----- Forwarded message from David Farber ----- From: David Farber Date: Wed, 19 Jan 2005 13:51:24 -0500 To: Ip Subject: [IP] CA State bill could cripple P2P User-Agent: Microsoft-Entourage/11.1.0.040913 Reply-To: dave@farber.net ------ Forwarded Message From: Dewayne Hendricks Reply-To: Date: Wed, 19 Jan 2005 01:38:48 -0800 To: Dewayne-Net Technology List Subject: [Dewayne-Net] CA State bill could cripple P2P State bill could cripple P2P By John Borland Story last modified Tue Jan 18 17:55:00 PST 2005 A bill introduced in California's Legislature last week has raised the possibility of jail time for developers of file-swapping software who don't stop trades of copyrighted movies and songs online. The proposal, introduced by Los Angeles Sen. Kevin Murray, takes direct aim at companies that distribute software such as Kazaa, eDonkey or Morpheus. If passed and signed into law, it could expose file-swapping software developers to fines of up to $2,500 per charge, or a year in jail, if they don't take "reasonable care" in preventing the use of their software to swap copyrighted music or movies--or child pornography.
Peer-to-peer software companies and their allies immediately criticized the bill as a danger to technological innovation, and as potentially unconstitutional. "State Sen. Murray did not choose to seek out the facts before introducing misguided legislation that effectively would make criminals out of many companies that bring jobs and economic growth to California," Mike Weiss, CEO of Morpheus parent StreamCast Networks, said in a statement. "This bill is an attack on innovation itself and tax-paying California-based businesses like StreamCast depend on that freedom to innovate." The bill comes as much of the technology world is waiting for the Supreme Court to rule on the legal status of file-swapping technology. Federal courts have twice ruled that peer-to-peer software companies are not legally responsible for the illegal actions of people using their products. Hollywood studios and record companies appealed those decisions to the nation's top court, which is expected to rule on the issue this summer. In the meantime, entertainment companies' push for federal legislation on the file-swapping issue has been put temporarily on the back burner. A controversial bill that would have put more legal responsibility on the peer-to-peer developers failed to pass at the end of last year's congressional session. California has taken a lead among states in putting pressure on the file-swapping world. Attorney General Bill Lockyer was a key figure last year in pushing for more state-level legal scrutiny of the companies' actions, and Gov. Arnold Schwarzenegger has sought to ban illegal downloading on any state computers, including those owned by the state university systems.
[snip] ------ End of Forwarded Message ----- End forwarded message ----- -- Eugen* Leitl leitl ______________________________________________________________ ICBM: 48.07078, 11.61144 http://www.leitl.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE http://moleculardevices.org http://nanomachines.net From sam at neurogrid.com Wed Jan 19 23:09:54 2005 From: sam at neurogrid.com (Sam Joseph) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Workshop on Agents and Peer-to-Peer Computing (AP2PC 2005) Message-ID: <41EEE8C2.7060605@neurogrid.com> *** our apologies if you receive multiple copies of this e-mail *** Preliminary Call for Papers for the Fourth International Workshop on Agents and Peer-to-Peer Computing (AP2PC 2005) http://p2p.ingce.unibo.it/ held in AAMAS 2005 International Conference on Autonomous Agents and MultiAgent Systems Utrecht University, Netherlands. from 25 July - 29 July 2005. CALL FOR PAPERS Peer-to-peer (P2P) computing has attracted enormous media attention, initially spurred by the popularity of file sharing systems such as Napster, Gnutella, and Morpheus. Systems like BitTorrent and eDonkey have continued to sustain that attention. The peers are autonomous, or as some call them, first-class citizens. P2P networks are emerging as a new distributed computing paradigm for their potential to harness the computing power of the hosts composing the network and make their under-utilized resources available to others.
New techniques such as distributed hash tables (DHTs), semantic routing, and Plaxton Meshes are being combined with traditional concepts such as Hypercubes, Trust Metrics and caching techniques to pool together the untapped computing power at the "edges" of the Internet. These new techniques and possibilities have generated a lot of interest in many industrial organizations recently, and have resulted in the creation of a P2P working group for undertaking standardization activities in this area (http://www.irtf.org/charters/p2prg.html). In P2P computing peers and services organise themselves dynamically without central coordination in order to foster knowledge sharing and collaboration, both in cooperative and non-cooperative environments. The success of P2P systems strongly depends on a number of factors. First, the ability to ensure equitable distribution of content and services. Economic and business models which rely on incentive mechanisms to supply contributions to the system are being developed, along with methods for controlling the "free riding" issue. Second, the ability to enforce provision of trusted services. Reputation-based P2P trust management models are becoming a focus of the research community as a viable solution. The trust models must balance both constraints imposed by the environment (e.g. scalability) and the unique properties of trust as a social and psychological phenomenon. Recently, we are also witnessing a move of the P2P paradigm to embrace mobile computing in an attempt to achieve even greater ubiquity. The possibility of services related to physical location and the relation with agents in physical proximity could introduce new opportunities and also new technical challenges.
Although researchers working on distributed computing, MultiAgent Systems, databases and networks have been using similar concepts for a long time, it is only fairly recently that papers motivated by the current P2P paradigm have started appearing in high-quality conferences and workshops. Research in agent systems in particular appears to be most relevant because, since their inception, MultiAgent Systems have always been thought of as networks of peers. The MultiAgent paradigm can thus be superimposed on the P2P architecture, where agents embody the description of the task environments, the decision-support capabilities, the collective behavior, and the interaction protocols of each peer. The emphasis in this context on decentralization, user autonomy, and ease and speed of growth that gives P2P its advantages also leads to significant potential problems. Most prominent among these problems are coordination: the ability of an agent to make decisions on its own actions in the context of activities of other agents, and scalability: the value of P2P systems lies in how well they scale along several dimensions, including complexity, heterogeneity of peers, robustness, traffic redistribution, and so on. It is important to scale up coordination strategies along multiple dimensions to enhance their tractability and viability, and thereby to widen the application domains. These two problems are common to many large-scale applications. Without coordination, agents may waste their efforts, squander resources, and fail to achieve their objectives in situations requiring collective effort. This workshop will bring together researchers working on agent systems and P2P computing with the intention of strengthening this connection. Researchers from other related areas such as distributed systems, networks and database systems will also be welcome (and, in our opinion, have a lot to contribute).
We seek high-quality and original contributions on the general theme of "Agents and P2P Computing". The following is a non-exhaustive list of topics of special interest: - Intelligent agent techniques for P2P computing - P2P computing techniques for MultiAgent Systems - The Semantic Web, Semantic Coordination Mechanisms and P2P systems - Scalability, coordination, robustness and adaptability in P2P systems - Self-organization and emergent behavior in P2P networks - E-commerce and P2P computing - Participation and Contract Incentive Mechanisms in P2P Systems - Computational Models of Trust and Reputation - Community of interest building and regulation, and behavioral norms - Intellectual property rights in P2P systems - P2P architectures - Scalable Data Structures for P2P systems - Services in P2P systems (service definition languages, service discovery, filtering and composition, etc.) - Knowledge Discovery and P2P Data Mining Agents - P2P-oriented information systems - Information ecosystems and P2P systems - Security issues in P2P networks - Pervasive computing based on P2P architectures (ad-hoc networks, wireless communication devices and mobile systems) - Grid computing solutions based on agents and P2P paradigms - Legal issues in P2P networks PANEL The theme of the panel will be Decentralised Trust in P2P and MultiAgent Systems. As P2P and MultiAgent systems become larger and more diverse the risks of interacting with malicious peers become increasingly problematic. The panel will address how computational trust issues can be addressed in P2P and MultiAgent systems. The panel will involve short presentations by the panelists followed by a discussion session involving the audience.
IMPORTANT DATES Paper submission: 14th March 2005 Acceptance notification: 18th April 2005 Workshop: 25-26th July 2005 Camera-ready for post-proceedings: 17th August 2005 REGISTRATION Accommodation and workshop registration will be handled by the AAMAS 2005 organization along with the main conference registration. SUBMISSION INSTRUCTIONS Unpublished papers should be formatted according to the LNCS/LNAI author instructions for proceedings and they should not be longer than 12 pages (about 5000 words including figures, tables, references, etc.). A web submission interface will be provided shortly at http://p2p.ingce.unibo.it/ At the very least we would encourage all authors to read the abstracts of the papers submitted to previous workshops: http://p2p.ingce.unibo.it/2002/ http://www.springeronline.com/sgw/cda/frontpage/0,11855,5-40109-22-2991818-0,00.html http://p2p.ingce.unibo.it/2003/ http://www.springeronline.com/sgw/cda/frontpage/0,11855,5-40109-22-37060961-0,00.html http://p2p.ingce.unibo.it/2004/ Particular preference will be given to novel approaches and those papers that build upon the contributions of papers presented at previous AP2PC workshops. In addition, on the workshop website we will present more precise details of how papers will be judged for inclusion. So please check http://p2p.ingce.unibo.it/ before your final submission. PUBLICATION Accepted papers will be distributed to the workshop participants as workshop notes. As in previous years, post-proceedings of the revised papers (namely accepted papers presented at the workshop) will be submitted for publication to Springer in the Lecture Notes in Computer Science series. ORGANIZING COMMITTEE Program Co-chairs Zoran Despotovic School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne (EPFL) CH-1015 Lausanne, Switzerland Email zoran.despotovic@epfl.ch Sam Joseph (main contact) Dept.
of Information and Computer Science, University of Hawaii at Manoa, USA 1680 East-West Road, POST 309, Honolulu, HI 96822 E-mail: srjoseph@hawaii.edu Claudio Sartori Dept. of Electronics, Computer Science and Systems, University of Bologna, Italy Viale Risorgimento, 2 - 40136 Bologna Italy E-mail: claudio.sartori@unibo.it Panel Chair Munindar P. Singh Dept. of Computer Science, North Carolina State University, USA E-mail: mpsingh@eos.ncsu.edu PROGRAM COMMITTEE Karl Aberer, EPFL, Lausanne, Switzerland Alessandro Agostini, ITC-IRST, Trento, Italy Sonia Bergamaschi, University of Modena & Reggio-Emilia, Italy M. Brian Blake, Georgetown University, USA Rajkumar Buyya, University of Melbourne, Australia Ooi Beng Chin, National University of Singapore, Singapore Paolo Ciancarini, University of Bologna, Italy Costas Courcoubetis, Athens University of Economics and Business, Greece Yogesh Deshpande, University of Western Sydney, Australia Asuman Dogac, Middle East Technical University, Turkey Boi V. Faltings, EPFL, Lausanne, Switzerland Maria Gini, University of Minnesota, USA Chihab Hanachi, University of Toulouse, France Mark Klein, Massachusetts Institute of Technology, USA Matthias Klusch, DFKI, Saarbrucken, Germany Yannis Labrou, PowerMarket Inc., USA Tan Kian Lee, National University of Singapore, Singapore Dejan Milojicic, Hewlett Packard Labs, USA Alberto Montresor, University of Bologna, Italy Luc Moreau, University of Southampton, UK Jean-Henry Morin, University of Geneve, Switzerland John Mylopoulos, University of Toronto, Canada Andrea Omicini, University of Bologna, Italy Maria Orlowska, University of Queensland, Australia Aris. M. Ouksel, University of Illinois at Chicago, USA Mike Papazoglou, Tilburg University, Netherlands Terry R. 
Payne, University of Southampton, UK Paolo Petta, Austrian Research Institute for AI, Austria, Jeremy Pitt, Imperial College, UK Dimitris Plexousakis, Institute of Computer Science, FORTH, Greece Martin Purvis, University of Otago, New Zealand Omer F. Rana, Cardiff University, UK Douglas S. Reeves, North Carolina State University, USA Thomas Risse, Fraunhofer IPSI, Darmstadt, Germany Pierangela Samarati, University of Milan, Italy Christophe Silbertin-Blanc, University of Toulouse, France Maarten van Steen, Vrije Universiteit, Netherlands Markus Stumptner, University of South Australia, Australia Katia Sycara, Robotics Institute, Carnegie Mellon University, USA Peter Triantafillou, Technical University of Crete, Greece Anand Tripathi, University of Minnesota, USA Vijay K. Vaishnavi, Georgia State University, USA Francisco Valverde-Albacete, Universidad Carlos III de Madrid, Spain Maurizio Vincini, University of Modena & Reggio-Emilia, Italy Fang Wang, Btexact Technologies, UK Gerhard Weiss, Technische Universitaet, Germany Bin Yu, North Carolina State University, USA Franco Zambonelli, University of Modena & Reggio-Emilia, Italy From bryan.turner at pobox.com Fri Jan 21 17:43:34 2005 From: bryan.turner at pobox.com (Bryan Turner) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Altnet goes after p2p networks with obvious patent Message-ID: <200501211743.j0LHhZW0008009@rtp-core-1.cisco.com> Another note regarding content hashing.. I'm reviewing the CFS [1] paper and it specifically states: "CFS authenticates data by naming it with .. content hashes .. The use of content hashes to securely link together different pieces of data is due to Merkle [2] .." The Merkle reference is 1987, regarding digital signatures of a document. Not sure if it suggests using the signature as a handle to the document, which appears to be the claim of this patent, but the CFS authors seem to think so. 
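The linking scheme at issue is small enough to sketch: name every block by the hash of its contents, and let a parent block embed its children's names, so that a single root hash both authenticates and links the whole structure. This is a minimal illustration of the general idea, not the CFS implementation:

```python
import hashlib

store: dict[str, bytes] = {}   # content-addressed block store: hash -> data

def put(data: bytes) -> str:
    """Store a block under the hash of its contents and return that name."""
    h = hashlib.sha1(data).hexdigest()
    store[h] = data
    return h

def get(name: str) -> bytes:
    data = store[name]
    # Self-certifying: the name proves the data hasn't been tampered with.
    assert hashlib.sha1(data).hexdigest() == name
    return data

# Leaf blocks are named by their hashes; a parent block that embeds those
# names is itself hashed, so one root hash securely links all the pieces.
left, right = put(b"piece one"), put(b"piece two")
root = put((left + "\n" + right).encode())
children = get(root).decode().split("\n")
print(children == [left, right])  # True
```

Requesting a file "by a hash of its contents" is then just `get(root)` followed by `get` on each child name, which is the behavior the patent appears to claim.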
--Bryan bryan.turner@cisco.com [1] http://www.pdos.lcs.mit.edu/papers/cfs:sosp01/cfs_sosp.pdf [2] Merkle, R. C. A digital signature based on a conventional encryption function. Advances in Cryptology - CRYPTO '87 (Berlin, 1987), C. Pomerance, Ed., vol. 293 of Lecture Notes in Computer Science, Springer-Verlag, pp. 369-378 -----Original Message----- From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org] On Behalf Of Ian Clarke Sent: Tuesday, January 11, 2005 2:17 PM To: Peer-to-peer development. Subject: [p2p-hackers] Altnet goes after p2p networks with obvious patent http://p2pnet.net/story/3512 It seems that Altnet is finally going after file sharing networks with its laughably obvious patent on requesting files by a hash of the file's contents (fortunately Freenet's developers are predominantly European, and thus are largely immune to this). IIRC this patent was filed in 1997. I think it is very important that those attacked challenge this patent head-on, either by claiming it is invalid due to being obvious, or finding prior art. I vaguely recall the last time I researched this that there was prior art from as early as 1990, I think it was Project Xanadu (http://xanadu.com/). Can anyone provide specific pointers to good examples of prior art? If Altnet succeeds in extorting any money out of these P2P companies it will only serve to encourage them to attack others. Ian. 
-- Founder, The Freenet Project http://freenetproject.org/ CEO, Cematics Ltd http://cematics.com/ Personal Blog http://locut.us/~ian/blog/ _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From Serguei.Osokine at efi.com Tue Jan 25 18:32:35 2005 From: Serguei.Osokine at efi.com (Serguei Osokine) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Using UDT for swarming Message-ID: <4A60C83D027E224BAA4550FB1A2B120E0DC308@fcexmb04.efi.internal> People, does anyone have an opinion about UDT? http://www.ietf.org/internet-drafts/draft-gg-udt-01.txt http://www.ncdm.uic.edu/papers/udt-protocol.pdf http://www.ncdm.uic.edu/papers/udt-control.pdf - at first glance it looks like it might be pretty relevant for swarming transfers, since it tries to maintain fairness in the bandwidth allocation between multiple data streams with very different RTTs. That is something that TCP cannot do, and I have a suspicion that this issue might be one of the reasons why BitTorrent works wonderfully when you have just one file to upload, but as soon as you get multiple simultaneous uploads, things go sour pretty fast (e.g. Exeem and most of the P2P apps) - you start seeing very slow streams and such. (Actually, I'm mentioning swarming just because it tends to increase the number of concurrent uploads system-wide; even without it, the concurrent uploads have always been a problem in any P2P system I can think of, with a possible exception of Onion Networks - I'm not sure how well Justin handles concurrency.) Anyway - did anyone try UDT or at least look into it? Greg, LimeWire uses something much simpler, correct? Best wishes - S.Osokine. 25 Jan 2005.
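The RTT bias behind this question falls out of the standard steady-state TCP throughput approximation of Mathis et al., rate ≈ (MSS/RTT) · sqrt(3/2) / sqrt(p): at equal loss rate, throughput is inversely proportional to RTT. A quick sketch with illustrative numbers only:

```python
from math import sqrt

def tcp_rate_bps(mss_bytes: float, rtt_s: float, loss: float) -> float:
    """Mathis et al. steady-state TCP throughput approximation, in bits/sec."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss)) * sqrt(3 / 2)

# Two flows competing on a path with 1% loss: a 20 ms RTT flow vs a 200 ms one.
fast = tcp_rate_bps(1460, 0.020, 0.01)
slow = tcp_rate_bps(1460, 0.200, 0.01)
print(f"fast/slow rate ratio: {fast / slow:.0f}x")  # 10x
```

So with many concurrent TCP uploads sharing a bottleneck, the long-RTT peers are structurally starved, which matches the "very slow streams" symptom described above.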
From gbildson at limepeer.com Tue Jan 25 19:38:25 2005 From: gbildson at limepeer.com (Greg Bildson) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Using UDT for swarming In-Reply-To: <4A60C83D027E224BAA4550FB1A2B120E0DC308@fcexmb04.efi.internal> Message-ID: I was looking around for these kinds of things last spring but didn't find anything that great. Given that this submission is dated August 2004, I guess this proposal didn't exist at the time. The implementation of our reliable UDP with hole punching protocol was basically done by August 2004. This proposal looks interesting. I'm sure it is much more rigorous than what we did but at the same time, the protocol overhead looks fairly substantial. What we did can also coexist with UDP delivered Gnutella messages - for better or worse. Thanks -greg > -----Original Message----- > From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org]On > Behalf Of Serguei Osokine > Sent: Tuesday, January 25, 2005 1:33 PM > To: p2p-hackers@zgp.org > Subject: [p2p-hackers] Using UDT for swarming > > > > People, does anyone have an opinion about UDT? > > http://www.ietf.org/internet-drafts/draft-gg-udt-01.txt > http://www.ncdm.uic.edu/papers/udt-protocol.pdf > http://www.ncdm.uic.edu/papers/udt-control.pdf > > - at the first glance it looks like it might be pretty relevant for > the swarming transfers, since it tries to maintain the fairness in > the bandwidth allocation between multiple data streams with very > different RTTs. That is something that TCP cannot do, and I have > a suspicion that this issue might be one of the reasons of why > BitTorrent works wonderfully when you have just one file to upload, > but as soon as you get multiple simultaneous uploads, thing go sour > pretty fast (i.e. Exeem and most of the P2P apps) - you start seeing > very slow streams and such. 
> > (Actually, I'm mentioning swarming just because it tends to increase > the number of concurrent uploads system-wide; even without it, the > concurrent uploads have always been a problem in any P2P system I can > think of, with a possible exception of Onion Networks - I'm not sure > how well does Justin handle concurrency.) > > Anyway - did anyone try UDT or at least looked into it? Greg, > LimeWire uses something much more simple, correct? > > Best wishes - > S.Osokine. > 25 Jan 2005. > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From matthew at matthew.at Tue Jan 25 19:58:51 2005 From: matthew at matthew.at (Matthew Kaufman) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Using UDT for swarming In-Reply-To: <4A60C83D027E224BAA4550FB1A2B120E0DC308@fcexmb04.efi.internal> Message-ID: <014101c50318$4dbf5390$02c7cac6@matthewdesk> This protocol is clearly designed to operate in something other than today's consumer Internet. It was designed to outperform traditional TCP over large delay*bandwidth networks, but actually gives TCP the edge in more typical network settings. For file transfers over actual connections you're likely to see (unless you have a GigE plugged right into Internet2 on your desk), TCP is a better choice, and if you need UDP for NAT traversal or other reasons, there are better performing (and simpler to implement, as this requires things like fine-grained clocks) choices. And if your design parameter is large numbers of streams with vastly different RTTs and you wanted to improve over TCP's handling of this, you'd make different choices than were made for UDT. 
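The "large delay*bandwidth" regime is concrete arithmetic: the pipe holds bandwidth × RTT bytes in flight, and a sender limited to one window per RTT cannot exceed window/RTT. A sketch with illustrative numbers (the GigE-into-Internet2 case versus TCP's classic 64 KB window without window scaling):

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return bandwidth_bps / 8 * rtt_s

def window_limited_bps(window_bytes: float, rtt_s: float) -> float:
    """Throughput ceiling when at most one window can be unacknowledged per RTT."""
    return window_bytes * 8 / rtt_s

# A gigabit path with 100 ms RTT needs 12.5 MB in flight...
print(f"BDP: {bdp_bytes(1e9, 0.1) / 1e6:.1f} MB")
# ...but a 64 KB window caps the sender at roughly 5.2 Mbit/s on that path.
print(f"64 KB window ceiling: {window_limited_bps(65535, 0.1) / 1e6:.1f} Mbit/s")
```

On a typical consumer DSL path the BDP is a few tens of kilobytes, so plain TCP already fills the pipe and UDT's extra machinery buys little, which is the point being made above.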
Matthew Kaufman matthew@matthew.at http://www.amicima.com > -----Original Message----- > From: p2p-hackers-bounces@zgp.org > [mailto:p2p-hackers-bounces@zgp.org] On Behalf Of Serguei Osokine > Sent: Tuesday, January 25, 2005 10:33 AM > To: p2p-hackers@zgp.org > Subject: [p2p-hackers] Using UDT for swarming > > > > People, does anyone have an opinion about UDT? > From Serguei.Osokine at efi.com Tue Jan 25 20:29:07 2005 From: Serguei.Osokine at efi.com (Serguei Osokine) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Using UDT for swarming Message-ID: <4A60C83D027E224BAA4550FB1A2B120E0DC30D@fcexmb04.efi.internal> On Tuesday, January 25, 2005 Matthew Kaufman wrote: > It was designed to outperform traditional TCP over large delay* > bandwidth networks, but actually gives TCP the edge in more typical > network settings. Well, that depends on what "large delay*bandwidth" is. Even the slow link might fall into this category if the RTT is high enough. So to be fair to UDT, it might prove to be quite applicable for a typical concurrent upload situation. Like I said, I did not do any careful analysis of the UDT and of its behaviour on the slower links and was wondering if anyone did and whether there are any better alternatives. > And if your design parameter is large numbers of streams with vastly > different RTTs and you wanted to improve over TCP's handling of this, > you'd make different choices than were made for UDT. Yeah, that was pretty much the reason behind my question. What would be these different choices? Got any pointers? The apparent UDT intent to optimize the gigabit transfers - possibly at the expense of the slower streams - was a bit of a warning flag for me, too - but I did not see any project that even asks these questions, much less answers them. 
One possible exception might be all this FEC voodoo in the multicasting group, but for me it feels like a bit of overkill for the job (even if I were sure that the FLUTE/ALC/LCT stack really does resolve these issues, which I'm not). Best wishes - S.Osokine. 25 Jan 2005. -----Original Message----- From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org] On Behalf Of Matthew Kaufman Sent: Tuesday, January 25, 2005 11:59 AM To: 'Peer-to-peer development.' Subject: RE: [p2p-hackers] Using UDT for swarming This protocol is clearly designed to operate in something other than today's consumer Internet. It was designed to outperform traditional TCP over large delay*bandwidth networks, but actually gives TCP the edge in more typical network settings. For file transfers over actual connections you're likely to see (unless you have a GigE plugged right into Internet2 on your desk), TCP is a better choice, and if you need UDP for NAT traversal or other reasons, there are better performing (and simpler to implement, as this requires things like fine-grained clocks) choices. And if your design parameter is large numbers of streams with vastly different RTTs and you wanted to improve over TCP's handling of this, you'd make different choices than were made for UDT. Matthew Kaufman matthew@matthew.at http://www.amicima.com > -----Original Message----- > From: p2p-hackers-bounces@zgp.org > [mailto:p2p-hackers-bounces@zgp.org] On Behalf Of Serguei Osokine > Sent: Tuesday, January 25, 2005 10:33 AM > To: p2p-hackers@zgp.org > Subject: [p2p-hackers] Using UDT for swarming > > > > People, does anyone have an opinion about UDT?
> _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From justin at chapweske.com Tue Jan 25 21:30:25 2005 From: justin at chapweske.com (Justin Chapweske) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Using UDT for swarming In-Reply-To: <4A60C83D027E224BAA4550FB1A2B120E0DC30D@fcexmb04.efi.internal> References: <4A60C83D027E224BAA4550FB1A2B120E0DC30D@fcexmb04.efi.internal> Message-ID: <1106688625.27262.582.camel@bog> > answers them. One possible exception might be all this FEC vodoo in > the multicasting group, but for me it feels like a bit of an overkill > for the job (even if I would be sure that FLUTE/ALC/LCT stack really > does resolve these issues, which I'm not). The FLUTE/ALC/LCT stack does indeed solve the long fat network problem and we have a number of customers deploying the solution for transfer of extremely large data sets (100 GB+) over both satellite and terrestrial networks. However, the protocol stack has a huge learning curve, is quite complicated to implement, and is massive overkill for many applications. However, our bread-and-butter tends to be providing massive overkill solutions, so it fits quite nicely with what we do :) For the majority of folks, a UDT-type approach might be the best way to go. But honestly, UPnP is becoming increasingly deployed, and Joe P2P Hacker is likely to completely botch basic congestion control, so it might be best for people to stick with TCP and not potentially bring the Internet to a screeching halt. Thanks, -Justin P.S. We'll likely be doing a public release of our satellite/multicast file transfer product sometime this quarter, so anyone who is interested in the meantime can feel free to contact me directly.
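The FEC idea underneath an ALC/LCT-style stack can be shown with the simplest possible code: a single XOR parity packet lets a receiver rebuild any one lost packet without ever requesting a retransmission. Real deployments use far stronger codes (Reed-Solomon, LDPC, and the like); this is only an illustration of the recovery principle:

```python
from functools import reduce

def xor_parity(packets: list[bytes]) -> bytes:
    """One repair packet: the bytewise XOR of all equal-length packets."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

packets = [b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(packets)

# The receiver loses packets[1]; XOR-ing the survivors with the parity
# packet reproduces it, with no retransmission round-trip needed.
recovered = xor_parity([packets[0], packets[2], parity])
print(recovered == b"pkt2")  # True
```

Because recovery needs no feedback channel, the same repair packets serve every receiver at once, which is what makes this family of codes a natural fit for satellite and multicast delivery.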
From Serguei.Osokine at efi.com Tue Jan 25 22:13:16 2005 From: Serguei.Osokine at efi.com (Serguei Osokine) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Using UDT for swarming Message-ID: <4A60C83D027E224BAA4550FB1A2B120E0DC30F@fcexmb04.efi.internal> On Tuesday, January 25, 2005 Justin Chapweske wrote: > The FLUTE/ALC/LCT stack does indeed solve the long fat network > problem and we have a number of customers deploying the solution > for transfer of extremely large data sets (100 GB+) over both > satellite and terrestrial networks. So that would be solving only "long fat" or also "different RTT" problem? That is, do you handle simultaneous terrestrial (low RTT) and satellite (high RTT) streams without the latter getting lower bandwidth than it should due to the unfairness of competition with the former? Best wishes - S.Osokine. 25 Jan 2005. -----Original Message----- From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org]On Behalf Of Justin Chapweske Sent: Tuesday, January 25, 2005 1:30 PM To: Peer-to-peer development. Subject: RE: [p2p-hackers] Using UDT for swarming > answers them. One possible exception might be all this FEC vodoo in > the multicasting group, but for me it feels like a bit of an overkill > for the job (even if I would be sure that FLUTE/ALC/LCT stack really > does resolve these issues, which I'm not). The FLUTE/ALC/LCT stack does indeed solve the long fat network problem and we have a number of customers deploying the solution for transfer of extremely large data sets (100 GB+) over both satellite and terrestrial networks. However, the protocol stack has a huge learning curve, is quite complicated to implement, and is massive overkill for many applications. However, our bread-and-butter tends to be providing massive overkill solutions, so it fits quite nicely with what we do :) For the majority of folks, a UDT-type approach might be the best way to go. 
But honestly, UPnP is becoming increasingly deployed, and Joe P2P Hacker is likely to completely botch basic congestion control, so it might be best for people to stick with TCP and not potentially bring the Internet to a screeching halt. Thanks, -Justin P.S. We'll likely be doing a public release of our satellite/multicast file transfer product sometime this quarter, so anyone that is interested in the mean time can feel free to contact me directly. _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From justin at chapweske.com Tue Jan 25 22:35:39 2005 From: justin at chapweske.com (Justin Chapweske) Date: Sat Dec 9 22:12:50 2006 Subject: [p2p-hackers] Using UDT for swarming In-Reply-To: <4A60C83D027E224BAA4550FB1A2B120E0DC30F@fcexmb04.efi.internal> References: <4A60C83D027E224BAA4550FB1A2B120E0DC30F@fcexmb04.efi.internal> Message-ID: <1106692539.27262.592.camel@bog> On Tue, 2005-01-25 at 14:13 -0800, Serguei Osokine wrote: > On Tuesday, January 25, 2005 Justin Chapweske wrote: > > The FLUTE/ALC/LCT stack does indeed solve the long fat network > > problem and we have a number of customers deploying the solution > > for transfer of extremely large data sets (100 GB+) over both > > satellite and terrestrial networks. > > So that would be solving only "long fat" or also "different RTT" > problem? That is, do you handle simultaneous terrestrial (low RTT) and > satellite (high RTT) streams without the latter getting lower bandwidth > than it should due to the unfairness of competition with the former? The "long fat" and "different RTT" are really two faces to the same problem, so solving one should take care of the other. 
Both are caused by the fact that high RTT flows get pummeled by packet
loss, while low RTT flows are able to bounce back quite quickly after
packet loss.

-Justin

From Serguei.Osokine at efi.com  Tue Jan 25 23:11:08 2005
From: Serguei.Osokine at efi.com (Serguei Osokine)
Date: Sat Dec 9 22:12:50 2006
Subject: [p2p-hackers] Using UDT for swarming
Message-ID: <4A60C83D027E224BAA4550FB1A2B120E0DC311@fcexmb04.efi.internal>

On Tuesday, January 25, 2005 Justin Chapweske wrote:
> The "long fat" and "different RTT" are really two faces of the
> same problem, so solving one should take care of the other. Both
> are caused by the fact that high RTT flows get pummeled by packet
> loss, while low RTT flows are able to bounce back quite quickly
> after packet loss.

	Fair enough - thanks! (Sometimes "long fat" is understood as
"saturating the long fat pipe" without any special attention paid to the
fairness issues - for example, by UDP-bombing the link, the other streams
be damned. Just wanted to make sure that we're on the same page here;
sorry for insulting your intelligence :-)

Best wishes -
S.Osokine.
25 Jan 2005.

-----Original Message-----
From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org]On
Behalf Of Justin Chapweske
Sent: Tuesday, January 25, 2005 2:36 PM
To: Peer-to-peer development.
Subject: RE: [p2p-hackers] Using UDT for swarming

On Tue, 2005-01-25 at 14:13 -0800, Serguei Osokine wrote:
> On Tuesday, January 25, 2005 Justin Chapweske wrote:
> > The FLUTE/ALC/LCT stack does indeed solve the long fat network
> > problem and we have a number of customers deploying the solution
> > for transfer of extremely large data sets (100 GB+) over both
> > satellite and terrestrial networks.
>
> 	So that would be solving only "long fat" or also "different RTT"
> problem?
> That is, do you handle simultaneous terrestrial (low RTT) and
> satellite (high RTT) streams without the latter getting lower bandwidth
> than it should due to the unfairness of competition with the former?

The "long fat" and "different RTT" are really two faces of the same
problem, so solving one should take care of the other. Both are caused
by the fact that high RTT flows get pummeled by packet loss, while low
RTT flows are able to bounce back quite quickly after packet loss.

-Justin

_______________________________________________
p2p-hackers mailing list
p2p-hackers@zgp.org
http://zgp.org/mailman/listinfo/p2p-hackers
_______________________________________________
Here is a web page listing P2P Conferences:
http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences

From seth.johnson at RealMeasures.dyndns.org  Wed Jan 26 20:17:02 2005
From: seth.johnson at RealMeasures.dyndns.org (Seth Johnson)
Date: Sat Dec 9 22:12:50 2006
Subject: [p2p-hackers] Meeting with FTC Friday 28 January 2005
Message-ID: <41F7FABE.99813EAA@RealMeasures.dyndns.org>

> -------- Original Message --------
> Subject: Meeting with FTC Friday 28 January 2005
> Date: Wed, 26 Jan 2005 14:21:12 -0500 (EST)
> From: Jay Sulzberger
>
> If you wish to meet with the FTC regarding Microsoft and
> computer vendors' refusal to honor the refund clause of the
> Microsoft End User License Agreement, write to Jay
> Sulzberger at
>
> jays@panix.com
>
> by 2300 tonight, that is, 11:00 pm Wednesday 26 January
> 2005. Please put the string
>
> Refund Action
>
> in the subject line.
>
> We will be meeting with the FTC Friday 28 January 2005.
> Transport and other matters must be arranged before 1000
> Thursday 27 January 2005.
>
> Thank you!
>
> oo--JS.

-- 
No virus found in this outgoing message.
Checked by AVG Anti-Virus.
Version: 7.0.300 / Virus Database: 265.7.4 - Release Date: 1/25/05
From eugen at leitl.org  Thu Jan 27 14:10:35 2005
From: eugen at leitl.org (Eugen Leitl)
Date: Sat Dec 9 22:12:50 2006
Subject: [p2p-hackers] Re: Simson Garfinkel analyses Skype - Open Society Institute (fwd from pgut001@cs.auckland.ac.nz)
Message-ID: <20050127141035.GE1404@leitl.org>

(followup on Simson's paper on Skype security, or, rather, its crypto
snake oil content)

----- Forwarded message from Peter Gutmann -----

From: pgut001@cs.auckland.ac.nz (Peter Gutmann)
Date: Wed, 12 Jan 2005 05:00:29 +1300
To: daw-usenet@taverner.CS.Berkeley.EDU
Cc: cryptography@metzdowd.com
Subject: Re: Simson Garfinkel analyses Skype - Open Society Institute

David Wagner writes:
>>Is Skype secure?
>
>The answer appears to be, "no one knows".

There have been other posts about this in the past. Even though they use
known algorithms, the way they use them is completely homebrew and
horribly insecure: raw, unpadded RSA, no message authentication, no key
verification, no replay protection, etc etc etc. It's pretty much a
textbook example of the problems covered in the writeup I did on security
issues in homebrew VPNs last year.

(Having said that, the P2P portion of Skype is quite nice; it's just the
security area that's lacking. Since the developers are P2P people, that's
somewhat understandable.)

Peter.

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo@metzdowd.com

----- End forwarded message -----
-- 
Eugen* Leitl leitl
______________________________________________________________
ICBM: 48.07078, 11.61144            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org         http://nanomachines.net
-------------- next part --------------
A non-text attachment was scrubbed...
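Gutmann's "raw, unpadded RSA" complaint is easy to demonstrate: textbook RSA is deterministic and multiplicatively malleable, which is exactly what padding schemes such as OAEP exist to prevent. A toy sketch with deliberately tiny textbook primes (p=61, q=53; illustrative only, never use raw RSA in a real protocol):

```python
# Textbook RSA with toy parameters: n = 61 * 53 = 3233, public exponent e = 17.
n, e = 61 * 53, 17

def enc(m):
    return pow(m, e, n)  # raw RSA: no padding, no randomness

# 1. Deterministic: identical plaintexts give identical ciphertexts, so an
#    eavesdropper can spot repeats or table all small/likely messages.
assert enc(42) == enc(42)

# 2. Malleable: Enc(a) * Enc(b) mod n == Enc(a*b mod n), so an attacker can
#    forge related ciphertexts without ever knowing the private key.
assert (enc(6) * enc(7)) % n == enc(42)

print("textbook RSA is deterministic and malleable")
```

Both properties hold for any key size; randomized padding plus a MAC (the things Gutmann lists as missing) are what rule them out.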
Name: not available
Type: application/pgp-signature
Size: 198 bytes
Desc: not available
Url : http://zgp.org/pipermail/p2p-hackers/attachments/20050127/279a03de/attachment.pgp

From solipsis at pitrou.net  Fri Jan 28 10:32:53 2005
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat Dec 9 22:12:50 2006
Subject: [p2p-hackers] reliable udp transfer redux ;)
Message-ID: <1106908373.6591.8.camel@p-dhcp-333-72.rd.francetelecom.fr>

Hi,

I'm sorry to bother the list once again with this topic (a popular one
for sure ;-)), but I was wondering if there were established results or
studies on the Web about reliable UDP transfer methods *beyond ad-hoc
designs*. In other words: is there a consensus converging towards
identical (well-defined, potentially interoperable) algorithms, or does
everyone still hack their own flavour of reliable UDP each time there is
a need for it? It strikes me that there may be an awful lot of effort
duplication... and I'd like to avoid contributing to it ;)

Regards

Antoine.
-- 
http://solipsis.netofpeers.net/

From travis at redswoosh.net  Fri Jan 28 11:45:40 2005
From: travis at redswoosh.net (Travis Kalanick)
Date: Sat Dec 9 22:12:50 2006
Subject: [p2p-hackers] Red Swoosh hiring P2P Tech!!
Message-ID: <200501281149.j0SBnOaL003449@be9.noc0.redswoosh.com>

Red Swoosh tech friends,

Red Swoosh is doing well these days and will be hiring a few P2P demi-god
developers (contractor and full-time), as well as a senior tech mgmt
position in the SF Bay and Los Angeles areas ASAP. If you're interested in
getting in on the ground floor (we've been a very lean org.) of a company
that grows with its revenues (cash flow positive for over 2 years!!) and
its exciting customer base, send me an email. . . or call me at
310.666.1429 (email is better because I'm in Europe, so I won't be able to
answer calls until next Tuesday).

I WILL BE SCHEDULING INTERVIEWS FOR NEXT WEEK!!. . .
.AND HIRING SOON THEREAFTER

Some cool stats/factoids:
Peak concurrent, persistent connections: 170K
Peak data delivered in a day: 7TB
Backend technology: in-memory distributed database
Backend platform: Linux
Development language: C++
Client base: In the millions

Pass along to other interested parties. . . Look forward to hearing from you.

Travis

Travis Kalanick
Red Swoosh, Inc.
Founder, CEO
travis@redswoosh.net
(v) 310.666.1429
(f) 253.322.9478
AIM: ScourTrav123
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://zgp.org/pipermail/p2p-hackers/attachments/20050128/bb1ba64c/attachment.html

From wiiat at kis-lab.com  Sat Jan 29 01:24:57 2005
From: wiiat at kis-lab.com (WI-IAT 2005)
Date: Sat Dec 9 22:12:50 2006
Subject: [p2p-hackers] IEEE/WIC/ACM Intelligent Agent Technology - IAT 2005
Message-ID: <20050129012500.7DAB33FC32@capsicum.zgp.org>

[Apologies if you receive this more than once]

#####################################################################
        IEEE/WIC/ACM Intelligent Agent Technology 2005
                      CALL FOR PAPERS
#####################################################################

2005 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT'05)
September 19-22, 2005
Compiegne University of Technology, France
http://www.comp.hkbu.edu.hk/IAT05/
http://www.hds.utc.fr/IAT05/

Sponsored By
IEEE Computer Society
Web Intelligence Consortium (WIC)
Association for Computing Machinery (ACM)

**********************************************************************
- Paper submission due: April 3, 2005
- Submission websites:
  http://www.comp.hkbu.edu.hk/IAT05/
  http://www.hds.utc.fr/IAT05/
- Electronic submissions are required in the form of PDF or PS files
**********************************************************************

The 2005 IEEE/WIC/ACM International Conference on Intelligent Agent
Technology (IAT'05) will be jointly held with the 2005 IEEE/WIC/ACM
International Conference on Web Intelligence (WI'05,
http://www.comp.hkbu.edu.hk/WI05/, http://www.hds.utc.fr/WI05/).

The IEEE/WIC/ACM 2005 joint conferences are sponsored and organized by the
IEEE Computer Society Technical Committee on Computational Intelligence
(TCCI) (http://www.cs.uvm.edu/~xwu/tcci/index.shtml), the Web Intelligence
Consortium (WIC) (http://wi-consortium.org), and ACM-SIGART
(http://www.acm.org/sigart/). The upcoming meeting in this conference
series follows the great success of IAT-99 held in Hong Kong, IAT-01 held
in Maebashi City, Japan, IAT-03 held in Halifax, Canada, and IAT-04 held
in Beijing, China.

+++++++
Topics
+++++++

The topics and areas include, but are not limited to:

* Autonomy-Oriented Computing (AOC)
  Agent-Based Complex Systems Modeling and Development
  Agent-Based Simulation
  Autonomy-Oriented Modeling and Computation Methods
  Behavioral Self-Organization
  Complex Behavior Characterization and Engineering
  Emergent Behavior
  Hard Computational Problem Solving
  Self-Organized Criticality
  Self-Organized Intelligence
  Swarm Intelligence
  Nature-Inspired Paradigms

* Autonomous Knowledge and Information Agents
  Agent-Based Distributed Data Mining
  Agent-Based Knowledge Discovery and Sharing
  Autonomous Information Services
  Distributed Knowledge Systems
  Emergent Natural Law Discovery in Multi-Agent Systems
  Evolution of Knowledge Networks
  Human-Agent Interaction
  Information Filtering Agents
  Knowledge Aggregation
  Knowledge Discovery
  Ontology-Based Information Services

* Agent Systems Modeling and Methodology
  Agent Interaction Protocols
  Cognitive Architectures
  Cognitive Modeling of Agents
  Emotional Modeling
  Fault-Tolerance in Multi-Agent Systems
  Formal Framework for Multi-Agent Systems
  Information Exchanges in Multi-Agent Systems
  Learning and Self-Adaptation in Multi-Agent Systems
  Mobile Agent Languages and Protocols
  Multi-Agent Autonomic Architectures
  Multi-Agent Coordination Techniques
  Multi-Agent Planning and Re-Planning
  Peer-to-Peer Models for Multi-Agent Systems
  Reinforcement Learning
  Social Interactions in Multi-Agent Systems
  Task-Based Agent Context
  Task-Oriented Agents

* Distributed Problem Solving
  Agent-Based Grid Computing
  Agent Networks in Distributed Problem Solving
  Collective Group Behavior
  Coordination and Cooperation
  Distributed Intelligence
  Dynamics of Agent Groups and Populations
  Efficiency and Complexity Issues
  Market-Based Computing
  Problem-Solving in Dynamic Environments
  Distributed Search

* Autonomous Auctions and Negotiation
  Agent-Based Marketplaces
  Auction Markets
  Combinatorial Auctions
  Hybrid Negotiation
  Integrative Negotiation
  Mediating Agents
  Pricing Agents
  Thin Double Auctions

* Applications
  Agent-Based Assistants
  Agent-Based Virtual Enterprise
  Embodied Agents and Agent-Based Systems Applications
  Interface Agents
  Knowledge and Data Intensive Systems
  Perceptive Animated Interfaces
  Scalability
  Social Simulation
  Socially Situated Planning
  Software and Pervasive Agents
  Tools and Standards
  Ubiquitous Systems and E-Technology Agents
  Ubiquitous Software Services
  Virtual Humans
  XML-Based Agent Systems

+++++++++++++++++
Important Dates
+++++++++++++++++

Electronic submission of full papers: ** April 3, 2005 **
Notification of paper acceptance: June 9, 2005
Workshop and tutorial proposals: June 9, 2005
Camera-ready of accepted papers: July 4, 2005
Workshops/Tutorials: September 19, 2005
Conference: September 20-22, 2005

++++++++++++++++++++++++++++++++++++
On-Line Submissions and Publication
++++++++++++++++++++++++++++++++++++

High-quality papers in all IAT-related areas are solicited. Papers
exploring new directions or areas will receive a careful and supportive
review. All submitted papers will be reviewed on the basis of technical
quality, relevance, significance, and clarity. Note that IAT'05 will
accept ONLY on-line submissions, containing PDF (PostScript or MS-Word)
versions. The conference proceedings will be published by the IEEE
Computer Society Press. IAT'05 also welcomes Industry Track and Demo
submissions, and Workshop and Tutorial proposals.
More detailed instructions and the On-Line Submission Form can be found on
the IAT'05 homepages: http://www.comp.hkbu.edu.hk/IAT05/ or
http://www.hds.utc.fr/IAT05.

A selected number of IAT'05 accepted papers will be expanded and revised
for inclusion in Web Intelligence and Agent Systems: An International
Journal (http://wi-consortium.org/journal.html) and in the Annual Review
of Intelligent Informatics (http://www.wi-consortium.org/annual.html).
Best paper awards will be conferred on the authors of the best papers at
the conference.

++++++++++++++++++++++++
Conference Organization
++++++++++++++++++++++++

Conference Chairs:
Pierre Morizet, University of Technology of Compiegne, France
Jiming Liu, Hong Kong Baptist University, Hong Kong

Program Chair:
Andrzej Skowron, Warsaw University, Poland

Steering Committee Chair:
Ning Zhong, Maebashi Institute of Technology, Japan

IAT-Track Program Co-chairs:
Jean-Paul Barthes, University of Technology of Compiegne, France
Lakhmi Jain, University of South Australia, Australia
Ron Sun, Rensselaer Polytechnic Institute, USA

WI-Track Program Co-chairs:
Rakesh Agrawal, IBM Almaden Research Center, USA
Mike Luck, University of Southampton, UK
Takahira Yamaguchi, Shizuoka University, Japan

IAT-Track Program Vice Chairs:
Barbara Dunin-Keplicz, Warsaw University, Poland
Amal El Fallah-Seghrouchni, University of Paris 6, France
Eugenio Oliveira, University of Porto, Portugal
Marek Sergot, Imperial College, UK
Jeffrey M. Bradshaw, UWF/Institute for Human and Machine Cognition, USA
Katia Sycara, Carnegie Mellon University, USA
Maria Gini, University of Minnesota, USA
Churn-Jung Liau, Academia Sinica, Taiwan
Zhongzhi Shi, Chinese Academy of Sciences, China
Liz Sonenberg, The University of Melbourne, Australia

WI-Track Program Vice Chairs:
Matthias Klusch, German Research Center for Artificial Intelligence, Germany
Joost Kok, Leiden University, The Netherlands
Steve Willmott, Universitat Politècnica de Catalunya, Spain
Ubbo Visser, Universität Bremen, Germany
Mario Cannataro, University "Magna Grecia" of Catanzaro, Italy
Nick Cercone, Dalhousie University, Canada
W. Lewis Johnson, University of Southern California, USA
Lina Zhou, University of Maryland, Baltimore County, USA
Massimo Marchiori, MIT Lab for Computer Science, USA
Sankar K. Pal, Indian Statistical Institute, India
Toyoaki Nishida, Kyoto University, Japan
Einoshin Suzuki, Yokohama National University, Japan
Chengqi Zhang, University of Technology, Sydney, Australia

Industry/Demo-Track Chairs:
Jianchang Mao, Yahoo! Inc., USA
Toshiharu Sugawara, NTT Communication Science Laboratories, Japan

Workshop Chair:
Pawan Lingras, Saint Mary's University, Canada

Tutorial Chair:
Rineke Verbrugge, University of Groningen, NL

Publicity Chairs:
Jim Peters, University of Manitoba, Canada
James Wang, Clemson University, USA

Organizing Chair:
Francois Peccoud, University of Technology of Compiegne, France

Local Arrangement Chairs:
Marie-Helene Abel, University of Technology of Compiegne, France
Claude Moulin, University of Technology of Compiegne, France

*** Contact Information ***
wi-iat05@maebashi-it.org