From sam at neurogrid.com Thu Aug 1 05:15:01 2002 From: sam at neurogrid.com (Sam Joseph) Date: Sat Dec 9 22:11:46 2006 Subject: [p2p-hackers] Decentralized MetaData Search Strategies Message-ID: <3D4927E2.3090906@neurogrid.com> Hi all, I have revised and expanded my decentralised meta-data strategies document to include more thoughts on the issues and some more systems (JXTASearch, SIONet, Reptile, Semplesh). http://www.neurogrid.net/Decentralized_Meta-Data_Strategies-neat.html The document is still a draft, and I would be very grateful for any further feedback you can give me. CHEERS> SAM From arachnid at mad.scientist.com Thu Aug 1 15:52:01 2002 From: arachnid at mad.scientist.com (Nick Johnson) Date: Sat Dec 9 22:11:46 2006 Subject: [p2p-hackers] 'Flat' P2P Idea Message-ID: <1028280299.3d4a4febd3aba@goliath.notdot.net> I've been a lurker on the list for a while, but I thought I'd come out and propose an idea that I'd really appreciate comments & criticism on: Essentially, instead of a tree-based structure, in which requests can be forwarded multiple hops from the originating host to one that answers it, I would like to propose a 'flat' structured one. Essentially, the system would maintain a list of IP addresses. Every time a message is received from a particular servent on the network, the IP address of that servent is placed at the top of the list. The basic operations would be as follows: Message Reception: The IP address of the sending servent is moved to the top of the list. The message is passed on to the appropriate handler for processing. Message Transmission: The message is sent to the first n IP addresses at the top of the list, where n is defined by the application sending the message. Discovery: The servent sends a message to one or more hosts chosen from the top of the list, requesting that the top n hosts from their lists be transmitted to it. Those hosts are added to the top of the list. (A minimal sketch of these three operations appears below, after the list of pros.) That's the basic idea. It seems to me that it should work, but how well is subject to debate, and perhaps only trying it will determine that. All messages would be sent via UDP to minimise the per-host resource costs. Data transfers would be done via out-of-band means (e.g. HTTP). Obviously, improvements to this could be made, but I'd like to hear the opinions of others first. Pros: - No message multiplication. Abusers cannot rely on the duplication of messages as an easy way to flood the network. - Messages draw attention. Every servent that receives a message puts the sender at the top of its list. Hence, the more messages a servent sends, the higher up the lists of other servents it is, and the more messages it ends up receiving. A host is more likely to flood itself off the network than anyone else! - Traffic is self-limiting. If a host is receiving too much traffic, it need only decrease the volume of messages it sends, and it will tend to be lower on most servents' lists, and hence receive less traffic. - Quiet nodes are not penalised. As above, if your node is 'quiet', it receives little traffic. If someone is running a node they want to provide data from, it can periodically send ping messages to other nodes to remind them it's still around. - Faulty nodes quickly leave the network. If a node disconnects or has a particularly bad connection, few messages will be received from it and it will move down in people's lists. - P2P messages do not impair file transfer. When a user is transferring files, they are not likely to send any messages, so they receive fewer as well.
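A minimal sketch of the three operations above, in Python. This is an illustration only: the class name, the bound on the list size and the use of a plain in-memory list are assumptions of mine, not part of the proposal.

    import socket

    class FlatPeerList:
        """Sketch of the 'flat' host list: most recently heard-from host first."""

        def __init__(self, max_hosts=1000):
            self.hosts = []                       # list of (ip, port) tuples
            self.max_hosts = max_hosts
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

        def note_sender(self, addr):
            """Message reception: move the sender to the top of the list."""
            if addr in self.hosts:
                self.hosts.remove(addr)
            self.hosts.insert(0, addr)
            del self.hosts[self.max_hosts:]       # keep the list bounded

        def send(self, payload, n):
            """Message transmission: send to the first n hosts on the list."""
            for addr in self.hosts[:n]:
                self.sock.sendto(payload, addr)

        def discovery_reply(self, n):
            """Discovery: return our top n hosts to a requesting peer."""
            return self.hosts[:n]

A discovery request would then be answered from discovery_reply(), and the addresses a remote peer returns would simply be spliced onto the top of the requester's own list.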
Cons: - 'Noisy' clients rise to the top. Clients that share little or nothing but send many messages will appear near the top. Possibly some sort of 'usefulness rating' could help with this. - Send in your own. Part of the reason I'm posting this is to see what's wrong with it ;). I have little experience in this area and appreciate comments. Nick From coderman at mindspring.com Thu Aug 1 18:07:01 2002 From: coderman at mindspring.com (coderman) Date: Sat Dec 9 22:11:46 2006 Subject: [p2p-hackers] 'Flat' P2P Idea References: <1028280299.3d4a4febd3aba@goliath.notdot.net> Message-ID: <3D49F6A5.7070002@mindspring.com> Hi Nick, Nick Johnson wrote: > I've been a lurker on the list for a while, but I thought I'd come out and > propose an idea that I'd really appreciate comments & criticism on: > Essentially, instead of a tree-based structure, in which requests can be > forwarded multiple hops from the originating host to one that answers it, I > would like to propose a 'flat' structured one. Essentially, the system would > maintain a list of IP addresses. Every time a message is recieved from a > particular servent on the network, the IP address of that servent is placed at > the top of the list. Sounds like a good idea to me. This works nicely for lightweight messaging protocols over a UDP transport to support dual NAT communications and the high number of direct connections required for a good flat topology. Check out alpine: http://cubicmetercrystal.com/alpine/ which implements search & discovery using this type of network topology and lightweight messaging. > The basic operations would be as follows: > > Message Reception: > The IP address of the sending servent is moved to the top of the list. The > message is passed on to the appropriate handler for processing > > Message Transmission: > The message is sent to the first n IP addresses on the top of the list, where n > is defined by the application sending the message What might be even better is to let the requestor determine how many nodes this query should be sent to, and also terminate a linear broadcast / overlay broadcast at will. > ... > > Cons: > - 'Noisy' clients rise to the top. Clients that share little or nothing but send > many messages will appear near the top. Possibly some sort of 'usefulness > rating' could help with this. If you couple reputation / performance tracking with discovery requests you can tune out this kind of noisy peer. > - Send in your own. Part of the reason I'm posting this is to see what's wrong > with it ;). I have little experience in this area and appreciate comments. I am a bit biased but I really think this is the best way to discover resources in a volatile, dynamic network like the internet where peer churn rates, short lived resources and congestion all make discovery much more difficult. From sam at neurogrid.com Thu Aug 1 18:23:01 2002 From: sam at neurogrid.com (Sam Joseph) Date: Sat Dec 9 22:11:46 2006 Subject: [p2p-hackers] 'Flat' P2P Idea References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <3D49F6A5.7070002@mindspring.com> Message-ID: <3D49E0B5.90007@neurogrid.com> Hi Nick, Coders, coderman wrote: >> Cons: >> - 'Noisy' clients rise to the top. Clients that share little or >> nothing but send >> many messages will appear near the top. Possibly some sort of >> 'usefulness >> rating' could help with this. > > > If you couple reputation / performance tracking with discovery > requests you can > tune out this kind of noisy peer. 
I'm biased too, but I think this is definitely a good way to go. NeuroGrid is a project that focuses on reputation/performance tracking in distributed environments. http://www.neurogrid.net/php/whitepaper.php http://www.neurogrid.net/NeuroGridSimulations_mod_b.pdf The crucial difficulties are which statistics to track, how to cope with the fact that they will go out of date, and how to limit the storage of the statistics. The optimal approach will presumably combine a set of representative statistics (both explicit and implicit), include some sort of natural weighting that emphasises recent stats, and a cache pruning approach where old/unreliable stats get thrown away. However, in some respects it might make more sense to maintain any stats that give a strong indication, good or bad, and discard those that are ambiguous, i.e. it would be good to maintain info on frequent spammers etc. CHEERS> SAM From arachnid at mad.scientist.com Thu Aug 1 19:17:01 2002 From: arachnid at mad.scientist.com (Nick Johnson) Date: Sat Dec 9 22:11:46 2006 Subject: [p2p-hackers] 'Flat' P2P Idea In-Reply-To: <3D49F6A5.7070002@mindspring.com> References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <3D49F6A5.7070002@mindspring.com> Message-ID: <1028292474.3d4a7f7a8f4f9@goliath.notdot.net> Quoting coderman : > Hi Nick, > > Nick Johnson wrote: > > I've been a lurker on the list for a while, but I thought I'd come out and > > propose an idea that I'd really appreciate comments & criticism on: > > Essentially, instead of a tree-based structure, in which requests can be > > forwarded multiple hops from the originating host to one that answers it, I > > would like to propose a 'flat' structured one. Essentially, the system would > > maintain a list of IP addresses. Every time a message is received from a > > particular servent on the network, the IP address of that servent is placed at > > the top of the list. > > Sounds like a good idea to me. This works nicely for lightweight messaging > protocols over a UDP transport to support dual NAT communications and the > high number of direct connections required for a good flat topology. Yes, though AFAIK this would only work well through NAT if loose-UDP is enabled, otherwise the host behind the NAT firewall would only be able to receive messages from a host it had already contacted. > Check out alpine: http://cubicmetercrystal.com/alpine/ which implements search > & discovery using this type of network topology and lightweight messaging. This looks very interesting indeed - I'll have to take a closer look as soon as I have a chance (and am not at work ;) > > The basic operations would be as follows: > > > > Message Reception: > > The IP address of the sending servent is moved to the top of the list. The > > message is passed on to the appropriate handler for processing > > > > Message Transmission: > > The message is sent to the first n IP addresses at the top of the list, where n > > is defined by the application sending the message > > What might be even better is to let the requestor determine how many nodes > this query should be sent to, and also terminate a linear broadcast / overlay > broadcast at will. This is what I meant actually - I was envisaging a structure where applications sit on top of a 'transport layer' - the applications provide a way to interpret messages from the transport layer (such as a file-sharing app, a chat app, etc) and optionally a user interface.
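Returning to Sam's point above about recency-weighted statistics and cache pruning, here is a minimal sketch. The exponential decay, the half-life, the score threshold and the cache size are all invented for illustration; NeuroGrid itself may do something quite different.

    import time

    class ReputationCache:
        """Per-peer scores that decay over time; ambiguous entries are pruned."""

        def __init__(self, half_life=3600.0, max_entries=5000):
            self.half_life = half_life       # seconds for a score to halve
            self.max_entries = max_entries
            self.scores = {}                 # peer -> (score, last_update)

        def _decayed(self, score, last, now):
            return score * 0.5 ** ((now - last) / self.half_life)

        def record(self, peer, outcome):
            """outcome > 0 for useful behaviour (e.g. a good search hit),
            outcome < 0 for spam, bogus results or failure."""
            now = time.time()
            score, last = self.scores.get(peer, (0.0, now))
            self.scores[peer] = (self._decayed(score, last, now) + outcome, now)
            if len(self.scores) > self.max_entries:
                self._prune(now)

        def _prune(self, now, strong=2.0):
            # Keep strong indications (good or bad), throw away the ambiguous.
            self.scores = {p: (s, t) for p, (s, t) in self.scores.items()
                           if abs(self._decayed(s, t, now)) >= strong}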
> I am a bit biased but I really think this is the best way to discover resources > in a volatile, dynamic network like the internet where peer churn rates, short-lived > resources and congestion all make discovery much more difficult. Exactly my thoughts. I'm glad to see I wasn't deceiving myself (probably ;). Nick From robs at research.att.com Thu Aug 1 19:34:01 2002 From: robs at research.att.com (Rob Sherwood) Date: Sat Dec 9 22:11:46 2006 Subject: [p2p-hackers] 'Flat' P2P Idea In-Reply-To: <1028280299.3d4a4febd3aba@goliath.notdot.net>; from arachnid@mad.scientist.com on Fri, Aug 02, 2002 at 09:24:59PM +1200 References: <1028280299.3d4a4febd3aba@goliath.notdot.net> Message-ID: <20020801223407.P24302@research.att.com> On Fri, Aug 02, 2002 at 09:24:59PM +1200, Nick Johnson wrote: > I've been a lurker on the list for a while, but I thought I'd come out and > propose an idea that I'd really appreciate comments & criticism on: > Essentially, instead of a tree-based structure, in which requests can be > forwarded multiple hops from the originating host to one that answers it, I > would like to propose a 'flat' structured one. Essentially, the system would > maintain a list of IP addresses. Every time a message is received from a > particular servent on the network, the IP address of that servent is placed at > the top of the list. The basic operations would be as follows: [ from one lurker to another ;) ] Assuming I understand the protocol correctly, it would seem to me that if there was a flood of traffic, this network would stand a good chance of becoming disconnected. As I understand it, the connection from A -> B is essentially "forgotten" when B becomes the n+1th person on A's list, correct? Now, for simplicity's sake, assume that A and B have the same n (it's not necessary in general). All it would take is n messages from different people before they got a message off to each other for the pair to become disconnected. Depending on traffic patterns and the size of n, this may not be likely under normal conditions, but it is certainly easy enough to pull off as a malicious attack. A possible solution is to only move the message sender to the top of the list with some probability p, where p is adjusted based on the rate of messages in the system. Another possible critique of this system is that it would be impossible to take advantage of any sort of additional network topology information, i.e. cluster nodes together which are "close" in some sort of networking sense. OTOH, this point is kind of moot, as I don't know of any widely deployed tree-based P2P systems which actually do this :) Just my $0.02, but definitely an interesting design. - Rob . From bradneuberg at yahoo.com Thu Aug 1 22:04:01 2002 From: bradneuberg at yahoo.com (Brad Neuberg) Date: Sat Dec 9 22:11:46 2006 Subject: [p2p-hackers] Performance of JXTA? References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> Message-ID: <3D4A127C.6080103@yahoo.com> I am interested in anyone's opinion on whether JXTA is scalable. It seems to use things similar to flooding techniques. Please reply to me personally and not the list to prevent list-traffic. Thanks, Brad Neuberg From tpm101 at gmx.net Thu Aug 1 22:52:01 2002 From: tpm101 at gmx.net (Tim Muller) Date: Sat Dec 9 22:11:46 2006 Subject: [p2p-hackers] Performance of JXTA?
In-Reply-To: <3D4A127C.6080103@yahoo.com> References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> <3D4A127C.6080103@yahoo.com> Message-ID: <200208020651.31391.tpm101@gmx.net> On Friday 02 August 2002 06:02, Brad Neuberg wrote: > I am interested in anyone's opinion on whether JXTA is scalable. It > seems to use things similar to flooding techniques. Please reply to me > personally and not the list to prevent list-traffic. Isn't that what the list is for? I'd be quite interested in any opinions on that as well, and I'm sure I'm not the only one. Am I? Cheers -Tim From blair at orcaware.com Thu Aug 1 22:58:01 2002 From: blair at orcaware.com (Blair Zajac) Date: Sat Dec 9 22:11:46 2006 Subject: [p2p-hackers] Performance of JXTA? References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> <3D4A127C.6080103@yahoo.com> <200208020651.31391.tpm101@gmx.net> Message-ID: <3D4A1F49.103525B3@orcaware.com> Tim Muller wrote: > > On Friday 02 August 2002 06:02, Brad Neuberg wrote: > > > I am interested in anyone's opinion on whether JXTA is scalable. It > > seems to use things similar to flooding techniques. Please reply to me > > personally and not the list to prevent list-traffic. > > Isn't that what the list is for? > > I'd be quite interested in any opinions on that as well, and I'm sure I'm not > the only one. Am I? I'm interested also. Now there's more traffic to reply to the non-list issue than there would be if only the question were asked. Best, Blair -- Blair Zajac Web and OS performance plots - http://www.orcaware.com/orca/ From arachnid at mad.scientist.com Thu Aug 1 23:47:01 2002 From: arachnid at mad.scientist.com (Nick Johnson) Date: Sat Dec 9 22:11:46 2006 Subject: [p2p-hackers] 'Flat' P2P Idea Message-ID: <1028308818.3d4abf52d3abc@goliath.notdot.net> > Assuming I understand the protocol correctly, it would seem to me that if > there was a flood of traffic, this network would stand a good chance > of becoming disconnected. As I understand it, the connection from A -> > B is essentially "forgotten" when B becomes the n+1th person on A's list, > correct? Now, for simplicity's sake, assume that A and B have the same n > (it's not necessary in general). All it would take is n messages from > different people before they got a message off to each other for the pair > to become disconnected. Depending on traffic patterns and the size of n, > this may not be likely under normal conditions, but it is certainly easy > enough to pull off as a malicious attack. Hmm. UDP spoofing is an issue that hadn't occurred to me. One other solution that might both solve that and improve the reliability of the network would be to send out a ping packet every time a new host is discovered, and not add them to the list until they are 'validated' by receiving a corresponding pong packet. Apart from eliminating packet forging as an effective flooding method, it would also ensure that a host is not at the top of anyone's list unless they know packets can reach them. (A small sketch of this validation step appears a little further below.) > A possible solution is to only move the message sender to the top > of the list with some probability p, where p is adjusted based > on the rate of messages in the system. This sounds like a good idea, however couldn't a malicious user flood most of the good hosts off the list before this starts to affect the chance of putting a host up the top? Also, it would make flooding harder, but it would also make it harder to get legitimate hosts on the list.
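Picking up the ping/pong validation idea above: a minimal sketch that only admits a host to the list once it has echoed a random nonce, which also makes a forged source address useless to an off-path spoofer. The message format, the nonce and the timeout are assumptions for illustration, and note_sender() refers to the earlier FlatPeerList sketch.

    import os, time

    class Validator:
        """Hold newly discovered hosts in a pending set until a valid pong arrives."""

        def __init__(self, sock, timeout=10.0):
            self.sock = sock                  # an existing UDP socket
            self.timeout = timeout
            self.pending = {}                 # addr -> (nonce, sent_at)

        def ping(self, addr):
            nonce = os.urandom(8)
            self.pending[addr] = (nonce, time.time())
            self.sock.sendto(b"PING" + nonce, addr)

        def handle_pong(self, addr, payload, host_list):
            entry = self.pending.pop(addr, None)
            if entry is None:
                return
            nonce, sent_at = entry
            if payload == b"PONG" + nonce and time.time() - sent_at < self.timeout:
                host_list.note_sender(addr)   # only now does the host enter the list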
One of the advantages this network would (should?) have over tree-based networks is that there is no way to attack the whole network - you can only attack individual clients, and though client IPs are easy to obtain, a malicious user's ability to attack them is limited by his own bandwidth. If we can protect against UDP spoofing, even attacking individual clients should be difficult, save for flooding them off the network altogether. However, as I write, one other spoofing problem occurs to me - a malicious user could spoof packets as being from the victim, and send them to multiple clients requesting large amounts of data - a lot like a 'smurf' attack. As long as a user can request a larger amount of data than they can send, this could be a problem. :( > Another possible critique of this system is that it would be impossible to > take advantage of any sort of additional network topology information, > i.e. cluster nodes together which are "close" in some sort of networking > sense. OTOH, this point is kind of moot, as I don't know of any widely > deployed tree-based P2P systems which actually do this :) Though it doesn't actively take advantage of network topology, it could be made to do so passively. Routes that have low reliability would pass few packets, leading to hosts on the other side of those links being low on a servent's list. Servents could also apply a rating scheme partially dependent on the round-trip time for packets. From sam at neurogrid.com Fri Aug 2 01:43:01 2002 From: sam at neurogrid.com (Sam Joseph) Date: Sat Dec 9 22:11:46 2006 Subject: [p2p-hackers] Performance of JXTA? References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> <3D4A127C.6080103@yahoo.com> Message-ID: <3D4A3636.6060703@neurogrid.com> Brad Neuberg wrote: > I am interested in anyone's opinion on whether JXTA is scalable. It > seems to use things similar to flooding techniques. Please reply to > me personally and not the list to prevent list-traffic. Please excuse me if I reply to the list since I would be interested in hearing other people's input on this as well. I have been investigating JXTA and JXTASearch, and I am unclear about a number of things. It seems that JXTA supports a number of different protocols. http://spec.jxta.org/v1.0/docbook/JXTAProtocols.html Two of them seem particularly relevant to routing, PDP and ERP (Peer Discovery Protocol and Endpoint Routing Protocol). PDP supports peers advertising things about themselves, and ERP provides a framework for establishing a route between peers that currently do not know about each other. The ERP process appears to involve using PDP to establish which other peers are playing the role of Routers, and then contacting them to try and establish a route to the desired destination. The process of establishing a route does appear to have some flooding aspects in that routers query other routers they know in order to establish a sequence of hops that will link the start and end points in a route. However it is not clear to what extent different JXTA applications use these different protocols. It would seem perfectly plausible to build a JXTA application that relied on flooding techniques and also one that didn't. A single JXTA node could operate like Alpine (maintain many individual connections and not forward messages) and never use the ERP. Conversely it would seem that one could equally implement a gnutella clone on top of JXTA.
So I think the question becomes not whether JXTA is scalable, but whether a particular routing technique is scalable. I think there is some consensus that a limited degree of flooding between hubs or supernodes in a peer-to-peer network is relatively scalable (re. FastTrack, LimeWire SuperPeers etc.), but it would be nice to put that into more quantitative terms at some point :-) I'm no JXTA expert, but I would guess that JXTA itself is not committed to a particular routing technique .... CHEERS> SAM From bradneuberg at yahoo.com Fri Aug 2 02:20:02 2002 From: bradneuberg at yahoo.com (Brad Neuberg) Date: Sat Dec 9 22:11:46 2006 Subject: [p2p-hackers] Does JXTA work as a "Universal Toolkit"? References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> <3D4A127C.6080103@yahoo.com> <3D4A3636.6060703@neurogrid.com> Message-ID: <3D4A4E39.6090604@yahoo.com> Here's another question: JXTA purports to be a "Universal Toolkit" of sorts, solving many of the problems that a P2P developer would have to go through. From a distance it looks like this to me: it has some nice abstractions, such as Pipes, Peers, Peer Groups, and Peer Services, as well as some core services, such as a Router Service, a Rendezvous Service, a Membership Service, etc. As I get closer to it, though, my brain starts to hurt. I start thinking about the P2P design patterns for distributed storage that I know about, such as distributed hashtables, emergent networks, etc., and they don't seem to fit terribly well into the actualities of JXTA; there seems to be some cognitive dissonance, at least for me. From far away I can easily fit P2P patterns into JXTA, but up-close I find that things just don't fit. Have other people found this themselves? It does surprise me how many _new_ projects there are that _don't_ use JXTA, such as Mnet, BitTorrent, The Circle, etc. I am interested in soliciting from folks why they chose not to use JXTA, such as whether it was performance issues, language issues, a raunchy API (my opinion), etc. Thanx, Brad Neuberg From bradneuberg at yahoo.com Fri Aug 2 02:25:02 2002 From: bradneuberg at yahoo.com (Brad Neuberg) Date: Sat Dec 9 22:11:46 2006 Subject: [p2p-hackers] Performance of JXTA? References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> <3D4A127C.6080103@yahoo.com> <3D4A3636.6060703@neurogrid.com> Message-ID: <3D4A4F8B.4000302@yahoo.com> Sam Joseph wrote: > Brad Neuberg wrote: > >> I am interested in anyone's opinion on whether JXTA is scalable. It >> seems to use things similar to flooding techniques. Please reply to >> me personally and not the list to prevent list-traffic. > > > Please excuse me if I reply to the list since I would be interested in > hearing other people's input on this as well. > > I have been investigating JXTA and JXTASearch, and I am unclear about > a number of things. It seems that JXTA supports a number of different > protocols. > http://spec.jxta.org/v1.0/docbook/JXTAProtocols.html > > Two of them seem particularly relevant to routing, PDP and ERP (Peer > Discovery Protocol and Endpoint Routing Protocol). PDP supports peers > advertising things about themselves, and ERP provides a framework for > establishing a route between peers that currently do not know about > each other. The ERP process appears to involve using PDP to > establish which other peers are playing the role of Routers, and then > contacting them to try and establish a route to the desired > destination.
The process of establishing a route does appear to have > some flooding aspects in that routers query other routers they know in > order to establish a sequence of hops that will link the start and end > points in a route. > > However it is not clear to what extent different JXTA applications use > these different protocols. It would seem perfectly plausible to build > a JXTA application that relied on flooding techniques and also one > that didn't. A single JXTA node could operate like Alpine (maintain > many individual connections and not forward messages) and never use > the ERP. Conversely it would seem that one could equally implement a > gnutella clone on top of JXTA. Hi Sam. I am still personally trying to understand how a custom service "builds" on top of JXTA. Do you leverage these individual protocols, such as PDP, to advertise to other nodes? Do you simply go "out of band" and use your own communication technique? This confusion ties in with another message I just posted to the list about the "success" of JXTA as a universal toolkit. > > So I think the question becomes not whether JXTA is scalable, but > whether a particular routing technique is scalable. I think there is > some consensus that an limited degree of flooding between hubs or > supernodes in a peer to peer network is relatively scalable (re. > FastTrack, LimeWire SuperPeers etc.), but it would be nice to put that > into more quantitative terms at some point :-) I agree. I really wish there was some strong, theoretical work studying the scalability of the JXTA protocols. It seems ripe for an academic paper. This is something I have really appreciated about academics working on the distributed hashtable stuff; they have tried to pin exact big-O values on various aspects of the network. > > I'm no JXTA expert, but I would guess that JXTA itself is not commited > to a particular routing technique .... > > CHEERS> SAM > > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > From sam at neurogrid.com Fri Aug 2 02:56:02 2002 From: sam at neurogrid.com (Sam Joseph) Date: Sat Dec 9 22:11:46 2006 Subject: [p2p-hackers] Performance of JXTA? References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> <3D4A127C.6080103@yahoo.com> <3D4A3636.6060703@neurogrid.com> <3D4A4F8B.4000302@yahoo.com> Message-ID: <3D4A58F8.7020807@neurogrid.com> Brad Neuberg wrote: > Sam Joseph wrote: > >> However it is not clear to what extent different JXTA applications >> use these different protocols. It would seem perfectly plausible to >> build a JXTA application that relied on flooding techniques and also >> one that didn't. A single JXTA node could operate like Alpine >> (maintain many individual connections and not forward messages) and >> never use the ERP. Conversely it would seem that one could equally >> implement a gnutella clone on top of JXTA. > > > Hi Sam. I am still personally trying to understand how a custom > service "builds" on top of JXTA. Do you leverage these individual > protocols, such as PDP, to advertise to other nodes? Do you simply go > "out of band" and use your own communication technique? This > confusion ties in with another message I just posted to the list about > the "success" of JXTA as a universal toolkit. Yeah, I can't claim to really understand the details here, having not implemented anything on top of JXTA. I assume that JXTAs approach involves leverage of these protocols. 
It has been suggested to me that the protocols allow for different kinds of implementations. I know at least one group that seems to have done something partially innovative on top of JXTA, called Anthill: http://www.neurogrid.net/Decentralized_Meta-Data_Strategies-neat.html#Anthill However, I am personally predisposed towards the Tristero P2P interoperability project: http://tristero.sourceforge.net/ which attempts to specify a set of APIs/interfaces that leave the very lowest level open to implementation by anyone. I assume that JXTA does not support this, i.e. that the implementations of the lowest level, such as pipes, adverts etc., are fixed, but I may be wrong. >> So I think the question becomes not whether JXTA is scalable, but >> whether a particular routing technique is scalable. I think there is >> some consensus that a limited degree of flooding between hubs or >> supernodes in a peer-to-peer network is relatively scalable (re. >> FastTrack, LimeWire SuperPeers etc.), but it would be nice to put >> that into more quantitative terms at some point :-) > > > I agree. I really wish there was some strong, theoretical work > studying the scalability of the JXTA protocols. It seems ripe for an > academic paper. This is something I have really appreciated about > academics working on the distributed hashtable stuff; they have tried > to pin exact big-O values on various aspects of the network. I hate to say it, but I think you are asking for something that is inherently contradictory. I mean the JXTA protocols are not by themselves enough to support a complete P2P system. You have to build something on top of them before you can start asking about scalability. I mean the Gnutella protocol includes specification of a forwarding model, i.e. forward to everyone. But the JXTA protocols appear to leave that open, i.e. before actually building a P2P system on JXTA you have to make decisions about the routing policies. Am I wrong? Is there anybody on this list actually familiar enough with JXTA to confirm or deny this? CHEERS> SAM From cefn.hoile at bt.com Fri Aug 2 04:45:01 2002 From: cefn.hoile at bt.com (cefn.hoile@bt.com) Date: Sat Dec 9 22:11:46 2006 Subject: [p2p-hackers] 'Flat' P2P Idea Message-ID: Nick, So does this mean that the actual routing of requests is handled at the application level? (i.e. the transport layer just tries to maintain connectivity to a finite number of hosts with evidence of their recent existence, and is not a component of a broadcast strategy.) Isn't there a danger that non-conformant peers will be happy with flooding their inbound connection, but continue to introduce messages on their outbound connection to disrupt the network? Can inbound and outbound routes be truly independent, does anyone know? In other words, non-conformant peers don't have to forward, and their finite outbound link is the limitation which is visible to the peer network (the inbound UDP packets to the malicious peer are just dropped invisibly in the network, or ignored by the peer). It's true that the ping strategy you suggest may help here, but it seems like a problem that a single sent message and a single successful ping could be sufficient to remove a host address from another peer.
Cefn -----Original Message----- From: Nick Johnson [mailto:arachnid@mad.scientist.com] Sent: 02 August 2002 10:25 To: p2p-hackers@zgp.org Subject: [p2p-hackers] 'Flat' P2P Idea From coderman at mindspring.com Fri Aug 2 07:42:02 2002 From: coderman at mindspring.com (coderman) Date: Sat Dec 9 22:12:02 2006 Subject: [p2p-hackers] 'Flat' P2P Idea References: <1028308818.3d4abf52d3abc@goliath.notdot.net> Message-ID: <3D4AB590.7020900@mindspring.com> Nick Johnson wrote: > > Hmm. UDP spoofing is an issue that hadn't occurred to me. One other solution > that might both solve that and improve the reliability of the network would be > to send out a ping packet every time a new host is discovered, and not add them > to the list until they are 'validated' by receiving a corresponding pong packet. > Apart from eliminating packet forging as an effective flooding method, it would > also ensure that a host is not at the top of anyone's list unless they know > packets can reach them. Use a simple handshake to exchange initial sequences (a la TCP) and you can prevent remote UDP spoofing as much as possible. To prevent a DoS attack forcing someone off the lists you mentioned, do not track location in the list by most recent recv, but rather by the reputation metric discussed in prior emails. Subtle side effects that impact scalability and robustness can arise from simple oversights - massive bandwidth amplification in gnutella, for example. >> A possible solution is to only move the message sender to the top >> of the list with some probability p, where p is adjusted based >> on the rate of messages in the system. > > This sounds like a good idea, however couldn't a malicious user flood most of > the good hosts off the list before this starts to affect the chance of putting a > host up the top? Also, it would make flooding harder, but it would also make it > harder to get legitimate hosts on the list. Right. I don't think this is a good solution either. > However, as I write, one other spoofing problem occurs to me - a malicious user > could spoof packets as being from the victim, and send them to multiple clients > requesting large amounts of data - a lot like a 'smurf' attack. As long as a > user can request a larger amount of data than they can send, this could be a > problem. :( You need to implement a connection setup handshake that provides connection identifiers and sequences to prevent remote malicious spoofing of UDP packets. Bandwidth used on a per-peer basis can be used to track peers using an inordinate amount of transfer. These peers can then be throttled back or disconnected. From justin at chapweske.com Fri Aug 2 12:21:01 2002 From: justin at chapweske.com (Justin Chapweske) Date: Sat Dec 9 22:12:02 2006 Subject: [p2p-hackers] A Note on Distributed Computing References: <060e01c237f9$55407ff0$407b9fa8@lss.emc.com> <20020731195410.GB347@sporty.spiceworld> <07ab01c238cf$f70a5370$407b9fa8@lss.emc.com> Message-ID: <3D4ADB87.3000802@chapweske.com> Not that Jeff directly states this, but in most cases, the concept of an OS implies some sort of transparent functionality that hides the details of the hardware from the application. In a distributed/internet/p2p OS, the underlying hardware is the network itself, with its high latency and common communication failures. Personally I agree with the views presented in "A Note on Distributed Computing" (http://research.sun.com/techrep/1994/abstract-29.html) that states that the goal of transparent services over a distributed network is actually a pretty bad idea.
If you haven't read it, read "A Note..". -Justin Jeff Darcy wrote: >>If I may interject here, could we go into help the ignorant mode for a >>second and explain exactly what a "P2P OS" is supposed to be in more >>concrete terms. > > > A pretty good start would be good old POSIX semantics, only applied to > resources across networks instead of on a single machine. > Open/close/read/write for files regardless of where they're located, in a > single file namespace. Fork/wait/kill for processes regardless of whether > they're they're located, in a single process namespace. Memory allocation, > pipes, memory-mapped I/O, other forms of IPC that work the same regardless > of location. Think of what had to happen to go from uniprocessors to > shared-memory SMP, and imagine providing the same levels of functionality > and integration for MP without the shared-memory hardware. Alternatively, > imagine an OS that would take a program written today for some current OS > and run it *unmodified*, taking productive advantage of resources on > multiple machines. It's not that outlandish a goal, really. Some would > claim it has even been done already - by Amoeba, by AEGIS, by VAXClusters, > by Plan 9, by QNX. -- Justin Chapweske, Onion Networks http://onionnetworks.com/ From jeff at platypus.ro Fri Aug 2 13:21:01 2002 From: jeff at platypus.ro (Jeff Darcy) Date: Sat Dec 9 22:12:02 2006 Subject: [p2p-hackers] A Note on Distributed Computing References: <060e01c237f9$55407ff0$407b9fa8@lss.emc.com> <20020731195410.GB347@sporty.spiceworld> <07ab01c238cf$f70a5370$407b9fa8@lss.emc.com> <3D4ADB87.3000802@chapweske.com> Message-ID: <087401c23a61$e93f1760$407b9fa8@lss.emc.com> > Personally I agree with the views presented in "A Note on Distributed > Computing" (http://research.sun.com/techrep/1994/abstract-29.html) that > states that the goal of transparent services over a distributed network > is actually a pretty bad idea. > > If you haven't read it, read "A Note..". I read it, and I still reject the implied appeal to authority. There's a lot of merit to the "virtualization considered harmful" idea, of which the cited note is just one example among many. I agree that trying to hide too much about the underlying hardware can be a mistake...but so can revealing too much. The trick is to develop abstractions that are flexible enough to take advantage of whatever actual environment you're in, but general enough that they're not actually *tied* to that environment. Consider processes and virtual memory in UNIX, for example. They've evolved a long way from the original pretense of actual processors running on actual memory, but they still hide the details of context switching and address translation...and that's still a good thing. What are the equivalent abstractions that would be needed for a P2P OS? Obviously, people are still working on that, but "location" is probably a good example. It's unbelievably cumbersome to require that application layers have full knowledge of topology and routing, but at the same time it would be nice for lower layers to provide them with some abstraction of network distance. Being able to state that some items (e.g. communicating processes) should be located close together and that others (e.g. data replicas) should be as far apart as possible is all the topology involvement many applications need. These things can be expressed quite well in abstract terms, and then mapped as well as possible onto the actual topology by the OS. 
Many distributed applications are already layered this way internally, with the layer boundaries strikingly similar from one application to the next. When I mentioned POSIX semantics, it was only as a starting point. Pure unadorned/unextended POSIX would be ill-suited to distributed environments, but I believe it would be possible to define extensions appropriate for those environments and come up with a reasonable complete description of the services an Internet OS would provide. Or maybe not. Maybe distributed environments are so different that a whole different paradigm is needed, though my own experience from UMA to NUMA to shared-memory clusters to shared-nothing clusters to geographically-distributed systems leads me to believe otherwise. My point is still that the idea of an Internet OS is not as fundamentally broken or unworthy of consideration as you and Bram make it out to be. It still has value in defining what services all this infrastructure will provide for people who actually have some task in mind other than the creation of still more infrastructure.
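To make the 'close together / far apart' placement idea above a little more concrete: a sketch of what such an abstraction might look like, with the runtime free to map the hints onto whatever distance measure it has (RTT, hop count, ...). Every name here is invented for illustration; nothing corresponds to a real API or to anything proposed in this thread.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class PlacementHint:
        item_a: str
        item_b: str
        relation: str      # "near" (e.g. communicating processes) or "far" (e.g. replicas)

    def place(items: List[str], nodes: List[str], hints: List[PlacementHint],
              distance: Callable[[str, str], float]) -> Dict[str, str]:
        """Greedy assignment of items to nodes that honours near/far hints
        using an opaque distance() supplied by the lower layer."""
        assignment: Dict[str, str] = {}
        for item in items:
            def score(node: str) -> float:
                s = 0.0
                for h in hints:
                    other = h.item_b if h.item_a == item else h.item_a if h.item_b == item else None
                    if other is None or other not in assignment:
                        continue
                    d = distance(node, assignment[other])
                    s += d if h.relation == "near" else -d
                return s
            assignment[item] = min(nodes, key=score)
        return assignment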
From Bernard.Traversat at Sun.Com Fri Aug 2 13:27:02 2002 From: Bernard.Traversat at Sun.Com (Bernard Traversat) Date: Sat Dec 9 22:12:02 2006 Subject: [p2p-hackers] Performance of JXTA? References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> <3D4A127C.6080103@yahoo.com> <200208020651.31391.tpm101@gmx.net> <3D4A1F49.103525B3@orcaware.com> Message-ID: <3D4AE522.8090206@Sun.Com> Blair Zajac wrote: >Tim Muller wrote: > >>On Friday 02 August 2002 06:02, Brad Neuberg wrote: >> >>>I am interested in anyone's opinion on whether JXTA is scalable. It >>>seems to use things similar to flooding techniques. Please reply to me >>>personally and not the list to prevent list-traffic. >>> >>Isn't that what the list is for? >> >>I'd be quite interested in any opinions on that as well, and I'm sure I'm not >>the only one. Am I? >> >I'm interested also. > >Now there's more traffic to reply to the non-list issue than there >would be if only the question were asked. > >Best, >Blair > Hi, Let me try here to give a quick overview of JXTA scalability. The JXTA protocols' overall intent is to provide a pluggable and generic platform framework. The key point here is providing building-block mechanisms where policies can be customized by developers to achieve ultimately the best performance and scalability. Scalability is something we are constantly reevaluating with the help and comments of the JXTA community and everyone. For instance, we have just published a couple of new proposed enhancements to improve scalability (http://platform.jxta.org/java/currentwork.html) We looked at scalability in two dimensions: - "Vertical" scalability, to ensure that a single platform instance performs as well as it can. Before we can address multi-peer scalability issues we need to ensure that a platform instance makes the best use of its local resources (CPU, memory, network). We are working towards implementing efficient thread pools, reducing extra message buffering and copying within the platform implementation. We are moving towards a model where peers establish a connection, get their addressed messages, and then remain quiescent until another message is available or the connection is closed. By suitably controlling the permitted number of connections, this model will enable massive peer connection churn rates. A lot of work was done in this area in the latest Platform release. - "Horizontal" scalability, to address multi-peer scalability operations.
JXTA horizontal scalability is based on the following core concepts: 1) JXTA from the beginning makes the assumption that the way to achieve overall scalability is to continuously adapt the network towards direct peer-to-peer interactions between peers (i.e. minimizing intermediary hops and overall resource consumption), and to scope peer interactions within well-localized peergroups to minimize propagation of queries within the entire network. 2) "PeerGroup". Peergroups define logical virtual domains irrespective of the peers' physical network topology. A peer can join as many peergroups as it wants. Peers can create as many peergroups as they need. A peergroup can contain as many peers as necessary. The JXTA protocols only define the mechanisms to create, advertise and join peergroups, not policies. For instance, peers can assign different membership policies when creating peergroups. Peergroups are primarily used for scoping peer interactions, so only peers that are members of the peergroup handle the discovery or propagate requests associated with that peergroup. A JXTA propagate request only floods the peers that are currently members of the peergroup. Non-member peers don't see the request. JXTA does not impose any specific organization on peergroups. One may create a "hierarchy" of peergroups or have a "flat" peergroup world. This is left to application or service developers. 3) "Rendezvous" Peers. Within each peergroup a number of edge peers can dynamically elect themselves to act as rendezvous for the peergroup. Rendezvous are used to facilitate the "long range" discovery of resource advertisements in a peergroup. Edge peers publish advertisements of resources they want to share to their known rendezvous, to make it easier for peers to find these advertisements. Peers search for rendezvous peers and query them when looking for an advertisement, if they cannot find it via their local proximity network (i.e. subnet multicast discovery). Rendezvous are scoped by peergroups. Each peergroup has its own set of rendezvous. A single peer can act as rendezvous for multiple peergroups if it wants, but to do so it will have to become a member of each peergroup. In the current implementation, when a request arrives at a rendezvous it is propagated to all other known rendezvous for this peergroup, as well as the edge peers known by these rendezvous (i.e. flooding), until the advertisement is found. Queries have a TTL and loops are detected. We are looking to add a couple of enhancements to improve the rendezvous propagation due to its limited scalability. First, we are looking to limit the propagation to only the rendezvous peers, by adding an advertisement index service on the rendezvous and having edge peers publish indices of their shared advertisements onto rendezvous. This will permit rendezvous search queries to be propagated only between rendezvous, not to the edge peers. We are also looking to define a loosely-coupled "topology" structure for rendezvous to enable the use of the most efficient propagation or search mechanism. For example, rendezvous will maintain a semi-consistent view of all available rendezvous and build rendezvous "walkers" to propagate the query or forward the query to the rendezvous that is likely to have the information. It is important to point out that we are looking to provide a pluggable "walker" framework where each service will have the ability to define its own walker policy to visit the rendezvous.
A service could use a Distributed Hash Index walker (CAN, Chord, etc.). We are looking to implement a semi-consistent hash index distribution. Other services could use a multicast tree or a sequential walk to visit all the rendezvous. The pluggable walker framework, we believe, is essential for scalability, to guarantee that the best walker policy can be defined by each service. 5) Every resource shared by a peer in the JXTA network is described by an advertisement (an XML meta-data document). Advertisements have an associated expiration lifetime. Advertisements are purged from each peer's cache when their lifetime expires. The advertisement expiration mechanism enables the network to self-heal in a completely decentralized manner. Peers may at any time renew the lifetime of their advertisements by extending the lifetime or republishing the advertisement. This uniform representation enables the entire set of JXTA protocols to use a unique resolution mechanism (the JXTA Resolver) for discovering and binding resources (Peers, PeerGroups, Pipes, etc.) to a physical location. 6) Source-Based Routing and Dynamic Next Hop Route Forwarding. JXTA uses two basic mechanisms for routing messages within the JXTA network. Messages carry an optional "routing" header that contains the aggregate forward and reverse routes (the sequence of hops to reach the destination) that a JXTA message is in the process of taking to reach its destination and has taken from its original source. Any peer that runs a router service has the ability to cache route information and optimize the route if it knows a better route. At any point in time, if the next hop in the route is not reachable, the peer can issue dynamic route discovery requests to find a new valid route, update the routing header and forward the message to the next hop. Routes, like any other resources, are represented by advertisements with their own expiration lifetime. Bad routes are purged when their lifetimes expire. When a message is received, the destination can use the reverse route information to reply to the source peer. So much for my quick overview, sorry if this message is too long :-) I am sure I may have missed a couple of other points. Now, JXTA is a work in progress; we just submitted an IETF draft. We need the help of everyone in the P2P community to improve and refine what we have, and to introduce new ideas to make JXTA as successful as possible. Cheers, Bernard Traversat Project JXTA Sun Microsystems Ps: You may want to read this to learn more about JXTA: http://www.jxta.org/docs/JXTAprotocols.pdf -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20020802/cd1427ee/attachment.html From alk at pobox.com Fri Aug 2 13:39:01 2002 From: alk at pobox.com (Tony Kimball) Date: Sat Dec 9 22:12:02 2006 Subject: [p2p-hackers] A Note on Distributed Computing In-Reply-To: <3D4ADB87.3000802@chapweske.com> References: <07ab01c238cf$f70a5370$407b9fa8@lss.emc.com> <3D4ADB87.3000802@chapweske.com> Message-ID: <200208021539.14318.alk@pobox.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Friday 02 August 2002 14:20, Justin Chapweske wrote: > ...the goal of transparent services over a distributed network > is actually a pretty bad idea. It is precisely as good or bad an idea as hot-pluggable hardware. Recent OS work has come to deal well with a dynamic resource set.
-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6 (GNU/Linux) Comment: For info see http://www.gnupg.org iD8DBQE9Su3wiZRIr8ozroIRAgEuAJoCiHMGajH+xfzdH/6Rjx5HOYpESACeN9Hk qWqy2kN9GkjKQOdNtnsHJmg= =bgqH -----END PGP SIGNATURE----- From decoy at iki.fi Fri Aug 2 14:32:01 2002 From: decoy at iki.fi (Sampo Syreeni) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] A Note on Distributed Computing In-Reply-To: <3D4ADB87.3000802@chapweske.com> Message-ID: On 2002-08-02, Justin Chapweske uttered to p2p-hackers@zgp.org: >Personally I agree with the views presented in "A Note on Distributed >Computing" (http://research.sun.com/techrep/1994/abstract-29.html) that >states that the goal of transparent services over a distributed network >is actually a pretty bad idea. To a degree the point is sound. But I'd much rather follow that path of abstracting those differences that will remain into separate interfaces to objects which may otherwise be used in ways we are used to. That gives us code reuse, a learning curve which is far less steep than the alternative, more possibilities for beneficial abstraction and modularity plus more. Also, the writers seem to forget that some of the earlier paradigms which were expected to take, but didn't, in fact expose a *lot* to the programmer. That's the case with most message passing architectures, at least, and client-server in the IP world. The same goes for soft NFS mounts, which were used as an example -- people simply do not have the time or the energy to get their apps running on something with twice the error handling requirements of an earlier platform. Another reason is that most of the things earlier distributed environments were trying to accomplish were just simpler to achieve in a centralized one; an economic point, this one. Hiding and abstraction are good, when they don't fall prey to overt ambition (pace CORBA). Not all differences between local and remote objects can be hidden, so that shouldn't be the goal. But trying to unify as much of the two as possible isn't bad, either. I think the goal is to hide as much as possible by default, but still expose lower level details through suitable, regular interfaces to those who need 'em and know how to take advantage of them. -- Sampo Syreeni, aka decoy - mailto:decoy@iki.fi, tel:+358-50-5756111 student/math+cs/helsinki university, http://www.iki.fi/~decoy/front openpgp: 050985C2/025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2 From jeff at platypus.ro Fri Aug 2 14:37:01 2002 From: jeff at platypus.ro (Jeff Darcy) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] A Note on Distributed Computing References: <060e01c237f9$55407ff0$407b9fa8@lss.emc.com> <20020731195410.GB347@sporty.spiceworld> <07ab01c238cf$f70a5370$407b9fa8@lss.emc.com> <3D4ADB87.3000802@chapweske.com> <087401c23a61$e93f1760$407b9fa8@lss.emc.com> Message-ID: <00c501c23a6c$a0dcd060$2002a8c0@jdarcyvaio> Did that message appear three times for anyone else? I certainly didn't *send* it three times. ----- Original Message ----- From: "Jeff Darcy" To: Sent: Friday, August 02, 2002 4:19 PM Subject: Re: [p2p-hackers] A Note on Distributed Computing From Bernard.Traversat at Sun.Com Fri Aug 2 16:12:02 2002 From: Bernard.Traversat at Sun.Com (Bernard Traversat) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Performance of JXTA? 
References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> <3D4A127C.6080103@yahoo.com> <3D4A3636.6060703@neurogrid.com> Message-ID: <3D4B0BD0.9070509@Sun.Com> Sam Joseph wrote: > Brad Neuberg wrote: > >> I am interested in anyone's opinion on whether JXTA is scalable. It >> seems to use things similar to flooding techniques. Please reply to >> me personally and not the list to prevent list-traffic. > > > Please excuse me if I reply to the list since I would be interested in > hearing other people's input on this as well. > > I have been investigating JXTA and JXTASearch, and I am unclear about > a number of things. It seems that JXTA supports a number of different > protocols. > http://spec.jxta.org/v1.0/docbook/JXTAProtocols.html > > Two of them seem particularly relevant to routing, PDP and ERP (Peer > Discovery Protocol and Endpoint Routing Protocol). PDP supports peers > advertising things about themselves, and ERP provides a framework for > establishing a route between peers that currently do not know about > each other. The ERP process appears to involve using PDP to > establish which other peers are playing the role of Routers, and then > contacting them to try and establish a route to the desired > destination. The process of establishing a route does appear to have > some flooding aspects, in that routers query other routers they know in > order to establish a sequence of hops that will link the start and end > points in a route. > > However it is not clear to what extent different JXTA applications use > these different protocols. It would seem perfectly plausible to build > a JXTA application that relied on flooding techniques and also one > that didn't. A single JXTA node could operate like Alpine (maintain > many individual connections and not forward messages) and never use > the ERP. Conversely it would seem that one could equally implement a > gnutella clone on top of JXTA. > Sam, You are correct. The pluggable policy framework of the JXTA platform was intended to enable exactly this kind of diversity. The JXTA platform does provide default routing (ERP) and propagation (Rendezvous) policies. Our current focus is to make the default policies as scalable as possible by extending the protocols to avoid flooding. Applications will still have the ability to override or extend the default policies. B. > > > I'm no JXTA expert, but I would guess that JXTA itself is not committed > to a particular routing technique .... > > CHEERS> SAM > > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers
It all looks the same to me if you are either using a standard protocol to access them, or running the same software on each node. So, desktop OS's also do a good job of memory management. They can provide gigabytes of "virtual memory" even if only 64M of RAM is available. But again, from a latency perspective they do a *horrible* job of abstracting memory. Virtual memory doesn't actually make it practical for a single application to use more than the physically available RAM, because the application's continuous swapping would be unacceptable from a performance perspective. Okay, so maybe we look at a desktop OS's process control and its ability to run a process across any number of CPUs. This smells kind of like mobile agents in the distributed computing world. Now, I think mobile agents are neat, but I have yet to find enough uses for them to make me think that they would qualify as first-class citizens in an Internet OS like processes do on a desktop OS. There are also the distributed processing SETI@home-style applications that would qualify as a distributed process in an Internet OS. But, honestly, how many programs need huge amounts of CPU-power, but can tolerate huge latencies? Personally, I've been satisfied with the CPU-power of desktop PCs for a number of years. Developers don't need an Internet OS, they need web services, and they need libraries that do some of the hard work for them. Developers want distributed hash tables, search web services (Tristero), and content delivery systems. I think people who are envisioning an "Internet OS" have some great ideas for how some of those pieces might be implemented. And I really want to hear about these ideas, but due to the large number of impedance mismatches, referring to the "Internet OS" only leads to inefficient and vague communication. -- Justin Chapweske, Onion Networks http://onionnetworks.com/ From adam at cypherspace.org Fri Aug 2 20:23:01 2002 From: adam at cypherspace.org (Adam Back) Date: Sat Dec 9 22:12:03 2006 Subject: "p2p OS" / distributed system OSes (Re: [p2p-hackers] A Note on Distributed Computing) In-Reply-To: <3D4B40BA.9010509@chapweske.com>; from justin@chapweske.com on Fri, Aug 02, 2002 at 09:32:26PM -0500 References: <3D4B40BA.9010509@chapweske.com> Message-ID: <20020803042213.A495637@exeter.ac.uk> There are a few examples of first-class (in the object sense) distributed OSes, eg TWOS (Time-Warp Operating System), which is available on various parallel and distributed computer systems. It's a special purpose operating system for Parallel Discrete Event Simulation (PDES). So yes it's a real operating system, replete with useful abstractions allowing convenient marshalling of the processing power of a large number of processors, potentially communicating over high latency links -- it could be used on the "internet". While the abstractions and design patterns supported by TWOS may not be useful for other types of application, and would probably feel quite alien and unfamiliar to someone not familiar with the domain of PDES, it is an operating system, and it provides convenient abstractions and design patterns which have emerged in that community. So I think "internet OS" and "p2p OS" do not have obvious candidate OSes right now which qualify for those terms, but that will change in the future. I don't find the terms vacuous, rather I think the lack of clarity in what exactly would qualify comes from the current relative immaturity of p2p and loosely parallel applications programming.
As useful ideas are grouped as design patterns, these will likely eventually get integrated with OS support, and eventually operating systems specifically specialized to support these application classes. The JXTA work is interesting from that perspective -- it is one of the few current projects trying to generalise design patterns, which is a useful step towards operating system support. (Custom Applications -> Generic Library -> OS support -> special purpose OSes for application domains). btw The design pattern used to abstract away communications latency in distributed systems is called "parallel slackness". (TWOS uses this abstraction as do a number of distributed system frameworks.) Adam -- http://www.cypherspace.org/adam/ On Fri, Aug 02, 2002 at 09:32:26PM -0500, Justin Chapweske wrote: > I of course agree with Sampo and Jeff that, in general, hiding and > abstraction are good software design qualities. But I think there are > certain things that do not lend themselves well to abstraction, such as > latency, and error conditions. > > So while desktop OS's do a great job of abstracting hardware, they do a > pretty poor job of masking hard drive latency. Think about it, how many > people on this list would perform I/O in a GUI event thread? > > So, desktop OS's do a good job of abstracting hardware...What hardware > is there to abstract on the internet? It all looks the same to me if > you are either using a standard protocol to access them, or running the > same software on each node. > > So, desktop OS's also do a good job of memory management. They can > provide gigabytes of "virtual memory" even if only 64M of RAM is > available. But again, from a latency perspective they do a *horrible* > job of abstracting memory. Virtual memory doesn't actually make it > practical for a single application to use more than the physically > available RAM, because the application's continous swapping would be > unacceptable from a performance perspective. > > Okay, so maybe we look at a desktop OS's process control and its ability > to run a process across any number of CPUs. This smells kind of like > mobile agents in the distributed computing world. Now, I think mobile > agents are neat, but I have yet to find enough uses for them to make me > think that they would qualify as first-class citizens in an Internet OS > like processes do on a desktop OS. > > There are also the distributed processing SETI@home-style applications > that would qualify as a distributed process in an Internet OS. But, > honestly, how many programs need huge amounts of CPU-power, but can > tolerate huge latencies? Personally, I've been satisified with the > CPU-power of desktop PCs for a number of years. > > Developers don't need an Internet OS, they need web services, and they > need libraries that do some of the hard work for them. Developers want > distributed hash tables, search web services (Tristero), and content > delivery systems. > > I think people who are envisioning an "Internet OS" have some great > ideas for how some of those pieces might be implemented. And I really > want to hear about these ideas, but due to the large number of impedance > mismatches, referring to the "Internet OS" only leads to innefficient > and vague communication. 
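For anyone who has not met "parallel slackness" before, the idea is simply to keep many more logical tasks in flight than there are physical workers, so that the latency of any one remote operation is overlapped with work on the others. A minimal sketch in plain Java follows -- the 100 ms sleep stands in for a network round trip, and none of this is TWOS, JXTA or any other real system's API:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Toy illustration of parallel slackness: 200 logical tasks on 8 workers.
// While some tasks sit out their simulated network latency, the workers
// stay busy with the others, so the latency is largely hidden.
public class SlacknessDemo {
    static int simulatedRemoteCall(int i) throws InterruptedException {
        Thread.sleep(100);   // stand-in for ~100 ms of round-trip latency
        return i * i;        // stand-in for the remote result
    }

    public static void main(String[] args) throws Exception {
        int workers = 8, tasks = 200;            // slackness = tasks / workers
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<Integer>> results = new ArrayList<>();
        long start = System.nanoTime();
        for (int i = 0; i < tasks; i++) {
            final int n = i;
            results.add(pool.submit(() -> simulatedRemoteCall(n)));
        }
        long sum = 0;
        for (Future<Integer> f : results) sum += f.get();
        pool.shutdown();
        System.out.printf("sum=%d, elapsed=%d ms (serial would be about %d ms)%n",
                sum, (System.nanoTime() - start) / 1_000_000, tasks * 100);
    }
}

With 8 workers and 200 outstanding calls the elapsed time comes out near (tasks / workers) x 100 ms rather than tasks x 100 ms, which is the whole trick.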
From jeff at platypus.ro Sat Aug 3 07:29:02 2002 From: jeff at platypus.ro (Jeff Darcy) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] A Note on Distributed Computing References: <3D4B40BA.9010509@chapweske.com> Message-ID: <00eb01c23afa$11735000$2002a8c0@jdarcyvaio> > Developers don't need an Internet OS, they need web services Yeah, because "web services" is so much better defined and less buzzwordy than "Internet OS" right? ;-) From justin at chapweske.com Sat Aug 3 07:56:02 2002 From: justin at chapweske.com (Justin Chapweske) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] A Note on Distributed Computing References: <3D4B40BA.9010509@chapweske.com> <00eb01c23afa$11735000$2002a8c0@jdarcyvaio> Message-ID: <3D4BEECC.5030109@chapweske.com> "web services" is definitely an overblown buzzword, but I think most developers have the same definition of what that term means. A web service is a remote programmatic interface that is accessible by HTTP, often in combination with an XML-based encoding such as XML-RPC or SOAP. So if I tell this group that I'm writing a web service that, when given two IP addresses, returns the estimated latency between those hosts, I think you guys will have a pretty good idea of what I'm talking about. Oh, and by the way, there is such a project available at (http://idmaps.eecs.umich.edu/index.php) Jeff Darcy wrote: >>Developers don't need an Internet OS, they need web services > > > Yeah, because "web services" is so much better defined and less buzzwordy > than "Internet OS" right? ;-) > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers -- Justin Chapweske, Onion Networks http://onionnetworks.com/ From oskar at freenetproject.org Sat Aug 3 08:19:02 2002 From: oskar at freenetproject.org (Oskar Sandberg) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] A Note on Distributed Computing In-Reply-To: <3D4BEECC.5030109@chapweske.com> References: <3D4B40BA.9010509@chapweske.com> <00eb01c23afa$11735000$2002a8c0@jdarcyvaio> <3D4BEECC.5030109@chapweske.com> Message-ID: <20020803151817.GB5414@sporty.spiceworld> On Sat, Aug 03, 2002 at 09:55:08AM -0500, Justin Chapweske wrote: > "web services" is definitely an overblown buzzword, but I think most > developers have the same definition of what that term means. A web > service is a remote programmatic interface that is accessible by HTTP, > often in combination with an XML-based encoding such as XML-RPC or SOAP. This must be a new development, because at the O'Reilly conference in DC I must have gotten 20 different definitions ranging from "Anything that uses HTTP" to "Anything that uses XML" and everything in between. I also thought it was mostly Microsoft astroturfers that wanted them...
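Whatever definition one settles on, the latency-lookup example above is concrete enough to sketch. Below is a rough illustration of a client for such a service, with the XML-RPC request hand-rolled over plain HTTP so the sketch stays self-contained; the endpoint http://example.org/RPC2 and the method name latency.estimate are invented for illustration and are not the real IDMaps interface:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Hypothetical client for a "given two IPs, return estimated latency" web
// service. Endpoint and method name are made up; only the XML-RPC envelope
// shape is real.
public class LatencyClient {
    public static String queryLatency(String ipA, String ipB) throws IOException {
        String xml =
            "<?xml version=\"1.0\"?>" +
            "<methodCall><methodName>latency.estimate</methodName><params>" +
            "<param><value><string>" + ipA + "</string></value></param>" +
            "<param><value><string>" + ipB + "</string></value></param>" +
            "</params></methodCall>";
        HttpURLConnection conn =
            (HttpURLConnection) new URL("http://example.org/RPC2").openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(xml.getBytes(StandardCharsets.UTF_8));
        }
        // The reply is an XML-RPC <methodResponse>; it is returned raw here.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder sb = new StringBuilder();
            for (String line; (line = in.readLine()) != null; ) sb.append(line);
            return sb.toString();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(queryLatency("192.0.2.1", "198.51.100.7"));
    }
}

A SOAP or XML-RPC toolkit would do the same job with more machinery; the request body is written by hand here only to keep the example dependency-free.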
-- Oskar Sandberg oskar@freenetproject.org From dave at userland.com Sat Aug 3 08:24:01 2002 From: dave at userland.com (Dave Winer) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] A Note on Distributed Computing References: <3D4B40BA.9010509@chapweske.com> <00eb01c23afa$11735000$2002a8c0@jdarcyvaio> <3D4BEECC.5030109@chapweske.com> <20020803151817.GB5414@sporty.spiceworld> Message-ID: <084001c23b01$c39d57b0$33a1dc40@murphy> De-lurking, perhaps temporarily. I wrote the foreword to the O'Reilly book on XML-RPC, and while it doesn't use the "web services" term too much (if at all), I do explain why this stuff is important. http://davenet.userland.com/2001/05/17/forewordToOreillysXmlrpcBook Basic one-phrase summary: Turn the Internet into a scripting environment. Dave From oskar at freenetproject.org Sat Aug 3 08:33:02 2002 From: oskar at freenetproject.org (Oskar Sandberg) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Does JXTA work as a "Universal Toolkit"? In-Reply-To: <3D4A4E39.6090604@yahoo.com> References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> <3D4A127C.6080103@yahoo.com> <3D4A3636.6060703@neurogrid.com> <3D4A4E39.6090604@yahoo.com> Message-ID: <20020803153246.GC5414@sporty.spiceworld> On Fri, Aug 02, 2002 at 02:17:45AM -0700, Brad Neuberg wrote: > Here's another question: JXTA purports to be a "Universal Toolkit" of > sorts, solving many of the problems that a P2P developer would have to > go through. From a distance it looks like this to me: it has some nice > abstractions, such as Pipes, Peers, Peer Groups, and Peer Services, as > well as some core services, such as a Router Service, a Rendezvous > Service, a Membership Service, etc. As I get closer to it, though, my > brain starts to hurt. I start thinking about the P2P design patterns > for distributed storage that I know about, such as distributed > hashtables, emergent networks, etc., and they don't seem to fit terribly > well into the actualities of JXTA; there seems to be some cognitive > dissonance, at least for me. From far away I can easily fit P2P > patterns into JXTA, but up-close I find that things just don't fit. > Have other people found this themselves? Absolutely. As far as i have been able to tell JXTA gives nothing to people who are working on distributed peer to peer architectures, as opposed to peer to peer applications, and in some ways even attempts to pull the rug out from under their feet by presenting the architecture as a solved problem - which it clearly is not. As one example of a peer to peer system I have nothing against JXTA, but the assumptions on which it is based and the structures it uses means it can neither be compatible nor indeed compete with a lot of the other systems being developed. > It does surprise me how many > _new_ projects there are that _don't_ use JXTA, such as Mnet, > BitTorrent, The Circle, etc. I am interested in soliciting from folks > why they chose not to use JXTA, such as whether it was performance > issues, language issues, a raunchy API (my opinion), etc. I don't see how this can surprise you, it would surprise me greatly if any of them were (for one, AFAIK, the only JXTA implementation is in java, and the programs you mentioned are not.) There is no reason to see JXTA as the be all end all of peer to peer architectures, rather it is just another in the long line of them - just one that seemingly still lacks real world applications (not that there is anyting wrong with either of those things.) 
-- Oskar Sandberg oskar@freenetproject.org From tboyle at rosehill.net Sat Aug 3 09:48:01 2002 From: tboyle at rosehill.net (Todd Boyle) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Does JXTA work as a "Universal Toolkit"? In-Reply-To: <20020803153246.GC5414@sporty.spiceworld> References: <3D4A4E39.6090604@yahoo.com> <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> <3D4A127C.6080103@yahoo.com> <3D4A3636.6060703@neurogrid.com> <3D4A4E39.6090604@yahoo.com> Message-ID: <5.1.0.14.0.20020803092726.025df750@popmail.cortland.com> At 08:32 AM 8/3/2002, Oskar Sandberg wrote: >On Fri, Aug 02, 2002 at 02:17:45AM -0700, Brad Neuberg wrote: > > Here's another question: JXTA purports to be a "Universal Toolkit" of > > sorts, solving many of the problems that a P2P developer would have to > > go through. From a distance it looks like this to me: it has some nice > > abstractions, such as Pipes, Peers, Peer Groups, and Peer Services, as > > well as some core services, such as a Router Service, a Rendezvous > > Service, a Membership Service, etc. As I get closer to it, though, my > > brain starts to hurt. I start thinking about the P2P design patterns > > for distributed storage that I know about, such as distributed > > hashtables, emergent networks, etc., and they don't seem to fit terribly > > well into the actualities of JXTA; there seems to be some cognitive > > dissonance, at least for me. From far away I can easily fit P2P > > patterns into JXTA, but up-close I find that things just don't fit. > > Have other people found this themselves? > >Absolutely. As far as i have been able to tell JXTA gives nothing to >people who are working on distributed peer to peer architectures, as >opposed to peer to peer applications, and in some ways even attempts to >pull the rug out from under their feet by presenting the architecture as >a solved problem - which it clearly is not. The lower you go in an OS, computing language or OSI protocol stack, the larger the corporations and levels of government that control it, and the more artifacts are built into it for the capture of rent by the organizations that consorted to built it. Accordingly, freedom might only exist at the application level, or perhaps, a modeling layer from which the application layer is built. If anything good is ever going to happen, that is not bound to a particular OS, hardware platform or stack etc., it will be a vision expressed as a model that can be built in multiple languages, i.e. without dependencies on something like JXTA. I don't think this necessitates that every application on sourceforge actually be built in multiple languages or platforms. But wherever an application or especially, a platform or OS, gets sufficiently advanced to outrun its competition, rent collecting happens, in other words I should hope that developers "say no" to commercial tool kits and instead, get right down to accomplishing the same things in another language in order to calm user fears and win user adoption. Probably, I'm not the first person to think of these things, TOdd From lgonze at panix.com Sat Aug 3 10:29:01 2002 From: lgonze at panix.com (Lucas Gonze) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Does JXTA work as a "Universal Toolkit"? 
In-Reply-To: <20020803153246.GC5414@sporty.spiceworld> Message-ID: > As one example of a peer to peer system I have nothing against JXTA, but > the assumptions on which it is based and the structures it uses means it > can neither be compatible nor indeed compete with a lot of the other > systems being developed. I think the Jxta abstractions of endpoints and pipes are pretty, but the toolkit is too heavyweight to adopt incrementally. with p2p infrastructure what's not going to happen is to get everybody in the world to adopt your "kernel". useful infrastructure tools are more like awk and sed than the hurd. everybody agrees that unix-like abstractions are a great thing, but jxta needs to find a way to provide them in a less encompassing way. xml-rpc works because it's not intrusive. you don't join it, you plug it in. same for all web services -- instead of myvar=`echo "$val" | awk '{print $3}'` you do myvar=`echo "$val" | xargs ... lynx -dump "http://awker.org/{}/field3"` or more realistically, to create a file of rhubarb pie recipes:
echo "rhubarb%20pie%20recipe" | xargs -i'{}' lynx -dump "http://google.com/search?hl=en&ie=UTF-8&oe=UTF-8&q={}" | egrep "[0-9]*\. http:/" | grep -v google.com | grep -v cache | sed 's/[0-9]*\.//' | xargs -i'{}' lynx -dump {} > rhubarb_pie_recipes.txt
- Lucas From tkimball at odo.net Tue Aug 6 08:51:01 2002 From: tkimball at odo.net (Tony Kimball) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Internet operating system In-Reply-To: <3D45E80F.40105@chapweske.com> References: <3D45E80F.40105@chapweske.com> Message-ID: <200207292225.11460.tkimball@odo.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Monday 29 July 2002 20:12, Justin Chapweske wrote: > My principal objection is that it is a non-idea around which no > meaningful discussion can occur. It's like a "personal democracy". It > sounds like a thought provoking idea, but it really doesn't mean anything. (Perhaps the topic is more apposite to some other mailing list, for the time being; however, I address it in the venue of origin, since I really don't have a clue about such fine points of etiquette.) There seems to me to be some distinguishable meaning. What prevents that meaning from being distinct, clear, well-defined, is the fact that it refers to a hypothetical future invention, but that does not make the phrase intrinsically meaningless any more than was the term "voice telegraphy" in the 1860s. Perhaps what suggests the notion of an Internet operating system worthy of the name (as opposed specifically to Cisco's IOS), is the marketing slogan, "the network *is* the computer". It would be a mistake to write this aphorism off as sheer marketing hyperbole, and to the degree that it has a useful meaning, the notion of an Internet operating system is lent some clarity: An Internet operating system is a body of software which allocates resources to the requests of client programs in the specific case in which the client programs and/or resources are distributed and Internetworked. That Internet operating system is a p2p entity if it operates on the basis of purely local policies. It adds utility -- adds *value* -- to the computers which are cooperating under its aegis by marshalling resources otherwise unavailable to a given application, to address a need or solve a problem for its users. Being both abstract and unrealized are not sufficient grounds for the term to be taken as meaningless.
It is a matter of some interest to me, what validity there is in that old saw, "the network *is* the computer". It seems that a great many fundamental computing primitives are purely communicative operations. Often the n 1's that add up to an algorithmic O(n) are each individually operations in which a value is communicated from one location to another. When computation is conceived as being performed by communication, something already evident comes poignantly to the fore: It is the hierarchical latency of Internetworked systems, in which the coarsest layer of application protocols embrace latencies a full 9 orders of magnitude greater than the latencies of the finest grain layer of register operations, that represents the first barrier to a meaningful integration of Internetworked systems by an Internet operating system. But this latency hierarchy is not fundamentally different in kind from the storage hierarchies managed by conventional operating systems -- nor even very different in degree, when tertiary storage systems are considered. But only slightly less obvious than this barrier, and definitely a fresher, and less understood problem, is the barrier of ownership, trust and permission. It is difficult to conceive of an operating system which runs on a system of processors which are so balkanized, so mutually hostile, as the disparate nodes of the Internet, which are as often or more often perceived as competing -- sometimes in the most malicious ways -- as they are perceived as cooperating. Still I must ask, is it realistic to take it for granted that this less tractable barrier represents a problem so formidable that no innovation in technique or approach can resolve it? And when such a body of innovation is realized, by some one or more p2p-hackers, what are we to call the result, if not, truly, an Internet operating system? Finally, the use of the term is analogically productive. For example: The p2p applications of today (and such they are, whether or not they wish to be called that) in effect run on the bare metal of the network which is the computer. By analogy to the operating system of a local node, one can hypothesize that the Internet operating system of the future can find a value-niche in which are made some trade-offs of flexibility and performance against ease of use and safety, and the complexity of implementation of low-level primitives is hidden from the application programmer. This analogy is productive in that it suggests the rough outlines of a programme of development. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6 (GNU/Linux) Comment: For info see http://www.gnupg.org iD8DBQE9RgcPiZRIr8ozroIRAny0AJ9BrdUr9pTYlRoe9btAK0sU13HB6QCgmaus CQEse9Fxzn0gn0H06p0iKfg= =UuJe -----END PGP SIGNATURE----- From arachnid at notdot.net Tue Aug 6 08:51:03 2002 From: arachnid at notdot.net (Nick Johnson) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] 'Flat' P2P Idea In-Reply-To: <20020801223407.P24302@research.att.com> References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> Message-ID: <1028308107.3d4abc8bf0f79@goliath.notdot.net> > Assuming I understand the protocol correctly, it would seem to me if > there was a flood of traffic, this network would stand a good chance > of becoming disconnected. As I understand it, the connection from A > -> > B is essentially "forgotten" when B becomes the n+1th person on A's > list, > correct? Now, for simplicity's sake, assume that A and B have the same > n > (it's not necessary in general).
All it would take is n messages from > different people before they got a message off to each other for the > pair > to become disconnected. Depending on traffic patterns and the size of > n, > this may not be likely under normal conditions, but it is certainly > easy > enough to pull off as a malicious attack. Hmm. UDP spoofing is an issue that hadn't occurred to me. One other solution that might both solve that and improve the reliability of the network would be to send out a ping packet every time a new host is discovered, and not add them to the list until they are 'validated' by receiving a corresponding pong packet. Apart from eliminating packet forging as an effective flooding method, it would also ensure that a host is not at the top of anyone's list unless they know packets can reach them. > A possible solution is to only move the message sender to the top > of the list with some probability p, where p is adjusted based > on the rate of messages in the system. This sounds like a good idea; however, couldn't a malicious user flood most of the good hosts off the list before this starts to affect the chance of putting a host at the top? Also, it would make flooding harder, but it would also make it harder to get legitimate hosts on the list. One of the advantages this network would (should?) have over tree-based networks is that there is no way to attack the whole network - you can only attack individual clients, and though client IPs are easy to obtain, a malicious user's ability to attack them is limited by his own bandwidth. If we can protect against UDP spoofing, even attacking individual clients should be difficult, save for flooding them off the network altogether. However, as I write, one other spoofing problem occurs to me - a malicious user could spoof packets as being from the victim, and send them to multiple clients requesting large amounts of data - a lot like a 'smurf' attack. As long as a user can request a larger amount of data than they can send, this could be a problem. :( > Another possible critique of this system is it would be impossible to > take advantage of any sort of additional network topology information, > i.e. cluster nodes together which are "close" in some sort of > networking > sense. OTOH, this point is kind of moot, as I don't know of any > widely > deployed tree based P2P systems which actually do this :) Though it doesn't actively take advantage of network topology, it could be made to do so passively. Routes that have low reliability would pass few packets, leading to hosts on the other side of those links being low on a servent's list. Servents could also apply a rating scheme partially dependent on the round-trip time for packets. From justin at chapweske.com Tue Aug 6 09:38:01 2002 From: justin at chapweske.com (Justin Chapweske) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Internet operating system References: <3D45E80F.40105@chapweske.com> <200207292225.11460.tkimball@odo.net> Message-ID: <3D4FFB59.9060505@chapweske.com> Okay, I have no problem talking about an "Internet OS" if people agree on Tony's definition. Tony Kimball wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On Monday 29 July 2002 20:12, Justin Chapweske wrote: > >>My principal objection is that it is a non-idea around which no >>meaningful discussion can occur. It's like a "personal democracy". It >>sounds like a thought provoking idea, but it really doesn't mean anything.
> > > (Perhaps the topic is more apposite to some other mailing list, > for the time being; however, I address it in the venue of origin, > since I really don't have a clue about such fine points of ettiquette.) > > There seems to me to be some distinguishable meaning. What > prevents that meaning from being distinct, clear, well-defined, > is the fact that it refers to a hypothetical future invention, but > that does not make the phrase intrinsically meaningless any more > than was the term "voice telegraphy" in the 1860s. > > Perhaps what suggests the notion of an Internet operating system > worthy of the name (as opposed specifically to Cisco's IOS), is the > marketing slogan, "the network *is* the computer". It would be a > mistake to write this aphorism off as sheer marketing hyperbole, > and to the degree that it has a useful meaning, the notion of an > Internet operating system is lent some clarity: An Internet operating > system is a body of software which allocates resources to the requests > of client programs in the specific case in which the client programs > and/or resources are distributed and Internetworked. That Internet > operating system is a p2p entity if it operates on the basis of purely > local policies. It adds utility -- adds *value* -- to the computers which > are cooperating under its aegis by marshalling resources otherwise > unavailable to a given application. to address a need or solve a problem for > it's users. Being both abstract and unrealized are not sufficient grounds > for the term to be taken as meaningless. > > It is a matter of some interest to me, what validity there is in that old > saw, "the network *is* the computer". It seems that a great many > fundamental computing primitives are purely communicative operations. > Often the n 1's that add up to an algorithmic O(n) are each individually > operations in which a value is communicated from one location to another. > When computation is conceived as being performed by communication, > something already evident comes poignantly to the fore: > > It is the heirarchical latency of Internetworked systems, in which the > coarsest layer of application protocols embrace latencies a full 9 orders of > magnitude greater than the latencies of the finest grain layer of register > operations, that represent the first barrier to a meaningful integration of > Internetworked systems by an Internet operating system. But this latency > heirarchy is not fundamentally different in kind from the storage heirarchies > managed by conventional operating systems -- nor even very different in > degree, when tertiary storage systems are considered. > > But only slightly less obvious than this barrier, and definitely a fresher, > and less understood problem, is the barrier of ownership, trust and > permission. It is difficult to conceive of an operating system which runs on > a system of processors which are so balkanized, so mutually hostile, as the > disparate nodes of the Internet, which are as often or more often perceived > as competing -- sometimes in the most malicious ways -- as they are percieved > as cooperating. > > Still I must ask, is it realistic to take it for granted that this less > tractable barrier represents a problem so formidable that no innovation in > technique or approach can resolve it? And when such a body of innovation is > realized, by some one or more p2p-hackers, what are we to call the result, if > not, truly, an Internet operating system? 
> > Finally, the use of the term is analogically productive. For example: The p2p > applications of today (and such they are, whether or not they wish to be > called that) in effect run on the bare metal of the network which is the > computer. By analogy to the operating system of a local node, one can > hypothesize that the Internet operating system of the future can find a > value-niche in which are made some trade-offs of flexibility and performance > against ease of use and safety, and the complexity of implementation of > low-level primitives is hiden from the application programmer. This analogy > is productive in that it suggests the rough outlines of a programme of > development. > > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.0.6 (GNU/Linux) > Comment: For info see http://www.gnupg.org > > iD8DBQE9RgcPiZRIr8ozroIRAny0AJ9BrdUr9pTYlRoe9btAK0sU13HB6QCgmaus > CQEse9Fxzn0gn0H06p0iKfg= > =UuJe > -----END PGP SIGNATURE----- > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers -- Justin Chapweske, Onion Networks http://onionnetworks.com/ From lgonze at panix.com Tue Aug 6 11:29:01 2002 From: lgonze at panix.com (Lucas Gonze) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Internet operating system In-Reply-To: <3D4FFB59.9060505@chapweske.com> Message-ID: Justin Chapweske wrote: > Okay, I have no problem talking about an "Internet OS" if people agree > on Tony's definition. I agree with Tony's definition. From Bernard.Traversat at Sun.Com Tue Aug 6 14:00:01 2002 From: Bernard.Traversat at Sun.Com (Bernard Traversat) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Does JXTA work as a "Universal Toolkit"? References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> <3D4A127C.6080103@yahoo.com> <3D4A3636.6060703@neurogrid.com> <3D4A4E39.6090604@yahoo.com> <20020803153246.GC5414@sporty.spiceworld> Message-ID: <3D5030FF.1060707@Sun.Com> >On Fri, Aug 02, 2002 at 02:17:45AM -0700, Brad Neuberg wrote: > >>Here's another question: JXTA purports to be a "Universal Toolkit" of >>sorts, solving many of the problems that a P2P developer would have to >>go through. From a distance it looks like this to me: it has some nice >>abstractions, such as Pipes, Peers, Peer Groups, and Peer Services, as >>well as some core services, such as a Router Service, a Rendezvous >>Service, a Membership Service, etc. As I get closer to it, though, my >>brain starts to hurt. I start thinking about the P2P design patterns >>for distributed storage that I know about, such as distributed >>hashtables, emergent networks, etc., and they don't seem to fit terribly >>well into the actualities of JXTA; there seems to be some cognitive >>dissonance, at least for me. From far away I can easily fit P2P >>patterns into JXTA, but up-close I find that things just don't fit. >>Have other people found this themselves? >> Brad, Let me try to help. JXTA defines three layers: core, service and application. The core layer provides the virtual network overlay fabric abstracting the underlying physical network topology providing a uniform addressing space (JXTA IDs) and resource representation (i.e.JXTA advertisements). The service layer provides the pluggable policy layer allowing customization of every peergroup domains (membership, routing, propagation, etc..). 
When creating a new peergroup you have the ability to specify your own set of services, or use default policies (e.g. the Rendezvous and SecureMembership services). The main interface mechanism between the service layer and the core is advertisements (XML documents). The core defines a set of advertisements. Services can manage or extend these advertisements by adding their own tags. For instance, when trying to locate a resource (pipe, content, peer, etc.), the core will attempt to "resolve" or bind the resource to a physical peer location using the associated resource advertisement(s). Any service has the ability to inject new advertisements to help or direct the core to perform this resolution operation in a more efficient way. For example, a distributed hash service could publish an advertisement indicating on which hashed peer(s) the content will be found. > >any of them were (for one, AFAIK, the only JXTA implementation is in >java, and the programs you mentioned are not.) > We just completed a C implementation of the JXTA protocols. It's available at jxta-c.jxta.org. Hope this helps. B. > > >There is no reason to see JXTA as the be all end all of peer to peer >architectures, rather it is just another in the long line of them - just >one that seemingly still lacks real world applications (not that there is >anything wrong with either of those things.) > From p2phackers at bondolo.org Wed Aug 7 14:26:01 2002 From: p2phackers at bondolo.org (p2phackers@bondolo.org) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Does JXTA work as a "Universal Toolkit"? References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> <3D4A127C.6080103@yahoo.com> <3D4A3636.6060703@neurogrid.com> <3D4A4E39.6090604@yahoo.com> <20020803153246.GC5414@sporty.spiceworld> Message-ID: <3D51905C.80202@bondolo.org> Oskar Sandberg wrote: > On Fri, Aug 02, 2002 at 02:17:45AM -0700, Brad Neuberg wrote: > >> Here's another question: JXTA purports to be a "Universal Toolkit" of >> sorts, solving many of the problems that a P2P developer would have >> to go through. From a distance it looks like this to me: it has some >> nice abstractions, such as Pipes, Peers, Peer Groups, and Peer >> Services, as well as some core services, such as a Router Service, a >> Rendezvous Service, a Membership Service, etc. As I get closer to >> it, though, my brain starts to hurt. I start thinking about the P2P >> design patterns for distributed storage that I know about, such as >> distributed hashtables, emergent networks, etc., and they don't seem >> to fit terribly well into the actualities of JXTA; there seems to be >> some cognitive dissonance, at least for me. From far away I can >> easily fit P2P patterns into JXTA, but up-close I find that things >> just don't fit. Have other people found this themselves? > I am indeed curious (and probably others are as well) to hear what is not working for you. > Absolutely. As far as I have been able to tell JXTA gives nothing to > people who are working on distributed peer to peer architectures, as > opposed to peer to peer applications, and in some ways even attempts to > pull the rug out from under their feet by presenting the architecture as > a solved problem - which it clearly is not. I can't agree at all. (maybe it's that I work on JXTA) It almost sounds as if you are arguing against abstraction. As you indicate, JXTA provides some very useful abstractions for application developers, but those abstractions need implementations.
That's where the people on this list are more likely to participate. In what sense do you see JXTA as forcing a particular solution or implementation onto the presented abstractions? It's my feeling that there isn't much in JXTA that CAN'T BE CHANGED by providing alternate implementations of the abstractions. If there is a problem, the JXTA community really does want to know about it (and we might be able to help solve it). > As one example of a peer to peer system I have nothing against JXTA, but > the assumptions on which it is based and the structures it uses means it > can neither be compatible nor indeed compete with a lot of the other > systems being developed. Some of that is inevitable, some decisions are mutually exclusive and others are made in different places and at different times. JXTA does make an effort to mitigate this in the way that it uses meta-data and the few restrictions it does place upon that metadata. What assumptions do you see that are flawed or overly restrictive? >> It does surprise me how many _new_ projects there are that _don't_ >> use JXTA, such as Mnet, BitTorrent, The Circle, etc. I am interested >> in soliciting from folks why they chose not to use JXTA, such as >> whether it was performance issues, language issues, a raunchy API (my >> opinion), etc. >> > > I don't see how this can surprise you, it would surprise me greatly if > any of them were (for one, AFAIK, the only JXTA implementation is in > java, and the programs you mentioned are not.) There is a client-side implementation in C as well. Yes, it's not as complete as the Java 2 implementation, but like every other project we don't have infinite resources. It is, however, a lot closer to being useful than starting a new P2P architecture implementation from scratch. Porting is expensive work. To be honest, my feeling is that if the JXTA people spent all their time on porting and no time on the technology, there would be even more to complain about with JXTA.... It is a disappointment to everyone who works on JXTA that more of these projects don't use JXTA as a basis for their development. We certainly try to do everything we can to be helpful and accommodating. There are a number of viable projects which are using JXTA, ones with actual happy end-users and satisfied developers. More would, of course, be better. > There is no reason to see JXTA as the be all end all of peer to peer > architectures, rather it is just another in the long line of them - just > one that seemingly still lacks real world applications (not that there is > anything wrong with either of those things.) JXTA wasn't intended to be "the last P2P architecture". In particular it's more about being a framework than a particular policy or implementation. The intention is to serve application writers by creating some stable ground on which they can write P2P applications without being closely bound to the underlying implementation. If the approach is successful, an application, for example, shouldn't care if service discovery is done via multicast, gnutella style flooding, DHT, LDAP, Jini, WSDL, UDDI, CORBA, JMS or whatever.... Eager to continue the discussion, Mike From bram at gawth.com Thu Aug 8 21:04:01 2002 From: bram at gawth.com (Bram Cohen) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] meeting sunday Message-ID: Looking at the calendar, I notice that we're calendrically scheduled to have a meeting this Sunday, the 12th. Same old time, same old place, the Metreon, 3pm.
Dave Molnar may be in town the following week, so we may have an impromptu meeting then. -Bram Cohen "Markets can remain irrational longer than you can remain solvent" -- John Maynard Keynes From adam at cypherspace.org Tue Aug 13 21:05:01 2002 From: adam at cypherspace.org (Adam Back) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] tech contact at kazaa? Message-ID: <20020814050444.A849941@exeter.ac.uk> Anyone know the CTO or tech people at kazaa? I need an email address. Thanks Adam From arma at mit.edu Thu Aug 15 13:28:01 2002 From: arma at mit.edu (Roger Dingledine) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Reminder: ACM Workshop on Privacy in the Electronic Society Message-ID: <20020815162733.K3264@moria.seul.org> (Submissions are due in 8 days.) Please re-distribute as appropriate... CALL FOR PAPERS WORKSHOP ON PRIVACY IN THE ELECTRONIC SOCIETY Washington, DC, USA - November 21, 2002 Sponsored by ACM SIGSAC - in association with 9th ACM CCS Conference -------------------------------------------------------------------- The increased power and interconnectivity of computer systems available today provide the ability of storing and processing large amounts of data, resulting in networked information accessible from anywhere at any time. It is becoming easier to collect, exchange, access, process, and link information. This global scenario has inevitably resulted in an increasing degree of awareness with respect to privacy. Privacy issues have been the subject of public debates and the need for privacy-aware policies, regulations, and techniques has been widely recognized. Goal of this workshop is to discuss the problems of privacy in the global interconnected societies and possible solutions to it. The workshop seeks submissions from academia and industry presenting novel research on all theoretical and practical aspects of electronic privacy, as well as experimental studies of fielded systems. We encourage submissions from other communities such as law and business that present these communities' perspectives on technological issues. Topics of interest include, but are not limited to: - anonymity, pseudonymity, and unlinkability - business model with privacy requirements - data protection from correlation and leakage attacks - electronic communication privacy - information dissemination control - privacy-aware access control - privacy in the digital business - privacy enhancing technologies - privacy policies and human rights - privacy and anonymity in Web transactions - privacy threats - privacy and confidentiality management - privacy in the electronic records - privacy in health care and public administration - public records and personal privacy - privacy and virtual identity - personally identifiable information - privacy policy enforcement - privacy and data mining - relationships between privacy and security - user profiling - wireless privacy PAPER SUBMISSIONS Submitted papers must not substantially overlap papers that have been published or that are simultaneously submitted to a journal or a conference with proceedings. Papers should be at most 15 pages excluding the bibliography and well-marked appendices (using 11-point font and reasonable margins on letter-size paper), and at most 20 pages total. Committee members are not required to read the appendices, and so the paper should be intelligible without them. Papers should have a cover page with the title, authors, abstract and contact information. 
To submit a paper, send to wpes@dti.unimi.it a plain ASCII text email containing the title and abstract of your paper, the authors' names, email and postal addresses, phone and fax numbers, and identification of the contact author. To the same message, attach your submission (as a MIME attachment) in PDF or portable postscript format. Do NOT send files formatted for word processing packages (e.g., Microsoft Word or WordPerfect files). Papers must be received by the deadline of AUGUST 23, 2002. Notification of acceptance or rejection will be sent to the authors no later than OCTOBER 13, 2002, and authors will have an opportunity to revise for preproceedings version by NOVEMBER 8, 2002. Authors of accepted papers must guarantee that their paper will be presented at the workshop. During the workshop preproceedings will be made available. Final proceedings with be published, after the workshop, by ACM. Final versions are not due until after the workshop, giving the authors the opportunity to revise their papers based on discussions during the meeting. PROGRAM CHAIR Pierangela Samarati Dipartimento di Tecnologie dell'Informazione Universita` di Milano email: samarati@dti.unimi.it phone: +39-02-503.30061 fax: +39-02-503.30010 GENERAL CHAIR Sushil Jajodia George Mason University, USA email: jajodia@ise.gmu.edu PUBLICITY CHAIR Sabrina De Capitani di Vimercati University of Brescia, ITALY email: decapita@ing.unibs.it PROGRAM COMMITTEE Lawrence H. Cox, NC for Health Statistics, USA Lorrie Cranor, AT&T Labs-Research, USA Sabrina De Capitani di Vimercati, U. Brescia, Italy Roger Dingledine, The Free Haven Project, USA Avi Rubin, AT&T Labs-Research, USA Andrea Servida, CEC, Belgium Peter Swire, George Washington Un., USA Paul Syverson, Naval Research Lab, USA Michael Waidner, IBM Zurich Research Lab, Switzerland Chenxi Wang, Carnegie Mellon University, USA Rigo Wenning, W3C, France Marc Wilikens, Joint Research Center, Italy Marianne Winslett, U. of Illinois Urbana-Champaign, USA Rebecca Wright, Stevens Institute of Technology, USA This call for papers and additional information about the conference can be found at http://seclab.dti.unimi.it/~wpes. From mike at bondolo.org Sat Aug 17 07:32:03 2002 From: mike at bondolo.org (Mike Duigou) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Does JXTA work as a "Universal Toolkit"? References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> <3D4A127C.6080103@yahoo.com> <3D4A3636.6060703@neurogrid.com> <3D4A4E39.6090604@yahoo.com> <20020803153246.GC5414@sporty.spiceworld> Message-ID: <3D501D77.6040509@bondolo.org> Oskar Sandberg wrote: >On Fri, Aug 02, 2002 at 02:17:45AM -0700, Brad Neuberg wrote: > > >>Here's another question: JXTA purports to be a "Universal Toolkit" of >>sorts, solving many of the problems that a P2P developer would have to >>go through. From a distance it looks like this to me: it has some nice >>abstractions, such as Pipes, Peers, Peer Groups, and Peer Services, as >>well as some core services, such as a Router Service, a Rendezvous >>Service, a Membership Service, etc. As I get closer to it, though, my >>brain starts to hurt. I start thinking about the P2P design patterns >>for distributed storage that I know about, such as distributed >>hashtables, emergent networks, etc., and they don't seem to fit terribly >>well into the actualities of JXTA; there seems to be some cognitive >>dissonance, at least for me. 
From far away I can easily fit P2P >>patterns into JXTA, but up-close I find that things just don't fit. >>Have other people found this themselves >> I am indeed curious in hearing what is not working for you. > >Absolutely. As far as i have been able to tell JXTA gives nothing to >people who are working on distributed peer to peer architectures, as >opposed to peer to peer applications, and in some ways even attempts to >pull the rug out from under their feet by presenting the architecture as >a solved problem - which it clearly is not. > I can't agree at all. (maybe its that I work on JXTA) It almost sounds as if you are agruing against abstraction. As you indicate, JXTA provides some very useful abstractions for application developers, but those abstractions need implementations. That's where the people on this list are more likely to particpate. In what sense to you see JXTA as forcing a particular solution or implementation onto the presented abstractions? It's my feeling that there isn't much in JXTA that CAN'T BE CHANGED by providing alternate implementatations of the abstractions. If there is a problem, the JXTA community really does want to know about it (and we might be able to help solve it). >As one example of a peer to peer system I have nothing against JXTA, but >the assumptions on which it is based and the structures it uses means it >can neither be compatible nor indeed compete with a lot of the other >systems being developed. > Some of that is inevitable, some decisions are mutually exclusive and others are made in different places and at different times. JXTA does make an effort to mitigate this in the way that it uses meta-data and the few restrictions it does place upon that metadata. What assumptions do you see that are flawed or overly restrictive? >>It does surprise me how many >>_new_ projects there are that _don't_ use JXTA, such as Mnet, >>BitTorrent, The Circle, etc. I am interested in soliciting from folks >>why they chose not to use JXTA, such as whether it was performance >>issues, language issues, a raunchy API (my opinion), etc. >> >> > >I don't see how this can surprise you, it would surprise me greatly if >any of them were (for one, AFAIK, the only JXTA implementation is in >java, and the programs you mentioned are not.) > There is a client side implementation in C as well. Yes, its not as complete as the Java 2 implementation but like every other project we don't have infinite resources. It is, however, a lot closer to being useful than starting a new P2P architecture implementation from scratch. Porting is expensive work. To be honest, my feeling is that if the JXTA people spent all their time on porting and no time on the technology that there would be even more to complain about with JXTA.... It is a disappointment to everyone who works on JXTA that more of these projects don't use JXTA as a basis for their development. We certainly try to do everything we can to be helpful and accomodating. There are a number of of viable projects which are using JXTA, ones with actual happy end-users and satisfied developers. More would, of course, be better. >There is no reason to see JXTA as the be all end all of peer to peer >architectures, rather it is just another in the long line of them - just >one that seemingly still lacks real world applications (not that there is >anyting wrong with either of those things.) > JXTA wasn't intended to be "the last P2P architecture". In particular its more about being a framework than a particular policy or implementation. 
The intention is to serve application writers by creating some stable ground on which they can write P2P applications without being closely bound to the underlying implementation. If successful, an application for example shouldn't care if service discovery is done via multicast, gnutella style flooding, DHT, LDAP, Jini, WSDL, UDDI, CORBA, JMS or whatever.... Eager to continue the discussion, Mike From sam at neurogrid.com Sat Aug 17 09:22:02 2002 From: sam at neurogrid.com (Sam Joseph) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Does JXTA work as a "Universal Toolkit"? References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> <3D4A127C.6080103@yahoo.com> <3D4A3636.6060703@neurogrid.com> <3D4A4E39.6090604@yahoo.com> <20020803153246.GC5414@sporty.spiceworld> <3D501D77.6040509@bondolo.org> Message-ID: <3D5E79EF.9050506@neurogrid.com> Mike Duigou wrote: > It is a disappointment to everyone who works on JXTA that more of > these projects don't use JXTA as a basis for their development. We > certainly try to do everything we can to be helpful and accomodating. > There are a number of of viable projects which are using JXTA, ones > with actual happy end-users and satisfied developers. More would, of > course, be better. It may just be me, but I think that the reason why many projects don't use JXTA, is because of the way in which JXTA was released by Sun. I would be happy to be corrected but I think part of the problem is that Sun, as a large multinational corporation, released JXTA at a time when a lot of anarchic projects were getting off the ground. I think a lot of people felt that JXTA was Sun's attempt to muscle in without much thought about who they were muscling in on, or rather they were coming in with the main aim being increasing Sun's profits (they are after all a company). Now of course none of this is relevant to whether JXTA is a good platform now, or whether it is supporting good projects now, or even whether JXTA should or shouldn't be used by P2P developers, but I thought it might be interesting to explain some of my thinking. I write p2p applications in Java and so it would seem like JXTA would be an ideal thing for me to use. I was at the first p2p conference when Bill Joy and others starting talking about JXTA. All that really came across was "hey p2p is cool, we want to be doing p2p too". The abstractions such as pipes and advertisments that we then saw as the conceptual building blocks of JXTA, didn't seem to have much to do with the concepts that the various open source p2p projects were struggling with. In a broader sense it may be the "wary of Sun's motives" anarchic p2p-hackers who lose out because they fail to capitalise on the resources that Sun is putting in. In my experience people organising p2p projects (and probably open source projects in general) are very busy. They are constantly trying to fix problems, learn about new technologies, digest and integrate ideas. I know that my feeling about JXTA has been and continues to be, that I would much rather spend my time investigating projects that seem to have something more directly novel to offer, and perhaps more importantly are not being funded by a multinational like Sun. MNet, Plesh, BitTorrent, Tristero, OCN. 
These projects are interesting because they don't derive from a centralized mind-set, and however many community processes and decentralised network systems Sun produces, they are still fundamentally a big centralised monolithic corporation whose sole purpose is to make money. Maybe this is just me, but this is the kind of thinking that makes me tend to focus on projects other than JXTA. Maybe JXTA has got some wonderful things to offer, and maybe I'll miss out - I guess only time will tell. CHEERS> SAM From scottp at conexus-inc.com Mon Aug 19 18:41:01 2002 From: scottp at conexus-inc.com (Scott Persinger) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Re: Does JXTA work as a "Universal Toolkit"? (Scott Persinger) Message-ID: <002601c24796$47b70b40$4d01a8c0@scottmobile> I think it's interesting to look at why Sun built and then released JXTA. If you think about it, p2p applications are really about leveraging the power of the desktop. The problem is, Sun doesn't make any money on the desktop - they make all their money on servers. So if you look at something like a collaboration application, to the extent that it lessens the need for a server (which would quite likely be running Linux or Solaris), then it's eating into Sun's revenue. The real problem is that p2p is much more strategic for Microsoft and Linux. So why would Sun build JXTA? Well, one answer could be as a defense to maintain the Java developer base - don't want people migrating to .NET because they want to build p2p apps. Probably more likely Bill Joy made an early argument about the importance of non-PC devices on the network and how Java had to play in that space. You also have to ask why Sun released JXTA to a consortium. They haven't done that with any other technologies they thought were important (like web services, which notably is seen as an enterprise technology). I think it's clear that Sun decided p2p is not strategic for them, so they told the JXTA group to wrap it up and push it out. They figure it's still good to have the Java alternative, but they're not going to fund architectures that don't need a server! So I have to wonder what support there really is from Sun anymore. It seems like JXTA was definitely thrown together in a hurry. I guess the question is how effective anyone thinks the consortium will be in tightening the platform and making it really usable. --Scott -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20020819/24f6b618/attachment.htm From Bernard.Traversat at Sun.Com Wed Aug 21 09:26:02 2002 From: Bernard.Traversat at Sun.Com (Bernard Traversat) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Does JXTA work as a "Universal Toolkit"? References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> <3D4A127C.6080103@yahoo.com> <3D4A3636.6060703@neurogrid.com> <3D4A4E39.6090604@yahoo.com> <20020803153246.GC5414@sporty.spiceworld> <3D501D77.6040509@bondolo.org> <3D5E79EF.9050506@neurogrid.com> Message-ID: <3D63BF0B.7030609@Sun.Com> Sam Joseph wrote: > > > I write p2p applications in Java and so it would seem like JXTA would > be an ideal thing for me to use. I was at the first p2p conference > when Bill Joy and others starting talking about JXTA. All that really > came across was "hey p2p is cool, we want to be doing p2p too".
The > abstractions such as pipes and advertisments that we then saw as the > conceptual building blocks of JXTA, didn't seem to have much to do > with the concepts that the various open source p2p projects were > struggling with. Sam, JXTA's goal is to create an open generic virtual network overlay abstracting the underlying physical network topology to provide a uniform peer addressing, messaging and discovery abstraction to P2P application developers. Most P2P systems have to implement, in some way or another, similar abstractions: peer discovery, routing, NAT and firewall piercing, etc. Many P2P developers should not have to worry about the low-level networking layer, so they can focus on building new kinds of decentralized collaborative and content sharing applications. Before TCP/IP, people had to implement their own network transport to build interesting applications (ftp, telnet). In a similar way, JXTA is trying to provide a minimal open P2P network infrastructure to accelerate the development of P2P applications. Now, JXTA is a work in progress and a lot of things remain to be done. This gives a chance for everybody to participate and contribute. > > In a broader sense it may be the "wary of Sun's motives" anarchic > p2p-hackers who lose out because they fail to capitalise on the > resources that Sun is putting in. I rather see it the other way :-) JXTA is losing by not being able to capitalize on the wealth of knowledge and resources available in the community. This is why it is essential for the JXTA community to be as inclusive as possible and reach the most people. The more P2P experts we have helping us refine and improve JXTA, the better off everybody will be in the end. This will give everybody the chance to build the most innovative P2P applications. B. > > > In my experience people organising p2p projects (and probably open > source projects in general) are very busy. They are constantly trying > to fix problems, learn about new technologies, digest and integrate > ideas. I know that my feeling about JXTA has been and continues to be, > that I would much rather spend my time investigating projects that > seem to have something more directly novel to offer, and perhaps more > importantly are not being funded by a multinational like Sun. MNet, > Plesh, BitTorrent, Tristero, OCN. These projects are interesting > because they don't derive from a centralized mind-set, and however > many community processes, and decentralised network systems Sun > produces, they are still fundamentally a big centralised monolithic > corporation who's sole purpose is to make money. > > Maybe this is just me, but this is the kind of thinking that makes me > tend to focus on projects other than JXTA. Maybe JXTA has got some > wonderful things to offer, and maybe I'll miss out - I guess only time > will tell. > > CHEERS> SAM > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers From adam at cypherspace.org Wed Aug 21 10:30:01 2002 From: adam at cypherspace.org (Adam Back) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Does JXTA work as a "Universal Toolkit"?
In-Reply-To: <3D63BF0B.7030609@Sun.Com>; from Bernard.Traversat@Sun.Com on Wed, Aug 21, 2002 at 09:25:47AM -0700 References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> <3D4A127C.6080103@yahoo.com> <3D4A3636.6060703@neurogrid.com> <3D4A4E39.6090604@yahoo.com> <20020803153246.GC5414@sporty.spiceworld> <3D501D77.6040509@bondolo.org> <3D5E79EF.9050506@neurogrid.com> <3D63BF0B.7030609@Sun.Com> Message-ID: <20020821182921.A1055787@exeter.ac.uk> I think the problem is that the technology that goes below the JXTA abstractions is also where the bulk of the P2P research and experimentation is taking place. That layer is where the interesting problems lie, e.g. distributed search algorithms, content location, peer discovery, and the different properties one wants those to exhibit: efficiency, scalability, load balancing, publisher and reader privacy. The problem I think is that as this layer is still in a state of flux, with many remaining unanswered questions, it doesn't help a lot to build abstractions yet. If we don't know what works best at this layer, it seems difficult to build abstractions. A given set of abstractions may wall the experimenter off from implementing some techniques inside the abstractions. NAT traversal and dealing with the firewall problem are well-defined enough to standardize and build abstractions for. Perhaps the goal is more to facilitate rapid prototyping for people to do their experimenting with the JXTA framework, and to extend the framework where existing abstractions do not fit some new model. This might for example allow someone to focus on one area they are interested in (e.g. distributed search) without having to build stub parts that they are not currently investigating. Adam On Wed, Aug 21, 2002 at 09:25:47AM -0700, Bernard Traversat wrote: > JXTA goal is to create an open generic virtual network overlay > abstracting the underlying physical network topology to provide a > uniform peer addressing, messaging and discovery abstraction to P2P > application developers. Most P2P systems have to implement in some > ways or another similar abstractions peer discovery, routing, NAT > and firewall piercings, etc. Many P2P developers should not have to > worry about the low-level networking layer, so they can focus on > building new kinds of decentralized collaborative and content > sharing applications. Before TCP/IP, people had to implement their > own network transport to build interesting applications (ftp, > telnet). In the similar way, JXTA is trying to provide a minimal > open P2P network infrastructure to accelerate the development of P2P > applications. From Bernard.Traversat at Sun.Com Wed Aug 21 11:41:01 2002 From: Bernard.Traversat at Sun.Com (Bernard Traversat) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Re: Does JXTA work as a "Universal Toolkit"? (Scott Persinger) References: <002601c24796$47b70b40$4d01a8c0@scottmobile> Message-ID: <3D63DE9B.4020708@Sun.Com> Scott Persinger wrote: > I think it's interesting to look at why Sun built and then released > JXTA. If you think about it, p2p > applications are really about leveraging the power of the desktop. The > problem is, Sun doesn't > make any money on the desktop - they make all their money on servers. > > So if you look at something like a collaboration application, to the > extent that it lessens the need > for a server (which would quite likely be running Linux or Solaris), > then its eating into Sun's revenue.
> The real problem is that p2p is much more strategic for Microsoft and > Linux. > > So why would Sun build JXTA? > Well, one answer could be as a defense to maintain the > Java developer base - don't want people migrating to .NET because they > want to build p2p > apps. Probably more likely Bill Joy made an early argument about the > importance of > non-PC devices on the network and how Java had to play in that space. > > You also have to ask why Sun released JXTA to a consortium? They haven't > done that with any other technologies they thought were important > (like web services, which > notably is seen as an enterprise technology). I think it's clear that > Sun decided p2p is > not strategic for them, so they told the JXTA group to wrap it up and > push it out. They > figure it's still good to have the Java alternative, but they're not > going to fund architectures > that don't need a server! > > So I have to wonder what support there really is from Sun anymore. Scott, Sun is and remains an active participant in the JXTA community. JXTA was released as an open source project under an Apache-like license simply because we believed the JXTA core protocols specification and implementations needed to be open and free, and to ensure the open source community could be involved in driving and refining the protocol definitions. There is a C implementation of the protocols (jxta-c.jxta.org) available. B. > It seems like JXTA was > definitely thrown together in a hurry. I guess the question is how > effective anyone thinks the > consortium will be in tightening the platform and making it really usable. > > --Scott > From bradneuberg at yahoo.com Wed Aug 21 12:09:01 2002 From: bradneuberg at yahoo.com (Brad Neuberg) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Does JXTA work as a "Universal Toolkit"? In-Reply-To: <20020821182921.A1055787@exeter.ac.uk> Message-ID: <20020821190825.93844.qmail@web14103.mail.yahoo.com> --- Adam Back wrote: > I think the problem is that the technology that goes > below the JXTA > abstractions is also where the bulk of the P2P > research and > experimentation is taking place. > > That layer is where the interesting problems lie. > eg. distributed > search algorithms, content location, peer discovery, > and the different > properties one wants those to exhibit: efficiency, > scalability, load > balancing, publisher and reader privacy. > > The problem I think is that as this layer is still > in a state of flux, > with many remaining unanswered questions, it doesn't > help a lot to > build abstractions yet. If we don't know what works > best at this > layer, it seems difficult to build abstractions. A > given set of > abstractions may wall the experimenter off from > implementing some > techniques inside the abstractions. > Adam, I completely agree. This is one of the issues I have had with JXTA. Unfortunately, JXTA does not make it easy to replace its core algorithms with custom ones. JXTA _does_ make this possible, but it is not a simple thing. You have to have a deep understanding of the entire system before you can realistically replace things. For example, there is a service known as the Rendezvous Service, which is a generic query/response architecture. The Rendezvous Service is the basis for distributed search, resource naming, and resource finding in JXTA currently (everything is a stored advertisement), and uses a flooding model where other Rendezvous Nodes are queried until the resource is found.
One could replace the Rendezvous Service with one that operates similar to Kademlia, for example, but it is not trivial. If JXTA made it much easier to replace core protocols then I would be more excited about it. > The NAT and dealing with the firewall problem are > well-defined enough > to standardize and build abstractions for. > > > Perhaps the goal is more to facilitate rapid > prototyping for people to > do their experimenting with the JXTA framework, and > to extend the > framework where existing abstractions do not fit > some new model. > > This might for example allow someone to focus on one > area they are > interested in (eg distributed search) without having > to build stub > parts that they are not currently investiging. > > Adam > > On Wed, Aug 21, 2002 at 09:25:47AM -0700, Bernard > Traversat wrote: > > JXTA goal is to create an open generic virtual > network overlay > > abstracting the underlying physical network > topology to provide a > > uniform peer addressing, messaging and discovery > abstraction to P2P > > application developers. Most P2P systems have to > implement in some > > ways or another similar abstractions peer > discovery, routing, NAT > > and firewall piercings, etc. Many P2P developers > should not have to > > worry about the low-level networking layer, so > they can focus on > > building new kinds of decentralized collaborative > and content > > sharing applications. Before TCP/IP, people had to > implement their > > own network transport to build interesting > applications (ftp, > > telnet). In the similar way, JXTA is trying to > provide a minimal > > open P2P network infrastructure to accelerate the > development of P2P > > applications. > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers From bradneuberg at yahoo.com Wed Aug 21 12:13:01 2002 From: bradneuberg at yahoo.com (Brad Neuberg) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Does JXTA work as a "Universal Toolkit"? In-Reply-To: <3D63BF0B.7030609@Sun.Com> Message-ID: <20020821191202.44501.qmail@web14102.mail.yahoo.com> --- Bernard Traversat wrote: > Sam Joseph wrote: > > > > > > > I write p2p applications in Java and so it would > seem like JXTA would > > be an ideal thing for me to use. I was at the > first p2p conference > > when Bill Joy and others starting talking about > JXTA. All that really > > came across was "hey p2p is cool, we want to be > doing p2p too". The > > abstractions such as pipes and advertisments that > we then saw as the > > conceptual building blocks of JXTA, didn't seem to > have much to do > > with the concepts that the various open source p2p > projects were > > struggling with. > > Sam, > > JXTA goal is to create an open generic virtual > network overlay > abstracting the underlying physical network topology > to provide a > uniform peer addressing, > messaging and discovery abstraction to P2P > application developers. The problem is most of the developers in the P2P community don't _want_ the underlying physical topology to be hidden yet; that is what they are experimenting with! We aren't quite at the level of where we can take the P2P layer for granted, except for dealing with NATed and Firewalled peers, because the problem space is not well-defined yet or developers have different needs. JXTA needs to make it much easier to do plumbing-level experimentation rather than just application-level experimentation, and it does not currently do this. 
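As a way of picturing what Brad is asking for, here is a rough sketch of a generic query/response resolver whose propagation policy is a plug-in, so that a flooding strategy could be exchanged for a Kademlia-style one. The interfaces and names are hypothetical illustrations of the idea, not JXTA's actual Rendezvous Service API.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Decides where an unanswered query should be forwarded next.
    interface Propagation {
        List<String> nextHops(String queryKey, List<String> knownPeers);
    }

    // Flooding-style behaviour: forward the query to every peer we know about.
    class FloodingPropagation implements Propagation {
        public List<String> nextHops(String queryKey, List<String> knownPeers) {
            return knownPeers;
        }
    }

    // A structured alternative in the spirit of Kademlia: forward only to the k
    // peers whose identifiers are closest (by XOR distance) to the query key.
    class KademliaLikePropagation implements Propagation {
        private static final int K = 3;

        public List<String> nextHops(String queryKey, List<String> knownPeers) {
            List<String> peers = new ArrayList<>(knownPeers);
            peers.sort(Comparator.comparingLong(p -> xorDistance(p, queryKey)));
            return peers.subList(0, Math.min(K, peers.size()));
        }

        private long xorDistance(String peerId, String key) {
            // Toy distance over hash codes; a real system would use node/key IDs.
            return (peerId.hashCode() & 0xffffffffL) ^ (key.hashCode() & 0xffffffffL);
        }
    }

The point of the sketch is only that the query/response contract stays the same while the propagation policy changes; how hard that swap is in practice is exactly Brad's complaint.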
> Most P2P > systems have to implement in some ways or another > similar abstractions > peer discovery, routing, NAT and firewall piercings, > etc. Many P2P > developers > should not have to worry about the low-level > networking layer, so they > can focus on building new kinds of decentralized > collaborative > and content sharing applications. Before TCP/IP, > people had to > implement their own network transport to build > interesting applications > (ftp, telnet). In the similar way, JXTA is trying to > provide a minimal > open P2P network infrastructure to accelerate the > development of P2P > applications. Now, > JXTA is a work in progress and lot of things remain > to be done. This gives > a chance for everybody to participate and > contribute. > > > > > In a broader sense it may be the "wary of Sun's > motives" anarchic > > p2p-hackers who lose out because they fail to > capitalise on the > > resources that Sun is putting in. > > I rather see it the other way :-) JXTA is loosing in > not > beeing able to capitalize on the wealth of knowledge > and resource > available in the community. This is why it is > essential for the JXTA > community to be as inclusive as possible and reach > the most people. > More P2P experts we have helping us refine and > improve JXTA > better everybody will be at the end. This will give > everybody > the chance to build the most innovative P2P > applications. > > B. > > > > > > > > In my experience people organising p2p projects > (and probably open > > source projects in general) are very busy. They > are constantly trying > > to fix problems, learn about new technologies, > digest and integrate > > ideas. I know that my feeling about JXTA has been > and continues to be, > > that I would much rather spend my time > investigating projects that > > seem to have something more directly novel to > offer, and perhaps more > > importantly are not being funded by a > multinational like Sun. MNet, > > Plesh, BitTorrent, Tristero, OCN. These projects > are interesting > > because they don't derive from a centralized > mind-set, and however > > many community processes, and decentralised > network systems Sun > > produces, they are still fundamentally a big > centralised monolithic > > corporation who's sole purpose is to make money. > > > > Maybe this is just me, but this is the kind of > thinking that makes me > > tend to focus on projects other than JXTA. Maybe > JXTA has got some > > wonderful things to offer, and maybe I'll miss out > - I guess only time > > will tell. > > > > CHEERS> SAM > > > > _______________________________________________ > > p2p-hackers mailing list > > p2p-hackers@zgp.org > > http://zgp.org/mailman/listinfo/p2p-hackers > > > > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers From Bernard.Traversat at Sun.Com Wed Aug 21 12:47:01 2002 From: Bernard.Traversat at Sun.Com (Bernard Traversat) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Does JXTA work as a "Universal Toolkit"? 
References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <20020801223407.P24302@research.att.com> <3D4A127C.6080103@yahoo.com> <3D4A3636.6060703@neurogrid.com> <3D4A4E39.6090604@yahoo.com> <20020803153246.GC5414@sporty.spiceworld> <3D501D77.6040509@bondolo.org> <3D5E79EF.9050506@neurogrid.com> <3D63BF0B.7030609@Sun.Com> <20020821182921.A1055787@exeter.ac.uk> Message-ID: <3D63EE2A.2030501@Sun.Com> Adam Back wrote: >I think the problem is that the technology that goes below the JXTA >abstractions is also where the bulk of the P2P research and >experimentation is taking place. > >That layer is where the interesting problems lie. eg. distributed >search algorithms, content location, peer discovery, and the different >properties one wants those to exhibit: efficiency, scalability, load >balancing, publisher and reader privacy. > > Adam, Yes, fully agree. JXTA's intent is not to close or restrict development in the core platform area. On the contrary, JXTA provides a pluggable policy framework to enable core developers to plug their own core policies into the platform to replace default ones as they want. One of the key abstractions JXTA provides is the notion of peergroups. Peergroups allow you to define your own set of core policies to be used within a specific peergroup domain. You can define as many peergroups as you want using your own or default core policies. When a peer joins a peergroup, the default core policies are replaced by the peergroup-defined ones. So, when your peer is in one peergroup it can use a specific search algorithm, in another peergroup it can use a different search policy. >The problem I think is that as this layer is still in a state of flux, >with many remaining unanswered questions, it doesn't help a lot to >build abstractions yet. If we don't know what works best at this >layer, it seems difficult to build abstractions. A given set of >abstractions may wall the experimenter off from implementing some >techniques inside the abstractions. > >The NAT and dealing with the firewall problem are well-defined enough >to standardize and build abstractions for. > > >Perhaps the goal is more to facilitate rapid prototyping for people to >do their experimenting with the JXTA framework, and to extend the >framework where existing abstractions do not fit some new model. > Absolutely!! This is why it is important for JXTA to make sure that people can plug their own core policies as easily as possible. This is something we are working on and need help from people who are developing core network P2P services. This will also enable application developers to pick and choose the best core policies for their specific applications. > >This might for example allow someone to focus on one area they are >interested in (eg distributed search) without having to build stub >parts that they are not currently investiging. > > You got it :-) B. >Adam > >On Wed, Aug 21, 2002 at 09:25:47AM -0700, Bernard Traversat wrote: > > >>JXTA goal is to create an open generic virtual network overlay >>abstracting the underlying physical network topology to provide a >>uniform peer addressing, messaging and discovery abstraction to P2P >>application developers. Most P2P systems have to implement in some >>ways or another similar abstractions peer discovery, routing, NAT >>and firewall piercings, etc. Many P2P developers should not have to >>worry about the low-level networking layer, so they can focus on >>building new kinds of decentralized collaborative and content >>sharing applications.
Before TCP/IP, people had to implement their >>own network transport to build interesting applications (ftp, >>telnet). In the similar way, JXTA is trying to provide a minimal >>open P2P network infrastructure to accelerate the development of P2P >>applications. >> >> >_______________________________________________ >p2p-hackers mailing list >p2p-hackers@zgp.org >http://zgp.org/mailman/listinfo/p2p-hackers > > From Bernard.Traversat at Sun.Com Wed Aug 21 13:22:02 2002 From: Bernard.Traversat at Sun.Com (Bernard Traversat) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Does JXTA work as a "Universal Toolkit"? References: <20020821190825.93844.qmail@web14103.mail.yahoo.com> Message-ID: <3D63F663.1010808@Sun.Com> Brad Neuberg wrote: >Adam, I completely agree. This is one of the issues I >have had with JXTA. Unfortunately, JXTA does not make >it easy to replace it's core algorithms with custom >ones. JXTA _does_ make this possible, but it is not a >simple thing. You have to have a deep understanding >of the entire system before you can realistically >replace things. > > For example, there is a service known >as the Rendezvous Service, which is a generic >query/response architecture. The Rendezvous Service >is the basis for distributed search, resource naming, >and resource finding in JXTA currently (everything is >a stored advertisement), > Brad, Agree. This is an area we are currently addressing to simplify the plugin process. You may want to check the JXTA platform's current work-in-progress page related to the Rendezvous service and query propagation at: http://platform.jxta.org/java/currentwork.html We are looking to have some of these enhancements available in an upcoming platform release. Thanks for looking into it. B. >and uses a flooding model >where other Rendezvous Nodes are queried until the >resource is found. One could replace the Rendezvous >Service with one that operates similar to Kademlia, >for example, but it is not trivial. If JXTA made it >much easier to replace core protocols then I would be >more excited about it. > Agree. Thanks for the clarifications. B. > > > >>The NAT and dealing with the firewall problem are >>well-defined enough >>to standardize and build abstractions for. >> >> >>Perhaps the goal is more to facilitate rapid >>prototyping for people to >>do their experimenting with the JXTA framework, and >>to extend the >>framework where existing abstractions do not fit >>some new model. >> >>This might for example allow someone to focus on one >>area they are >>interested in (eg distributed search) without having >>to build stub >>parts that they are not currently investiging. >> >>Adam >> >>On Wed, Aug 21, 2002 at 09:25:47AM -0700, Bernard >>Traversat wrote: >> >> >>>JXTA goal is to create an open generic virtual network overlay abstracting the underlying physical network topology to provide a uniform peer addressing, messaging and discovery abstraction to P2P application developers. Most P2P systems have to implement in some ways or another similar abstractions peer discovery, routing, NAT and firewall piercings, etc. Many P2P developers should not have to worry about the low-level networking layer, so they can focus on building new kinds of decentralized collaborative and content sharing applications.
>>>Before TCP/IP, people had to implement their own network transport to build interesting applications (ftp, telnet). In the similar way, JXTA is trying to provide a minimal open P2P network infrastructure to accelerate the development of P2P applications. >> >>_______________________________________________ >>p2p-hackers mailing list >>p2p-hackers@zgp.org >>http://zgp.org/mailman/listinfo/p2p-hackers >> >> > >_______________________________________________ >p2p-hackers mailing list >p2p-hackers@zgp.org >http://zgp.org/mailman/listinfo/p2p-hackers > > From rrrw at neofonie.de Thu Aug 22 01:43:01 2002 From: rrrw at neofonie.de (Ronald Wertlen) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Does JXTA work as a "Universal Toolkit"? In-Reply-To: <20020821202202.2474.82167.Mailman@capsicum.zgp.org> References: <20020821202202.2474.82167.Mailman@capsicum.zgp.org> Message-ID: <14790000.1030005773@[172.27.40.119]> Hi all, I think you have said just about everything that needs to be said, from politics to technology, however I seem to recall that one of the main aspects of JXTA was standardising P2P so that different P2P systems could interoperate and perhaps communicate at a basic level with each other. What I am saying is you can throw away the entire JXTA Implementation and DIY your own routing, discovery and comms (I am thinking of gossiping mechanisms, bloom filters, DHTs, PASTRY, NeuroGrid-style learning and so on) but as long as you form your messages using JXTA XML you can interface with other p2p groups much more easily. From my point of view it is possible to do this. One would then expose certain peers in the network which advertise services which leverage the power of the entire group and make it available to other groups in a standard way. This may seem like just pushing the problem of agreeing on semantics further back... but I don't think so. I think along the way you have gained something in (at least) agreeing on the protocol. The question is just: how much have you lost? And I think the answer will be chaotically gleaned in good time. Ron From BradNeuberg at yahoo.com Thu Aug 22 01:52:01 2002 From: BradNeuberg at yahoo.com (Brad Neuberg) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Does JXTA work as a "Universal Toolkit"? References: <20020821202202.2474.82167.Mailman@capsicum.zgp.org> <14790000.1030005773@[172.27.40.119]> Message-ID: <3D64A6C3.9010802@yahoo.com> The interoperability of JXTA is really nice. Every peer automatically joins the default NetPeerGroup; they then look for the specific Peer Group that implements their particular P2P service, such as a NapsterGroup or a DistributedStorageGroup, for example. Individual Peer Groups can implement their own distributed services, and can also override the default routing, rendezvous, searching, etc. algorithms with their own within that particular Peer Group. This means that Peer Groups can internally implement their own protocols, even getting away from the JXTA XML notation, but everyone can still potentially interoperate with higher-level services in the NetPeerGroup or in other peer groups. Even better, JXTA includes ways to specify where to dynamically find the code for a new service that you may encounter in a Peer Group, so the door is potentially open for JXTA applications that can organically "learn" new P2P services as they hop between Peer Groups.
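Brad's description of peer groups carrying their own service implementations can be pictured roughly like this. The advertisement format and interfaces below are hypothetical stand-ins used only to illustrate the idea; they are not the actual JXTA classes.

    // Hypothetical sketch: a group advertisement names the service code the
    // group uses, and a joining peer loads and starts that service.
    interface GroupService {
        void start();
    }

    class GroupAdvertisement {
        final String groupName;
        final String serviceClassName;   // stand-in for what a group's XML advertisement would carry

        GroupAdvertisement(String groupName, String serviceClassName) {
            this.groupName = groupName;
            this.serviceClassName = serviceClassName;
        }
    }

    class Peer {
        void join(GroupAdvertisement ad) throws Exception {
            // In a real system the class would come from a downloaded, verified
            // module rather than the local classpath; this only shows the shape.
            Class<?> cls = Class.forName(ad.serviceClassName);
            GroupService service = (GroupService) cls.getDeclaredConstructor().newInstance();
            System.out.println("Joined " + ad.groupName + ", starting " + ad.serviceClassName);
            service.start();
        }
    }

A group could then ship its own search or routing policy as such a service while still advertising itself to everyone else in a common format, which is the interoperability point Ronald raises above.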
Now all the JXTA team needs to do is get their C JXTA library to the same level as the Java library; with a full C JXTA implementation, other languages will be very easy since it is easy to bind to C-based APIs from most languages. Ronald Wertlen wrote: > > Hi all, > > I think you have have said just about everything that needs to be said, > from politics to technology, however I seem to recall that one of the > main aspects of JXTA was standardising P2P so that different P2P systems > could interoperate and perhaps communicate at a basic level with each > other. > > What I am saying is you can throw away the entire JXTA Implementation > and DIY your own routing, discovery and comms (I am thinking of > gossiping mechanisms, bloom filters, DHTs, PASTRY, NeuroGrid-style > learning and so on) but as long as you form your messages using JXTA XML > you can interface with other p2p groups much easier. From my point of > view it is possible to do this. > > One would then expose certain peers in the network which advertise > services which leverage the power of the entire group and make it > available to other groups in a standard way. > > This may seem like just pushing the problem of agreeing on semantics > further back... but I don't think so. I think along the way you have > gained something in (at least) agreeing on the protocol. The question is > just: how much have you lost? And I think the answer will be > chaotically gleaned in good time. > > Ron > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > From Bernard.Traversat at Sun.Com Thu Aug 22 18:05:01 2002 From: Bernard.Traversat at Sun.Com (Bernard Traversat) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Does JXTA work as a "Universal Toolkit"? References: <20020821202202.2474.82167.Mailman@capsicum.zgp.org> <14790000.1030005773@[172.27.40.119]> <3D64A6C3.9010802@yahoo.com> Message-ID: <3D658A0A.2040902@Sun.Com> Brad Neuberg wrote: > The interoperability of JXTA is really nice. Every peer automatically > joins the default NetPeerGroup; they then look for the specific Peer > Group that implements their particular P2P service, such as a > NapsterGroup or a DistributedStorageGroup, for example. Individual > Peer Groups can implement their own distributed services, and can also > override the default routing, rendezvous, searching, etc. algorithms > with their own within that particular Peer Group. This means that Peer > Groups can internally implement their own protocols, even getting away > from the JXTA XML notation, but everyone can still potentially > interoperate with higher-level services in the NetPeerGroup or in > other peer groups. Even better, JXTA includes ways to specify where to > dynamically find the code for a new service that you may encounter in > a Peer Group, so the door is potentially open for JXTA applications > that can organically "learn" new P2P services as it hops between Peer > Groups. You have it right. This core construct enables JXTA to share not just static data, but code, service, resource, etc. The platform is using this mechanism to instantiate itself :-). > Now all the JXTA team needs to do is get their C JXTA library to the > same level as the Java library; with a full C JXTA implementation, > other languages will be very easy since it is easy to bind to C-based > APIs from most languages. We made good progress on the C implementation with the help of community members. 
We have a fully compliant edge-peer C implementation of the JXTA protocols. The C implementation is available at jxta-c.jxta.org. The implementation uses the Apache Portable Runtime (APR) environment. So, it's available on Linux, Windows, etc. B. > > Ronald Wertlen wrote: > >> >> Hi all, >> >> I think you have have said just about everything that needs to be >> said, from politics to technology, however I seem to recall that one >> of the main aspects of JXTA was standardising P2P so that different >> P2P systems could interoperate and perhaps communicate at a basic >> level with each other. >> >> What I am saying is you can throw away the entire JXTA Implementation >> and DIY your own routing, discovery and comms (I am thinking of >> gossiping mechanisms, bloom filters, DHTs, PASTRY, NeuroGrid-style >> learning and so on) but as long as you form your messages using JXTA >> XML you can interface with other p2p groups much easier. From my >> point of view it is possible to do this. >> >> One would then expose certain peers in the network which advertise >> services which leverage the power of the entire group and make it >> available to other groups in a standard way. >> >> This may seem like just pushing the problem of agreeing on semantics >> further back... but I don't think so. I think along the way you have >> gained something in (at least) agreeing on the protocol. The question >> is just: how much have you lost? And I think the answer will be >> chaotically gleaned in good time. >> >> Ron >> _______________________________________________ >> p2p-hackers mailing list >> p2p-hackers@zgp.org >> http://zgp.org/mailman/listinfo/p2p-hackers >> > > > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers From bert at akamail.com Sun Aug 25 09:24:01 2002 From: bert at akamail.com (Bert) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] HTTPS over Port 80 References: <20020821202202.2474.82167.Mailman@capsicum.zgp.org> <14790000.1030005773@[172.27.40.119]> <3D64A6C3.9010802@yahoo.com> <3D658A0A.2040902@Sun.Com> Message-ID: <3D6905D4.3040403@akamail.com> OK, so I know this is an unusual thing to do, but I'm wondering if there is anything in any protocol specification that states you should NOT use the HTTPS protocol over port 80. To minimize firewall issues, I've implemented my app to support both HTTP and HTTPS over the same port (80) instead of requiring the opening of port 443 in addition to port 80. For the most part this works really well, e.g. try: https://replicator-userv.userv.web.cmu.edu:80/ and then http://replicator-userv.userv.web.cmu.edu/ ...but there are some really irritating bugs in both Mozilla and Internet Explorer that keep this from working smoothly. The most egregious problem is with IE 5.5/6.0 which, when routing through a web proxy, requests the proxy connect to port "0" instead of port "80" when presented with an HTTPS link with a port 80 spec (in other words it fails completely). Mozilla has a nasty problem where it rewrites the URL without the :80 after loading the page (causing all relative URLs in the page to fail, along with any "reload" attempts). So what's the deal? Is this just typical crappy browser implementation, or am I just wrong in thinking this should work at all? Thanks, Bert
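One way to implement the single-port arrangement Bert describes is to sniff the first byte of each accepted connection before handing it to an HTTP or TLS handler: TLS/SSLv3 record types are 20-23 and SSLv2-style hellos set the high bit, while a plain HTTP request starts with an ASCII method letter (the same rule Bert spells out later in the thread). Below is a minimal Java sketch; handlePlainHttp and handleTls are placeholders for whatever the server does next, and a real TLS branch would still have to feed the bytes into an SSL engine.

    import java.io.PushbackInputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class PortSharingServer {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(80)) {
                while (true) {
                    Socket client = server.accept();
                    PushbackInputStream in =
                            new PushbackInputStream(client.getInputStream(), 1);
                    int first = in.read();
                    if (first == -1) { client.close(); continue; }
                    in.unread(first);                    // give the byte back to the real handler
                    boolean looksLikeTls =
                            (first >= 20 && first <= 23)  // TLS/SSLv3 record types
                            || first >= 127;              // SSLv2-style hello; plain text stays below 127
                    if (looksLikeTls) {
                        handleTls(client, in);            // placeholder
                    } else {
                        handlePlainHttp(client, in);      // placeholder
                    }
                }
            }
        }

        private static void handleTls(Socket s, PushbackInputStream in) { /* ... */ }
        private static void handlePlainHttp(Socket s, PushbackInputStream in) { /* ... */ }
    }

None of this helps with the client-side bugs Bert runs into, of course; the demultiplexing itself is the easy part.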
From miles at milessabin.com Sun Aug 25 15:23:01 2002 From: miles at milessabin.com (Miles Sabin) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] HTTPS over Port 80 In-Reply-To: <3D6905D4.3040403@akamail.com> References: <20020821202202.2474.82167.Mailman@capsicum.zgp.org> <3D658A0A.2040902@Sun.Com> <3D6905D4.3040403@akamail.com> Message-ID: <200208252322.23498.miles@milessabin.com> Bert wrote, > So what's the deal? Is this just typical crappy browser > implementation, or am I just wrong in thinking this should work at > all? It sounds like typical crappy browser implementation. If you've not seen it already, take a look at RFC 2817, "Upgrading to TLS Within HTTP/1.1", ftp://ftp.isi.edu/in-notes/rfc2817.txt although none of the mainstream browsers/servers implement it to the best of my knowledge. I've thought about doing what you've done before now, but never dug into any of the details. How much of the head of the inbound connection do you have to inspect on the server-side to distinguish between HTTP and HTTPS? Cheers, Miles From sam at neurogrid.com Sun Aug 25 22:27:01 2002 From: sam at neurogrid.com (Sam Joseph) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Does JXTA work as a "Universal Toolkit"? References: <20020821202202.2474.82167.Mailman@capsicum.zgp.org> <14790000.1030005773@[172.27.40.119]> Message-ID: <3D69BE17.1060407@neurogrid.com> Ronald Wertlen wrote: > I think you have have said just about everything that needs to be > said, from politics to technology, however I seem to recall that one > of the main aspects of JXTA was standardising P2P so that different > P2P systems could interoperate and perhaps communicate at a basic > level with each other. > > What I am saying is you can throw away the entire JXTA Implementation > and DIY your own routing, discovery and comms (I am thinking of > gossiping mechanisms, bloom filters, DHTs, PASTRY, NeuroGrid-style > learning and so on) but as long as you form your messages using JXTA > XML you can interface with other p2p groups much easier. From my point > of view it is possible to do this. Interoperability is key. But why use JXTA XML? Personally I am tempted to use Tristero XML-RPC which is designed specifically with interoperability in mind. http://tristero.sourceforge.net/ If it's a hassle to take out the JXTA core, why use JXTA XML and tempt developers into that trap at all? Naturally some of us have invested time in learning JXTA and thus have a head start. Others have invested time in Tristero. Perhaps there is no particular advantage in choosing one or the other, as long as we end up using the same thing in the end. Personally I'm going to put my money (or rather my open source development efforts) into Tristero. May the least frustrating system for developers emerge as the standard! CHEERS> SAM From Wolfgang.Mueller2 at uni-bayreuth.de Mon Aug 26 00:28:01 2002 From: Wolfgang.Mueller2 at uni-bayreuth.de (Wolfgang =?iso-8859-1?q?M=FCller?=) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] AntHill/JXTA? In-Reply-To: <3D4A58F8.7020807@neurogrid.com> References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <3D4A4F8B.4000302@yahoo.com> <3D4A58F8.7020807@neurogrid.com> Message-ID: <200208261126.35705.Wolfgang.Mueller2@uni-bayreuth.de> On Friday, 2 August 2002 12:03, Sam Joseph wrote: Hi Sam, Hi all, First I'd like to say that I found your http://www.neurogrid.net/Decentralized_Meta-Data_Strategies-neat.html really useful. Second, I'd like to ask where there is a JXTA-enabled version of AntHill.
It has been announced, but downloading the AntHill 1.0 sources and running find . -name "*.java" | xargs grep -i jxta in the AntHill source directory gives me absolutely nothing. In addition to that, what I understand about jxta and your discussions about jxta makes it sound really hard to adapt AntHill to jxta (JXTA gurus, please complain if I am wrong): The ants have control over the connections, so they can tell their nest (i.e. the peer they're visiting) to suppress one peer from their neighbour list, and to add another one. This in itself I find a security problem: If we assume that we let all kinds of ants visit our nest, anyone could write an ant that visits all nests, and when going back destroys all links of the nest (unless there is a mechanism against that which I have not seen yet. AntHill gurus, please correct me, if I am wrong.). It would be great if anybody could point me to where AntHill is heading, and if there are any efforts going on towards making the ants run in the wild. Cheers, Wolfgang From alexandra_rodrigues at lycos.com Mon Aug 26 13:51:02 2002 From: alexandra_rodrigues at lycos.com (alexandra rodrigues) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] AntHill/JXTA? Message-ID: Why don't u try in Astalavista? There's lots of information there. sweet kisses, alex On Mon, 26 Aug 2002 11:26:35 Wolfgang Müller wrote: >Am Freitag, 2. August 2002 12:03 schrieb Sam Joseph: > >Hi Sam, >Hi all, >First I'd like to say that I found your >http://www.neurogrid.net/Decentralized_Meta-Data_Strategies-neat.html really >useful. Second, I'd like to ask where there is a JXTA-enabled version of >AntHill. It has been announced, but downloading the AntHill 1.0 sources and >doing in the AntHill source directory > >find . -name "*.java" | xargs grep -i jxta > >gives me absolutely nothing. In addition to that, what I understand about jxta >and your discussions about jxta makes it sound really hard to adapt AntHill >to jxta (JXTA gurus, please complain if I am wrong): The ants have control >about the connections, so they can tell their nest (i.e. the peer they're >visiting) to suppress one peer from their neighbour list, and to add another >one. This in itself I find a security problem: If we assume that we let all >kinds of ants visit our nest, anyone could write an ant that visits all >netst, and when going back destroys all links of the nest (unless there is a >mechanism against that that I have not seen, yet. AntHill gurus, please >correct me, if I am wrong.). > >It would be great if anybody could point me to where AntHill is heading, and >if there are any efforts going on towards making the ants run in the wild. > >Cheers, >Wolfgang > > >_______________________________________________ >p2p-hackers mailing list >p2p-hackers@zgp.org >http://zgp.org/mailman/listinfo/p2p-hackers > ___________________________________________________ Communicate with others using Lycos Mail for FREE! http://mail.lycos.com From bert at akamail.com Mon Aug 26 21:39:01 2002 From: bert at akamail.com (Bert) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] HTTPS over Port 80 References: <20020821202202.2474.82167.Mailman@capsicum.zgp.org> <3D658A0A.2040902@Sun.Com> <3D6905D4.3040403@akamail.com> <200208252322.23498.miles@milessabin.com> Message-ID: <3D6B0371.3090407@akamail.com> Miles Sabin wrote: >It sounds like typical crappy browser implementation. > That's what I figured ....
> >If you've not seen it already, take a look at RFC 2817, "Upgrading to >TLS Within HTTP/1.1", > > ftp://ftp.isi.edu/in-notes/rfc2817.txt > >although none of the mainstream browsers/servers implement it to the >best of my knowledge. > Yup, and therein lies the problem. It's a shame there's no real competition in browsers anymore. IE's buggy implementation has effectively become the standard, not the actual standards docs. Surprisingly Netscape 4.X seems to be the most reliable for HTTPS over port 80 -- so far I've encountered no problems at all with it. > >I've thought about doing what you've done before now, but never dug into >any of the details. How much of the head of the inbound connection do >you have to inspect on the server-side to distinguish between HTTP and >HTTPS? > I am using an IBM-internal crypto/SSL library (not open source unfortunately) that I didn't implement myself. It distinguishes plain from SSL/TLS connections using only the first byte: for plain connections the first byte value must be less than 127 and must not be equal to 20, 21, 22, or 23. This is sufficient to discern HTTP from HTTPS. Bert From Bernard.Traversat at Sun.Com Wed Aug 28 09:47:01 2002 From: Bernard.Traversat at Sun.Com (Bernard Traversat) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] AntHill/JXTA? References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <3D4A4F8B.4000302@yahoo.com> <3D4A58F8.7020807@neurogrid.com> <200208261126.35705.Wolfgang.Mueller2@uni-bayreuth.de> Message-ID: <3D6CFE6D.7020405@Sun.Com> Wolfgang Müller wrote: >Am Freitag, 2. August 2002 12:03 schrieb Sam Joseph: > >Hi Sam, >Hi all, >First I'd like to say that I found your >http://www.neurogrid.net/Decentralized_Meta-Data_Strategies-neat.html really >useful. Second, I'd like to ask where there is a JXTA-enabled version of >AntHill. It has been announced, but downloading the AntHill 1.0 sources and >doing in the AntHill source directory > >find . -name "*.java" | xargs grep -i jxta > >gives me absolutely nothing. > > In addition to that, what I understand about jxta >and your discussions about jxta makes it sound really hard to adapt AntHill >to jxta (JXTA gurus, please complain if I am wrong): The ants have control >about the connections, so they can tell their nest (i.e. the peer they're >visiting) to suppress one peer from their neighbour list, and to add another >one. > Hi Wolfgang, The JXTA platform provides only mechanisms to open pipe connections and scope peer interactions. Your application is in charge of deciding which peers you want or are allowed to talk to. You can control the underlying routing so peers which do not have the right credentials are excluded from routing your messages or talking to you. The JXTA platform also allows you to register firewall filters to filter incoming traffic and protect your peers against other malicious peers. Cheers, B. >This in itself I find a security problem: If we assume that we let all >kinds of ants visit our nest, anyone could write an ant that visits all >netst, and when going back destroys all links of the nest (unless there is a >mechanism against that that I have not seen, yet. AntHill gurus, please >correct me, if I am wrong.). > >It would be great if anybody could point me to where AntHill is heading, and >if there are any efforts going on towards making the ants run in the wild.
> >Cheers, >Wolfgang > > >_______________________________________________ >p2p-hackers mailing list >p2p-hackers@zgp.org >http://zgp.org/mailman/listinfo/p2p-hackers > > From Wolfgang.Mueller2 at uni-bayreuth.de Wed Aug 28 23:51:01 2002 From: Wolfgang.Mueller2 at uni-bayreuth.de (Wolfgang =?iso-8859-1?q?M=FCller?=) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] AntHill/JXTA? In-Reply-To: <3D6CFE6D.7020405@Sun.Com> References: <1028280299.3d4a4febd3aba@goliath.notdot.net> <200208261126.35705.Wolfgang.Mueller2@uni-bayreuth.de> <3D6CFE6D.7020405@Sun.Com> Message-ID: <200208290847.52641.Wolfgang.Mueller2@uni-bayreuth.de> > Hi Wolfgang, Salut Bernard > The JXTA platform provides only mechanisms to open pipe connections > and scope peer interactions. Your application is in charge to decide which > peers you want or are allowed to talk to. You can control the underlying > routing so peers which do have not the right credentials are excluded > from routing > your messages or talking to you. The JXTA platform also allows you > to register firewall filters to filter in-coming traffics and > protect your peers againsts other malicious peers. > > Cheers, > > B. Thanks a lot, it convinces me that I have to look into JXTA more closely. To rephrase what I was saying: the AntHill agents have control at the routing level, if I am not mistaken. Wolfgang From Wolfgang.Mueller2 at uni-bayreuth.de Wed Aug 28 23:53:01 2002 From: Wolfgang.Mueller2 at uni-bayreuth.de (Wolfgang =?iso-8859-1?q?M=FCller?=) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] AntHill/JXTA? In-Reply-To: References: Message-ID: <200208290850.38577.Wolfgang.Mueller2@uni-bayreuth.de> On Monday, 26 August 2002 22:49, alexandra rodrigues wrote: > Why don't u try in Astalavista? There's lots of information there. Thanks a lot, alex, but I think we are talking about two different AntHills there. The AntHill I found in Astalavista was a JavaScript editor, and not an agent-based P2P framework, or did I miss something? Cheers, Wolfgang From rajesh at infoglyptic.com Thu Aug 29 00:02:02 2002 From: rajesh at infoglyptic.com (Rajesh Acharya) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] AntHill/JXTA? In-Reply-To: <200208290850.38577.Wolfgang.Mueller2@uni-bayreuth.de> References: <200208290850.38577.Wolfgang.Mueller2@uni-bayreuth.de> Message-ID: <02082912461400.13534@pc70.ig.com> Hi Wolfgang, > > Thanks a lot, alex, but I think we are talking about two different AntHills > there. The AntHill I found in Astalavista was a JavaScript editor, and not > an agent-based P2P framework, or did I miss something? > Cheers, > Wolfgang I am not sure of your requirement. But Aisland is a project which has come very far as an agent-based P2P framework using JXTA. Mat Dietrich is doing an excellent job there. Please see if it is relevant. http://aisland.jxta.org Check it out from CVS and see if you can lend a hand to Mat, who is looking for developers to join the effort and improve Aisland. I have a lot to discuss about JXTA on this list to add to what Bernard explained but I am unable to do that due to many reasons. More later. Regards, Rajesh From Wolfgang.Mueller2 at uni-bayreuth.de Thu Aug 29 00:08:01 2002 From: Wolfgang.Mueller2 at uni-bayreuth.de (Wolfgang =?iso-8859-1?q?M=FCller?=) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] AntHill/JXTA?
In-Reply-To: <02082912461400.13534@pc70.ig.com> References: <200208290850.38577.Wolfgang.Mueller2@uni-bayreuth.de> <02082912461400.13534@pc70.ig.com> Message-ID: <200208290905.17300.Wolfgang.Mueller2@uni-bayreuth.de> > I am not sure of your requirement. But Aisland is a project which has come > very far in agent based P2P framework using JXTA. Mat Dietrich is doing an > excellent job there. Please see if it is relevant. http://aisland.jxta.org > check it out from CVS and see if you can give a lending hand to Mat who is > looking for developers to join the effort and improve aisland. > I have a lot to discuss about JXTA on this list to add to what Bernard > explained but I am unable to do that due to many reasons. > > More later. > > Regards, > > Rajesh Thanks a lot for the link. The page looks great. I will look into the code ASAP. Cheers, Wolfgang From istoica at cs.berkeley.edu Fri Aug 30 20:44:02 2002 From: istoica at cs.berkeley.edu (Ion Stoica) Date: Sat Dec 9 22:12:03 2006 Subject: [p2p-hackers] Call for participation: IPTPS'03 Message-ID: <3D6FFFEA.4F849A9A@cs.berkeley.edu> Please accept my apology if you receive multiple copies of this message. Ion ---- The 2nd International Workshop on Peer-to-Peer Systems (IPTPS'03) 20-21 February, 2003 Claremont Hotel, Berkeley, CA, USA. (http://iptps03.cs.berkeley.edu) Important Dates: * 25 October 2002 : Submission of position papers * 20 December 2002 : Notification of Acceptance * 15 January 2003 : Camera-ready copies * 20-21 February 2003 : IPTPS'03 The 2nd International Workshop on Peer-to-Peer Systems (IPTPS'03) aims to provide a forum for researchers active in peer-to-peer computing to discuss the state-of-the-art and to identify key research challenges in peer-to-peer computing. IPTPS'03 hopes to continue and build on the success of the first workshop, IPTPS'02. The goal of the workshop is to examine peer-to-peer technologies, applications and systems, and also to identify key research issues and challenges that lie ahead. In the context of this workshop, peer-to-peer systems are characterized as being decentralized, self-organizing distributed systems, in which all or most communication is symmetric. Topics of interest include, but are not limited to: * peer-to-peer applications and services * peer-to-peer systems and infrastructures * peer-to-peer algorithms * security in peer-to-peer systems * robustness in peer-to-peer systems * anonymity and anti-censorship * performance of peer-to-peer systems * workload characterization for peer-to-peer systems The workshop aims to bring together researchers and practitioners in the fields of systems, networking, and theory. The program of the workshop will be a combination of invited talks, presentations of position papers, and discussions. To ensure a productive workshop environment, attendance will be limited to about 50 participants who are active in the field. Each potential participant should submit a position paper of 5 pages or less that exposes a new problem, advocates a specific solution, or reports on actual experience. Participants will be invited based on the originality, technical merit and topical relevance of their submissions, as well as the likelihood that the ideas expressed in their submissions will lead to insightful technical discussions at the workshop. Please do not submit abbreviated versions of journal or conference papers.
Organizers:

Program Committee:
Miguel Castro, Microsoft Research
Joe Hellerstein, UC Berkeley
Richard Karp, UC Berkeley
Frans Kaashoek, MIT (co-chair)
Nancy Lynch, MIT
David Mazieres, New York University
Robert Morris, MIT
Ion Stoica, UC Berkeley (co-chair)
Marvin Theimer, Microsoft Research
Amin Vahdat, Duke University
Geoffrey Voelker, UC San Diego
Ellen Zegura, Georgia Tech
Hui Zhang, CMU

Steering Committee:
Peter Druschel, Rice University
Frans Kaashoek, MIT
Antony Rowstron, Microsoft Research
Scott Shenker, ICIR, Berkeley
Ion Stoica, UC Berkeley

Administrative Assistant: Bob Miller, UC Berkeley