From cefn.hoile at bt.com Mon Nov 3 10:50:26 2003 From: cefn.hoile at bt.com (cefn.hoile@bt.com) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] DIET Agents released as Open Source Message-ID: <21DA6754A9238B48B92F39637EF307FD0218066A@i2km41-ukdy.domain1.systemhost.net> The DIET Agents platform has been released as Open Source. It is a light-weight, multi-agent platform for decentralised computing. A bottom-up design was used to ensure that the platform is lightweight, scalable, robust, adaptive and extensible. It is especially suitable for rapidly developing and deploying peer-to-peer prototype applications and adaptive, distributed applications that use bottom-up or nature-inspired techniques. The platform is available from the DIET Agents website at http://diet-agents.sourceforge.net. The website also provides other resources, such as details about the design philosophy, a tutorial, API documentation, access to mailing lists and a basic visualiser. Agents in the platform can be thought of as small, mobile processes. Agents have a minimal memory footprint and inter-agent communication can be very fast. It is possible to run over 100,000 agents on an ordinary desktop machine and there are no inherent limitations on scalability when running applications across multiple machines. The fail-fast, resource constrained execution of kernel functions lets systems gracefully cope with overload and failure. Feedback provided by the kernel enables agents to adapt to changing conditions and overload. A high quality Object-Oriented design ensures that the code is general, modular and extensible. We encourage everyone to download the software, try it out and we welcome any feedback. Cefn Hoile, on behalf of the BT Exact DIET Agents team. http://diet-agents.sourceforge.net From brutfood at yahoo.com Mon Nov 3 14:11:17 2003 From: brutfood at yahoo.com (=?iso-8859-1?q?Daniel=20Freeman?=) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Hi Message-ID: <20031103141117.5562.qmail@web40019.mail.yahoo.com> Hi! everyone. I'm new to this list, and I joined because I don't get the peer to peer ethos. The idea of storing files on consumer kit worries me. (Cost, Coffee spillages etc.) But because some people believe so strongly in it - there must be something in it - so I look forward to reading your discussions. I am mostly interested in Internet Computing (thin client computing and applications/ Internet Operating System etc). I have recently started a discussion forum on this. http://i2genius.com/forum It is still in its infancy, and there is also a place for peer to peer discussions - I'd like to get this going. I think most people who have joined so far are 'client-server centralists' like me - so it would be good to get the input of peer to peer advocates, and perhaps argue about our world views ;) I'll even create new categories if there is enough interest. Daniel http://personals.yahoo.com.au - Yahoo! Personals New people, new possibilities. FREE for a limited time. From wesley at felter.org Tue Nov 4 01:30:12 2003 From: wesley at felter.org (Wes Felter) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Hi In-Reply-To: <20031103141117.5562.qmail@web40019.mail.yahoo.com> References: <20031103141117.5562.qmail@web40019.mail.yahoo.com> Message-ID: <6F134115-0E66-11D8-AD0F-000393A581BE@felter.org> On Nov 3, 2003, at 8:11 AM, Daniel Freeman wrote: > I'm new to this list, and I joined because I don't get > the peer to peer ethos. The idea of storing files on > consumer kit worries me. 
(Cost, Coffee spillages > etc.) But because some people believe so strongly in > it - there must be something in it - so I look forward > to reading your discussions. Sure, I'll take the bait. No sane people advocate storing primary copies of files on random computers. P2P backup within workgroups is a popular idea, but there's usually more than one backup for each file and the notion of a workgroup presupposes a minimal level of trust. There are P2P caching systems, which almost all use cryptographic hashes to ensure data integrity. (Reliability is not a concern in a caching system, since you can always bypass the cache and go to the authoritative source.) The only P2P primary storage system that comes to mind is OceanStore, which is only P2P in the sense that all the servers view each other as peers; in many respects it is a client-server system. One thing that all these systems have in common is redundancy. And then there's good old-fashioned file sharing, which doesn't even try to be reliable, but what do you expect for free? Wes Felter - wesley@felter.org - http://felter.org/wesley/ From brutfood at yahoo.com Thu Nov 6 03:02:46 2003 From: brutfood at yahoo.com (=?iso-8859-1?q?Daniel=20Freeman?=) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) Message-ID: <20031106030246.71571.qmail@web40016.mail.yahoo.com> Thanks for taking the bait Wes :), and thank you Jack also for your email outlining the benefits of replication (redundancy) of data. I'm still not convinced though. I need to know: what's wrong with good old-fashioned client-server? What advantages does Peer to Peer bring? I suspect that the Peer to Peer ethos fits into speculations about a future telecommunications/data/entertainment network with distributed switching intelligence (agents) etc. Am I right? I still don't think the consumer Internet Network is suitable. Putting aside issues of coffee spillages, there is also the overhead of running the server, and consumer asymmetrical broadband is geared for downloads, not serving. It is easy to get confused regarding the reasons for Peer to Peer. The 'piracy' community embraced it because it made things harder for the owners of copyright and their lawyers to take legal action against a single entity. It also appeals to the rebellious, anarchistic element on the Internet. (although personally, I think it plays into Micro$oft's hands by maintaining the workstation status quo rather than the undiscovered country of consumer thin-client internet consoles). Peer to peer is also a technology that is bound to make people say 'hey cool', and the sort of thing that universities and research departments will start throwing at every conceivable scenario without much regard for whether it is appropriate or not. So won't someone please tell me - what use is it? Years ago, I worked on Neural Networks - MLPs. My managers would get really excited by the idea of a new technology, without bothering to understand its mechanics or limitations. It was perceived as an all-powerful panacea - it wasn't. But it probably was a 'sexy' way to get more research funding ;). Is this now the case with Peer to Peer research? Finally, I'd like to plug my forum again ( i2genius.com/forum ) - please come along and say hello ;) Daniel http://personals.yahoo.com.au - Yahoo! Personals New people, new possibilities. FREE for a limited time.
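The integrity mechanism Wes mentions above is easy to make concrete: systems that fetch data from untrusted peers usually name a block by a cryptographic digest of its contents and re-hash whatever arrives, so a corrupted or tampered copy can be detected and refetched. A minimal sketch in Python follows (SHA-1 is used purely for illustration; which digest a given system uses varies):

    import hashlib

    def content_id(data: bytes) -> str:
        # Name a block by the hex digest of its bytes.
        return hashlib.sha1(data).hexdigest()

    def verify(block: bytes, expected_id: str) -> bool:
        # A block fetched from an untrusted peer is accepted only if its
        # digest matches the identifier it was requested under.
        return content_id(block) == expected_id

    if __name__ == "__main__":
        original = b"some cached object"
        cid = content_id(original)
        assert verify(original, cid)           # an intact copy passes
        assert not verify(b"tampered", cid)    # a corrupted copy is rejected

Nothing here guarantees availability, of course; it only ensures that whatever a peer does hand back is the data that was asked for.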
From seth.johnson at realmeasures.dyndns.org Thu Nov 6 03:26:40 2003 From: seth.johnson at realmeasures.dyndns.org (Seth Johnson) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) Message-ID: Peer to peer is the Internet. Peer to peer is: 1) TCP 2) IP 3) DNS The reason for peer to peer is innovation, that's all. Separate the transport from the content and application layer, and you can build anything. Seth -----Original Message----- From: Daniel Freeman Date: Thu, 6 Nov 2003 14:02:46 +1100 (EST) Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) > Thanks for taking the bait Wes :), Thank you Jack also > for your email also outlining the benefits of > replication (redundancy) of data. I'm still not > convinced though. I need to know what's wrong with > good old fashioned client-server? What advantages > Peer to Peer brings? > > I suspect that the Peer to Peer ethos fits into > speculations about a future > telecommunications/data/entertainment network with > distributed switching intelligence (agents) etc. Am I > right? > > I still don't think the consumer Internet Network is > suitable. Putting aside issues of coffee spillages, > there is also the overhead of running the server, also > consumer Asymetrical broadband is geared for > downloads, not serving. > > It is easy to get confused regarding the reasons for > Peer to Peer. The 'piracy' community embraced it > because it made things harder for the owners of > copyright, and their lawyers to take legal action > against a single entity. It also appeals to the > rebellious, anarchistic element on the Internet. > (although personally, I think it plays into Micro$ofts > hands by maintaining the workstation status-quo rather > than the undiscovered country of consumer thin-client > internet consoles). Peer to peer is also a technology > that is bound to make people say 'hey cool', and the > sort of thing that universities and research > departments will start throwing at every conceivable > scenario without much regard for whether it is > appropriate or not. > > So won't someone please tell me - what use is it? > > Years ago, I worked on Neural Networks - MLP's. My > managers would get really excited by the idea of new > technologies, without bothering to understand its > mechanics or limitations. It was percieved as an all > powerful panacea - it wasn't. But it probably was a > 'sexy' way to get more research funding ;). Is this > now the case with Peer to Peer research? > > Finally, I'd like to plug my forum again ( > i2genius.com/forum ) - please come along and say hello > ;) > > Daniel > > http://personals.yahoo.com.au - Yahoo! Personals > New people, new possibilities. FREE for a limited time. > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From bryan.turner at pobox.com Thu Nov 6 03:51:07 2003 From: bryan.turner at pobox.com (Bryan Turner) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) References: <20031106030246.71571.qmail@web40016.mail.yahoo.com> Message-ID: <007001c3a419$39c02fe0$6601a8c0@aspen> I guess I'm confused in the opposite direction.. what does client/server have that is even remotely close to peer-2-peer? 
Client/Server is totally encompassed by P2P (by definition) and can be extended in ways that centralized servers cannot. Some projects which are fully-P2P: - Distributed Simulation (Military HLA https://www.dmso.mil/public/transition/hla) - File Sharing (http://www.clearcube.com) - Bandwidth Sharing (http://bitconjurer.org/BitTorrent) - Distributed File Systems (http://research.microsoft.com/sn/Farsite) Some projects which are based on P2P concepts: - PC Management/Migration (http://www.clearcube.com) - Beowulf Clusters (http://www.beowulf.org) - Grid Processing (SETI@Home; http://setiathome.ssl.berkeley.edu) - Web Caching (http://www.akamai.com) - Fault Tolerance (Hive Computing; http://www.tsunamiresearch.com/hivecomputing) I might also mention that P2P is in no way a "new" technology. Ever since there were computers of roughly equal capacity linked by wires, there have been P2P systems. The very fabric of the internet is composed entirely of P2P routers! Each router exchanges messages with others it knows about, building a topology of the network around it, and calculating the optimal path between any two subnets. These protocols (BGP, EIGRP, etc.) are P2P to the very bone. --Bryan bryan.turner@pobox.com From badapple at netnitco.net Thu Nov 6 03:49:10 2003 From: badapple at netnitco.net (Fred Grott) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) In-Reply-To: <007001c3a419$39c02fe0$6601a8c0@aspen> References: <20031106030246.71571.qmail@web40016.mail.yahoo.com> <007001c3a419$39c02fe0$6601a8c0@aspen> Message-ID: <3FA9C4B6.1050106@netnitco.net> Bryan Turner wrote: > I guess I'm confused in the opposite direction.. what does client/server >have that is even remotely close to peer-2-peer? Client/Server is totally >encompassed by P2P (by definition) and can be extended in ways that >centralized servers cannot. > >Some projects which are fully-P2P: >- Distributed Simulation (Military HLA >https://www.dmso.mil/public/transition/hla) >- File Sharing (http://www.clearcube.com) >- Bandwidth Sharing (http://bitconjurer.org/BitTorrent) >- Distributed File Systems (http://research.microsoft.com/sn/Farsite) > >Some projects which are based on P2P concepts: >- PC Management/Migration (http://www.clearcube.com) >- Beowulf Clusters (http://www.beowulf.org) >- Grid Processing (SETI@Home; http://setiathome.ssl.berkeley.edu) >- Web Caching (http://www.akamai.com) >- Fault Tolerance (Hive Computing; >http://www.tsunamiresearch.com/hivecomputing) > > I might also mention that P2P is in no way a "new" technology. Ever >since there were computers of roughly equal capacity linked by wires, there >have been P2P systems. The very fabric of the internet is composed entirely >of P2P routers! Each router exchanges messages with others it knows about, >building a topology of the network around it, and calculating the optimal >path between any two subnets. These protocols (BGP, EIGRP, etc.) are P2P >to the very bone. > >--Bryan >bryan.turner@pobox.com > >_______________________________________________ >p2p-hackers mailing list >p2p-hackers@zgp.org >http://zgp.org/mailman/listinfo/p2p-hackers >_______________________________________________ >Here is a web page listing P2P Conferences: >http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > > Would the original implementation of Usenet also fall into this category, as it was originally designed to move files, in this case news files, among peers?
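Bryan's description of routers computing optimal paths from locally exchanged information can be illustrated in a few lines. The sketch below (Python, with made-up link costs) runs a link-state-style shortest-path computation, closer in spirit to OSPF or IS-IS than to BGP's policy-driven route selection, but it shows the essential step: each node, acting as a peer, derives its own forwarding decisions from topology data announced by its neighbours.

    import heapq

    # Link costs between routers, as a node might assemble them from
    # topology announcements received from its peers (illustrative values).
    LINKS = {
        "A": {"B": 1, "C": 4},
        "B": {"A": 1, "C": 2, "D": 5},
        "C": {"A": 4, "B": 2, "D": 1},
        "D": {"B": 5, "C": 1},
    }

    def shortest_path(graph, src, dst):
        # Dijkstra's algorithm over the shared topology: the same idea a
        # link-state router uses to turn its database into a forwarding table.
        queue = [(0, src, [src])]
        seen = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for neighbour, weight in graph[node].items():
                if neighbour not in seen:
                    heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
        return None

    print(shortest_path(LINKS, "A", "D"))   # -> (4, ['A', 'B', 'C', 'D'])

No single node is in charge here: every router runs the same computation over the same exchanged data, which is exactly the symmetry being claimed for the backbone protocols.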
From blanu at bozonics.com Thu Nov 6 08:18:15 2003 From: blanu at bozonics.com (Brandon Wiley) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) In-Reply-To: <20031106030246.71571.qmail@web40016.mail.yahoo.com> Message-ID: > It is easy to get confused regarding the reasons for > Peer to Peer. The 'piracy' community embraced it > because it made things harder for the owners of > copyright, and their lawyers to take legal action > against a single entity. Actually the pirates embraced it because it offered a more efficient use of bandwidth and the piracy community is always pushing the limits of bandwidth usage. The popular tools in the piracy community offer no anonymity over FTP. BitTorrent is easier to track than FTP actually. Serious pirates don't have time for protection because they have to push out gigs of data daily. From blanu at bozonics.com Thu Nov 6 08:48:08 2003 From: blanu at bozonics.com (Brandon Wiley) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) In-Reply-To: Message-ID: > Peer to peer is the Internet. > > Peer to peer is: > > 1) TCP > 2) IP > 3) DNS > > The reason for peer to peer is innovation, that's all. Separate the > transport from the content and application layer, and you can build > anything. This is something I hear a lot from people who weren't paying attention when "peer-to-peer" started as a thing people talked about. Peer-to-peer started when Napster, Gnutella, and Freenet all popped up at the same time from out of nowhere. It was a technologically-mediated movement to achieve social ends that a bunch of college kids started. Then some more people jumped on writing similar but more advanced applications or had been working on them for a while but decided to sign on to the p2p meme. Then Tim O'Reilly or someone else over there called it Peer-to-Peer in order, I assume, to try to get something marketable out of it. Suddenly there was a P2PCon and everyone working on this stuff met each other and it became a "thing". Then some P2P companies popped up and crashed, the RIAA shut down some people, projects died, were reborn, or splintered. Also Sun, Microsoft, and IBM tried to pretend that they had some involvement. Then some guys at MIT published this Chord paper and after that people started spending a lot of time reading and writing papers and attending this new batch of academic p2p conferences. That's about where we are now. There's a lot of innovation in the academic field and limited interesting stuff going on in industry and open source, but that balance will shift in various ways over time as it has been doing so far. So I am disdainful when I hear things like the Internet is P2P and Usenet was P2P, etc.. Such comments ignore the fact that P2P as a thing people talked about specifically, as a term, refers to a specific set of developments and philosophies which came out of this brief decentralized movement. The Internet is not p2p in spirit. All of the traffic is routed by large centralized hubs. 802.11b mesh networks are p2p. From sam at neurogrid.com Thu Nov 6 10:05:03 2003 From: sam at neurogrid.com (Sam Joseph) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) In-Reply-To: References: Message-ID: <3FAA1CCF.3030207@neurogrid.com> Hi Brandon, Excellent summary - as usual you put into words just what I had been unable or unmotivated to articulate. This should go up on a web page somewhere under the title "P2P in 2003". 
Of course we could add something about the goals of the original projects (Napster, Gnutella, Freenet) which might be summarised as trying to connect together all the storage space available on all the "occasionally connected" PCs out there and seeing what interesting properties could be derived. Things like anonymity, redundancy, resilience etc. I have got into arguments with people who say that Napster was not novel at all. A colleague of mine maintains that Napster was purely a user interface development in that someone had bolted together a browser and a server. Either way I think it was an important development, and the motives of the people creating it (they wanted to share stuff more easily I guess) were at least as important to understand as why people starting using it in such droves. CHEERS> SAM Brandon Wiley wrote: >This is something I hear a lot from people who weren't paying attention >when "peer-to-peer" started as a thing people talked about. Peer-to-peer >started when Napster, Gnutella, and Freenet all popped up at the same time >from out of nowhere. It was a technologically-mediated movement to achieve >social ends that a bunch of college kids started. Then some more people >jumped on writing similar but more advanced applications or had been >working on them for a while but decided to sign on to the p2p meme. Then >Tim O'Reilly or someone else over there called it Peer-to-Peer in order, >I assume, to try to get something marketable out of it. Suddenly there was >a P2PCon and everyone working on this stuff met each other and it became a >"thing". Then some P2P companies popped up and crashed, the RIAA shut down >some people, projects died, were reborn, or splintered. Also Sun, >Microsoft, and IBM tried to pretend that they had some involvement. > >Then some guys at MIT published this Chord paper and after that people >started spending a lot of time reading and writing papers and attending >this new batch of academic p2p conferences. That's about where we are now. >There's a lot of innovation in the academic field and limited interesting >stuff going on in industry and open source, but that balance will shift in >various ways over time as it has been doing so far. > >So I am disdainful when I hear things like the Internet is P2P and Usenet >was P2P, etc.. Such comments ignore the fact that P2P as a thing people >talked about specifically, as a term, refers to a specific set of >developments and philosophies which came out of this brief decentralized >movement. The Internet is not p2p in spirit. All of the traffic is routed >by large centralized hubs. 802.11b mesh networks are p2p. > > >_______________________________________________ >p2p-hackers mailing list >p2p-hackers@zgp.org >http://zgp.org/mailman/listinfo/p2p-hackers >_______________________________________________ >Here is a web page listing P2P Conferences: >http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > > > From zooko at zooko.com Thu Nov 6 12:50:28 2003 From: zooko at zooko.com (Bryce Wilcox-O'Hearn) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) In-Reply-To: Message from Brandon Wiley of "Thu, 06 Nov 2003 02:48:08 CST." References: Message-ID: One of my least favorite topics of conversation is "What is P2P?". The underlying problem is that the phrase "P2P" has multiple overlapping meanings, and people often misunderstand one another and get into arguments unnecessarily. 
The simplest solution would be for everyone to stop using the word "P2P" and instead substitute a more informative word for the specific thing that they have in mind. But that isn't going to happen, so instead I'll enumerate the major meanings: 1. "P2P" -- the final hype wave of the dotcom boom of the late 90's; This might sound flippant, but there are a lot of people that I deal with (businessmen, investors) for whom this is the primary meaning! To these people, to say that "TCP/IP is P2P" or even to say that "The Chord Distributed Hash Table is P2P" is simply incorrect usage of the word "P2P". To them, P2P was a set of businesses and Internet applications (starting with Napster, Gnutella, and Freenet) that they were trying to make money from in the year 2000. By the way, the first hype wave of the next boom (?) is called "Friend of a Friend" or "FOAF" or "social software". Ironically, FOAF is what I meant when I said "P2P" for the last couple of years. When I spoke at the First O'Reilly P2P Conference in early 2001, what I talked about was FOAF. However, no business person is *ever* going to refer to a modern business plan in 2003 as "P2P", any more than they would have referred to one in 2001 as "push technology". Therefore, this use of the term is now historical, except for file-sharing software like Kazaa. 2. "P2P" -- distributed hash tables and related "emergent networks" research; The authors of Chord and the other DHTs were probably inspired by Napster, Gnutella, and Freenet. However, the science of distributed data structures and decentralized, self-organizing networks is only partially related to the P2P hype of the year 2000. That relation is that most of the former category, and a few of the latter, share an aversion to central points of control. I spoke at the First International P2P Workshop in early 2002, and casually mentioned that "P2P is over." The roomful of eminent scientists gasped, laughed, shook their heads. "Wait wait," I hastened to explain, "The *business* stuff is over. The science stuff is still fine, of course." 3. "P2P" -- an underground movement of software development with social goals; In this sense the "p2p punks" are the successors to the cypherpunks, except that p2p punks actually do write code. What do BitTorrent, bitzi.com, and anonymous remailer systems have in common? It isn't something that they share with the "business hype" kind of P2P, and it isn't something that they share with the "networking research" kind of P2P. It's just that the inventors all know each other and socialize on the p2p-hackers mailing list and the #p2p-hackers IRC channel. I spoke at the first CodeCon conference in 2002. None of the presenters had a business plan for making millions of dollars, and only one was talking about implementing a DHT, but when when describing them collectively in casual conversation I might call them "P2P hackers". 4. "P2P" -- all of the computers can act both as client and as server; This is the definition typically used by newcomers from the network engineering world. This definition is used when arguing about what is or isn't P2P, or when arguing about whether P2P is good or bad. Other than that, this definition isn't used much. Hope this helps. Regards, Bryce Wilcox-O'Hearn From cefn.hoile at bt.com Thu Nov 6 14:08:12 2003 From: cefn.hoile at bt.com (cefn.hoile@bt.com) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) 
Message-ID: <21DA6754A9238B48B92F39637EF307FD0204B9E0@i2km41-ukdy.domain1.systemhost.net> Basically I see Peer to Peer as a deployment paradigm. It usually depends upon shared network standards (TCP, IP, DNS(2go)), yes. It's not new, it was reflected in the early internet, but this symmetry was disrupted by the asymmetry of commercial internet development. (Berners Lee released both the text browser and server for researchers to run on _their own_ machines) P2P is a deployment paradigm, not a set of technologies. It may in fact use a bunch of technologies which are common with other defined areas such as Grid, RAID etc. However, we shouldn't forget what is compelling about it as a deployment paradigm, which is... * it leverages the resources of participants' machines. As a consequence of this it is possible... * to deploy applications merely as software, or with minimal infrastructure. * to deploy applications which serve large numbers of people, without having to provision, overprovision, build or maintain major centralised resources - improving the business case * to deploy applications for free, (since infrastructure costs are minimal) - improving the the marketing of your application, and stimulating viral uptake * to deploy applications which exploit participants resources - you try running instant messaging without client-side resources * and many more funky features People with a technology-oriented focus complain 'Napster wasn't REALLY P2P". Who cares whether it was fully decentralised. The value came from the resources contributed by participants, which happened to be controlled through a centralised infrastructure. Of course, all of the concepts mentioned in this thread are relevant, and link together with P2P, but lets not forget where the value is. Cefn http://www.cefn.com -----Original Message----- From: Sam Joseph [mailto:sam@neurogrid.com] Sent: 06 November 2003 10:05 To: Peer-to-peer development. Subject: Re: [p2p-hackers] Re: Hi! (Why Peer to Peer?) Hi Brandon, Excellent summary - as usual you put into words just what I had been unable or unmotivated to articulate. This should go up on a web page somewhere under the title "P2P in 2003". Of course we could add something about the goals of the original projects (Napster, Gnutella, Freenet) which might be summarised as trying to connect together all the storage space available on all the "occasionally connected" PCs out there and seeing what interesting properties could be derived. Things like anonymity, redundancy, resilience etc. I have got into arguments with people who say that Napster was not novel at all. A colleague of mine maintains that Napster was purely a user interface development in that someone had bolted together a browser and a server. Either way I think it was an important development, and the motives of the people creating it (they wanted to share stuff more easily I guess) were at least as important to understand as why people starting using it in such droves. CHEERS> SAM Brandon Wiley wrote: >This is something I hear a lot from people who weren't paying attention >when "peer-to-peer" started as a thing people talked about. >Peer-to-peer started when Napster, Gnutella, and Freenet all popped up >at the same time from out of nowhere. It was a technologically-mediated >movement to achieve social ends that a bunch of college kids started. >Then some more people jumped on writing similar but more advanced >applications or had been working on them for a while but decided to >sign on to the p2p meme. 
Then Tim O'Reilly or someone else over there >called it Peer-to-Peer in order, I assume, to try to get something >marketable out of it. Suddenly there was a P2PCon and everyone working >on this stuff met each other and it became a "thing". Then some P2P >companies popped up and crashed, the RIAA shut down some people, >projects died, were reborn, or splintered. Also Sun, Microsoft, and IBM >tried to pretend that they had some involvement. > >Then some guys at MIT published this Chord paper and after that people >started spending a lot of time reading and writing papers and attending >this new batch of academic p2p conferences. That's about where we are >now. There's a lot of innovation in the academic field and limited >interesting stuff going on in industry and open source, but that >balance will shift in various ways over time as it has been doing so >far. > >So I am disdainful when I hear things like the Internet is P2P and >Usenet was P2P, etc.. Such comments ignore the fact that P2P as a thing >people talked about specifically, as a term, refers to a specific set >of developments and philosophies which came out of this brief >decentralized movement. The Internet is not p2p in spirit. All of the >traffic is routed by large centralized hubs. 802.11b mesh networks are >p2p. > > >_______________________________________________ >p2p-hackers mailing list >p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers >_______________________________________________ >Here is a web page listing P2P Conferences: >http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > > > _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From jeff at pl.atyp.us Thu Nov 6 14:20:08 2003 From: jeff at pl.atyp.us (Jeff Darcy) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) In-Reply-To: <07FBB852E7D37D4F95879AF229258FF69B15A2@washington.revivio.com> Message-ID: <07FBB852E7D37D4F95879AF229258FF60CE530@washington.revivio.com> > The Internet is not p2p in spirit. All of the > traffic is routed by large centralized hubs. Those who do not know history... The *inter*net was very much designed around the idea of equal entities communicating in a symmetric fashion. There's no concept in the transport or lower protocols of one type of equipment always being the requester and another being the responder, as was the case in other networks at the time. Most of the protocols such as FTP or SMTP establish roles for each participant in a conversation, but from very early days it was the case that the roles might change from session to session. The fact that *over a decade later* the infrastructure that grew up around the internet technology became rather centralized was actually a great shock and disappointment to the internet's designers. The web - a relative newcomer, but now the dominant usage paradigm - is in many ways the antithesis of what they had intended. The internet at its low to middle levels is very much p2p (small letters, technical term that has existed for ages) even though it might not be P2P (big letters, marketing term of more recent vintage). The thing that I think distinguishes "modern P2P" with older modes of operation is not really anything to do with peer vs. hierarchical relationships at all. 
Many old-style protocols involved peers, while many new-style ones involve brokers or supernodes or some such. The biggest difference I see is that new-style P2P involves multi-way instead of two-way conversations. For example, in FTP one machine opens a connection to one other machine and transfers files. In P2P a machine might first contact a broker, which then refers them to *several* other nodes to/from which it transfers files or pieces of files simultaneously. The set of other nodes might even (in fact, in many systems is highly likely to) change during the session, and there might be various sorts of request routing/forwarding/delegation within the higher-level protocol. This model involves a fundamentally different set of algorithms and protocols than simple two-way communication, and it is what makes current work in P2P interesting. From jdd at dixons.org Thu Nov 6 14:56:18 2003 From: jdd at dixons.org (Jim Dixon) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) In-Reply-To: Message-ID: <20031106142658.J68332-100000@localhost> On Thu, 6 Nov 2003, Brandon Wiley wrote: > So I am disdainful when I hear things like the Internet is P2P and Usenet > was P2P, etc.. Such comments ignore the fact that P2P as a thing people > talked about specifically, as a term, refers to a specific set of > developments and philosophies which came out of this brief decentralized > movement. The Internet is not p2p in spirit. All of the traffic is routed > by large centralized hubs. 802.11b mesh networks are p2p. The Internet is split into a large number of Autonomous Systems, ASs. Some of these are huge, some are tiny. Routing is enabled by the exchange of routing information using a protocol called BGP4 ("Border Gateway Protocol"). The parties involved are called peers. The process is called peering. When I got into the business in 1994, the first thing that I did was set up peering with other networks. Everyone involved then, and everyone involved now, thinks of BGP4 as a peer-to-peer protocol -- because it is. This exchange of routing information is not the same as routing, but it is essential to routing. It is true that the two functions, routing and managing the routing tables, are carried out on the same machine. Within ASs, routing is managed using an IGP ("Interior Gateway Protocol"), usually OSPF or IS-IS. These are also peer-to-peer protocols. In other words, the Internet backbone is run as a peer-to-peer network and has been from the beginning. End users may perceive the Internet differently. But those who operate the backbone understand it and manage it as a p2p network composed of thousands of smaller p2p networks. Anyone wanting to build a successful p2p network needs to study and understand the Internet. The p2p network that is the Internet backbone is the largest and most successful p2p network in existence. BGP is purely a peer-to-peer process. There are other aspects of the Internet, the DNS, for example, that are what might be called server-assisted p2p networks. The world's name servers exchange information among themselves, but go to the root name servers to find the authoritative name server for a given domain. There are many who argue that this is a design flaw, that the domain name system should be run as a pure p2p network. My main point here is that the p2p networks are not something new technically. People were consciously designing and building p2p networks twenty years ago. 
The operator community has enormous amounts of experience in the technology; people designing p2p networks now should study the Internet, avoid its mistakes, learn from its experience. -- Jim Dixon jdd@dixons.org tel +44 117 982 0786 mobile +44 797 373 7881 From jdd at dixons.org Thu Nov 6 15:25:39 2003 From: jdd at dixons.org (Jim Dixon) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) In-Reply-To: <07FBB852E7D37D4F95879AF229258FF60CE530@washington.revivio.com> Message-ID: <20031106151207.C68332-100000@localhost> On Thu, 6 Nov 2003, Jeff Darcy wrote: > The thing that I think distinguishes "modern P2P" with older modes of > operation is not really anything to do with peer vs. hierarchical > relationships at all. Many old-style protocols involved peers, while > many new-style ones involve brokers or supernodes or some such. The > biggest difference I see is that new-style P2P involves multi-way > instead of two-way conversations. Backbone routers typically have at least dozens, often hundreds of peering sessions running. This is not new: it's been at the heart of the Internet since it began. Well, shortly after it began ;-) > For example, in FTP one machine opens > a connection to one other machine and transfers files. In P2P a machine > might first contact a broker, which then refers them to *several* other > nodes to/from which it transfers files or pieces of files > simultaneously. The set of other nodes might even (in fact, in many > systems is highly likely to) change during the session, and there might > be various sorts of request routing/forwarding/delegation within the > higher-level protocol. This model involves a fundamentally different > set of algorithms and protocols than simple two-way communication, and > it is what makes current work in P2P interesting. A router establishing peering with other networks (say after it has been rebooted) exhibits similar behaviour. It announces itself to peers who are known to it, accepts peering requests from routers that satisfy certain criteria. Then it tells everyone else about the routes that it knows and everyone else tells it about the routes that they know. These routes will have been passed across the Internet, often around the world, in similar exchanges of information between peers. Once the sessions are up, chatter between routers continues at a lower level, as connections come up and go down, as networks join and leave, all over the world. There are several tens of thousands of Autonomous Systems and over a hundred thousand routes in the globally-shared routing table. Networks have been talking BGP to one another without any break at all for many years, even while the BGP protocol itself has evolved. -- Jim Dixon jdd@dixons.org tel +44 117 982 0786 mobile +44 797 373 7881 From seth.johnson at realmeasures.dyndns.org Thu Nov 6 15:35:35 2003 From: seth.johnson at realmeasures.dyndns.org (Seth Johnson) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) Message-ID: Yep. The main struggle was won way back in the 1970's, when David Reed et al. convinced Vinton Cerf to build the protocols based on separation of a packetizing transport layer from the "content" or application layer. The Internet is P2P. Seth -----Original Message----- From: Jim Dixon Date: Thu, 6 Nov 2003 14:56:18 +0000 (GMT) Subject: Re: [p2p-hackers] Re: Hi! (Why Peer to Peer?) 
> On Thu, 6 Nov 2003, Brandon Wiley wrote: > > > So I am disdainful when I hear things like the Internet is P2P and > Usenet > > was P2P, etc.. Such comments ignore the fact that P2P as a thing > people > > talked about specifically, as a term, refers to a specific set of > > developments and philosophies which came out of this brief > decentralized > > movement. The Internet is not p2p in spirit. All of the traffic is > routed > > by large centralized hubs. 802.11b mesh networks are p2p. > > The Internet is split into a large number of Autonomous Systems, ASs. > Some of these are huge, some are tiny. Routing is enabled by the > exchange > of routing information using a protocol called BGP4 ("Border Gateway > Protocol"). The parties involved are called peers. The process is > called > peering. When I got into the business in 1994, the first thing that > I did > was set up peering with other networks. Everyone involved then, and > everyone involved now, thinks of BGP4 as a peer-to-peer protocol -- > because it is. This exchange of routing information is not the same > as > routing, but it is essential to routing. It is true that the two > functions, routing and managing the routing tables, are carried out > on the > same machine. > > Within ASs, routing is managed using an IGP ("Interior Gateway > Protocol"), > usually OSPF or IS-IS. These are also peer-to-peer protocols. > > In other words, the Internet backbone is run as a peer-to-peer > network and > has been from the beginning. > > End users may perceive the Internet differently. But those who > operate > the backbone understand it and manage it as a p2p network composed of > thousands of smaller p2p networks. > > Anyone wanting to build a successful p2p network needs to study and > understand the Internet. The p2p network that is the Internet > backbone > is the largest and most successful p2p network in existence. > > BGP is purely a peer-to-peer process. There are other aspects of the > Internet, the DNS, for example, that are what might be called > server-assisted p2p networks. The world's name servers exchange > information among themselves, but go to the root name servers to find > the authoritative name server for a given domain. There are many who > argue that this is a design flaw, that the domain name system should > be > run as a pure p2p network. > > My main point here is that the p2p networks are not something new > technically. People were consciously designing and building p2p > networks > twenty years ago. The operator community has enormous amounts of > experience in the technology; people designing p2p networks now > should > study the Internet, avoid its mistakes, learn from its experience. > > -- > Jim Dixon jdd@dixons.org tel +44 117 982 0786 mobile +44 797 373 > 7881 > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From seth.johnson at realmeasures.dyndns.org Thu Nov 6 16:06:06 2003 From: seth.johnson at realmeasures.dyndns.org (Seth Johnson) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) 
Message-ID: I would like to observe the important consequence that should be acknowledged and borne in mind here: When people talk about regulating so-called "P2P" file-sharing apps, they are talking specifically about abolishing the fundamental protocols of the Internet. Seth -----Original Message----- From: Jim Dixon Date: Thu, 6 Nov 2003 15:25:39 +0000 (GMT) Subject: RE: [p2p-hackers] Re: Hi! (Why Peer to Peer?) > On Thu, 6 Nov 2003, Jeff Darcy wrote: > > > The thing that I think distinguishes "modern P2P" with older modes > of > > operation is not really anything to do with peer vs. hierarchical > > relationships at all. Many old-style protocols involved peers, > while > > many new-style ones involve brokers or supernodes or some such. > The > > biggest difference I see is that new-style P2P involves multi-way > > instead of two-way conversations. > > Backbone routers typically have at least dozens, often hundreds of > peering > sessions running. This is not new: it's been at the heart of the > Internet > since it began. Well, shortly after it began ;-) > > > For example, in FTP one machine > opens > > a connection to one other machine and transfers files. In P2P a > machine > > might first contact a broker, which then refers them to *several* > other > > nodes to/from which it transfers files or pieces of files > > simultaneously. The set of other nodes might even (in fact, in > many > > systems is highly likely to) change during the session, and there > might > > be various sorts of request routing/forwarding/delegation within > the > > higher-level protocol. This model involves a fundamentally > different > > set of algorithms and protocols than simple two-way communication, > and > > it is what makes current work in P2P interesting. > > A router establishing peering with other networks (say after it has > been > rebooted) exhibits similar behaviour. It announces itself to peers > who > are known to it, accepts peering requests from routers that satisfy > certain criteria. Then it tells everyone else about the routes that > it > knows and everyone else tells it about the routes that they know. > These > routes will have been passed across the Internet, often around the > world, > in similar exchanges of information between peers. Once the sessions > are > up, chatter between routers continues at a lower level, as > connections > come up and go down, as networks join and leave, all over the world. > There are several tens of thousands of Autonomous Systems and over a > hundred thousand routes in the globally-shared routing table. > Networks > have been talking BGP to one another without any break at all for > many years, > even while the BGP protocol itself has evolved. > > -- > Jim Dixon jdd@dixons.org tel +44 117 982 0786 mobile +44 797 373 > 7881 > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From cefn.hoile at bt.com Thu Nov 6 17:17:12 2003 From: cefn.hoile at bt.com (cefn.hoile@bt.com) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) Message-ID: <21DA6754A9238B48B92F39637EF307FD02180672@i2km41-ukdy.domain1.systemhost.net> Zooko wrote: "One of my least favorite topics of conversation is "What is P2P?"." I agree with you. I think the original question was "Why Peer to Peer?" 
or in other words "what is the value in Peer to Peer"? This is perhaps easier to answer, and more relevant. As it turns out, it may not be so different technologically from other approaches. It may not be so different in communications from other approaches. However, there's something very distinctive about applications coming out of nowhere, reaching hundreds of thousands of users within just a few months, and potentially millions by the end of a year, with no major upfront investment and no obvious revenue stream. There's something worth noting in that. Previous cases (including some of the core internet technologies) may take advantage of the same ways of generating/accessing value. For example, the Internet itself depends upon companies choosing to contribute resources to expand the network, which in turn adds value to the network. We saw a similarly distinctive adoption curve for the internet. However, the cases of file-sharing underlined the _untapped_ opportunities which exist for this same approach, which go beyond physical network deployment, and creep into content delivery, workspaces, collaborative filtering, and other application-level functions. We don't have to say P2P is entirely new. We don't have to say that it's entirely different. But we can reflect on the virtues of the mode of deployment which enabled Napster, KaZaA, etc. to achieve such user bases with so little investment, challenging entrenched providers along the way. I basically agree with Shirky that these virtues are fundamentally about exploiting "resources at the edge", specifically the resources of the application users themselves. Cefn http://www.cefn.com From seth.johnson at realmeasures.dyndns.org Thu Nov 6 17:40:16 2003 From: seth.johnson at realmeasures.dyndns.org (Seth Johnson) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) Message-ID: As David Reed (who in large measure is responsible for the IP protocol) has observed, the technological value of P2P (or the end-to-end principle) is its basis in the separation of the packetizing transport layer from the applications layer. It fosters innovation by eliminating "early binding," that is, building in preestablished assumptions about the structure of applications. Add to that IP addressing, and everybody has the power to build an app of any sort, which has nearly equivalent chances of gaining adoption simply on the basis of usability and/or quality. Seth Johnson -----Original Message----- From: Date: Thu, 6 Nov 2003 17:17:12 -0000 Subject: RE: [p2p-hackers] Re: Hi! (Why Peer to Peer?) > Zooko wrote: "One of my least favorite topics of conversation is > "What > is P2P?"." > > I agree with you. I think the original question was "Why Peer to > Peer?" > or in other words "what is the value in Peer to Peer"? > > This is perhaps easier to answer, and more relevant. > > As it turns out, it may not be so different technologically from > other > approaches. It may not be so different in communications from other > approaches. > > However, there's something very distinctive about applications coming > out of nowhere, reaching hundreds of thousands of users within just a > few months, and potentially millions by the end of a year, with no > major > upfront investment and no obvious revenue stream. There's something > worth noting in that. > > Previous cases, (including some of the core internet technologies), > may > take advantage of the same ways of generating/accessing value. 
> > For example, the Internet itself depends upon companies choosing to > contribute resources to expand the network, which in turn adds value > to > the network. We saw a similarly distinctive adoption curve for the > internet. > > However, the cases of file-sharing underlined the _untapped_ > opportunities which exist for this same approach, which go beyond > physical network deployment, and creep into content delivery, > workspaces, collaborative filtering, and other application-level > functions. > > We don't have to say P2P is entirely new. We don't have to say that > it's > entirely different. But we can reflect on the virtues of the mode of > deployment which enabled Napster, KaZaA etc to achieve such user > bases > with so little investment, challenging entrenched providers along the > way. > > I basically agree with Shirky that these virtues are fundamentally > about > exploiting "resources at the edge", specifically the resources of the > application users themselves. > > Cefn > http://www.cefn.com > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From moore at eds.org Thu Nov 6 18:46:02 2003 From: moore at eds.org (Jonathan Moore) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) In-Reply-To: <20031106030246.71571.qmail@web40016.mail.yahoo.com> References: <20031106030246.71571.qmail@web40016.mail.yahoo.com> Message-ID: <1068144362.18425.35.camel@tot> On Wed, 2003-11-05 at 19:02, Daniel Freeman wrote: > I need to know what's wrong with > good old fashioned client-server? What advantages > Peer to Peer brings? In a robust p2p system the available resources grow linearly with the number of clients. In a client-server model the volume of the resources is a constant defined by the resources of the server. The internet is end to end, not p2p. The end-to-end principle is very important, and the evolution of p2p would have been difficult or impossible without it, but it is not the same thing. End to end describes a system where the terminal nodes of a communication stream are responsible for maintaining their communication. This is an attempt to reduce the load on the network between the end points by giving it as little to do as possible. P2P systems are ones where the resources of the network are considered in aggregate. You do not consider whether a particular server has a file but rather whether the file is on the network. Napster was a p2p app because, although there was a central server for search, file transfers were between client nodes, not from server to client. -Jonathan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: This is a digitally signed message part Url : http://zgp.org/pipermail/p2p-hackers/attachments/20031106/cf635efd/attachment.pgp From justin at chapweske.com Thu Nov 6 19:01:00 2003 From: justin at chapweske.com (Justin Chapweske) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) In-Reply-To: <1068144362.18425.35.camel@tot> References: <20031106030246.71571.qmail@web40016.mail.yahoo.com> <1068144362.18425.35.camel@tot> Message-ID: <3FAA9A6C.7010400@chapweske.com> I like this notion. 
P2P systems often enable aggregation of resources in a "centralized" cloud in much the same way that client/server systems do, but w/o loss of the end-to-end principles. It may be difficult to see how this applies to a system like Swarmcast, which drives all content from a central server, but the resource that is aggregated in Swarmcast isn't content, it's bandwidth. With this line of taxonomy, file sharing systems aggregate both bandwidth and files, while SETI@Home aggregates CPU resources. And now I will go back to not caring about how to define P2P, since nowadays it is too synonymous with "piracy". -Justin > > In a robust p2p system the available resources grow linearly with the > number of clients. In a client-server model the volume of the > resources is a constant defined by the resources of the server. > > > The internet is end to end, not p2p. The end-to-end principle is very > important, and the evolution of p2p would have been difficult or impossible > without it, but it is not the same thing. End to end describes a system > where the terminal nodes of a communication stream are responsible for > maintaining their communication. This is an attempt to reduce the load on > the network between the end points by giving it as little to do as > possible. > > P2P systems are ones where the resources of the network are considered > in aggregate. You do not consider whether a particular server has a file but > rather whether the file is on the network. Napster was a p2p app > because, although there was a central server for search, file transfers > were between client nodes, not from server to client. > > -Jonathan -- Justin Chapweske, Onion Networks http://onionnetworks.com/ From blanu at bozonics.com Fri Nov 7 01:12:31 2003 From: blanu at bozonics.com (Brandon Wiley) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) In-Reply-To: <20031106142658.J68332-100000@localhost> Message-ID: > > The Internet is not p2p in spirit. All of the traffic is routed > > by large centralized hubs. 802.11b mesh networks are p2p. > > The Internet is split into a large number of Autonomous Systems, ASs. > Some of these are huge, some are tiny. Routing is enabled by the exchange > of routing information using a protocol called BGP4 ("Border Gateway > Protocol"). The parties involved are called peers. The process is called > peering. Yes of course I know how the Internet works. This is the same argument that people use for Usenet being P2P, that the Usenet servers are equal and therefore peers and therefore P2P. The reason this does not fit into the philosophy of the social movement known as P2P is because in P2P peers map more or less onto people. P2P runs on consumer-grade computers in people's homes and office desks. BGP is very much a big corporation technology and not an individual person technology. From blanu at bozonics.com Fri Nov 7 01:17:20 2003 From: blanu at bozonics.com (Brandon Wiley) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) In-Reply-To: <3FAA9A6C.7010400@chapweske.com> Message-ID: > And now I will go back to not caring about how to define P2P, since > nowadays it is too synonymous with "piracy". I generally use the term "decentralization" now. From wesley at felter.org Fri Nov 7 06:20:25 2003 From: wesley at felter.org (Wes Felter) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) 
In-Reply-To: <20031106030246.71571.qmail@web40016.mail.yahoo.com> References: <20031106030246.71571.qmail@web40016.mail.yahoo.com> Message-ID: <794673EA-10EA-11D8-AD0F-000393A581BE@felter.org> On Nov 5, 2003, at 9:02 PM, Daniel Freeman wrote: > I need to know what's wrong with > good old fashioned client-server? What advantages > Peer to Peer brings? Stepping aside of the "what is P2P?" morass... My primary interest in P2P is to reduce cost. There are a variety of services that people won't pay for (for various reasons), so building such services using a client-server architecture tends to create a money pit. I'd prefer to just build it P2P so that it's free to run and free to use. A second benefit is that P2P overcomes a collective action problem with running servers. If a group of people want to run Lotus Notes they have to convince/pay someone to maintain the server, but it they use Groove then each person just maintains his own copy of Groove. A third benefit is that P2P ought to scale down as easily as it scales up. Setting up a server for a small group is often not worth the effort, but auto-configuring P2P apps should work fine for small groups. You are right that P2P vs. client-server depends on your worldview. I'm not a Microsoft fan, but I do like self-sufficient "fat" PCs instead of thin clients. I'm a big non-believer when it comes to agents, though, since I don't see any advantage to them in today's world. Wes Felter - wesley@felter.org - http://felter.org/wesley/ From tav at espians.com Fri Nov 7 14:23:46 2003 From: tav at espians.com (tav) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) In-Reply-To: References: Message-ID: <3FABAAF2.3030603@espians.com> ~~~~~ Bryce Wilcox-O'Hearn wrote: zooko> Ironically, FOAF is what I meant when I said "P2P" zooko> for the last couple of years. When I spoke at the zooko> First O'Reilly P2P Conference in early 2001, what zooko> I talked about was FOAF. ah, so you meant P2P as in People-2-People! -- best regards, tav tav@espians.com From jdd at dixons.org Fri Nov 7 14:33:57 2003 From: jdd at dixons.org (Jim Dixon) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) In-Reply-To: Message-ID: <20031107135936.X68332-100000@localhost> On Thu, 6 Nov 2003, Brandon Wiley wrote: > > > The Internet is not p2p in spirit. All of the traffic is routed > > > by large centralized hubs. 802.11b mesh networks are p2p. > > > > The Internet is split into a large number of Autonomous Systems, ASs. > > Some of these are huge, some are tiny. Routing is enabled by the exchange > > of routing information using a protocol called BGP4 ("Border Gateway > > Protocol"). The parties involved are called peers. The process is called > > peering. > > Yes of course I know how the Internet works. This is the same arguement > that people use for Usenet being P2P, that the Usenet servers are equal > and therefore peers and therefore P2P. As indeed they are. However, you should understand that most networks operate two kinds of Usenet machines. Some specialize in transporting the rivers of news articles; these are organized into a global p2p network. The ISP I was last with was not unusual in having a cluster of p2p machines for redundancy. Each machine had a link to all of the others in the cluster and also peered with news machines on other networks. None of these machines allowed client connections: none acted as servers. 
Then we had other machines that acted as servers, each taking news feeds from two of the backbone machines and then accepting connections from clients. Users saw these machines and drew their impression of how things work from those connections - but those of us who ran the news machines were keenly aware of how much of a p2p operation this is. We had a large variety of machines connected to the Internet backbone. Most of these ran idle most of the time -- typically at something like 5% CPU. The news machines ran flat out almost all of the time: often over 90% CPU, often running their disks near maximum transfer rates, often running out of memory. The typical news machine lasted roughly 6-12 months, and then was replaced because its drives were broken and/or it didn't have sufficient memory expansion capacity and/or the CPU was just too slow to pump the news through: it couldn't keep up with its peers. > The reason this does not fit into the philosophy of the social movement > known as P2P is because in P2P peers map more or less map onto people. P2P > runs on consumer-grade computers in people's homes and office desks. > BGP is very much a big corporation technology and not an individual person > technology. If you wish. However, your original contention was not philosophical, but technical: that the Internet is not p2p, but the 802.11b mesh networks are: > > > The Internet is not p2p in spirit. All of the traffic is routed > > > by large centralized hubs. 802.11b mesh networks are p2p. The Internet backbone and the system for distributing Usenet news are both most certainly p2p networks. Anyone designing new p2p technologies would be well-advised to study the problems that the Internet has encountered and the solutions devised. Internet traffic has grown exponentially, without interruption, despite running up against one technical barrier after another, for more than 20 years. None of the recent "P2P" phenomena has been managed any similar success; most have collapsed at the first hurdle. -- Jim Dixon jdd@dixons.org tel +44 117 982 0786 mobile +44 797 373 7881 From rah at shipwright.com Fri Nov 7 15:36:34 2003 From: rah at shipwright.com (R. A. Hettinga) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) In-Reply-To: References: Message-ID: At 7:17 PM -0600 11/6/03, Brandon Wiley wrote: >I generally use the term "decentralization" now. None dare call it "geodesic", of course... ;-) Cheers, RAH -- ----------------- R. A. Hettinga The Internet Bearer Underwriting Corporation 44 Farquhar Street, Boston, MA 02131 USA "... however it may deserve respect for its usefulness and antiquity, [predicting the end of the world] has not been found agreeable to experience." -- Edward Gibbon, 'Decline and Fall of the Roman Empire' From seth.johnson at realmeasures.dyndns.org Fri Nov 7 23:31:31 2003 From: seth.johnson at realmeasures.dyndns.org (Seth Johnson) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Jack the Ripper Spectrum Reallocation Message-ID: (Link from Boing Boing blog) > http://arnoldkling.com/~arnoldsk/aimst5/valenti.html An Open Letter to Jack Valenti by Arnold Kling “The FCC scored a big victory for consumers and the preservation of high value over-the-air free broadcasting with its decision on the Broadcast Flag. This puts digital TV on the same level playing field as cable and satellite delivery. All the way around, the consumer wins, and free TV stays alive.” --Jack Valenti, Motion Picture Association of America Dear Mr. 
Valenti, I am a consumer, and I did not win when the FCC voted 5-0 to require personal computers and other devices that might store video files to comply with a technical specification designed to protect copyright of high-definition television (HDTV). In this letter, I am going to do two things. First, I am going to explain why I am mad. Then, I am going to explain how I plan to get even. The High Cost of Free TV I am one of the small minority of Americans that still gets free TV. I do not subscribe to cable or satellite TV. Accordingly, I am one of the "human shields" that you and other lobbyists are using to justify imposing a hardware tax on the entire nation. I should hasten to add that I make no claim to be a cable-TV "have- not." Instead, I am a cable-TV "do-not." My wife and I have determined that there is nothing on cable TV that is so compelling that it justifies a subscription. Cost is not the issue. For our family's sake, we prefer not to have cable TV. The Broadcast Flag technology is supposed to benefit me, by encouraging broadasters to send HDTV signals over "free" TV. I am as excited about this as I am about Cable TV, which is to say--not at all. I have no desire to encourage broadcasters to send HDTV signals. I do not think that my fellow cable TV have-nots and do-nots care about this issue, either. I'll bet that not one of us has ever written to our Congressperson expressing our need to watch HDTV sent over the airwaves. Please note that it is inaccurate to refer to broadcast HDTV as "free TV," particularly in the wake of the broadcast flag regulation. In fact, HDTV is going to be very expensive for the economy as a whole, as millions of devices will now have to be made to conform to the Broadcast Flag standard. Furthermore, I predict that individuals will spend time and resources trying to "hack" the Broadcast Flag, which will lead to modifications of the technology, which will layer on more costs to the economy. In short, you are claiming to represent consumers like me when you do not. You are claiming to preserve "free" TV when in fact you are increasing the cost to consumers--not just those of us who still view broadcast television, but also the vast majority of consumers who subscribe to pay-TV services as well as consumers who might not use television at all but wish to buy computers or other devices with electronic file-storage capability. Getting Even I have no plans to try to try to hack the broadcast flag. I do not care enough about your precious content to watch it, much less copy it. I will get back at you another way. Another subsidy that "free TV" enjoys is the allocation of spectrum. I hereby declare that subsidy null and void. I am announcing the Jack Valenti Spectrum Re-allocation. As of November 4, 2003, the spectrum that was allocated for HDTV is now allocated for spread-spectrum wireless. I will not buy any device for the purpose of receiving HDTV. Instead, I will gladly purchase devices that will route packets via the Internet Protocol over that spectrum. In the neighborhood of my house, IP packets will take precedence over HDTV signals. I recommend that other consumers adopt the Jack Valenti Spectrum Re- allocation. I am talking about massive civil disobedience of the FCC. Remember, anyone who receives television over cable or satellite will give up nothing by assigning higher priority to IP packets. 
For anyone who misses broadcast television, it would be better to give them taxpayer dollars to subscribe to satellite TV than for consumers to pay the Broadcast Flag hardware tax. By re-allocating spectrum from HDTV to wireless IP, we can kill two legacy birds with one stone. We can hasten the demise of the phone companies--because with a wireless "last mile" the wireless Internet can replace traditional land lines and cell phones; and we can show Jack Valenti, the movie industry, and the television industry what it really means to "score a big victory for consumers." To comment on this essay, go to the thread at Broadcast Flag This (http://www.corante.com/bottomline/archives/000589.html) From Paul.Harrison at infotech.monash.edu.au Sun Nov 9 02:42:57 2003 From: Paul.Harrison at infotech.monash.edu.au (Paul Harrison) Date: Sat Dec 9 22:12:23 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) In-Reply-To: <20031106030246.71571.qmail@web40016.mail.yahoo.com> Message-ID: On Thu, 6 Nov 2003, [iso-8859-1] Daniel Freeman wrote: ... > I still don't think the consumer Internet Network is > suitable. Putting aside issues of coffee spillages, > there is also the overhead of running the server, also > consumer Asymetrical broadband is geared for > downloads, not serving. > For most people, the supercomputer that sits on their desktop does very little all day, and their hard disk contains several gigabytes of unused storage. That's a lot of resources that could be better used. A single cable-modem may not have a fast upload speed, but the collective torrent of packets they can produce is rather large. There are obviously reliability problems with this... but the internet, for example, is also built on unreliable services. Ethernet is designed to trash packets arbitrarily under high load, as is the IP layer. Many highly unreliable machines may be more reliable in aggregate than a single highly reliable machine. > Years ago, I worked on Neural Networks - MLP's. My > managers would get really excited by the idea of new > technologies, without bothering to understand its > mechanics or limitations. It was percieved as an all > powerful panacea - it wasn't. But it probably was a > 'sexy' way to get more research funding ;). Is this > now the case with Peer to Peer research? There's a lot of wasted resources that "P2P" could potentially use, to provide more services and to increase reliability, which i think is the main cause of excitement. I'm not sure it's going to get a whole lot of research funding, post napster, but that doesn't really matter with the infinite monkeys connected to the internet. Money doesn't really seem to be involved in P2P, it seems to often involve things that look more like barter. cheers, Paul Harrison Email: pfh@logarithmic.net Current cost to save one life: approx AU$300 (US$200) From jdl at vinecorp.com Sun Nov 9 05:11:42 2003 From: jdl at vinecorp.com (jdl@vinecorp.com) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] [Silicon Valley] P2Punks next Tuesday, Nov 11 7:30pm onward at DSRC Message-ID: <20031109001142.B21437@lynx.phpwebhosting.com> It's that time again... --- Where: Dana Street Roasting Company 744 Dana St., Mountain View Phone: (650) 390-9638 1/2 block off Castro St. When: 7:30pm onward From sparenet at yahoo.com Sun Nov 9 15:09:35 2003 From: sparenet at yahoo.com (Byron Higgins) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] Re: Hi! (Why Peer to Peer?) 
In-Reply-To: Message-ID: <20031109150935.14927.qmail@web40711.mail.yahoo.com> The "mystigue" of the internet is long past... P2P is to your average user that little "naughty" thrill left in the internet . On a daily basis I get well over 200 emails inviting me to yet another Gold or Platinum card, to increase my bustsize or to extend my penis, or to visit a farm utopia where nubile teens fornicate with every domesticated ( and some undomesticated)animal known to man....( or woman?????) .... I wonder....... You say that the average business or home desktop/laptop is under utilised and I totally agree and P2P are perhaps "right" in utilising that waste. BUT......really explaining to your average Joe what he's signing up for.. that his computer/Network will help run an advertisment campaign for a company halfway across the globe is a little on the scary side. Your average user might infact be violently anti-smoking and the P2P's ad campaign might be for Cancer Lights or any other cigarette brand, so his machine/network is used, unbeknown to him, to propogate a product he is opposed to. P2P the biggest problem other than bandwidth is knowing whats on the other end..... Allowing some stranger into your office and allowing him to shuffle your papers on your desk is one thing..... to allow him to read your private confidential notes and files is another...... You say that to some the thought of gaining either new software or hardware for your technicians/IT's is comparible to an orgasm, I say that to place a "new" card in one's machine or to power up a 3.0GHz CPU for the first time is very much like that for us teckies....the thrill of the unknown is always exciting.... yet..... YET I must disagree. every technician I have ever spoken to (irrispective of age) has taken that card or CPU to its limits or beyond. For intstance I have recently upgraded to an Athlon64.. I cant wait for software to run it to its limits. Quiet honestly Linux is all showing us the way.......BUT..... Again BUT..... we now have the HARDWARE ..... but almost no recognised Software Manufacturer has caught up .... Microsoft, Sysoft and every gamer is waiting for Intel to launch their 64 before announcing any software for this new CPU....... No matter what its all the same. I say use that unutilised CPU and diskspace to a purpose other than advertising......somthing beneficial to humanity..... Byron Higgins sparenet@yahoo.com --------------------------------- Do you Yahoo!? Protect your identity with Yahoo! Mail AddressGuard -------------- next part -------------- An HTML attachment was scrubbed... URL: http://zgp.org/pipermail/p2p-hackers/attachments/20031109/644d3ea8/attachment.html From bram at gawth.com Mon Nov 10 01:33:25 2003 From: bram at gawth.com (Bram Cohen) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] CodeCon 2004 call for papers Message-ID: CodeCon 3.0 February 20-22, 2004 San Francisco CA, USA www.codecon.org Call For Papers CodeCon is the premier showcase of active hacker projects. It is an excellent opportunity for developers to demonstrate their work and keep abreast of what's going on in their community. All presentations must include working demonstrations, ideally open source. Presenters must be one of the active developers of the code in question. We emphasize that demonstrations be of *working* code. 
CodeCon strongly encourages presenters from non-commercial and academic backgrounds to attend for the purposes of collaboration and the sharing of knowledge by providing free registration to workshop presenters and discounted registration to full-time students. We hereby solicit papers and demonstrations. * Papers and proposals due: December 15, 2003 * Authors notified: January 1, 2004 Possible topics include, but are by no means restricted to: * community-based web sites - forums, weblogs, personals * development tools - languages, debuggers, version control * file sharing systems - swarming distribution, distributed search * security products - mail encryption, intrusion detection, firewalls Presentations will be a 45 minutes long, with 15 minutes allocated for Q&A. Overruns will be truncated. Submission details: Submissions are being accepted immediately. Acceptance dates are November 1, and December 15. After the first acceptance date, submissions will be either accepted, rejected, or deferred to the second acceptance date. The conference language is English. Ideally, demonstrations should be usable by attendees with 802.11b connected devices either via a web interface, or locally on Windows, UNIX-like, or MacOS platforms. Cross-platform applications are most desirable. Our venue will be 21+. If you have a specific day on which you would prefer to present, please advise us. To submit, send mail to submissions@codecon.org including the following information: * Project name * url of project home page * tagline - one sentence or less summing up what the project does * names of presenter(s) and urls of their home pages, if they have any * one-paragraph bios of presenters (optional) * project history, no more than a few sentences * what will be done in the project demo * major achievement(s) so far * claim(s) to fame, if any * future plans Program Chair: Bram Cohen General Chair: Len Sassaman Program Committee: * Bram Cohen * Len Sassaman * Jonathan Moore * Jered Floyd * Brandon Wiley Sponsorship: If your organization is interested in sponsoring CodeCon, we would love to hear from you. In particular, we are looking for sponsors for social meals and parties on any of the three days of the conference, as well as sponsors of the conference as a whole, prizes or awards for quality presentations, scholarships for qualified applicants, and assistance with transportation or accommodation for presenters with limited resources. If you might be interested in sponsoring any of these aspects, please contact the conference organizers at codecon-admin@codecon.org. Press policy: CodeCon strives to be a conference for developers, with strong audience participation. As such, we need to limit the number of complimentary passes for non-developer attendees. Press passes are limited to one pass per publication, and must be approved prior to the registration deadline (to be announced later). If you are a member of the press, and interested in covering CodeCon, please contact us early by sending email to press@codecon.org. Members of the press who do not receive press-passes are welcome to participate as regular conference attendees. Questions: If you have questions about CodeCon, or would like to contact the organizers, please mail codecon-admin@codecon.org. Please note this address is only for questions and administrative requests, and not for workshop presentation submissions. 
-Bram Cohen "Markets can remain irrational longer than you can remain solvent" -- John Maynard Keynes From sam at neurogrid.com Mon Nov 10 04:23:52 2003 From: sam at neurogrid.com (Sam Joseph) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] P2P Simulators Review Paper Message-ID: <3FAF12D8.8080906@neurogrid.com> Hi All, So I've written a review paper summarising some of the different p2p simulators available, and I've also tried to give some more details on how the NeuroGrid simulator works. It's been published in the p2pjournal, which you can get at the following link: http://p2pjournal.com/issues/November03.pdf CHEERS> SAM From TSchlabach at gmx.net Mon Nov 10 10:34:42 2003 From: TSchlabach at gmx.net (Torsten Schlabach) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] Skype References: <1515.1068322821@www49.gmx.net> Message-ID: <6059.1068460482@www30.gmx.net> Hi everybody! I am sure you have noticed Skype (www.skype.com). Bryce Wilcox-O'Hearn made me aware of this article talking about Skype competitors: by the way, here is an article about skype competitors http://www.voxilla.com/modules.php?op=modload&name=News&file=article&sid=18&mode=thread&order=0&thold=0 This article says that there is nothing new about Skype, that it is just plain VoIP. I thought what's new about it was: - It's P2P, i.e. no need for any central servers. I think this is what doomed the use of Microsoft NetMeeting and the like, as the public servers were always overcrowded and / or abused in different ways. - It works fine through NAT routers. (I think it does not necessarily work well with corporate firewalls, does it?) - They claim at least that they have done a lot to reach the maximum possible sound quality, with proprietary codecs that are optimized for latency and low bandwidth. I can confirm that they get superior sound out of very high latency lines; I am not sure I ever caught up with the others. Aren't these good arguments? Despite me hating the idea that one commercial company would own a de-facto standard for P2P telephony? Torsten From aloeser at cs.tu-berlin.de Mon Nov 10 16:58:37 2003 From: aloeser at cs.tu-berlin.de (Alexander =?iso-8859-1?Q?L=F6ser?=) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] Conjunctive Queries References: <1515.1068322821@www49.gmx.net> <6059.1068460482@www30.gmx.net> Message-ID: <3FAFC3BC.F00B9E@cs.tu-berlin.de> Hi, traditional P2P systems work mostly with DHTs allowing a simple (key, value) search. However, in some applications more complex queries are used, such as conjunctive queries: Select all BOOKS Where Author="Broekstra" AND Language="English" Consider now ten peers 1..10. 1 Author="Broekstra" 2 Author="Broekstra", Language="English" 3 Author="abc" 4 Author="Broekstra", Language="English" 5 Author="abc" ... 10 Author="sdfsdf" A simple strategy would be to query the first predicate and then issue a query for the second predicate. So Peers 1, 2 and 4 provide books with the author name "Broekstra". They are queried again for the predicate Language="English"; Peers 2 and 4 can satisfy the query. Does anybody know P2P systems allowing such conjunctive queries? What is their query strategy? Alex -- ___________________________________________________________ M.Sc., Dipl. Wi.-Inf. 
Alexander L?ser Technische Universitaet Berlin Fakultaet IV - CIS bmb+f-Projekt: "New Economy, Neue Medien in der Bildung" hp: http://cis.cs.tu-berlin.de/~aloeser/ office: +49- 30-314-25551 fax : +49- 30-314-21601 ___________________________________________________________ From sam at neurogrid.com Mon Nov 10 23:20:16 2003 From: sam at neurogrid.com (Sam Joseph) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] Conjunctive Queries In-Reply-To: <3FAFC3BC.F00B9E@cs.tu-berlin.de> References: <1515.1068322821@www49.gmx.net> <6059.1068460482@www30.gmx.net> <3FAFC3BC.F00B9E@cs.tu-berlin.de> Message-ID: <3FB01D30.4090501@neurogrid.com> Hi Alex, Semplesh had such an approach - see a summary in: http://www.neurogrid.net/php/publications.php Joseph S. & Hoshiai T. (2003a) "/Decentralized Meta-Data Strategies: Effective Peer-to-Peer Search./" (English) IEICE Transactions on Communications Vol.E86-B No.6 pp.1740-1753 Semplesh is covered in section 3.13 and there are various other approaches described to more complex queries in p2p systems CHEERS> SAM Alexander L?ser wrote: >Hi, >traditional P2P Systems work mostly with DHT allowing a simple (key, value) search. However, in some >applications more complex queries are used, such as conjunctive queries: > >Select all BOOKS >Where Author="Broekstra" AND Language="English" > >Consider now ten peers 1..10. >1 Author="Broekstra" >2 Author="Broekstra", Language="English" >3 Author="abc" >4 Author="Broekstra", Language="English" >5 Author="abc" >... >10 3 Author="sdfsdf" > >A simple strategy would be to query the first predicate and then issue a query for the secound predicate. >So Peer 1, 2 and 4 provide Books with the author Name="Broekstra". The are queried again for the predicate >"Language="english" Peer 2 and 4 can satisfy the query. > >Does anybody knows P2P Systems allowing such conjunctive queries? What is their query strategy? > >Alex >-- >___________________________________________________________ > > M.Sc., Dipl. Wi.-Inf. Alexander L?ser > Technische Universitaet Berlin Fakultaet IV - CIS > bmb+f-Projekt: "New Economy, Neue Medien in der Bildung" > hp: http://cis.cs.tu-berlin.de/~aloeser/ > office: +49- 30-314-25551 > fax : +49- 30-314-21601 >___________________________________________________________ > > >_______________________________________________ >p2p-hackers mailing list >p2p-hackers@zgp.org >http://zgp.org/mailman/listinfo/p2p-hackers >_______________________________________________ >Here is a web page listing P2P Conferences: >http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > > > From brutfood at yahoo.com Tue Nov 11 00:52:24 2003 From: brutfood at yahoo.com (=?iso-8859-1?q?Daniel=20Freeman?=) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] RE: Why peer to peer Message-ID: <20031111005224.66492.qmail@web40009.mail.yahoo.com> I enjoyed reading your thought provoking replies to my Why Peer to Peer? question, as they ranged from the mechanics of networking to the admirable social philosophies of the p2p movement. But the Internet it is not a network of peers. There are producers and consumers. Few -> Many. It is likely that members of this forum, like myself are in the first category. Content creators. So when they network with each other, they do so as peers. My main interest is making the Internet accessible to people. 
And by people, I mean non-technical, normal people ;) My philosophy is that they could be empowered with the right 'tools', then this would enable them to WRITE TO the Internet, instead of just being information consumers. I've been more focussed on the applications than the network. These 'right tools' would incorporate intuitive people-oriented interfaces (not like Wiki!). This was my motivation in writing my prototype Internet Operating System. This includes experimental applications intended to allow a naive user to create web pages, galleries, multimedia, drawn and 3D content. My vision was the sort of system where parents could, for example, contribute to a pool multimedia learning resources for their children. Or other communities could be constructed that utilse rich media. Unfortunately, I've had to put my high ideals and aspirations on the back burner for now ;) I haven't really been able to find 'peers' on the Internet who share these goals, I'll see how my forum goes, but I only managed to recruit one peer to peer person from here. For now, I've turned my attention and the IOS technology to business tools now (I'm not an achedemic, so I have to be a filthy capitalist sometimes). i2genius.com/forum http://personals.yahoo.com.au - Yahoo! Personals New people, new possibilities. FREE for a limited time. From aloeser at cs.tu-berlin.de Tue Nov 11 14:14:15 2003 From: aloeser at cs.tu-berlin.de (Alexander =?iso-8859-1?Q?L=F6ser?=) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] XML in DHT References: <1515.1068322821@www49.gmx.net> <6059.1068460482@www30.gmx.net> <3FAFC3BC.F00B9E@cs.tu-berlin.de> <3FB01D30.4090501@neurogrid.com> Message-ID: <3FB0EEB7.FD95FEC8@cs.tu-berlin.de> Hi, does anybody know existing P2P systems storing XML paths in DHT as values? E.g. /Top/Computers/Internet/Searching/Search_Engines/ What do they store exactly in the DHT? The whole pfad /Top/Computers/Internet/Searching/Search_Engines/ or just the leave and a link to the parent? (0) Top/ (1) (1) Computers/ (2) (2) Internet/ (3) (3) Searching/ (4) (4) Search_Engines/ (5) Alex Sam Joseph wrote: > Hi Alex, > > Semplesh had such an approach - see a summary in: > > http://www.neurogrid.net/php/publications.php > > Joseph S. & Hoshiai T. (2003a) > "/Decentralized Meta-Data Strategies: Effective Peer-to-Peer Search./" > (English) > IEICE Transactions on Communications Vol.E86-B No.6 pp.1740-1753 > > Semplesh is covered in section 3.13 and there are various other > approaches described to more complex queries in p2p systems > > CHEERS> SAM > > Alexander L?ser wrote: > > >Hi, > >traditional P2P Systems work mostly with DHT allowing a simple (key, value) search. However, in some > >applications more complex queries are used, such as conjunctive queries: > > > >Select all BOOKS > >Where Author="Broekstra" AND Language="English" > > > >Consider now ten peers 1..10. > >1 Author="Broekstra" > >2 Author="Broekstra", Language="English" > >3 Author="abc" > >4 Author="Broekstra", Language="English" > >5 Author="abc" > >... > >10 3 Author="sdfsdf" > > > >A simple strategy would be to query the first predicate and then issue a query for the secound predicate. > >So Peer 1, 2 and 4 provide Books with the author Name="Broekstra". The are queried again for the predicate > >"Language="english" Peer 2 and 4 can satisfy the query. > > > >Does anybody knows P2P Systems allowing such conjunctive queries? What is their query strategy? 
> > > >Alex > >-- > >___________________________________________________________ > > > > M.Sc., Dipl. Wi.-Inf. Alexander L?ser > > Technische Universitaet Berlin Fakultaet IV - CIS > > bmb+f-Projekt: "New Economy, Neue Medien in der Bildung" > > hp: http://cis.cs.tu-berlin.de/~aloeser/ > > office: +49- 30-314-25551 > > fax : +49- 30-314-21601 > >___________________________________________________________ > > > > > >_______________________________________________ > >p2p-hackers mailing list > >p2p-hackers@zgp.org > >http://zgp.org/mailman/listinfo/p2p-hackers > >_______________________________________________ > >Here is a web page listing P2P Conferences: > >http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > > > > > > > > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences -- ___________________________________________________________ M.Sc., Dipl. Wi.-Inf. Alexander L?ser Technische Universitaet Berlin Fakultaet IV - CIS bmb+f-Projekt: "New Economy, Neue Medien in der Bildung" hp: http://cis.cs.tu-berlin.de/~aloeser/ office: +49- 30-314-25551 fax : +49- 30-314-21601 ___________________________________________________________ From anwitaman at hotmail.com Tue Nov 11 14:16:32 2003 From: anwitaman at hotmail.com (Anwitaman Datta) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] Meta-indexing/search in DHTs Message-ID: Apart from the partial keyword search that is typically done in DHTs by hashing fragments of the word (like groups of 3 letters), another trouble with DHTs is to index and meta-information. We (www.p-grid.org) are trying to thus design a query adaptive mechanism to index meta-data in what we call a partial DHT. I guess many of us have thought on these and other lines. Here is a first paper, which analyses the benifits of such a PDHT strategy. http://lsirpeople.epfl.ch/adatta/TR-IC-2003-69.pdf Opinions/comments/suggestions about our approach are welcome. Cheers, A. _________________________________________________________________ Contact brides & grooms FREE! Only on www.shaadi.com. http://www.shaadi.com/ptnr.php?ptnr=hmltag Register now! From behnel_ml at gkec.tu-darmstadt.de Tue Nov 11 14:30:53 2003 From: behnel_ml at gkec.tu-darmstadt.de (Stefan Behnel) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] XML in DHT In-Reply-To: <3FB0EEB7.FD95FEC8@cs.tu-berlin.de> References: <1515.1068322821@www49.gmx.net> <6059.1068460482@www30.gmx.net> <3FAFC3BC.F00B9E@cs.tu-berlin.de> <3FB01D30.4090501@neurogrid.com> <3FB0EEB7.FD95FEC8@cs.tu-berlin.de> Message-ID: <3FB0F29D.4060701@gkec.tu-darmstadt.de> Alexander L?ser schrieb: > does anybody know existing P2P systems storing XML paths in DHT as values? > E.g. /Top/Computers/Internet/Searching/Search_Engines/ I wouldn't know any such implementation in particular, but since XML is basically a tree structure, you may want to take a look at the P-Grid-Project. http://www.p-grid.org/ Hope it helps... Stefan From jdl at vinecorp.com Tue Nov 11 20:12:23 2003 From: jdl at vinecorp.com (jdl@vinecorp.com) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] [Silicon Valley] P2Punks TONIGHT Tuesday, Nov 11 7:30pm onward at DSRC Message-ID: <20031111151223.A29126@lynx.phpwebhosting.com> See you TONIGHT... 
James --- Where: Dana Street Roasting Company 744 Dana St., Mountain View Phone: (650) 390-9638 1/2 block off Castro St. When: 7:30pm onward From jdl at vinecorp.com Tue Nov 11 23:40:03 2003 From: jdl at vinecorp.com (jdl@vinecorp.com) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] [Silicon Valley] P2Punks meeting MOVED down the street - TONIGHT 7:30pm Message-ID: <20031111184003.D29126@lynx.phpwebhosting.com> Thanks to JimY for pointing out that Dana St. is closing early tonight. Tonight's location is just 1 block closer to Central- Red Rock Coffee Company Coffeehouse/Teahouse (650) 967-4473 201 Castro St. (corner of Castro St. and Villa St.) MAP: http://www.mountainviewca.net/restaurants/redrock.html Red Rock is keeping normal working hours tonight. See you there/then. From lujianming at software.ict.ac.cn Wed Nov 12 01:49:43 2003 From: lujianming at software.ict.ac.cn (Lu Jianming) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] p2p full text searching software Message-ID: <20031112015148.589943FD25@capsicum.zgp.org> Hi all, Does anybody know which architectures are better for doing the full-text searching job in a totally distributed p2p file-sharing system? A DHT-based architecture seems to work better. Is that so? I am now writing a p2p full-text searching program, but I don't really know if there is a better approach than mine. I hope others can bring me some good ideas. Thanks. lujianming@software.ict.ac.cn 2003-11-12 -------------- next part -------------- A non-text attachment was scrubbed... Name: face-3.gif Type: image/gif Size: 842 bytes Desc: not available Url : http://zgp.org/pipermail/p2p-hackers/attachments/20031112/a2601b7a/face-3.gif From joaquin.keller at rd.francetelecom.com Wed Nov 12 08:30:07 2003 From: joaquin.keller at rd.francetelecom.com (KELLER Joaquin FTRD/DMI/ISS) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] p2p full text searching software Message-ID: Hi, IMHO, DHTs are not the right solution, because: 1. DHTs are based on the assumption that the probability distribution of keys is uniform, when actually for words it follows a Zipf law. 2. If a document has one thousand different words, inserting this document will cost one thousand messages. Far too much. 3. In "full text searching", exhaustivity is not the main point: relevance is probably more important. Google has made that point: how do you achieve a kind of page ranking with DHTs? But, may be, the DHT solution you had in mind was different than 1 word = 1 hashkey ? -- Joaquin KELLER -----Original Message----- From: lujianming@software.ict.ac.cn [mailto:lujianming@software.ict.ac.cn] Sent: Wednesday, 12 November 2003 02:50 To: p2p-hackers@zgp.org Subject: [p2p-hackers] p2p full text searching software Hi all, Does anybody know which architectures are better for doing the full-text searching job in a totally distributed p2p file-sharing system? A DHT-based architecture seems to work better. Is that so? I am now writing a p2p full-text searching program, but I don't really know if there is a better approach than mine. I hope others can bring me some good ideas. Thanks. lujianming@software.ict.ac.cn 2003-11-12 From Wolfgang.Mueller2 at uni-bayreuth.de Wed Nov 12 09:06:48 2003 From: Wolfgang.Mueller2 at uni-bayreuth.de (Wolfgang Müller) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] p2p full text searching software In-Reply-To: References: Message-ID: <200311121006.48301.wolfgang.mueller2@uni-bayreuth.de> > 1.
DHTs are based on the assumption that the distribution probability of > keys is uniform, when actually for words it follows a Zipf law. 2. If a > document have one thousand different words, inserting this document will > cost one thousand messages. Far too much. 3. In "full text searching", > exhaustivity is not the main point: relevance is probably more important. Hi, The paper Jun Gao and Peter Steenkiste, Rendezvous Points-Based Scalable Content Discovery with Load Balancing. In Proceedings of the Fourth International Workshop on Networked Group Communication (NGC'02), pages 71-78, Boston, MA, Oct. 2002. might be of interest to you. From a quick scan it seems to be a DHT with load balancing. They use that system in Jun Gao, George Tzanetakis, and Peter Steenkiste, Content-Based Retrieval of Music in Scalable Peer-to-Peer Networks. In Proceedings of the 2003 IEEE International Conference on Multimedia & Expo(ICME'03), pages 309-312, volume I, Baltimore, MD, July 2003. for generating distributed inverted files. http://citeseer.nj.nec.com/tang02psearch.html also create distributed inverted files. They use _very_ aggressive pruning to make things tractable. On the Feasibility of Peer-to-Peer Web Indexing and Search, Jinyang Li, Boon Thau Loo, Joe Hellerstein, Frans Kaashoek, David R. Karger, Robert Morris http://iptps03.cs.berkeley.edu/final-papers/search_feasibility.ps is a _very_ interesting paper on the feasibility of distributed inverted file based approaches. Text-Based Content Search and Retrieval in ad hoc P2P Communities (2002) Francisco Matias Cuenca-Acuna, Thu D. Nguyen Department of Computer Science, Rutgers University http://citeseer.nj.nec.com/cuenca-acuna02textbased.html goes a radically different way for indexing text. Some people do not like it, because that system involves distribution and storage of large numbers of peer data summaries and does not scale to internet dimensions, currently. If anyone has some more references in that area, I would be glad to know. Cheers, Wolfgang From rrrw at neofonie.de Wed Nov 12 21:31:05 2003 From: rrrw at neofonie.de (Ronald Wertlen) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] p2p full text searching software Message-ID: <3FB2A699.8080602@neofonie.de> > Hi all, Does anybody know which archietectures are better when > doing the full text searching job in totally distributed p2p file > shared system ? DHT based archiecture seems to work better. Is > that so? I am now just composing a p2p full text searching > software .But I don't realy know if there is a better appoach than > mine.Hope others to bring me some good idea.3x. > Hi, there are distinct advantages to flooding based approaches (as opposed to DHT) when dealing with full-text[1]. Not only do DHT's have problems with the number of hashes in full-text, they also have to go to great lengths to perform simple boolean logic on terms, and miss operators like "", near, range, etc. We are in the process of creating a p2p search system on the basis of our high performance full-text retrieval software which includes several ranking possibilities including a pagerank derivative. Super-peers route queries using heuristics (these are somewhat more powerful than simple pruning of terms see Appendix B in [2], this stuf is adaptive along the lines of Neurogrid[3]) and edge peers answer them. We are also looking at Distributing our "Pagerank" computation ([4] is a possibility). At the moment we are concentrating on basics, getting the search, communications and UI all to fit. 
After that we have number of possible avenues to explore depending on funding. Regards, Ron [1] Brian Cooper, Hector Garcia-Molina. Studying search networks with SIL [2] Ronald Wertlen. The S2S JXTA Application. JXTA Workshop Potenzial, Konzepte, Anwendungen. November 2003. http://s2s.neofonie.de/DFNS2S_031113_JXTAWorkshopBerlin.pdf [3] http://www.neurogrid.net/ [4] Karthikeyan Sankaralingam,Simha Sethumadhavan,James C. Browne. Distributed Pagerank for P2P Systems -- ............................................... Ronald Wertlen neofonie GmbH Projektleitung Robert-Koch-Platz 4 D-10115 Berlin From paul at soniq.net Thu Nov 13 23:56:00 2003 From: paul at soniq.net (Paul Boehm) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] slightly ot: eternity service Message-ID: <20031113235600.GA13172@soniq.net> -----BEGIN PGP SIGNED MESSAGE----- ######################################################## # # This is a proof of posting certificate from # stamper.itconsult.co.uk certifying that a user # claiming to be:- # paul@soniq.net # requested that this message be sent to:- # paul@soniq.net # p2p-hackers@zgp.org # # This certificate was issued at 00:10 (GMT) # on Friday 14 November 2003 with reference 0142227 # # CAUTION: while the message may well be from the sender # indicated in the "From:" header, the sender # has NOT been authenticated by this service # # For information about the Stamper service see # http://www.itconsult.co.uk/stamper.htm # ######################################################## hi, do any of you know a reliable eternity service? (eternal logfile, commercial?, peer2peer) i want to have the following sha1sum timestamped: 1fdfeaf47b5a074f07eba38dfd5dee03382280ee paul -----BEGIN PGP SIGNATURE----- Version: 2.6.3i Charset: noconv Comment: Stamper Reference Id: 0142227 iQEVAgUBP7Qdc4GVnbVwth+BAQE2xwf+KM9DSMnbQV0rXP/yo69Gi6ZRKCPb7rUZ /SFZavHVH7MnM7d1keRzCj3SqZkNQ4TvuuJfAMMPvw1EZVt1ZV+ooW2m4JOXFMfs G+nwObsTef2Ox5KjL9P+KwJzzFYSrWKgwIZ8ij4nuW7tQSB4yC4Ckr2zrCegqG7K 2TqyM4RtukwBJnKUfmY+lobfl0wBMAujLoDJ+ACe2f19JbNR2CV84c8rpyJSsJ7h XXkzcRLCt/NJAV5YFJUKIwwmtl2fTKz5qqegQ79fLP4xHmTLj/+waKVijPgjhVoC QfVfKEZSKj/rTfiHzJi7EFKnYrkvGVWHCiJyEHou1SJo8VMm2cZW6Q== =eBcZ -----END PGP SIGNATURE----- From seanl at chaosring.org Fri Nov 14 00:45:27 2003 From: seanl at chaosring.org (Sean R. Lynch) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] slightly ot: eternity service In-Reply-To: <20031113235600.GA13172@soniq.net> References: <20031113235600.GA13172@soniq.net> Message-ID: <3FB425A7.6050305@chaosring.org> Skipped content of type multipart/mixed-------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 256 bytes Desc: not available Url : http://zgp.org/pipermail/p2p-hackers/attachments/20031113/3a1fb414/attachment.pgp From gojomo at bitzi.com Fri Nov 14 02:18:54 2003 From: gojomo at bitzi.com (Gordon Mohr) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] slightly ot: eternity service In-Reply-To: <20031113235600.GA13172@soniq.net> References: <20031113235600.GA13172@soniq.net> Message-ID: <3FB43B8E.3000300@bitzi.com> Paul Boehm wrote: > do any of you know a reliable eternity service? > (eternal logfile, commercial?, peer2peer) > > i want to have the following sha1sum timestamped: > > 1fdfeaf47b5a074f07eba38dfd5dee03382280ee I think 'eternity' service as coined by Ross Anderson implies storage of the data itself. I think you just want a digital timestamp. 
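Producing the digest itself is the easy part. As a rough illustration, in Python (a sketch only; the filename is a made-up placeholder, and Paul obviously already has his digest):

import hashlib

# Hash the file whose existence you want to be able to prove later.
# "document.txt" is only a placeholder name for this example.
with open("document.txt", "rb") as f:
    digest = hashlib.sha1(f.read()).hexdigest()

print(digest)  # this 40-hex-digit string is what gets timestamped

The hard part is the timestamp: convincing everyone later that this digest existed at a particular time, which is why you need some disinterested third party (or lots of independent archives) to attest to it.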
There were a couple of companies offering these a while back, their names elude me. Google turns up another I hadn't heard of before: Chronostamp. You could also take out classified ads in a number of dated papers/forums that are likely to be reliably archived. How much do those tiny 1-line ads the NY Times sometimes squeezes at the bottom of their page 1 stories cost? Or you could spam it to a giant selection of archived email lists. Some combination of the archives are likely to survive and be recognized as credible evidence by a later court of law. One list down, thousands to go. - Gordon From antr at microsoft.com Fri Nov 14 16:17:55 2003 From: antr at microsoft.com (Ant Rowstron) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] p2p full text searching software Message-ID: Hi, I think a key point that is being missed in this thread is that it is not necessary to map the documents into a DHT to exploit the benefits of a structured overlay (/DHT). For example, flooding can be used within a structured overlay/DHT as well as in unstructured overlays - it just it is cheaper in the structured overlay. Have a look at our paper which appears in HotNets II later this month - which addressed this exact issue: Should we build Gnutella on a structured overlay? Miguel Castro, Manuel Costa, Ant Rowstron http://nms.lcs.mit.edu/HotNets-II/papers/structella.pdf http://nms.lcs.mit.edu/HotNets-II/program.html Thanks, Ant. > -----Original Message----- > From: p2p-hackers-bounces@zgp.org > [mailto:p2p-hackers-bounces@zgp.org] On Behalf Of Ronald Wertlen > Sent: 12 November 2003 21:31 > To: p2p-hackers@zgp.org > Subject: Re: [p2p-hackers] p2p full text searching software > > > Hi all, Does anybody know which archietectures are better > when doing > > the full text searching job in totally distributed p2p file shared > > system ? DHT based archiecture seems to work better. Is > that so? I am > > now just composing a p2p full text searching software .But I don't > > realy know if there is a better appoach than mine.Hope > others to bring > > me some good idea.3x. > > > > Hi, > > there are distinct advantages to flooding based approaches > (as opposed to DHT) when dealing with full-text[1]. Not only > do DHT's have problems with the number of hashes in > full-text, they also have to go to great lengths to perform > simple boolean logic on terms, and miss operators like "", > near, range, etc. > > We are in the process of creating a p2p search system on the > basis of our high performance full-text retrieval software > which includes several ranking possibilities including a > pagerank derivative. > Super-peers route queries using heuristics (these are > somewhat more powerful than simple pruning of terms see > Appendix B in [2], this stuf is adaptive along the lines of > Neurogrid[3]) and edge peers answer them. We are also looking > at Distributing our "Pagerank" > computation ([4] is a possibility). > > At the moment we are concentrating on basics, getting the > search, communications and UI all to fit. After that we have > number of possible avenues to explore depending on funding. > > Regards, Ron > > [1] Brian Cooper, Hector Garcia-Molina. Studying search > networks with SIL [2] Ronald Wertlen. The S2S JXTA > Application. JXTA Workshop Potenzial, Konzepte, Anwendungen. > November 2003. > http://s2s.neofonie.de/DFNS2S_031113_JXTAWorkshopBerlin.pdf > [3] http://www.neurogrid.net/ > [4] Karthikeyan Sankaralingam,Simha Sethumadhavan,James C. Browne. 
> Distributed Pagerank for P2P Systems > > -- > ............................................... > Ronald Wertlen neofonie GmbH > Projektleitung Robert-Koch-Platz 4 > D-10115 Berlin > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > From aloeser at cs.tu-berlin.de Fri Nov 14 16:55:53 2003 From: aloeser at cs.tu-berlin.de (Alexander =?iso-8859-1?Q?L=F6ser?=) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] p2p full text searching software References: Message-ID: <3FB50919.33FA91B9@cs.tu-berlin.de> Ant, you support in your paper complex queries in a structured overlay. To reduce costs you use the overlay structure (of Pastry) as well as random walks and floodings for complex queries. Do you know other approaches enabling complex queries in structured overlays? Maybe approaches including Content based Routing approaches? Alex Ant Rowstron wrote: > Hi, > > I think a key point that is being missed in this thread is that it is > not necessary to map the documents into a DHT to exploit the benefits of > a structured overlay (/DHT). For example, flooding can be used within a > structured overlay/DHT as well as in unstructured overlays - it just it > is cheaper in the structured overlay. Have a look at our paper which > appears in HotNets II later this month - which addressed this exact > issue: > > Should we build Gnutella on a structured overlay? > Miguel Castro, Manuel Costa, Ant Rowstron > http://nms.lcs.mit.edu/HotNets-II/papers/structella.pdf > http://nms.lcs.mit.edu/HotNets-II/program.html > > Thanks, > > Ant. > > > -----Original Message----- > > From: p2p-hackers-bounces@zgp.org > > [mailto:p2p-hackers-bounces@zgp.org] On Behalf Of Ronald Wertlen > > Sent: 12 November 2003 21:31 > > To: p2p-hackers@zgp.org > > Subject: Re: [p2p-hackers] p2p full text searching software > > > > > Hi all, Does anybody know which archietectures are better > > when doing > > > the full text searching job in totally distributed p2p file shared > > > system ? DHT based archiecture seems to work better. Is > > that so? I am > > > now just composing a p2p full text searching software .But I don't > > > realy know if there is a better appoach than mine.Hope > > others to bring > > > me some good idea.3x. > > > > > > > Hi, > > > > there are distinct advantages to flooding based approaches > > (as opposed to DHT) when dealing with full-text[1]. Not only > > do DHT's have problems with the number of hashes in > > full-text, they also have to go to great lengths to perform > > simple boolean logic on terms, and miss operators like "", > > near, range, etc. > > > > We are in the process of creating a p2p search system on the > > basis of our high performance full-text retrieval software > > which includes several ranking possibilities including a > > pagerank derivative. > > Super-peers route queries using heuristics (these are > > somewhat more powerful than simple pruning of terms see > > Appendix B in [2], this stuf is adaptive along the lines of > > Neurogrid[3]) and edge peers answer them. We are also looking > > at Distributing our "Pagerank" > > computation ([4] is a possibility). > > > > At the moment we are concentrating on basics, getting the > > search, communications and UI all to fit. 
After that we have > > number of possible avenues to explore depending on funding. > > > > Regards, Ron > > > > [1] Brian Cooper, Hector Garcia-Molina. Studying search > > networks with SIL [2] Ronald Wertlen. The S2S JXTA > > Application. JXTA Workshop Potenzial, Konzepte, Anwendungen. > > November 2003. > > http://s2s.neofonie.de/DFNS2S_031113_JXTAWorkshopBerlin.pdf > > [3] http://www.neurogrid.net/ > > [4] Karthikeyan Sankaralingam,Simha Sethumadhavan,James C. Browne. > > Distributed Pagerank for P2P Systems > > > > -- > > ............................................... > > Ronald Wertlen neofonie GmbH > > Projektleitung Robert-Koch-Platz 4 > > D-10115 Berlin > > > > _______________________________________________ > > p2p-hackers mailing list > > p2p-hackers@zgp.org > > http://zgp.org/mailman/listinfo/p2p-hackers > > _______________________________________________ > > Here is a web page listing P2P Conferences: > > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences -- ___________________________________________________________ M.Sc., Dipl. Wi.-Inf. Alexander L?ser Technische Universitaet Berlin Fakultaet IV - CIS bmb+f-Projekt: "New Economy, Neue Medien in der Bildung" hp: http://cis.cs.tu-berlin.de/~aloeser/ office: +49- 30-314-25551 fax : +49- 30-314-21601 ___________________________________________________________ From tutschku at informatik.uni-wuerzburg.de Fri Nov 14 19:16:00 2003 From: tutschku at informatik.uni-wuerzburg.de (Kurt Tutschku) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] CfP: ETT Special Issue on P2P Networking and P2P Services Message-ID: <00cc01c3aae3$bd75fbf0$806abb84@musa> Skipped content of type multipart/alternative-------------- next part -------------- A non-text attachment was scrubbed... Name: ETT_CfP_P2P.pdf Type: application/pdf Size: 103494 bytes Desc: not available Url : http://zgp.org/pipermail/p2p-hackers/attachments/20031114/8b46bdcd/ETT_CfP_P2P.pdf From joaquin.keller at rd.francetelecom.com Mon Nov 17 10:54:24 2003 From: joaquin.keller at rd.francetelecom.com (KELLER Joaquin FTRD/DMI/ISS) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] p2p full text searching software Message-ID: Thanks for this great bibliography. This paper should be also in the same area: Sloppy hashing and self-organizing clusters Michael J. Freedman and David Mazi?res NYU Dept of Computer Science {mfreed,dm}@cs.nyu.edu http://iptps03.cs.berkeley.edu/final-papers/coral.pdf -----Message d'origine----- De : Wolfgang M?ller [mailto:Wolfgang.Mueller2@uni-bayreuth.de] Envoy? : mercredi 12 novembre 2003 10:07 ? : Peer-to-peer development. Objet : Re: [p2p-hackers] p2p full text searching software > 1. DHTs are based on the assumption that the distribution probability of > keys is uniform, when actually for words it follows a Zipf law. 2. If a > document have one thousand different words, inserting this document will > cost one thousand messages. Far too much. 3. In "full text searching", > exhaustivity is not the main point: relevance is probably more important. Hi, The paper Jun Gao and Peter Steenkiste, Rendezvous Points-Based Scalable Content Discovery with Load Balancing. 
In Proceedings of the Fourth International Workshop on Networked Group Communication (NGC'02), pages 71-78, Boston, MA, Oct. 2002. might be of interest to you. From a quick scan it seems to be a DHT with load balancing. They use that system in Jun Gao, George Tzanetakis, and Peter Steenkiste, Content-Based Retrieval of Music in Scalable Peer-to-Peer Networks. In Proceedings of the 2003 IEEE International Conference on Multimedia & Expo(ICME'03), pages 309-312, volume I, Baltimore, MD, July 2003. for generating distributed inverted files. http://citeseer.nj.nec.com/tang02psearch.html also create distributed inverted files. They use _very_ aggressive pruning to make things tractable. On the Feasibility of Peer-to-Peer Web Indexing and Search, Jinyang Li, Boon Thau Loo, Joe Hellerstein, Frans Kaashoek, David R. Karger, Robert Morris http://iptps03.cs.berkeley.edu/final-papers/search_feasibility.ps is a _very_ interesting paper on the feasibility of distributed inverted file based approaches. Text-Based Content Search and Retrieval in ad hoc P2P Communities (2002) Francisco Matias Cuenca-Acuna, Thu D. Nguyen Department of Computer Science, Rutgers University http://citeseer.nj.nec.com/cuenca-acuna02textbased.html goes a radically different way for indexing text. Some people do not like it, because that system involves distribution and storage of large numbers of peer data summaries and does not scale to internet dimensions, currently. If anyone has some more references in that area, I would be glad to know. Cheers, Wolfgang _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From aloeser at cs.tu-berlin.de Mon Nov 17 16:49:09 2003 From: aloeser at cs.tu-berlin.de (Alexander =?iso-8859-1?Q?L=F6ser?=) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] DHT Select Operator References: Message-ID: <3FB8FC05.53166B26@cs.tu-berlin.de> Hi, does anybody know literature/links/systems for realizing a SELECT Operator with DHT's? For example: SELECT * FROM X WHERE A.X='1' AND A.X='2' or: SELECT * FROM X WHERE A.X='1' AND B.X='2' or SELECT * FROM X, Y WHERE A.X='1' AND B.Y='2' Alex -- ___________________________________________________________ M.Sc., Dipl. Wi.-Inf. Alexander L?ser Technische Universitaet Berlin Fakultaet IV - CIS bmb+f-Projekt: "New Economy, Neue Medien in der Bildung" hp: http://cis.cs.tu-berlin.de/~aloeser/ office: +49- 30-314-25551 fax : +49- 30-314-21601 ___________________________________________________________ From gbildson at limepeer.com Mon Nov 17 17:06:25 2003 From: gbildson at limepeer.com (Greg Bildson) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] DHT Select Operator In-Reply-To: <3FB8FC05.53166B26@cs.tu-berlin.de> Message-ID: Do them separately and union or intersection the results as appropriate. Yes?? No other way unless some data elements have a combined key (for the first case). Thanks -greg -----Original Message----- From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org]On Behalf Of Alexander L?ser Sent: Monday, November 17, 2003 11:49 AM To: Peer-to-peer development. Subject: [p2p-hackers] DHT Select Operator Hi, does anybody know literature/links/systems for realizing a SELECT Operator with DHT's? 
For example: SELECT * FROM X WHERE A.X='1' AND A.X='2' or: SELECT * FROM X WHERE A.X='1' AND B.X='2' or SELECT * FROM X, Y WHERE A.X='1' AND B.Y='2' Alex -- ___________________________________________________________ M.Sc., Dipl. Wi.-Inf. Alexander L?ser Technische Universitaet Berlin Fakultaet IV - CIS bmb+f-Projekt: "New Economy, Neue Medien in der Bildung" hp: http://cis.cs.tu-berlin.de/~aloeser/ office: +49- 30-314-25551 fax : +49- 30-314-21601 ___________________________________________________________ _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers _______________________________________________ Here is a web page listing P2P Conferences: http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From aloeser at cs.tu-berlin.de Mon Nov 17 17:32:05 2003 From: aloeser at cs.tu-berlin.de (Alexander =?iso-8859-1?Q?L=F6ser?=) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] DHT Select Operator References: Message-ID: <3FB90615.C2DFBF27@cs.tu-berlin.de> Greg thanks for your response. Yes that's one solution to do that. I hoped that maybe somebody knows a more effective approach. I found some links to the piers project http://pier.cs.berkeley.edu/papers.html and to some greek people http://p2p.ceid.upatras.gr/papers/dbisp2p-final.pdf They provide a generell approach for a generall data model. I'm more interested in optimizations for on static data model(won't change anymore). Any ideas? Greg Bildson wrote: > Do them separately and union or intersection the results as appropriate. > Yes?? No other way unless some data elements have a combined key (for the > first case). > > Thanks > -greg > > -----Original Message----- > From: p2p-hackers-bounces@zgp.org [mailto:p2p-hackers-bounces@zgp.org]On > Behalf Of Alexander L?ser > Sent: Monday, November 17, 2003 11:49 AM > To: Peer-to-peer development. > Subject: [p2p-hackers] DHT Select Operator > > Hi, > does anybody know literature/links/systems for realizing a SELECT Operator > with > DHT's? > > For example: > > SELECT * FROM X > WHERE A.X='1' AND A.X='2' > > or: > SELECT * FROM X > WHERE A.X='1' AND B.X='2' > > or > > SELECT * FROM X, Y > WHERE A.X='1' AND B.Y='2' > > Alex > > -- > ___________________________________________________________ > > M.Sc., Dipl. Wi.-Inf. Alexander L?ser > Technische Universitaet Berlin Fakultaet IV - CIS > bmb+f-Projekt: "New Economy, Neue Medien in der Bildung" > hp: http://cis.cs.tu-berlin.de/~aloeser/ > office: +49- 30-314-25551 > fax : +49- 30-314-21601 > ___________________________________________________________ > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences -- ___________________________________________________________ M.Sc., Dipl. Wi.-Inf. 
From cjwu at exodus.cs.ccu.edu.tw  Thu Nov 20 16:53:55 2003
From: cjwu at exodus.cs.ccu.edu.tw (cjwu)
Date: Sat Dec 9 22:12:36 2006
Subject: [p2p-hackers] About Pastry simulation implementation
Message-ID: <001e01c3af86$e3f5c880$a52efea9@bluebox>

Hi,

I am working on a simulation of Pastry, but I am having some problems constructing its routing table. I have read the reports [1, 2], but there are some things I cannot understand. So, if someone has experience here, can you guide me?

Thanks

Chi-Jen at ccu.tw

[1] Exploiting network proximity in peer to peer overlay networks
[2] Proximity neighbor selection in tree-based structured peer-to-peer overlays

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://zgp.org/pipermail/p2p-hackers/attachments/20031121/5f3e3663/attachment.html

From atuls at cs.rice.edu  Thu Nov 20 19:24:58 2003
From: atuls at cs.rice.edu (Atul Singh)
Date: Sat Dec 9 22:12:36 2006
Subject: [p2p-hackers] About Pastry simulation implementation
In-Reply-To: <001e01c3af86$e3f5c880$a52efea9@bluebox>
References: <001e01c3af86$e3f5c880$a52efea9@bluebox>
Message-ID:

Hi,

We have an open-source implementation (BSD-like license) of Pastry, FreePastry, here at Rice University. You can download the latest source (or binary) from our website http://freepastry.rice.edu . The current implementation takes proximity awareness into account, as described in the two references in your mail.

Thanks,
Atul.

On Fri, 21 Nov 2003, cjwu wrote:

> Hi,
>
> I am working on a simulation of Pastry, but I am having some problems constructing its routing table.
> I have read the reports [1, 2], but there are some things I cannot understand.
> So, if someone has experience here, can you guide me?
>
> Thanks
>
> Chi-Jen at ccu.tw
>
> [1] Exploiting network proximity in peer to peer overlay networks
> [2] Proximity neighbor selection in tree-based structured peer-to-peer overlays
>

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is amazing what you can accomplish if you do not care who
gets the credit. - Harry S Truman
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-------------- next part --------------
_______________________________________________
p2p-hackers mailing list
p2p-hackers@zgp.org
http://zgp.org/mailman/listinfo/p2p-hackers
_______________________________________________
Here is a web page listing P2P Conferences:
http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences

From eugen at leitl.org  Sat Nov 22 12:30:47 2003
From: eugen at leitl.org (Eugen Leitl)
Date: Sat Dec 9 22:12:36 2006
Subject: [p2p-hackers] [mnet-devel] new ideas for old MetaTracking (fwd from zooko@zooko.com)
Message-ID: <20031122123047.GR7350@leitl.org>

----- Forwarded message from Zooko O'Whielacronx -----

From: "Zooko O'Whielacronx"
Date: 22 Nov 2003 07:18:52 -0500
To: mnet-devel@lists.sourceforge.net
Subject: [mnet-devel] new ideas for old MetaTracking
Reply-To: mnet-devel@lists.sourceforge.net

So a few days ago as I was testing v0.6.2.290-STABLE, I realized that it never stopped trying to use old peers even if they were long gone.
This would be a big problem -- every time a peer would come and go, your Mnet node would get a bit slower since it would try to use that dead peer every time you tried to do anything for the rest of your Mnet node's life. I was trying to figure out how to fix this by having some heuristic about when to stop trying to reach a peer. That obviously risks the opposite problem: that a high- quality, reliable peer goes off-line for a day, and when it comes back everyone ignores it because their "stop trying dead peers" heuristic has kicked in. I tried to envision a probability distribution that would try absent peers often enough to rediscover re-connected ones but not often enough to waste your time talking to dead ones as the number of permanently-dead peers grows unboundedly. I couldn't. Then I wondered how Mojo Nation and Mnet v0.6 had worked as well as they had so far. I realized that Mojo Nation had been using the MetaTrackers for liveness-detection all along. I had previously been thinking of MetaTrackers as providing three services: Original Introduction (with the help of bootpages), contact-info-lookup, and "please introduce me to some new nodes" discovery. Now I realize that MetaTrackers also provide the essential service of "find out which nodes are present and which are absent right now". While thinking about this I also realized that the idea of using consistent hashing in metatracking means we can have very scalable metatracking. The current v0.6.2.290-STABLE metatracking scheme degrades quickly as the number of MetaTrackers grows. That's why I've been advertising for three solid MetaTracker operators -- no more and no less. If we had more than three, then node A would send "hello" to MT01, but node B would send "lookup contact info" to MT04, and node B would fail to discover node A's contact info. The obvious fix to this of either having nodes talk to all of the MetaTrackers or else of having the MetaTrackers talk to each other would mean that the load on each MetaTracker increases proportionally to the total number of nodes in the network, making metatracking inherently unscalable. But if node A looks at the XOR metric of his own Id compared with the Ids of the MetaTrackers, and then node B looks at the XOR metric of node A's Id compared with the Ids of the MetaTrackers, then they will both talk to the same MetaTracker, so node B can learn node A's current contact info while leaving all of the other MetaTrackers alone. That means that with regard to "lookup contact info", metatracking becomes inherently scalable. (Leaving aside for the moment the issue of how node A and node B get a list of current MetaTrackers!) But what about the "list servers" message, which is used to get to know a random assortment of nodes which you have either never previously met, or which recently came on-line? Well, I'm thinking that we can achieve very high, if not perfect, scalability by trading-off a different factor: the latency between node A connecting to the network and node B learning about node A's liveness! The neat thing about this, is that increasing that latency sort of *helps* rather than hurts, since nodes that just recently joined for the first time, or nodes that just recently re-connected for the first time, are more likely to disappear again in the future. We actively *prefer* to avoid meeting nodes that have not been connected for a long time. Let me explain how the new scheme works and you'll see what I mean. 
MetaTracking Scheme X:

* Every 15 minutes, you say "hello" to the MetaTracker whose Id is closest to yours.

* Whenever you try to contact a peer and the message fails, you say "lookup contact info" to the MetaTracker whose Id is closest to that peer's.
  + After you've done that, if the message fails *again*, you remove that peer from your list of "active peers". You will never try to talk to him again until the next time you find his Id in a "list servers response" message.

* Every 15 minutes, you say "list servers" to a MetaTracker. No matter how many MetaTrackers there are, you say "list servers" to only one of them. You have a list of known MetaTrackers, and you are iterating through that list, querying the next MetaTracker for its known servers every 15 minutes.

Whenever you say "list servers" to a MetaTracker, it dumps back the list of *all* servers that have said "hello" to it within the last 15 minutes.

That's it! That's MetaTracking Scheme X.

Now suppose that you have an Mnet with K nodes and M MetaTrackers, and everything is working. Now suppose that K doubles and M doubles. What changes?

Well, still ignoring for the moment the question of how the nodes learn about the MetaTrackers, the load of handling "hello" and "lookup contact info" on each MetaTracker *doesn't change at all*. No matter what size the network is, each MetaTracker has to handle only K/M "hello"s per unit of time and K/M "lookup contact info"s per unit of time. Likewise, the load of handling "list servers" messages doesn't change at all -- each MetaTracker receives K/M "list servers" messages per unit of time.

The big thing that changes about discovering servers is that the average latency between a node A (re-)connecting to the network and node A being discovered by node B doubles. As I've said, this is almost a feature rather than a bug, since we wish to discriminate against newcomers. Keep in mind that once node B discovers node A, then node B will never need to *rediscover* node A until the following sequence occurs:

1. node A disconnects from the network, and doesn't send a "hello" for 15 minutes.
2. node B tries to send a message to node A, fails (because node A isn't connected), sends a "lookup contact info", and gets a response saying that no current contact info is available.
3. node A later reconnects to the network.

Now the load does change on the client side. The number of peers that each node has to keep track of doubles. This is one place where Mnet diverges from current academic p2p research -- we don't care (yet) about the asymptotic cost of each node having to know about each other node, only about the concrete cost. For example, if each node knows about 1,000,000 other nodes, that requires only a few megabytes of space and only a few seconds of computation. (As long as we're good about how we store and how we compute... ;-))

I would very much like to hear your feedback about this, but be warned that I don't want to worry about any micro-optimizations at this point. For example, suppose that instead of sending the complete list of known servers in every "list servers response" message, we instead sent only servers that the querier doesn't already know, or something like that. That might (or might not) reduce the load on the MetaTracker, but *only by a constant factor*. I don't want to add complexity to the design, protocols, or implementation in order to increase the concrete scalability of an individual MetaTracker *unless* the current implementation isn't good enough for the current load.
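A minimal sketch in Python of the "closest MetaTracker by XOR metric" rule that Scheme X relies on. The function names are hypothetical illustrations, not Mnet's actual API, and Ids are assumed to be equal-length byte strings:

def xor_distance(id_a, id_b):
    # XOR metric between two equal-length byte-string Ids, read as a big integer.
    return int.from_bytes(bytes(a ^ b for a, b in zip(id_a, id_b)), "big")

def closest_metatracker(node_id, metatracker_ids):
    # The MetaTracker responsible for node_id: the one at smallest XOR distance.
    return min(metatracker_ids, key=lambda mt_id: xor_distance(node_id, mt_id))

# Node A sends its periodic "hello" to closest_metatracker(A_id, trackers);
# node B, wanting A's contact info, computes closest_metatracker(A_id, trackers)
# over the same tracker list and therefore asks the very same MetaTracker.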
A word about the provenance of these ideas: I've titled the message "new ideas...", but it's likely that some or all of these ideas were already envisioned by Jim McCoy, Doug Barnes, and Greg Smith years ago. Indeed, some of them were envisioned by *me* already, but they are newly understood by me in this context when other complicating or competing ideas are stripped out. For example, the "MetaTracking Scheme X" in this document eliminates the notion that you query a MetaTracker "when needed" -- when you've decided that you want to know more servers, in favor of the notion that you query every 15 minutes rain-or-shine. The presence of the "query when you feel like you want to know more nodes" concept obscured from me the necessity of the "query every so often" concept in the original Mojo Nation MetaTracking design. The path I took to rediscovering the latter was to try to boil down the MetaTracking design to only one concept, choosing the former concept as the one to keep, then discovering that the latter concept was required and switching to the latter concept as the one to keep. Regards, Zooko ------------------------------------------------------- This SF.net email is sponsored by: SF.net Giveback Program. Does SourceForge.net help you be more productive? Does it help you create better code? SHARE THE LOVE, and help us help YOU! Click Here: http://sourceforge.net/donate/ _______________________________________________ mnet-devel mailing list mnet-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/mnet-devel ----- End forwarded message ----- -- Eugen* Leitl leitl ______________________________________________________________ ICBM: 48.07078, 11.61144 http://www.leitl.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE http://moleculardevices.org http://nanomachines.net -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available Url : http://zgp.org/pipermail/p2p-hackers/attachments/20031122/fec0329d/attachment.pgp From seanl at chaosring.org Sat Nov 22 19:57:11 2003 From: seanl at chaosring.org (Sean R. Lynch) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] [mnet-devel] new ideas for old MetaTracking (fwd from zooko@zooko.com) In-Reply-To: <20031122123047.GR7350@leitl.org> References: <20031122123047.GR7350@leitl.org> Message-ID: <3FBFBF97.9040406@chaosring.org> Eugen Leitl wrote: > ----- Forwarded message from Zooko O'Whielacronx ----- > > From: "Zooko O'Whielacronx" > Date: 22 Nov 2003 07:18:52 -0500 > To: mnet-devel@lists.sourceforge.net > Subject: [mnet-devel] new ideas for old MetaTracking > Reply-To: mnet-devel@lists.sourceforge.net > > I was trying to figure out how to fix this by having some heuristic about when > to stop trying to reach a peer. That obviously risks the opposite problem: > that a high- quality, reliable peer goes off-line for a day, and when it comes > back everyone ignores it because their "stop trying dead peers" heuristic has > kicked in. > > I tried to envision a probability distribution that would try absent peers > often enough to rediscover re-connected ones but not often enough to waste > your time talking to dead ones as the number of permanently-dead peers grows > unboundedly. > > I couldn't. What's wrong with exponential backoff? It works for DHCP, DNS, ethernet, packet radio, and email; why not mnet? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available
Type: application/pgp-signature
Size: 256 bytes
Desc: not available
Url : http://zgp.org/pipermail/p2p-hackers/attachments/20031122/37d23bec/attachment.pgp

From coderman at charter.net  Sat Nov 22 20:37:57 2003
From: coderman at charter.net (coderman)
Date: Sat Dec 9 22:12:36 2006
Subject: [p2p-hackers] [mnet-devel] new ideas for old MetaTracking (fwd from zooko@zooko.com)
In-Reply-To: <3FBFBF97.9040406@chaosring.org>
References: <20031122123047.GR7350@leitl.org> <3FBFBF97.9040406@chaosring.org>
Message-ID: <3FBFC925.70200@charter.net>

Sean R. Lynch wrote:

>> From: "Zooko O'Whielacronx"
>> ...
>> I was trying to figure out how to fix this by having some heuristic
>> about when to stop trying to reach a peer. That obviously risks the
>> opposite problem: that a high- quality, reliable peer goes off-line
>> for a day, and when it comes back everyone ignores it because their
>> "stop trying dead peers" heuristic has kicked in.
>>
>> I tried to envision a probability distribution that would try absent
>> peers often enough to rediscover re-connected ones but not often
>> enough to waste your time talking to dead ones as the number of
>> permanently-dead peers grows unboundedly.
>>
> What's wrong with exponential backoff? It works for DHCP, DNS,
> ethernet, packet radio, and email; why not mnet?

One solution I like is a combination of exponential backoff + a timeout. You may try reconnecting for 48 hours, then remove the peer entry.

If you couple this with transitive introduction (i.e. peers you are currently connected to refer you to new peers you are not yet connected to) you can increase the chance of reconnecting if they do eventually come back online.

If you weight transitive introduction by relative quality (as mentioned regarding high-quality, reliable peers) you can avoid "forgetting" about good peers in a large network with some amount of churn while also decreasing the time required to discover new high-quality peers.

regards,
martin

From zooko at zooko.com  Sun Nov 23 16:13:29 2003
From: zooko at zooko.com (Zooko O'Whielacronx)
Date: Sat Dec 9 22:12:36 2006
Subject: [p2p-hackers] [mnet-devel] new ideas for old MetaTracking (fwd from zooko@zooko.com)
In-Reply-To: Message from coderman of "Sat, 22 Nov 2003 12:37:57 PST." <3FBFC925.70200@charter.net>
References: <20031122123047.GR7350@leitl.org> <3FBFBF97.9040406@chaosring.org> <3FBFC925.70200@charter.net>
Message-ID:

[Sean R. Lynch wrote the lines prepended with "> > ".]
[martin wrote the lines prepended with "> ".]

> > What's wrong with exponential backoff? It works for DHCP, DNS,
> > ethernet, packet radio, and email; why not mnet?
>
> One solution I like is a combination of exponential backoff + a
> timeout.

Those are good suggestions.

I don't have a good understanding of this part of the design space at the moment. One major component is whether you detect the (re-)appearance of a node by that node contacting you and announcing his presence or by you polling for that node's existence. When I put it that way the former obviously sounds better.

By the way, this discussion began in mnet-devel, where the audience has a lot more context. For the sake of the larger p2p-hackers audience, let me provide a couple of sentences of explanation about Mnet's current design. I will then retreat from discussion, implement the previously-described "MetaTracker System X", and continue with the imminent release of Mnet v0.6.2.
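A minimal sketch of the backoff-plus-cutoff reconnection policy martin suggests above, purely illustrative: the function names are hypothetical, the 48-hour cutoff is taken from his example, and none of this is Mnet code.

import time

def reconnect_with_backoff(try_connect, first_delay=30.0, max_delay=3600.0,
                           give_up_after=48 * 3600.0):
    # Retry with exponentially growing delays; once give_up_after seconds have
    # passed, drop the peer entry and rely on later (transitive) re-introduction.
    start = time.time()
    delay = first_delay
    while time.time() - start < give_up_after:
        if try_connect():
            return True            # the peer is back; keep its entry
        time.sleep(min(delay, max_delay))
        delay *= 2                 # exponential backoff between attempts
    return False                   # give up and forget the peer for now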
Part of the reason to implement MetaTracker System X instead of other ideas such as exponential backoff and transitive introduction is that MetaTracker System X fits into the current Mnet implementation, which is ready for release except for the MetaTracker component [1]. (Mnet v0.7, which is already operational but lacks a Graphical User Interface, is being used for experimental ideas.)

So here's the basic idea: In Mnet v0.6.2, there is no routing -- every node has a direct connection to every other node. This is not asymptotically scalable, but we want to investigate how it performs in practice. It is also a good match for smaller "private Mnets" [2] such as those comprising a group of friends or a corporate LAN/WAN. This idea is *very* similar to the paper "One Hop Lookups for Peer-to-Peer Overlays" [3] by Anjali Gupta, Barbara Liskov, and Rodrigo Rodrigues, although it was independently derived.

Okay, now I'm off to finish MetaTracker System X for the Mnet v0.6.2 release.

Regards,

Zooko

[1] http://sourceforge.net/mailarchive/forum.php?forum_id=7702
[2] http://mnet.sf.net/faq.php#my_own_private_Mnet
[3] http://www.usenix.org/events/hotos03/tech/gupta.html

From coderman at charter.net  Mon Nov 24 00:04:35 2003
From: coderman at charter.net (coderman)
Date: Sat Dec 9 22:12:36 2006
Subject: [p2p-hackers] [mnet-devel] new ideas for old MetaTracking (fwd from zooko@zooko.com)
In-Reply-To:
References: <20031122123047.GR7350@leitl.org> <3FBFBF97.9040406@chaosring.org> <3FBFC925.70200@charter.net>
Message-ID: <3FC14B13.3010907@charter.net>

Zooko O'Whielacronx wrote:

> ...
> I don't have a good understanding of this part of the design space at the
> moment. One major component is whether you detect the (re-)appearance of a
> node by that node contacting you and announcing his presence or by you
> polling for that node's existence. When I put it that way the former
> obviously sounds better.
>

I use both; for example one peer exits and the active peer starts an exponential backoff to try and reconnect. After a period of time, the reconnect attempt fails and is aborted.

Later, the original peer comes back online. It starts an exponential backoff to reconnect in the same fashion, and this will either succeed (if the other peer is still online and available) or fail after the same timeout period.

The nature of peer communication implies that there will be churn in peer groups; however, this style of reconnection seems to provide the best balance of effectiveness with the least effort.

[ A discussion of peer identity might be appropriate here; but that's a long tangent on namespaces, cryptographic keys and signatures, etc. ]

From eugen at leitl.org  Tue Nov 25 10:01:25 2003
From: eugen at leitl.org (Eugen Leitl)
Date: Sat Dec 9 22:12:36 2006
Subject: [p2p-hackers] Re: (#552) Re: [speak-freely] Poll: Both ends behind NAT... (fwd from sjh_sf@2pi.info)
Message-ID: <20031125100125.GK23337@leitl.org>

----- Forwarded message from Soren H -----

From: Soren H
Date: Tue, 25 Nov 2003 16:57:30 +1100
To: speak-freely@fourmilab.ch
Subject: Re: (#552) Re: [speak-freely] Poll: Both ends behind NAT...
User-Agent: Mutt/1.5.4i
Reply-To: speak-freely@fourmilab.ch

On Mon, Nov 24, 2003 at 09:28:54PM -0800, Gregory Forrest wrote:
> > Here is a patch that supposedly gives NAT traversal ability. I don't have a
> > windows machine to test it.
> >
> > http://www.2pi.info/software/sf_speex/speakf76_20031030.exe.zip
>
> Does anyone know where I might find a technical description of how NAT
> Socket Sharing/traversal operates?
> Some of the principles are here: http://www.alumni.caltech.edu/~dank/peer-nat.html In the case of speakfreely, it is simply a matter of ensuring that the outgoing UDP packets are sent from the same port number as what it is using to listen for incoming packets. It's really the NAT that does the magic, but the application needs to play it nice with the port numbers so the NAT knows what to do. Once a packet is sent out from port P to remote host X, the NAT knows that traffic from remote host X back to port P needs to get routed back to that machine. As only one socket can be bound to any given port, it means that the same socket must be used for the listen and send calls. That's what the patch does. It's pretty simple in the windows version, and the only downside is that you can't connect() the socket (for performance), or else you can only ever receive packets from one remote source. The NAT patch just replaces the calls to create transmit sockets, and instead uses the value of the existing listening socket. Also, this is not strictly about "NAT". In my use it also overcomes traversal through a simple firewall which is otherwise a problem. Soren * * * To unsubscribe from this mailing list, send E-mail containing the word "unsubscribe" in the message body (*not* as the Subject) to speak-freely-request@fourmilab.ch ----- End forwarded message ----- -- Eugen* Leitl leitl ______________________________________________________________ ICBM: 48.07078, 11.61144 http://www.leitl.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE http://moleculardevices.org http://nanomachines.net -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available Url : http://zgp.org/pipermail/p2p-hackers/attachments/20031125/32b88ae5/attachment.pgp From will.morton at memefeeder.com Tue Nov 25 20:14:29 2003 From: will.morton at memefeeder.com (Will Morton) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] Re: (#552) Re: [speak-freely] Poll: Both ends behind NAT... (fwd from sjh_sf@2pi.info) In-Reply-To: <20031125100125.GK23337@leitl.org> References: <20031125100125.GK23337@leitl.org> Message-ID: <3FC3B825.5070709@memefeeder.com> Eugen Leitl wrote: >----- Forwarded message from Soren H ----- > >From: Soren H >Date: Tue, 25 Nov 2003 16:57:30 +1100 >To: speak-freely@fourmilab.ch >Subject: Re: (#552) Re: [speak-freely] Poll: Both ends behind NAT... >User-Agent: Mutt/1.5.4i >Reply-To: speak-freely@fourmilab.ch > >On Mon, Nov 24, 2003 at 09:28:54PM -0800, Gregory Forrest wrote: > > >>>Here is a patch that supposedly gives NAT traversal ability. I don't have >>> >>> >>a >> >> >>>windows machine to test it. >>> >>>http://www.2pi.info/software/sf_speex/speakf76_20031030.exe.zip >>> >>> >>Does anyone know where I might find a technical description of how NAT >>Socket Sharing/traveral operates? >> >> >> > >Some of the principles are here: > >http://www.alumni.caltech.edu/~dank/peer-nat.html > > > > Good article. This technique depends on the NAT device in question supporting 'loose' masquerading, though; once a NATted host sends out a UDP packet to a public host, *any* machine on the Net can get back at the NATted machine (if it knows the port), not just the original target IP/port combination. That has major security implications depending on the port (UDP 137/138, anyone?), and I believe that for this reason most NAT devices will not behave in this way - though I'm going to check my netgear DSL router now... 
;)

W

From seanl at chaosring.org  Tue Nov 25 21:10:05 2003
From: seanl at chaosring.org (Sean R. Lynch)
Date: Sat Dec 9 22:12:36 2006
Subject: [p2p-hackers] Re: (#552) Re: [speak-freely] Poll: Both ends behind NAT... (fwd from sjh_sf@2pi.info)
In-Reply-To: <3FC3B825.5070709@memefeeder.com>
References: <20031125100125.GK23337@leitl.org> <3FC3B825.5070709@memefeeder.com>
Message-ID: <3FC3C52D.4020802@chaosring.org>

Skipped content of type multipart/mixed
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 256 bytes
Desc: not available
Url : http://zgp.org/pipermail/p2p-hackers/attachments/20031125/bf8366ab/attachment.pgp

From Paul.Harrison at infotech.monash.edu.au  Thu Nov 27 03:19:37 2003
From: Paul.Harrison at infotech.monash.edu.au (Paul Harrison)
Date: Sat Dec 9 22:12:36 2006
Subject: [p2p-hackers] Re: Both ends behind NAT...
In-Reply-To: <20031125100125.GK23337@leitl.org>
Message-ID:

Suppose two NATed machines both want to communicate. Maybe there is a server coordinating things somewhere, maybe not.

There are 65536 UDP ports. If machine A sends 256 UDP packets (each from a different port) to machine B, that means there will be 256 holes in his NAT that machine B could get a packet through. The NAT *must* allocate 256 different ports: since each packet was sent to the same IP, the port number is the only way for the NAT to distinguish each possible reply.

It will then take machine B about 256 tries to get a packet through to A (at which point a connection has been established). This is because each time B sends a packet it has about a 256/65536 = 1/256 chance of getting through.

Some knowledge about how different NAT implementations work could speed this up, but even with no knowledge, 512 small packets is pretty reasonable. :-)

cheers,
Paul

Email: pfh@logarithmic.net
Current cost to save one life: approx AU$300 (US$200)

From seanl at chaosring.org  Thu Nov 27 06:16:02 2003
From: seanl at chaosring.org (Sean R. Lynch)
Date: Sat Dec 9 22:12:36 2006
Subject: [p2p-hackers] Re: Both ends behind NAT...
In-Reply-To:
References:
Message-ID: <3FC596A2.4030206@chaosring.org>

Paul Harrison wrote:

> Suppose two NATed machines both want to communicate. Maybe there is a
> server coordinating things somewhere, maybe not.
>
> There are 65536 UDP ports. If machine A sends 256 UDP packets (each from a
> different port) to machine B, that means there will be 256 holes in his
> NAT that machine B could get a packet through. The NAT *must* allocate 256
> different ports: since each packet was sent to the same IP, the port
> number is the only way for the NAT to distinguish each possible reply.
>
> It will then take machine B about 256 tries to get a packet through to A
> (at which point a connection has been established). This is because each
> time B sends a packet it has about a 256/65536 = 1/256 chance of getting
> through.
>
> Some knowledge about how different NAT implementations work could speed
> this up, but even with no knowledge, 512 small packets is pretty
> reasonable.

Does anyone know of a NAT implementation that does *not* map UDP packets from the same source IP and port to the same source port on the NAT address? In this case, even if the firewall NATs you to a random port (that's the same for source IP/port pair regardless of dest IP/port), you could just send a packet to a host that both peers know about, and it can tell you which port each end is using.
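A quick back-of-the-envelope check of the estimate Paul gives above, as a small Python simulation. It only models the combinatorics in his message (256 open holes out of 65536 ports, random probing); nothing here models a real NAT, and all names are illustrative.

import random

def expected_probes(holes=256, port_space=65536, trials=2000):
    # Average number of probes B needs before one lands in an open hole.
    total = 0
    for _ in range(trials):
        open_ports = set(random.sample(range(port_space), holes))
        probes = 0
        while True:
            probes += 1
            if random.randrange(port_space) in open_ports:
                break
        total += probes
    return total / trials

# Each probe succeeds with probability 256/65536 = 1/256, so the average
# comes out near 256 -- matching the "about 256 tries" in the message.
print(expected_probes())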
One hitch I thought of in using p2p for my MMORPG engine: while all clients *have* to be able to reach the server, if they can't all reach *one another* there will be problems. However, I'm thinking in this case one can just fall back to routing updates for that client through the server. Since the server would need position updates anyway to determine which peers needed to talk to one another, all the clients have to do is tell the server "I can't see this peer" and the server will just copy messages to that peer.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 256 bytes
Desc: not available
Url : http://zgp.org/pipermail/p2p-hackers/attachments/20031126/6bf7607a/attachment.pgp

From coderman at charter.net  Thu Nov 27 06:46:11 2003
From: coderman at charter.net (coderman)
Date: Sat Dec 9 22:12:36 2006
Subject: [p2p-hackers] Re: Both ends behind NAT...
In-Reply-To: <3FC596A2.4030206@chaosring.org>
References: <3FC596A2.4030206@chaosring.org>
Message-ID: <3FC59DB3.2030203@charter.net>

Sean R. Lynch wrote:

> Does anyone know of a NAT implementation that does *not* map UDP
> packets from the same source IP and port to the same source port on
> the NAT address? In this case, even if the firewall NATs you to a
> random port (that's the same for source IP/port pair regardless of
> dest IP/port), you could just send a packet to a host that both peers
> know about, and it can tell you which port each end is using.

In the case of a protocol I am using for search, a NAT discovery step is required to determine if the peer is behind a loose or symmetric NAT router. If they are behind a symmetric (!loose) NAT, then all connections must be mediated by a server with a known IP to tell the peers what the other's respective port number is for that logical UDP connection. This is similar to calling connect() on a UDP socket, which associates datagrams with a single endpoint.

If the peer is using a loose NAT, communication is simpler (this seems to be the default in most consumer NATs, as they support Internet gaming nicely). All peers can simply send datagrams directly to each other.

NAT discovery is performed by sending a request to a known server to obtain the public NAT endpoint information. The server then asks a third peer to send a packet to the same public endpoint. If the packet is received, the client is behind a loose UDP NAT. If it is not (after some period of retransmission with back-off), then the NAT is assumed to be symmetric.

Note that this does nothing to solve the issue of firewalls blocking or filtering UDP traffic. Many corporate firewalls only allow limited outgoing UDP (for example, DNS).
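A rough sketch of the discovery handshake described above: ask a well-known server what your public UDP endpoint looks like, have a third party probe that same endpoint, and classify the NAT by whether the probe arrives. The message contents, port and cooperating server here are entirely hypothetical; this is illustrative, not any particular protocol.

import socket

def discover_nat_type(server_addr, local_port=5000, timeout=5.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", local_port))
    sock.settimeout(timeout)
    sock.sendto(b"WHATS-MY-ENDPOINT", server_addr)
    try:
        reply, _ = sock.recvfrom(1024)   # server reports the public (IP, port) it saw
    except socket.timeout:
        return "unreachable"             # no answer from the server at all
    # The server is assumed to now ask a third peer to send one packet to that
    # same public endpoint. If it arrives, the mapping is usable by arbitrary
    # remote hosts ("loose" NAT); if it never arrives, assume a symmetric NAT.
    try:
        probe, _ = sock.recvfrom(1024)
        return "loose"
    except socket.timeout:
        return "symmetric"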
From eugen at leitl.org  Fri Nov 28 09:54:16 2003
From: eugen at leitl.org (Eugen Leitl)
Date: Sat Dec 9 22:12:36 2006
Subject: [p2p-hackers] [ANNOUNCE] Python network security tools: Pcapy, Impacket, InlineEgg (fwd from oss@oss.coresecurity.com)
Message-ID: <20031128095416.GQ15515@leitl.org>

----- Forwarded message from CORE Security Technologies -----

From: CORE Security Technologies
Date: Thu, 27 Nov 2003 19:38:47 -0300
To: impact-usr@coresecurity.com, bugtraq@securityfocus.com, pen-test@securityfocus.com, exploit-dev@securityfocus.com, ntbugtraq@listserv.ntbugtraq.com, sectools@securityfocus.com, python-list@python.org, winpcap-users@winpcap.polito.it, vuln-dev@securityfocus.com
Subject: [ANNOUNCE] Python network security tools: Pcapy, Impacket, InlineEgg

Core Security Technologies acknowledges the increasing interest in its products and technologies and therefore wants to share part of them with the developers out there, in the spirit of creating an open user community around its key components and giving back to the community the results of our ongoing development. These are indeed primary components of our software, CORE IMPACT, and not the regular free giveaways you'd get somewhere else. As such they are being actively maintained by our team.

Python developers, network administrators, penetration testers, vulnerability researchers and information security practitioners in general may find these packages useful.

All the tools described in this announcement are available at http://oss.coresecurity.com/

Today we are announcing the public release of the following components:

Pcapy-0.10.2
Impacket-0.9.4
InlineEgg-1.02

And there is still more coming... enjoy!

OSS at coresecurity.com

A brief description of the components and bundled tools is provided below.

-OSS projects released November 27th, 2003-

Pcapy
http://oss.coresecurity.com/projects/pcapy.html

Pcapy is a Python extension module that enables software written in Python to access the routines from the pcap packet capture library. From libpcap's documentation: Libpcap is a system-independent interface for user-level packet capture. Libpcap provides a portable framework for low-level network monitoring. Applications include network statistics collection, security monitoring, network debugging, etc.

Pcapy is most useful when used together with a packet handling package such as Impacket, a collection of Python classes for constructing and dissecting network packets.

What makes Pcapy different from the others?
* works with Python threads.
* works both in UNIX with libpcap and Windows with WinPcap.
* provides a simpler Object Oriented API.
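A rough sketch of how the two packages are meant to be combined, as described above and below: capture raw frames with Pcapy and hand them to an Impacket decoder. The module paths and call signatures follow the early 0.x releases as best remembered and may differ from the shipped API; treat this as illustrative only.

import pcapy
from impacket.ImpactDecoder import EthDecoder

decoder = EthDecoder()

def handle_packet(header, data):
    # Decode the raw Ethernet frame into Impacket's object hierarchy
    # (Ethernet -> IP -> TCP/UDP/...) and print a readable summary.
    print(decoder.decode(data))

reader = pcapy.open_live("eth0", 65536, 1, 100)  # device, snaplen, promiscuous, timeout (ms)
reader.setfilter("udp or tcp")                   # ordinary BPF filter expression
reader.loop(10, handle_packet)                   # handle ten packets, then return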
Impacket
http://oss.coresecurity.com/projects/impacket.html

Impacket is a collection of Python classes for working with network protocols. Impacket is mostly focused on providing low-level programmatic access to the packets; however, some protocols (for instance NMB and SMB) are implemented at a higher level as a foundation for other protocols. Packets can be constructed from scratch, as well as parsed from raw data, and the object oriented API makes it simple to work with deep hierarchies of protocols. Impacket is most useful when used together with a packet capture utility or package such as Pcapy, an object oriented Python extension for capturing network packets.

What protocols are featured?
* Ethernet, Linux "Cooked" capture.
* IP, TCP, UDP, ICMP, IGMP, ARP.
* NMB and SMB (high-level implementations).
* DCE/RPC versions 4 and 5, over different transports: UDP (version 4 exclusively), TCP, SMB/TCP, SMB/NetBIOS and HTTP.
* Portions of the following DCE/RPC interfaces: Conv, DCOM, EPM, SAMR, SvcCtl, WinReg.

What tools are included?

We bundle some tools with Impacket which are mostly intended for documentation purposes, but they are worth mentioning as they might be useful even for non-programmers and those who don't plan to develop with this library.

RPCDump
An application that communicates with the Endpoint Mapper interface from the DCE/RPC suite and displays it in a more or less human readable form. This can be used to list services which are remotely available through DCE/RPC, such as the Windows Messenger.

SAMRDump
An application that communicates with the Security Account Manager Remote interface from the DCE/RPC suite and lists system user accounts, available resource shares and other sensitive information exported through this service.

Tracer
A grapher written using Tkinter that displays a parallel coordinates graph of captured traffic. It's very easy to find network usage patterns with this type of graph, and therefore to detect unexpected variations. At the moment Tracer only supports TCP and UDP traffic, but can be easily extended to handle other protocols.

Split
A small tool that can split any pcap-supported capture file into several smaller files, separated by connection. This was developed to address the need to feed several hundred-megabyte captures to Ethereal in a way that didn't take too long to load. At the moment Split only supports TCP streams, but can be easily extended to handle other stream-oriented protocols.

InlineEgg
http://oss.coresecurity.com/projects/inlineegg.html

InlineEgg is a Python module that provides the user with a toolbox of convenient classes for writing small assembly programs. Instead of having to remember confusing assembly mnemonics and how to use complex tools like assemblers and linkers, everything is done the easy way: in Python.

InlineEgg is oriented (but not limited) to developing shellcode (sometimes called eggs) for use in exploits. InlineEgg started separately as a pretty simple idea to fulfill a pretty simple need, but today it's part of CORE IMPACT's egg creation framework. We are releasing it under an open source license for non-commercial use in the hope that you'll find it helpful for your own projects.

----- End forwarded message -----

--
Eugen* Leitl leitl
______________________________________________________________
ICBM: 48.07078, 11.61144            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org         http://nanomachines.net

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available Url : http://zgp.org/pipermail/p2p-hackers/attachments/20031128/6301516e/attachment.pgp From sam at neurogrid.com Fri Nov 28 02:53:42 2003 From: sam at neurogrid.com (Sam Joseph) Date: Sat Dec 9 22:12:36 2006 Subject: [p2p-hackers] Peer-to-Peer Journal (P2PJ) CFP Message-ID: <3FC6B8B6.2060905@neurogrid.com> ------------------------------------------------------------------ CALL FOR PAPERS Peer-to-Peer Journal (http://p2pjournal.com) ------------------------------------------------------------------- The Peer-to-Peer Journal (P2PJ) is a bi-monthly journal that serves as a forum to individuals and companies interested in applying, developing, educating, & advertising in the fields of Peer-to-Peer (P2P) and parallel computing. The P2P Journal is currently accepting submissions of articles, whitepapers, product reviews, discussions, and letters or short communications. Topics of interest include, but are not limited to: Novel Peer-to-Peer applications and systems P2P simulation and network/traffic mapping/topology tools Instant Messaging (IM) Collaborative Computing FileSharing tools and protocols Content Distribution Networks (CDN) Parallel Computing Grid Networks Cluster Architectures For writer's guideline, see http://p2pjournal.com/main/p2p_writers_guideline.pdf Important Dates Submission Deadline for January Issue: 11th December 2003 Submission Deadline for March Issue: 11th February 2004 Please send submissions to editor@p2pjournal.com Best regards, Raymond F. Gao, Editor-in-Chief Daniel Brookshier, Editor Sam Joseph, Editor From sam at neurogrid.com Sat Nov 29 04:12:09 2003 From: sam at neurogrid.com (Sam Joseph) Date: Sat Dec 9 22:12:36 2006 Subject: [Fwd: Re: [Fwd: Re: [p2p-hackers] Peer-to-Peer Journal (P2PJ) CFP]] Message-ID: <3FC81C99.1010904@neurogrid.com> Hi VAB, Here's some feedback from the P2PJournal Editor-in-chief. CHEERS> SAM -------------- next part -------------- An embedded message was scrubbed... From: "Raymond Gao" Subject: RE: [Fwd: Re: [p2p-hackers] Peer-to-Peer Journal (P2PJ) CFP] Date: Fri, 28 Nov 2003 22:03:39 -0600 Size: 1978 Url: http://zgp.org/pipermail/p2p-hackers/attachments/20031129/c6292300/p2p-hackersPeer-to-PeerJournalP2PJCFP.mht