From david at pjort.com Mon Dec 1 17:40:47 2003 From: david at pjort.com (David =?iso-8859-1?Q?G=F6thberg?=) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Slightly ot: eternity service In-Reply-To: <3FB43B8E.3000300@bitzi.com> References: <20031113235600.GA13172@soniq.net> <20031113235600.GA13172@soniq.net> Message-ID: <5.0.2.1.1.20031201181133.009f3620@pop.home.se> >Paul Boehm wrote: >>do any of you know a reliable eternity service? >>(eternal logfile, commercial?, peer2peer) >>i want to have the following sha1sum timestamped: >>1fdfeaf47b5a074f07eba38dfd5dee03382280ee > >Gordon Mohr answered: >There were a couple of companies offering these a while >back, their names elude me. Google turns up another >I hadn't heard of before: Chronostamp. > >You could also take out classified ads in a number of >dated papers/forums that are likely to be reliably >archived. How much do those tiny 1-line ads the NY >Times sometimes squeezes at the bottom of their page >1 stories cost? I usually let my local Notarius Publicus sign any paper I want a legally valid timestamp on. In Sweden (Europe) that costs about 200 SEK / 25 USD / 22 Euro for each paper one wants signed. Notarius Publicus is a special government official or a lawyer assigned by the county that provides "signature service". That service is at least available in most European countries. There is one more nice thing about a Notarius Publicus: he/she can also sign that it really was you and no one else that brought in the paper for signing. (By checking your identity papers, for instance your driver's license.) However, there is a secrecy issue: One often has to leave the paper at Notarius Publicus overnight for processing. And some of them store a copy. Which of course makes it a better proof but makes the data publicly available... This is how I usually do it: If it's only one page and it is not secret I bring in the paper itself for signing. 
If it's several papers (it's expensive to sign several pages) or the data is secret I take the SHA1-hashsum of the file and then write down the hashsum on a paper together with current date, my name, my address, phone number and social security number. Then I take that "hashsum paper" to Notarius Publicus. If it is several files, just zip them first and take the hashsum of the zip-file. Of course store that zip-file on a good backup in a safe location since you will need that as proof. However if you don't have a local Notarius Publicus or it's too expensive I really like the suggestion by Gordon Mohr: "You could also take out classified ads in a number of dated papers" Greetings from freezing Gothenburg, Sweden, Northern Europe, .../David ----------------------------------------------------------- David Göthberg Email: david@pjort.com http://www.david.pjort.com ----------------------------------------------------------- From dmarti at zgp.org Mon Dec 1 18:02:17 2003 From: dmarti at zgp.org (Don Marti) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Slightly ot: eternity service In-Reply-To: <5.0.2.1.1.20031201181133.009f3620@pop.home.se> References: <20031113235600.GA13172@soniq.net> <20031113235600.GA13172@soniq.net> <5.0.2.1.1.20031201181133.009f3620@pop.home.se> Message-ID: <20031201180217.GB3606@zingiber.sandbox.zgp.org> begin David Göthberg quotation of Mon, Dec 01, 2003 at 06:40:47PM +0100: > However if you don't have a local Notarius Publicus or it's too > expensive I really like the suggestion by Gordon Mohr: > "You could also take out classified ads in a number of dated papers" If you can say with a straight face that the content is somehow related to freedom and/or software, and you intend to publish about it eventually, you can send the hash in a letter to the editor to Linux Journal: ljeditor@ssc.com and we'll print it at no charge. If we start to get a lot of these we'll make a "hashes of the month" item. 
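The hashing step David describes (zip the files if there are several, take the SHA1 of the zip, write the hex digest on the paper you bring in) needs nothing beyond the standard library. A minimal Java sketch; the class name and the file argument are illustrative, not part of any tool mentioned in the thread:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HashStamp {
    /** Returns the SHA-1 digest of a stream as a lowercase hex string. */
    public static String sha1Hex(InputStream in)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            md.update(buf, 0, n);
        }
        // Convert the 20-byte digest to 40 hex characters.
        byte[] digest = md.digest();
        StringBuffer hex = new StringBuffer();
        for (int i = 0; i < digest.length; i++) {
            int b = digest[i] & 0xff;
            if (b < 16) hex.append('0');
            hex.append(Integer.toHexString(b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // e.g. java HashStamp documents.zip
        System.out.println(sha1Hex(new FileInputStream(args[0])));
    }
}
```

The printed digest is what goes on the "hashsum paper"; keep the zip file itself backed up, since the hash alone proves nothing without it.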
-- Don Marti http://zgp.org/~dmarti Learn Linux and free software dmarti@zgp.org from the experts in California, USA http://freedomtechnologycenter.org/ From hal at finney.org Tue Dec 2 07:13:02 2003 From: hal at finney.org (Hal Finney) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Slightly ot: eternity service Message-ID: <200312020713.hB27D2S11890@finney.org> You might look at the "PGP Digital Timestamping Service" at http://www.itconsult.co.uk/stamper.htm. They've managed to keep it running since 1995, publishing weekly summaries of the signed hashes to comp.security.pgp.announce as well as to a mailing list. I don't know that it would have any presumptive legal status, but overall the security looks pretty good. The gold standard for timestamping is surety.com, but they are more oriented towards corporate users who need to have various electronic documents timestamped and notarized, and I don't think they will do timestamps for individual users. Hal F. From david at pjort.com Tue Dec 2 23:36:17 2003 From: david at pjort.com (David =?iso-8859-1?Q?G=F6thberg?=) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Slightly ot: eternity service In-Reply-To: <200312020713.hB27D2S11890@finney.org> Message-ID: <5.0.2.1.1.20031203002824.00a024e0@pop.home.se> >The gold standard for timestamping is surety.com, but they are more >oriented towards corporate users who need to have various electronic >documents timestamped and notarized, and I don't think they will do >timestamps for individual users. I checked out www.surety.com. A funny thing is that they say that the basis for the security of their digital online notarisation system is that they publish a hash sum each week in "the Commercial Notices section of the national edition of the New York Times"... 
Greetings from freezing Gothenburg, Sweden, Northern Europe, .../David ----------------------------------------------------------- David Göthberg Email: david@pjort.com http://www.david.pjort.com ----------------------------------------------------------- From david at pjort.com Tue Dec 2 23:57:41 2003 From: david at pjort.com (David =?iso-8859-1?Q?G=F6thberg?=) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Peer-to-Peer Journal (P2PJ) CFP In-Reply-To: <3FC6B8B6.2060905@neurogrid.com> Message-ID: <5.0.2.1.1.20031203005658.00a011e0@pop.home.se> > CALL FOR PAPERS > Peer-to-Peer Journal > (http://p2pjournal.com) >The Peer-to-Peer Journal (P2PJ) is a bi-monthly journal that serves as a >forum to individuals and companies interested in applying, developing, >educating, & advertising in the fields of Peer-to-Peer (P2P) and >parallel computing. The P2P Journal is currently accepting submissions of >articles, whitepapers, product reviews, discussions, and letters or short >communications. >Sam Joseph, Editor Hey Sam. I checked out your "writer's guidelines" and was somewhat shocked. You state that after accepting submission of a paper to your journal, the journal (that is Raymond F. Gao, Editor-in-Chief) gets the copyright of the submitted text. That's pretty silly, especially since you don't even pay for the work and expect people to write about their inventions and research. When my mother hired an artist to do the pictures for her children's books we used a much better way: We signed a contract stating a "split" or "shared" copyright. That is, both the artist and my mother can do what they want with the pictures. Thus both parties can reprint them, sell them and use them in any way they see fit and both are happy! I suggest you should do the same, or people like me will never bother to write for your journal. 
Among other things, your "rule" makes it impossible to send you texts that one has already published in other places and makes it impossible to reuse that material as one sees fit. If I write about my inventions I of course want to be able to reuse any text I write about them. But writing for you is a one-time thing and thus not worth the effort. And don't just say: "This is how it is normally done." Just because it's common to do it like that doesn't make it right. But I do like the thought of a p2p journal! Greetings from freezing Gothenburg, Sweden, Northern Europe, .../David ----------------------------------------------------------- David Göthberg Email: david@pjort.com http://www.david.pjort.com ----------------------------------------------------------- From sam at neurogrid.com Wed Dec 3 02:06:08 2003 From: sam at neurogrid.com (Sam Joseph) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Peer-to-Peer Journal (P2PJ) CFP In-Reply-To: <5.0.2.1.1.20031203005658.00a011e0@pop.home.se> References: <5.0.2.1.1.20031203005658.00a011e0@pop.home.se> Message-ID: <3FCD4510.8040502@neurogrid.com> Hi David, Although I agree with you about the copyright issue, I think that this kind of thing is pretty common with academic journals. I'm not saying that makes it right, but it is true. Every time I get a paper published in a book or journal I have to sign away my rights to the paper. It is a wonderful little earner for the academic publishing industry generally. They have academics working for free to generate the content, and then they charge other academics to get access to the journal. I think it is another one of those fucked up things that we can't do very much about. However I would imagine that the publishers of academic journals would say that there is such low readership that without free content and exorbitant fees to libraries the entire thing would not be profitable, i.e. 
they couldn't make enough money to pay the people who work to actually print the journal. At the moment P2PJournal is not making any money, is not charging you to read the journal, and everyone is putting in their time for free. As it happens I have yet to have any say in the copyright issues. I'm working on trying to get the P2PJournal to serve the best interests of the P2P community. I will pass on your comments to the Editor-in-chief. BTW, I think the standard deal with most journals is that you can publish the work on your own personal website as well - but it would be good to make that explicit. As for a complete copyright share - personally that sounds fine to me, but one could argue that if the same work was completely free to be published anywhere else then why would anyone want to read the P2PJournal? I'm not sure I totally buy the argument myself, but I think the reason that most academic journals and conferences give for holding on to the copyright of the papers they publish is that if they didn't then they would be unable to maintain their readership or attendees. Whether this is true or not is open to question. There is also a sort of contradiction in terms of having a P2PJournal with restrictive copyright rules - but then such is life. Let us see what we can evolve. CHEERS> SAM David Göthberg wrote: > I checked out your "writer's guidelines" and was somewhat shocked. > You state that after accepting submission of a paper to your journal, > the journal (that is Raymond F. Gao, Editor-in-Chief) gets the copyright > of the submitted text. > > That's pretty silly especially since you don't even pay for the work > and expect people to write about their inventions and research. > > When my mother hired an artist to do the pictures to her children's > books we used a much better way: We signed a contract stating a "split" > or "shared" copyright. That is, both the artist and my mother can do > what they want with the pictures. 
Thus both parties can reprint them, > sell them and use them in any way they see fit and both are happy! > > I suggest you should do the same, or people like me will never bother > to write for your journal. Among other things, your "rule" makes it > impossible to send you texts that one has already published in other > places and your rule makes it impossible to reuse that material as > one sees fit. If I write about my inventions I of course want to be > able to reuse any text I write about them. But writing for you is > a one-time thing and thus not worth the effort. > > And don't just say: "This is how it is normally done." Just because > it's common to do it like that doesn't make it right. > > But I do like the thought of a p2p journal! From will.morton at memefeeder.com Thu Dec 4 13:40:42 2003 From: will.morton at memefeeder.com (Will Morton) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] More NATtage (was Poll: Both ends behind NAT) In-Reply-To: <3FC3C52D.4020802@chaosring.org> References: <20031125100125.GK23337@leitl.org> <3FC3B825.5070709@memefeeder.com> <3FC3C52D.4020802@chaosring.org> Message-ID: <3FCF395A.9090706@memefeeder.com> Sean R. Lynch wrote: > Will Morton wrote: > >> Eugen Leitl wrote: >> >>> >>> Some of the principles are here: >>> >>> http://www.alumni.caltech.edu/~dank/peer-nat.html >>> >> Good article. This technique depends on the NAT device in question >> supporting 'loose' masquerading, though; once a NATted host sends out >> a UDP packet to a public host, *any* machine on the Net can get back >> at the NATted machine (if it knows the port), not just the original >> target IP/port combination. That has major security implications >> depending on the port (UDP 137/138, anyone?), and I believe that for >> this reason most NAT devices will not behave in this way - though I'm >> going to check my netgear DSL router now... 
;) > > > My reading of the article doesn't seem to indicate that this is the > case; the article mentions that each peer sends outbound UDP to every > other peer it wants to communicate with. I think that "loose" > masquerading actually indicates that the source port is not changed > unless it's already in use on the masquerade address. This would allow > the technique to work as long as each peer knew the public address and > port of the other peers, because they could all send outbound UDP and > their respective firewalls would treat all the connections as outbound. > OK, that's one serving of humble pie for table 4, please... On rereading the article, you're absolutely right; this technique should work as long as UDP packets sent to multiple targets from the same source port on a NATted host are mapped to the same source port on the NAT device. Enabling NATted devices to talk to each other would help my work massively, and I guess that of a few other people on here too, so I've attached some test java code that emulates the behaviour described in the article. It runs in two modes, first as a 'nameserver' which sits and listens on a port (must be public IP), and second as a client which connects to a nameserver. When the client connects to the nameserver, the nameserver replies with details of all the other clients it knows about (IP and port); the nameserver also sends details of the newly-connecting client to all the other clients who have contacted it in the last 10 seconds (configurable). The clients then try to connect to each other on the addresses/ports given by the nameserver; they each send two packets, 3 seconds apart, as I figure one of the first packets is likely to bounce off the NAT as the connection won't be enabled yet. I've tested this with one NATted machine behind a 'DrayTek Vigor 2600' DSL router, and another behind an OpenBSD box working as a router. Neither box was able to receive UDP packets from the other. 
Checked with a sniffer at both ends behind the NAT (don't currently have the ability to place a sniffer in front, unfortunately), and the packets were definitely being sent. I've left the set-up running, if anyone wants to try connecting. Compile the PeerNatTester class and then run with 'java PeerNatTester nexus.n0de.net 23695'. Some posts to this list would indicate that people already have working implementations of this idea... I'd much appreciate it if anyone could point out where I'm going wrong, or whether I'm just unlucky in my choice of NAT devices. Thanks Will -------------- next part -------------- /* PeerNatTester is a simple program to test the UDP behaviour of NAT devices. For details on the behaviour tested for here, see: http://www.alumni.caltech.edu/~dank/peer-nat.html Questions, comments, flames to will.morton@memefeeder.com (C) Copyright Memefeeder Ltd 2003 All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Memefeeder Ltd nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ import java.io.*; import java.net.*; import java.util.*; /** Executes one of two functions. In 'nameserver' mode, listens on the supplied UDP port and informs clients of other clients' locations. In 'client' mode (the default), connects via UDP to the supplied nameserver, obtains information about other clients and then attempts to send UDP packets to them on the IP address and port number given to them by the nameserver. */ public class PeerNatTester { private final int CLIENTTIMEOUT = 10; // Seconds private final int GAPBETWEENPACKETS = 3; // Seconds private Hashtable clients; private DatagramSocket s; private PeerNatTester(String[] args) { InetAddress addr = null; int port = 0; if(args.length != 2) { printUsage(); } try { port = Integer.parseInt(args[1]); } catch(Exception e) { printUsage(); } if(args[0].equals("-ns")) { runNameServer(port); } else { try { addr = InetAddress.getByName(args[0]); } catch(UnknownHostException e) { System.err.println("Host "+args[0]+" not found\n"); printUsage(); } runClient(addr, port); } } private void printUsage() { System.out.println("Usage: PeerNatTester [-ns <port> | <nameserver> <port>]"); System.exit(1); } private void runClient(InetAddress addr, int port) { clients = new Hashtable(); System.out.println("Starting in client mode"); // Create socket and bind to an unspecified local port System.out.println("Creating socket"); s = null; try { s = new DatagramSocket(); } catch(SocketException e) { System.err.println("Error creating socket"); 
System.exit(1); } // Spawn off a thread that keeps the nameserver informed of our existence ClientThread myThread = new ClientThread(s, addr, port); myThread.start(); DatagramPacket p; byte[] buf; // Wait for details of other clients, connect to them if necessary while(true) { buf = new byte[1024]; p = new DatagramPacket(buf, buf.length); System.out.println("Listening..."); try { s.receive(p); } catch(SocketException e) { System.err.println("Caught socketexception receiving packet: "+e.getMessage()); System.exit(1); } catch(IOException e) { System.err.println("Caught ioexception receiving packet: "+e.getMessage()); System.exit(1); } Date now = new Date(); String recvStr = new String(p.getData()); if(recvStr.substring(0, 6).equals("CLIENT")) { clients.clear(); // Split our string into individual lines String[] lines = recvStr.split("\n"); for(int i=0;i<lines.length;i++) { /* each line is "CLIENT <ip> <port>" */ String[] parts = lines[i].split(" "); if(parts.length >= 3) { try { clients.put(InetAddress.getByName(parts[1]), Integer.valueOf(parts[2].trim())); } catch(Exception e) {} } } if(clients.size() > 0) { for(int i=0;i<2;i++) { Enumeration enum = clients.keys(); while(enum.hasMoreElements()) { InetAddress thisAddr = (InetAddress)enum.nextElement(); int thisPort = ((Integer)clients.get(thisAddr)).intValue(); now = new Date(); System.out.println(now.toString()+": sending packet to peer "+thisAddr.getHostAddress()+":"+thisPort); String sendStr = "j0"; byte[] sendBytes = sendStr.getBytes(); p = new DatagramPacket(sendBytes, sendBytes.length, thisAddr, thisPort); try { s.send(p); } catch(SocketException e) { System.err.println("Caught socketexception sending packet to peer: "+e.getMessage()); System.exit(1); } catch(IOException e) { System.err.println("Caught ioexception sending packet to peer: "+e.getMessage()); System.exit(1); } } if(i == 0) { try { Thread.sleep(GAPBETWEENPACKETS * 1000); } catch(InterruptedException e) { System.err.println("Sleep interrupted!"); } } } } } else { System.out.println(now.toString()+": received packet from peer ("+p.getAddress()+":"+p.getPort()+"): "+recvStr); } } } private void runNameServer(int port) { clients = new Hashtable(); int clientID = 0; System.out.println("Starting in 
nameserver mode"); // Create socket and bind to port number System.out.println("Creating socket"); s = null; try { s = new DatagramSocket(port); } catch(SocketException e) { System.err.println("SocketException while binding to port "+port+": "+e.getMessage()); } byte[] buf; // Loop forever listening for packets DatagramPacket p; while(true) { buf = new byte[32]; p = new DatagramPacket(buf, buf.length); try { s.receive(p); } catch(SocketException e) { System.err.println("Caught socketexception receiving packet: "+e.getMessage()); System.exit(1); } catch(IOException e) { System.err.println("Caught ioexception receiving packet: "+e.getMessage()); System.exit(1); } Date now = new Date(); System.out.println(now.toString()+": Received packet from "+p.getAddress().toString()+":"+p.getPort()); // Get connection details UdpTesterClient thisClient = new UdpTesterClient(clientID++, p.getAddress(), p.getPort(), Calendar.getInstance()); // Prune our list of clients pruneClientList(); if(clients.size() > 0) { // Send details of this client to our existing clients, and collect their details at the same time String thisClientDetails = "CLIENT "+thisClient.Address.getHostAddress()+" "+thisClient.Port+"\n"; buf = thisClientDetails.getBytes(); int numClients = 0; StringBuffer b = new StringBuffer(); Enumeration enum = clients.elements(); while(enum.hasMoreElements()) { UdpTesterClient client = (UdpTesterClient)enum.nextElement(); // Don't send if they're behind the same NAT if(!client.Address.equals(thisClient.Address)) { // Hoover their details numClients++; b.append("CLIENT "); b.append(client.Address.getHostAddress()); b.append(" "); b.append(client.Port); b.append("\n"); // Send notice of this new one System.out.println("Sending details to client "+client.Address.toString()+":"+client.Port); p = new DatagramPacket(buf, buf.length, client.Address, client.Port); try { s.send(p); } catch(SocketException e) { System.err.println("Caught socketexception sending packet to client: 
"+e.getMessage()); System.exit(1); } catch(IOException e) { System.err.println("Caught ioexception sending packet to client: "+e.getMessage()); System.exit(1); } } } // Send the hoovered details to our new client if(numClients > 0) { buf = b.toString().getBytes(); p = new DatagramPacket(buf, buf.length, thisClient.Address, thisClient.Port); try { s.send(p); } catch(SocketException e) { System.err.println("Caught socketexception sending packet to client: "+e.getMessage()); System.exit(1); } catch(IOException e) { System.err.println("Caught ioexception sending packet to client: "+e.getMessage()); System.exit(1); } } } // Put this clients details in our table clients.put(new Integer(thisClient.ID), thisClient); // Go back to listening } } /** Run through our list of clients, get rid of any that have timed out */ private void pruneClientList() { if(clients.size() > 0) { Calendar pruneTime = Calendar.getInstance(); pruneTime.add(Calendar.SECOND, (0-CLIENTTIMEOUT)); ArrayList toRemove = new ArrayList(); // Get the list of clients we want to remove Enumeration enum = clients.elements(); while(enum.hasMoreElements()) { UdpTesterClient client = (UdpTesterClient)enum.nextElement(); if(client.LastHeardFrom.before(pruneTime)) { toRemove.add(client); } } // Remove them UdpTesterClient[] removeArr = new UdpTesterClient[toRemove.size()]; removeArr = (UdpTesterClient[])toRemove.toArray(removeArr); for(int i=0;i Hi, Does anybody have any implementation of peer-to-peer related system for ns-2? I need to simulate new query forwarding model. I tried to use Gnutella simulator for ns-2(http://www.cc.gatech.edu/computing/compass/gnutella/gnusim.html), but it is not suitable for monitoring of queries. If somebody has such implementation, for example gnutella protocol v0.4, which uses simple query flooding mechanism, please send it to me at yeivanch@cc.jyu.fi. 
Thanks, Yevgeniy From angryhickclowN at netscape.net Thu Dec 4 20:19:43 2003 From: angryhickclowN at netscape.net (angryhickclowN@netscape.net) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] UDP file transfer protocol Message-ID: <281F7CE7.33D1957C.51A11433@netscape.net> I've been slowly and steadily implementing my own file transfer protocol over UDP (so it can work through NATs). I've taken a look at TFTP, but it isn't exactly what I want (need multisource downloading). I was wondering if anyone has implemented this already and is willing to share. If not, I will post what I have when I finish it. From Paul.Harrison at infotech.monash.edu.au Thu Dec 4 21:26:55 2003 From: Paul.Harrison at infotech.monash.edu.au (Paul Harrison) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] UDP file transfer protocol In-Reply-To: <281F7CE7.33D1957C.51A11433@netscape.net> Message-ID: On Thu, 4 Dec 2003 angryhickclowN@netscape.net wrote: > I've been slowly and steadily implementing my own file transfer > protocol over UDP (so it can work through NATs). I've taken a look at > TFTP, but it isn't exactly what I want (need multisource downloading). > I was wondering if anyone has implemented this already and is willing > to share. > > If not, I will post what I have when I finish it. > Neat! I had a look around for a streaming-over-UDP library after the NAT thing came up too, and couldn't find anything. It would be useful for all sorts of things. The Circle (thecircle.org.au) may be worth a look. 
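A file transfer over UDP needs its own retransmit logic, since lost requests and lost replies look the same to the receiver. A minimal Java sketch of the request/retransmit loop; the "GET n" wire format, timeouts, and class name are illustrative assumptions, not taken from any of the implementations discussed here:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;

/** Fetches one block of a file over UDP, retransmitting the request on timeout. */
public class BlockFetcher {
    private final DatagramSocket sock;

    public BlockFetcher(DatagramSocket sock) {
        this.sock = sock;
    }

    /** Asks the peer for block `index`; retries up to `maxTries` times. */
    public byte[] fetchBlock(InetAddress peer, int port, int index, int maxTries)
            throws Exception {
        byte[] req = ("GET " + index).getBytes("US-ASCII");
        for (int attempt = 0; attempt < maxTries; attempt++) {
            sock.send(new DatagramPacket(req, req.length, peer, port));
            sock.setSoTimeout(1000 << attempt); // back off: 1s, 2s, 4s, ...
            byte[] buf = new byte[2048];
            DatagramPacket reply = new DatagramPacket(buf, buf.length);
            try {
                // Sketch simplification: any datagram that arrives is
                // treated as the block; a real protocol would tag replies
                // with the block index and sender.
                sock.receive(reply);
                byte[] block = new byte[reply.getLength()];
                System.arraycopy(buf, 0, block, 0, reply.getLength());
                return block;
            } catch (SocketTimeoutException e) {
                // Request or reply lost; loop and retransmit.
            }
        }
        throw new Exception("peer unresponsive for block " + index);
    }
}
```

Multisource download then just means issuing fetchBlock for different block indexes against different peers in parallel; the backoff on timeout is also the crudest form of the throttling the thread mentions.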
It is a wholly UDP RPC-based protocol that can do file transfer, but the file transfer is tangled up in all sorts of other stuff. See in particular the files node.py and file_server.py. The basic approach in Circle is that the computer that wants the file sends out requests for blocks of the file to computers that have it, which then reply with the appropriate block. If it doesn't get a reply within a certain time, the request is sent out again. Several requests are sent in parallel to speed things up. This isn't terribly efficient, since there are all those small request packets being sent out. Things to consider * UDP is unreliable, you need a retransmit mechanism. * You can flood the network quite easily, so it has to throttle back when a packet is lost. regards, Paul Email: pfh@logarithmic.net Current cost to save one life: approx AU$300 (US$200) From greg at electricrain.com Fri Dec 5 00:39:57 2003 From: greg at electricrain.com (Gregory P. Smith) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] [mnet-devel] new ideas for old MetaTracking (fwd from zooko@zooko.com) In-Reply-To: <3FBFC925.70200@charter.net> References: <20031122123047.GR7350@leitl.org> <3FBFBF97.9040406@chaosring.org> <3FBFC925.70200@charter.net> Message-ID: <20031205003957.GS14907@zot.electricrain.com> On Sat, Nov 22, 2003 at 12:37:57PM -0800, coderman wrote: > Sean R. Lynch wrote: > > >>From: "Zooko O'Whielacronx" > >>... > >>I was trying to figure out how to fix this by having some heuristic > >>about when to stop trying to reach a peer. That obviously risks the > >>opposite problem: that a high-quality, reliable peer goes off-line > >>for a day, and when it comes back everyone ignores it because their > >>"stop trying dead peers" heuristic has kicked in. 
> >> > >>I tried to envision a probability distribution that would try absent > >>peers often enough to rediscover re-connected ones but not often > >>enough to waste your time talking to dead ones as the number of > >>permanently-dead peers grows unboundedly. > >> > >What's wrong with exponential backoff? It works for DHCP, DNS, > >ethernet, packet radio, and email; why not mnet? > > One solution I like is a combination of exponential backoff + a > timeout. You may try reconnecting for 48 hours, then remove the peer > entry. You can also apply some randomness or exponential backoff to the timeout. (ex: the longer it's been since you spoke with that peer, or heard contact info with a more recent sequence number than you already have for that peer, the more likely you are to forget it). This prevents situations where a fixed length network glitch causes you to suddenly lose *all* knowledge about anything between you and the things on the other side of the glitch. From zooko at zooko.com Fri Dec 5 12:45:50 2003 From: zooko at zooko.com (Zooko O'Whielacronx) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] [mnet-devel] new ideas for old MetaTracking (fwd from zooko@zooko.com) In-Reply-To: Message from "Gregory P. Smith" of "Thu, 04 Dec 2003 16:39:57 PST." <20031205003957.GS14907@zot.electricrain.com> References: <20031122123047.GR7350@leitl.org> <3FBFBF97.9040406@chaosring.org> <3FBFC925.70200@charter.net> <20031205003957.GS14907@zot.electricrain.com> Message-ID: "Gregory P. Smith" wrote: > > You can also apply some randomness or exponential backoff to the > timeout. ... > This prevents situations where a fixed length network glitch causes > you to suddenly lose *all* knowledge about anything between you and the > things on the other side of the glitch. Hello again, Greg! Amber says "Hi!". So, after the earlier discussion [1] I went ahead and implemented "heartbeat-based MetaTracking", as described. 
I then observed that there was a problem with it that I hadn't considered: who tracks the MetaTrackers? With the simple heartbeat-based system, where you forget about a peer entirely when it fails to say "hello" every 15 minutes, you need to have a separate class of peer about whom you never forget, so that you can use that separate class to bootstrap your connections to the normal peers when you've forgotten about all of the normal peers. We had such a separate class in Mnet v0.6.1 -- bootpages, which are simple text files served via HTTP, such as [2]. However, the set of bootpages that you fetch is a static set. What happens if new bootpages are set up and old bootpages are taken down over time? So we have the same problem again -- we need to dynamically track MetaTrackers. Therefore I wrote a new MetaTracking system which uses exponential back-off instead of forgetting about inactive peers. This is what is now used in Mnet v0.6.2.323-STABLE, now available for your pleasure [3]. I encountered two added complications. The first is what coderman and Greg were just discussing: when to give up on a peer entirely. My reasoning was that I don't care about how long a peer has been inactive, I only care about the need to conserve my local storage. If I could remember every peer that I've ever talked to into perpetuity then I would, but I don't want to use an unbounded amount of storage. Therefore I have a fixed maximum number of peers to remember, and when that number is exceeded I forget about some peers, starting with the ones that have been inactive longest. (By the way, the current hardcoded limitation is 16,384 peers. Once Mnet grows beyond that number of currently active peers, this will cause the behavior of the network to degrade. However, each peer currently takes up more than 1024 bytes of memory, and I want to make sure that Mnet uses less than 32 MB of RAM while running.) 
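The fixed-size peer memory Zooko describes (remember every peer up to a hard cap, then forget the longest-inactive ones first) can be sketched as follows. The class and method names are illustrative, not Mnet's (which is Python); the tiny cap in the usage note stands in for Mnet's 16,384:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Remembers peers up to a fixed cap, evicting the longest-inactive first. */
public class PeerTable {
    private final int maxPeers;
    private final Map peers = new HashMap(); // peerId -> Long(lastActiveMillis)

    public PeerTable(int maxPeers) {
        this.maxPeers = maxPeers;
    }

    /** Records activity from a peer, evicting stale entries if over the cap. */
    public void heardFrom(String peerId, long nowMillis) {
        peers.put(peerId, Long.valueOf(nowMillis));
        if (peers.size() > maxPeers) {
            // Sort ids by last-active time, oldest first, and drop the excess.
            List ids = new ArrayList(peers.keySet());
            Collections.sort(ids, new Comparator() {
                public int compare(Object a, Object b) {
                    long ta = ((Long) peers.get(a)).longValue();
                    long tb = ((Long) peers.get(b)).longValue();
                    return ta < tb ? -1 : (ta > tb ? 1 : 0);
                }
            });
            int excess = peers.size() - maxPeers;
            for (int i = 0; i < excess; i++) {
                peers.remove(ids.get(i));
            }
        }
    }

    public boolean knows(String peerId) {
        return peers.containsKey(peerId);
    }

    public int size() {
        return peers.size();
    }
}
```

With `new PeerTable(2)`, hearing from peers "a", "b", then "c" drops "a", the one inactive longest; nothing is ever forgotten for being old per se, only for losing the race against the storage cap, which is exactly the policy the paragraph argues for.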
The second complication really stumped me for a while -- what to do when *all* peers are inactive? Suppose that all peers have been unresponsive for the last 8 hours. If you are doing exponential backoff, then you will not attempt to contact any of them for the next 8 hours or so. This isn't what I want. I want the node to send out a constant stream of attempts to contact some peer or other, so that it will reconnect to the network without making the user wait. My solution is that whenever there are no peers thought to be active, you promote all peers until at least one is considered to be "active". That way there is a constant stream of messages going out from you to the network as a whole, but when some but not all peers go inactive, then you do exponential backoff on sending messages to the inactive peers. Regards, Zooko [1] http://zgp.org/pipermail/p2p-hackers/2003-November/thread.html#1535 [2] http://web.nilpotent.org/bootpage.txt [3] http://mnet.sourceforge.net/download.php From wesley at felter.org Sat Dec 6 06:24:36 2003 From: wesley at felter.org (Wes Felter) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] UDP file transfer protocol In-Reply-To: <281F7CE7.33D1957C.51A11433@netscape.net> References: <281F7CE7.33D1957C.51A11433@netscape.net> Message-ID: On Dec 4, 2003, at 2:19 PM, angryhickclowN@netscape.net wrote: > I've been slowly and steadily implementing my own file transfer > protocol over UDP (so it can work through NATs). I've taken a look at > TFTP, but it isn't exactly what I want (need multisource downloading). > I was wondering if anyone has implemented this already and is willing > to share. Did you consider HTTP over Airhook?
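[Editor's note: returning to Zooko's MetaTracking post above, the exponential backoff plus his "promote peers when everyone looks dead" rule could be sketched as below. The class name, the base delay, and the policy of promoting only the least-delayed peer are assumptions for illustration; Zooko's wording is to promote peers until at least one counts as active.]

```python
class BackoffTracker:
    """Per-peer exponential backoff, plus a promotion rule: when *no*
    peer looks active, clear the backoff of one peer so that contact
    attempts keep streaming out even after a long global outage."""

    def __init__(self, base=60.0, factor=2.0):
        self.base = base
        self.factor = factor
        self.delay = {}  # peer -> current retry delay; absent = active

    def mark_failed(self, peer):
        # Double the retry delay each time a contact attempt fails.
        self.delay[peer] = self.delay.get(peer, self.base / self.factor) * self.factor

    def mark_active(self, peer):
        self.delay.pop(peer, None)

    def ensure_someone_active(self, all_peers):
        # Promotion: if every peer is backed off, treat the least-delayed
        # one as active again so we immediately try somebody.
        if self.delay and not any(p not in self.delay for p in all_peers):
            self.mark_active(min(self.delay, key=self.delay.get))
```

The effect is the one described in the post: partial outages get per-peer backoff, while a total outage still produces a steady stream of reconnection attempts.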
Wes Felter - wesley@felter.org - http://felter.org/wesley/ From decapita at dti.unimi.it Sat Dec 6 13:14:22 2003 From: decapita at dti.unimi.it (Sabrina De Capitani di Vimercati) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] REMINDER: Workshop on Issues in the Theory of Security (WITS'04) Message-ID: A reminder of the upcoming deadline for WITS. [Apologies if you receive multiple copies of this message] CALL FOR PAPERS 2004 IFIP WG 1.7, ACM SIGPLAN and GI FoMSESS Workshop on Issues in the Theory of Security (WITS'04) April 3 - 4, 2004, Barcelona, Spain co-located with ETAPS'04 http://www.dsi.unive.it/IFIPWG1_7/wits2004.html ---------------------------------------------------------------------- OVERVIEW OF WITS WITS is the official workshop organised by the IFIP WG 1.7 on "Theoretical Foundations of Security Analysis and Design", established to promote the investigation of the theoretical foundations of security, discovering and promoting new areas of application of theoretical techniques in computer security, and supporting the systematic use of formal techniques in the development of security-related applications. The members of the WG hold their annual workshop as an open event to which all researchers working on the theory of computer security are invited. This is the fourth workshop of the series, and is organised in cooperation with ACM SIGPLAN and GI working group FoMSESS. Extended abstracts of work (accepted after selection and) presented at the Workshop are collected and distributed to the participants. There will be no formally published proceedings; however, selected papers will be invited for submission to a special issue of the Journal of Computer Security.
Suggested submission topics include: * formal definition and verification of the various aspects of security: confidentiality, privacy, integrity, authentication and availability * new theoretically-based techniques for the formal analysis and design of cryptographic protocols and their manifold applications (e.g., electronic commerce) * information flow modelling and its application to the theory of confidentiality policies, composition of systems, and covert channel analysis * formal techniques for the analysis and verification of code security, including mobile code security * formal analysis and design for prevention of denial of service * security in real-time/probabilistic systems * language-based security IMPORTANT DATES Paper Submission: 15 December 2003 Author Notification: 25 January 2004 Final version due: 29 February 2004 Workshop: 3-4 April 2004 PROGRAM COMMITTEE David Basin, ETH Zurich Pierpaolo Degano, Università di Pisa Claudia Eckert, TU Darmstadt and Fraunhofer SIT Riccardo Focardi, Università di Venezia Dieter Gollmann, TU Hamburg-Harburg, Germany Roberto Gorrieri, Università di Bologna Joshua Guttman, MITRE Chris Hankin, Imperial College Jan Jürjens, Munich University of Technology Gavin Lowe, Oxford University Cathy Meadows, Naval Research Laboratory Jon Millen, SRI International Peter Ryan (chair), University of Newcastle Thomas Santen, Dresden University of Technology Steve Schneider, Royal Holloway, University of London Paul Syverson, Naval Research Laboratory SUBMISSION INSTRUCTIONS Authors are invited to submit an extended abstract, up to 12 pages long, in LNCS style or with 11pt or larger font and reasonable margins and line spacing. Submissions departing from the instructions above will be rejected independently of their technical merit. Authors have to submit through the web. Alternatively, they may e-mail a .ps file.
If necessary, they may mail a single hard copy of their paper to the program chair; in the latter case, please allow ample time for delivery. Submissions should have the author's full name, address, fax number, and e-mail address. FURTHER INFORMATION The official web page of the conference is at the URL http://www.dsi.unive.it/IFIPWG1_7/wits2004.html Contact person: Peter Ryan School of Computing Science, University of Newcastle Claremont Tower, Newcastle upon Tyne NE1 7RU, UK tel: +44 0191 222 8788, fax: +44 0191 222 8972 e-mail: peter.ryan@ncl.ac.uk http://www.csr.ncl.ac.uk/ From coderman at charter.net Sat Dec 6 22:33:54 2003 From: coderman at charter.net (coderman) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] UDP file transfer protocol In-Reply-To: <281F7CE7.33D1957C.51A11433@netscape.net> References: <281F7CE7.33D1957C.51A11433@netscape.net> Message-ID: <3FD25952.40502@charter.net> angryhickclowN@netscape.net wrote: >I've been slowly and steadily implementing my own file transfer protocol over UDP (so it can work through NATs). I've taken a look at TFTP, but it isn't exactly what I want (need multisource downloading). I was wondering if anyone has implemented this already and is willing to share. > > One of the significant problems you will have with userspace bulk transport via UDP is retransmission and timeouts while avoiding congestion. This is particularly difficult in a windoze environment where timer resolution and task timeslices are much coarser-grained. In addition to airhook (note: i've had some problems with airhook in very lossy environments) i'd suggest checking out swarmcast: http://sourceforge.net/projects/swarmcast/ (mostly dead, but still informative) and some of the other work being done by the reliable multicast charter: http://www.ietf.org/html.charters/rmt-charter.html Both of these utilize FEC encoding to reduce the impact of UDP packet loss on larger transfers.
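[Editor's note: to make the FEC idea concrete, here is the simplest possible toy: one XOR parity packet per group lets the receiver rebuild any single lost packet without a retransmission. Real protocols, including the RMT work cited above, use much stronger erasure codes; this is only an illustration.]

```python
def xor_parity(packets):
    """Byte-wise XOR of equal-length packets.  Sending this one extra
    parity packet lets a receiver rebuild any *single* lost packet in
    the group -- the weakest form of FEC, but it shows why redundancy
    can replace retransmission for isolated UDP losses."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    """XOR the surviving packets with the parity to rebuild the one
    missing packet."""
    return xor_parity(list(received) + [parity])
```

The receiver never asks for the lost datagram back; it recomputes it locally, which is exactly the latency win FEC buys on lossy links.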
If you need low-latency, ordered transmission then you will need to look at other solutions (perhaps airhook). One last note: NATs can cause significant problems for UDP. I've seen a few NATs, mostly windoze, that do not forward UDP datagrams larger than a single ethernet frame (approx. 1500 bytes minus headers), and then there is the difficulty in dealing with symmetric NATs, which restrict where incoming datagrams can be sent from in order to be forwarded on to the client. From wesley at felter.org Sun Dec 7 18:37:38 2003 From: wesley at felter.org (Wes Felter) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] UDP file transfer protocol In-Reply-To: <3FD25952.40502@charter.net> References: <281F7CE7.33D1957C.51A11433@netscape.net> <3FD25952.40502@charter.net> Message-ID: <6F23BD12-28E4-11D8-9CB3-000393A581BE@felter.org> On Dec 6, 2003, at 4:33 PM, coderman wrote: > One last note: NATs can cause significant problems for UDP. I've > seen > a few NATs, mostly windoze, that do not forward UDP datagrams larger > than a single ethernet frame (approx. 1500 bytes minus headers) You are crazy. :-) Didn't anyone ever teach you not to send packets larger than the MTU?
Wes Felter - wesley@felter.org - http://felter.org/wesley/ From eugen at leitl.org Tue Dec 9 13:20:08 2003 From: eugen at leitl.org (Eugen Leitl) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Call for Papers I-NetSec04 (fwd) (fwd from Claudia.Diaz@esat.kuleuven.ac.be) Message-ID: <20031209132008.GR4452@leitl.org> ----- Forwarded message from Claudia Diaz ----- From: Claudia Diaz Date: Mon, 8 Dec 2003 15:40:43 +0100 (CET) To: cryptography@metzdowd.com Subject: Call for Papers I-NetSec04 (fwd) +================================================================== | | C A L L F O R P A P E R S | | Privacy and Anonymity Issues in | Networked and Distributed Systems | |__________________________________________________________________| |__###______#_____#_____________#####________________###___#_______| |___#_______##____#_______#____#_____#______________#___#__#____#__| |___#_______#_#___#__###__####_#________###___###__#_____#_#____#__| |___#_#####_#__#__#_#___#_#_____#####__#___#_#___#_#_____#_#____#__| |___#_______#___#_#_#####_#__________#_#####_#_____#_____#_#######_| |___#_______#____##_#_____#____#_____#_#_____#___#__#___#_______#__| |__###______#_____#__###___###__#####___####__###____###________#__| |__________________________________________________________________| | | I-NetSec04 | | Centre de Congres Pierre Baudis | 23-26 August 2004, Toulouse, France | | 3rd Working Conference on | Privacy and Anonymity in | Networked and Distributed Systems | | Special track on | | *** SEC2004 *** | 19th IFIP International Information Security Conference | http://www.sec2004.org | +================================================================== Conference Scope ---------------- Privacy and anonymity are increasingly important aspects in electronic services. The workshop will focus on these aspects in advanced distributed applications, such as m-commerce, agent-based systems, P2P, ...
Suggested topics include, but are not restricted to: - Models for threats to privacy/anonymity - Models and measures for privacy/anonymity - Secure protocols that preserve privacy/anonymity - Privacy, anonymity and peer-to-peer systems - Privacy, anonymity and mobile agents - Privacy/anonymity in payment systems - Privacy/anonymity in pervasive computing applications - Anonymous communication systems - Legal issues of anonymity - Techniques for enhancing privacy in existing systems The purpose of the special track is to bring together privacy and anonymity experts from around the world to discuss recent advances and new perspectives. I-NetSec'04 seeks submissions from both academia and industry presenting novel research on all theoretical and practical aspects of privacy technologies, as well as experimental studies of fielded systems. Instructions for paper submission --------------------------------- Submitted papers must be original, unpublished, and not submitted to another journal or conference for consideration of publication. Papers must be written in English; they should be at most 16 pages long in total, including bibliography and well-marked appendices. The paper should be intelligible without its appendices. Accepted papers will be presented at the conference and published in the *SEC2004* conference proceedings, by Kluwer Academic Publishers. At least one author of each accepted paper is required to register with the conference and present the paper. To submit a paper, you must first submit an abstract by sending a plain ASCII text email to inetsec04@cs.kuleuven.ac.be, containing the title and abstract of your paper, authors' names, e-mail and postal addresses, phone and fax numbers, and identification of the contact author. The abstract must be received by February 9, 2004. Upon abstract submission, authors will receive a paper number.
To submit the full paper, send an email to the above e-mail address, containing the title, the authors' names, and including the paper number in the subject. Attach to the same message your submission (as a MIME attachment), which should follow the template or LaTeX style files indicated by the publisher (www.wkap.com/ifip/styles). Full papers must be received by February 16, 2004. Papers submitted after this date, or for which no abstract has been received in time, will be discarded without review. To apply for the "Best Student Paper" Award, please check the requirements at www.sec2004.org. Important dates --------------- Submission of abstracts: February 9, 2004 Submission of papers: February 16, 2004 Notification to authors: March 31, 2004 Camera-ready: April 30, 2004 ========================================================= Committees Programme Committee co-Chairs B. De Decker, K.U.Leuven, Belgium E. Van Herreweghen, IBM Research Lab, Zurich, Switzerland Programme Committee S. De Capitani, Univ. of Brescia, Italy Y. Deswarte, LAAS-CNRS, Toulouse, France H. Federath, Univ. of Berlin, Germany S. Fischer-Hübner, Karlstad Univ., Sweden U. Gattiker, EICAR, Aalborg Univ., Denmark K. Martin, Royal Holloway, Univ. London, UK R. Molva, Eurécom, France K. Rannerberg, Univ. of Freiburg, Germany P. Ryan, Univ. of Newcastle, UK P. Samarati, Univ. of Milan, Italy V. Shmatikov, SRI International, USA SEC2004 Conference General Chair Y. Deswarte, LAAS-CNRS, France Local Organizing Committee Marie Dervillers (dervillers@wcc2004.org), LAAS-CNRS, www.laas.fr ------------------------------------------------ The 19th IFIP International Information Security Conference will be held at Centre de Congrès Pierre Baudis in Toulouse (www.centre-congres-toulouse.fr) as part of the 2004 IFIP World Computing Congress (www.wcc2004.org).
--------------------------------------------------------------------- The Cryptography Mailing List Unsubscribe by sending "unsubscribe cryptography" to majordomo@metzdowd.com ----- End forwarded message ----- -- Eugen* Leitl leitl ______________________________________________________________ ICBM: 48.07078, 11.61144 http://www.leitl.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE http://moleculardevices.org http://nanomachines.net -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available Url : http://zgp.org/pipermail/p2p-hackers/attachments/20031209/20174cd0/attachment.pgp From bram at gawth.com Wed Dec 10 19:30:31 2003 From: bram at gawth.com (Bram Cohen) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Codecon 2004 CFP - Just a few days left Message-ID: CodeCon 3.0 February 20-22, 2004 San Francisco CA, USA www.codecon.org Call For Papers CodeCon is the premier showcase of active hacker projects. It is an excellent opportunity for developers to demonstrate their work and keep abreast of what's going on in their community. All presentations must include working demonstrations, ideally open source. Presenters must be one of the active developers of the code in question. We emphasize that demonstrations be of *working* code. CodeCon strongly encourages presenters from non-commercial and academic backgrounds to attend for the purposes of collaboration and the sharing of knowledge by providing free registration to workshop presenters and discounted registration to full-time students. We hereby solicit papers and demonstrations. 
* Papers and proposals due: December 15, 2003 * Authors notified: January 1, 2004 Possible topics include, but are by no means restricted to: * community-based web sites - forums, weblogs, personals * development tools - languages, debuggers, version control * file sharing systems - swarming distribution, distributed search * security products - mail encryption, intrusion detection, firewalls Presentations will be 45 minutes long, with 15 minutes allocated for Q&A. Overruns will be truncated. Submission details: Submissions are being accepted immediately. Acceptance dates are November 15 and December 15. After the first acceptance date, submissions will be either accepted, rejected, or deferred to the second acceptance date. The conference language is English. Ideally, demonstrations should be usable by attendees with 802.11b-connected devices either via a web interface, or locally on Windows, UNIX-like, or MacOS platforms. Cross-platform applications are most desirable. Our venue will be 21+. If you have a specific day on which you would prefer to present, please advise us. To submit, send mail to submissions@codecon.org including the following information: * Project name * url of project home page * tagline - one sentence or less summing up what the project does * names of presenter(s) and urls of their home pages, if they have any * one-paragraph bios of presenters (optional) * project history, no more than a few sentences * what will be done in the project demo * major achievement(s) so far * claim(s) to fame, if any * future plans Program Chair: Bram Cohen General Chair: Len Sassaman Program Committee: * Bram Cohen * Len Sassaman * Jonathan Moore * Jered Floyd * Brandon Wiley * Jeremy Bornstein Sponsorship: If your organization is interested in sponsoring CodeCon, we would love to hear from you.
In particular, we are looking for sponsors for social meals and parties on any of the three days of the conference, as well as sponsors of the conference as a whole, prizes or awards for quality presentations, scholarships for qualified applicants, and assistance with transportation or accommodation for presenters with limited resources. If you might be interested in sponsoring any of these aspects, please contact the conference organizers at codecon-admin@codecon.org. Press policy: CodeCon strives to be a conference for developers, with strong audience participation. As such, we need to limit the number of complimentary passes for non-developer attendees. Press passes are limited to one pass per publication, and must be approved prior to the registration deadline (to be announced later). If you are a member of the press, and interested in covering CodeCon, please contact us early by sending email to press@codecon.org. Members of the press who do not receive press passes are welcome to participate as regular conference attendees. Questions: If you have questions about CodeCon, or would like to contact the organizers, please mail codecon-admin@codecon.org. Please note this address is only for questions and administrative requests, and not for workshop presentation submissions. -Bram Cohen "Markets can remain irrational longer than you can remain solvent" -- John Maynard Keynes From izzy at lina.es.ncku.edu.tw Thu Dec 11 16:45:41 2003 From: izzy at lina.es.ncku.edu.tw (izzy@lina.es.ncku.edu.tw) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Topology Generator Message-ID: <200312111645.hBBGjfdW030817@lina.es.ncku.edu.tw> I have surveyed various internet topology generators for my P2P simulation project. I have found that BU's BRITE (http://www.cs.bu.edu/brite/index.html) seems quite suitable. Does anybody have other suggestions? Are there any disadvantages to BRITE? Thanks for all of your advice. Ian Y.
Lee From gcarreno at gcarreno.org Sun Dec 14 16:15:37 2003 From: gcarreno at gcarreno.org (Gustavo Carreno) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Some doubts Message-ID: <58157956829.20031214161537@gcarreno.org> Hello Peer-to-peer List, IMPORTANT NOTICE: Keep in mind that this writer/reader is a beginner and quite fresh on the matter of P2P, ok, I've been using some "leeching" apps, but that doesn't make me an expert :) From what I've gathered, and please correct me if I'm wrong, all the decentralized P2P protocols are not that decentralized after all, indeed they still have to have some kind of "starmap" to find a dangling thread of the network, right? So my question would be: - Is there any way that a peer can discover his network without the use of a Static IP "server" to hand him a list of possibilities to connect? Now, this is some brainstorming and please be gentle: - Would it be practical, in this RIAA/MPAA context, to implement another transport layer on top of TCP/IP to mess with the detection/sniffing that those companies are doing? - My suggestion would be some parallel on ARP to make the connection between Peer-IP (Not MAC address and IP like the real one) and the rest of the normal stuff, either TCP or UDP just snuggled into the data area of the TCP/IP datagram. - Would it help if this new layer were only TCP or only UDP? Well, I hope I can find someone interested in these matters who has the time to give me an answer, whom I thank in advance !! Gustavo Carreno -=[ "When you know Slackware you know Linux.
When you know Red Hat, all you know is Red Hat" ]=- From jdd at dixons.org Sun Dec 14 19:52:07 2003 From: jdd at dixons.org (Jim Dixon) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Some doubts In-Reply-To: <58157956829.20031214161537@gcarreno.org> Message-ID: <20031214192843.O11657-100000@localhost> On Sun, 14 Dec 2003, Gustavo Carreno wrote: > From what I've gathered, and please correct me if I'm wrong, all the > decentralized P2P protocols are not that decentralized after all, > indeed they still have to have some kind of "starmap" to find a > dangling thread of the network, right? So my question would be: You need some sort of pointer to a node in the network. This can be an IP address published in a well-known place (at a URL, for example), or an entry in a configuration file, or advice from a friend. > - Is there any way that a peer can discover his network without the > use of a Static IP "server" to hand him a list of possibilities to > connect? It would seem to be impossible in principle, except as sketched out above, because it would have to involve broadcast. > Now, this is some brainstorming and please be gentle: > - Would it be practical, in this RIAA/MPAA context, to implement > another transport layer on top of TCP/IP to mess with the > detection/sniffing that those companies are doing? > - My suggestion would be some parallel on ARP to make the connection > between Peer-IP (Not MAC address and IP like the real one) and the > rest of the normal stuff, either TCP or UDP just snuggled into the > data area of the TCP/IP datagram. You would have to use some sort of global broadcast mechanism. This couldn't be broadcast as such. ISPs filter out broadcast traffic. It would be foolish for them to change. No one in his right mind is going to allow someone's misconfigured Windows box to announce itself to the entire planet. The alternative is multicast, which means for all practical purposes MBONE. This is another hard sell.
Broadcasters like the BBC have been promoting multicast for years. RealNetworks have been doing the same. To the best of my knowledge, there has been no movement at all on this front, despite years of pressure. Even if multicast were a global reality, you would need to have very good arguments indeed to persuade ISPs to pick up any kind of p2p announcements. Their legal departments would panic at the very suggestion. > - Would it help if this new layer were only TCP or only UDP? No. -- Jim Dixon jdd@dixons.org tel +44 117 982 0786 mobile +44 797 373 7881 http://jxcl.sourceforge.net Java unit test coverage http://xlattice.sourceforge.net p2p communications infrastructure From gcarreno at gcarreno.org Sun Dec 14 20:57:59 2003 From: gcarreno at gcarreno.org (Gustavo Carreno) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Some doubts In-Reply-To: <20031214192843.O11657-100000@localhost> References: <20031214192843.O11657-100000@localhost> Message-ID: <61174899662.20031214205759@gcarreno.org> Hello Jim, Sunday, December 14, 2003, 7:52:07 PM, you wrote: JD> It would seem to be impossible in principle, except as sketched JD> out above, because it would have to involve broadcast. I thought as much, but really needed an expert confirmation. >> Now, this is some brainstorming and please be gentle: >> - Would it be practical, in this RIAA/MPAA context, to implement >> another transport layer on top of TCP/IP to mess with the >> detection/sniffing that those companies are doing? >> - My suggestion would be some parallel on ARP to make the connection >> between Peer-IP (Not MAC address and IP like the real one) and the >> rest of the normal stuff, either TCP or UDP just snuggled into the >> data area of the TCP/IP datagram. JD> You would have to use some sort of global broadcast mechanism.
Well, the above point should answer this, I mean, the "anchor" server would take care of the "network hook-up" and then every node would act as a router, so to speak, understanding this new layer. OFC this would be implemented as a new stack on top of the actual TCP/IP one and it would be transparent to applications. JD> This couldn't be broadcast as such. ISPs filter out broadcast traffic. JD> It would be foolish for them to change. No one in his right mind is JD> going to allow someone's misconfigured Windows box to announce itself JD> to the entire planet. Yeah, I know that broadcast never travels outside your next router/gateway, but some wishful thinking is always in order :) JD> The alternative is multicast, which means for all practical purposes JD> MBONE. This is another hard sell. Broadcasters like the BBC have been JD> promoting multicast for years. RealNetworks have been doing the same. JD> To the best of my knowledge, there has been no movement at all on this JD> front, despite years of pressure. I've superficially investigated multicasting and MBONE and can't agree more, to everyone's misfortune. JD> Even if multicast were a global reality, you would need to have very good JD> arguments indeed to persuade ISPs to pick up any kind of p2p JD> announcements. Their legal departments would panic at the very suggestion. LOL, yeap, indeed !! >> - Would it help if this new layer were only TCP or only UDP? JD> No. Well, you say no in the scenario where there is no "anchor", but could you re-evaluate your answer with the "anchor" being present and nodes acting as "routers"? Gustavo Carreno -=[ "When you know Slackware you know Linux.
When you know Red Hat, all you know is Red Hat" ]=- From coderman at charter.net Sun Dec 14 20:59:29 2003 From: coderman at charter.net (coderman) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Some doubts In-Reply-To: <58157956829.20031214161537@gcarreno.org> References: <58157956829.20031214161537@gcarreno.org> Message-ID: <3FDCCF31.8050105@charter.net> Gustavo Carreno wrote: >... > From what I've gathered, and please correct me if I'm wrong, all the > decentralized P2P protocols are not that decentralized after all, > indeed they still have to have some kind of "starmap" to find a > dangling thread of the network, right? So my question would be: > - Is there any way that a peer can discover his network without the > use of a Static IP "server" to hand him a list of possibilities to > connect? > I use the following method in a truly flat, peer network I am working on: 1. Bootstrap your peer with one or more known peers. This can be a static IP server, or it can be the IP of your friend who runs a node... 2. Use transitive introduction (peers telling you who their peers are) to continually expand the size of your peer groups. This is similar to the way "host caching" would work under gnutella, as messages routed for peers who have not been seen get added to your list. You touch upon a good point though; initial introduction (step #1) is one of the tricky parts in almost any peer network implementation. > Now, this is some brainstorming and please be gentle: > ... > - My suggestion would be some parallel on ARP to make the connection > between Peer-IP (Not MAC address and IP like the real one) and the > rest of the normal stuff, either TCP or UDP just snuggled into the > data area of the TCP/IP datagram. > - Would it help if this new layer were only TCP or only UDP? > > I'm not sure how this would make initial introduction easier; it would simply be moving the problem to another layer. Could you provide a bit more detail?
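[Editor's note: coderman's two steps -- bootstrap from a few known addresses, then grow by transitive introduction -- can be sketched in a few lines. The class, method names, and addresses below are made up for illustration.]

```python
class Peer:
    """Toy model of the two steps above: seed the peer set from known
    addresses, then grow it by transitive introduction -- every peer
    you reach tells you who *its* peers are."""

    def __init__(self, bootstrap_addrs):
        self.known = set(bootstrap_addrs)  # step 1: static seed list

    def exchange(self, peer_addr, their_known):
        # step 2: merge the address list a live peer hands back
        self.known.add(peer_addr)
        self.known.update(their_known)

p = Peer({"198.51.100.7:6881"})  # e.g. a friend's node (made-up address)
p.exchange("198.51.100.7:6881", {"203.0.113.2:6881", "203.0.113.9:6881"})
```

After one exchange the peer set has already grown beyond the seed, which is the whole point: step 1 only has to succeed once.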
From hal at finney.org Sun Dec 14 21:05:52 2003 From: hal at finney.org (Hal Finney) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Some doubts Message-ID: <200312142105.hBEL5qH08957@finney.org> When the gift.sourceforge.net project started up, it was intended to be an open-source interface to the fasttrack file sharing network, the one used by kazaa. One of the things they had to do was to find nodes that were running kazaa. The strategy they adopted was simple. They searched IP address ranges that were known to have a lot of home and school users, and tried to connect to the ports that were used by that protocol. It turned out that they could usually get a success within a few seconds of trying. And once you found one good node, you could leverage that to find others. For the case where your P2P app is sufficiently popular, like kazaa is, this kind of strategy may be successful. Hal F. From coderman at charter.net Sun Dec 14 21:17:35 2003 From: coderman at charter.net (coderman) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Some doubts In-Reply-To: <200312142105.hBEL5qH08957@finney.org> References: <200312142105.hBEL5qH08957@finney.org> Message-ID: <3FDCD36F.6090801@charter.net> Hal Finney wrote: >... One of the things they had to do was to find nodes that were >running kazaa. The strategy they adopted was simple. They searched IP >address ranges that were known to have a lot of home and school users, and >tried to connect to the ports that were used by that protocol. It turned >out that they could usually get a success within a few seconds of trying. >And once you found one good node, you could leverage that to find others. > >For the case where your P2P app is sufficiently popular, like kazaa is, >this kind of strategy may be successful. > > This is a cool trick! One thing I would mention is that it relies on the peers using a well-known port for communication.
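[Editor's note: the scanning strategy Hal describes reduces to probing candidate addresses for a listener on the network's well-known port. The sketch below is illustrative only; 1214 is the port KaZaA reportedly used, and a real scanner would probe many addresses concurrently.]

```python
import socket

def find_peer(candidates, port=1214, timeout=0.5):
    """Probe candidate addresses for one accepting TCP connections on
    the given well-known port.  Returns the first responsive address,
    or None if nobody answered."""
    for addr in candidates:
        try:
            with socket.create_connection((addr, port), timeout=timeout):
                return addr
        except OSError:
            continue
    return None
```

Once `find_peer` returns a single live node, the transitive-introduction step takes over and the scan can stop, which is why "a success within a few seconds" was enough.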
Some peer nets select random ports when they are started, which would greatly complicate this kind of discovery. From photon at vantronix.net Sun Dec 14 21:45:21 2003 From: photon at vantronix.net (Alexander Taute) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Some doubts In-Reply-To: <200312142105.hBEL5qH08957@finney.org> References: <200312142105.hBEL5qH08957@finney.org> Message-ID: <20031214214521.GB2242@vortex.vantronix.net> On Sun, Dec 14, 2003 at 01:05:52PM -0800, Hal Finney wrote: > They searched IP address ranges that were known to have a lot of home and > school users, and tried to connect to the ports that were used by that > protocol. > For the case where your P2P app is sufficiently popular, like kazaa is, > this kind of strategy may be successful. this may work technically but is in my opinion really an ugly approach. the problem is that all the people who are not running this application get unwanted traffic. so if you are in such an ip range and have a low bandwidth, you will probably have not much fun with your ip when half of the planet is scanning your p2p ports. it may still work with "small" networks like today's, but doesn't really scale for huge networks. if this method really becomes a standard, people will soon need a specific minimum bandwidth only reserved for those connection requests. i have already seen ip's with more than 20% unwanted bloat traffic and i guess it would not take much time to fill the rest with one or two more hype networks.
photon From gcarreno at gcarreno.org Sun Dec 14 21:49:39 2003 From: gcarreno at gcarreno.org (Gustavo Carreno) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Some doubts In-Reply-To: <3FDCCF31.8050105@charter.net> References: <58157956829.20031214161537@gcarreno.org> <3FDCCF31.8050105@charter.net> Message-ID: <127177999489.20031214214939@gcarreno.org> Hello coderman, Sunday, December 14, 2003, 8:59:29 PM, you wrote: c> I use the following method in a truly flat, peer network I am working on: c> 1. Bootstrap your peer with one or more known peers. This can be c> a static IP server, or it can be the IP of your friend who runs a node... c> 2. Use transitive introduction (peers telling you who their peers are) to c> continually expand the size of your peer groups. This is similar to the c> way "host caching" would work under gnutella, as messages routed c> for peers who have not been seen get added to your list. I'm talking about this anchor "need", cuz from the superficial reading I've been doing and looking at some interesting graphics on how a generic P2P network should work, they all talk about the second stage, when you're already hooked up to the network, which is good, this explains the actual P2P network. But I've found nearly nothing demystifying this 1st step. Ok, granted, if you want to have a popular network this means that you'll need some freeware/open-source software, leaving a platform for possible sniffing from the dreaded MPAA/RIAA. Even the suggestion of looking for a known-to-be popular network does not apply in an environment where you want to be dependency-free. Acknowledging that ISPs will block broadcast, and that multicast is far from popular (even less so if any P2P could profit from it): - What is the only layer, from all inside the TCP/IP stack (UDP, TCP, ICMP, ARP, etc.), that could traverse ISPs' routers and never be offensive? Or, to get a bit lower, Ethernet, is it possible there?
c> You touch upon a good point though; initial introduction (step #1) is c> one of the tricky parts in almost any peer network implementation. Well, thanks. As I've stated before, it's something taken as granted, so why not question it? >> Now, this is some brain storming and please be gentle: >> ... >> - My suggestion would be some parallel on ARP to make the connection >> between Peer-IP (Not MAC address and IP like the real one) and the >> rest of the normal stuff, either TCP or UDP just snuggled into the >> data area of the TCP/IP datagram. >> - Would it help that this new layer would be only TCP or only UDP ? >> >> c> I'm not sure how this would make initial introduction easier; it would c> simply be moving the problem to another layer. Could you provide c> a bit more detail? It won't solve the "anchor" problem; this was more like a try at fooling any possible sniffer, but my argument dies if it's implemented on an open-software basis; well, even on a proprietary network it wouldn't be that hard to crack for resourceful parties. This idea came while I was writing the initial mail, so it was only an inception idea crying for some guru expertise to say yes or no. Gustavo Carreno -=[ "When you know Slackware you know Linux. When you know Red Hat, all you know is Red Hat" ]=- From gcarreno at gcarreno.org Sun Dec 14 21:51:21 2003 From: gcarreno at gcarreno.org (Gustavo Carreno) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Some doubts In-Reply-To: <200312142105.hBEL5qH08957@finney.org> References: <200312142105.hBEL5qH08957@finney.org> Message-ID: <143178101165.20031214215121@gcarreno.org> Hello Hal, Sunday, December 14, 2003, 9:05:52 PM, you wrote: HF> For the case where your P2P app is sufficiently popular, like kazaa is, HF> this kind of strategy may be successful. Not a bad idea while kazaa stands up, I'll grant them that :) Gustavo Carreno -=[ "When you know Slackware you know Linux.
When you know Red Hat, all you know is Red Hat" ]=- From mfreed at cs.nyu.edu Sun Dec 14 22:17:55 2003 From: mfreed at cs.nyu.edu (Michael J. Freedman) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Some doubts In-Reply-To: <20031214214521.GB2242@vortex.vantronix.net> References: <200312142105.hBEL5qH08957@finney.org> <20031214214521.GB2242@vortex.vantronix.net> Message-ID: On Sun, 14 Dec 2003, Alexander Taute wrote: > On Sun, Dec 14, 2003 at 01:05:52PM -0800, Hal Finney wrote: > > > They searched IP address ranges that were known to have a lot of home and > > school users, and tried to connect to the ports that were used by that > > protocol. > > > For the case where your P2P app is sufficiently popular, like kazaa is, > > this kind of strategy may be successful. > > this may work technically, but in my opinion it is a really ugly approach. the There are at least two main problems with this approach: 1) It is only reasonable for very widely-deployed systems 2) It is guaranteed to generate abuse complaints, as firewalls and IDSs will likely raise red flags for port-scanning if they notice such SYN-RST pairs. An alternative possibility for bootstrapping, to get at least some fault-tolerance, is to use the existing DNS infrastructure, where you can register at least several A records to a given hostname (and the clients' local resolvers will round-robin through them), and branch out further below if necessary. This is what I actually do for Coral (http://www.scs.cs.nyu.edu/coral/) Cheers, --mike ----- "Not all those who wander are lost."
www.michaelfreedman.org From Paul.Harrison at infotech.monash.edu.au Mon Dec 15 23:45:04 2003 From: Paul.Harrison at infotech.monash.edu.au (Paul Harrison) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Some doubts In-Reply-To: <127177999489.20031214214939@gcarreno.org> Message-ID: On Sun, 14 Dec 2003, Gustavo Carreno wrote: > I'm talking about this anchor "need", because from the superficial > reading I've been doing and looking at some interesting graphics on > how a generic P2P network should work, they all talk about the > second stage, when you're already hooked up to the network, which is > good: this explains the actual P2P network. > But I've found next to nothing demystifying this 1st step. A simple solution would be to include a list of IP addresses that are often on the network with the software itself. So the network effectively has many, many anchors. I imagine networks such as Gnutella, Freenet and the various DHTs do this. > > Ok, granted, if you want to have a popular network this means that > you'll need some freeware/open-source software, leaving a platform > for possible sniffing from the dreaded MPAA/RIAA. > An open source network might be password-protected and encrypted; you could have multiple networks using the same software. It could be arranged so that without the password it isn't possible to even tell that a computer was using that particular software and not, say, some innocuous instant messaging software that also used encryption. > Even the suggestion of looking for a known-to-be popular network > does not apply in an environment where you want to be > dependency-free. > > Acknowledging that ISPs will block broadcast and multicast (it's far > from popular, even less so if any P2P could profit from it): > - What is the only layer, from all inside the TCP/IP stack (UDP, TCP, > ICMP, ARP, etc.), that could traverse ISPs' routers and never be > offensive? Or, to go even a bit lower: Ethernet, is it possible there?
As I understand it, ARP doesn't get routed beyond the local network, since it maps local IP address -> Ethernet address. Raw Ethernet similarly will not get routed. TCP, UDP, ICMP will get through the routers. A new protocol built on IP probably will too. A protocol can be built to look like another protocol. For example, it could look like HTTP traffic (Gnutella and SOAP do this, I think). But Eve may eventually catch on to this trick. If you have a lot of wireless users within a small area, you can set up a "mesh" and do away with the ISP entirely. > > It won't solve the "anchor" problem; this was more like a try at > fooling any possible sniffer, but my argument dies if it's > implemented on an open-software basis; well, even on a proprietary > network it wouldn't be that hard to crack for resourceful parties. > Even on a closed source network, one could trivially run the software and then look at what other computers it's connecting to. It's not possible to hide the IP addresses of all people in a network that anyone can join (though there have been some attempts in the more paranoid networks to limit the number of IPs one can extract). regards, Paul pfh@logarithmic.net | http://www.logarithmic.net/pfh Current cost to save one life: AU$300 / US$200 www.unicef.org www.oxfam.org From photon at vantronix.net Tue Dec 16 03:31:19 2003 From: photon at vantronix.net (Alexander Taute) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Some doubts In-Reply-To: References: <127177999489.20031214214939@gcarreno.org> Message-ID: <20031216033119.GA4420@vortex.vantronix.net> On Tue, Dec 16, 2003 at 10:45:04AM +1100, Paul Harrison wrote: > Even on a closed source network, one could trivially run the software and > then look at what other computers it's connecting to. It's not possible to > hide the IP addresses of all people in a network that anyone can join > (though there have been some attempts in the more paranoid networks to > limit the number of IPs one can extract).
this is not that difficult. for example you can generate random request ids for every request your node routes and send them to the next hop with the request itself. when the next hop receives that request it generates a new connection id and sets up a reference to the id of the source. that way your answers only have to refer to the connection id you got from your direct peer. it then knows, through the reference, to which of its peers it has to route the answer, and which connection id it has to refer to. the result is that you know only the ips of your direct peers and cannot track other nodes in the network anymore. you don't even know if your direct peer, where the request came from, is the origin of the request or if it has only been routed through it. paranoid enough? ;) the only problems i know of with this approach appear when routes break down and have to be reorganized before the answer has reached the origin of the request. but this can be solved, too. photon From jdd at dixons.org Tue Dec 16 15:15:44 2003 From: jdd at dixons.org (Jim Dixon) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Some doubts In-Reply-To: Message-ID: <20031216145816.J11657-100000@localhost> On Sun, 14 Dec 2003, Michael J. Freedman wrote: > An alternative possibility for bootstrapping, to get at least some > fault-tolerance, is to use the existing DNS infrastructure, where you can > register at least several A records to a given hostname (and the clients' > local resolvers will round-robin through them), and branch out further > below if necessary. Very good point, especially if use is made of the TXT record. This allows you to associate arbitrary strings with names in the domain name system. You could, for example, add this to the DNS entry for freernet-gateway.example.com:

freernet-gateway IN TXT "p2p='freernet'"
                 IN TXT "transport='tcp'"
                 IN TXT "protocol='dsa'"
                 IN TXT "pubkey='0123456789abcdef'"

In other words, "I'm a node on a Freernet network.
Talk to me using TCP; my DSA public key is 0123456789abcdef". It would also be possible for kindly people to allow this sort of approach to be used for transmitting messages anonymously. You could either send (short) messages entirely through the DNS using the TXT resource records, or say store the encrypted document on a p2p network and put the node ID (20-byte SHA-1 digest) and the encryption key in the DNS. Lots of variations possible. -- Jim Dixon jdd@dixons.org tel +44 117 982 0786 mobile +44 797 373 7881 http://jxcl.sourceforge.net Java unit test coverage http://xlattice.sourceforge.net p2p communications infrastructure From clint at thestaticvoid.net Wed Dec 17 10:01:35 2003 From: clint at thestaticvoid.net (Clint Heyer) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Some doubts In-Reply-To: <20031216200003.BE4C63FC9E@capsicum.zgp.org> Message-ID: <2003121711135.559185@platypus> Akin to this hijacking (for want of a better word) of an existing protocol, Naanou used hijacked HTML/HTTP for peer discovery. The tags could be embedded within, say, HTML comment tags to render them invisible to normal browsers. On startup, Naanou downloads a series of predefined URLs in which it looks for peer clues. I wanted to extend it a bit further to exploit hard-to-shut-down services such as Google. Given a query string, the client could fetch n results, download them, and look for clues there. In effect, this makes the distribution of ids, and the mechanisms to find them, much more decentralized. This doesn't completely solve the problem, however, as one still needs either predefined URLs or a query string, so it also did a local subnet broadcast to find nearby nodes. I also worry that public 'reflector' services as used by many P2P networks are too visible a target for attack (legal or otherwise). There is potential, I think, for using steganography to further hide peer ids within an existing protocol or data stream.
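The "peer clues in HTML comments" trick described above could look something like the following sketch. The comment format (`p2p-peer: host:port`) is invented here purely for illustration; the thread does not specify what Naanou actually embeds.

```python
import re

# Hypothetical clue format: <!-- p2p-peer: 192.0.2.10:6881 -->
CLUE = re.compile(r"<!--\s*p2p-peer:\s*([\d.]+):(\d+)\s*-->")

def find_peer_clues(html):
    """Extract (ip, port) pairs hidden in HTML comments on a page."""
    return [(ip, int(port)) for ip, port in CLUE.findall(html)]

# A page that renders as ordinary content in any browser but carries
# bootstrap hints for clients that know to look for them.
page = """<html><body>
<p>An ordinary-looking page.</p>
<!-- p2p-peer: 192.0.2.10:6881 -->
<!-- p2p-peer: 198.51.100.7:4662 -->
</body></html>"""
```

A client would fetch its predefined URLs (or search-engine results, as suggested above) and run `find_peer_clues` over each body until it collects enough live addresses.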
If the decryption of these hidden clues was made computationally expensive (to make mass scanning difficult), and assuming they could be hidden in, say, any image within a website, it would be harder for an attacker to point the finger at a certain host with the claim that "you're enabling people to connect to this P2P network". If many users then deployed these hidden tags within their websites, existing web search sites could be used to do peer discovery, and both the client looking for clues and the web site would have a certain level of deniability for their actions. This is far from a solution, of course, as whatever knowledge a client has to locate these sites, an attacker could also utilise to focus a search. cheers, .clint On Tue, 16 Dec 2003 12:00:03 -0800 (PST), p2p-hackers-request@zgp.org wrote: Message: 3 > Date: Tue, 16 Dec 2003 15:15:44 +0000 (GMT) > From: Jim Dixon > Subject: Re: [p2p-hackers] Some doubts > To: "Peer-to-peer development." > Message-ID: <20031216145816.J11657-100000@localhost> > Content-Type: TEXT/PLAIN; charset=US-ASCII > > > On Sun, 14 Dec 2003, Michael J. Freedman wrote: > > >> An alternative possibility for bootstrapping, to get at least >> some fault-tolerance, is to use the existing DNS infrastructure, >> where you can register at least several A records to a given >> hostname (and the clients' local resolvers will round-robin >> through them), and branch out further below if necessary. >> > > Very good point, especially if use is made of the TXT record. This > allows you to associate arbitrary strings with names in the domain > name system. You could, for example, add this to the DNS entry for > freernet-gateway.example.com > > freernet-gateway IN TXT "p2p='freernet'" > IN TXT "transport='tcp'" > IN TXT "protocol='dsa'" > IN TXT "pubkey='0123456789abcdef'" > > > In other words, "I'm a node on a Freernet network. Talk to me > using TCP; my DSA public key is 0123456789abcdef".
> > It would also be possible for kindly people to allow this sort of > approach to be used for transmitting messages anonymously. You > could either send (short) messages entirely through the DNS using > the TXT resource records, or say store the encrypted document on a > p2p network and put the node ID (20-byte SHA-1 digest) and the > encryption key in the DNS. Lots of variations possible. ______________________________________ www: http://TheStaticVoid.net From arachnid at notdot.net Thu Dec 18 06:18:16 2003 From: arachnid at notdot.net (Nick Johnson) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] High availability p2p system - feedback requested Message-ID: <3FE146A8.4090402@notdot.net> I have an idea for a P2P file transmission system that could potentially provide a very high availability and high spread rate, resulting in a more efficient spread of data for popular files, and a lower chance of file unavailability for less popular files. The scheme is detailed below, along with some caveats, and I would appreciate some feedback. The scheme is inspired by, and potentially based on the Threshold Scheme[1] originally developed by Shamir for the sharing of secret keys in cryptography. That system works by setting the secret key as the constant term of an n-1 degree polynomial, then generating all the other terms randomly. Keyshares can then be generated by evaluating the polynomial at different values of x, and returning as the keyshare the x,y pair. The original polynomial, and hence the key, can be reconstructed if n keyshares are available. Adapting this for the purpose of P2P involves splitting the file into pieces. Then, each piece is evaluated as a polynomial, with each set of 4 bytes treated as a 32-bit integer term. When a peer requests a chunk, a random value of x is chosen, and each of the pieces is evaluated as a polynomial at that x value.
The chunk returned to the peer consists of the chosen x value and the results of evaluating the polynomial for each piece of the file. If a peer requests a chunk and the requestee is still downloading, it is simply passed a chunk that it has downloaded, instead of having one generated. To reconstruct the file, a peer must have as many chunks as the original file had pieces. Each piece can then be reconstructed by taking the corresponding pairs of x and y values for that piece and using matrix math to obtain the original terms. The advantages of this system are due to the fact that any peer with the entire file can generate unique and interchangeable chunks in such a way that any other peer with a sufficient number of different chunks, regardless of which they are, can reassemble the file. This means that two peers that have not completed downloading have a much higher chance of each having pieces the other does not than would occur in a normal system, and in a situation where there is limited availability, there is a much higher chance than normal that there will be sufficient pieces for a peer to reconstruct the original file and begin generating new pieces. The common situation on standard P2P networks with all of a file available except a single part should never arise. The disadvantages are due to the computational complexity of the system used. In order for a peer with the complete file to generate a chunk, it must read in and process the entire file, creating and evaluating complex polynomial equations for every piece. Generating multiple chunks at once to anticipate demand can substantially reduce this overhead, however. Reconstruction potentially requires even more resources - the peer that has sufficient chunks to reconstruct the file must assemble matrices by reading the first word of each chunk, solving the matrix, and repeating for the second word, and so on.
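The generation and reconstruction steps described above can be sketched as follows. This is a toy working modulo a small prime (so that polynomial interpolation is exact) and uses Lagrange interpolation rather than explicit matrix solving; handling real 32-bit words would need a prime larger than 2^32. It illustrates the idea, not Nick's actual implementation.

```python
P = 2**31 - 1  # Mersenne prime; every 4-byte "word" must be < P in this toy

def eval_poly(coeffs, x):
    """Evaluate a polynomial (coeffs[0] = constant term) at x, mod P."""
    result = 0
    for c in reversed(coeffs):         # Horner's rule
        result = (result * x + c) % P
    return result

def make_chunk(pieces, x):
    """A chunk is x plus one polynomial evaluation per piece of the file."""
    return (x, [eval_poly(piece, x) for piece in pieces])

def mul_by_linear(poly, a):
    """Multiply a coefficient list by (x - a), mod P."""
    out = [0] * (len(poly) + 1)
    for k, c in enumerate(poly):
        out[k] = (out[k] - a * c) % P
        out[k + 1] = (out[k + 1] + c) % P
    return out

def reconstruct_piece(chunks, piece_index, m):
    """Recover the m coefficients of one piece from m chunks with
    distinct x values, via Lagrange interpolation mod P."""
    xs = [x for x, _ in chunks[:m]]
    ys = [vals[piece_index] for _, vals in chunks[:m]]
    coeffs = [0] * m
    for i in range(m):
        basis, denom = [1], 1          # basis = prod_{j != i} (x - x_j)
        for j in range(m):
            if j != i:
                basis = mul_by_linear(basis, xs[j])
                denom = (denom * (xs[i] - xs[j])) % P
        scale = ys[i] * pow(denom, P - 2, P) % P  # modular inverse (Fermat)
        for k in range(m):
            coeffs[k] = (coeffs[k] + scale * basis[k]) % P
    return coeffs
```

Any m chunks with distinct x values suffice, which is exactly the interchangeability property the advantages paragraph relies on.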
Obviously, reading single words from disk is a very inefficient operation, and the alternative, buffering all the chunks in memory while reconstruction occurs, is even less appealing for large files. Finally, in situations where there are simply not enough unique chunks to reconstruct the original file, none of the file is available. Other P2P systems at least provide all of the file that is downloaded, rather than nothing. I don't see this as much of an issue, however, as it is relatively rare to find a partial file useful. The use of polynomial equations is not the only way - it should be possible to achieve the same effect using Reed-Solomon codes[2], possibly with lower processing requirements. While this idea has substantial problems, namely the high processing and storage complexity, I believe it may have potential as a way to dramatically increase the availability and propagation speed of files on a P2P network. Any feedback, suggestions, improvements, etc. are hugely appreciated. [1] http://szabo.best.vwh.net/secret.html [2] http://www.cs.utk.edu/~plank/plank/papers/CS-96-332.html From justin at chapweske.com Thu Dec 18 06:43:02 2003 From: justin at chapweske.com (Justin Chapweske) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] High availability p2p system - feedback requested In-Reply-To: <3FE146A8.4090402@notdot.net> References: <3FE146A8.4090402@notdot.net> Message-ID: <1071729782.16809.76.camel@bog> Hi Nick, This sounds exactly like Swarmcast. You can even play around with some very old code at (http://sf.net/projects/swarmcast/). Freenet is also now using the FEC library that we developed for Swarmcast, so some of those aspects are now available in that system as well.
Thanks, -Justin On Thu, 2003-12-18 at 13:18, Nick Johnson wrote: > I have an idea for a P2P file transmission system that could potentially > provide a very high availability and high spread rate, resulting in a > more efficient spread of data for popular files, and a lower chance of > file unavailability for less popular files. The scheme is detailed > below, along with some caveats, and I would appreciate some feedback. > > The scheme is inspired by, and potentially based on the Threshold > Scheme[1] originally developed by Shamir for the sharing of secret keys > in cryptography. That system works by setting the secret key as the > constant term of an n-1 degree polynomial, then generating all the other > terms randomly. Keyshares can then be generated by evaluating the > polynomial at different values of x, and returning as the keyshare the > x,y pair. The original polynomial, and hence the key, can be > reconstructed if n keyshares are available. > Adapting this for the purpose of P2P involves splitting the file into > pieces. Then, each piece is evaluated as a polynomial, with each set of > 4 bytes treated as a 32-bit integer term. When a peer requests a chunk, a > random value of x is chosen, and each of the pieces is evaluated as a > polynomial at that x value. The chunk returned to the peer consists of > the chosen x value and the results of evaluating the polynomial for each > piece of the file. If a peer requests a chunk and the requestee is still > downloading, it is simply passed a chunk that it has downloaded, instead > of having one generated. > To reconstruct the file, a peer must have as many chunks as the original > file had pieces. Each piece can then be reconstructed by taking the > corresponding pairs of x and y values for that piece and using matrix > math to obtain the original terms.
> The advantages of this system are due to the fact that any peer with the > entire file can generate unique and interchangeable chunks in such a way > that any other peer with a sufficient number of different chunks, > regardless of which they are, can reassemble the file. This means that > two peers that have not completed downloading have a much higher chance > of each having pieces the other does not than would occur in a normal > system, and in a situation where there is limited availability, there is > a much higher chance than normal that there will be sufficient pieces > for a peer to reconstruct the original file and begin generating new > pieces. The common situation on standard P2P networks with all of a file > available except a single part should never arise. > The disadvantages are due to the computational complexity of the system > used. In order for a peer with the complete file to generate a chunk, it > must read in and process the entire file, creating and evaluating > complex polynomial equations for every piece. Generating multiple chunks > at once to anticipate demand can substantially reduce this overhead, > however. > Reconstruction potentially requires even more resources - the peer that > has sufficient chunks to reconstruct the file must assemble matrices by > reading the first word of each chunk, solving the matrix, and repeating > for the second word, and so on. Obviously, reading single words from > disk is a very inefficient operation, and the alternative, buffering all > the chunks in memory while reconstruction occurs, is even less appealing > for large files. > Finally, in situations where there are simply not enough unique chunks > to reconstruct the original file, none of the file is available. Other > P2P systems at least provide all of the file that is downloaded, rather > than nothing. I don't see this as much of an issue, however, as it is > relatively rare to find a partial file useful.
> The use of polynomial equations is not the only way - it should be > possible to achieve the same effect using Reed-Solomon codes[2], > possibly with lower processing requirements. > > While this idea has substantial problems, namely the high processing and > storage complexity, I believe it may have potential as a way to > dramatically increase the availability and propagation speed of files on > a P2P network. Any feedback, suggestions, improvements, etc. are hugely > appreciated. > > [1] http://szabo.best.vwh.net/secret.html > [2] http://www.cs.utk.edu/~plank/plank/papers/CS-96-332.html > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > _______________________________________________ > Here is a web page listing P2P Conferences: > http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences From decoy at iki.fi Thu Dec 18 13:45:59 2003 From: decoy at iki.fi (Sampo Syreeni) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] High availability p2p system - feedback requested In-Reply-To: <3FE146A8.4090402@notdot.net> References: <3FE146A8.4090402@notdot.net> Message-ID: On 2003-12-18, Nick Johnson uttered: >The advantages of this system are due to the fact that any peer with the >entire file can generate unique and interchangeable chunks in such a way >that any other peer with a sufficient number of different chunks, >regardless of which they are, can reassemble the file. This idea has surfaced before, only the starting point has usually been erasure correcting coding, not secret sharing. Probably the best example is Michael Luby's (of Digital Fountain and Berkeley, http://www.digitalfountain.com/ , http://www.icsi.berkeley.edu/~luby/ ) work with reliable multicast and Tornado codes. >The disadvantages are due to the computational complexity of the system >used. Yes. I believe Luby's low density codes are rather more efficient than most alternatives.
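As a minimal illustration of the erasure-coding idea the thread keeps returning to: plain XOR parity gives n = k + 1 shares, any k of which recover the data. Tornado and Reed-Solomon codes generalise this to arbitrary n; this sketch is a toy of mine, not Luby's construction.

```python
def encode_with_parity(blocks):
    """Return the k data blocks plus one XOR parity block (n = k + 1)."""
    parity = 0
    for b in blocks:
        parity ^= b
    return blocks + [parity]

def recover(shares):
    """Given n shares with exactly one replaced by None, recover it.

    The XOR of all n shares is zero by construction, so the missing
    share equals the XOR of the survivors."""
    value = 0
    for s in shares:
        if s is not None:
            value ^= s
    return value
```

Here each "block" is a single integer for brevity; real codecs apply the same XOR word-by-word across equal-sized buffers.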
There's also an implicit tradeoff between error tolerance and redundancy which doesn't seem to be variable with polynomial secret sharing schemes. Once again codes based on random graphs (cf. e.g. http://citeseer.nj.nec.com/luby98analysis.html ) solve the problem. >Finally, in situations where there are simply not enough unique chunks to >reconstruct the original file, none of the file is available. This is easy to fix: divide the file into fixed size pages and encode them separately. -- Sampo Syreeni, aka decoy - mailto:decoy@iki.fi, tel:+358-50-5756111 student/math+cs/helsinki university, http://www.iki.fi/~decoy/front openpgp: 050985C2/025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2 From wesley at felter.org Thu Dec 18 17:23:52 2003 From: wesley at felter.org (Wes Felter) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] High availability p2p system - feedback requested In-Reply-To: <3FE146A8.4090402@notdot.net> References: <3FE146A8.4090402@notdot.net> Message-ID: Besides Swarmcast, see "Rateless Codes and Big Downloads" for a detailed analysis: http://citeseer.ist.psu.edu/566143.html Wes Felter - wesley@felter.org - http://felter.org/wesley/ From greg at electricrain.com Fri Dec 19 20:34:42 2003 From: greg at electricrain.com (Gregory P. Smith) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] High availability p2p system - feedback requested In-Reply-To: <1071729782.16809.76.camel@bog> References: <3FE146A8.4090402@notdot.net> <1071729782.16809.76.camel@bog> Message-ID: <20031219203442.GK21241@zot.electricrain.com> On Thu, Dec 18, 2003 at 12:43:02AM -0600, Justin Chapweske wrote: > Hi Nick, > > This sounds exactly like Swarmcast. You can even play around with some > very old code at (http://sf.net/projects/swarmcast/). > > Freenet is also now using the FEC library that we developed for > Swarmcast, so some of those aspects are now available in that system as > well. 
> > Thanks, > > -Justin I believe Mnet is also using it these days (someone from mnet correct me if that's wrong). Mojonation & Mnet originally started out with their own similar FEC-ish library, but your library was better... From greg at electricrain.com Fri Dec 19 20:49:16 2003 From: greg at electricrain.com (Gregory P. Smith) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] High availability p2p system - feedback requested In-Reply-To: References: <3FE146A8.4090402@notdot.net> Message-ID: <20031219204916.GL21241@zot.electricrain.com> On Thu, Dec 18, 2003 at 03:45:59PM +0200, Sampo Syreeni wrote: > >Finally, in situations where there are simply not enough unique chunks to > >reconstruct the original file, none of the file is available. > > This is easy to fix: divide the file into fixed size pages and encode them > separately. This was one of the major flaws we had in mojonation. The file was divided into fixed-size chunks, each of which was then encoded so that only M of N data blocks were needed to reconstruct that chunk. The problem with this is that quite often it is useless to be able to recover only part of the data file. The larger the data file (ie: the more chunks it was broken into) the exponentially less likely you were to be able to reassemble all of the chunks (even if your probability of getting a chunk is an excellent 0.999, getting 100 chunks is 0.999^100 which is not a pretty number). -g From zooko at zooko.com Fri Dec 19 22:02:00 2003 From: zooko at zooko.com (Zooko O'Zooko) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] High availability p2p system - feedback requested In-Reply-To: Message from "Gregory P. Smith" of "Fri, 19 Dec 2003 12:34:42 PST."
<20031219203442.GK21241@zot.electricrain.com> References: <3FE146A8.4090402@notdot.net> <1071729782.16809.76.camel@bog> <20031219203442.GK21241@zot.electricrain.com> Message-ID: Mnet v0.6 uses its own erasure code, written by Doug Barnes and optimized by Raph Levien, which is probably some variant of Rabin's IDA. Mnet v0.7 uses Rizzo's FEC implementation in C. The erasure coding and encryption specs for Mnet v0.7 are here: http://mnet.sourceforge.net/new_filesystem.html From zooko at zooko.com Fri Dec 19 22:19:38 2003 From: zooko at zooko.com (Zooko O'Zooko) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] High availability p2p system - feedback requested In-Reply-To: Message from "Gregory P. Smith" of "Fri, 19 Dec 2003 12:49:16 PST." <20031219204916.GL21241@zot.electricrain.com> References: <3FE146A8.4090402@notdot.net> <20031219204916.GL21241@zot.electricrain.com> Message-ID: Greg Smith wrote: > > This was one of the major flaws we had in mojonation. [...] > The larger the data file (ie: the more chunks it was broken into) the > exponentially less likely you were to be able to reassemble all of the chunks. Because of this, Mnet's new file format [1] does not use chunking. Since constructing erasure codes is a super-linear task, this imposes a practical upper limit on file size, determined by the computational limitation of the publisher. What is perhaps less obvious is that a chunking format also imposes a practical upper limit on file size, determined by the availability of the blocks at retrieval time. If your file is too large in the no-chunking scheme then it takes too long to compute the erasure code to store it. If your file is too large in the chunking scheme, then you succeed at storing it, but it can't be wholly retrieved. I prefer the former failure mode, so Mnet v0.7 does not use chunking. [*] When Freenet added erasure coding they chose a chunking scheme, explicitly stating that they did so because they didn't want to impose a limit on file sizes.
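Greg's arithmetic, which is what imposes the chunking-scheme limit being discussed here, is easy to reproduce: if each chunk is independently retrievable with probability p, a file split into n chunks is wholly recoverable with probability p^n.

```python
def whole_file_probability(p, n):
    """Probability that all n independently retrievable chunks are found."""
    return p ** n

# Even excellent per-chunk availability decays geometrically with chunk
# count, which is the retrieval-time limit on file size described above.
examples = {n: whole_file_probability(0.999, n) for n in (10, 100, 1000)}
```

With p = 0.999 this still leaves roughly a 10% chance of an unrecoverable 100-chunk file, and it only gets worse as files grow.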
Unfortunately, their scheme *does* still impose a limit, but that limit is determined by reliability of block retrievals and is revealed by partially unrecoverable files rather than being determined by computational resources of the publisher and being revealed by an attempt to publish taking too long. Regards, Zooko [1] http://mnet.sourceforge.net/new_filesystem.html [*] If you really want chunking, you can write a "store_file_in_chunks()" method at a higher layer that splits the file into chunks and calls Mnet's "store_file()" method on each chunk in turn. Each invocation of "store_file()" results in a unique fileId which can be used to retrieve that chunk. Collect all such fileIds into a file, store that file, and use its fileId as the URI to your chunked file. This technique is best when it is considered okay to recover only a subset of the chunks. From mllist at vaste.mine.nu Thu Dec 25 06:38:53 2003 From: mllist at vaste.mine.nu (Johan =?iso-8859-1?Q?F=E4nge?=) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Some doubts In-Reply-To: <3FDCCF31.8050105@charter.net> References: <58157956829.20031214161537@gcarreno.org> <3FDCCF31.8050105@charter.net> Message-ID: <3424.192.168.1.204.1072334333.squirrel@Vaste_lp3.wired> > Gustavo Carreno wrote: > >>... >> From what I've gathered, and please correct me if I'm wrong, all the >> decentralized P2P protocols are not that decentralized after all, >> indeed they still have to have some kind of "starmap" to find a >> dangling thread of the network, right? So my question would be: >> - Is there any way that a peer can discover his network without the >> use of a Static IP "server" to hand him a list of possibilities to >> connect? >> > I use the following method in a truly flat, peer network I am working on: > > 1. Bootstrap your peer with one or more known peers. This can be > a static IP server, or it can be the IP of your friend who runs a node... The problem is the "known peers", isn't it?
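The shipped seed-list bootstrap that this sub-thread keeps circling around can be sketched as below. The seed hostnames are placeholders invented for the example, not real nodes.

```python
import socket

# Hypothetical list shipped with the software; a real network would
# periodically refresh this with recently-seen, long-lived peers.
SEED_PEERS = [("peer1.example.net", 6346), ("peer2.example.net", 6346)]

def bootstrap(seeds, timeout=5.0):
    """Return a connected socket to the first reachable seed, else None."""
    for host, port in seeds:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            continue  # seed down or unreachable; try the next one
    return None
```

Once any one seed answers, transitive introduction (step 2 in coderman's method, quoted above) takes over and the static list no longer matters.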
As pointed out, since you don't have much better information than that there are about X users among the world's IP addresses, guessing isn't usually a good idea :) (Unless, like with Kazaa, you know that X in a particular part of the network is large.) There's no real getting around this problem; you must have some starting point, just like you have to download a p2p-program to use it. (It doesn't "magically" appear on your computer. Programs that do that are called worms ;) The bootstrapping, though, can be reduced. What about creating a dedicated p2p-net for bootstrapping? At least then there'd just be _one_ net to bootstrap, from which all others can kick off :) Also, the bigger the network the higher the chance of finding extremely stable nodes to put in a (fairly) static file. Aah, how simple it'd be to unite the p2p-world around one design for such a network. This might be something - but only in the long (very long) run. > > 2. Use transitive introduction (peers telling you who their peers are) to > continually expand the size of your peer groups. This is similar to the > way "host caching" would work under gnutella, as messages routed > for peers who have not been seen get added to your list. > > You touch upon a good point though; initial introduction (step #1) is > one of the tricky parts in almost any peer network implementation. /Vaste From douglist at anize.org Mon Dec 29 11:00:05 2003 From: douglist at anize.org (Douglas F. Calvert) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] High availability p2p system - feedback requested In-Reply-To: <20031219204916.GL21241@zot.electricrain.com> References: <3FE146A8.4090402@notdot.net> <20031219204916.GL21241@zot.electricrain.com> Message-ID: <1072695605.18557.507.camel@liberate.imissjerry.org> On Fri, 2003-12-19 at 15:49, Gregory P.
Smith wrote: > On Thu, Dec 18, 2003 at 03:45:59PM +0200, Sampo Syreeni wrote: > > >Finally, in situations where there are simply not enough unique chunks to > > >reconstruct the original file, none of the file is available. > > > > This is easy to fix: divide the file into fixed size pages and encode them > > separately. > > This was one of the major flaws we had in mojonation. The file was > divided into fixed sized chunks each of which was then encoded such > that only M of N data blocks were needed to reconstruct that chunk. > > The problem with this is that quite often it is useless to be able to > recover only part of the data file. The larger the data file (ie: the > more chunks it was broken into) the > exponentially less likely you were > to be able to reassemble all of the chunks (even if your probability of > getting a chunk is an excellent 0.999, getting 100 chunks is 0.999^100 > which is not a pretty number). > > -g Hello, First let me say that I am not trying to be an asshole or overly nitpicky. I am interested in chunking and think that I may be missing something about what "decent probabilities" are for large p2p networks. So my question may be more concerned with what are real-world scenarios when data is chunked across a network. .999^100 does not seem that bad of a probability for recreating a file. ; .999^100 ~0.90479214711370904203 That seems like a decent chance to me. If you decrease the probability a little getting all 100 pieces does start to look like a long shot though. ; .99^100 ~0.36603234127322950493 ; .9^100 ~0.00002656139888758748 ; .8^100 ~0.00000000020370359763 ; .7^100 ~0.00000000000000032345 -dfc -- Douglas F.
Calvert From nazareno at dsc.ufcg.edu.br Tue Dec 30 02:21:35 2003 From: nazareno at dsc.ufcg.edu.br (Nazareno Andrade) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] Some doubts In-Reply-To: <3424.192.168.1.204.1072334333.squirrel@Vaste_lp3.wired> References: <58157956829.20031214161537@gcarreno.org> <3FDCCF31.8050105@charter.net> <3424.192.168.1.204.1072334333.squirrel@Vaste_lp3.wired> Message-ID: <3FF0E12F.7060403@dsc.ufcg.edu.br> Hi there. Johan Fänge wrote: > The bootstrapping, though, can be reduced. What about creating a dedicated > p2p-net for bootstrapping? At least then there'd just be _one_ net to > bootstrap, from which all others can kick off :) Also, the bigger the > network the higher the chance of finding extremely stable nodes to put in a > (fairly) static file. The JXTA project has something along these lines, right? I think that all peers get first into a "world group" and then find their groups and application "partners". > > Aah, how simple it'd be to unite the p2p-world around one design for such > a network. This might be something - but only in the long (very long) run. > cheers, Nazareno. ======================================== Nazareno Andrade LSD - DSC/UFCG Campina Grande - Brazil http://lsd.dsc.ufcg.edu.br/~nazareno/ OurGrid project http://www.ourgrid.org ======================================== From nazareno at dsc.ufcg.edu.br Tue Dec 30 13:03:53 2003 From: nazareno at dsc.ufcg.edu.br (Nazareno Andrade) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] High availability p2p system - feedback requested In-Reply-To: <1072695605.18557.507.camel@liberate.imissjerry.org> References: <3FE146A8.4090402@notdot.net> <20031219204916.GL21241@zot.electricrain.com> <1072695605.18557.507.camel@liberate.imissjerry.org> Message-ID: <3FF177B9.7050905@dsc.ufcg.edu.br> Hello. There is an interesting paper on the subject of having high availability in p2p storage.
Maybe what you guys found out deploying MojoNation was a kind of practical proof of the result in this paper? I think there is a relation between what is said in that paper and your experience with MojoNation. The reference is: High Availability, Scalable Storage, Dynamic Peer Networks: Pick Two. Charles Blake and Rodrigo Rodrigues (cb@mit.edu, rodrigo@lcs.mit.edu), MIT Laboratory for Computer Science. Abstract: Peer-to-peer storage aims to build large-scale, reliable and available storage from many small-scale unreliable, low-availability distributed hosts. Data redundancy is the key to any data guarantees. However, preserving redundancy in the face of highly dynamic membership is costly. We apply a simple resource usage model to measured behavior from the Gnutella file-sharing network to argue that large-scale cooperative storage is limited by likely dynamics and cross-system bandwidth (...) from http://citeseer.nj.nec.com/576500.html cheers, Nazareno Douglas F. Calvert wrote: > On Fri, 2003-12-19 at 15:49, Gregory P. Smith wrote: > >>On Thu, Dec 18, 2003 at 03:45:59PM +0200, Sampo Syreeni wrote: >> >>>>Finally, in situations where there are simply not enough unique chunks to >>>>reconstruct the original file, none of the file is available. >>> >>>This is easy to fix: divide the file into fixed size pages and encode them >>>separately. >> >>This was one of the major flaws we had in mojonation. The file was >>divided into fixed sized chunks each of which was then encoded such >>that only M of N data blocks were needed to reconstruct that chunk. >> >>The problem with this is that quite often it is useless to be able to >>recover only part of the data file. The larger the data file (ie: the >>more chunks it was broken into) the >>exponentially less likely you were >>to be able to reassemble all of the chunks (even if your probability of >>getting a chunk is an excellent 0.999, getting 100 chunks is 0.999^100 >>which is not a pretty number).
>> >>-g > > > Hello, > First let me say that I am not trying to be an asshole or overly > nitpicky. I am interested in chunking and think that I may be missing > something about what "decent probabilities" are for large p2p networks. > So my question may be more concerned with what are real-world scenarios > when data is chunked across a network. .999^100 does not seem that bad > of a probability for recreating a file. > > ; .999^100 > ~0.90479214711370904203 > > That seems like a decent chance to me. If you decrease the probability a > little getting all 100 pieces does start to look like a long shot > though. > > ; .99^100 > ~0.36603234127322950493 > ; .9^100 > ~0.00002656139888758748 > ; .8^100 > ~0.00000000020370359763 > ; .7^100 > ~0.00000000000000032345 > > > -dfc > > ======================================== Nazareno Andrade LSD - DSC/UFCG Campina Grande - Brazil http://lsd.dsc.ufcg.edu.br/~nazareno/ OurGrid project http://www.ourgrid.org ======================================== From wesley at felter.org Tue Dec 30 04:47:42 2003 From: wesley at felter.org (Wes Felter) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] High availability p2p system - feedback requested In-Reply-To: <1072695605.18557.507.camel@liberate.imissjerry.org> References: <3FE146A8.4090402@notdot.net> <20031219204916.GL21241@zot.electricrain.com> <1072695605.18557.507.camel@liberate.imissjerry.org> Message-ID: <4D692B9F-3A83-11D8-9CB3-000393A581BE@felter.org> On Dec 29, 2003, at 5:00 AM, Douglas F. Calvert wrote: > So my question may be more concerned with what are real-world scenarios > when data is chunked across a network. .999^100 does not seem that bad > of a probability for recreating a file. > > ; .999^100 > ~0.90479214711370904203 > > That seems like a decent chance to me. I think "Given that the probability of retrieving each block is .999, is a 90% probability of retrieving the whole file acceptable?" is the wrong question.
I would ask "Given that the probability of retrieving each block is .999, what technique maximizes the probability of retrieving the whole file?" In which case you may be interested in the paper "Erasure Coding vs. Replication: A Quantitative Comparison" (although it doesn't directly address what people in this thread have been talking about): http://oceanstore.cs.berkeley.edu/publications/papers/abstracts/erasure_iptps.html Wes Felter - wesley@felter.org - http://felter.org/wesley/ From anwitaman at hotmail.com Wed Dec 31 14:48:42 2003 From: anwitaman at hotmail.com (Anwitaman Datta) Date: Sat Dec 9 22:12:37 2006 Subject: [p2p-hackers] RE: p2p-hackers Digest, Vol 5, Issue 18 Message-ID: Maybe I missed the point, but I think that it's wrong to say that prob. of getting 100 chunks is 0.999^100. I think that if you are using M of N, and prob. of finding any one chunk is P, then prob. of not getting M chunks is (1-P)^(N-M). - AD. ---------------------------------------------------------------------- Message: 1 Date: Mon, 29 Dec 2003 06:00:05 -0500 From: "Douglas F. Calvert" Subject: Re: [p2p-hackers] High availability p2p system - feedback requested To: "Peer-to-peer development." Message-ID: <1072695605.18557.507.camel@liberate.imissjerry.org> Content-Type: text/plain On Fri, 2003-12-19 at 15:49, Gregory P. Smith wrote: > On Thu, Dec 18, 2003 at 03:45:59PM +0200, Sampo Syreeni wrote: > > >Finally, in situations where there are simply not enough unique chunks to > > >reconstruct the original file, none of the file is available. > > > > This is easy to fix: divide the file into fixed size pages and encode them > > separately. > > This was one of the major flaws we had in mojonation. The file was > divided into fixed sized chunks each of which was then encoded such > that only M of N data blocks were needed to reconstruct that chunk. > > The problem with this is that quite often it is useless to be able to > recover only part of the data file.
The larger the data file (ie: the > more chunks it was broken into) the > exponentially less likely you were > to be able to reassemble all of the chunks (even if your probability of > getting a chunk is an excellent 0.999, getting 100 chunks is 0.999^100 > which is not a pretty number). > > -g Hello, First let me say that I am not trying to be an asshole or overly nitpicky. I am interested in chunking and think that I may be missing something about what "decent probabilities" are for large p2p networks. So my question may be more concerned with what are real-world scenarios when data is chunked across a network. .999^100 does not seem that bad of a probability for recreating a file. ; .999^100 ~0.90479214711370904203 That seems like a decent chance to me. If you decrease the probability a little getting all 100 pieces does start to look like a long shot though. ; .99^100 ~0.36603234127322950493 ; .9^100 ~0.00002656139888758748 ; .8^100 ~0.00000000020370359763 ; .7^100 ~0.00000000000000032345 -dfc -- Douglas F. Calvert ------------------------------ _______________________________________________ p2p-hackers mailing list p2p-hackers@zgp.org http://zgp.org/mailman/listinfo/p2p-hackers End of p2p-hackers Digest, Vol 5, Issue 18 ******************************************
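[Editor's note on the arithmetic in this thread: the probability of recovering all n chunks, each independently available with probability p, is p^n, while the chance that an M-of-N erasure-coded chunk survives is a binomial tail rather than the (1-P)^(N-M) shortcut quoted above. A short Python sketch checks both; the function names are invented for illustration and this is not code from Mnet, MojoNation, or any poster.]

```python
from math import comb  # exact binomial coefficients (Python 3.8+)

def p_all_chunks(p, n):
    """Probability that all n chunks are retrievable, each independently
    available with probability p (the 0.999^100 figure in this thread)."""
    return p ** n

def p_m_of_n(p, n, m):
    """Exact probability that at least m of n erasure-coded blocks are
    retrievable: sum over k = m..n of C(n,k) * p^k * (1-p)^(n-k)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

# The all-or-nothing figures quoted in the thread:
print(p_all_chunks(0.999, 100))  # ~0.9048
print(p_all_chunks(0.9, 100))    # ~2.66e-05

# A hypothetical 30-of-60 code tolerates poor per-block availability:
# with p = 0.9 the chunk is recoverable with probability essentially 1.
print(p_m_of_n(0.9, 60, 30))
```

[The last line illustrates why M-of-N coding within a chunk is so effective: 30 survivors out of 60 blocks is many standard deviations below the expected 54, so per-chunk loss is vanishingly rare even though losing any one of 100 un-coded chunks is not.]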