From juicy at melontraffickers.com Mon Apr 1 11:50:01 2002 From: juicy at melontraffickers.com (A. Melon) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] P2P Onion Routing? Message-ID: <0436b2ad223e955ebc4fa263deb34bc8@melontraffickers.com> Bram Cohen writes: > A fundamentally more robust way of doing anonymous receiving than reply > block chaining is Private Information Retrieval. Unfortunately PIR schemes > aren't *quite* there yet - > > http://citeseer.nj.nec.com/beimel01informationtheoretic.html A few notes on PIR: PIR involves being able to retrieve some data from a database without the server knowing which specific item you are getting. This has an obvious application for receiving mail, since you could have everyone's mail in one big database and then when someone retrieved something, the server or snooper wouldn't know which item it was, and hence would not be able to identify particular clients as particular mail recipients. There are two flavors of PIR, as in many other parts of cryptography: information-theoretic and computational. Most PIR algorithms are expressed in terms of fetching a single bit; some of them generalize more easily than others to fetching a larger record size. For information-theoretic PIR to work you need more than one server. You get some function of the data from each server, and then combine them in the client to return the data. This requires assuming that the servers do not collude, and that any snoopers do not collude either. Computational PIR works with a single server, so there is no concern about collusion. A trivial way of achieving PIR is to just download the entire database, and read only the items you are interested in. This is analogous to the common practice of downloading all of alt.anonymous.messages, and then decrypting the messages that are for you. The point of PIR algorithms is therefore to decrease the amount of data that has to be sent compared to sending the whole database. Substantial reductions are possible. However there is an inherent tradeoff with these reductions. For the server not to know which data item you are interested in, it follows that every bit of data in the server must be involved in the computation of what is sent. For if any data was not involved, it would be obvious that this was data which was not being fetched. Therefore every data item in the database is going to be involved in each fetch. The problem is that the computational versions involve PKC, which means that the per-data-item calculations tend to involve multi-precision exponentiations. So your tradeoff is between sending the data, for the trivial PIR which fetches the whole database, and exponentiating it. And as a matter of fact, with most computers today, it is faster to send data than to exponentiate it. Therefore current computational PIR algorithms are practically useless. While they achieve the formal goal of reducing the amount of data to be sent, it is at the expense of so much computation that the system is incredibly slow. It would be faster to dump the entire database to each reader. From blanu at bozonics.com Mon Apr 1 11:50:02 2002 From: blanu at bozonics.com (Brandon Wiley) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] P2P Onion Routing? In-Reply-To: <87lmcdjbgx.fsf@openprivacy.org> Message-ID: > ZKS was great... but was hard to setup and certainly not P2P. I think people need to get over this what is and is not P2P thing. ZKS is P2P when every user runs a ZKS node.
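To make the two-server information-theoretic flavor from the PIR notes above concrete, here is a minimal sketch in Python. It assumes an n-bit database replicated on two non-colluding servers, and the ask_server_a / ask_server_b callables are placeholders for whatever the actual query transport would be; this is the textbook XOR construction, not any of the schemes in the cited survey.

    import secrets

    def server_answer(db_bits, query):
        # Each server XORs together the database bits selected by the query vector.
        answer = 0
        for bit, selected in zip(db_bits, query):
            if selected:
                answer ^= bit
        return answer

    def fetch_bit(i, n, ask_server_a, ask_server_b):
        # Client side: send a uniformly random index set to server A, and the
        # same set with index i flipped to server B.  Either query on its own
        # is random noise; only the XOR of the two answers reveals bit i.
        query_a = [secrets.randbelow(2) for _ in range(n)]
        query_b = list(query_a)
        query_b[i] ^= 1
        return ask_server_a(query_a) ^ ask_server_b(query_b)

Each server still touches every bit per request, the unavoidable cost described above, but the per-bit work is an XOR rather than a modular exponentiation. Note that this naive variant also sends two n-bit queries, so it is no cheaper in communication than downloading the whole database; the published schemes exist precisely to shrink that.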
In which case the problem becomes that ZKS is not scalable in P2P mode because, I think, it assumes that every user knows about all of the mix nodes. This brings up one of the hard problems in mixnets which Roger mentioned, how do you dish out subsets of the network to various users? From arma at mit.edu Mon Apr 1 14:39:02 2002 From: arma at mit.edu (Roger Dingledine) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] P2P Onion Routing? In-Reply-To: ; from blanu@bozonics.com on Thu, Mar 28, 2002 at 09:29:53AM -0600 References: <87lmcdjbgx.fsf@openprivacy.org> Message-ID: <20020401173850.S6832@moria.seul.org> On Thu, Mar 28, 2002 at 09:29:53AM -0600, Brandon Wiley wrote: > I think people need to get over this what is and is not P2P thing. ZKS is > P2P when every user runs a ZKS node. In which case the problem becomes > that ZKS is not scalable in P2P mode because, I think, it assumes that > every user knows about all of the mix nodes. Well, Freedom is trickier because it's aiming to be low-latency and connection-oriented. In order to get reasonable performance, you need to be thinking about latencies between nodes. You must choose routes that achieve good latency while not giving up 'too much' anonymity (the shortest route is best but using it is probably a bad idea). To provide good latency, Freedom keeps routing/latency tables and propagates them like other internet routing protocols. Typical internet routing protocols aren't designed to handle a non-trivial churn rate over thousands of nodes. With high-latency systems like traditional mix-nets, it seems that we can simplify the design by not paying attention to performance between nodes. > This brings up one of the hard problems in mixnets which Roger mentioned, > how do you dish out subsets of the network to various users? This question has been on my plate for a while. I'll get us started here and see where it goes. We have a reputation server (in its trivial form, it just keeps track of participating nodes; it might also maintain state about performance and reliability if we like). Alice should pull down info on the entire set of nodes. Otherwise people could observe the subsets people download and match Alice's messages to her pretty well. This single server is a trust bottleneck -- for instance, it might give out different answers to different people and then observe which nodes a message traverses. So we must make several redundant reputation servers; more about that below. These attacks are actually subtler than that -- imagine that at midnight Alice asks for a set of nodes, and then at 12:05 Bob asks for a set of nodes. If a new node has joined after midnight, and somebody uses that node, then we know it's not Alice. Similar issues arise with synchronization between reputation servers: "you used N_16; the only reputation server that knew about N_16 then was RS_4; so you must be Alice because she's the only one who asked RS_4 for a node list then." So it would seem that we need to keep the set of node information strongly synchronized across reputation servers and with all users. There are a number of simplications we can make to get away with loose synchronization instead. * First of all, don't use new information. Have a time threshold (say, an hour after a new node is advertised) by which everybody should know about it, and so people can start using data an hour after it's posted with relative confidence that most other people will know about it too. 
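That first rule is easy to make concrete. A minimal sketch, where the one-hour threshold and the (node_id, advertised_at) record shape are illustrative rather than any real directory format:

    import time

    FRESHNESS_THRESHOLD = 60 * 60  # seconds; "an hour after a new node is advertised"

    def usable_nodes(directory, now=None):
        # directory: iterable of (node_id, advertised_at) pairs from a reputation server.
        # Skip anything advertised too recently, so picking a node never marks us
        # as one of the few clients who could already have heard of it.
        now = time.time() if now is None else now
        return [node_id for node_id, advertised_at in directory
                if now - advertised_at >= FRESHNESS_THRESHOLD]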
It's all about anonymity sets and getting a lot of users with indistinguishable profiles. Users who don't follow the guidelines will lose out on anonymity. * PIR from the reputation servers will make it possible to pull down subsets of the nodes without revealing which nodes we've learned. We still have the above issue of dangerous nodes (nodes which some reputation servers know about and others don't). PIR will also be computationally impractical for the (near) future. * If we assume some sort of bootstrap, people can query for node lists, subsets, or updates ("all the changes since noon") via the mix-net itself. Users must take care not to give away identifying information or patterns (e.g. by using newly advertised nodes immediately). This is still messy from a traffic analysis perspective though -- say the adversary is watching all the reputation servers and learning the subsets of nodes that are being downloaded. A message going into a given node N_i is likely to be heading towards some node which was learned at the same time as N_i. If there are lots of participating nodes and the subsets are relatively small, this correlation can narrow the set of likely senders down considerably. And now a word about keeping the reputation servers verifiable and loosely synchronized -- after all, a rogue reputation server could intentionally keep some information for itself or give out false reputation reports to rig the chance that a user will pick a given node. If we name mixes by their public keys (so they can self-certify), mixes can provide periodic certificates about their state. Because each mix signs and timestamps each certificate, a reputation server can at most fail to provide the newest certificate. The reputation servers can work together to ensure correct and complete data (perhaps by successively signing certificate bundles, so users can be sure that a given mix certificate has been seen by a threshold of reputation servers, and that they're getting the whole set of certificates in that bundle) and correct behavior (perhaps by doing random queries through the mix-net). Because the set of reputation servers is smaller and more static, it's easier to detect and punt misbehaving servers. There's plenty more that needs to be worked out. But this should serve as a starting foundation. What do you think? --Roger From raph at casper.ghostscript.com Mon Apr 1 16:10:01 2002 From: raph at casper.ghostscript.com (Raph Levien) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] Attack Resistant Trust Metric Metadata HOWTO Message-ID: <20020401212907.GC27005@casper.ghostscript.com> I recently realized that my work on attack-resistant trust metrics can be somewhat intimidating, and seems to frequently be perceived as "rocket science". Thus, I've distilled my latest thinking into a HOWTO, which should be reasonably straightforward to competent programmers. It's geared towards a centralized, Web-based implementation of a trust metric for generalized metadata. I think it's worthwhile doing a web-based implementation first, because it's going to be easier and more malleable. I've always been thinking about a p2p implementation, though, and such a thing should be feasible (but probably not easy). The HOWTO is here: http://www.levien.com/free/tmetric-HOWTO.html Follow the links to find my thesis-in-progress, as well as a testbed implementation of the Advogato trust metric and PageRank. Hope somebody here finds this useful.
Raph From sam at neurogrid.com Mon Apr 1 18:37:02 2002 From: sam at neurogrid.com (Sam Joseph) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] NeuroGrid Technical Paper Message-ID: <3CA919BE.4050806@neurogrid.com> Hi All, So I think it is now viewable (had problems with some fonts) but my latest attempt to explain NeuroGrid and the results of recent simulations is available at: http://www.neurogrid.net/NeuroGridSimulations.pdf I'm submitting it to the European P2P workshop on Friday, so any feedback before then will be very warmly received. But even after that I will be very interested in comments, opinions and criticisms. I am still deeply unsatisfied with my explanation of many of NeuroGrid's components. So any ideas about useful diagrams, turns of phrase, etc. will be greatly appreciated, particularly regarding section 4 of the NeuroGrid Learning Mechanism. Thanks in advance. CHEERS> SAM From justin at chapweske.com Mon Apr 1 19:29:01 2002 From: justin at chapweske.com (Justin Chapweske) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] NeuroGrid Technical Paper References: <3CA919BE.4050806@neurogrid.com> Message-ID: <3CA92548.2080400@chapweske.com> Nice paper Sam, I feel that I understand NeuroGrid much more fully now. One thing that I wonder about, especially in regards to the bookmark scenario, is that the user may have documents that are seperated into a number of categories that really don't have anything to do with each other. In this case wouldn't it make more sense for the peer to join the network at a different position for each category? Could the peer perhaps figure out what different categories of content it is holding based on the different directions that queries are coming in from for different pieces of content? Sam Joseph wrote: > Hi All, > > So I think it is now viewable (had problems with some fonts) but my > latest attempt to explain NeuroGrid and the results of recent > simulations is available at: > > http://www.neurogrid.net/NeuroGridSimulations.pdf > > I'm submitting it to the European P2P workshop on Friday, so any > feedback before then will be very warmly received. But even after that > I will be very interested in comments, opinions and criticisms. > > I am still deeply unsatisfied with my explanation of many of NeuroGrid's > components. So any ideas about useful diagrams, turns of phrase, etc. > will be greatly appreciated, particularly regarding section 4 of the > NeuroGrid Learning Mechanism. > > Thanks in advance. > > CHEERS> SAM > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers -- Justin Chapweske, Onion Networks http://onionnetworks.com/ From sam at neurogrid.com Mon Apr 1 19:43:01 2002 From: sam at neurogrid.com (Sam Joseph) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] NeuroGrid Technical Paper References: <3CA919BE.4050806@neurogrid.com> <3CA92548.2080400@chapweske.com> Message-ID: <3CA92929.6070908@neurogrid.com> Justin Chapweske wrote: > Nice paper Sam, I feel that I understand NeuroGrid much more fully now. Did the learning mechanism section make any sense? > One thing that I wonder about, especially in regards to the bookmark > scenario, is that the user may have documents that are seperated into > a number of categories that really don't have anything to do with each > other. > > In this case wouldn't it make more sense for the peer to join the > network at a different position for each category? 
In some ways I think that is what happens, at least in terms of the logical network - each peer learns how to connect to other nodes that share similar content. However there is the open issue of whether those connections should be supported by lots of intermediate nodes (as in a gnutella style routing), or the nodes should disconnect and reconnect directly to the other nodes that share similar content. And this depends on the number of nodes trying to make connections, the length of connection duration etc. I mean the prototype I implemented just connectd directly to all other nodes briefly, as one would query a search engine, which may well not scale ..... CHEERS> SAM From melc at fashionvictims.com Wed Apr 3 03:41:01 2002 From: melc at fashionvictims.com (Ihor Kuz) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] A P2P file-sharing uberclient Message-ID: Hi everyone, As part of our cp2pc project we have started to design and develop a P2P file-sharing uberclient. This uberclient is a single client application that can publish, download and search for files on multiple P2P file-sharing networks. So, for example, a file published using the uberclient will be published on multiple networks, similarly, performing a search using the uberclient will perform searches on multiple networks. Besides the final application, a major goal of our project is to define a unified file-sharing API. This is an API that abstracts the concepts involved in P2P file-sharing and allows various different networks to be accessed through the same interface. So far we've studied a number of existing networks (Gnutella, mnet, Chord/CFS, JXTA, GDN) and their clients to get a good idea of what different kinds of file-sharing networks do and how they work. Based on these studies we've drafted a design document describing a unified file-sharing application and have designed a preliminary unified file-sharing API. The design documents (and more info about the project) are available at: http://www.cs.vu.nl/pub/globe/cp2pc/ Our next step is to implement the API for a number of existing P2P file-sharing systems. Before starting with the implementations, however, we would like comments and feedback about our proposed application and API from the people who are intimately familiar with actual P2P file-sharing systems (both from a networking perspective as well as from an application programmer perspective). Are there concepts and/or functions that don't map well onto some existing systems? are there things we've overlooked? or just misunderstood? Do you have any good ideas that can be incorporated into an uberclient? We are especially interested in comments about open issues such as: whether to have a synchronous or asynchronous API, how to represent file attributes and search results (XML, RDF, etc.), how to configure individual networks, etc. Hope to hear from you, Ihor. (PS I'm cross-posting this to the bluesky and p2p-hackers lists) From zooko at zooko.com Wed Apr 3 07:40:01 2002 From: zooko at zooko.com (Zooko) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] Resources discussing secure time (nonce) in a distributed environment. In-Reply-To: Message from burton@openprivacy.org (Kevin A. Burton) of "23 Mar 2002 17:15:36 PST." <87y9givk93.fsf@openprivacy.org> References: <87hen6x6i7.fsf@openprivacy.org> <15517.5326.823888.665525@openprivacy.org> <87y9givk93.fsf@openprivacy.org> Message-ID: Kevin A. 
Burton wrote: > > Fen Labalme writes: > > > See the FAQ on Digital Time Stamping at > > http://saturn.tcs.hut.fi/~helger/crypto/link/timestamping/ I'd like to discourage p2p hackers from using timestamping in general. For one thing, it can't be done with complete security. The closest you can get is to have a set of timestamp servers that you choose to trust such that as long as there is an honest subset of some size or other, you get honest timestamps. That turns out to be complicated and expensive and there's always the niggling doubt that maybe someone *has* succeeded in compromising enough of the servers you chose. For a second thing, you can never know *exactly* what time another node thinks it is, since messages take time to travel. Therefore you will always have a certain amount of "slop" in your design, which can cause problems if it turns out to be too much or too little. (Also remember that every clock runs at a different speed than every other one.) For another thing, you probably don't need it. On most occasions when you think you want timestamps you would probably get a simpler and more secure protocol by issuing a challenge or requiring a nonce. And for the last reason, the use of time introduces non-obvious state dependencies. If you use nonces, challenges, digitally signed certificates and so forth, then it is obvious to everyone involved that operation X depends on the results from operation Y. If you use the passage of time, then an operation can succeed or fail depending on what time a different node thought that it was when it issued a message, or how many microseconds it took a message to travel, or how much slop a node was allowing. In complex systems it might be a *different* node than the one that you are talking to whose timing causes your operation to fail! That makes it much harder to design and to debug. I get sort of grouchy about this issue because it is one of the "bad intuitions" that people have. It's what someone called "The Big Clock in the Sky", which goes along with "The Big Phonebook in the Sky" and "The Big Dictionary in the Sky" and "synchronous connections" as ideas that programmers intuitively have from their experiences in Real Life which do not work very well in the world of mutually distrusting and remote nodes. Regards, Zooko --- zooko.com Security and Distributed Systems Engineering --- From gojomo at usa.net Wed Apr 3 22:26:02 2002 From: gojomo at usa.net (Gordon Mohr) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] Resources discussing secure time (nonce) in a distributed environment. References: <87hen6x6i7.fsf@openprivacy.org> <15517.5326.823888.665525@openprivacy.org> <87y9givk93.fsf@openprivacy.org> Message-ID: <146301c1dbb1$87d06380$2a01a8c0@golden> Zooko writes: > I get sort of grouchy about this issue because it is one of the "bad intuitions" > that people have. It's what someone called "The Big Clock in the Sky", which > goes along with "The Big Phonebook in the Sky" and "The Big Dictionary in the > Sky" and "synchronous connections" as ideas that programmers intuitively have > from their experiences in Real Life which do not work very well in the world of > mutually distrusting and remote nodes. But isn't there a big clock in the sky nowadays, via GPS? Clock skew seems to me an occasional headache but essentially "solved" by GPS and other constant-connectivity qualities.
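For contrast with clock-based approaches, the nonce-and-challenge alternative Zooko describes can be sketched in a few lines; the verify() argument here stands in for whatever signature check the system already has, and the wire format is invented for illustration.

    import secrets

    class Challenger:
        # Freshness without clocks: a reply counts as "recent" only if it quotes
        # a nonce we generated, which the peer could not have seen before we
        # issued it.  No node's notion of the current time is consulted anywhere.
        def __init__(self):
            self.outstanding = set()

        def issue_challenge(self):
            nonce = secrets.token_hex(16)
            self.outstanding.add(nonce)
            return nonce

        def accept_reply(self, nonce, payload, signature, verify):
            if nonce not in self.outstanding:
                return False
            self.outstanding.discard(nonce)  # one-shot, so replays are rejected
            return verify((nonce + "|" + payload).encode(), signature)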
- Gordon From gojomo at usa.net Wed Apr 3 23:45:01 2002 From: gojomo at usa.net (Gordon Mohr) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] Optimal Replication? References: <3C8E4DF6.2010700@chapweske.com> Message-ID: <147801c1dbbc$8df31fe0$2a01a8c0@golden> Justin wrote a few weeks ago: > Does anyone have any good pointers to research on optimal replication > strategies. By this I mean a quantification of how much a piece of > content should be replicated depending on its file size, available disk > space, and popularity of the content. This paper, from the mounds o' goodness presented at the MIT "1st International Workshop on Peer-to-Peer Systems", seems directly relevant: Dynamic Replica Placement for Scalable Content Delivery by Yan Chen, Randy Katz and John Kubiatowicz http://www.cs.rice.edu/Conferences/IPTPS02/184.pdf - Gojomo From bert at akamail.com Thu Apr 4 07:38:01 2002 From: bert at akamail.com (bert@akamail.com) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] uServ deployed at CMU References: Message-ID: <3CAC73FC.57627025@akamail.com> We've just deployed the YouServ (a.k.a. uServ) P2P web hosting system at CMU. I'm hoping some people on these lists will be able to try it (you need an e-mail in the cmu.edu domain to register), or may know someone at CMU that would find it useful. Those that can't register may still find the information on the site interesting: http://userv.web.cmu.edu/ My test site demonstrates the proxying / firewall tunneling capability: http://bayardo-userv.userv.web.cmu.edu/ Files you see on the site are being access directly from my PC inside our corporate intranet (company name omitted to avoid the wrath of anyone who might view this as a security issue :). General info and a (recently updated) research paper on YouServ can be found here: http://www.almaden.ibm.com/cs/people/bayardo/userv/ From justin at chapweske.com Thu Apr 4 08:14:02 2002 From: justin at chapweske.com (Justin Chapweske) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] uServ deployed at CMU References: <3CAC73FC.57627025@akamail.com> Message-ID: <3CAC7BC2.4050108@chapweske.com> I love the system. It is a very clean and simple design and I'm sure it will be very successful. A feature that I would find very compelling is to provide an option to use the service through dyndns.org, so that server administrators don't have to mess with DNS configuration at all, and the users have complete flexibility over their domain names. bert@akamail.com wrote: > We've just deployed the YouServ (a.k.a. uServ) P2P web hosting system at CMU. > I'm hoping some people on these lists will be able to try it (you need an e-mail > in the cmu.edu domain to register), or may know someone at CMU that would find > it useful. Those that can't register may still find the information on the site > interesting: > > http://userv.web.cmu.edu/ > > My test site demonstrates the proxying / firewall tunneling capability: > http://bayardo-userv.userv.web.cmu.edu/ > Files you see on the site are being access directly from my PC inside our > corporate intranet (company name omitted to avoid the wrath of anyone who might > view this as a security issue :). 
> > General info and a (recently updated) research paper on YouServ can be found > here: > > http://www.almaden.ibm.com/cs/people/bayardo/userv/ > > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > -- Justin Chapweske, Onion Networks http://onionnetworks.com/ From svanegmond at tinyplanet.ca Thu Apr 4 09:18:01 2002 From: svanegmond at tinyplanet.ca (Stephen van Egmond) Date: Sat Dec 9 22:11:44 2006 Subject: [p2p-hackers] Resources discussing secure time (nonce) in a distributed environment. In-Reply-To: <146301c1dbb1$87d06380$2a01a8c0@golden> References: <87hen6x6i7.fsf@openprivacy.org> <15517.5326.823888.665525@openprivacy.org> <87y9givk93.fsf@openprivacy.org> <146301c1dbb1$87d06380$2a01a8c0@golden> Message-ID: <20020404171758.GB8137@tinyplanet.ca> Gordon Mohr (gojomo@usa.net) wrote: > But isn't there a big clock in the sky nowadays, via GPS? Clock skew seems > to me an occasional headache but essentially "solved" by GPS and other > constant-connectivity qualities. When my computer can pick a clock signal off a satellite with some standard $5 component on the motherboard, then it's solved. And even then, it would still be a bad architectural decision for software to rely on it. -Steve From pete at petertodd.ca Thu Apr 4 14:40:02 2002 From: pete at petertodd.ca (Peter Todd) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] Resources discussing secure time (nonce) in a distributed environment. In-Reply-To: <146301c1dbb1$87d06380$2a01a8c0@golden> References: <87hen6x6i7.fsf@openprivacy.org> <15517.5326.823888.665525@openprivacy.org> <87y9givk93.fsf@openprivacy.org> <146301c1dbb1$87d06380$2a01a8c0@golden> Message-ID: <20020404223928.GB8296@gw.localdomain> On Thu, Apr 04, 2002 at 12:20:06AM -0800, Gordon Mohr wrote: > But isn't there a big clock in the sky nowadays, via GPS? Clock skew seems > to me an occasional headache but essentially "solved" by GPS and other > constant-connectivity qualities. GPS signals are so easily blocked it's not funny... It's, from what I hear anyway, rare for a building to *not* block them. So they can't be relied on. Anyway, the US controls the GPS system; the Europeans are thinking of creating their own system because of this... -- Need some Linux help or custom C(++) programming? Drop me a line and I'll see what I can do. Resume at http://www.petertodd.ca/resume.php pete@petertodd.ca http://www.petertodd.ca -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 232 bytes Desc: not available Url : http://zgp.org/pipermail/p2p-hackers/attachments/20020404/a54b6a59/attachment.pgp From dnm at pobox.com Thu Apr 4 15:40:01 2002 From: dnm at pobox.com (Dan Moniz) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] (Is there a) p2p-hackers SF/Bay Area meeting this weekend? Message-ID: <5.1.0.14.2.20020404153326.00b01858@pop.vex.net> Hi all, Thought I might inquire as to see if anyone was interested in doing another ad-hoc p2p-hackers get together at the Metreon. For my own selfish purposes, I'm still in town, and I've talked to a friend who would be interested in coming along as well. Was the plan to do them monthly or bi-weekly? I forget. Was there even a plan? Bram?
;] -- Dan Moniz [http://www.pobox.com/~dnm/] From justin at chapweske.com Thu Apr 4 17:38:01 2002 From: justin at chapweske.com (Justin Chapweske) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] (Is there a) p2p-hackers SF/Bay Area meeting this weekend? References: <5.1.0.14.2.20020404153326.00b01858@pop.vex.net> Message-ID: <3CACFFC3.8070309@chapweske.com> Do you have a cell phone #? I'm going to be in town next week for the CTO forum and may have some time to get together.... Dan Moniz wrote: > Hi all, > > Thought I might inquire as to see if anyone was interested in doing > another ad-hoc p2p-hackers get together at the Metreon. For my own > selfish purposes, I'm still in town, and I've talked to a friend who > would be interested in coming along as well. Was the plan to do them > monthly or bi-weekly? I forget. Was there even a plan? Bram? ;] > > > -- Justin Chapweske, Onion Networks http://onionnetworks.com/ From justin at chapweske.com Thu Apr 4 17:47:01 2002 From: justin at chapweske.com (Justin Chapweske) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] (Is there a) p2p-hackers SF/Bay Area meeting this weekend? References: <5.1.0.14.2.20020404153326.00b01858@pop.vex.net> <3CACFFC3.8070309@chapweske.com> Message-ID: <3CAD01DC.6040700@chapweske.com> Oops, I wish that "Reply to Sender Only" used "From" instead of "Reply-To". Justin Chapweske wrote: > Do you have a cell phone #? I'm going to be in town next week for the > CTO forum and may have some time to get together.... > > Dan Moniz wrote: > >> Hi all, >> >> Thought I might inquire as to see if anyone was interested in doing >> another ad-hoc p2p-hackers get together at the Metreon. For my own >> selfish purposes, I'm still in town, and I've talked to a friend who >> would be interested in coming along as well. Was the plan to do them >> monthly or bi-weekly? I forget. Was there even a plan? Bram? ;] >> >> >> > > > -- Justin Chapweske, Onion Networks http://onionnetworks.com/ From dnm at pobox.com Thu Apr 4 18:02:01 2002 From: dnm at pobox.com (Dan Moniz) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] (Is there a) p2p-hackers SF/Bay Area meeting this weekend? In-Reply-To: <3CACFFC3.8070309@chapweske.com> References: <5.1.0.14.2.20020404153326.00b01858@pop.vex.net> Message-ID: <5.1.0.14.2.20020404175622.02b33bd8@pop.vex.net> At 07:37 PM 4/4/2002 -0600, you wrote: >Do you have a cell phone #? I'm going to be in town next week for the CTO >forum and may have some time to get together.... Woops. I should clarify. I meant this Saturday. I'm leaving the SF/Bay Area on Monday morning, although, with any luck, I expect to return within two weeks or so. We'll see what happens. And no, I don't have a cell phone number at the moment. =[ Sorry. >>Hi all, >>Thought I might inquire as to see if anyone was interested in doing >>another ad-hoc p2p-hackers get together at the Metreon. For my own >>selfish purposes, I'm still in town, and I've talked to a friend who >>would be interested in coming along as well. Was the plan to do them >>monthly or bi-weekly? I forget. Was there even a plan? Bram? ;] -- Dan Moniz [http://www.pobox.com/~dnm/] From bram at gawth.com Thu Apr 4 20:58:01 2002 From: bram at gawth.com (Bram Cohen) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] (Is there a) p2p-hackers SF/Bay Area meeting this weekend? 
In-Reply-To: <5.1.0.14.2.20020404153326.00b01858@pop.vex.net> Message-ID: Dan Moniz wrote: > Thought I might inquire as to see if anyone was interested in doing another > ad-hoc p2p-hackers get together at the Metreon. For my own selfish > purposes, I'm still in town, and I've talked to a friend who would be > interested in coming along as well. Was the plan to do them monthly or > bi-weekly? I forget. Was there even a plan? Bram? ;] We can do one on saturday, although sunday would be much better for me. Does anyone else have a strong day preference? -Bram Cohen "Markets can remain irrational longer than you can remain solvent" -- John Maynard Keynes From stevej at pobox.com Thu Apr 4 21:01:02 2002 From: stevej at pobox.com (steve jenson) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] (Is there a) p2p-hackers SF/Bay Area meeting this weekend? In-Reply-To: Message-ID: I'll be busy all day Saturday. Sunday works best for me, as well. -sj On 4/4/02 8:57 PM, "Bram Cohen" wrote: > Dan Moniz wrote: > >> Thought I might inquire as to see if anyone was interested in doing another >> ad-hoc p2p-hackers get together at the Metreon. For my own selfish >> purposes, I'm still in town, and I've talked to a friend who would be >> interested in coming along as well. Was the plan to do them monthly or >> bi-weekly? I forget. Was there even a plan? Bram? ;] > > We can do one on saturday, although sunday would be much better for > me. Does anyone else have a strong day preference? > > -Bram Cohen > > "Markets can remain irrational longer than you can remain solvent" > -- John Maynard Keynes > > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers > From burton at openprivacy.org Thu Apr 4 23:16:01 2002 From: burton at openprivacy.org (Kevin A. Burton) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] Resources discussing secure time (nonce) in a distributed environment. In-Reply-To: References: <87hen6x6i7.fsf@openprivacy.org> <15517.5326.823888.665525@openprivacy.org> <87y9givk93.fsf@openprivacy.org> Message-ID: <87n0wi1usv.fsf@openprivacy.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Zooko writes: > Kevin A. Burton wrote: > > > > Fen Labalme writes: > > > > > See the FAQ on Digital Tinme Stamping at > > > http://saturn.tcs.hut.fi/~helger/crypto/link/timestamping/ > > > I'd like to discourage p2p hackers from using timestamping in general. > > For one thing, it can't be done with complete security. The closest you can > get is to have a set of timestamp servers that you choose to trust such that > as long as there is an honest subset of some size or other, you get honest > timestamps. That turns out to be complicated and expensive and there's always > the niggling doubt that maybe someone *has* succeeded in compromising enough > of the servers you chose. Zooko. Distributed time is a huge problem. Of course if it is solved a lot of good things cwould happen. > For another thing, you probably don't need it. On most occasions when you > think you want timestamps you would probably get a simpler and more secure > protocol by issuing a challenge or requiring a nonce. nonce is good. But doesn't accomplish everything. For example a distributed reputation system would very much benefit from time based gaming. > And the for the last reason, the use of time introduces non-obvious state > dependencies. 
If you use nonces, challenges, digitally signed certificates > and so forth, then it is obvious to everyone involved that operation X depends > on the results from operation Y. If you use the passage of time, then an > operation can succeed or fail depending on what time a different node thought > that it was when it issued a message, or how many microseconds it took a > message to travel, or how much slop a node was allowing. I agree that there are a ton of situations where one should not use time. I think that there are still some reasons to use time. The one remaining problem is that it is VERY complicated. It won't be happening anytime soon though as a DTS is very hard to build. > I get sort of grouchy about this issue because it is one of the "bad > intuitions" that people have. It's what someone called "The Big Clock in the > Sky", which goes along with "The Big Phonebook in the Sky" and "The Big > Dictionary in the Sky" and "synchronous connections" as ideas that programmers > intuitively have from their experiences in Real Life which do not work very > well in the world of mutually distrusting and remote nodes. I understand totally... I just don't think you understand what I want to use it for. I think that in the long term it will be required to build a real distributed repuation system. I may be wrong (and I hope so) but I still need to do a lot of thinking on the subject. Kevin - -- Kevin A. Burton ( burton@apache.org, burton@openprivacy.org, burtonator@acm.org ) Location - San Francisco, CA, Cell - 415.595.9965 Jabber - burtonator@jabber.org, Web - http://relativity.yi.org/ Windows 95 - A 32 bit extension to a 16 bit shell for a 8 bit operating system designed for 4 bit computers by a 2 bit company that can't stand 1 bit of competition. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6 (GNU/Linux) Comment: Get my public key at: http://relativity.yi.org/pgpkey.txt iD8DBQE8rU7PAwM6xb2dfE0RAjhvAJ4nNOg282v9mwR8wDqXFmQnniEUZwCeKpjw OBN4+ehCh1Z6bQhwtPKihPQ= =jefA -----END PGP SIGNATURE----- From burton at openprivacy.org Thu Apr 4 23:23:01 2002 From: burton at openprivacy.org (Kevin A. Burton) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] Resources discussing secure time (nonce) in a distributed environment. In-Reply-To: <146301c1dbb1$87d06380$2a01a8c0@golden> References: <87hen6x6i7.fsf@openprivacy.org> <15517.5326.823888.665525@openprivacy.org> <87y9givk93.fsf@openprivacy.org> <146301c1dbb1$87d06380$2a01a8c0@golden> Message-ID: <87it761uh0.fsf@openprivacy.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 "Gordon Mohr" writes: > Zooko writes: > > I get sort of grouchy about this issue because it is one of the "bad intuitions" > > that people have. It's what someone called "The Big Clock in the Sky", which > > goes along with "The Big Phonebook in the Sky" and "The Big Dictionary in the > > Sky" and "synchronous connections" as ideas that programmers intuitively have > > from their experiences in Real Life which do not work very well in the world of > > mutually distrusting and remote nodes. > > But isn't there a big clock in the sky nowadays, via GPS? Clock skew seems > to me an occasional headache but essentially "solved" by GPS and other > constant-connectivity qualities. The problem isn't getting time... it is making sure it is secure and that someone doesn't lie. Kevin - -- Kevin A. 
Burton ( burton@apache.org, burton@openprivacy.org, burtonator@acm.org ) Location - San Francisco, CA, Cell - 415.595.9965 Jabber - burtonator@jabber.org, Web - http://relativity.yi.org/ Windows 95 - A 32 bit extension to a 16 bit shell for a 8 bit operating system designed for 4 bit computers by a 2 bit company that can't stand 1 bit of competition. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6 (GNU/Linux) Comment: Get my public key at: http://relativity.yi.org/pgpkey.txt iD8DBQE8rVB7AwM6xb2dfE0RAsB2AKDCbA/lScsVnq5xkaSBRe/qP7RmxgCeJ128 zexm6f8yhnP3cJxWJ7XMaaU= =pc/G -----END PGP SIGNATURE----- From burton at openprivacy.org Thu Apr 4 23:23:02 2002 From: burton at openprivacy.org (Kevin A. Burton) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] Resources discussing secure time (nonce) in a distributed environment. In-Reply-To: <20020404171758.GB8137@tinyplanet.ca> References: <87hen6x6i7.fsf@openprivacy.org> <15517.5326.823888.665525@openprivacy.org> <87y9givk93.fsf@openprivacy.org> <146301c1dbb1$87d06380$2a01a8c0@golden> <20020404171758.GB8137@tinyplanet.ca> Message-ID: <87elhu1ugi.fsf@openprivacy.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Stephen van Egmond writes: > Gordon Mohr (gojomo@usa.net) wrote: > > But isn't there a big clock in the sky nowadays, via GPS? Clock skew seems > > to me an occasional headache but essentially "solved" by GPS and other > > constant-connectivity qualities. > > When my computer can pick a clock signal off a satellite with some > standard $5 component on the motherboard, then it's solved. xntp :) - -- Kevin A. Burton ( burton@apache.org, burton@openprivacy.org, burtonator@acm.org ) Location - San Francisco, CA, Cell - 415.595.9965 Jabber - burtonator@jabber.org, Web - http://relativity.yi.org/ Using an area of the Internet the size of Ireland, pedophiles can make your keyboard release toxic vapors that can make you more suggestible. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6 (GNU/Linux) Comment: Get my public key at: http://relativity.yi.org/pgpkey.txt iD8DBQE8rVCMAwM6xb2dfE0RAnORAKCw3FEipXVCmaIHWc8XzozrluOz7wCgp8TE RkFHJ2/cBi6s1OwqyykI/GM= =HPSM -----END PGP SIGNATURE----- From fen at openprivacy.org Fri Apr 5 07:13:01 2002 From: fen at openprivacy.org (Fen Labalme) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] Resources discussing secure time (nonce) in a distributed environment. In-Reply-To: <87n0wi1usv.fsf@openprivacy.org> References: <87hen6x6i7.fsf@openprivacy.org> <15517.5326.823888.665525@openprivacy.org> <87y9givk93.fsf@openprivacy.org> <87n0wi1usv.fsf@openprivacy.org> Message-ID: <15533.48884.526811.607747@openprivacy.org> Kevin A. Burton writes: > Zooko writes: > > For another thing, you probably don't need it. On most occasions when you > > think you want timestamps you would probably get a simpler and more secure > > protocol by issuing a challenge or requiring a nonce. > > nonce is good. But doesn't accomplish everything. > > For example a distributed reputation system would very much benefit from time > based gaming. True distributed time is very difficult, and I agree with Zooko that it may not be beneficial. Stuart Haber, to create digital timestamping, daily published his document hashes online along with their time stamps, but to be secure he published a master hash every day in the NYT classifieds. A cool hack, but an ugly one as far a P2P systems go. 
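The linked-timestamping trick Fen describes reduces to very little code. A toy sketch, assuming document hashes arrive as hex digests, using SHA-1 for period flavor, and ignoring the per-document proof paths a real service would hand back:

    import hashlib

    def daily_master(doc_hashes_hex, prev_master_hex):
        # Fold today's document hashes into a digest chained onto yesterday's
        # master, so no day's batch can be rewritten without changing every
        # later master.  Publishing the result somewhere widely witnessed (the
        # NYT classifieds, in Haber's case) is what makes back-dating expensive.
        h = hashlib.sha1(bytes.fromhex(prev_master_hex))
        for d in sorted(doc_hashes_hex):
            h.update(bytes.fromhex(d))
        return h.hexdigest()

The chain itself is cheap; the awkward part for a P2P setting, as noted, is the single widely trusted place where each day's master gets published.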
For gaming, the servers can agree on a trusted time server and use an encrypted NTP protocol to get secure "before" and "after" readings, perhaps even with time-delta units, but the relationship between that and "real time" or "time WRT any peer outside the closed system" is difficult to guarantee. Fen From lgonze at gonze.com Fri Apr 5 09:23:01 2002 From: lgonze at gonze.com (Lucas Gonze) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] Resources discussing secure time (nonce) in a distributed environment. In-Reply-To: <87it761uh0.fsf@openprivacy.org> Message-ID: Another problem is durability. Code can last longer than the big clock in the sky -- the big clock can be affected by weather, war, the economy, meteors, etc. Why build in obsolescence? From tboyle at rosehill.net Fri Apr 5 11:24:01 2002 From: tboyle at rosehill.net (Todd Boyle) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] Resources discussing secure time (nonce) in a distributed environment. In-Reply-To: References: <87hen6x6i7.fsf@openprivacy.org> <15517.5326.823888.665525@openprivacy.org> <87y9givk93.fsf@openprivacy.org> Message-ID: <5.1.0.14.0.20020405105948.036baec0@popmail.cortland.com> At 07:36 AM 4/3/02, Zooko wrote: >I get sort of grouchy about this issue because it is one of the "bad >intuitions" >that people have. It's what someone called "The Big Clock in the Sky", which >goes along with "The Big Phonebook in the Sky" and "The Big Dictionary in the >Sky" and "synchronous connections" as ideas that programmers intuitively have >from their experiences in Real Life which do not work very well in the >world of >mutually distrusting and remote nodes. If there were an overwhelmingly intelligent host on the internet today, it would already see many different communities where individuals are already engaging in all sorts of buying/selling and other collaborations. I'm going to argue that what is really needed is an executable program that runs on your computer, that has very extensive intelligence and judgment, and is capable of understanding "The Big Clock in the Sky", "The Big Phonebook in the Sky" and "The Big Dictionary in the Sky" and "synchronous connections". This application would take advantage of them, *when available*. As well as webs of trust, generally, and the kinds of evidence that allow an independent evaluation of trust and credit. The sum of a large amount of observable information makes it hard to fool this Intelligent Peer. One could draw a set of requirements for such a program. The core of the program is a list of entities detected on the network, and a meta-model for the instantiation of different scoring models. The meta-model would provide a basic set of variables that all scoring models must certainly have: the entity identifier, the list of 1 or more Score values, the list of 1 or more observed facts together with the datatype and other details of a fact such as time, date, location and credibility of the fact, etc. Only after building out this Intelligent Peer would you plug in the Technique of the Month, the Transport of the Month, the Vocabulary of the Month... but the Self would remain in the center, objective, the witness and observer of the Senses. Is this useful at all? In general everybody is inside one or another "walled garden" whose borders are mostly artificial, or proprietary differences, or dependencies on one host or another. This is going to continue for many, many years. The vast majority of the nodes you really want to connect with are in those walled gardens.
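The entity/score/fact meta-model sketched a few paragraphs up might look roughly like the following; every field name here is invented for illustration rather than taken from any existing system.

    from dataclasses import dataclass, field

    @dataclass
    class Fact:
        kind: str           # e.g. "completed-trade", "bounced-payment"
        value: object       # the observed datum itself
        observed_at: float  # when it was observed (local time is fine here)
        location: str       # where / via which community it was observed
        credibility: float  # 0.0 (hearsay) through 1.0 (witnessed directly)

    @dataclass
    class Entity:
        identifier: str                             # key fingerprint, address, etc.
        scores: dict = field(default_factory=dict)  # scoring-model name -> score
        facts: list = field(default_factory=list)   # accumulated Fact records

    def rescore(entity, models):
        # A scoring model is any callable from a list of facts to a number, so
        # plugging in the "Technique of the Month" is just another dict entry.
        for name, model in models.items():
            entity.scores[name] = model(entity.facts)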
Quick, tell me the top ten places or ways individuals conduct business *with each other*? In no particular order: eBay; their own websites; *.forsale newsgroups; web auctions or hubs or markets; source forge collaborations to build something; collaboration platforms, self hosted; collaboration platforms, commercially hosted; etc.; payment systems and their communities (the GBCs, LETs etc.) There are ways we communicate, ways we execute contracts, and ways we settle contracts, i.e. paying. Whenever you talk about a secure, private communication fabric you're talking about business. If it's secure, business will happen. If it's not secure, business dealings die. Business is your canary in the coal mine. In fact, graymarket and illegal business is your canary. Show me a platform where there is no illegal business, like encrypted email. That's because the platform is hopelessly insecure. Show me a platform, like a cell phone, that, used together with associated behaviors, is reasonably secure. Cellphones are the fundamental tool of cocaine dealers in US cities. This is the platform where individual P2P business will explode. That is my belief. Well, there are an awful lot of software geeks, on p2p-hackers and 1000 other lists and platforms and languages, who are always talking about security but where is it? There's still no serious criminal business on the internet. There is still no platform on the Windows or even Intel processor that is fit for even ordinary digital cash wallets. The e-gold list this week is full of messages by true experts, warning each other of hacking attacks, one or more hackers trying to get into the PC to steal digital coins. The coins are spendable of course. Would it be correct that there is never going to be a total solution, a magic bullet? Build an intelligent application that evaluates the credibility and reputation and security of other "nodes" on the internet by evaluating the whole constellation of facts that can be learned about that node? Todd From bram at gawth.com Sat Apr 6 17:10:01 2002 From: bram at gawth.com (Bram Cohen) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] meeting tomorrow Message-ID: welp, I'm gonna be showing up for a p2p-hackers meeting tomorrow. I think we set clocks forward tonight, which means my watch is gonna be off by an hour for six months again. I'll be at the metreon at 1pm current time and 2pm tomorrow time tomorrow, sunday, the 6th. -Bram Cohen "Markets can remain irrational longer than you can remain solvent" -- John Maynard Keynes From burton at openprivacy.org Sat Apr 6 17:28:02 2002 From: burton at openprivacy.org (Kevin A. Burton) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] meeting tomorrow In-Reply-To: References: Message-ID: <87ofgwqoxe.fsf@openprivacy.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Bram Cohen writes: > welp, I'm gonna be showing up for a p2p-hackers meeting tomorrow. I think we > set clocks forward tonight, which means my watch is gonna be off by an hour > for six months again. > > I'll be at the metreon at 1pm current time and 2pm tomorrow time tomorrow, > sunday, the 6th. Bram... You need to give everyone about a week's notice before doing something like this. Now I can't make it because I already have plans :( If you had announced this in advance I would have kept this time slot open. Kevin - -- Kevin A.
Burton ( burton@apache.org, burton@openprivacy.org, burtonator@acm.org ) Location - San Francisco, CA, Cell - 415.595.9965 Jabber - burtonator@jabber.org, Web - http://relativity.yi.org/ One man's villain is another man's employer. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6 (GNU/Linux) Comment: Get my public key at: http://relativity.yi.org/pgpkey.txt iD8DBQE8r6BNAwM6xb2dfE0RAgjgAJ4tyVdHkpaW9iBBhzH69fM/ta9cvACgzaFm XeYfjoDYCsHj0TfxoDROt3M= =nJO6 -----END PGP SIGNATURE----- From lisarein at finetuning.com Sat Apr 6 17:45:01 2002 From: lisarein at finetuning.com (Lisa Rein) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] meeting tomorrow In-Reply-To: Message-ID: where's this p2p hacker meeting again? it looks like it's in SF? thanks! lisa > From: Bram Cohen > Reply-To: p2p-hackers@zgp.org > Date: Sat, 6 Apr 2002 17:09:08 -0800 (PST) > To: p2p-hackers@zgp.org > Subject: [p2p-hackers] meeting tomorrow > > welp, I'm gonna be showing up for a p2p-hackers meeting tomorrow. I think > we set clocks forward tonight, which means my watch is gonna be off by an > hour for six months again. > > I'll be at the metreon at 1pm current time and 2pm tomorrow time tomorrow, > sunday, the 6th. > > -Bram Cohen > > "Markets can remain irrational longer than you can remain solvent" > -- John Maynard Keynes > > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers From bram at gawth.com Sat Apr 6 18:09:01 2002 From: bram at gawth.com (Bram Cohen) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] meeting tomorrow In-Reply-To: <87ofgwqoxe.fsf@openprivacy.org> Message-ID: Kevin A. Burton wrote: > You need to give everyone about a weeks notice before doing something > like this. Now I can't make it because I already have plans :( Sorry Kevin, we're doing this one because dnm's gonna be leaving on monday (see earlier mail) -Bram Cohen "Markets can remain irrational longer than you can remain solvent" -- John Maynard Keynes From bram at gawth.com Sat Apr 6 18:17:01 2002 From: bram at gawth.com (Bram Cohen) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] meeting tomorrow In-Reply-To: Message-ID: Lisa Rein wrote: > where's this p2p hacker meeting again? it looks like it's in SF? It's at the metreon, in the food court area, in the area you have to walk up some stairs to get to, in front of the mural. -Bram Cohen "Markets can remain irrational longer than you can remain solvent" -- John Maynard Keynes From burton at openprivacy.org Mon Apr 8 01:35:01 2002 From: burton at openprivacy.org (Kevin A. Burton) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] meeting tomorrow In-Reply-To: References: Message-ID: <87n0wek2sz.fsf@openprivacy.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Bram Cohen writes: > Kevin A. Burton wrote: > > > You need to give everyone about a weeks notice before doing something > > like this. Now I can't make it because I already have plans :( > > Sorry Kevin, we're doing this one because dnm's gonna be leaving on monday > (see earlier mail) Ok... guess it isn't a big issue. Kevin - -- Kevin A. 
Burton ( burton@apache.org, burton@openprivacy.org, burtonator@acm.org ) Location - San Francisco, CA, Cell - 415.595.9965 Jabber - burtonator@jabber.org, Web - http://relativity.yi.org/ Single acts of tyranny may be ascribed to the accidental opinion of the day; but a series of oppressions, begun at a distinguished period, and pursued unalterably thro' every change of ministers, to plainly prove a deliberate, systematical plan of reducing us to slavery. -- Thomas Jefferson -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6 (GNU/Linux) Comment: Get my public key at: http://relativity.yi.org/pgpkey.txt iD8DBQE8sVXMAwM6xb2dfE0RAhWIAJ9eWTqqkXgFpr2yY5B+ZkQEUNY6lgCcDk1t gXTZV/laBMjXoIy2ysX2DkA= =SHbP -----END PGP SIGNATURE----- From burton at openprivacy.org Mon Apr 8 02:14:01 2002 From: burton at openprivacy.org (Kevin A. Burton) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] meeting tomorrow In-Reply-To: References: Message-ID: <87it72k2kw.fsf@openprivacy.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Lisa Rein writes: > where's this p2p hacker meeting again? it looks like it's in SF? Yeah.. It is just a bunch of us who sit and talk about P2P hacker stuff for a few hours. Usually really laid back and casual. We meet up at the Metreon in SF on Sunday and if things go well we might go out for food or something. ... You should come to the next one. Will probably happen in another few weeks. See you then. Kevin - -- Kevin A. Burton ( burton@apache.org, burton@openprivacy.org, burtonator@acm.org ) Location - San Francisco, CA, Cell - 415.595.9965 Jabber - burtonator@jabber.org, Web - http://relativity.yi.org/ $live{free} || die ""; -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6 (GNU/Linux) Comment: Get my public key at: http://relativity.yi.org/pgpkey.txt iD8DBQE8sVbvAwM6xb2dfE0RAg6ZAKDHOxCuQKwYNMu4hefeMGva21Qm1gCfYbSm VJFtZm3rnGB4a67iqwgpuD0= =nqkp -----END PGP SIGNATURE----- From greg at electricrain.com Mon Apr 8 17:36:02 2002 From: greg at electricrain.com (Gregory P. Smith) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] Resources discussing secure time (nonce) in a distributed environment. In-Reply-To: <87n0wi1usv.fsf@openprivacy.org> References: <87hen6x6i7.fsf@openprivacy.org> <15517.5326.823888.665525@openprivacy.org> <87y9givk93.fsf@openprivacy.org> <87n0wi1usv.fsf@openprivacy.org> Message-ID: <20020409003515.GA15257@zot.electricrain.com> > > Zooko writes: > > I'd like to discourage p2p hackers from using timestamping in general. > > > For another thing, you probably don't need it. On most occasions when you > > think you want timestamps you would probably get a simpler and more secure > > protocol by issuing a challenge or requiring a nonce. > > nonce is good. But doesn't accomplish everything. > > For example a distributed reputation system would very much benefit from time > based gaming. Take a hard look at what you think you need timestamps for. In many cases all you really need are an indication of relative time, not accurate absolute time. For instance in the old mojonation, not mnet, stuff each agent's meta information (its key, contact info, etc..) contains a sequence number. The sequence number in mnet's case is incremented upon any meta info change. but this type of setup could easily be changed to "all good agents increment their current number once every N minutes" if you need a notion of the relative time between some p2p network event with an agent (in a protocol where an agent could not gain an advantage by lying about its relative time). 
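A rough sketch of that "increment every N minutes" variant, with the field layout and the sign() callable as placeholders for whatever key and metadata scheme the network already uses:

    import hashlib
    import time

    EPOCH_MINUTES = 10  # all well-behaved agents bump their counter this often

    def agent_id(public_key_bytes):
        # Name the agent by the hash of its public key, so the identifier is
        # bound to whatever key signs the metadata.
        return hashlib.sha1(public_key_bytes).hexdigest()

    def current_epoch(started_at):
        # Purely relative time: meaningful only when comparing two values
        # produced by the same agent, never against anyone else's wall clock.
        return int((time.time() - started_at) // (EPOCH_MINUTES * 60))

    def signed_meta(public_key_bytes, contact_info, epoch, sign):
        # sign() stands in for the agent's own signature primitive.
        body = "%s|%s|%d" % (agent_id(public_key_bytes), contact_info, epoch)
        return body, sign(body.encode())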
If this information is signed by the agent and the agent uses the hash of its public key as its identifier then you have a reliable source of relativity from that agent in the context of whatever message its sending with that info in it. People will never set the clocks on their computers accurately all over, and a lot of lusers who (ab)use p2p systems to do your valuable testing are more likely to purposely set it wrong to get around time based software licenses, etc. -- Gregory P. Smith From burton at openprivacy.org Mon Apr 8 17:49:01 2002 From: burton at openprivacy.org (Kevin A. Burton) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] Digital Landscape - P2P Legal conference at Stanford. Message-ID: <87y9fxofz2.fsf@openprivacy.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Saw this posted to Politechbot http://www.politechbot.com/p-03358.html It has DRM and DMCA talks which are obviously important to all P2P Hackers. http://www.law.stanford.edu/slata/digital_landscapes/register.html Only costs $50 to register... but free to Stanford students. They probably ask for ID when you show up... Kevin - -- Kevin A. Burton ( burton@apache.org, burton@openprivacy.org, burtonator@acm.org ) Location - San Francisco, CA, Cell - 415.595.9965 Jabber - burtonator@jabber.org, Web - http://relativity.yi.org/ The dog ate my Activation Key. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6 (GNU/Linux) Comment: Get my public key at: http://relativity.yi.org/pgpkey.txt iD8DBQE8sjogAwM6xb2dfE0RAv2sAKCx5D6piYGrK2Ac4KNoSTZuRZE0ugCg0gMV chFTp7e33RZcKWV3J1WwNcA= =rHj9 -----END PGP SIGNATURE----- From lisarein at finetuning.com Mon Apr 8 18:15:02 2002 From: lisarein at finetuning.com (Lisa Rein) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] Digital Landscape - P2P Legal conference at Stanford. In-Reply-To: <87y9fxofz2.fsf@openprivacy.org> Message-ID: okay. i give up. what's the date of the conference? (can't find it on the website :-) lisa > From: burton@openprivacy.org (Kevin A. Burton) > Reply-To: p2p-hackers@zgp.org > Date: 08 Apr 2002 17:47:29 -0700 > To: p2p-hackers mailing list > Subject: [p2p-hackers] Digital Landscape - P2P Legal conference at Stanford. > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > > Saw this posted to Politechbot > > http://www.politechbot.com/p-03358.html > > It has DRM and DMCA talks which are obviously important to all P2P Hackers. > > http://www.law.stanford.edu/slata/digital_landscapes/register.html > > Only costs $50 to register... but free to Stanford students. They probably > ask > for ID when you show up... > > Kevin > > - -- > Kevin A. Burton ( burton@apache.org, burton@openprivacy.org, > burtonator@acm.org ) > Location - San Francisco, CA, Cell - 415.595.9965 > Jabber - burtonator@jabber.org, Web - http://relativity.yi.org/ > > The dog ate my Activation Key. > > > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.0.6 (GNU/Linux) > Comment: Get my public key at: http://relativity.yi.org/pgpkey.txt > > iD8DBQE8sjogAwM6xb2dfE0RAv2sAKCx5D6piYGrK2Ac4KNoSTZuRZE0ugCg0gMV > chFTp7e33RZcKWV3J1WwNcA= > =rHj9 > -----END PGP SIGNATURE----- > _______________________________________________ > p2p-hackers mailing list > p2p-hackers@zgp.org > http://zgp.org/mailman/listinfo/p2p-hackers From brad.codd at chipsurfer.com Mon Apr 8 20:40:02 2002 From: brad.codd at chipsurfer.com (Brad Codd) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] Digital Landscape - P2P Legal conference at Stanford. In-Reply-To: Message-ID: Hi Lisa: > what's the date April 20, 2002. 
See page http://www.law.stanford.edu/slata/digital_landscapes/. You do need to have graphics switched on. Regards, Brad From zooko at zooko.com Tue Apr 9 11:09:01 2002 From: zooko at zooko.com (Zooko) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] Re: Resources discussing secure time (nonce) in a distributed environment. In-Reply-To: Message from "Gregory P. Smith" of "Mon, 08 Apr 2002 17:35:15 PDT." <20020409003515.GA15257@zot.electricrain.com> References: <87hen6x6i7.fsf@openprivacy.org> <15517.5326.823888.665525@openprivacy.org> <87y9givk93.fsf@openprivacy.org> <87n0wi1usv.fsf@openprivacy.org> <20020409003515.GA15257@zot.electricrain.com> Message-ID: Greg Smith makes a good point that in addition to nonces, sequence numbers, challenges, and other techniques, you can also use "local relative time" (or what physicists apparently call "proper time"). This is a measurement of time which is only meaningful when comparing two samples from the same machine. In particular, it is *not* meaningful to compare a local relative time to a universal standard like Unix seconds-since-epoch or Gregorian calendar or whatever. Also it isn't meaningful to compare times sampled from different machines (or at least not from different trust spheres). I usually prefer nonces-and-challenges to sequence numbers and sequence numbers to local relative timestamps, but all of these techniques are reasonable IMO, and the "big clock in the sky" (that is: synchronized time across trust boundaries including "universal time" such as seconds-since-epoch) I usually consider to be unreasonable. By the way, despite my rant I'm not perfectly absolute about this.
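As a toy illustration of the nonce-and-challenge pattern (a sketch with invented names, assuming the two parties already share a secret key; it is not taken from any of the systems discussed in this thread):

    import os, hmac, hashlib

    def make_challenge():
        # unpredictable and single-use; the verifier remembers it briefly
        return os.urandom(16)

    def respond(shared_key, challenge, payload):
        # the prover binds its reply (payload is bytes) to the verifier's challenge
        mac = hmac.new(shared_key, challenge + payload, hashlib.sha1).digest()
        return payload, mac

    def verify(shared_key, challenge, payload, mac):
        expected = hmac.new(shared_key, challenge + payload, hashlib.sha1).digest()
        return hmac.compare_digest(expected, mac)

The verifier learns only that the reply was produced after the challenge was issued, which is usually all a protocol really needs from "time": no clocks, no skew windows, no timestamps.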
I'm contributing to a design right now that currently includes synchronization across trust boundaries with an allowable skew in the range of 24 hours or so. I consider this to be a complication and a potential problem, and I hope to "optimize out" that part of the design if possible, but so far it seems to be needed, because of some unique requirements of this particular project. So it isn't so much that I believe you should never ever do it, as that I think most people want to do it because they are used to it from Real Life, they don't appreciate the problems it can cause, and they haven't considered the alternatives. Regards, Zooko --- zooko.com Security and Distributed Systems Engineering --- From zooko at zooko.com Tue Apr 9 11:18:01 2002 From: zooko at zooko.com (Zooko) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] security techniques that rely on constrained nodes (was: Resources discussing secure time (nonce) in a distributed environment.) In-Reply-To: Message from Fen Labalme of "Fri, 05 Apr 2002 07:12:52 PST." <15533.48884.526811.607747@openprivacy.org> References: <87hen6x6i7.fsf@openprivacy.org> <15517.5326.823888.665525@openprivacy.org> <87y9givk93.fsf@openprivacy.org> <87n0wi1usv.fsf@openprivacy.org> <15533.48884.526811.607747@openprivacy.org> Message-ID: Fen Labalme wrote: > > True distributed time is very difficult, and I agree with Zooko that it may > not be beneficial. Stuart Haber, to create digital timestamping, daily > published his document hashes online along with their time stamps, but to be > secure he published a master hash every day in the NYT classifieds. A cool > hack, but an ugly one as far a P2P systems go. This is a good example of how some big security issues hang from the question of whether the "nodes" in your network are constrained and how they are constrained. In this example (good example, Fen!), the NYT classifieds are being treated as a "node" that broadcasts to multiple recipients, and it is constrained inasmuch as it cannot send one message to one recipient and a different message to another. That's why Stuart Haber can publish his message in the NYT classifieds, then go buy a copy from the street corner and verify whether his message was tampered with. Obviously unconstrained nodes, which are the kind that I like to deal with as a p2p hacker, can't be used for this same trick. Even when we look at constrained nodes (i.e., "Real Life" things like newspapers), Mark Miller likes to point out that such security techniques are *not* absolute but are only raising the cost of doing an attack. He uses the "Mission Impossible" team to illustrate. You can imagine the Mission Impossible team placing an edited copy of the New York Times, or even causing *all* copies of the NYT to come with their edited version -- except for the copy that Stuart Haber buys to verify his message! It's only a question of whether your attacker is willing to hire the Mission Impossible team to forge Stuart Haber's timestamps. :-) Regards, Zooko, humming the "Mission Impossible" theme --- zooko.com Security and Distributed Systems Engineering --- From ingo at fargonauten.de Sat Apr 13 09:14:01 2002 From: ingo at fargonauten.de (Ingo Luetkebohle) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] next generation indymedia implementation Message-ID: <20020413180932.GA4432@fargonauten.de> Hiya, a couple of people are at the moment talking[1] about a next generation indymedia[2] implementation. 
One of the main goals is to make use of p2p systems, either a specific one or more than one, if possible. If anyone here is interested in helping the design with usefull insights and real-world experience, I'd like to invite you to subscribe to the list (address can be found from the Wiki) and participate. It would certainly be much needed and appreciated. I, personally, would like to be frank and ask for a person with a sense of simplicity :) take care! [1] http://www.bandwidthcoop.org/imc/tech/NewCode (Wiki) [2] http://www.indymedia.org/ (or ask me anything you want to know) -- Ingo L?tkebohle / ingo@fargonauten.de http://fargonauten.de/people/ingo PGP encrypted e-mail preferred. Fingerprint follows 3187 4DEC 47E6 1B1E 6F4F 57D4 CD90 C164 34AD CE5B From svanegmond at tinyplanet.ca Sat Apr 13 09:39:02 2002 From: svanegmond at tinyplanet.ca (Stephen van Egmond) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] next generation indymedia implementation In-Reply-To: <20020413180932.GA4432@fargonauten.de> References: <20020413180932.GA4432@fargonauten.de> Message-ID: <20020413163846.GA28250@tinyplanet.ca> Ingo Luetkebohle (ingo@fargonauten.de) wrote: > a couple of people are at the moment talking[1] about a next > generation indymedia[2] implementation. One of the main goals is to > make use of p2p systems, either a specific one or more than one, if > possible. > > If anyone here is interested in helping the design with usefull > insights and real-world experience, I'd like to invite you to > subscribe to the list (address can be found from the Wiki) and > participate. It would certainly be much needed and appreciated. Ingo, I've got a lot on the go right now, so I can't devote enough time to do your cause justice - as worthy as it is. I humbly suggest that you begin with something that already works. As I understand it, the indymedia infrastructure is starting to show its age and hack origins. Rather than write it all from scratch, why not start with something that works -- Scoop would be a fine choice. Though its article-moderation philosophy might bear some scruitiny by whoever "runs" the IMC. It works quite well for kuro5hin.org; here's how it works: there are two kinds of stories: diaries and articles. Diaries are posted immediately. Articles are posted into a moderation queue where any registered user may vote or offer editorial comments on it. The votes are -1 (dump it), 0 (don't care), +1 (post to section), +1 (post to front page). Casual visitors see anything that gets enough +1's, whether on the front page or section pages. And it all gets syndicated out through RSS. There's numerous other content systems, some (like Movable Type and Drupal) which include distributed content bells and whistles, though it's mostly headlines carried via RSS, this can very easily be extended. But for the moment, it's simple and it works. Quite a bit of this, I must emphasize, has very little p2p goodness built in. -Steve From bram at gawth.com Mon Apr 15 12:40:01 2002 From: bram at gawth.com (Bram Cohen) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] next generation indymedia implementation In-Reply-To: <20020413180932.GA4432@fargonauten.de> Message-ID: Ingo Luetkebohle wrote: > a couple of people are at the moment talking[1] about a next > generation indymedia[2] implementation. One of the main goals is to > make use of p2p systems, either a specific one or more than one, if > possible. Hey Ingo, have you looked into BitTorrent? 
It's almost mature, and can be used for large-scale content distribution, sans modification. http://bitconjurer.org/BitTorrent/ -Bram Cohen "Markets can remain irrational longer than you can remain solvent" -- John Maynard Keynes From jim at at.org Mon Apr 15 13:32:01 2002 From: jim at at.org (Jim Carrico) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] next generation indymedia implementation In-Reply-To: References: Message-ID: >Ingo Luetkebohle wrote: > >> a couple of people are at the moment talking[1] about a next >> generation indymedia[2] implementation. One of the main goals is to >> make use of p2p systems, either a specific one or more than one, if >> possible. > >Hey Ingo, have you looked into BitTorrent? It's almost mature, and can be >used for large-scale content distribution, sans modification. Hi Bram - I had a conversation with Zooko at codecon which covered the following points, but haven't found the time to take it any further, basically it went like this: j- bitTorrent looks very cool, but it seems like it's mainly designed to help serve a relatively small number of relatively popular files. I have the opposite problem, a relatively large number of relatively unpopular files - I host a number of musician's websites, and allow them to offer free downloads to fans - none of the sites are particularly busy, but in aggregate it will start costing me too much money if traffic continues to grow. It seems like Mnet may be a better fit for my needs - do you have plans to develop a 'helper-app' style front end similar to bittorrent? z - good idea, maybe you should do it. j - er um... Anyway, short version, I want to p2p-ify the music sites I'm hosting, so we can afford to promote them without going broke. It seems that Indymedia is in a similar bind - lots of small files. Correct me if I am mistaken: according to the demo i saw it looked like swarmcasting was only happening between peers that were actively downloading a particular file. Is this still the case? I realize that this provides an elegant solution to the resource discovery problem - the server always knows who is downloading a particular file at any particular time, and hence who can carry some of the load - and determining the persistance of the file after the download is complete introduces numerous headaches. any plans/implementations in this regard? That said, I think that Indymedia could benefit from BitTorrent right now, for distributing their "newsreal" videos - it would allow them to offer a range of resolutions and hopefully ween them off of realplayer -Jim C. From bram at gawth.com Mon Apr 15 13:52:02 2002 From: bram at gawth.com (Bram Cohen) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] next generation indymedia implementation In-Reply-To: Message-ID: Jim Carrico wrote: > Anyway, short version, I want to p2p-ify the music sites I'm hosting, > so we can afford to promote them without going broke. It seems that > Indymedia is in a similar bind - lots of small files. Correct me if I > am mistaken: according to the demo i saw it looked like swarmcasting > was only happening between peers that were actively downloading a > particular file. Is this still the case? I realize that this > provides an elegant solution to the resource discovery problem - the > server always knows who is downloading a particular file at any > particular time, and hence who can carry some of the load - and > determining the persistance of the file after the download is > complete introduces numerous headaches. 
any plans/implementations in > this regard? Not at this time. I'm having enough technical problems just getting the basic functionality working. By the way, the general term is 'swarming', 'swarmcasting' is swarmcast-specific. > That said, I think that Indymedia could benefit from BitTorrent right > now, for distributing their "newsreal" videos - it would allow them > to offer a range of resolutions and hopefully ween them off of > realplayer Yeah, it's best to use p2p for what it can handle well already, rather than trying to make it handle everything prematurely. Slightly off-topic for this list - are there any decent non-proprietary video formats? -Bram Cohen "Markets can remain irrational longer than you can remain solvent" -- John Maynard Keynes From ingo at fargonauten.de Tue Apr 16 02:26:02 2002 From: ingo at fargonauten.de (Ingo Luetkebohle) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] next generation indymedia implementation In-Reply-To: <20020413163846.GA28250@tinyplanet.ca> References: <20020413180932.GA4432@fargonauten.de> <20020413163846.GA28250@tinyplanet.ca> Message-ID: <20020415200720.GA4370@fargonauten.de> Stephen, On Sat, Apr 13, 2002 at 11:38:46AM -0500, Stephen van Egmond wrote: > Rather than write it all from scratch, why not start with something > that works -- Scoop would be a fine choice. Yes, Scoop is definetely one of the options. A problem is interoperability (even if just importing old stuff) and at the moment we're trying to come up with some general stuff that will enable inter-op between different applications. Of course, at that point, p2p knowledge would come in very handy, lest we specify something that is not suitable on a p2p system or inefficient or whatever. -- Ingo L?tkebohle / ingo@fargonauten.de http://fargonauten.de/people/ingo PGP encrypted e-mail preferred. Fingerprint follows 3187 4DEC 47E6 1B1E 6F4F 57D4 CD90 C164 34AD CE5B From ingo at fargonauten.de Tue Apr 16 06:03:01 2002 From: ingo at fargonauten.de (Ingo Luetkebohle) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] next generation indymedia implementation In-Reply-To: ; from bram@gawth.com on Mon, Apr 15, 2002 at 12:39:49PM -0700 References: <20020413180932.GA4432@fargonauten.de> Message-ID: <20020416150225.B3302@fargonauten.de> On Mon, Apr 15, 2002 at 12:39:49PM -0700, Bram Cohen wrote: > Hey Ingo, have you looked into BitTorrent? It's almost mature, and can be > used for large-scale content distribution, sans modification. Yes, and it looks very good for the audio/video distribution (which is centralized at the moment). Its not of so much use for the articles themselves, though, and then we still have the issue of naming to solve if distributed publishing is to become possible in a meaningful way. A stand-alone tool, like BitTorrent but for searching would be a great thing :-) Ingo From burton at openprivacy.org Wed Apr 17 22:44:01 2002 From: burton at openprivacy.org (Kevin A. Burton) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] next generation indymedia implementation In-Reply-To: <20020413180932.GA4432@fargonauten.de> References: <20020413180932.GA4432@fargonauten.de> Message-ID: <87n0w1y351.fsf@openprivacy.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Ingo Luetkebohle writes: > Hiya, > > a couple of people are at the moment talking[1] about a next generation > indymedia[2] implementation. One of the main goals is to make use of p2p > systems, either a specific one or more than one, if possible. I think I have already been shown this... Have you seen Reptile? 
Very similar goals. http://reptile.openprivacy.org Of course Reptile handles most of the stuff you pointed out in your email... I think there were only about 1 or 2 items (calendaring) that we won't have in Reptile 0.6.0 > If anyone here is interested in helping the design with usefull insights and > real-world experience, I'd like to invite you to subscribe to the list > (address can be found from the Wiki) and participate. It would certainly be > much needed and appreciated. We are already down the path to implementation. It might be a good idea for us to talk and see how we can help each other. > I, personally, would like to be frank and ask for a person with a sense of > simplicity :) Thanks.. Kevin - -- Kevin A. Burton ( burton@apache.org, burton@openprivacy.org, burtonator@acm.org ) Location - San Francisco, CA, Cell - 415.595.9965 Jabber - burtonator@jabber.org, Web - http://relativity.yi.org/ Don't try to be a great man, just be a man. Let history make its judgements. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6 (GNU/Linux) Comment: Get my public key at: http://relativity.yi.org/pgpkey.txt iD8DBQE8vlyZAwM6xb2dfE0RAjX4AJwJ/4+13wKWiWxWx3+eMNPEA8WFEQCgpFhY hVEDuqBP4ErG3NZMXTg6ebg= =AcTv -----END PGP SIGNATURE----- From burton at openprivacy.org Wed Apr 17 22:48:02 2002 From: burton at openprivacy.org (Kevin A. Burton) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] next generation indymedia implementation In-Reply-To: <20020413163846.GA28250@tinyplanet.ca> References: <20020413180932.GA4432@fargonauten.de> <20020413163846.GA28250@tinyplanet.ca> Message-ID: <87ads1y2y5.fsf@openprivacy.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Stephen van Egmond writes: > Ingo Luetkebohle (ingo@fargonauten.de) wrote: > a couple of people are at the > moment talking[1] about a next > generation indymedia[2] implementation. One of > the main goals is to > make use of p2p systems, either a specific one or more > than one, if > possible. > > If anyone here is interested in helping the design > with usefull > insights and real-world experience, I'd like to invite you to > > subscribe to the list (address can be found from the Wiki) and > participate. > It would certainly be much needed and appreciated. > > Rather than write it all from scratch, why not start with something that works > -- Scoop would be a fine choice. Scoop isn't distributed... but I do agree that cooperation is good (*cough*, Reptile *cough*)... > Though its article-moderation philosophy might bear some scruitiny by whoever > "runs" the IMC. The moderation system within Reptile is based on the OpenPrivacy distributed reputation system. I am working on a paper describing it now.... > It works quite well for kuro5hin.org; here's how it works: there are two kinds > of stories: diaries and articles. K5 is cool.... but not a P2P app. Kevin - -- Kevin A. Burton ( burton@apache.org, burton@openprivacy.org, burtonator@acm.org ) Location - San Francisco, CA, Cell - 415.595.9965 Jabber - burtonator@jabber.org, Web - http://relativity.yi.org/ Hi ho, hi hum. 0000001 Hi ho, hi hum. 0000001 -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6 (GNU/Linux) Comment: Get my public key at: http://relativity.yi.org/pgpkey.txt iD8DBQE8vl2SAwM6xb2dfE0RAqhtAKCwIHbbkUEv0mrMmR8dB1JSwmKzKwCgqUrk kys4Q6Kg+RogoR7yeyqaHMs= =H9aX -----END PGP SIGNATURE----- From burton at openprivacy.org Wed Apr 17 22:49:02 2002 From: burton at openprivacy.org (Kevin A. 
Burton) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] next generation indymedia implementation In-Reply-To: References: Message-ID: <87662py2wk.fsf@openprivacy.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Bram Cohen writes: > Ingo Luetkebohle wrote: > > > a couple of people are at the moment talking[1] about a next > > generation indymedia[2] implementation. One of the main goals is to > > make use of p2p systems, either a specific one or more than one, if > > possible. > > Hey Ingo, have you looked into BitTorrent? It's almost mature, and can be > used for large-scale content distribution, sans modification. > > http://bitconjurer.org/BitTorrent/ This is a different problem Bram. This is not swarming distribution... this is distributed publication ALA Reptile. Kevin - -- Kevin A. Burton ( burton@apache.org, burton@openprivacy.org, burtonator@acm.org ) Location - San Francisco, CA, Cell - 415.595.9965 Jabber - burtonator@jabber.org, Web - http://relativity.yi.org/ It's not having what you want. It's wanting what you've got! - Sheryl Crow -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6 (GNU/Linux) Comment: Get my public key at: http://relativity.yi.org/pgpkey.txt iD8DBQE8vl3LAwM6xb2dfE0RAumsAKCPd28lmoB5bntzW9eTO+ogOfgjMwCgjoV8 AoO9+T/ocjuGAW8GiQdgEfk= =5YJF -----END PGP SIGNATURE----- From burton at openprivacy.org Wed Apr 17 22:56:02 2002 From: burton at openprivacy.org (Kevin A. Burton) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] next generation indymedia implementation In-Reply-To: References: Message-ID: <871yddy2kv.fsf@openprivacy.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Jim Carrico writes: > start costing me too much money if traffic continues to grow. It seems like > Mnet may be a better fit for my needs - do you have plans to develop a > 'helper-app' style front end similar to bittorrent? I think that bittorrent could be used in certain situations... but only with a very distributed tracker... I keep saying that bittorrent needs distributed P2P tracking but Bram keeps telling me I am insane :) > Anyway, short version, I want to p2p-ify the music sites I'm hosting, so we > can afford to promote them without going broke. It seems that Indymedia is in > a similar bind - lots of small files. hm... News syndication is really pico files. Anythink less than 100k is VERY small. > Correct me if I am mistaken: according to the demo i saw it looked like > swarmcasting was only happening between peers that were actively downloading a > particular file. Reptile has different caching issues. We cache on a node by node basis. Our biggest problem isn't bandwidth, it is availability. You execute a spanning search to find the URL within a remote cache and get it from your one peer. Generally you start with other Reptile nodes you are subscribe to. Very fast resolution and since Reptile nodes end up clustering around communities, the chance of a cache hit is pretty high... Again... these are small files. Max about 200k... > That said, I think that Indymedia could benefit from BitTorrent right now, for > distributing their "newsreal" videos - Sure... +1 > it would allow them to offer a range of resolutions and hopefully ween them > off of realplayer Yup. Kevin - -- Kevin A. Burton ( burton@apache.org, burton@openprivacy.org, burtonator@acm.org ) Location - San Francisco, CA, Cell - 415.595.9965 Jabber - burtonator@jabber.org, Web - http://relativity.yi.org/ All your base are belong to us. 
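A rough sketch of the subscription-first cache lookup described above (class and method names are invented for illustration; this is not Reptile's actual API):

    class Node:
        def __init__(self, name):
            self.name = name
            self.cache = {}            # url -> content
            self.subscriptions = []    # peers this node reads; they get asked first

        def lookup(self, url, ttl=3, seen=None):
            seen = set() if seen is None else seen
            if self.name in seen:      # don't revisit nodes in this search
                return None
            seen.add(self.name)
            if url in self.cache:
                return self.cache[url]
            if ttl == 0:
                return None
            for peer in self.subscriptions:        # spanning search outward from the community
                hit = peer.lookup(url, ttl - 1, seen)
                if hit is not None:
                    self.cache[url] = hit          # cache on the way back for the next asker
                    return hit
            return None

Because subscriptions tend to cluster around communities, the first hop or two usually answers the query, and every successful lookup leaves another cached copy behind.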
-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6 (GNU/Linux) Comment: Get my public key at: http://relativity.yi.org/pgpkey.txt iD8DBQE8vl9wAwM6xb2dfE0RAs0TAJ9WATf/+EKt1QbzMhIclO0oUnrllQCeO0RD aNxjuoosn6o9Br7aUi5hTRo= =TKUF -----END PGP SIGNATURE----- From burton at openprivacy.org Wed Apr 17 23:34:01 2002 From: burton at openprivacy.org (Kevin A. Burton) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] next generation indymedia implementation In-Reply-To: <20020416150225.B3302@fargonauten.de> References: <20020413180932.GA4432@fargonauten.de> <20020416150225.B3302@fargonauten.de> Message-ID: <87vgapwnsl.fsf@openprivacy.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Ingo Luetkebohle writes: > A stand-alone tool, like BitTorrent but for searching would be a great thing :-) I think Bram should port BitTorrent to Java, integrate it with Gnutella (a deployed and popular P2P network) via LiveWire so that they can do streaming there. It would be a BIG win for both parties. Specifically the LimeWire guys have talked about implementing this themselves and BitTorrent is already a lot farther than them here. One large problem with P2P networks is that nodes don't stay around long. Having swarming distribution is almost a necessity because you can swarm by hash from multiple peers. I think this is a functionality that all future P2P apps should integrate. PS... and yes... Gnutella has performance issues. I realize that. Neurogrid and Alpine look like future alternatives though. Kevin - -- Kevin A. Burton ( burton@apache.org, burton@openprivacy.org, burtonator@acm.org ) Location - San Francisco, CA, Cell - 415.595.9965 Jabber - burtonator@jabber.org, Web - http://relativity.yi.org/ Windows 95 - A 32 bit extension to a 16 bit shell for a 8 bit operating system designed for 4 bit computers by a 2 bit company that can't stand 1 bit of competition. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6 (GNU/Linux) Comment: Get my public key at: http://relativity.yi.org/pgpkey.txt iD8DBQE8vmCKAwM6xb2dfE0RAkkEAJkB1ap2my/Kg4lR/b1INQ9o11CiSACfbsRM ui4rdFgM41Wx18q3W4Ju61w= =8yW6 -----END PGP SIGNATURE----- From clint at thestaticvoid.net Fri Apr 19 01:40:02 2002 From: clint at thestaticvoid.net (Clint Heyer) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] P2P Honours Thesis Message-ID: <5.1.0.14.0.20020419182433.021ed4d0@130.102.87.136> Hi All, I'm implementing a P2P system for my honours thesis. One of the goals of the system is to try to address the social problems in P2P networks, as well as offering a soundly implemented protocol foundation. In an effort to discover what problems people have with existing systems, I've made up a questionnaire[1] that I was hoping I could get you guys to fill out. As 'p2p hackers' you probably have a deeper insight into the problems people face today, so your feedback is especially valued. The whole thing should only take 5 minutes or so, and can be completely anonymous. cheers, .clint [1] Questionnaire linked from: http://thestaticvoid.net/naanou/ ==[ Clint Heyer ]============================================ IRC: 'TheShadow' on irc.uq.edu.au CELL: 04210-11-22-4 -------------------------[ http://thestaticvoid.net ]-------- From burton at openprivacy.org Mon Apr 22 00:14:01 2002 From: burton at openprivacy.org (Kevin A. Burton) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] P2P Hackers Meeting at ETC? Message-ID: <87r8l8jjgw.fsf@openprivacy.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 OK. 
I was thinking that it would be a good idea to have a bunch of us get together at the O'Reilly Emerging Tech Conference. Maybe go out for some dinner and a few beers? There seems to be a lot of people going... Anyway... How about Monday May 13th? Any objections? If not I will draw up some more formal plans. Kevin - -- Kevin A. Burton ( burton@apache.org, burton@openprivacy.org, burtonator@acm.org ) Location - San Francisco, CA, Cell - 415.595.9965 Jabber - burtonator@jabber.org, Web - http://relativity.yi.org/ Whenever there is a conflict between human rights and property rights, human rights must prevail. -- Abraham Lincoln -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6 (GNU/Linux) Comment: Get my public key at: http://relativity.yi.org/pgpkey.txt iD8DBQE8w7evAwM6xb2dfE0RAgkWAJ9/YAxlKaEhK2aLN8CbyILimaRZwwCdE2VC fAoqNGV5EZlWIZpCH3+AlYA= =Hnf/ -----END PGP SIGNATURE----- From wesley at felter.org Mon Apr 22 08:39:01 2002 From: wesley at felter.org (Wes Felter) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] P2P Hackers Meeting at ETC? In-Reply-To: <87r8l8jjgw.fsf@openprivacy.org> References: <87r8l8jjgw.fsf@openprivacy.org> Message-ID: <1019488451.19918.10.camel@arlx031.austin.ibm.com> On Mon, 2002-04-22 at 02:11, Kevin A. Burton wrote: > I was thinking that it would be a good idea to have a bunch of us get together > at the O'Reilly Emerging Tech Conference. > > Maybe go out for some dinner and a few beers? > > There seems to be a lot of people going... > > Anyway... How about Monday May 13th? In my experience, the hallways at these conferences are big P2P hacker meetings, but I'm always up for dinner and beers in addition. Wes Felter - wesley@felter.org - http://felter.org/wesley/ From burton at openprivacy.org Mon Apr 22 14:23:01 2002 From: burton at openprivacy.org (Kevin A. Burton) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] P2P Hackers Meeting at ETC? In-Reply-To: <1019488451.19918.10.camel@arlx031.austin.ibm.com> References: <87r8l8jjgw.fsf@openprivacy.org> <1019488451.19918.10.camel@arlx031.austin.ibm.com> Message-ID: <87adrvjupu.fsf@openprivacy.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Wes Felter writes: > On Mon, 2002-04-22 at 02:11, Kevin A. Burton wrote: > > > I was thinking that it would be a good idea to have a bunch of us get together > > at the O'Reilly Emerging Tech Conference. > > > > Maybe go out for some dinner and a few beers? > > > > There seems to be a lot of people going... > > > > Anyway... How about Monday May 13th? > > In my experience, the hallways at these conferences are big P2P hacker > meetings, yeah... but there is rarely any beer in the hallways :) Kevin - -- Kevin A. Burton ( burton@apache.org, burton@openprivacy.org, burtonator@acm.org ) Location - San Francisco, CA, Cell - 415.595.9965 Jabber - burtonator@jabber.org, Web - http://relativity.yi.org/ The gears of the digital revolution are turning faster than the wheels of justice. -- Andrew Pollack -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.0.6 (GNU/Linux) Comment: Get my public key at: http://relativity.yi.org/pgpkey.txt iD8DBQE8xH69AwM6xb2dfE0RAqtPAKChf6z7qUu+ZgHr/kr97XL7sOZ7OwCbBT+G Oslc6MgAQuXoMLVLssLgVIo= =9Sdz -----END PGP SIGNATURE----- From bram at gawth.com Mon Apr 22 15:05:01 2002 From: bram at gawth.com (Bram Cohen) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] P2P Hackers Meeting at ETC? In-Reply-To: <87r8l8jjgw.fsf@openprivacy.org> Message-ID: Kevin A. 
Burton wrote: > I was thinking that it would be a good idea to have a bunch of us get together > at the O'Reilly Emerging Tech Conference. > > Maybe go out for some dinner and a few beers? I'll be around ... anybody driving in from the city? I could use a ride. -Bram Cohen "Markets can remain irrational longer than you can remain solvent" -- John Maynard Keynes From kevin at atkinson.dhs.org Wed Apr 24 08:11:01 2002 From: kevin at atkinson.dhs.org (Kevin Atkinson) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] DistribNet Message-ID: Hi. I just wanted to let you know about a new network I am working on, DistribNet. I haven't worked out all of the details yet but here is a overview. Some code is available at the projects web site, but, it really doesn't do anything useful. As always feedback more than welcome. But please try to keep it positive. Most of the feedback I have received so far has been of the "that's imposable, its never going to work" nature. DistribNet A global peer-to-peer internet file system in which anyone can tap into or add content to. Kevin Atkinson (kevin at atkinson dhs org) Last Modified: 2002-04-24 Project Page: http://distribnet.sourceforge.net/ Mailing list: http://lists.sourceforge.net/lists/listinfo/distribnet-devel Meta Goals: *) To allow anyone, possibly anonymously, to publish web sites with out having to pay for the bandwidth for a commercial provider or having to put up with the increasingly ad ridden free web sites. The only thing the author of the web site should have to worry about is the contents of the web site itself. *) Bring back the sense of community on the Internet that was once present before the internet become so commercialized. *) Serve as an efficient replacement for current file sharing networks such as Morpheus and Gnutella. *) To have the network stable and working before some Commercial company designs a propitiatory network similar to what I envision that can only be accesses via freely available but not FSF approved free license. (Possibly Impossible) Goals: *) *Really* fast lookup to find data. The worst case should be O(log(n)) and the average case should be O(1) or very close to it. *) Actually retrieving the data should also be really fast. Popular data should be sitting on the same subnet. On average it should be as fast or faster than a typical web site (such as slashdot, google, etc.). It should make effective use of the topology of the internet to to minimize network load and maximize performance. *) General searching based on keywords will be build into the protocol from the beginning. The searching faculty will be designed in such a way to make message boards trivial to implement. *) Ability to update data while keeping old revisions around so data never disappears until it is truly unwanted. No one person will have the power to delete data once it spreads throughout the network. *) Will try very hard to keep all but the most unpopular content from falling off the network. Basically before deleting a locally unpopular key it will first check if other nodes are storing the key and how popular they find the key. If not enough nodes are storing the key and there is any indication that the data may be useful at a latter date it will not delete it unless it absolutely has to. And if it does delete it it will first try uploading it to other nodes with more disk space available. *) Ability to store data indefinitely if someone is willing to provide the space for it (and being able to find that data in log(n) time). 
*) Extremely robust so that the only way to kill the network is to disable almost all of the nodes. The network should still function even if say 90% of it goes down. *) Extremely effect cpu-wise so that a fully functional node can run in the background and only take 1-2% of the CPU. Applications: I would like the protocol to be able to effectually support (ie with out any ugly hacks that many of the application for Freenet use) 1) Efficient Web like sites (with HTTP gateway to make browsing easy) 2) Efficient sharing of files large and small. 3) Public message forms (with IMAP gateway to make reading easy) 4) Private Email (with the message encrypted so only the intended recipient can read it, again with IMAP gateway) 5) Streaming Media 6) Online Chat (with possible IRC or similar gateway) Anti-Goals: (Also see philosophy for why I don't find these issues that important) *) Complete anonymity for the browser. I want to focus first on performance than on anonymity. In fact I plan to use extensive logging in the development versions so that I track network performance and quickly cache performance bugs. As DistribNet stabilizes anonymity will be improved at the expense of logging. The initial version will only use cryptology when absolutely necessary (for example key signing). Most communications will be done in the clear. After DistribNet stabilizes encryption will slowly be added. When I add encryption I will carefully monitor the effect it has on CPU load and if proves to be expensive I will allow it to be optional. Please note that I still wish to allow for anonymous posting of content. However, without encryption, it probably won't be as anonymous as Freenet or your GNUNet. *) Data in the cache will be stored in a straight forward manner. No attempt will be made to prevent the node operate from knowing what is in his own cache. Also, by default, very little attempt will be made to prevent others from knowing what is a particular node cache. Philosophy: *) I have nothing against complete anonymity, it is just that I am afraid that both Freenet and GnuNet or more designed around the anonymity and privacy issues then they are around the performance and scalability issues. *) For most type of things the level of anonymity that Freenet and GnuNet offers is simply not needed. Even for copyrighted and censored material there is, in general, little risk in actually viewing the information because it is simply impractical to go after every single person who access forbidden information. Most all of the time the lawsuits and such are after the original distributors of the information and not the viewers. There for DistribNet will aim to provide anonymity for distributing information, but not for actually viewing it. However, since there *is* some information where even viewing it is extremely risky, DistribNet will eventually be able to provide the same level of anonymity that Freenet or GnuNet offers, but it will be completely optional. *) I also believe that knowing what is in one owns datastore and being able to block certain type of material from one owns node is not that big of a deal. Unless almost everyone blocks a certain type of information the availability of blocked information will not be harmed. This is because even if 90% of the nodes block say, kiddie porn, the information will still be available on the other 10% of the nodes which, if the network is designed correctly, should be more than enough for anyone to get at blocked information. 
Furthermore, since the source code for DistribNet will be protected under the GPL or similar license, it will be completely impractical for other to force a significant number of nodes to block information. Due to the dynamic nature of the cache I find it legally difficult to hold anyone responsible for the contents of there cache as it is constantly changing. DistribNet Key Types: There will essentially be two types of keys. Map keys and data keys. Map keys will be uniquely identified in a similar manner as freenet SSK keys. Data keys will be identified in a similar manner as freenet's CHK keys. Map keys will contain the following information: * Short Description * Public Namespace Key * Timestamped Index pointers * Timestamped Data pointers _At any given point in time_ each map key will only be associated with one index pointer and one data pointer. Map keys can be updated by appending a new index or data pointer to the existing list. By default, when a map key is queried only the most recent pointer will be returned. However, older pointers are still there and may be retrieved by specifying a specific date. Thus, map keys may be updated, but information is never lost or overwritten. Data keys will be very much like freenet's CHK keys except that they will not be encrypted. Since they are not encrypted delta compression may be used to save space. There will not be anything like freenet's KSK keys as those proved to be completely insure. Instead Map keys may be requested with out a signature. If there is more than one map key by that name than a list of keys is presented sorted by popularity. To make such a list meaning full every public key in freenet will have a descriptive string associated with it. Data Key Details: Data keys will be stored in maximum size blocks of just under 32K. If an object is larger than 32K it will be broken down into smaller size chunks and an index block, also with a maximum size of about 32K, will be created so that the final object can be reassembled. If an object is too big to be indexed by one index block the index blocks themselves will be split up. This can be done as many times as necessary therefore providing the ability to store files of arbitrary size. DistribNet will use 64 bit integers to store the file size therefore supporting file sizes up to 2^64-1 bytes. Data keys will be retrieved by blocks rather than all at once. When a client first requests a data key that is too large to fit in a block an index block will be returned. It is then up the client to figure out how to retrieve the individual blocks. Please note that even though that blocks are retrived individually they are not treated as trully independent keys by the nodes. For example a node can be asked which blocks it has based on a given index block rather than having to ask for each and every data block. Also, nodes maintain persistent connections so that blocks can be retrieved one after another without having to re-establish to connection each time. Data and index blocks will be indexed based on the SHA-1 hash of there contents. The exact numbers of as follows: Data Block Size: 2^15 - 128 = 32640; Index block header size: 40 Maximum number of keys per index block: 1630 Key Size: 20 Maximum object sizes: direct => 2^14.99 bytes , about 31.9 kilo 1 level => 2^25.66 bytes , about 50.7 megs 2 levels => 2^36.34 bytes , about 80.8 gigs 3 levels => 2^47.01 bytes , about 129 tera 4 levels => 2^57.68 bytes 5 levels => 2^68.35 bytes (but limited to 2^64 - 1) Why 32640? 
A block size of just under 32K was chosen because I wanted a size which will allow most text files to fix in one block, most other files with one level of indexing, and just about anything anybody would think of transferring on a public network in two levels and 32K worked out perfectly. Also, files around 32K are rather rare therefor preventing a lot of of unnecessary splitting of files that don't quite make it. 32640 rather than exactly 32K was chosen to allow some additional information to be transfered with the block without pushing the total size over 32K. 32640 can also be stored nicely in a 16 bit integer without having to worry if its signed or unsigned. Storage: Blocks are currently stored in one of three ways 1) block smaller than a fixed threshold (currently 1k) are stored using Berkeley DB (version 3.3 or better). 2) blocks larger than the threshold are stored as files. The primary reason for doing this is to avoid limiting the size of data store by the maximum size of a file which is often 2 or 4 gb on most 32-bit systems. 3) blocks are not stored at all instead they are linked to an external file out side of the data store much like a symbolic link links to file out side of the current directory. However since blocks often only represent part of the file the offset is also stored as part of the link. These links are stored in the same database that small blocks are stored in. Since the external file can easily be changed by the user, the SHA-1 hashes will be recomputed when the file modification data changes. If the SHA-1 hash of the block differs all the links to the file will be thrown out and the file will be relinked. (This part is not implemented yet). Most of the code for the data keys can be found in data_key.cpp Lookup Details: Lookup will probably be done by using the chord protocol. See http://www.pdos.lcs.mit.edu/chord/. Language: DistribNet is/will be written in fairly modern C++. It will use several external libraries however it will not use any C++ specific libraries. In particular I have no plan to use any sort of Abstraction library for POSIX functionally. Instead thin wrapper classes will be used which I have complete control over and will serve mainly to make the process of using POSIX functions less tedious rather than abstract away the details of using them. -- http://kevin.atkinson.dhs.org From vladimir at lecs.cs.ucla.edu Wed Apr 24 15:00:02 2002 From: vladimir at lecs.cs.ucla.edu (Vladimir Bychkovskiy) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] DistribNet References: Message-ID: <3CC6FFDF.DE9E9A62@lecs.cs.ucla.edu> Kevin, How is this different from SFS? http://www.fs.net Vlad. Kevin Atkinson wrote: > Hi. I just wanted to let you know about a new network I am working on, > DistribNet. I haven't worked out all of the details yet but here is a > overview. Some code is available at the projects web site, but, it really > doesn't do anything useful. > > As always feedback more than welcome. But please try to keep it positive. > Most of the feedback I have received so far has been of the "that's > imposable, its never going to work" nature. > > DistribNet > > A global peer-to-peer internet file system in which anyone can tap into > or add content to. 
> [...]
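For reference, the index arithmetic in the DistribNet overview above can be checked directly (a quick illustrative calculation assuming the 32640-byte blocks, 40-byte index header, and 20-byte SHA-1 keys it specifies):

    import math

    BLOCK = 2**15 - 128                  # 32640-byte data block
    KEYS_PER_INDEX = (BLOCK - 40) // 20  # (32640 - 40) / 20 = 1630 keys per index block

    size = BLOCK
    for levels in range(4):
        print("%d level(s) of indexing: %d bytes (about 2^%.2f)"
              % (levels, size, math.log(size, 2)))
        size *= KEYS_PER_INDEX

This reproduces the figures in the announcement: roughly 31.9K direct, 50.7 megs with one level of indexing, 80.8 gigs with two, and about 129 terabytes with three.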
From kevin at atkinson.dhs.org Wed Apr 24 17:27:01 2002 From: kevin at atkinson.dhs.org (Kevin Atkinson) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] DistribNet In-Reply-To: <3CC6FFDF.DE9E9A62@lecs.cs.ucla.edu> Message-ID: On Wed, 24 Apr 2002, Vladimir Bychkovskiy wrote: > Kevin, > > How is this different from SFS? > http://www.fs.net SFS gets files from particular hosts. In my network each file will have a unique id and the host will not matter. The content will be distributed throughout the network instead of being stored on a particular host. --- http://kevin.atkinson.dhs.org From dhelder at umich.edu Thu Apr 25 08:19:01 2002 From: dhelder at umich.edu (David Helder) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] Announce: Emcast 0.3.0 - generic multicast toolkit Message-ID: Emcast is a multicast toolkit for distributed/peer-to-peer applications that require multicast communication. Emcast supports IP Multicast, Banana Tree Protocol (BTP), Internet Relay Chat (IRC), and STAR (centralized TCP). Emcast 0.3.0 includes API improvements, BTP improvements, a new "star" EM protocol, and several bug and compilation fixes. BTP 0.3.0 is not compatible with BTP 0.2.0.
> Storage:
>
> Blocks are currently stored in one of three ways:
>
> 1) Blocks smaller than a fixed threshold (currently 1K) are stored using Berkeley DB (version 3.3 or better).
>
> 2) Blocks larger than the threshold are stored as files. The primary reason for doing this is to avoid limiting the size of the data store by the maximum size of a file, which is often 2 or 4 GB on most 32-bit systems.
>
> 3) Blocks are not stored at all; instead they are linked to an external file outside of the data store, much like a symbolic link links to a file outside of the current directory. However, since blocks often only represent part of the file, the offset is also stored as part of the link. These links are stored in the same database that small blocks are stored in. Since the external file can easily be changed by the user, the SHA-1 hashes will be recomputed when the file modification date changes. If the SHA-1 hash of the block differs, all the links to the file will be thrown out and the file will be relinked. (This part is not implemented yet.)
>
> Most of the code for the data keys can be found in data_key.cpp.
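A sketch of how the three storage paths might be chosen, assuming the 1K threshold above. The names are hypothetical and the actual Berkeley DB and file I/O calls are deliberately left out rather than guessed at; this only shows the size-based dispatch.

    // Sketch of the size-based storage dispatch described above.  The actual
    // Berkeley DB and filesystem calls are omitted; names here are hypothetical,
    // not DistribNet's.
    #include <cstddef>
    #include <cstdint>
    #include <string>

    constexpr std::size_t kSmallBlockThreshold = 1024;  // "currently 1k"

    enum class StorageKind { Database, File, ExternalLink };

    struct BlockRef {
        StorageKind   kind;
        std::string   location;  // DB key, file path, or linked external file
        std::uint64_t offset;    // only meaningful for ExternalLink
    };

    // Decide where a block should live.  `external_path` is non-empty when the
    // block merely refers to a range of a file outside the data store.
    BlockRef place_block(const std::string& sha1, std::size_t size,
                         const std::string& external_path, std::uint64_t offset) {
        if (!external_path.empty())
            // 3) Link to an external file; the offset is stored with the link, and
            //    the hash must be re-checked if the file's modification date changes.
            return {StorageKind::ExternalLink, external_path, offset};
        if (size < kSmallBlockThreshold)
            return {StorageKind::Database, sha1, 0};       // 1) small -> Berkeley DB
        return {StorageKind::File, "blocks/" + sha1, 0};   // 2) large -> its own file
    }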
> Lookup Details:
>
> Lookup will probably be done using the Chord protocol. See http://www.pdos.lcs.mit.edu/chord/.
>
> Language:
>
> DistribNet is/will be written in fairly modern C++. It will use several external libraries; however, it will not use any C++-specific libraries. In particular, I have no plan to use any sort of abstraction library for POSIX functionality. Instead, thin wrapper classes will be used which I have complete control over and which will serve mainly to make the process of using POSIX functions less tedious rather than to abstract away the details of using them.
>
> --
> http://kevin.atkinson.dhs.org
>
> _______________________________________________
> p2p-hackers mailing list
> p2p-hackers@zgp.org
> http://zgp.org/mailman/listinfo/p2p-hackers

From kevin at atkinson.dhs.org Wed Apr 24 17:27:01 2002 From: kevin at atkinson.dhs.org (Kevin Atkinson) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] DistribNet In-Reply-To: <3CC6FFDF.DE9E9A62@lecs.cs.ucla.edu> Message-ID:

On Wed, 24 Apr 2002, Vladimir Bychkovskiy wrote:

> Kevin,
>
> How is this different from SFS?
> http://www.fs.net

SFS gets files from particular hosts. In my network each file will have a unique ID and the host will not matter. The content will be distributed throughout the network instead of being stored on a particular host.

---
http://kevin.atkinson.dhs.org

From dhelder at umich.edu Thu Apr 25 08:19:01 2002 From: dhelder at umich.edu (David Helder) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] Announce: Emcast 0.3.0 - generic multicast toolkit Message-ID:

Emcast is a multicast toolkit for distributed/peer-to-peer applications that require multicast communication. Emcast supports IP Multicast, Banana Tree Protocol (BTP), Internet Relay Chat (IRC), and STAR (centralized TCP). Emcast 0.3.0 includes API improvements, BTP improvements, a new "star" EM protocol, and several bug and compilation fixes. BTP 0.3.0 is not compatible with BTP 0.2.0.

Emcast lives at: http://www.junglemonkey.net/emcast

0.3.0
-----
* New emcast_new() function. Now options can be set before joining the group.
* Miscellaneous BTP improvements. Incompatible with 0.2.0.
* BTP can add shortcuts to reduce group latency. (Experimental)
* New star EM protocol. One node is the center, the rest connect to it.
* Fixed buffer size issues
* Fixed Sun crashes
* Many small configuration and compilation fixes

--
David Helder - dhelder@umich.edu
Jungle Monkey:
Paper CD Case:

From philh at comuno.freeserve.co.uk Thu Apr 25 11:39:01 2002 From: philh at comuno.freeserve.co.uk (phil hunt) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] DistribNet In-Reply-To: References: Message-ID: <02042519353600.08161@comuno>

On Wednesday 24 April 2002 4:10 pm, Kevin Atkinson wrote:
>
> DistribNet
>
> A global peer-to-peer internet file system which anyone can tap into or add content to.
>
> Kevin Atkinson (kevin at atkinson dhs org)
> Last Modified: 2002-04-24
> Project Page: http://distribnet.sourceforge.net/
> Mailing list: http://lists.sourceforge.net/lists/listinfo/distribnet-devel
>
> Meta Goals:
>
> *) To allow anyone, possibly anonymously, to publish web sites without having to pay a commercial provider for the bandwidth or having to put up with the increasingly ad-ridden free web sites. The only thing the author of the web site should have to worry about is the contents of the web site itself.

Is this static web sites? Or dynamic ones with PHP, CGI, etc? (I'd guess the former, as the latter would be considerably harder to do.)

As an aside, in the UK most ISPs offer 10 megs or so of static web space with any account, so this wouldn't be particularly useful.

> *) Bring back the sense of community on the Internet that was once present before the internet became so commercialized.

Hmmm. I'd guess that purpose is being served by mailing lists, Usenet, IRC, IM, etc. How does your system hope to do it better?

> *) To have the network stable and working before some commercial company designs a proprietary network, similar to what I envision, that can only be accessed via a freely available but not FSF-approved license.

Are you worried that a company might produce an incompatible version of your idea?

> (Possibly Impossible) Goals:
>
> *) *Really* fast lookup to find data. The worst case should be O(log(n)) and the average case should be O(1) or very close to it.

Who cares whether it is O(log n) or O(1)? If n = 2e9 (the number of pages Google reports), then log2(n) is only 31, and an O(log n) process where each individual task is 40 times faster than the O(1) process will be quicker.

> *) Actually retrieving the data should also be really fast. Popular data should be sitting on the same subnet. On average it should be as fast as or faster than a typical web site (such as slashdot, google, etc.). It should make effective use of the topology of the internet to minimize network load and maximize performance.

And popular stuff should adaptively get spread around for speed; this counteracts the Slashdot Effect.

> *) General searching based on keywords will be built into the protocol from the beginning.

Good idea. The best database in the world is useless if you can't get at the data you want.

As well as a keyword search, have you thought of some sort of category search, using a system of categories like e.g. Freshmeat and Sourceforge use? (As an example; obviously, when someone is searching for stuff other than software, such as music, they would use a different set of categories.)

> Applications:
>
> I would like the protocol to be able to effectively support (i.e. without any ugly hacks like many of the applications for Freenet use):
>
> 1) Efficient web-like sites (with an HTTP gateway to make browsing easy)
> 2) Efficient sharing of files large and small.
> 3) Public message forums (with an IMAP gateway to make reading easy)
> 4) Private email (with the message encrypted so only the intended recipient can read it, again with an IMAP gateway)

What's wrong with the already existing system for email?

--
<"><"><"> Philip Hunt <"><"><">
"I would guess that he really believes whatever is politically advantageous for him to believe." -- Alison Brooks, referring to Michael Portillo, on soc.history.what-if

From kevin at atkinson.dhs.org Fri Apr 26 07:21:01 2002 From: kevin at atkinson.dhs.org (Kevin Atkinson) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] DistribNet In-Reply-To: <02042519353600.08161@comuno> Message-ID:

On Thu, 25 Apr 2002, phil hunt wrote:
> On Wednesday 24 April 2002 4:10 pm, Kevin Atkinson wrote:
> >
> > DistribNet
> >
> > A global peer-to-peer internet file system which anyone can tap into or add content to.
> >
> > Kevin Atkinson (kevin at atkinson dhs org)
> > Last Modified: 2002-04-24
> > Project Page: http://distribnet.sourceforge.net/
> > Mailing list: http://lists.sourceforge.net/lists/listinfo/distribnet-devel
> >
> > (Possibly Impossible) Goals:
> >
> > *) *Really* fast lookup to find data. The worst case should be O(log(n)) and the average case should be O(1) or very close to it.
>
> Who cares whether it is O(log n) or O(1)? If n = 2e9 (the number of pages Google reports), then log2(n) is only 31, and an O(log n) process where each individual task is 40 times faster than the O(1) process will be quicker.

n = number of nodes in the network, not the number of documents. Also, who said it was log base 2? It may be better than that by a significant constant factor.
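To put rough numbers on this exchange (the node count and the "40 times faster" figure are just the illustrative assumptions used above, not measurements):

    // Illustrative arithmetic for the O(log n) vs O(1) discussion above.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double n_docs  = 2e9;   // phil's figure: pages Google reports
        const double n_nodes = 1e5;   // a hypothetical number of nodes (Kevin's n)

        std::printf("log2(2e9)  = %.1f lookup steps\n", std::log2(n_docs));
        std::printf("log2(1e5)  = %.1f lookup steps\n", std::log2(n_nodes));
        // A larger routing base shrinks the constant, e.g. base 16:
        std::printf("log16(1e5) = %.1f lookup steps\n",
                    std::log2(n_nodes) / std::log2(16.0));

        // phil's comparison: ~31 cheap steps vs one step that costs 40x as much.
        std::printf("31 steps at 1/40 the cost = %.2f of the O(1) cost\n", 31.0 / 40.0);
        return 0;
    }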
> > *) Actually retrieving the data should also be really fast. Popular data should be sitting on the same subnet. On average it should be as fast as or faster than a typical web site (such as slashdot, google, etc.). It should make effective use of the topology of the internet to minimize network load and maximize performance.
>
> And popular stuff should adaptively get spread around for speed; this counteracts the Slashdot Effect.

Yes. That is the general idea.

> > *) General searching based on keywords will be built into the protocol from the beginning.
>
> Good idea. The best database in the world is useless if you can't get at the data you want.
>
> As well as a keyword search, have you thought of some sort of category search, using a system of categories like e.g. Freshmeat and Sourceforge use? (As an example; obviously, when someone is searching for stuff other than software, such as music, they would use a different set of categories.)

Everything will be in the form of keywords. Category keywords can be treated as one keyword. For example, "Artist: Some Music Artist".

---
http://kevin.atkinson.dhs.org

From greg at electricrain.com Mon Apr 29 15:14:02 2002 From: greg at electricrain.com (Gregory P. Smith) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] Fwd: Stanford Distributed Systems Seminar, Wed. 05/01, Yatin Chawathe Message-ID: <20020429221315.GA1294@zot.electricrain.com>

----- Forwarded message from Laurence Melloul -----

Date: Mon, 29 Apr 2002 12:25:32 -0700 (PDT)
From: Laurence Melloul
Reply-To: Laurence Melloul
To: cs548-all@lists.Stanford.EDU
Subject: Stanford Distributed Systems Seminar, Wed. 05/01, Yatin Chawathe

Stanford Distributed Systems Research Seminar

Title: Can heterogeneity make Gnutella scalable?
Speaker: Yatin Chawathe, AT&T Labs, Menlo Park, CA
When: 12:45PM, Wednesday, May 1st, 2002
Where: McCullough 115, Stanford University
URL: http://cs548.stanford.edu/schedule.shtml
Map: http://www.stanford.edu/home/map/search_map.cgi?keyword=&ACADEMIC=McCullough+Electrical+Engineering

Abstract:

Many researchers have proposed designs for "highly structured" peer-to-peer systems, for example, CAN, Chord, and Tapestry. The underlying philosophy behind these systems is that random searches over "unstructured" P2P networks such as Gnutella are inherently unscalable. Instead, the structured systems tightly control the overlay topology and the layout of files (or pointers to files) across the topology. Unfortunately, doing so makes these systems less resilient to transient user populations, precisely because it is difficult to maintain the structured topology in the face of constantly joining and leaving nodes. Moreover, no one has yet demonstrated that these systems, while well-suited for exact-match queries, can support partial-match queries such as keyword searching efficiently.

On the other hand, unstructured P2P systems like Gnutella can easily answer such queries, and are better suited for handling large transient populations. In this work, we revisit the question of whether Gnutella-like unstructured P2P systems can be made more scalable. Our approach is based on recent studies that show that Internet-wide P2P networks demonstrate large degrees of heterogeneity. We leverage this heterogeneity to adapt the topology of the overlay network in a dynamic fashion so that queries across the network are automatically funnelled toward nodes that have the capacity to handle them. In addition, we introduce active flow control to prevent overloading nodes or links between nodes, and improved search techniques to better utilize network resources than the simplistic flooding techniques currently used by Gnutella.

This is work in progress done jointly with Sylvia Ratnasamy (UC Berkeley), Scott Shenker (ICIR), and Lee Breslau (AT&T).

Bio:

Dr. Yatin Chawathe is a researcher at AT&T Labs--Research in Menlo Park, CA. His research interests are in the area of large-scale Internet systems. His current research includes peer-to-peer infrastructures and Internet broadcasting architectures. Prior to joining AT&T, Yatin was a graduate student at the University of California at Berkeley. He received a Master's and a PhD in Computer Science from the University of California at Berkeley in 1998 and 2000, respectively. His PhD thesis developed the Scattercast architecture to support Internet broadcast distribution.

This message was sent via the Stanford Computer Science Department colloquium mailing list. To be added to this list, send an arbitrary message to colloq-subscribe@cs.stanford.edu. To be removed from this list, send a message to colloq-unsubscribe@cs.stanford.edu. For more information, send an arbitrary message to colloq-request@cs.stanford.edu. For directions to Stanford, check out http://www-forum.stanford.edu

----- End forwarded message -----
From greg at electricrain.com Mon Apr 29 15:38:02 2002 From: greg at electricrain.com (Gregory P. Smith) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] swarmcast performance stats? In-Reply-To: References: Message-ID: <20020429223754.GB1294@zot.electricrain.com>

> Anyway, short version, I want to p2p-ify the music sites I'm hosting, so we can afford to promote them without going broke. It seems that Indymedia is in a similar bind - lots of small files. Correct me if I am mistaken: according to the demo I saw, it looked like swarmcasting was only happening between peers that were actively downloading a particular file. Is this still the case? I realize that this provides an elegant solution to the resource discovery problem - the server always knows who is downloading a particular file at any particular time, and hence who can carry some of the load - and determining the persistence of the file after the download is complete introduces numerous headaches. Any plans/implementations in this regard?

Has anyone got real data to show how much bandwidth is actually saved on average for unpopular, popular, and very popular content using swarmcast-type methods? I recall seeing a 50% number somewhere but can't remember where or why (so it's undoubtedly wrong).

> That said, I think that Indymedia could benefit from BitTorrent right now, for distributing their "newsreal" videos - it would allow them to offer a range of resolutions and hopefully wean them off of RealPlayer.

That's an ideal use. Put the "dinky" low-bandwidth ones up on your website, and strongly encourage people to install the p2p app in order to watch the good quality ones.

-g

From bram at gawth.com Mon Apr 29 19:40:02 2002 From: bram at gawth.com (Bram Cohen) Date: Sat Dec 9 22:11:45 2006 Subject: [p2p-hackers] swarmcast performance stats? In-Reply-To: <20020429223754.GB1294@zot.electricrain.com> Message-ID:

Gregory P. Smith wrote:

> Has anyone got real data to show how much bandwidth is actually saved on average for unpopular, popular, and very popular content using swarmcast-type methods?

A recent file distribution using BitTorrent apparently saved somewhere between 2 and 3 orders of magnitude (99% - 99.9%).

> I recall seeing a 50% number somewhere but can't remember where or why (so it's undoubtedly wrong).

I think the Blue Falcon page says 50%, with no explanation. It's apparently a claim motivated by marketing rather than any real technical basis, since their model is to charge for the amount of bandwidth 'saved'.

-Bram Cohen

"Markets can remain irrational longer than you can remain solvent" -- John Maynard Keynes