00:08:27 topic is: Bitcoin research, hardfork wishlist, ideas for the future - see also: https://en.bitcoin.it/wiki/Hardfork_Wishlist https://en.bitcoin.it/wiki/User:Gmaxwell/alt_ideas
00:08:27 Users on #bitcoin-wizards: andytoshi-logbot K1773R andytoshi nsh Mike_B skinnkavaj Luke-Jr bizzle Emcy go1111111 nOgAnOo grv joecool Fistful_of_LTC jtimon spinza edulix amiller MixX typex DougieBot5000 Lifeofcray Mikalv_ CodeShark kill\switch epscy azariah4 gwillen hnz realazthat home_jg Guest8739 MoALTz_ maaku trn TD fagmuffinz jrmithdobbs UukGoblin deepc0re_ iddo Xarian Muis tucenaber sipa nanotube BlueMatt Graet kinlo firepacket michagogo|cloud gavinandresen
00:08:27 Users on #bitcoin-wizards: lianj Ryan52 midnightmagic HM2 wumpus gmaxwell petertodd harrow cfields hno warren forrestv phantomcircuit EasyAt pigeons
00:08:49 firepacket: sorry, that doesn't even make sense
00:08:57 firepacket: that would require a human to validate...
00:09:08 firepacket: the closest thing to a turing test used today are captchas, and i think machines are better than humans at that anyway ;)
00:09:10 yes it would
00:09:25 machines are better at solving captchas?
00:09:55 why would it be a problem? it would employ humans for a paycheck
00:09:59 it would prevent consolidation
00:10:09 what is 'it'?
00:10:18 who creates the problems?
00:10:27 also limiting miners to humans with way too much free time would definitely cause consolidation
00:10:33 a computer would have to generate the problem
00:10:44 i'm not sure how
00:10:50 which computer?
00:11:13 not sure
00:11:26 it could be generated based on information from the last block
00:11:38 please think about those things more first :)
00:12:20 it could also be generated using the chosen nonce
00:12:47 i don't think you understand the problem
00:13:05 the computer that generates the problem can trivially solve it
00:13:14 as it knows the answer
00:13:43 and there is no way to validate that a human-generated solution is right without knowing the real answer already
00:14:12 maybe validating other people's tests could be the test itself
00:14:44 come back when you have actual ways to deal with this :)
00:14:50 firepacket: Do you know what the properties that a PoW system needs to have are?
00:14:56 (I suspect not)
00:15:04 not "maybe we could do something *handwaving* X"
00:16:38 i was just wondering if anyone had ever thought of it
00:16:50 i mean captcha still seems to work
00:16:50 ...no, because it can't be done
00:17:12 alright.
00:17:14 Yes, captchas are useful for many things
00:17:21 PoW isn't one of those things.
00:18:04 captchas are useful for what exactly?
00:18:26 ensuring a human is present
00:18:28 they seem to keep humans out better than bots
00:18:44 well, in reality it just spawned a captcha-solving industry
00:18:49 that the bots use
00:18:57 but it still limits you to the number of people on earth at any given time
00:19:57 I have to try like 4 or 5 times to solve captchas
00:20:18 Well, some captchas are better than others
00:20:26 google's are the worst
00:20:31 firepacket: the point is i don't think you understand the purpose of a proof-of-work
00:20:38 I'm usually able to get recaptchas first try
00:20:43 the intent is not to determine if there is a live human on the other side
00:20:56 (it helps that recaptcha is also somewhat flexible in certain ways)
00:21:07 maaku: what don't i understand?
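The property the channel is insisting on above — a puzzle nobody knows the answer to in advance, expensive to solve but cheap for anyone to verify — is exactly what hashcash-style proof-of-work gets from a hash function. A minimal sketch (difficulty and message are illustrative, not Bitcoin's actual header format):

```python
import hashlib

def mine(data: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce; expected work grows as 2**difficulty_bits.
    No one, including the puzzle-setter, knows a valid nonce in advance."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

def verify(data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Anyone can check a claimed solution with a single hash."""
    h = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < 2 ** (256 - difficulty_bits)

nonce = mine(b"block header", 16)   # ~65k hashes on average
assert verify(b"block header", nonce, 16)
```

A human-judged puzzle has neither property: whoever generates it knows the answer, and checking a stranger's answer requires already knowing it.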
00:21:17 the intent is not to determine if there is a live human on the other side
00:21:37 maaku: I know that is not the primary intent, but it could be helpful if the goal is to resist asics
00:21:53 1) the goal is not to resist asics
00:21:58 2) it's not the intent *at all*
00:21:58 firepacket: that's a bad goal
00:22:03 firepacket: why is asic resistance a goal?
00:22:08 or consolidation rather
00:22:13 i am genuinely curious as to the mindset behind this..
00:22:39 if all *humans* in the world were able to help verify bitcoin transactions and get paid to do so from anywhere in the world
00:22:46 how would that not help promote diversity?
00:23:21 it also gives us clear sides when the machines attack
00:23:46 humans very often fail to follow even basic rules.
00:24:23 i didn't say we should trust them
01:06:19 has anyone thought about forking ripple and turning it into a decentralized forex exchange?
01:06:24 like a really decentralized one
01:06:55 i guess that'd be really hard to do though
01:07:16 given the best way i know to decentralize ripple is to get away from consensus and go back to pow
01:07:19 and then trades take 60m to confirm
01:08:19 Mike_B: you can make pow much faster than bitcoin if you don't care about decentralization of the network... though never as fast as a non-anonymous system.
01:08:46 Mike_B: e.g. you control the difficulty to achieve a constant orphan rate, instead of a constant block time.
01:09:10 it's still slower because you need settling time, because you don't know if there is a hidden majority.
01:09:21 whereas in a non-anonymous network the majority can never be hidden.
01:11:21 right
01:11:30 i was thinking about how a decentralized exchange would work
01:11:37 to stop the government from going after gox and bitstamp or whatever
01:12:00 and it seems to me that this problem is just as hard as making a cryptocurrency where transactions don't take 10m to hit the blockchain
01:12:14 but you can't really do that in any case.
01:12:14 unless you want trades to take place quickly
01:12:20 USD is not a cryptocurrency.
01:12:33 yeah of course
01:12:44 Creating a USD crypto currency is almost certainly unlawful, and anyone issuing USD crypto notes in the past has been shut down.
01:12:55 The regulatory point isn't the "exchange", it's the handling of usd.
01:13:01 i was thinking more about either trading with other cryptocurrencies, or with USD "ious" a la ripple or what have you
01:16:28 gmaxwell is now known as Guest16297
01:17:18 Guest16297 is now known as gmaxwell
01:22:00 Mike_B: Ripple Labs' Ripple is not a very good ripple design (sorry for the redundancy)
01:22:30 Ryan's two-phase commit was actually scalable
01:22:47 I extended it to support atomic transactions with bitcoin/freicoin
01:23:41 and then we merged 2pc ripple with what I previously called "ripplecoin" (basically a ripple implementation on pow)
01:24:20 but they have several big design flaws even if you change their consensus for SHA256
01:24:41 like what?
01:24:56 I just made a quick rundown for pigeons this morning...wait
01:25:40 they should have never replaced inputs/outputs with accounts
01:25:52 trust-lines don't have to be in the core, they can be simulated with regular market orders
01:25:52 and orders don't need to be in the ledger
01:26:32 so why does the current setup cause problems
01:26:39 like why is it a "flaw"
01:26:51 the only problems i know about it come from how consensus claims to be decentralized but it isn't
01:27:12 we had a good discussion a while ago about how various network topologies can lead to dishonest nodes winning even if the majority of the network is honest
01:27:20 having all the open orders in the blockchain requires more validation and bandwidth
01:28:07 yeah, I'm talking about the inner structures, assuming you get their code and replace the consensus with pow
01:28:12 jtimon: is mostly talking about layers of the system I know nothing about. :)
01:28:23 ah ok
01:28:40 instead of inputs and outputs like bitcoin
01:28:52 an address is actually an account
01:29:15 and all transactions from a given account must be sequenced with an ugly seq field
01:29:38 wait, so addresses aren't just hashed public keys anymore?
01:29:54 yes, what is missing is outputs
01:30:18 they have accounts in a ledger
01:30:39 ok i'll have to take a look at it
01:30:45 there's no utxo
01:30:52 yeah that's different than i thought it worked
01:31:16 there's a list of accounts and their balance in "each currency" (by currency meaning a 3-letter code)
01:31:34 and it's also a bad idea
01:31:44 imo
01:32:20 Mike_B: i assume you're trying to answer the question "how can we create a Ripple-like system using bitcoin primitives?"
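For readers unfamiliar with the distinction jtimon is drawing, here is a toy sketch of the two ledger models (names and field layouts are invented for illustration, not Ripple's or Bitcoin's actual formats). In the account model every transaction from an address must carry the next sequence number; in the UTXO model a transaction just consumes whole outputs and creates new ones, so independent transactions need no ordering relative to each other:

```python
# Account model (Ripple-style): one balance per (address, currency),
# plus a sequence number to order transactions from the same account.
accounts = {("rAlice", "USD"): {"balance": 100, "seq": 0}}

def account_transfer(src, dst, currency, amount, seq):
    acct = accounts[(src, currency)]
    assert seq == acct["seq"] + 1, "txs from one account must be strictly sequenced"
    assert acct["balance"] >= amount
    acct["balance"] -= amount
    acct["seq"] = seq
    dest = accounts.setdefault((dst, currency), {"balance": 0, "seq": 0})
    dest["balance"] += amount

# UTXO model (Bitcoin-style): no per-account state; the ledger is just
# a set of unspent outputs, each an (owner, value) pair keyed by
# (creating txid, output index).
utxos = {("tx0", 0): ("alice_pkh", 100)}

def utxo_transfer(txid, spend, outputs):
    assert sum(utxos[o][1] for o in spend) >= sum(v for _, v in outputs)
    for o in spend:
        del utxos[o]            # outputs are consumed whole
    for i, out in enumerate(outputs):
        utxos[(txid, i)] = out  # new outputs, any excess is the fee

account_transfer("rAlice", "rBob", "USD", 30, seq=1)
utxo_transfer("tx1", [("tx0", 0)], [("bob_pkh", 60), ("alice_pkh", 39)])
```

The seq field is what jtimon calls "ugly": two transactions from the same account cannot be validated independently, whereas two UTXO transactions spending different outputs commute.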
01:32:39 jtimon: makes sense, i have to read about it more
01:32:48 we (jtimon and maaku) have addressed this: http://freico.in/freimarkets.pdf
01:32:55 maaku: i was attracted to ripple mostly because "consensus" has txs confirming in a few seconds rather than 10m
01:33:03 er, http://freico.in/docs/freimarkets.pdf
01:33:12 but, i'm a bit disillusioned about it now because it has some bad flaws in terms of not being decentralized
01:33:17 ok
01:33:33 and i was in a trading channel and people were talking about decentralized exchanges and how they'll be the next big thing
01:33:35 they get that by having a completely centralized transaction processing mechanism
01:33:40 there was also their refusal to properly implement demurrage, ahem, interest
01:34:13 but then i realized that making a "decentralized exchange," in which trades execute reasonably quickly, is at least as hard as making a new cryptocurrency that doesn't require blockchain confirmations
01:34:16 yeah, freimarkets is an architecture for doing decentralized exchanges using the bitcoin protocol, but keeping as much data off the chain as possible
01:34:18 JoelKatz tried to convince me that it was impossible to have ripple transactions with interest-bearing assets
01:34:41 and I tried to make him read my examples
01:34:44 in fact, in real deployments we expect most applications to be off-chain entirely, on private servers that nevertheless communicate with bitcoin-like messages
01:34:48 maaku: how long does it take for a trade to execute?
01:34:55 10 minutes to be confirmed by the network * 6 confirmations?
01:35:11 on-chain, yes, it's like any other transaction
01:35:14 Mike_B: that depends on the value of the trade
01:35:19 off-chain, as fast as the private server can process it
01:35:31 yeah so if trades take an hour to execute, it's going to be pretty different from how normal exchanges work
01:35:36 if you trade 0.01 usd one block may be fine
01:35:58 Mike_B: you're not going to get a decentralized platform like bitcoin to do high-frequency trading
01:35:58 well fair enough, i'll read it
01:36:34 there are fundamental limitations in play here
01:36:56 well, actually trades are atomic, so you're not waiting to give something in return like in real payments...the value is irrelevant
01:37:20 for things you need global decentralized consensus on, it'll take time to get global consensus
01:38:16 however you can do things like high-frequency micro trades using sequence numbers and transaction replacement
01:38:42 yeah i'm trying to see the big picture of that
01:38:44 but you run a serious counterparty risk if you don't wait for confirmations
01:38:45 global decentralized consensus
01:38:51 you basically are exposing the network to an election
01:39:02 and somehow it elects an ordering of events
01:39:12 and bitcoin is like using the "random ballot" voting principle
01:39:26 yeah the chain is a global serializer
01:39:30 where 1 share of cpu time = 1 ballot
01:39:36 Mike_B: the thing is for nearly all applications you *don't* need global consensus, particularly when you're talking about trading IOUs or stocks or other assets with an inherent trusted party
01:40:17 you just can't have p2p dollars
01:40:37 no matter what mastercoin or bitshares claim ;)
01:40:46 but people instantly jump to the "decentralize all the things!"
mindset, leading to crazy inefficient orderbook-on-the-blockchain proposals and such
01:40:54 haha
01:40:58 yeah i was trying to decentralize all the things
01:41:03 i think it's a fun academic problem though, at the very least
01:41:20 i mean, say you have a fleet of starships that are flying around in deep space, and they need to synchronize somehow
01:41:29 well, there's no one absolute reference frame that tells you the "correct" ordering of events
01:41:37 so the bitcoin approach would be to just pick one guy at random to decide (which is what pow does)
01:41:43 i was curious if there were other approaches too
01:42:16 consensus seemed promising but that flaw re: a minority of dishonest nodes ruining the network kind of kills it
01:42:51 there are plenty of other approaches that could work, but very few that are rooted in fundamental physical laws like proof-of-work is
01:43:43 consensus could probably be made better.. but really it's the ugly child that nobody wants
01:43:45 hehe, here comes entropy...
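The "random ballot where 1 share of cpu time = 1 ballot" framing can be checked with a quick Monte Carlo. Because progress-free PoW is memoryless, each miner's time to a first solution is exponentially distributed with rate proportional to hashpower, so whoever finishes first wins with probability equal to their hashpower share. A sketch (the hashrates are invented for illustration):

```python
import random

def race(hashrates, rng):
    """Progress-free mining is memoryless, so each miner's time to a
    first valid block is exponential with rate ~ hashpower; the block
    goes to whichever miner finishes first."""
    times = [rng.expovariate(r) for r in hashrates]
    return times.index(min(times))

rng = random.Random(1)
hashrates = [4.0, 3.0, 3.0]   # illustrative shares of network hashpower
wins = [0, 0, 0]
for _ in range(10_000):
    wins[race(hashrates, rng)] += 1
# expected win counts track hashpower shares: ~40% / 30% / 30%
```

This is the "one guy picked at random to decide" property: a 4% miner decides 4% of the blocks, no more and no less, regardless of how the other 96% is distributed.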
01:44:08 heh, i'll let Mike_B figure that one out on his own
01:44:18 heh
01:44:26 in any case, Mike_B: whatever the consensus mechanism
01:44:44 all nodes on the p2p network must repeat the same validations
01:45:02 and you just can't have 10,000 nodes validating nasdaq
01:45:16 independently
01:45:22 if you want fast transactions, there are ways you can have a centralized serializer without having to trust the central node in any way except availability
01:45:54 see: open-transactions, freimarkets private accounting servers, and others i'm sure
01:46:14 2PC ripple
01:46:25 yes, 2PC ripple
01:47:03 http://archive.ripple-project.org/Protocol/Protocol?from=Protocol.Index
01:47:25 although that's kind of abandoned
01:47:49 well, we did incorporate it into freimarkets
01:48:04 yeah
01:48:25 at least functionally
01:49:03 hm ok
01:56:47 alright, well thanks for the info
01:56:50 i'll look into all that
02:04:09 supposed proof of P=NP: http://arxiv.org/pdf/1208.0954.pdf
02:04:17 dubious of a proof that's only 24 pages long
02:06:05 maaku: well, a successful proof could be done with a single reduction, so it could be short
02:07:07 well, i mean dubious of a short proof to this problem ;)
02:07:49 i'd expect the nearby inferential space to be completely exhausted by this point
02:10:03 it claims to be constructive.
02:11:03 right before sec 2 he outlines the plan
02:11:10 i'm having trouble understanding what he's saying..
02:26:38 well, it does appear to be constructive, there are explicit algorithm listings everywhere
02:26:44 but it is much too elaborate for my poor brain
02:34:01 after it said it was constructive, I paged down to the end to see if it had benchmarks for solving some boring NP problem, even in terms of machine steps.
02:34:06 nope.
02:34:08 closed pdf.
02:35:37 yeah, he went so far as to claim this was possible
02:36:41 very last sentence: "Therefore, the algorithms proposed in the present paper can be used in practice to implement non-deterministic algorithms using deterministic imperative programs."
03:15:26 how did this crap get on arxiv.org
03:21:58 i'm gonna email the guy and ask him if he can efficiently compute preimages for SHA256 hashes
03:24:18 Mike_B: arxiv does not verify or censor anything
03:24:29 andytoshi: to publish something to arxiv you need someone to endorse you
03:24:42 yeah, but it's easy to get an endorsement in academia
03:25:17 also if you had an account before they started doing endorsements
03:25:21 i think you're free
03:25:51 http://arxiv.org/find/cs/1/au:+Yakhontov_S/0/1/0/all/0/1
03:25:51 heh
03:25:56 his first paper was some other random thing
03:26:12 he probably was like "can you endorse me for this algorithms paper?" and the guy was like "sure"
03:26:17 second paper after that: "P = NP"
03:26:25 i'd be pissed if i was the endorser
03:28:18 lol yeah, i'd be annoyed
03:28:24 tbh i'd probably never bother to find out :P
03:39:51 we find out later it was just created as an effort to manipulate bitcoin prices.
03:40:34 Mike_B: meh, give him an easy one, ask for an md5 second preimage of the all-zeros md5sum.
03:41:23 ha
03:44:53 i wonder how security would change if you replaced the usual 10m blockchain confirm with the following process
03:45:48 1) set difficulty so that each miner can solve the problem in (some shorter amount of time, like 10s)
03:45:55 2) wait for N miners to have declared a solution
03:46:11 (assuming N is large)
03:46:12 not progress free.
03:46:13 3) have those miners come to consensus
03:46:23 "progress free"?
03:46:34 A large miner has an unfair advantage.
03:46:45 He will mine with his large hashpower, claiming to be M small miners.
03:47:04 right, but is that just the same 51% vulnerability?
03:47:10 and keep his partial results for himself, and then come to consensus with himself; by hoarding his partial results he gets a superlinear speedup.
03:47:16 At the extreme, the fastest miner always wins.
03:47:20 no it's not.
03:49:01 so say you have an expected solving time of s, and you need N miners for a quorum, so that s*N = 10 minutes
03:49:04 imagine the extreme version where every hash is a winner. I am 4gh/s, you are 3gh/s. Target is 40 giga-shares to solve a block. How many blocks will you solve?
03:50:04 what do you mean by "giga-shares?"
03:50:42 hashes.
03:51:01 if every hash is a winner, doesn't that mean the target is 1 hash to solve a block?
03:51:21 I mean every hash meets your lower criteria.
03:51:46 I'm using an extreme example where the ratio of the lower criteria to the block criteria is very large.
03:51:58 In those cases mining becomes a race and the fastest miner ~always wins.
03:52:21 it's true when the ratio isn't large, but the advantage is somewhat less.
03:53:25 The method you're describing (breaking up the hashcash into N smaller hashcashes) is suggested in some hashcash papers to reduce variance, but it has the property that it's not progress free, which is why we don't use it.
03:53:25 don't understand what you mean by "lower criteria" and "block criteria"
03:53:49 lower criteria is your "solving criteria"
03:54:27 Mike_B: in your own language, set N to a large value like a billion.
03:55:48 ok, and now what
03:55:52 N is a billion, s is tiny, N*s = 10m
03:57:23 now you have some miners and one a good amount faster than the others. instead of sharing his partial solutions he hoards them (or at least hoards them unless he learns of someone else having too many of them).
03:59:08 ok
04:05:33 gmaxwell: i still don't see the issue, sorry
04:05:44 you're talking about a case where a miner has a plurality of hashpower but not a majority?
04:07:41 Mike_B: I haven't fully understood the issue, but consider that in _any_ scheme where you have a threshold of "N miners" that can do something by consensus, there's something wrong
04:07:51 Mike_B: because one miner can always claim to be N miners for any value of N
04:07:57 so either the threshold is not necessary, or it's broken
04:08:13 I don't know which is the case here
04:10:33 gwillen: i mean N verified proofs of work
04:10:38 could be the same miner more than once
04:11:09 okay, N distinct proofs of work, that defeats my objection
04:11:22 I don't understand gmaxwell's well enough to know what it does to his
04:12:31 oh, I think I see
04:12:45 when it's a single share you need, everybody has a chance proportional to their hashpower, but it's high variance
04:13:04 if you need N smaller shares, you reduce the variance, but you also reduce the chance of people with low hashpower and increase the chance of people with high hashpower
04:13:50 if you need 1 share that takes a million seconds on average, winning is proportional to hashpower
04:14:03 if you need a million shares that take 1 second on average, the guy with the most hashpower will win every time
04:14:16 (if I'm thinking about this right)
04:14:25 That's what I'm arguing, yes.
04:14:29 ok.
04:14:41 It's not progress free. As you find shares you're making progress.
04:14:50 oh, interesting
04:14:56 progress-freedom makes it a poisson process
04:15:08 and only a poisson process has the right statistics for winning to be proportionate to hashpower
04:15:14 gmaxwell, can you link me to a paper that describes this
04:17:28 if you're saying one exists, anyway
04:18:09 gwillen: what i'm trying to figure out is what the analogue of the 51% vulnerability is as N changes
04:18:42 I thought there was, but I'm not finding it at the moment, I'll look more after dinner.
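gwillen's intuition — one big share keeps winning proportional to hashpower, a million small shares hand every block to the fastest miner — is easy to demonstrate numerically. In this sketch (hashrates invented for illustration), a miner's time to collect n shares is modeled as a sum of n exponentials, so its relative variance shrinks as n grows and the lottery degenerates into a race:

```python
import random

def race_to_n_shares(hashrates, n_shares, rng):
    """Each miner must privately accumulate n_shares small solutions;
    finish time is a sum of n exponentials, whose relative variance
    shrinks as n grows, so the race goes to the fastest miner."""
    def finish(rate):
        return sum(rng.expovariate(rate) for _ in range(n_shares))
    times = [finish(r) for r in hashrates]
    return times.index(min(times))

rng = random.Random(2)
hashrates = [4.0, 3.0, 3.0]   # illustrative: one miner is fastest
results = {}
for n in (1, 10, 300):
    wins = [0, 0, 0]
    for _ in range(1000):
        wins[race_to_n_shares(hashrates, n, rng)] += 1
    results[n] = wins
# n=1: the biggest miner wins ~40% (proportional to hashpower);
# n=300: the biggest miner wins nearly every block
```

This is gmaxwell's "not progress free" objection in miniature: splitting one hashcash into N smaller hashcashes reduces variance, but only a memoryless (Poisson) process keeps winning proportional to hashpower.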
04:18:47 Mike_B: as I understand it, you could indeed compute an analogous percentage as a function of N
04:18:53 but I don't know how off the top of my head
04:19:00 gmaxwell: alright, well i'd much appreciate it if you do find anything
04:19:03 I could probably work it out but I have real work I need to be doing
04:20:45 gwillen: fair enugh
04:20:47 enouh
04:20:51 god damn it
04:20:54 :(
04:21:00 * Mike_B "enoughghghghghghghghghghghghg"
04:23:24 new lenovo keyboard?
04:23:59 no, i just developed a neuromuscular disorder that lasted 2 seconds
04:26:27 It's been known to happen to bitcoiners. :(
04:27:29 bitcoin-related finger tremor
04:28:22 ok, so i see your objection now
04:28:33 so you're saying the target is 0xfffff....
04:28:46 so every hash wins, but you need a trillion hashes or whatever
04:29:20 so if you have double the hashpower I do, you generate hashes twice as fast
04:30:09 and i guess you're saying there's a strategy where you can hoard hashes while i, the poor unsuspecting sap, just broadcast them to the network
04:30:50 is that right?
04:31:37 i guess i'm just not sure how you'd use hoarding hashes to have influence more than your hashpower
04:31:44 you'd have to wait for me to pass some threshold and then dump
06:01:33 gmaxwell, what do you think of the transaction notation in the "mpc on bitcoin" paper
06:01:38 is it easy to read?
06:02:14 it's a pretty sound compromise between the current academic notation and how we're used to looking at them, i think
06:03:02 i guess i should try writing something else out in that style
08:02:37 Fistful_of_LTC is now known as Fistful_of_AFK
13:06:25 Fistful_of_AFK is now known as Fistful_of_LTC
13:21:39 maaku: I'm still on page 5, but this P = NP paper looks very good
13:22:56 I thought you believed this was possible since you tried it yourself
13:28:35 jtimon, link?
13:28:53 supposed proof of P=NP: http://arxiv.org/pdf/1208.0954.pdf
13:28:54 dubious of a proof that's only 24 pages long
13:38:56 can you express the problem in coq or agda?
13:46:29 <_ingsoc> For a second I thought it was this guy: https://en.wikipedia.org/wiki/Sergei_Yakhontov
13:46:36 <_ingsoc> I would have been like, damn, that's badass.
13:52:44 jtimon: it's not new, it's revised from 2012, see http://arxiv.org/abs/1208.0954 and http://www.win.tue.nl/~gwoegi/P-versus-NP.htm
13:53:15 what was the problem in 2012?
13:53:59 shouldn't a constructive proof of P=NP lead pretty directly to efficient algorithms/reductions for All The Problems
13:54:02 ?
13:54:28 huh
13:54:49 it's funny to see a list of papers along with claims "This paper proves P=NP" followed by "This paper proves P/=NP"
13:55:18 yeah
13:55:19 --
13:55:21 [Equal]: In September 2012, Sergey V. Yakhontov proved that P=NP. The proof is constructive, and explicitly gives a polynomial time deterministic algorithm that determines whether there exists a polynomial-length accepting computational path for a given non-deterministic single-tape Turing machine. The paper is available at http://arxiv.org/abs/1208.0954. (Thanks to Ricardo Mota Gomes for providing this link.)
13:55:22 --
13:55:48 nsh: serious people stopped trying to look for problems in non-peer-reviewed papers like this, e.g. http://www.wisdom.weizmann.ac.il/~oded/p-vs-np.html
13:56:01 (constructively determining the existence of something is not constructive)
13:56:03 isn't looking for problems rather what peer review means?
13:56:12 nsh: there are classes above NP that would be unaffected (ExpTime, ...)
13:56:18 sipa, right
13:56:35 also, polynomial does not imply efficient by any real-world standard
13:56:43 (assume it was polynomial in the 100th degree?)
13:58:00 have there been many cases of polynomial algorithms being found but only with high exponents?
13:58:22 i have the impression (but i don't know how reliable it is) that generally relatively efficient algorithms are found where they exist at all
13:58:23 TD: yeah, but they prefer (anonymous) submission to a conference for peer review, instead of posting it publicly and confusing random people who come across false proofs
13:58:43 confusion has some overlap with inspiration :)
13:58:57 i don't mind 1000 quacks if there's one genius
13:59:19 (the ratio is probably much higher in practice though)
14:01:41 nsh: i think poly time algorithms for interesting problems are no more than a small const in the exponent after optimizations, say n^6 or n^12 when n is the bit size
14:02:09 right, i wonder why this is though... seems very... fortunate
14:02:27 nsh: obviously you can have artificial problems like clique of size 1000 in an arbitrary graph, with poly time complexity of n^1000
14:02:48 sure, there'll always be nasty cases. but it's a question of how they're distributed i suppose
14:23:37 home_jg is now known as jgarzik
14:26:51 so iddo, has the paper been proven wrong?
14:29:21 jtimon: probably no one serious tried to look and refute it
14:29:27 jtimon: this paper is a tangled structure of about 30 definitions and 10 nested algorithms which purports to be a program which proves the existence of a poly-time algo for a given NP problem
14:29:29 (i think)
14:29:50 nobody is going to peer-review that when it's just a random thing on the arxiv
14:29:53 jtimon: if you google you can find explanations, e.g. http://www.scottaaronson.com/blog/?p=458
14:32:42 http://arxiv.org/abs/0711.0770 this one is clearer
14:32:46 (iddo's link is a general "how to judge P vs NP papers without reading too closely" article)
14:36:26 there was a claim that looked serious (involving a new technique from statistical physics) about 3 years ago, so Terence Tao and co.
looked and demolished it within a few days after it became public: http://michaelnielsen.org/polymath1/index.php?title=Deolalikar's_P!%3DNP_paper
14:41:46 Terence Tao used to hang out in the go-lang irc channel :|
14:45:55 does he not anymore? he seems to spend an impossible amount of time hanging out on the internet
14:45:59 considering how much work he gets done..
14:46:35 andytoshi: i stopped using go a long time ago
14:54:05 pigeons: you gave me a link about a physics unified theory
14:54:21 yeah sorry, bad joke
14:54:29 ah, ok
14:54:34 this one is clearer
14:54:38 i was trying to comment on the reliability of arxiv.org papers
14:54:58 I see
14:55:35 but if you have to explain the joke, it wasn't a very good one :)
14:55:47 but is there a critique of this concrete proposal?
14:56:05 although thank you for the link iddo
14:58:21 or was it just regarded as "not serious enough" and not reviewed by anyone or something?
17:35:46 jtimon: the paper has only been up for hours
17:36:14 oh, I see, so there's probably no critique yet
18:23:29 TD is now known as TD[away]
18:37:28 Huh, there are two papers recently added to eprint.iacr.org with "proof of space" in their title.
18:37:59 amiller: have you seen gmaxwell's argument that making mining-effort into a "dual purpose" operation isn't necessarily good?
18:38:44 fwiw i am *not* in favor of "dual purpose" unless the dual purpose is intrinsic to the system itself somehow
18:38:45 zooko, ^
18:38:51 that probably makes no sense, i can try to elaborate though
18:41:28 * nsh nods
18:41:29 it makes sense to me.
18:42:43 it makes sense to me
18:43:12 ok :)
18:43:26 though i'd have to think a bit about why you feel that way
18:43:57 it's interesting that these two proofs-of-space papers show up though: http://eprint.iacr.org/2013/805 and http://eprint.iacr.org/2013/796
18:44:34 i can't really figure out if they're better than gmaxwell's proof of storage
18:46:46 eerily similar works
18:46:56 (per the abstracts, at least)
18:47:09 oh, one of the authors of one of them is also on the Secure Multiparty Computation on Bitcoin paper
18:47:18 amiller: that makes sense.
18:47:25 university of warsaw seems to have a strong bitcoin research faction now...
18:47:44 amiller: because of gmaxwell's argument about weakened incentives for correct consensus-building?
18:48:21 zooko, yes that's the argument i have in mind and think is right
18:48:27 ("consensus-building" ≈ mining)
18:48:32 amiller: thanks.
18:51:57 amiller: I think the first paper there is basically isomorphic to my proposal with a lot of obfuscating language.
18:52:38 well, not quite isomorphic.
18:53:22 do we have a standard template form letter yet to send people who write papers and don't cite the forum posts they should
18:53:44 * amiller wants to see whatever iddo sent the lottery paper authors
18:54:39 <_ingsoc> Lottery paper?
18:55:30 _ingsoc, http://eprint.iacr.org/2013/784 summarized in this thread https://bitcointalk.org/index.php?topic=355174.0
18:56:13 <_ingsoc> Oh cool. Thank you. :)
19:10:34 amiller: i pasted the link here yesterday: http://www.cs.technion.ac.il/~idddo/cointossBitcoin.pdf
19:10:58 i asked them to reference this in their paper, but they haven't replied so far
19:16:31 nsh- is now known as nsh
19:50:27 zooko` is now known as zooko
22:04:45 phantomcircuit is now known as PhantomCircuit
22:05:19 PhantomCircuit is now known as phantomcircuit