Q&A with Bitcoin ABC and Others

I am Amaury Sechet. I am the lead developer of Bitcoin ABC. Thank you. Next is Shammah Chancellor.

Hi, I'm Shammah Chancellor. I am one of the maintainers for Bitcoin ABC, mainly focusing on refactoring and cleaning up the code.

Thank you. And next is Jason B. Cox.

Hi! I'm Jason, a software engineer working on Bitcoin Cash. I started volunteering back in November. Right now I'm focusing on refactoring RPC and improving our ability to move faster in that area.

Ok. Next is Antony Zegers.

Hi, I'm Antony, and I guess I've been around trying to be involved with big-blocks stuff for a while. I was a founding member of Bitcoin Unlimited, and then once I saw Amaury was working on Bitcoin ABC I have been trying to help him out with that. Ever since I found out about that, which was pre-fork, I've been trying to help out with ABC.

Great. Thank you Antony. The gentlemen from ABC are joined by Chris Pacia. So Chris, could you introduce yourself please?

Yeah, I am the, I guess, lead back-end developer for OpenBazaar. I also recently started a Bitcoin Cash implementation called BCHD, which is so far me and Josh Ellithorpe working on it.

Thank you Chris. Jonathan Toomim?

Hi, Jonathan Toomim, with an M at the end. Sorry. It's okay, most people misspell that. I've been in the big-blocks community for quite a while, basically since the early Bitcoin XT days. So I was one of the contributors to Bitcoin XT, I also helped found the Bitcoin Classic project, and recently I've been active as sort of a scientist/developer for Bitcoin Cash, trying to look at data that measure the effects of some of the proposals and looking at ways that we can make Bitcoin Cash better. I'm mostly focused on scalability, but I do also have some interest in the CheckDataSigVerify stuff.

Great. Thank you very much Jonathan. The next guest is Guillermo Paoletti from Bitprim.

Yes. I work for Bitprim. We have a full node implementation that is going to follow the ABC changes. So I'm here to represent another implementation, so people can see that this is not only an ABC hard fork, that some other nodes are wanting to follow the changes. We also, at Bitprim, help ABC with all their testing. We set up the testnet, the explorers. We have applied some Bitcore patches to their nodes so the community can test their nodes before the hard fork.

Thank you very much Guillermo. So without any further ado I think we'll just go into the questions that we have had presented so far. The first one is on CDS and it is: What are some interesting use cases for CDS that you're aware of? Anybody want to take that?

I can. So CheckDataSigVerify allows, on a basic level, for oracles. If anybody's familiar with Ethereum and the way that they approach getting data from the real world into the blockchain, it's usually by having some semi-trusted oracle that will sign messages stating things like "the current exchange rate of ETH to USD is X". So CheckDataSigVerify allows for that kind of signature, a third-party signature, to be included as data, and for the transaction's validity to depend on having a signature like that.

One of the specific use cases that I'm particularly interested in is that you can use CheckDataSigVerify to have a third party which verifies some sort of identity token or some sort of proof of identity. Somebody could do two-factor authentication on a phone number, and you could set up a script that would effectively allow somebody to send money to a phone number in a trustless fashion, or semi-trustless fashion, only trusting that this third party is going to accurately verify that the phone number belongs to that person.
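As a rough sketch of the oracle pattern described above (illustrative only: the layout follows the general shape of an OP_CHECKDATASIGVERIFY script, and names such as oracle_pubkey are placeholders, not a specification):

```python
# Illustrative sketch of an oracle-gated output using OP_CHECKDATASIGVERIFY.
# The opcode pops (sig, message, pubkey) and verifies the signature over a
# hash of the message. All names are placeholders; a real script would also
# constrain what the oracle's message must contain.

# Locking script: spendable only with (a) an oracle signature over some
# attested message and (b) an ordinary transaction signature from the payee.
locking_script = [
    "<oracle_pubkey>",        # push the oracle's public key
    "OP_CHECKDATASIGVERIFY",  # pops pubkey, message, sig; checks the oracle signature
    "<recipient_pubkey>",     # push the payee's public key
    "OP_CHECKSIG",            # ordinary signature check over the spending transaction
]

# Unlocking script supplied by the spender (pushed left to right):
unlocking_script = [
    "<recipient_tx_sig>",     # normal transaction signature
    "<oracle_sig>",           # oracle's signature over the attestation
    "<oracle_message>",       # e.g. an attestation that a phone number was verified
]

print(" ".join(unlocking_script + locking_script))
```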

You can also do this with an email address or something like that. And you can set it up so that the oracle cannot steal the funds, and the recipient also gets a direct message to their phone number which they need in order to claim it, but nobody else who is eavesdropping can steal the funds either. So there's a lot of cool stuff that you can do with it that way. We're just scratching the surface right now, but I think it's really exciting.

All right, thank you Jonathan. Antony, did you have a comment on that as well?

Yeah, sure! Maybe I'll start by talking about the background; I helped coordinate review for this and write the spec. So basically, everyone is familiar with Op_CheckSig: every Bitcoin node and wallet has to implement Op_CheckSig, which does two things. It takes a bunch of information from a transaction and hashes it, and then it checks the signature of that hash, and it does it using an ECDSA signature check. Basically, all Op_CheckDataSig does is the second part but not the first part. So it does everything exactly the same as Op_CheckSig, it just checks an ECDSA signature. Same format. So basically that part is already implemented in every wallet; it has to be, so that you can check transactions. But you can pass in some other data other than the sighash from a transaction. Basically that means that it's fairly flexible. You can pass in external information from an oracle or something, but you can also, as Jon Toomim mentioned, do things like identity checks, and basically any ECDSA-signed thing. You can pass in a PGP signature, it turns out, as long as it's an ECDSA-signed thing. So basically you can check if something has been signed that maybe was not really even intended as a Bitcoin thing; you can check whether that signature is valid.

And another one that I've heard of is: Awemany came up with a thing where you can basically pass in sighashes from other transactions to see if other transactions have been signed. So he's using that for a thing where you can see if two different transactions have been signed from the same output, I guess. I'm not sure exactly what it does, but basically it proves that you have a double spend. You can make a transaction that detects if something has been double spent and then can do something based on that. So I think in his case he uses it for a forfeit kind of thing.

Yes, exactly. It's actually to discourage double spends. I can expand on that, if you want.

Sure.

So yeah, Awemany's proposal is that you've got some parent transaction, and let's say somebody tries to do a double spend attempt, so you've got transaction A and transaction B. With the signatures from transaction A and transaction B we can have a transaction C that the double spender pre-sent to a certain script, and transaction C depends, via Op_CheckDataSigVerify, on two different ECDSA elliptic curve signatures from that parent. So you can take transaction A's signature, take transaction B's signature, stick that into the redeem script for transaction C, and then all of a sudden this double spender has lost their life savings, or as much as their deposit is. So this provides a crypto-economic incentive for people to not double spend. It allows vendors to say: oh, you have this deposit. You're gonna lose big if you try to double spend.
So I can to some extent trust you're not going to try. The one drawback of this is that the forfeit money goes to miners; it does not go to the store or whoever had a double-spend attack against them. So it is really a crypto-economic proof, but it's still probably quite good, and so I'm looking forward to seeing it being implemented.

Thanks Jonathan. Just before we go on to the next question, would anybody else like to address this question?

Yeah, I also think I've heard of a use case from Awemany about potentially doing cross-chain swaps using it. Basically what Antony was talking about, where you can supply a signature from another transaction; you can also do that across chains with compatible signatures. So that's another pretty interesting use case.
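To make the comparison above concrete, here is a minimal sketch (not consensus code) of the point that Op_CheckDataSig performs the same ECDSA check as Op_CheckSig, just over script-supplied data instead of a transaction sighash. It uses the third-party python-ecdsa package and assumes the message digest is a single SHA-256 of the pushed data:

```python
# Minimal sketch (not consensus code) of the difference described above.
# Requires the third-party package: pip install ecdsa
import hashlib
from ecdsa import SigningKey, SECP256k1

sk = SigningKey.generate(curve=SECP256k1)   # stand-in oracle or wallet key
vk = sk.get_verifying_key()

def checksig_like(signature, sighash, pubkey):
    # OP_CHECKSIG: the node first derives the digest from the spending
    # transaction (the sighash algorithm), then runs an ECDSA verify.
    # (Shown only for contrast.)
    return pubkey.verify_digest(signature, sighash)

def checkdatasig_like(signature, message, pubkey):
    # OP_CHECKDATASIG: same curve, same signature check, but the digest is
    # just a hash of data pushed in the script; no transaction is involved.
    return pubkey.verify_digest(signature, hashlib.sha256(message).digest())

statement = b"exchange rate attestation: 1 BCH = X USD"
oracle_sig = sk.sign_digest(hashlib.sha256(statement).digest())
assert checkdatasig_like(oracle_sig, statement, vk)
```

Because the pushed data can itself be the sighash of some other transaction, the same primitive supports the double-spend-forfeit and cross-chain ideas mentioned above.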

So like Dash or other things like that. Like Coin Dance. Part of the design, when we did the review of it, was to try to make it more generic. So I think it's a pretty good sign that there are these use cases that we didn't necessarily anticipate, that they're cropping up. So hopefully there will be more in the future.

All right, I'll move on to the next question, which is related: What's the difference between this and Andrew Stone's original DataSigVerify? Who would like to take that question?

I guess I can talk about that. Basically there's one major difference, and that is kind of what I talked about before: it's essentially exactly the same as Op_CheckSig. In Andrew Stone's proposal the signature format was a little bit different, and maybe it was better, who knows. It used the signature method that you can use to sign a message in the wallet in Bitcoin Core or Bitcoin ABC, but that's different from the signature format in Op_CheckSig. Basically the main thing that came out of the review is that everyone, or almost everyone, thought it was a good idea to basically mirror Op_CheckSig exactly in terms of how the signature format is. Because it's going to be a consensus thing, and it's just a lot safer, basically the more conservative thing to do. But the interesting thing that came out of that, and it's sort of funny, is that it now turns out that it seems like it's actually more powerful this way, because, as we mentioned, now you can pass in sighashes from other transactions and stuff like that. So it creates these interesting ideas about maybe cross-chain types of conditions and stuff like that.

Thanks Antony.

There are a few other minor things, but that was the major difference.

Okay. Anybody else like to address the question? No? Okay, we'll move on to the next question which has come in. This is regarding CTOR and I will throw it out there and read it: What is BitcoinSV wrong about in terms of CTOR, and if nothing is wrong, why not do it their way?

So BitcoinSV believes that CTOR is wrong and that we shouldn't do it. And they are wrong about that, because CTOR is great and awesome and it will help Bitcoin scale. BitcoinSV has this very interesting idea, interesting in not so good of a way: they want to increase the block size limit while simultaneously getting rid of the features that will help us actually achieve the block size limit, or a higher capacity, in a safe fashion. So yeah, I do not have any sympathy for that view at all.

Specifically for CTOR, the thing that it makes a big difference in is block propagation. CTOR allows block propagation to happen by just sending the set of transactions instead of the ordered list of transactions. And it turns out that, information-theoretically, the order information exceeds the size of the minimum amount of information that's necessary to specify which transactions are in a block. For Graphene it makes around a seven-fold difference in terms of the encoding size, and Graphene isn't necessarily optimal in terms of performance.
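As a back-of-envelope illustration of that information-theory point (the transaction counts here are assumptions picked for illustration, not measurements from the talk):

```python
# Back-of-envelope: bits needed to say WHICH mempool transactions are in a
# block vs. bits needed to also describe their ORDER. Counts are illustrative.
from math import lgamma

LN2 = 0.6931471805599453

def log2_factorial(n):
    return lgamma(n + 1) / LN2           # log2(n!)

def log2_choose(m, n):
    return log2_factorial(m) - log2_factorial(n) - log2_factorial(m - n)

mempool_txs = 300_000                    # assumed transactions both peers know
block_txs   = 200_000                    # assumed transactions in the block

selection_kib = log2_choose(mempool_txs, block_txs) / 8 / 1024
ordering_kib  = log2_factorial(block_txs) / 8 / 1024

print(f"set selection: ~{selection_kib:.0f} KiB, ordering: ~{ordering_kib:.0f} KiB")
```

With these assumed numbers the ordering information is roughly an order of magnitude larger than the set-selection information, which is the gap a canonical ordering removes.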
Graphene is just pretty good. For other algorithms, like the Xthinner approach that I published a few days ago, CTOR reduces the information-theory entropy by a factor of roughly 2, or not the entropy, excuse me, the encoding size by a factor of 2. So I think that a factor of 2 or a factor of 7 is worth worrying about, and we're gonna have to do it now if we want to do it at all, because it is a large philosophical change, although in terms of code it is a very small change. So if we want to get it through we have to do it while Bitcoin Cash is still small. We can't wait until we need to scale before we can add scaling features like this.

Thank you Jonathan. Anyone else like to tackle the question?

I will go ahead, because I talked with Amaury a bit about this, and some of the other developers, specifically regarding Compact Blocks, because that is a fairly common transmission method for Bitcoin nodes when it comes to transmitting new blocks. And one of the things we discovered is, when you start hitting one-gigabyte blocks, Compact Blocks fails predictably, to the point where it's completely unusable. I don't have the numbers right offhand, but if I remember correctly it's around a 75 percent failure rate for one-gigabyte blocks, and I think for 100-megabyte blocks it's something like a 25 percent failure rate. And when Compact Blocks fails, now you have to transmit the entire contents of the block, use a lot of bandwidth, and the transmission time tanks. It really starts generating these perverse incentives for miners to start selfish mining and other sorts of behaviour. So without CTOR we start running into this roadblock where 1 GB is practically unattainable without major changes. So that's why, personally, I'm really excited about CTOR. I haven't really seen any proposals by nChain to get around this roadblock, other than their suggestion that hardware ASICs could help improve performance. But the thing is, when you're forced to transmit a large block over a net connection that is potentially weak, especially when you're going across continental lines, from America to China for example, or Europe, this problem starts to become very, very real the more data you're trying to transfer. So unless they have a proposal for how to get around that without CTOR, I think CTOR is the right way forward.

Thank you Jason.

The other thing to note is that the objections to CTOR haven't particularly been technical. I actually had a conversation with nChain developers at one point and explained to them how CTOR works, and they seemed to agree with it at the time; in fact they had it listed on their website for a period of time as well. There being no technical arguments, I don't know how we can continue to do protocol development when technical arguments are not the primary factor driving changes. As far as what has been offered, the main quote-unquote technical argument has been that you can append to Merkle trees in relatively constant time; however, that limits adding transactions to a block to being single-threaded, so that's not really… yeah, I mean, it's technically correct, but it isn't relevant to actually constructing blocks. Blocks take a long time to mine anyway; the whole point is to make blocks expensive to produce. So that's kind of my two cents on it. I would be willing to listen to technical arguments, but I haven't heard any.

Thank you Shammah. Anyone else like to speak to this question?

May I ask a question about this?

Sure Juan, go ahead. Sorry, Juan came in a bit late. Juan is the CEO of Bitprim.

All right. Jason, I wasn't aware of that, and personally I'm not sure how bigger blocks affect Compact Blocks' efficiency. Just to have an idea, in which way does a bigger block affect the hit rate, and why does it start failing when the block size increases? I don't get it. I'm not saying it's not true, I just don't know in which way it affects it, if you know?
I can answer that. What most methods that relay blocks very quickly rely on is computing some small ID for each transaction and then assuming that the other node that is going to receive the block knows about those transactions already. So you can transmit only the list of small IDs, and the node at the other end can reconstruct the block. This is how Xthin works and this is how Compact Blocks works as well. The problem is, as the number of transactions in the block starts to grow, there are more collisions that happen, meaning there are more transactions that exist on the network at some point in time that have the same short ID, right? And currently what happens is, because there are many orders possible in the block, every time you have a collision you cannot reconstruct the block whatsoever.
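The growth of that collision risk can be sketched with the birthday bound. The 48-bit short-ID width below mirrors Compact-Blocks-style relay, the transaction counts are illustrative, and the real-world failure rates quoted earlier also include other effects (such as transactions a peer has not seen yet):

```python
# Rough sketch of how short-ID collisions grow with transaction count.
# 48-bit IDs mirror Compact-Blocks-style relay; counts are illustrative.
from math import exp

def p_any_collision(num_ids, id_bits=48):
    # Birthday bound: P(at least one colliding pair among num_ids IDs)
    return 1.0 - exp(-num_ids * (num_ids - 1) / (2.0 * 2.0 ** id_bits))

for num_ids in (10_000, 500_000, 5_000_000):   # txs known to the receiving node
    print(f"{num_ids:>9} known transactions -> P(collision) ~ {p_any_collision(num_ids):.3%}")
```

Without an agreed order, any such collision can force a fall-back to sending far more data; with a canonical order, the expected position of a transaction in the block disambiguates most collisions, which is the point made next.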

And so the bigger the block, the more transactions there are in the block, and the more likely there is to be some collision. So at some point collisions just become very, very likely and the protocol just doesn't work anymore. If you enforce a specific ordering on the transactions in the block, when you have a small ID at some position, even if you have several transactions that match this small ID, only some of them are gonna fit in the proper range, at this proper position, right? And the other transactions that may have the same short ID will have to go in some other position in the block. So having a canonical ordering ensures that you can disambiguate those collisions. And so the existing block propagation techniques can work up to significantly larger block sizes than they do right now. But also, as Jonathan mentioned, you can develop a whole new series of propagation techniques that are so much more efficient than what we have right now.

Thank you Amaury. Anyone else like to speak to this question? No? Sorry? I had something, but I forgot what it was. Okay. So moving on to the next question. I'll read it out for you. This question is open to everyone: As developers, how do you envision the current relationship between yourselves and miners evolving in the future? You are obviously contributing meaningful value to the ecosystem, albeit not contributing hash. How important is something like Mike Hearn's Lighthouse initiative with respect to making sure your incentives are aligned without destroying the competitive tension that protects the system?

So I'm not sure about the competitive tension. We don't have that much competitive tension with miners, right? We are operating in different markets. More generally, we want to talk with miners as much as possible, you know. What are the problems, what do they want to have happen? It's not always easy, because historically there's been a divide between developers and miners, especially after the Hong Kong agreement failed and all of that; the two communities kind of drifted apart. But we're trying to build as many bridges as possible. Also, maybe Jonathan can talk to this because he is a miner.

Okay. I've led off every other question, so I was trying to not do that this time. As many of you know, I am an industrial miner and I have been for about four years. I've also been active in development at various points, but I've been more commonly, or more consistently, mining than I have been developing. I hope to switch that. I hope to shut down my mining operation over the next two years or so, and get back to my other projects and whatever, do more development. Because I think that development is really what's going to determine the success or failure of Bitcoin Cash. I think that miners are just producing a commodity. They're just doing hashes very easily, very simply, on their machines, but the creativity mostly comes from the development community, and I think that miners should support developers financially. I think that they should set aside a certain percentage of their revenue, maybe one or two percent, which they spread amongst all of the active development communities, and give developers mostly free rein to think of whatever they think will help Bitcoin Cash the most. So yeah, I've done that.
I currently contribute to Bitcoin ABC financially, and I hope to start contributing to Bitcoin Unlimited and possibly XT soon. So I don't think there is any competitive nature between miners and developers; I think that it's an inherently cooperative relationship. Bitcoin Core did not communicate as well with miners as they could have, and I think they took on a dictatorial role, but I think that a better role is really bilateral cooperation: both parties give suggestions to each other about what they think is important and collaborate on creating a future Bitcoin Cash that affects the world.

Thank you Jonathan.

Yeah, I kind of agree with Jonathan there. To the question about Lighthouse: I mean, that was kind of what Mike Hearn originally created Lighthouse for. It was kind of for funding development, if I remember, but I don't think we've seen the ability of the community to contribute the type of money that it takes to finance development through crowdfunding. Developers aren't particularly cheap. So I kind of think at this point miners are probably the best source of funding for development. It would be nice if we could change it, but we haven't seen that yet.

And Jason, you wanted to speak to this?

Yeah, I just wanted to expand on it, saying that a fair number of miners contract developers. So there's a pretty strong relationship between them and some developers that they have hired. I'd actually like to see more miners doing this, because it gives a better channel for a lot of the, how do you call it, free-range developers, the ones that kind of work more in their own interest and are just getting donations from miners or other indirect methods. I would like to see more of these contract developers coming into the space so that we can communicate with them directly, and they would give kind of a better channel for communication with individual mining groups. You know, the big miners: Bitmain, bitcoin.com, we have Rawpool and such. We would like to talk to these developers more directly, and it would kind of improve communication across the board, and I think that's how you kind of reduce some of these strange incentives. Other than that, I don't really see mechanisms that need to be put in place to enforce good behaviour, because those already exist in Bitcoin.

Okay, anyone else on that question? I'll move on to the next one. If at some point you feel that these questions have been answered satisfactorily then just let me know and I'll move on to the following question. The next question is: Why is CTOR better than any of the other options? There are many non-enforced options that have been proposed. What are the downfalls of these non-enforced options?

I will take that one. So basically there are three possible paths that a client can take. One is the current method, which is a topological ordering, which we want to remove for scaling purposes. The other one would be that the client can accept any ordering. And then the third one is that there's some canonical order; in our case we have proposed lexicographical ordering. The problem with allowing any order to be accepted is that you have to actually program the client in such a way as to accept all those orders. If you do that, you lose all the benefits in block propagation and other things from having a canonical ordering. You can't, for example, fix the collision problem with Compact Blocks or Xthin when you accept any order, because you don't know what order the sender sent you the transactions in. The only way to get around that would be to have some kind of flag in the block that says: hey, this is a CTOR block produced by ABC, so now we can add these specialised algorithms to the block propagation and reconciliation process. But that's very undesirable for a variety of reasons. The clients would also need to support both of those options, basically. You get twice the bugs. You get much more complexity in the system in general, and considering any small change can fork the chain, this is not something where you want a bunch of complexity.

Okay. Anybody else like to respond to that question?
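For concreteness, this is roughly what the canonical lexicographic ordering in the third option means in code. It is a sketch under the assumption that the coinbase stays first and the remaining txids are sorted as byte strings:

```python
# Minimal sketch of canonical lexicographic ordering (LTOR) over txids.
# Assumptions: txids are 32-byte strings, the coinbase stays first, and the
# remaining transactions are sorted in ascending byte order.
from typing import List

def ltor_sort(txids: List[bytes]) -> List[bytes]:
    coinbase, rest = txids[0], txids[1:]
    return [coinbase] + sorted(rest)          # plain byte-wise sort

def ltor_check(txids: List[bytes]) -> bool:
    # The check is local: each transaction only needs to be compared with its
    # immediate neighbour, so it parallelises and needs no dependency lookups.
    rest = txids[1:]
    return all(a < b for a, b in zip(rest, rest[1:]))

# Toy example with shortened "txids"
block = [b"\xff" * 4, b"\x0b" * 4, b"\x03" * 4, b"\x9c" * 4]
canonical = ltor_sort(block)
assert ltor_check(canonical)
```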
Yeah, so I think the second best option, in terms of which CTOR we should use, is Gavin's order. Gavin Andresen, in, I don't know, 2014 or 2013 or something like that, made the first proposal for O(1), or constant-time, block propagation using IBLT, and in that proposal he specified a canonical ordering that would not violate the topological transaction ordering rule that we currently have. So it would be both canonical and consistent with existing rules, and so it could be a soft fork to enable that. And I think that that is a decent option. But ultimately it's slower than the lexicographic rule is for generating the sort, and also all the optimisations you can do with canonical ordering are just a bit harder to do with Gavin's order, Gavin's CTOR. I did an estimate a while ago, which is on Reddit and I can link it if anybody asks me, but it looks like sorting into Gavin's order would take around four hundred nanoseconds per comparison during the sorting, and you have n log(n) times that for each block, whereas sorting into lexicographical order takes around 5 to 20 nanoseconds per comparison, something like that. So it could be around 20 times as fast to sort into LTOR than into Gavin's rule CTOR. I don't know if that's a big enough effect to be a problem, but I also don't know why we should spend time looking into a second-best option if we know that it's second best. In terms of some other algorithms like Xthinner, LTOR is just inherently a lot easier to work with, easier to program for and easier to develop optimisations for, and also for UTXO inserts: the LTOR rule makes all of our inserts sequential, which could be a big advantage in terms of performance for the UTXO bottleneck at some point in the future. So yeah, there are some other options out there that are decent, but I think that they're not quite as good, or for some of the other options they are way worse, because you can't have one code path for everything. You have to have three or four, or maybe just two, code paths. And having more code paths means that neither code path gets optimised, and yeah, bugs! Bugs suck. We don't like those.

Thank you Jonathan. Anyone else like to address the question?

Yeah, maybe I'll give a comment too. Jonathan talking about it makes me think: if you look at all the different options, he's talking about Gavin's order, which preserves the topological order but then you also have a canonical rule on top of that, whereas the question was more about, I guess, well, why don't we just remove topological and allow any order, so that's kind of the opposite. So when you start picking and choosing these different options: if you're going to remove the topological order, that allows you the benefits of parallelisation and stuff, but then there is no reason not to have a lexicographic canonical order with that. The other options don't really fit well together. So I guess the way I see it, with just a pure lexicographic order you get basically the best of all worlds: you get the parallelisation benefits and the transmission benefits and the simplicity of the data structure. I guess my point is, there's a lot of these arguments where, if you focus only on one aspect, you say well, this other thing will also help with that aspect, but then when you consider everything together the lexicographic canonical order is just basically best on every metric. So yeah, that's just my overall take on it.

Thank you Antony. Anyone else want to speak to this? I'll move on to the next question then. Will any of the bottleneck problems discovered during the stress test be fixed with the November changes? Of those, are any of them protocol changes?
Actually, most of them are not consensus-related changes, and there have already been a few fixes made for some of them. The reason why we are focusing on consensus changes for November is that, firstly, November is the time to upgrade the consensus rules, so we want to upgrade consensus rules at that time. But also, as the ecosystem grows it becomes more expensive to change the consensus rules, because more people have to upgrade. So the rhythm at which we upgrade right now is not something that we can sustain long-term, and so we want to fix, as soon as possible, the consensus rules that prevent us from scaling really, really big. So as a result we're gonna focus on some of those changes in November; CTOR is one of them, we already talked about it a lot. But other limits that we noticed during the stress tests, such as how many transactions are forwarded on the network: there is a rate limitation on how many transactions are propagated on the network, for instance, and this one has been fixed already by Jonathan.

Yeah, INVENTORY_BROADCAST_MAX, which Nullc, Greg Maxwell, pointed out.

Yeah, and other ones are being worked on. But those other changes don't really require forks, so as soon as they're ready we are going to have a release. As soon as one of the changes is ready, we are going to have a release with that change in it.

Thank you Amaury. Anyone else like to address this question?

I'll just add a few things. Yeah, so the INVENTORY_BROADCAST_MAX thing was limiting Bitcoin ABC and SV to about 3 MB of transactions being uploaded every 10 minutes. So the only large blocks that we got, the 21.3 MB blocks, those happened after very long intervals. The 21 MB block happened after a longer-than-one-hour interval. Just because blocks are stochastic, they have random intervals; this just happened to be a long-interval block, and because it was a long-interval block it was big. So I fixed, or I was the one who submitted, the very simple, ridiculously simple patch to fix the INVENTORY_BROADCAST_MAX bug, and that actually is already live on the network. When the bug fix, the security fix for the inflation and crash bug we had, was pushed out, INVENTORY_BROADCAST_MAX per megabyte, the modified version, was already in that code. So yes, these fixes don't have to wait for November. We just need to make sure that we're always ahead of demand. And right now we are well ahead of demand, so we're focusing on the things that need to be done with a very long runway, a very long time, like CTOR. But we're also working on things like parallelising the mempool acceptance code, and getting better benchmarks in, and planning on how to deal with the UTXO access bottleneck.

Thank you Jonathan. Anyone else want to address it?

I wish to add just a small thing: at least for us, in order to have a strategy to scale and parallelise, we need to know what tools we have, so we can start moving in that direction once CTOR is activated. In terms of which techniques we're going to use for massive scaling, or to parallelise block validation in order to have a block validation time that is economically useful for miners, we need to know what we have. With the current ordering there are some restrictions. With CTOR we see a lot of potential, even if at the beginning there's no tangible improvement. It's a starting point where we can improve a lot. As soon as we have it activated, every implementation will start finding the best ways to take advantage of that. So we need the feature first, and then we can start making progress on top of that.

Thank you Juan. I will move on to the next question now. The next question comes across, I don't know, it says these are technical arguments against CTOR and then, in brackets, "from Andrew Stone". So I don't know if this is directly from him, but I'm going to present them one at a time here. So: Graphene can accept multiple types of ordering. Not forcing a certain ordering now keeps the possibility of continuing to explore orderings that would help scaling, and Graphene can just be updated to use that when the time comes. Anybody want to address that?

Yeah, so this one has already been addressed. If you want to do that you essentially need to have two code paths, right?
You have one code path that is like: if it's canonically ordered, then do the optimised stuff for canonical ordering, and if not, then you do the other stuff. So we end up having twice as much code and twice as many bugs. One of those code paths is likely to be exercised very little, which means there is a high likelihood that there are bugs that lurk in there for a long time. Generally that's not the path you want to go down. Adding a bunch of complexity in there is not the path you want to go down.

Okay, thank you Amaury. Shae?

Yeah. The other part of that question sort of assumes that there is maybe some other ordering that exists that could be more beneficial than using a lexicographical ordering. I don't really think that that bears out. What ordering you use isn't as important as the fact that you use a canonical ordering, or you know, a total ordering for the block, and any of those total orderings for a block would be pretty much equally as good, unless there's some extra overhead calculation; for example, with Gavin's canonical ordering you still need to calculate the descendants in order to do the sort. So yeah, in short, I don't think anyone will ever find a better ordering. There's probably a way to prove that mathematically. I don't know if any other panelists want to comment on that, but I don't… yeah.

Yeah, essentially any ordering that is cheap to compute fits the bill. Right now we order them by transaction ID in an ascending manner, but we could order them by transaction ID in a descending manner; that would be just the opposite order and that would provide the exact same benefits. As long as you have some order that you can compute locally, meaning from the transaction and its immediate neighbour you can compute whether the order is correct or not, as long as you can do that, you get all the benefits, whatever the rule is. (…inaudible…) And no matter what that rule is, it is going to be just as good.

Using a transaction ID for your sorting is a lot faster than using anything that requires deep inspection of the transaction, because it reduces in particular the number of pointer lookups, or pointer dereferences, that you have to do. And pointer dereferences are fairly slow, because you have to go to main memory, and you just don't know which part of main memory you're going to go to. With most of the alternative proposals you have to do more than one pointer dereference in order to do each comparison. So that's the main reason that the other options are likely to not be faster or better. With LTOR, with using TXIDs, you are just able to use an array of just TXIDs. Just 32 bytes per transaction. Very dense. It's very regular. It's all in a vector or an array, and so memory accesses just get pipelined and it's just extremely efficient. So yeah, there are also some… The question I think also touched on whether it has to be mandatory?
It would be better, Andrew Stone says, or a few people say, if we could just get the benefit without making it mandatory, because that way we don't get locked into this one system. And that's a little bit of a red herring, because we're not getting locked into this one system. The code does not have to be changed to be able to verify each block type. If we want to change the rule later to be, let's just say, sorting in descending order, we would just flip the order. Then we can just say that after this certain block we also check that it's in this particular order, and before this certain block we don't check the order at all. That is perfectly valid for creating the UTXO database, which is all you really need to do from the blocks. You don't need to verify things that came before a checkpoint that gets hard-coded, unless you want to be really paranoid about somebody creating an alternate history in which they have exactly the same amount of proof of work or more, but which forked off, like, five years in the past and invalidates one particular block way back, and that's just an absurd scenario. So yeah, I think that we're not going to get locked into this, but there's not going to be anything that's better. So why not just focus on doing one thing really well, and then we can move on to the next thing; we can then focus on doing something else really well.

Thank you Jonathan.

One quick thing. These kinds of techniques are used by Ethereum, they're used by Ripple, they're used by almost all the coins that are not really derived from Bitcoin. It's pretty much the de facto standard in the industry by now. It's really well researched and extremely well understood.

Okay, thank you Amaury. We have a couple of other points, I'm not sure they are actual questions, that were brought forward from Andrew Stone, so I'm just going to go through them quickly. Just two more points. The sharding proposed by ABC doesn't work for light wallets, SPV and mobile wallets, which really defeats the whole purpose of Bitcoin Cash when the focus is on the majority of users using light wallets. Hence a completely different sharding approach would need to be taken to solve this requirement. Are there any comments on that?

Yeah, that's not a question, it's more of an assertion, and it's totally false. The sharding proposal that I described in the article I wrote is actually a trusted sharding proposal. It basically allows miners and exchanges and anyone else who wants to run a node to horizontally scale the creation and validation of blocks and mempool acceptance. It has nothing to do with the light wallet protocol whatsoever.

Yeah, that argument in the document is very similar to saying: you cannot run a train on that road. Well, it's a road! So this whole stuff is there to shard block generation, propagation and validation, right? If you want support for SPV wallets, you need to have another infrastructure that's gonna do an index per address, essentially. What the server for SPV wallets needs to do is: you have your SPV wallet, you have a set of addresses in there that are your addresses for your coins. You're gonna send that to the server in some manner, and the server is going to be able to retrieve the UTXOs that correspond to those addresses and send them to you. This is a very classic key-value lookup. It scales very well; in the computer science domain it is essentially a solved problem, we know how to scale key-value stores to very, very big sizes. The whole stuff about CTOR is about scaling block validation, because block validation doesn't scale; address lookup, that's kind of obvious, it's just two different things. So the whole thing is a complete red herring. It doesn't compute.

I can add, I have a thought on that also. I guess the way I see it is, a node has to see every transaction, so you need to be able to scale and parallelise everything. For SPV wallets, let's say there's some limit, like a node can only support ten thousand SPV wallets, and you run into some limit where you can't scale beyond that; well, you can add more nodes that support other wallets. So the number of SPV wallets per node isn't really a thing that will limit the growth of Bitcoin Cash, because you can just add more nodes to support more wallets. So I don't think optimising the number of SPV wallets per node is a real limit.

We can also optimise the number of SPV wallets per node a lot past ten thousand. SPV Merkle block requests work as a bloom filter, and you can just add more requests into a single bloom filter and check the entire block simultaneously for a hundred thousand wallets in the same pass.

So yeah, I just chose a random number off the top of my head.
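A minimal sketch of the separate address-index service Amaury describes above; the in-memory dictionary simply stands in for whatever horizontally scalable key-value store an SPV server would actually use, and the data shapes are illustrative assumptions:

```python
# Sketch: serving SPV wallets is an address -> UTXO key-value lookup, a
# separate (and well-understood) problem from block validation.
from collections import defaultdict
from typing import Dict, List, Tuple

Outpoint = Tuple[str, int]          # (txid, output index)

class AddressIndex:
    def __init__(self) -> None:
        self._by_address: Dict[str, List[Outpoint]] = defaultdict(list)

    def add_utxo(self, address: str, outpoint: Outpoint) -> None:
        self._by_address[address].append(outpoint)

    def spend_utxo(self, address: str, outpoint: Outpoint) -> None:
        self._by_address[address].remove(outpoint)

    def lookup(self, addresses: List[str]) -> List[Outpoint]:
        # What an SPV server does for a wallet: return the UTXOs for the
        # wallet's addresses; the wallet verifies inclusion proofs itself.
        return [o for a in addresses for o in self._by_address.get(a, [])]

index = AddressIndex()
index.add_utxo("bitcoincash:qexampleaddress0", ("aa" * 32, 0))
print(index.lookup(["bitcoincash:qexampleaddress0"]))
```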
But yeah, I agree. So I am thinking that… (I am taking the floor.) I think that maybe Andrew Stone's objection was that you don't know whom to send your SPV Merkle block requests to if it is sharded. But I think that that misunderstands the nature of the sharding protocol, or the proposal. In this sharding proposal you still have one node that deals with the entire block and that deals with the entire UTXO set; it's just that one node is now comprised of ten different computers. So the sharding happens behind the network layer as a back end, and the interface is still going to be, or will likely still be, the same.

It needs to be pointed out that load balancing is a very well solved problem. Every single company that you know the name of does this, especially Google, Facebook, Amazon, etcetera. Load balancing I wouldn't even consider to be a difficult problem to solve at this point.

All right, and with that we just had one more statement that came forward from Andrew Stone. I'm just gonna read it out to you: ABC's proposal ignores validation of transaction inputs. Andrew Stone concludes that if this sharding scheme can handle the validation routing load, it can handle the top-level routing of unsorted transactions; CTOR is therefore not necessary.

Yeah, I'll take that one. That's not true. The proposal actually explicitly calls out validating transaction inputs and how it could be done.

The second part of this, "if the sharding scheme can handle the validation and routing load it can handle the top level…" okay, yeah, the question I was reading disappeared: "…you can handle the top-level routing of unsorted transactions." When you shard the transactions across a lot of nodes and you need to do validation of a transaction input, let's say the transaction you're trying to validate is chained with another transaction that is in the block. So you have that transaction on shard A and the input is over on shard B or C. In order to do that validation, the node handling shard A needs to be able to tell which shard the input would be in. When you have basically an unordered set of transactions you need some other kind of index to be able to look up what shard the input would be in, and maintaining that index is actually quite expensive. So CTOR actually makes this process faster by allowing a simple computation. Basically: is the input transaction ID within this range, then go to this other shard and look for the input. So it actually improves that portion of validation quite a bit over an any-order proposal.

The assertion that it can handle top-level routing? Yeah, you might be able to scale that. The original CTOR paper talks about the scaling properties of that; it's actually significantly worse, and it may actually top out before you can get to very large blocks. But more to the point, if we want to get to planetary scale for this, where seven billion people are using this, then there's no room for waste in the protocol, especially when it's not necessary. Why would we go with another ordering proposal when we already know that it's slower? That's just gonna reduce the total number of users that can use Bitcoin Cash.

Thank you Shammah. Jonathan, did you want to add one more thing?

Yeah. Well, my opinion on this particular issue is actually a lot closer to Andrew Stone's than to Shammah's. I think there are some advantages for sharding from LTOR, but they are not that big. My guess is that they're around twenty or thirty percent, somewhere in that range, and is that worth making a big fuss about? Not really. But the sharding benefit has always been a side benefit of LTOR. The main benefits are block propagation, and that's always been the big bottleneck so far. Block validation is already about twenty times as fast as block propagation is. So yeah, I mean LTOR is definitely not worse for sharding; it's probably a decent chunk faster. It's probably not worth making a big deal about.

Alright, thank you Jonathan.

Can I add something real quick too?

Sure! Antony, and then Shae, and then we will move on.

I just wanted to point out, this is kind of what I was mentioning before: if you look at this objection and then the previous objection about Graphene, that you could use some other canonical order, those two objections contradict each other. This objection is saying you could use any order and not have a canonical order and it would be just as good, whereas the previous objection said you could have some other canonical order that preserves the topological ordering. So those two things are mutually incompatible. You can kind of go one way or the other, but with LTOR you get the best of both worlds.
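A minimal sketch of the range-based routing Shammah describes above: with a canonical txid ordering, the shard that holds an input can be computed directly from the txid, with no extra index. The shard count here is an illustrative assumption:

```python
# Minimal sketch of range-based shard routing under a canonical txid order.
# The shard responsible for a transaction (and hence for looking up one of
# its inputs) is computed directly from the txid; no separate index needed.
NUM_SHARDS = 16

def shard_for_txid(txid: bytes, num_shards: int = NUM_SHARDS) -> int:
    # Partition the 256-bit txid space into contiguous ranges: the first
    # byte alone is enough to pick one of up to 256 shards.
    return txid[0] * num_shards // 256

def route_input_lookup(input_txid: bytes) -> int:
    # A worker validating a transaction asks the shard whose range covers
    # the input's txid whether that output exists and is unspent.
    return shard_for_txid(input_txid)

example_input = bytes.fromhex("9c" + "00" * 31)
print(f"input 9c... lives on shard {route_input_lookup(example_input)}")
```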
Thank you Antony. Shae, one more thing?

The other thing is, with respect to any order and the validation of transaction inputs: that doesn't scale linearly. It actually scales super-linearly, more than linearly, which means adding more machines doesn't necessarily get you the same performance benefit as it normally would under LTOR. So while it may be twenty or thirty percent right now, what will it be when you have blocks that are a terabyte big?

Thank you Shae. I think we'll move on to the next question. This does not have anything to do with Andrew Stone. The question is: is the 100-byte limit a possible danger? Miners losing blocks due to short coinbases?

Who would like to take this question?

So that is possible. I mean, a miner can always lose a block by not respecting any of the consensus rules, so there is nothing specific about that rule that would make it the case. What we see in practice is that coinbases are bigger than this, because miners… So the coinbase is like a special transaction that has a special input that comes from nowhere; it spends coins that don't exist. It's how the miner creates new coins when they find a block. And this input that comes from nowhere also has a signature script, part of which is where the miner can put a ton of data. If you go on a block explorer or various tools, you see messages that are in the block; they are in the signature of the coinbase. And typically, because miners put some message in there, the coinbase is actually bigger than 100 bytes.

And miners can just use that space. If they know that there's a rule saying that you have to have a 100-byte or greater coinbase transaction, then they can just stick up to 100 bytes into that scriptSig. That's the size of it. So it's really trivial to make sure that the miner will not have that problem. And if they do violate that rule, that's their own damn fault.

Thank you Jonathan. Any other comments on this question? Hearing none, I'll move on to the next question: Could you describe the security testing that has been done for the November ABC changes?

Yes, I can talk to that. First, any change that we make, especially to the consensus rules, comes with unit tests and integration tests that we run on every single modification of the code. So all the older consensus changes are essentially covered: every time we change anything in the code we verify that those consensus rules are still applied and respected properly by the code base. In addition to that there is a test net, which has been running for about a month now, where the new rules are activated. So if you want to play with them, poke around to see if it works, you can do so. You can find the parameters of the test net in various places, but most importantly on the front page of bitcoincash.org. And maybe Guillermo can talk a bit more about the test net, because he has been more involved with that.

Yes. For every hard fork, not only this one, we set up the test net. We test the activation point and set up some miners to check that they have a block template and that RPC calls are working fine. So yes, like Amaury says, this was done about a month ago and the test net is currently being mined by us, and anyone can join. We made sure that the test net is forked from the original test net, the original Bitcoin Cash test net. When everything was working fine, we found a couple of minor bugs that were fixed in the next version, I believe 0.18.1. So from that point on everything is working fine.

Hey Guillermo, are there CheckDataSig transactions on that test net? Do you know if anyone made them?

I know that in the testing channel of the ABC Slack some people were talking about that. Me personally, I didn't send any CheckDataSig transaction. Mark Lundeberg has. So yes, there are some Op_CDS transactions on test net.

Okay, cool. Thank you. Any other comments about this question?
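As a back-of-envelope sketch of the earlier 100-byte coinbase point (the byte counts are the usual serialization sizes for a one-input, one-output transaction and are used here only as an approximation, not an exact consensus calculation):

```python
# Rough sketch: a miner can always satisfy a minimum transaction size by
# padding the coinbase scriptSig. Byte counts are approximate.
MIN_TX_SIZE = 100

def coinbase_size(script_sig_len: int, output_script_len: int = 25) -> int:
    return (4            # version
            + 1          # input count
            + 32 + 4     # "null" outpoint (txid + index)
            + 1 + script_sig_len
            + 4          # sequence
            + 1          # output count
            + 8 + 1 + output_script_len
            + 4)         # locktime

def padding_needed(script_sig_len: int) -> int:
    return max(0, MIN_TX_SIZE - coinbase_size(script_sig_len))

# A coinbase whose scriptSig only holds the block height (~4 bytes) is below
# 100 bytes; a short miner tag closes the gap.
print(coinbase_size(4), "bytes ->", padding_needed(4), "bytes of padding needed")
```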
Jason!

So I want to add a little bit, because Amaury mentioned test coverage. We noticed a number of parts of the code that were not well covered. I personally spent over a month just writing tests to cover critical parts of the code. This is something we're always looking into, and it wasn't necessarily related to the November changes specifically, but when we identify these portions of the code that are not well covered or not well understood, we immediately go and write tests. And that's not just part of the review process for when we make changes, it's also just part of our regular review.

Thank you Jason. Any further comments? Moving on to the next question then.

The question is: How come Graphene is not yet working in Bitcoin ABC while it's already working in BU? (…inaudible…) One second, one at a time. Amaury, do you want to take this first then?

No, no, Jonathan can go. He has more detailed knowledge.

Okay. Jonathan.

Just saying that it's working in Bitcoin Unlimited is a bit of an exaggeration. There is an alpha version. That is a feature-complete version; it can transmit blocks, but the current code has, in one user's tests, a success rate of 41 percent, a failure rate of 59 percent, over a two-day interval using regular-size, 100 KB-ish Bitcoin Cash blocks. So Graphene still needs a lot of optimization. The current Graphene code also has to encode the transaction ordering based on what the actual block order is, instead of using some canonical rule, and that adds a factor of roughly seven to the Graphene encoding size. It also adds a lot of code that will be completely unnecessary after the November hard fork. So yeah, as far as I understand it, the Bitcoin ABC approach to this has been: let's do CTOR first because it'll make the code simpler. That way we don't have to write code that is only active for half a month or a month or something like that. And that seems reasonable. Also, we want to see the protocol, or at least some of the research on it, get a little bit better developed, so we know how to tune the parameters in order to not get a 41 percent success rate. Because a 41 percent success rate doesn't really help that much; it actually might even hurt overall, because you try one algorithm first, and then if it fails you have to fall back to the other algorithm, so there's additional latency there. So yeah, I mean we definitely want to see Graphene implemented in all options. It is by far the most efficient proposal when it works. But let's not get ahead of ourselves; it still needs a lot of work.

Thank you Jonathan. Jason! I'm sorry, Shammah first.

Yeah, also, I mean, part of this question just has to do with the development philosophies of BU and ABC. From our perspective we want to be very conservative: try to implement as few code changes as possible, make sure that everything is basically fully specified before we implement it. Right now the Graphene spec, unless something has happened since I last looked at it, actually is not completed; there is still commentary going on on it. Now BU and Andrew Stone have a much more liberal philosophy of go ahead and get things out there and kind of experiment with them, which is good but also has its own drawbacks.

Jason!

I was gonna say something very similar. I actually really like that BU is experimenting with this. We've been reviewing the Graphene spec from time to time and helping them with improvements. But until the spec is finalized I don't think ABC will be building it.

Does sound like you're building in the background though, Jason. Sorry about that.

Yeah. Notably there are a few aspects of the spec that would, I think, require improvement. One, Jonathan hinted at it, is that you need to pick various numbers when you transmit a block with Graphene. Picking those numbers right now doesn't seem to be working very well, because the failure rate is pretty high. So there is that. There is a lot of complexity that is added because there is no canonical ordering right now, so you can simplify Graphene a lot by removing the current ordering rules and moving to canonical ordering. There are also various other things.
So for instance there is no, what is called, "high-bandwidth mode". When you transmit the block, you want the transmission to happen as fast as possible, right? So you want the amount of data to transmit to be small, but you also want to limit the number of round trips that you do. And if you waste a bit of data, if you send a bit more data than you would otherwise but win a round trip in the process, then you're probably coming out ahead. This is what Compact Blocks does, for instance. When you use Compact Blocks your node is going to select some peers and put them in high-bandwidth mode, where they send you the block directly without a round trip.
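A back-of-envelope comparison of the round-trip cost being discussed; the bandwidth, latency and encoding sizes are illustrative assumptions, not measurements:

```python
# Back-of-envelope: once a block encoding is small, an extra request/response
# round trip can cost more than sending the data itself.
def transfer_ms(size_bytes: float, bandwidth_mbps: float) -> float:
    return size_bytes * 8 / (bandwidth_mbps * 1_000_000) * 1000

rtt_ms = 150.0            # e.g. an intercontinental link (assumed)
bandwidth_mbps = 50.0     # assumed link speed

for name, size in [("full 8 MB block", 8_000_000),
                   ("Compact Block sketch", 50_000),
                   ("Graphene sketch", 5_000)]:
    t = transfer_ms(size, bandwidth_mbps)
    print(f"{name:22s}: transfer ~{t:8.1f} ms vs one extra RTT {rtt_ms} ms")
# For the small encodings the extra round trip dwarfs the transfer time,
# which is why a push-style "high-bandwidth" mode matters.
```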

Right now there is no way to do Graphene without a round trip, and considering how small Graphene blocks are, the round trip is actually probably the sore point of it. There is no randomisation on the short IDs, which means that if you get a collision for some reason, the block is impossible to send through Graphene, because the same collision is going to happen in every Graphene block that you can build. So you need randomisation of short IDs. And there are a few other points like that that need to be improved in the current spec. So it's not quite where it should be. Though it's great that it's live somehow and we can get data from it; this part is absolutely great. But I think it's going to need to go through a few iterations before we can consider that Graphene is production-ready.

Thank you Amaury. Any further comment on that question? Hearing none, I will move on to the next question, which is: What is ABC's proposal to improve the mempool acceptance code?

I'm working on that. It's in progress. So far I have code that improves performance on a four-core machine by almost three times. The change that I'm making is pretty conservative, or relatively narrow. It's a lot less sweeping and general than the changes that Andrew Stone made in his Bitcoin Unlimited GigaPerf branch, but it seems to be getting most of the benefits. Currently I can run a full node on main net with this code for about six hours before it crashes, so I still have some work to do before I submit the code. But I think once I find the last few bugs that I know about in it, I'm going to submit a version to ABC with a few other bugs inserted and make sure they can catch those bugs before we commit it.

Is there a bounty? Are you setting up a bounty for that as well, Jonathan?

I could do that too, yeah.

No, I am joking. Pardon me.

No, this is an idea that I've wanted to do for a while. I mean, I am a miner, I do have the resources to be able to set bounties out for the things that contribute. So yeah, I'm willing to do that. It won't be huge but…

I apologize, that was my attempt at humour. Does anybody else want to speak to ABC's proposal to improve the mempool acceptance code?

Yeah, so what Jonathan is doing is the short-to-medium-term plan. The longer-term plan is really to rewrite a bunch of that code, because it's very crufty. But it's also very, very tedious code to rewrite; it's both performance-critical and consensus-critical. So we are going to do it, but very slowly on that front, so don't expect something shortly there. However, I expect the work of Jonathan hopefully to be released in the next few months.

Excellent.

Probably less. Yes, I don't know. It's up to Jonathan.

No, I think a few months makes sense. This I think needs a good review, so it's not going to be rushed.

I've got another question regarding CTOR. I'm going to get it out there. In… no, let me see if I can read this properly. In-the-field observation by Tom Zander shows the current CTOR implementation suffers from a reduction in validation speed. Your thoughts on this?
No, that is not an in-the-field observation. That is Tom Zander noticing that he doesn't know how to code CTOR, the outputs-then-inputs (OTI) algorithm, into his particular Bitcoin implementation called Flowee, which nobody currently uses. And that doesn't mean that it's not possible; it just means that he either hasn't tried hard enough or he doesn't know how to do it. I have looked at it and I think that I know how to implement OTI in his code, and I think that it will just have the same one-extra-loop overhead that it has for ABC. And I did do in-the-field performance testing on ABC's code base with the OTI algorithm versus the traditional algorithm. And I noticed that yes, it is actually slower to do OTI validation, which can work with any ordering, than it is to use the current rule. It's slower by 0.5%. And that's pretty much what we would expect, given that the overhead for iterating through a vector is about 0.8 nanoseconds, compared to around 200 nanoseconds for the pointer dereferences to do the lookups and so forth, a few hundred nanoseconds for the hash table lookup. So I mean, I don't think he's talking about 0.5%. I think that he's talking about the difference between the parallel validation that he already has, which looks up multiple blocks in parallel, versus the fact that that is not present in ABC's code base, because we're trying to do something a little bit more rigorous and reliable. And I think that's a red herring.

Thank you.

So I would like to add on this. First, I noticed the same numbers as Jonathan, so it is slower, but by a negligible amount. That being said, it's actually fairly common, when you start implementing algorithms that parallelise, that on single-threaded performance or small amounts of data they actually run slower. And usually the performance hit that you notice is significantly worse than what we see with OTI; having performance hits of more than 10% or 20% is actually not that uncommon when you move from a single-threaded implementation to a parallel implementation. Here is the deal though: if you want to scale, you need the parallel implementation, because one single core on one single machine can only get so fast. Essentially, single-core performance on CPUs over the past 10 years has become slower, not faster. And the reason is that CPU manufacturers optimise a single core to be more energy efficient so they can put more cores on the same chip and still dissipate the heat. This is the direction that CPU manufacturers have been going in for the last decade: they put more cores in their chips, and this is how they make the chip more powerful. And when you move from a single-threaded algorithm to an algorithm that can exploit all those cores, initially you take a hit in performance. The fact that we take only 0.5 percent is actually kind of miraculous. But then the advantage is that if you want to make it faster, if you have more data to process, well, you buy a CPU with more cores. And eventually we have so much data that there is no CPU with more cores, so you buy two servers, and then three servers, and so on, right? So once you move to that type of algorithm you've changed scaling from a technical problem, how fast can I make this one core go, to an economic problem, how many machines am I willing to buy to process this amount of data. And this is where we want to be, right? If we want to scale big, we know that one core is not enough, and so we need something that scales horizontally.

Thank you Amaury. Nothing further? I'll go on to the next question. I am curious as to the six-month upgrade process. What process is taken to decide what changes are added to an upgrade?
This leads to a potential concern that certain changes might be forced in a package upgrade, with rider proposals that must be accepted to adopt more important changes. Yes, so I can talk to that. We actually have meetings with the other implementations every two weeks, at which those kinds of matters are decided. However, what seems to be happening in practice is that people decide something in the meeting and then they go and do something completely different. So this is a bit of a problem. We are discussing internally in ABC right now to see how we want to change things, because the current way of doing things is not tenable. But yeah. Jason. Also to add: one of the things is that since Bitcoin Cash is still relatively young, we are seeing these batches of changes going in, but we don't actually expect to see that as much in the future. Ideally, whenever we have a scheduled hard fork there would be one, maybe two major changes, to consensus rules let's say, but we don't expect to see these large packages of changes going in. That's not ideal. We would like people to all agree and then go: hey, we're gonna upgrade for this one specific thing. And then everything else that is not consensus related can go into other releases. That's how I hope to see things going forward. But since we are working towards the November fork with a lot of these changes, inevitably they were batched. And everything in the November fork was uncontroversial until like one or two months ago. And so it caught us all by surprise when it started sort of coming out of the woodwork and people were saying: "hey wait a minute, I am not comfortable with this." We need to split this out and separate these. If people had raised those concerns earlier it would have been a lot easier to come up with a non-packaged fork. But the code was already almost done, or basically done, by the time those concerns were raised. That made it a lot more difficult. Yeah, I really like the idea of using BIP9 and BIP135 for voting, for Bitcoin, like BTC, but unfortunately, because Bitcoin Cash is a minority fork, it is really hard to use miner voting as a proxy for user support. Because somebody who just believes really strongly in the issue can take all their hash rate off of BTC and direct it to BCH just to block that vote. ViaBTC for example has, let's just say, 8% of the Bitcoin Cash hash rate, but they have about six exahashes of total hash rate, compared to about 3.5 exahashes of total BCH hash rate. So they could take half of their BTC hash rate and mine 100% of the blocks on Bitcoin Cash if they wanted to.
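To make that hash rate arithmetic concrete, here is a back-of-the-envelope calculation using the approximate figures quoted above; the numbers are illustrative, not precise measurements.

# Rough illustration of why miner voting is easy to swing on a minority
# chain, using the approximate figures quoted above.

viabtc_total = 6.0                  # EH/s, ViaBTC's total hash rate (approx.)
bch_total = 3.5                     # EH/s, total Bitcoin Cash hash rate (approx.)
viabtc_on_bch = 0.08 * bch_total    # ~8% of BCH hash rate today

moved = (viabtc_total - viabtc_on_bch) / 2   # half of their BTC-side hash rate
new_bch_total = bch_total + moved
viabtc_share = (viabtc_on_bch + moved) / new_bch_total

print(f"ViaBTC's share of BCH hash rate after the move: {viabtc_share:.0%}")
# Roughly half of all hash rate on the chain: enough to dominate any
# miner vote for as long as the vote lasts, and then move back to BTC
# and leave everyone else with the result.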
So yeah, we can't trust those kinds of miner signals as long as we're a minority fork. So if somebody has a good, robust proposal for how to do piecewise approval voting of forks, I think we're all interested in hearing it. But we can't do miner voting, unfortunately! Next in order would be Shammah, and then Amaury. Yeah, the other thing that I kind of wanted to point out with this is that this question sort of assumes that six-month hard forks will continue forever, and that there will always be a need for adding to or upgrading the protocol. While we can't possibly foresee all the possible use cases in the future and say that nothing will ever change, like Amaury said before, changes get progressively more expensive as use increases. And at least from ABC's perspective, as far as I've talked to other developers on the team, there isn't an endless need for changes. What we would like to do is get to the point where the protocol can be scaled horizontally across machines, and at that point there's not a lot of impetus for changing the protocol, because now you have this economic way of scaling where you just buy more machines rather than continually needing to adjust the protocol. Even for opcodes, it's hard to say: once you have a full suite of basic operations, including Op_Mul, Op_Invert and all these other items, you have an essentially complete language and you can compute any values using those. There may be some use for specialised opcodes like DataSigVerify, but those would have to be argued with a strong case in the future. Thank you Shammah. And Amaury, did you want to make a comment?
Yeah, so Jonathan touched on it, so I would like to come back to the timeline a bit. The changes that are going out in this current upgrade were put on the table a long time ago. Actually, you can go on the websites: you can go on the ABC website, you can go on the nChain website, and Bitprim also made an announcement about it. Various people that work on the protocol made announcements about those changes as far back as last November and December, actually, and you can find those documents on the websites of those various people working on the protocol. So the work on those has been going on for what, like a year? It's been decided since pretty much the beginning of Bitcoin Cash that the freeze for code and features is three months before the fork, because the ecosystem needs to be able to test the changes and upgrade.

We need to have enough time to figure out if there is a bug in there, and be able to react in a way that is not a last-minute scramble. So that puts the deadline on August 15, right? So up to August 15 everybody had the same roadmap and everybody was working toward that. What happened is that SV, Bitcoin SV, was announced on August 16. Right, so after the freeze date. No concerns were particularly raised before that. Which essentially puts them in an impossible situation to be compatible with anyone. Today there is still no release of the software, there is no public testnet, there is no mining pool. All that stuff is like "coming soon". And so what I notice here is that Bitcoin SV has been very good at generating a lot of noise and a lot of confusion, which is extremely bad for Bitcoin Cash, but they have not delivered anything except confusion, essentially, so far. Yeah, their timing seems like it was almost intended to cause a chain split. It seems like….. Yeah, it's very confusing. So yeah, what happened after that is that people started being doubtful and being like: okay, maybe we need to implement both proposals, for instance. This is what BU proposes to do: they are going to implement both change sets. But all this confusion comes after August 16, when SV announced their client. And now it is up to the community to decide if we want to essentially allow someone to throw a wrench into the whole process and derail everything, or not. But I think this is extremely damaging for Bitcoin Cash. Because the market….. Like, this is money, and the last thing you want your money to be is completely unpredictable. Right? So it's very important to have a roadmap way ahead of time and a very clear timeline, and to stick to those. So right now would be the good time to discuss what's gonna happen in May next year, so that by the time we get anywhere close to May it's very clear to everybody what's happening. If I had to give advice to people, and actually we want feedback from people: what do we need to do in May?
So far we think it's good to include the set of opcodes that SV is proposing, because we have time to review them between now and May. It was way too short notice the way it was done for November, but we can probably do them in May. But besides that, I'd be very curious to know what people would want to do. Thank you Amaury. I just want to remind people that we are reaching an hour and a half into this event today, and I'm conscious of people's needs to take breaks and so on. So I'm going to try and wrap things up within the next half hour, just to be fair to everyone that is participating. I do want to let you know that that's usually the timeline that we give when we're organizing and coaching the development groups through the Bitcoin Cash development meetings that happen bi-weekly. So we just have sort of a patience level, and obviously people have biological needs as well. So having said that, we'll go for at least another half hour, and then I'll check in with the panel and see if they're okay with continuing. The next question that has come up is: What future steps are going to be taken to assure there will always be efficient compatibility between the different BCH node software? If ABC's implementation of Graphene would not work with the BU implementation and the XT implementation, the optimizations are not going to be as effective as they can be. For instance, on Ethereum, Geth and Parity both have a light mode setting that does not require the entire chain. Geth nodes with light mode on don't serve Parity nodes with light mode on, and the other way around, last time I checked. I apologise if I pronounce things related to Ethereum incorrectly, I'm not aware of too much with Ethereum. So if somebody wants to answer that question, please feel free. Yeah, so we have hinted about that a bit.

We have rounds of review on the Graphene spec that is being worked on for BU. At least the last time I reviewed it, it was not a complete spec, but I still provided various comments and various reviews. If the spec gets to a point where it reaches a good level of quality, then ABC is going to do its best to make something that is compatible with it. Absolutely. This is why we spend some time reviewing it, and we are going to continue to do so. Yeah, if we can reach a spec that is good quality, and we are committed on our side to put in the time to provide the required feedback, you can be sure that ABC is going to implement it, as per the spec. It's really important for multi-client protocols to have a good, solid, formal specification that fully specifies all the elements that implementations need to follow for the protocol. And it's often really hard to write a spec without having written an implementation, so often people will write a test or example implementation in parallel with the spec. And that's what BU is doing right now. And that's totally cool. But that doesn't scale. You can only have one sample implementation being written in parallel with the spec. And before XT, or SV, or ABC implements Graphene, we have to have a nailed-down spec so that we know how to interoperate between clients. Who did you say is doing that, Jonathan? Bitcoin Unlimited. Bitcoin Unlimited is actively developing a test implementation. Just for Graphene though, right? Not for… Yeah, they are cooperating with the University of Massachusetts, if I am not mistaken. And so there are people like George Bissias from the University of Massachusetts that are working on that, and they are working with BU to get it implemented in BU and get the spec out there. Okay. Any further comments on that? Move on to the next question. This question came in by email, so I've just typed it in. Do you think zero-conf as currently implemented is too insecure for real-world usage? If so, what changes do you think need to be made to ensure zero-conf reliability? I can speak to that. It really depends on your risk profile, right? So first, if you are some online website for instance, you don't really care about zero-conf. Settling in ten minutes is plenty enough, because you are not gonna ship something within ten minutes, so you can cancel the order if the payment doesn't arrive. Face-to-face payment of small amounts is also probably okay. But as the amount gets higher it becomes less okay, and how high you can go really depends on your risk tolerance. That being said, there is no reason not to improve zero-conf if we have the opportunity to improve it. It would be very bizarre to say, okay, it's not 100% secure but it's secure enough so we're never gonna improve it. That's not a very good attitude. Especially since I think many people in the BCH community tend to overestimate how secure zero-conf is. Right now it's possible to double spend zero-conf with a reliability of somewhere between 20 and 25 percent. So it's not like a 100 percent chance of doing it, but within 10 minutes you can double spend with a probability of 20, 25 percent if you use the right techniques. So someone is probably not going to do it when they buy a sandwich for a few Euros, right? But someone may definitely do that if they buy a TV at some electronics shop for a few hundred Euros.
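A rough way to see why the acceptable amount depends on risk tolerance is to treat the double spend as a bet with the success probability quoted above; the figures below are only the ones mentioned here, not measurements of any particular merchant.

# Back-of-the-envelope expected value of a zero-conf double spend,
# using the roughly 20-25% success rate quoted above.

def attacker_expected_gain(item_value, success_prob):
    """On success the attacker keeps both the goods and the coins;
    on failure the payment simply goes through as intended."""
    return success_prob * item_value

for item, value in [("sandwich", 5.0), ("television", 500.0)]:
    gain = attacker_expected_gain(value, success_prob=0.22)
    print(f"{item}: expected gain per attempt ~ {gain:.2f} EUR")
# About 1 EUR per attempt for the sandwich, about 110 EUR for the
# television, which is why higher-value face-to-face payments call for
# more caution or for waiting for a confirmation.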
So I think this number needs to be worked on. And especially since it has implications for scaling, which people may not realize are actually intertwined. But if you can decide what goes in the next block or not with a high degree of certainty before the block is found, then you can leverage that partial information about what the next block is gonna look like to speed up block propagation and validation. So if our nodes can communicate and say: "okay, this transaction is correct, this transaction is not correct", and come to an agreement on that, then each of our nodes is gonna have a very similar set of transactions ready to go in the next block. So there is a whole set of ideas there. It's a very active field of research. I think the most interesting of what has been done recently is the discovery of the Avalanche protocol, which is essentially a way for nodes to negotiate a set of transactions to be included in the next block. Or actually, what they propose in the Avalanche paper is to not even have blocks, which I don't think is the best option, but we can definitely combine both approaches. So you get much higher reliability on zero-conf, and at the same time it expands the scaling capability of the whole network. So we definitely want to work on that. And even if you don't really care about zero-conf, you may want to care about the scaling aspect of it.
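For readers who have not seen it, here is a heavily simplified sketch of the repeated random sampling loop described in the Avalanche paper: a toy model of one node settling a yes/no question about a single conflicting transaction. It is not ABC's implementation, and the parameters K, ALPHA and BETA are illustrative.

import random

# Toy model of an Avalanche-style poll for one binary decision, e.g.
# "is this the transaction to keep, out of two conflicting ones?".

K = 10        # peers sampled per round
ALPHA = 7     # votes needed for a round to count as a quorum
BETA = 20     # consecutive agreeing rounds needed to finalize

def avalanche_decide(my_preference, peers, rng=random):
    """peers: dict peer_id -> that peer's current boolean preference.
    Returns the preference this node finalizes on."""
    preference = my_preference
    confidence = 0
    while confidence < BETA:
        sample = rng.sample(list(peers), K)
        yes_votes = sum(1 for p in sample if peers[p])
        if yes_votes >= ALPHA:
            round_result = True
        elif K - yes_votes >= ALPHA:
            round_result = False
        else:
            confidence = 0          # no quorum this round, reset the streak
            continue
        if round_result == preference:
            confidence += 1
        else:
            preference = round_result
            confidence = 1          # we flipped, restart the streak
    return preference

# Example: 100 peers, 80 of which currently prefer "yes".
peers = {i: (i < 80) for i in range(100)}
print(avalanche_decide(True, peers))

Because every node keeps resampling until it sees a long run of consistent quorums, nodes converge on one of the two conflicting transactions well before a block is found, which is the pre-consensus property being discussed here.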
Thank you Amaury. We have a conference coming up too, I don't know if Amaury mentioned it, sorry, I had to grab my charger. But in a couple of weeks, where I guess we're gonna try and talk about some of this stuff. The question is, is it reliable enough now? I mean, certainly for what we'd like to see it be used for, for retail transactions and that sort of stuff, probably not. But there are kind of two things that need to be addressed. You have the person who just relays two conflicting transactions on the network, and that's probably the easier attack to address, and I think the only thing stopping it right now is we haven't had agreement on how to address it. But hopefully we can get that agreement at this conference, we'll see. But the other is what Amaury mentioned: if someone wants to bribe a miner, then whatever way we would try to stop the first attack, you can get around it just by going right to the miners and bribing them. And that's where maybe something like pre-consensus can change the incentives a little bit. Okay. Anybody else would like to comment on the zero-conf question? Hearing none, I'll move on to the next question. Is a 128 megabyte limit on ABC's roadmap? Jason. So technically yes. But I would like to point out that our roadmap includes a far loftier goal: we've talked a number of times about shooting for one-terabyte blocks. Now, exactly where that lands in the future, it's probably years out from now, but that's kind of where we're shooting towards. I would like to see the limit raised to 128 megabytes when appropriate. I don't know if we have a really exact date. It could be the May hard fork if the improvements occur in time; it seems a little unlikely given our current velocity, but the hard fork after the May hard fork is a very real possibility. And we may even go higher than that: rather than shooting for 128, if it's possible we could hit 256 or 512. It's really hard to say at this point in time. But our ultimate goal is one-terabyte blocks, because that's what we've determined is necessary for worldwide scale. Thank you Jason. Anybody else want to speak on the 128 limit? Yeah, it has to happen after benchmarks showing that the software can handle it without perverse incentives, without widespread accidental selfish mining, and without the probability of lots of nodes losing consensus. The limit is a safety feature.
It is not intended to be an economic feature. And currently the limit is, what, three hundred times higher than average usage? We want to increase that further, but we want to make sure that we still have that safety feature in place, to make sure that if something goes wrong, the extent to which it goes wrong can be limited to what we have tested the system can handle. Thank you Jonathan. Anyone else? All right….. Oh sorry, go ahead. Okay, yeah. So yeah, we saw during the last stress test that it's actually very difficult to generate 32 MB blocks. We have some tests that do it, but in a live situation there are a lot of bottlenecks that make it difficult. And so what would happen if you raise the block size right now?

You just increase your attack surface. Because there are a lot of different attacks that you can make worse if the node accepts a bigger block size. However, you get nothing out of it, right? You get nothing out of it because in normal conditions we actually fail to generate a block that big and propagate a block that big. And even before getting close to 32 MB we have serious issues. So essentially it's a bit reckless, right? You increase your attack surface and you get nothing. So from a technical perspective, the more you increase the block size, the more attack surface you have. From an economic perspective, if the block size is below market demand then you get all kinds of perverse effects, and if it's above market demand you get no effect. I detailed the reasoning for that in my presentation where I announced Bitcoin ABC in Arnhem, if some people want to look that up. But essentially, when you study the economic impact of the limit, this is what you see. So if there was a way to know what the exact market demand is, you would want the limit to be just above that, right? So if the market demand is 10, you'd like the limit to be 11, ideally. But there is no way in practice to know what the actual market demand is, so you want to have enough margin. But right now the margin is more than a hundred times market demand, so we are safe there. And so we have the time to do things properly, to make sure that the technical problems that arise when you make it bigger can be fixed before we increase it, without risking all the perverse economic effects of running into the limit. A quick response to Amaury's comments. So the main things that prevented us from getting 32 MB during the September 1st stress test have been fixed. If the stress test were done right now, we probably would see several 32 MB blocks, and possibly even a chain of 32 MB blocks. And a second point that Amaury made was that all that matters is that the block size limit is above the market demand. And that's technically true, but there are other reasons why that's not the whole story. It's important to keep in mind that the block size limit is a social signal. It is a sign from the developer communities, from the miners and from the existing users to future users that they are welcome here and that we want them here. So we do try to keep the block size limit well above market demand, and that's why the increase to 32 MB was made. We want to keep that signalling going on. We want to keep people aware that we want them to come, and that if a sudden load comes onto the system, we will hire more engineers, we will make these performance fixes faster so that we can handle it. But there's a limit to how fast we should be increasing that limit in the absence of those performance fixes, and for now 32 MB is the limit for how fast we should do it. Maybe in May we might be able to do 128 MB. Probably in November of next year we will be able to do at least 128, if not more. We have a lot of changes in the pipeline that we just need to finish: ATMP parallelisation, and we need to do Graphene and/or Xthinner to improve block propagation. Those two alone should actually get us to at least 128, as long as they are tested and, unlike my current code, don't crash. And so we just need a little bit of patience, and to not rush into things before we are ready for it. And unfortunately that's what SV is trying to do right now. Okay, I wish to add something if possible. Yes, Juan.
Please go ahead. I just wish to be very clear: it's not a matter of hardware. You can spend $30,000 on a server and basically the validation will work slower than on a basic gaming machine. So it's not about money, it's that we need to do some modifications first. That's it. Thank you Juan. Yeah, I agree. I just wanted to also emphasize that I agree with what Jonathan was saying. We want to give the signal to people that yeah, we want big blocks and we want to have big scaling, but the way to do that, I think, is to focus on the technical issues that need to be fixed to enable that, you know, not just increasing a number for show.

So if we want to credibly say, yeah, we're welcoming everyone to come on here and we want to scale massively, the focus should be on fixing all these bottlenecks and having a credible plan for scaling well into the future. Thank you Antony. Any further comments? Hearing none, I'll move on to the next question. Whether pruning the chain, as in the roadmap, will increase anonymity significantly, and whether it might conflict with any L2 apps. Thanks. I just read it as it was, so if you guys can take a look, maybe you can understand the question better than I can. Yeah, I'm not quite sure what that question is referring to. I would say we skip that one. It doesn't seem to follow. I mean, I think if this were a block our nodes would reject it. All right, we'll move on from that one. The next question that I have lined up is, give me half a second: Microtransactions will be very important for BCH. What ABC developments are being worked on to handle lower fees, sub-Satoshi transactions, dust, etc.? So I've been working on the fee code pretty much the last couple of months. Unfortunately it's not as simple as just dropping the minimum relay fee to, say, 100 sats per kilobyte. A lot of the test suite starts failing; there are some rounding errors with fee calculations in the wallet. So I fixed most of those issues. The other issue that you run into is that if you drop the fees substantially it becomes very cheap to generate UTXOs, and those are the primary thing that costs miners money at this point. So a number of miners… bigger hash rate is going to be… sorry, yeah, of course hash rate is more expensive, but levelDB is definitely eventually a bottleneck, and generating UTXOs becomes much cheaper to do when you change the fee structure or change the fees. So yeah, once those things are addressed I think that we can actually drop the fees even more. So as a miner, I've looked into the optimal minimum fee that a miner should accept for a transaction. And it turns out there's a relatively simple formula for this. And all it depends on is the block reward in Bitcoin Cash, in BCH units, and the block propagation velocity. And it does not depend on UTXO size, at least until UTXO size increases to the point where it starts to significantly delay block propagation. Currently, at a block propagation speed of 1 MB of uncompressed block size per second from the first miner to the last miner, the optimal rational fee to use as a minimum acceptable fee for miners is, coincidentally, about one Satoshi per byte. This is also pretty much exactly the median fee on Bitcoin Cash. I can't say for certain that other miners are following this formula, and are actually sophisticated enough to calculate it in order to determine that one Satoshi per byte is the minimum that they should be accepting, but I know that's what I've done. I will not reduce my minimum acceptable fee below one Satoshi per byte until block propagation performance improves. And once block propagation performance improves, which might be in a few months, then that rational point is going to change, is going to decrease, so miners will probably start to accept smaller fees at that point. Something else that happens, and it's kind of counterintuitive, and actually I am not particularly happy about it, is that when the block reward changes, the result of this calculation also changes. So at the next halving, the rational fee for a miner to accept is going to get cut in half.
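The formula itself is not spelled out here, so the following is a hedged reconstruction of the usual orphan-risk reasoning rather than necessarily the exact calculation Jonathan uses: every extra byte delays block propagation slightly, during that delay a competing block can be found, and the fee has to at least cover the expected loss of block reward from that extra risk.

# A sketch of the orphan-risk reasoning behind a miner's minimum fee.
# This is a reconstruction of the general idea, not necessarily the
# exact formula referred to above.

BLOCK_INTERVAL = 600.0   # seconds, average time between blocks

def min_fee_per_byte(block_reward_sats, propagation_bytes_per_sec):
    """Fee (sats/byte) at which the expected orphan cost of one more
    byte equals the fee earned for including it."""
    delay_per_byte = 1.0 / propagation_bytes_per_sec          # seconds
    orphan_risk_per_byte = delay_per_byte / BLOCK_INTERVAL    # rough approximation
    return block_reward_sats * orphan_risk_per_byte

# Round numbers: 12.5 BCH reward and ~1 MB/s propagation from the
# first miner to the last, as quoted above.
print(min_fee_per_byte(12.5e8, 1e6))    # ~2 sat/byte today
print(min_fee_per_byte(6.25e8, 1e6))    # ~1 sat/byte after the next halving
print(min_fee_per_byte(12.5e8, 4e6))    # ~0.5 sat/byte with 4x faster relay

The exact constant depends on how propagation is modelled, but the structure matches what is said here: the break-even fee today is on the order of a satoshi per byte, it scales with the block reward, so it halves at each halving, and it falls as block propagation gets faster.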
I'm personally worried about this, because I think that there should be a little bit more of an incentive for people to overpay on fees in order to be able to support miners after the block subsidy is effectively zero. So I'm hoping that people will voluntarily, out of the goodness of their heart, and for the sake of Bitcoin Cash as a whole, choose not to use ridiculously low fees. Whenever they can afford it, say they're sending a five dollar transaction, I think that people should voluntarily include a 1 cent fee for miners, for example. I don't know if that's good enough. This is currently an unsolved problem, but yeah, just improving performance is going to reduce the minimum fees, and that's what we're working on. So even though we're not working directly on fees, we're still working indirectly on fees. Thank you Jonathan. Any further comments on the question? Chris, you've got… I'm sorry to put you on the spot, but you have some experience with transactions in general with, I'm gonna call it, OB1. Do you have any comments on microtransactions and fees? Not particularly. We don't really have much of a need for microtransactions in Open Bazaar. I will just say, I don't know if the whole fee thing has ever really been… if we've ever come up with a really great solution. But just having more stable fees, like we have in Bitcoin Cash, makes it a lot easier to write code than in Bitcoin, where the fees can change from hour to hour. With Bitcoin we have to hit a centralized API to get fees, which is certainly not ideal when you're trying to build a light wallet. So I should have clarified: Open Bazaar uses a light wallet, so it's not able to estimate fees by itself. So when I can just kind of hard-code a fee per byte and know that it's always going to go through on the network, that makes it a lot easier to write code for than when you have to use a centralized service to get your fees. Thank you Chris, and sorry to put you on the spot like that. That's alright. Alright, next question: When is ABC going to implement BIP70 so we can easily pay Bitpay invoices? Honestly, someone is going to have to do it. We don't have the engineering bandwidth at this time, but we would be thrilled to get a patch for it. But yeah. When is Bitpay going to task one of their developers to implement their basically proprietary protocol for ABC? I think that, you know, if they want to do that, if they want to make something that nobody else uses except for Bitpay, then it's in their interest to write the code for it. If we had time to do it ourselves we probably would, but we just have bigger fish to fry. It might be something I might work on in BCHD. I think I have a little bit different strategy for that one than, let's say, ABC has. As far as the scalability stuff, I kind of just want to let other people work on that and figure out what's best, and then, when it's needed, eventually get around to implementing it in BCHD. But that would allow me to work on other things, like BIP70 and other kinds of cool features. So that is something I think would be on my priority list to put in there. Thank you Chris. Any other comments? No, but I'd like to reiterate: if we get a patch that is of good quality we're gonna merge it, no problem with that, but we don't have the engineering bandwidth at this point in time to do it ourselves. Alright, thank you. Next question, perhaps a little bit of a different take: Why did ABC argue against using Nakamoto Consensus as the governance model for BCH in the upcoming fork, at the Bangkok meeting?
Because it doesn't work. Nakamoto Consensus would work for a soft fork, but not for a hard fork. You can't use a hash war to resolve this issue. If you have different hard-forking rule sets you're going to have a persistent chain split no matter what the hash rate distribution is. So, you know, and I don't think….. I don't know. Whether or not we are willing to use Nakamoto Consensus to resolve issues is not the issue right here. The issue is that it's just technically impossible.
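A minimal way to see the point: in the fork-choice logic every client runs, a chain only competes for most accumulated work if every block in it is valid under that client's own rules, so clients with incompatible rule sets simply never consider each other's chains. A sketch, not any client's actual code:

# Illustrative sketch of why proof of work picks among chains that are
# valid under a node's own rule set, and therefore cannot arbitrate
# between incompatible rule sets.

def best_chain(candidate_chains, is_valid_block, chain_work):
    """Return the valid chain with the most total work, or None."""
    valid = [chain for chain in candidate_chains
             if all(is_valid_block(block) for block in chain)]
    # Chains containing blocks that break my rules never even enter
    # the comparison, no matter how much hash rate produced them.
    return max(valid, key=chain_work) if valid else None

# Example: a node whose rules cap blocks at 32 MB never reorgs onto a
# chain containing a 128 MB block, regardless of that chain's work.
rules_32mb = lambda block: block["size"] <= 32_000_000
total_work = lambda chain: sum(block["work"] for block in chain)
chain_a = [{"size": 1_000_000, "work": 10}]        # valid, less work
chain_b = [{"size": 128_000_000, "work": 1000}]    # more work, but invalid
print(best_chain([chain_a, chain_b], rules_32mb, total_work) is chain_a)  # True

Two clients with incompatible rule sets each discard the other's chain at the validity step, so no amount of work on either side ever triggers a reorg across the split; work only settles ties among chains that both sides already consider valid.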
Yeah, I would like to address that in a bit of a different way. So first, yes, if you have an incompatible change set you get a permanent chain split no matter what. Right? No matter what. But also, I think Nakamoto Consensus is probably quite misunderstood. People would do well to actually reread the white paper on that front. What Nakamoto Consensus generally describes is miners starting to enforce a different rule set, and everybody reorging onto the longest chain. Right? And this is how you can decide amongst changes that are compatible with each other. Because if they are not compatible with each other, nobody is going to reorg onto any chain, and what you get is two chains. And Nakamoto Consensus cannot resolve that. Also, another idea that people often have about Nakamoto Consensus is that miners vote and this is how we decide what goes in the chain. But this is not what is described in the white paper. What is described as voting in the white paper is that miners vote for a chain by choosing to extend it. Right? So any block they find that extends a chain counts as a vote, as described in the white paper. And this is actually a vote and a good decision mechanism, because it is costly and binding for the miners. Because the block that the miner finds that way is not going to be valid under another rule set. Right? So effectively the miner is committing to that chain. However, this is not the case when voting ahead of time, which is not what is described by Nakamoto Consensus. The vote ahead of time is non-binding. And then you get to the problem that Jonathan described earlier, that you can have a big actor that moves a ton of hash rate onto BCH for some period of time, for the duration of the vote, and then leaves again and lets everybody deal with the consequences of that vote. So we are in a position where we cannot really do the hash rate vote stuff, and also we are in a position where a lot of people are confused between this whole hash rate vote and what Nakamoto Consensus actually describes in the white paper. So yeah, I think people would do well to reread those two parts of the white paper, because they are actually quite insightful and widely misunderstood. Yeah, Nakamoto Consensus in the white paper is about determining which of several valid histories of transaction ordering is the true canonical ordering, and which transactions are approved and confirmed and which ones are not. It is not for determining which rule set applies. You cannot use Nakamoto Consensus, for example, to decide whether the 21 million Bitcoin limit is good or bad. You can't just say, ok, but I have 10 times as much hash rate that says you are paying the miners ten times more, because it's not up to the miners to make that decision. The only decision that Nakamoto Consensus is allowed to make is which of the various blocks or block contents, that would be valid according to the rule set, is the true history. So yeah, I think there's just a misunderstanding there about what Nakamoto Consensus is. All right, thank you for that Jonathan, and I think this is probably a good point to wrap things up for today. We're at the top of the second hour, so we've been here for two hours. I want to thank all the participants, as panelists for this meeting today, and for the opportunity to speak both in front of the participants and also in front of the video audience. And again, thank you all very much for your participation today
Look forward to seeing you all again soon