BSV Devcon China – MetaSV: Bitcoin SV Cloud Computing Platform

Hello, everyone! Welcome to the Bitcoin SV DevCon. My name is Tsiming Ho, founder and architect of MetaSV. Today, I want to share an important service required for blockchain development, the blockchain browser service, and talk about how MetaSV builds a highly available, low-cost blockchain cloud computing platform using the theories and paradigms of cloud computing.

Current blockchain apps are still in their early days of growth. Many are still at the proof-of-concept stage, and a lot more effort will be required to make them fit for commercial use. The services that blockchains support involve key areas like property rights and finance, which means the quality requirements are very high before these blockchain services can be implemented and made ready for commercial use.

Now I will talk about blockchain browsers. Many people are very interested in mining or exchanges, but not many are interested in browsers, because browsers are difficult to develop, they cost a lot, and the profit model is tricky and not very clear. However, browsers are an important piece of infrastructure in the ecosystem of blockchain apps. Without the support of browsers, many simple functions, like looking up address balances, would become very difficult.

Let's also get into another point. As blocks grow bigger, the number of transactions grows, and transaction types and protocols get more and more complicated, developers will face more and more engineering and technical challenges before they can provide, or even use, commercial blockchain services. Many of these difficulties have nothing to do with business logic; they require a huge amount of energy and cost just to get the wheels turning. Let's analyze each of the challenges and how they arise.

The first challenge is the huge amount of historical data. This picture shows the hard
drive storage consumed when MetaSV runs a full node. As you can see, the full node currently takes up about 260 GB of storage space. Many people probably don't think this number is a big deal; after all, it's common for home laptops to have at least 1 TB of hard drive space. But commercial, server-grade storage is different from personal storage. Server drives need to be solid-state drives (SSDs); otherwise the read/write throughput wouldn't be enough to synchronize such a huge chain. Server drives also require constant maintenance, repair and replacement to ensure the hardware works 24/7, and they need real-time backups to prevent data from being damaged or lost.

Also, the 260 GB we see here covers only full node data. Those who understand the data structure of a full node know that it stores a very dense, compact binary format; it would be extremely difficult to extract the data you need directly from those files. If you ask a full node how many transactions a certain address has sent, how much balance a certain address holds, or what the child nodes of a certain Metanet transaction are, the full node can't answer any of these questions, because this type of data simply isn't stored in a full node. If you want such information, you must filter every transaction from the genesis transaction up to the current one to locate what you are looking for. This is the difficulty that enormous historical data brings. A browser therefore needs to dig up and analyze this information for users and store it in a database. After the analysis is completed and the data fully expanded, it is about 8 times its original size: the original data was 260 GB, and after analysis it takes up roughly 2,000 GB.
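The storage arithmetic above can be sketched quickly. This is a minimal illustration using the figures from the talk (260 GB raw, 8x expansion, 80 GB + 600 GB yearly growth); the numbers are approximate, not measured values:

```python
# Rough storage estimate for a browser service, using the talk's figures.
FULL_NODE_GB = 260   # raw full-node data today
EXPANSION = 8        # analyzed/indexed data is ~8x the raw size

analyzed_gb = FULL_NODE_GB * EXPANSION       # ~2,080 GB of expanded data
total_gb = FULL_NODE_GB + analyzed_gb        # raw + analyzed, before redundancy
print(f"analyzed: {analyzed_gb} GB, total: {total_gb} GB")

# Yearly growth at the current TPS: ~80 GB of raw chain data
# plus ~600 GB of database growth.
yearly_growth_gb = 80 + 600                  # 680 GB per year
print(f"yearly growth: {yearly_growth_gb} GB")
```

Adding some redundancy on top of the ~2.3 TB total is what leads to the talk's figure of at least 2.5 TB of commercial SSD.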
The size difference is 8 times,

so without considering complex scenarios such as backups or master-slave replication (e.g. MySQL Proxy), it takes at least 2.5 TB of commercial SSD just to provide an address-lookup browser service. That is, the data volume after analysis, plus the data volume of the full node, plus a certain amount of redundancy, requires an SSD of at least 2.5 TB. Based on the current rate of TPS growth, the database grows by roughly 600 GB per year; if the full node grows by 80 GB per year, that is 680 GB in total. So if you want to provide a stable, consistent service, you always need to store this amount of historical data, and the cost of storage will grow every year. This is the first challenge developers face: a huge amount of historical data.

The next challenge is searching that historical data. No matter what kind of index table you use, the total data volume has dramatically surpassed the upper limit of a single table in commonly used relational databases, so we need to partition the index tables. The total number of transactions has exceeded 400 million, beyond what a single table in a relational database such as MySQL can handle. Also, as we just mentioned, a full node itself cannot provide a search index for addresses, so you unavoidably need to store the index yourself. To perform quick queries, you need to build certain indexes: for example, address indexes, indexes based on protocols such as bitcom, or a Metanet index. Without them, you have to scan all transactions to find the data you want. Some people might point out that NoSQL performs very well when it comes to scalability and large volumes of data. That is not wrong; NoSQL is excellent at scaling. However, transaction history bears a characteristic of
continuity: we often need to arrange transaction history in order, typically by time from newest to oldest. If we simply use NoSQL for storage, looking up historical transactions becomes extremely difficult, because we would need to scan entire tables. So in this situation, we need to work around some limits of relational databases while still taking full advantage of the scalability NoSQL brings. The storage and searching of historical data thus constitutes a very tricky problem for developers.

There is another difficulty: the cost of high-performance hardware. This cost is not only for storage; it also covers components like servers and compute needed to support analysis and queries over a large number of transactions. The official configuration recommended by Bitcoin SV for a full node production environment is an eight-core processor and 64 GB of RAM, and that is just for the full node. We need a large group of machines to provide a full service: worker servers, API servers, and servers that can support high-volume databases. A basic service like this requires at least four machines of full-node class. Let's use AWS's m5.2xlarge as an example. It features eight cores and 32 GB of RAM; the core count fits the official recommendation, though the RAM is only half. To make estimation easier, we will calculate using this 2xlarge. We would need at least four 2xlarges to meet the needs above. Under this calculation, we would spend about USD 4,300 on storage per year, and the four 2xlarge servers would cost about USD 17,520 per year; you can find the prices online. That means the investment required

to run a full node and analyse the data comes to over USD 20,000 per year, roughly 150,000 yuan in RMB. So high-performance hardware, and the large initial investment needed to provide a service, is another challenge developers face.

The next big challenge is the analysis of extra-large blocks. At 2 pm on May 16th (Japan time, not UTC as I first said), the largest block in history was created. It contained 1.32 million transactions with a total size of 370 MB. It was a browser killer and paralyzed the services of a number of browsers. Let me explain why large blocks put such enormous stress on ecosystem components like browsers, exchanges and apps. Unlike miners, a browser's job is not to verify the legitimacy of transactions but to analyze and index them. Why is it acceptable for miners to pack such a large block, while browsers struggle? For miners, the optimization and functionality of the full node software are enough to pack and verify gigabyte-level blocks. The official Bitcoin SV full node software has done a lot of optimization on transaction verification and packing, and since verification focuses on locking and unlocking scripts, those paths can be specifically optimized. So it's not a huge problem for miners to pack gigabyte-level blocks, as long as the hardware is good enough, the investment sufficient, and the network connectivity decent. For browsers, however, even though they don't verify transactions, analyzing them means performing multiple queries and operations on the same transaction. For example, when a transaction comes in, you need to update the relevant addresses, and there may be more than one, because one
transaction might involve multiple addresses. You then need to enrich your record of the transaction, modify all the UTXOs involved at those addresses, and increase or decrease Metanet node counts, back and forth perhaps ten times. If all ten operations happen in the same working group, the amount of intermediate data that must be held temporarily in RAM while processing a transaction grows exponentially, eventually consuming all the RAM and causing the transaction processing to fail. Imagine this: in processing a million transactions, 10 million intermediate variables are likely to occur. If all of them pile up in RAM, what consequences would that cause? This is why, once an extra-large block arrives, many browsers get stuck and become unable to analyze it. After a browser gets stuck, it needs to roll itself back and crawl the data again to ensure accuracy. Suppose it gets stuck halfway through an extra-large block: it must roll back to the very start before crawling again, so the process fails repeatedly, and the browser rolls back and fails over and over, stuck in the same place.

So if a browser or an app wants to process an extra-large block, it must improve its architecture, which means splitting the working group. For an extra-large block, you can divide the work into small groups based on different types of operational logic and have each small group perform specific tasks. In addition, within each small group, since an entire block must be processed, you also need to divide the block into segments. Say it's a block of a million
transactions: you divide it into segments of a thousand transactions each.
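The segmentation just described can be sketched as follows. This is a minimal illustration of the idea, not MetaSV's actual implementation; the segment size, worker count and placeholder analysis function are assumptions:

```python
# Split an extra-large block's transactions into fixed-size segments so that
# each small worker holds only a bounded amount of intermediate data in RAM.
from concurrent.futures import ThreadPoolExecutor

SEGMENT_SIZE = 1000  # transactions per worker, as in the talk's example

def split_into_segments(txs, size=SEGMENT_SIZE):
    """Yield consecutive slices of at most `size` transactions."""
    for i in range(0, len(txs), size):
        yield txs[i:i + size]

def process_segment(segment):
    # Placeholder for the real per-transaction analysis (address updates,
    # UTXO changes, Metanet node counts, ...). Memory use is bounded by
    # the segment size, not the block size.
    return len(segment)

def process_block(txs):
    # A cluster scheduler would distribute segments across machines; a
    # thread pool stands in for that here.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return sum(pool.map(process_segment, split_into_segments(txs)))
```

With this structure, a 1,000,000-transaction block becomes 1,000 independent segments, and no single worker ever needs to hold the whole block's intermediate state.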

Then you'll get a thousand small segments, and a thousand very small working groups will process the entire block. This way each working group, or rather each worker server, only needs to process a thousand transactions. With only 1,000 transactions to process, even with the many intermediate variables mentioned earlier, those variables are very unlikely to exhaust the RAM. A single machine has at most 100-200 GB of RAM even with the best configuration, but a cluster has no upper limit. So a cluster of servers is required to process such an extra-large block, with an architecture designed to support it. Ordinary app developers, however, don't care how you process the block, only whether their transactions go through without a hitch, so this poses yet another engineering problem. Extra-large blocks can also cause an even greater difficulty, block re-orgs, which we will talk about later. In conclusion, the analysis of extra-large blocks is an unavoidable challenge for developers.

The next big challenge is how to generate and process transactions under high concurrency, especially under heavy traffic. Here I should mention the Gatling stress test organised by the Chinese communities, which pushed the mainnet to around 3,000 TPS. Because participants were so ardently involved, 10 million transactions were sent in a few days. While developing Gatling, we encountered many engineering problems. Under high concurrency, how do we make sure a UTXO under our control is not used twice, which would lead to a double spend or a transaction failure? How do we accurately manage how many times a UTXO is chained, so that it doesn't exceed the 50-link limit on unconfirmed reference chains? How do we improve the API's processing capacity under a large amount of traffic?
How do we mitigate latency to make sure transactions are sent in due time? These questions place rather demanding requirements on developers, who have to think about many things unrelated to the sending of transactions: system latency, UTXO management, and so on. None of this has anything to do with the business operation itself, so managing resources and UTXOs is a great challenge and a serious engineering problem for developers.

The next engineering problem is block re-org. Let me briefly explain what a re-org is. Say a miner creates a rather large block but doesn't broadcast it in time, while other miners also create blocks; the network then encounters a re-org, and whoever wins is anyone's guess. There is a swinging phenomenon on the chain: one moment it follows the top fork, the next moment the bottom fork. Such re-orgs are a very serious challenge for browsers and apps, because handling one is not simply a matter of striking off the post-re-org transactions and crawling again. Some of the saved tables carry their own state, such as addresses and balances, so it's not just about deleting a record, and the rollback logic is very complicated. For browsers, the rollback itself causes a huge amount of database writes, and the data has to be reprocessed afterwards. This can create a huge spike in system pressure, especially when a block is extra-large. And when the network swings, the browser swings with it, rolling back and reprocessing again and again, which puts enormous short-term pressure on the system. So block re-orgs pose a severe challenge to blockchain apps.

The last challenge, which is relatively severe, is how to manage such a complex
cluster. Because, according to our introduction just now,

in order to provide a stable service for searching transactions, many components must coordinate closely, and a great deal of effort is needed to make that happen. An app like this features functions such as sending transactions, monitoring transactions and searching for transactions. It requires multiple modules, like the segments we mentioned, coordinating with small working groups during processing. Aside from running the business, developers also need to manage the cluster behind these functions: crawling and monitoring blocks, pushing notifications, broadcasting transactions, and maintaining full nodes. This means managing a cluster of microservices, and such a cluster is huge. Today's mainstream development paradigm is known to be serverless: developers only need to focus on business logic rather than any bottom-layer infrastructure. Infrastructure should be encapsulated to form a service, such as the BaaS we've mentioned a lot, Blockchain as a Service, where developers are not supposed to care about how the bottom layer is managed, monitored, alerted on, and so forth.

Now let's discuss how MetaSV solves all the engineering problems mentioned above. These problems are actually common in the field of cloud computing, so MetaSV solves them using the logic and paradigms of cloud computing. Let's look at the specific paradigms and characteristics of cloud computing and how MetaSV applies them.

First of all, cloud computing is about agility. It must provide the utmost convenience to developers, or rather users: a machine or a service can be started in an instant and shut down just as swiftly. Development work and day-to-day operations will
become very responsive, unlike traditional practice, which requires a series of preliminary steps such as building a server room and preparing the power supply. Instead, you can directly switch on a machine, start the browser and operate its console right away. That's agility. How does MetaSV put agility to use? It offers an API that grants users, or rather developers, direct access to data.

The second feature of the cloud computing paradigm is elasticity: you can easily scale out or in. For example, under normal circumstances you run one machine; when there is a sudden spike in traffic, you can expand from one machine to 10, 100, or even over 1,000 machines, and when the spike ends, you automatically shrink from 1,000 machines back to one. That is elasticity in scaling. How does MetaSV achieve it? MetaSV has lifted any cap on API call frequency: you can call the API at a low rate when you have little traffic or few users, and at a very high rate when you have many users. MetaSV imposes no frequency limit on users.

The third paradigm of cloud computing is cost reduction, covering mainly upfront costs, management costs and a series of other costs. MetaSV adopts a pay-as-you-go model and requires no pre-payment. You don't need to set up a full node in advance, and you don't need to buy machines or hard drive storage. You only pay for what you use, which greatly reduces development and operation costs.

The fourth paradigm of cloud computing is packaging facilities, platforms or software and defining them as services. This is what we call IaaS, Infrastructure as a Service, along with Platform as a Service and Software as a Service. That is to say, encapsulating the bottom layer of the software, its difficulties in terms of operation and maintenance,
and a series of engineering problems into one package which, from the outside, appears as a single service. This design is the fundamental concept of cloud computing. Here, we define Bitcoin SV as a service. What does this service mainly offer? It provides a series of services,

such as constructing, sending and searching for transactions. Users only need to care about the content and quality of the service; they don't need to care about how everything works at the bottom layer, because it has all been encapsulated and streamlined. For users, it's simply a matter of the Bitcoin service being available.

The fifth paradigm of cloud computing is high availability, which we have talked about before: you need to make sure the service stays available. There are many practical methods to secure availability; the method we adopt is to utilize multiple AWS availability zones. First and foremost, our microservice cluster is deployed across multiple availability zones. AWS availability zones are computer clusters located in different geographic locations, so when a natural or man-made disaster strikes one location, the service remains available at the others. High availability is secured by this method, which is similar to backup or redundancy.

Cloud computing also enables automation, meaning it treats infrastructure as code: you can edit the infrastructure by editing code. This reduces management, operations and maintenance costs by leaps and bounds, because when your cluster reaches a certain size, it becomes unrealistic to manage it with manpower; you need machines to manage it. Take the elastic scale-in we mentioned before: if we can define it as code and have machines scale in automatically, we greatly reduce the cost of manual operations and maintenance. Therefore MetaSV uses a lot of DevOps here. DevOps is a mindset of managing the cluster through development, that is, by means of code. We have adopted many of the management ideas and techniques that today's microservices require.

Now onto the last paradigm. The serverless paradigm is relatively popular
nowadays. Serverless does not necessarily mean an utter absence of servers; rather, you don't need to worry about servers. What you actually consume is computing resources, and those resources are infinite in theory, including memory. As long as you can assume these resources are infinite, you can build your service on top of them. This is the basic idea of serverless: you don't care how many resources each machine has or how the bottom layer is designed, only how much service you need, and the supply is unbounded. How does MetaSV fulfill this? We go back to the issue mentioned before: how to break a gigantic block into segments. Each segment fulfills the serverless goal by running as a very small function service, such as a Lambda function. This means that no matter how big the block is, it can be evenly distributed across all the machines in the cluster for processing. When this paradigm is achieved, the CPU and RAM resources are effectively infinite, and in that scenario, with the right code and architecture, you can manage and compute at scale: no matter how big a block is, its processing can be completed within a given time frame. These are some of the paradigms and ideas MetaSV has adopted.

Now I will introduce some of the services MetaSV provides. MetaSV mainly provides blockchain browsers specific to the Metanet. On top of regular blockchain browser services, such as block queries, address queries and transaction queries, it also provides the Metanet tree, i.e.
queries for data payloads. MetaSV also provides highly available blockchain API services, which mainly offer developers a large set of practical APIs for blockchain queries, along with a full spectrum of detailed development documentation. The API functions mainly cover commonly used blockchain queries, such as addresses, transactions and UTXOs.
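As a rough illustration, calling such a query API might look like the following. The base URL, endpoint path and header name here are hypothetical placeholders, not MetaSV's documented interface; consult the actual developer documentation for the real endpoints:

```python
# Minimal sketch of querying an address through a browser-style REST API.
import json
import urllib.request

BASE_URL = "https://api.example.com"   # hypothetical; see the provider's docs
API_KEY = "your-api-key"               # issued per developer, pay-per-request

def build_address_request(address: str) -> urllib.request.Request:
    """Build the GET request for an address-info query."""
    return urllib.request.Request(
        f"{BASE_URL}/address/{address}",
        headers={"Authorization": API_KEY},  # hypothetical header name
    )

def get_address_info(address: str) -> dict:
    """Fetch balance, tx counts, first/last activity, etc. for an address."""
    with urllib.request.urlopen(build_address_request(address)) as resp:
        return json.load(resp)
```

The pay-per-request model means a call like this costs a few satoshis, so exercising the API during development stays cheap.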

These commonly used blockchain queries also include queries for relatively popular data protocols, such as data in the bitcom chain or tree-shaped data in the Metanet. They also include some other services, such as real-time push streams, which let you monitor what's happening on the blockchain. These API services are managed via an API key; you can apply for or deregister your key at any time. Besides, MetaSV's API service lets you pay by the number of requests: you pay for what you use, with no frequency limits. If you use more, you pay more; if you use less, you pay less. Each interface, or rather each API, uses satoshis as the pricing unit, for example, so many satoshis per request. One advantage of paying per request is that during the development phase you can make thousands of requests for very little money, sometimes just a few RMB yuan, which greatly reduces the initial cost of development. Even after the service officially goes live, you can adjust the number of API requests based on your actual usage. MetaSV also provides detailed usage bills plus a monitoring dashboard, so every satoshi a user spends is accounted for and explained.

Now I will demonstrate some commonly used APIs and their characteristics. Let's visit the documentation we provide to developers. It mainly offers a range of queries, such as queries for data types, i.e.
queries related to OP_RETURN. For example, here we mainly offer queries for the tree structure in the Metanet. You can select a node via its txid or an address; if, for example, many addresses share the same historical version, you can also use parent nodes to search for child nodes. We provide these four services. To find a parent node from a child node, you use the first interface. Now let's take a look here: if a parent node is used to search for child nodes, we can see both child nodes corresponding to the same parent, and these two child nodes share the same parent txid, which starts with 72. That is to say, by searching such a txid with these two interfaces, we can very quickly rebuild a Metanet tree. However big the tree is, you can rebuild it by simply performing a few more recursions. This is how we perform Metanet queries.

Beyond that, we also provide queries for ordinary OP_RETURN data. If you know the txid of a transaction and its output index, you can retrieve that output directly, so with a txid you can locate the original data right away. For example, if you saved an article, a picture or other data on chain and want to retrieve it, you can call this interface to fetch the OP_RETURN data directly. Here we apply some decoding, such as UTF-8, for convenient use; if you used your own encoding, or if the content is encrypted, you can parse all the data from the raw hex yourself.

Another important function is the list query. Once a bitcom protocol is adopted, we may want to retrieve data through list queries. For example, the on-chain protocol of Webot, the on-chain data we often use in a group chat, is actually composed of a certain bitcom protocol. How does it tell one user from another?
It distinguishes users by input address. This input-address-based method endows data with authority: only those who hold the input address's private key can send such transactions. It is tantamount to a one-dimensional Metanet protocol, and because it's comparatively easy and practical, it has been the most widely applied. Protocols we see often,

such as the Webot, WeatherSV and Preev protocols, all use input addresses to ensure data security.

There are also queries for commonly used blockchain data. Take the transaction query as an example: you can query an analyzed transaction. An analyzed transaction contains data not available from the full node, such as who packed the transaction, the timestamp of the block, and the position within the block. As for inputs, it is normally very difficult to find the address corresponding to an input, because you have to fetch the previous transaction and extract its output. In this browser, however, you can find the information, such as whose money it is, directly from the vin.

There are also some important interfaces for blockchain merchants. For example, a merchant needs to know whether a transaction exists in the mempool or only at a certain height; for that we can call the transaction-existence interface. When you create a transaction, you can call the transaction-broadcast interface. Because MetaSV works with the TAAL miner, we enjoy a fee rate of 0.25 sat/byte. If you broadcast your transactions through this MetaSV interface, the transaction first goes through TAAL's MAPI interface and returns a value like this; if TAAL encounters a fault, we broadcast it to our own full node instead. This way we make sure your transaction reaches our node quickly, enabling the browser to respond rapidly and cooperate smoothly with the other interfaces.

The UTXO-picking function is mainly used to choose UTXOs. Because you would otherwise have to page through results when searching for UTXOs, this interface helps you quickly select the amount you want. There are certain rules for selection: first, it prefers outputs with more confirmations, to help you avoid problems caused by reference chains.
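A simplified version of such a selection rule might look like this. The field names, the sort order and the 50-link limit check are modeled on the talk's description, not on MetaSV's exact schema:

```python
# Pick UTXOs for a target amount, preferring confirmed outputs and skipping
# any whose unconfirmed chain (ancestors + descendants) is already at the limit.
from dataclasses import dataclass
from typing import List

CHAIN_LIMIT = 50  # unconfirmed reference-chain limit accepted by most nodes

@dataclass
class Utxo:
    value: int           # satoshis
    confirmations: int
    ancestors: int = 0   # counted over unconfirmed transactions
    descendants: int = 0

def pick_utxos(utxos: List[Utxo], target: int) -> List[Utxo]:
    # Drop outputs whose unconfirmed chain is already at the limit.
    usable = [u for u in utxos if u.ancestors + u.descendants < CHAIN_LIMIT]
    # Prefer more confirmations, then larger value.
    usable.sort(key=lambda u: (-u.confirmations, -u.value))
    picked, total = [], 0
    for u in usable:
        if total >= target:
            break
        picked.append(u)
        total += u.value
    if total < target:
        raise ValueError("insufficient spendable funds")
    return picked
```

Preferring well-confirmed outputs keeps newly built transactions off long unconfirmed chains, which is exactly the problem the ancestor/descendant fields described next help you detect.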
Here are two very important fields: the counts of ancestors and descendants. They help you estimate where your current transaction sits on the reference chain when you create a large number of transactions, and help you avoid the limits that reference chains impose. For example, when the total of ancestors and descendants exceeds 50, you should understand that a transaction built on top is unlikely to broadcast, because it exceeds the reference-chain limit currently accepted by most nodes. These counts are calculated over unconfirmed transactions using the same method as the full node, so you can quickly judge whether a UTXO is usable: if it is within the limit, you can use it without worry; if it has exceeded the limit, you cannot use it until the previous transactions are confirmed. In other words, when spending unconfirmed UTXOs, these two fields help you avoid building an excessively long reference chain.

As for address queries, we provide basic information about an address: when money was first spent from it, when the first payment was received, when the last spending took place, when the last payment was received, the total number of transactions, and the total income and spending for the address. The data also includes the current balance, so through this interface you can see both the history and the current state of an address.

The address-specific transaction list is used to search for transactions involving a given address, including transactions in which money was sent from or received at that address. So if you want to audit an address and check whether its balance is correct, you can find all transactions for a given address
using the address-specific transaction list. You can then verify the current balance using the income and outcome fields. Here, the income field shows the revenue the address earned through a transaction, and the outcome field shows the money the address spent through it. For example, if a transaction shows income but no outcome, the address profited from that transaction, i.e. it was the receiving end. If there is only outcome and no income, it was the paying end. If it has both, it was both the receiving and the paying end, and the difference between outcome and income is the balance change resulting from that transaction.

The next characteristic service is stream subscription. You can simply use HTTP's SSE mode to monitor this service. For example, to monitor real-time transactions, you just open the stream and see a continuous feed of transactions being pushed, which is the transaction stream happening on the blockchain right now. It includes all transactions; even a transaction that was packed directly by a miner, without ever appearing in the unconfirmed mempool, will be pushed as well. When using the push service, you need to pay attention to one field, the flag. Each of these services carries a time field, and every message is time-stamped, accurate to the microsecond, which is impressive. This time field allows you to resume after an interruption. If an incident such as a server error or an app error breaks the connection while you are monitoring, and you wish to resume your subscription from the point of interruption, you can pass the time of the last transaction you received into this flag parameter. The server will first return all transactions from that time until now, and then continue returning new transactions as usual. There is a time limit for the push: 24 hours. On-chain events cannot be replayed after 24 hours, but if you reconnect within 24 hours through this interface, you can use the flag to recover the messages missed in the meantime. This
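An audit of an address along the lines described above amounts to summing income minus outcome over its transaction list and comparing the result to the reported balance. A minimal sketch, assuming the field names used in the talk (the exact API schema may differ):

```python
def audit_balance(address_txs):
    """Recompute an address's balance from its transaction list.

    Each record's `income` is the satoshis the address received in that
    transaction; `outcome` is the satoshis it spent. A record with income
    only means the address was the receiving end, outcome only the paying
    end, and both means the net change is their difference.
    """
    balance = 0
    for tx in address_txs:
        balance += tx.get("income", 0) - tx.get("outcome", 0)
    return balance
```

Comparing this recomputed figure with the balance returned by the address-info interface is the consistency check the talk describes.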
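The resume-by-flag mechanism can be sketched with two small helpers: one to parse an SSE `data:` line, and one to rebuild the subscription URL with the last seen timestamp. The `flag` query parameter name is an assumption for illustration; the talk does not give the exact parameter name.

```python
import json

def parse_sse_event(line):
    """Return the JSON payload of one SSE `data:` line, or None otherwise."""
    line = line.strip()
    if not line.startswith("data:"):
        return None
    return json.loads(line[len("data:"):].strip())

def resume_url(base_url, last_event_time=None):
    """Build the (re)subscription URL.

    Passing the last received event's microsecond timestamp as the
    hypothetical `flag` parameter asks the server to replay everything
    from that moment before resuming the live stream (only events from
    the last 24 hours can be replayed).
    """
    if last_event_time is None:
        return base_url
    return f"{base_url}?flag={last_event_time}"
```

A client would record the `time` field of every event it processes and, after a dropped connection, reconnect with `resume_url(base_url, last_time)` to fill the gap.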
operates mainly to monitor blocks, real-time transactions (both their hashes and full details), and addresses, and address monitoring is a very useful function in many apps. All you need to do is open a long-lived connection: the HTTP request for the monitored address is a long-lived link. When a transaction occurs at that address, the server pushes a message to the app containing all the information relevant to you, including the transaction's inputs and outputs, the money you received through it, and so on. The address subscription is therefore a very useful interface. In the next stage, we plan to support address subscription in bulk; when the browser is officially online, bulk address subscription will reduce traffic overhead.

Let me now introduce MetaSV's future plans and milestones. We plan to complete a radical improvement of the architecture and the move to API v2.0 by September 2020. We also hope to launch the browsers and the API billing system by October 2020; the browsers include an ordinary browser, a Metanet browser, and more BSV-specific browser services. By November 2020, we plan to support more development protocols; for example, we are currently studying the smart contracts developed with sCrypt and how to keep providing more browser convenience to developers. After 2021, we plan to support searching for smart contracts.

Here are the contact details for MetaSV. If you have any feedback or suggestions, you are welcome to leave us a message. Thank you all for your support. That is the end of my speech. Thank you for attending. Goodbye!