Q2 2019 Storj Labs Town Hall

Welcome to the Q2 2019 Storj Town Hall. Parts of this presentation have been pre-recorded to ensure the best possible sound quality for everyone who’s listening. After the pre-recorded section, we’re going to move to a live Q&A. As you’re listening, if you have questions, please send them in by email to ask@storj.io, and we’ll be taking your questions live at the end of this presentation. I’m going to now read a forward-looking statement: this document contains forward-looking statements about our product direction. The development, release, and timing of any features or functionality described for our products remains at the sole discretion of Storj Labs. The information herein is not a commitment to deliver any material, code, or functionality and should not be relied upon in making purchase decisions. Our topics today are: executive summary, product update, storage node operator payments, token report, marketing update, and community. Our speakers today are Shawn Wilkinson, founder and chief strategy officer; Ben Golub, interim CEO and executive chairman; Brandon Iglesias, product manager; JT Olio, vice president of engineering; Jon Sanderson, vice president of marketing; John Gleeson, vice president of operations; and myself, Jocelyn Matthews, the community manager at Storj. We’re going to go now to an executive summary by Ben Golub.
Ben, take it away. (BG): Thank you, Jocelyn, and thank you to everybody on the call. This has been a very exciting quarter. As you know, we’re trying to do something that’s really unique in the industry. Any time you launch a storage service, it has to be durable, performant, and secure. There’s a whole new set of challenges when you do cloud storage, and an even greater set of challenges when you’re trying to do something that is decentralized, which has never been done successfully in the industry before. We’re very excited by the progress that we’re seeing, not only in being durable, performant, secure, and Amazon S3 compatible, but also in being able to deliver something that gives superior economics for our users, for storage node operators, for our partners, and for Storj as a company. This quarter we’ve seen great cadence: we’ve hit all of our milestones, both our Vanguard and our Beacon releases, and we’re now on the cusp of our first public beta. In conjunction with that, we’ve also seen significant activity on the demand side: great progress on our Open Source Partner Program, dApps, and large user trials. We’re now at the point where we’re seeing multi-petabyte opportunities coming to us, and so we’re very confident that if we build it, they will come, and they’ll be very excited when they come. We also grew as a company to 48 people, assembling some of the greatest concentration of decentralized storage expertise on the planet. We continue to make our storage node payments on time, we’ve been better than planned on finances, and we continue to lead the crypto industry in terms of external communication, transparency, and governance.
Q3 is a very important quarter for us. We not only want to get to the point where our code is of production quality, but we want to make sure that the network is of production quality as well, which honestly means time and experience running it. Brandon will be up in a bit, and he’ll be talking about both our Pioneer 1 and our Pioneer 2 public betas and what we’re hoping to achieve with them. If those both go well and we achieve all of our goals, we’ll be set up for the launch of our production service, Voyager, at some point in Q4. Of course, we’re going to continue to push on governance, and continue to push on partnerships and sales. Again, it is incredibly unique and gratifying to know that we have multi-petabyte opportunities waiting in the wings. So I’d like to end by thanking everybody who helped us reach this point: our employees, our partners, and most importantly the members of the community, our storage node operators, our code contributors, and especially our users. And with that, let me turn it back to Jocelyn. (JM): Next up we have a product update from Brandon Iglesias, our product manager here at Storj. (Brandon): Hey, thanks Jocelyn.
So as Ben mentioned earlier, we’ve been working diligently on our first beta, but let’s not forget we just launched one of our major development milestones, the Beacon release. We did add one development milestone onto this graphic, which is the only thing that has changed since we started using it, but all in all we seem to be in good shape on our development milestones. Our Beacon release is what we’re currently in, and it’s a really important release because we added the ability to share files, folders, and buckets via macaroons and hierarchical encryption keys. This is really important for clients sharing data with each other. Our Tardigrade users should expect the network to be backwards incompatible, and they should expect network wipes, since it’s still in alpha. Storage node operators should expect business as usual: they’re gonna continue to get paid, but there is one thing that they should look out for: we’re gonna have frequent updates during this period, so please stay up-to-date with our software updates for storage nodes. After the Beacon release we have Pioneer 1, which is our first beta. The key distinction between alpha and beta is that when we go into beta we’re gonna be backwards compatible and we’re not gonna have any more network wipes; that’s one of the major things that you need to be aware of. For storage nodes, everything will be the same. In terms of what Pioneer 1 gives: our users are gonna be getting C bindings and invoices, storage node operators are gonna get garbage collection and a storage node operator dashboard, which John Gleeson is gonna show you later on, and our partners are gonna get value attribution and a referral program. This is just a small subset of the features that we’re working on for the Pioneer 1 beta. After Pioneer 1, we’re gonna be launching Pioneer 2, which adds some more functionality for our users, such as STORJ token payments and multi-part uploads, and then automatic updates, graceful exit, and a notification system for the storage nodes. But the key distinction between Pioneer 1 and Pioneer 2 is that Pioneer 2 is when we’re going to be measuring our SLAs, so clients should expect extremely high levels of stability and durability. This is really when we’re gonna gain the confidence and the experience that we’re looking for on our network, to ensure that our clients don’t have any kind of data loss or outages when we go into Voyager and launch into production.
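As background on the macaroon-based sharing Brandon mentions for the Beacon release: a macaroon is a bearer token built on a chained HMAC, so anyone holding it can add restrictions (caveats) without contacting the issuer, while only the issuer can verify it. The following is a minimal illustrative sketch of the general technique, not Storj's actual implementation; all names here are hypothetical.

```python
import hmac
import hashlib

def _chain(key: bytes, msg: bytes) -> bytes:
    """One link of the HMAC chain."""
    return hmac.new(key, msg, hashlib.sha256).digest()

class Macaroon:
    """Toy macaroon: an identifier plus a chained-HMAC signature over caveats."""
    def __init__(self, root_key: bytes, identifier: bytes):
        self.identifier = identifier
        self.caveats = []
        self.sig = _chain(root_key, identifier)

    def add_caveat(self, caveat: bytes) -> "Macaroon":
        # Anyone holding the macaroon can attenuate it further;
        # caveats can only be added, never removed.
        self.caveats.append(caveat)
        self.sig = _chain(self.sig, caveat)
        return self

def verify(root_key: bytes, m: Macaroon) -> bool:
    # Only a verifier who knows root_key can recompute the full chain.
    sig = _chain(root_key, m.identifier)
    for c in m.caveats:
        sig = _chain(sig, c)
    return hmac.compare_digest(sig, m.sig)
```

In this style of scheme, a bucket owner could mint a macaroon, attenuate it with a caveat like `b"read-only"` or a path restriction, and hand it to a collaborator; the verifier only needs the root key.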
So, just a quick recap: we’re currently in the Beacon release; we’re going to be launching Pioneer 1 in the next few weeks; after that we’ll go into Pioneer 2, where we’re going to gain confidence and add the last bits of functionality we need for Voyager; and then, once the network has the durability and level of confidence that we want, we’ll be launching Voyager, which is production. On that note, I’m gonna send it back to Jocelyn. (Jocelyn): Next up we have storage node operator payments with John Gleeson. John is our vice president of operations. John, please take it away. (JG): Thank you, Jocelyn. One thing that you’ll hear consistently throughout this presentation is our need to grow our storage node operator network. We have an aggressive goal for the Pioneer release that is gonna need some pretty rapid growth: we’re looking to get at least four times the current number of storage nodes, but we want statistically uncorrelated nodes. Not only does that mean more storage nodes, but more storage node operators. And we want high uptime and high bandwidth, with no data loss. But the good news is we’re willing to invest in growing that relationship and growing that network. So, similar to everyone’s favorite rideshare apps, we’ve rolled out surge pricing, and just like surge pricing with Lyft and Uber, where you have to actually drive to take advantage of it, you need to run a storage node and you need to run it well.
As far as the details of the program are concerned: first, as a big thank you to our v3 storage node operators who joined during the early alphas, we’re offering five times the normal payout for at least the next three months. And of course, to attract new high-quality storage node operators, for those who joined in July we’re gonna offer four times the normal payout for at least the next three months. Hopefully the combination of these two programs will help us achieve the targets that we’ve set for network growth. Speaking of storage node operator compensation, I’d also like to provide a brief update on storage node operator payouts. We’ve been managing to a storage node payout SLA for the last eight quarters. The details of that SLA are: in the first week of every month, we aggregate the payment information across satellites and prepare for payouts, and then in the second and third week of every month we complete those payouts. For the last eight quarters we’ve met or exceeded the v2 payout SLA, and for the last several months while v3 has been in operation, we’ve met our SLA every single time. As we go into beta and production, we’ll continue to meet or exceed those SLAs, and for each payout we’ll use a unique payout address per payout period to help you track when payouts begin and end. Now, in terms of payouts for operating multiple storage nodes: we do support that, so if you have multiple invitations from the waitlist to operate multiple storage nodes, you can go ahead and register those identities and operate those storage nodes. We ask that you operate one storage node per hard drive and processor; don’t try to run multiple hard drives on one storage node, or multiple storage nodes against a single hard drive. Bandwidth is going to be the thing that determines whether your multiple-node strategy is successful or not.
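The surge-pricing tiers John describes (5x for operators who joined during the early alphas, 4x for those who joined in July, each for at least three months) amount to a simple multiplier on the normal payout. Here is an illustrative sketch; the exact cutoff dates and the three-month window end are assumptions for the example, not announced policy.

```python
from datetime import date

def surge_multiplier(first_data: date, payout_month: date) -> float:
    """Illustrative surge multiplier; all cutoff dates are assumptions."""
    ALPHA_CUTOFF = date(2019, 7, 1)    # joined during the early alphas
    JULY_CUTOFF  = date(2019, 8, 1)    # joined in July
    SURGE_END    = date(2019, 10, 1)   # "at least three months"
    if payout_month >= SURGE_END:
        return 1.0                     # surge program assumed ended
    if first_data < ALPHA_CUTOFF:
        return 5.0                     # early-alpha operators
    if first_data < JULY_CUTOFF:
        return 4.0                     # July joiners
    return 1.0

def payout(base_usd: float, first_data: date, payout_month: date) -> float:
    """Payout for one period: normal payout times the surge multiplier."""
    return base_usd * surge_multiplier(first_data, payout_month)
```

For example, a node whose normal payout would be $10 earns $50 under the 5x tier and $40 under the 4x tier while the program runs.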
If you have high bandwidth, you may be successful operating multiple storage nodes, but if you’re in a bandwidth-constrained environment, a single storage node will still remain your best strategy. And finally, there’s one more thing we wanted to share with the community, and that’s a preview of the upcoming storage node dashboard. This is a new UI that’s been in process for a while, and it’s going to be released in the next three or four weeks or so. This UI will give storage node operators better access to the information they need to manage their storage nodes, determine the number of satellites that they’re working against, and ultimately manage the profitability of their nodes. So that’s all I’ve got to share! Jocelyn, back to you. (Jocelyn): Next up, we have the token report with Ben Golub, our interim CEO and executive chairman. (Ben): Thank you. Well, welcome to what I hope is the most boring part of this presentation. As you know, we are a decentralized storage company first and a crypto company very distinctly second. It’s our belief that if a company is talking about tokens and it’s anything but clear, transparent, and predictable, then something’s going wrong. So, with that: as you know, over the past several town halls we’ve talked about the work that we’ve been doing to be leaders in token governance. This includes work that we’ve done on the people side, the communication side, cryptographic controls, as well as external reporting through our token report, and we’ve also detailed what we’ve been doing around insider trading and the time lock. This is now the third quarter in a row where we’re presenting our token balances and flows report, which again should make it very easy for you to understand where the tokens came from and how we’ve been using them. No major changes: 75 million tokens were released in the initial token sale back in 2017. We set aside a number of tokens for conversion; at this point, we’re seeing very little activity on our converter, so we believe most of the SJCX that are going to be converted have been converted. This quarter we used 2.7 million tokens in our operations, and we expect that number to go up significantly as we go into production launch. We have 43.7 million tokens left in our operating reserves, and we continue to have 245 million tokens in our long-term, 8-quarter rolling time lock. One major change, which happened after the end of the quarter but which is now very visible on the blockchain, is that we took 1.5 million tokens from our operating reserves and put them into a long-term time lock. Those are being transferred to a service provider in conjunction with the cleanup of items from 2017.
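The 8-quarter rolling time lock Ben mentions works by splitting the 245 million locked tokens into eight equal tranches that unlock in successive quarters, with each unlocking tranche re-locked for another two years. A minimal model of that rotation, purely for illustration (this sketches the schedule, not the on-chain mechanics):

```python
TOTAL_LOCKED = 245_000_000
NUM_TRANCHES = 8  # one tranche per quarter over two years

def initial_tranches(start_quarter: int):
    """Eight equal tranches, unlocking in successive quarters."""
    size = TOTAL_LOCKED // NUM_TRANCHES  # 30,625,000 tokens each
    return [(start_quarter + i, size) for i in range(NUM_TRANCHES)]

def roll(tranches, current_quarter: int):
    """When a tranche unlocks, re-lock it for another eight quarters."""
    return [
        (unlock_q + NUM_TRANCHES, size) if unlock_q <= current_quarter
        else (unlock_q, size)
        for unlock_q, size in tranches
    ]
```

The invariant this models: the full 245 million stays locked at all times, and after each quarterly roll the unlock dates again span exactly the next eight quarters.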
And for those of you who like to trust but verify, you can see that we have been following our pattern of time locks, where we take our 245 million tokens, divide them into eight equal-sized tranches locked for successive quarters over two years, and when one tranche becomes unlocked, we re-lock it for another two years. We’ve committed to letting you know at least one quarter in advance if we’re ever going to break that pattern. We’ve followed that pattern for the past two quarters and we have no intention to change that in Q3. And again, here’s all the information if you want to verify what we’ve told you, including the size of the tranches, when they expire, and the fact that we’ve conducted two successful re-signings so far. With that, I’ll turn it back to Jocelyn. (Jocelyn): Thanks, Ben. And for those of you who are interested, you can also view the time-lock graphic on our website; we have it on our blog. Next up, we have the marketing update from Jon Sanderson, our vice president of marketing. (Jon): Thank you, Jocelyn, and hello everyone. As we are finishing our latest alpha release and moving into beta, I wanted to address ongoing questions we’ve been receiving about our wait lists. Tardigrade, which will be the SLA-backed service built on the Storj network, is what we’re taking to market for developers for their cloud object storage needs. And like all major cloud storage providers, we’re watching and metering the inbound demand for storage space, especially from larger clients. We’ve already had a few potential clients asking us if we could store over 20 petabytes of data, and as you can imagine, we need to ensure we’ll have enough storage available. To do this, we’ll be maintaining and relying on wait lists for both sides of our business until our engineering team feels we’ve hit critical mass.
For storage node operators, we’re going to be building a dedicated auth token page on storj.io that will help us control new node creation. When we are in need of more nodes, it will automatically issue a token. When we are throttling them, it will give an estimate of how long it’ll be before your token is likely to be issued, based on our current growth rate. As of this past weekend, we have issued tokens to everyone on our node wait lists; please make sure that hello@storj.io is on your safe-sender approved email list. We’ve also begun sending out alpha invites for our Tardigrade developer wait list. We’ll be increasing the number of daily invites like we did when we bootstrapped storage nodes. For the purposes of stress testing, these alpha accounts will have a 25 gigabyte limit as part of this alpha phase. When we begin production and launch our network for enterprise, we’re offering the first 10,000 developers who activate their account the equivalent of one terabyte of storage and 330 gigabytes of bandwidth for their first month. We’re also excited to invite and engage our community to help us finalize our new mascot for Tardigrade. Our senior designer DJ, who’s been responsible for new designs on storj.io and tardigrade.io (including the new Tardigrade logo), has written a blog post with a link to vote on the new mascot. We are bringing brand attributes like durability and resiliency, among many others, to these mascot variations. Thank you so much for your involvement and passion for our project as we prepare to launch our enterprise cloud storage service. Back to you, Jocelyn. (Jocelyn): Great, thank you so much. I want to just introduce myself: I’m the community manager for Storj. What that means is that I run the technical community team and live events, and developer relations also rolls up underneath me. So, to the community I represent the company, but inside the company I represent the community, and I help to facilitate that conduit. When I first came to Storj, it was with the mandate to grow a vibrant ecosystem that will eventually sustain tens of thousands of community members. So I spent a lot of time listening and watching for patterns, and I’d like to explain what that means to those of you who are listening. Let me just dial back for a second to talk about the community itself. Many of you know that when Storj started, it was a “what-if” idea by our founder Shawn Wilkinson while he was in college. So if you’re a longtime member of the community, it’s likely you’ve interacted with him significantly. Shawn’s energy and devotion are really the soul of the company and an indelible part of its DNA. As the company attracted attention and continued to grow and change, we realized a few things, and one of those things is that the definition of community broadens as we continue to evolve. In the beginning it was enthusiasts and storage node operators, but now it’s broadening to include all of those people, plus the people contributing to our code base, our partners, our open source partners, and of course developers of all different sorts.
One of the biggest patterns I noticed was that there is a demand for information. We have a lot of information, and we need to be able to get it to you in the fastest and most useful way possible. This sparked a dialogue in the company. Shawn, our founder; Ben, our CEO; JT, our vice president of engineering; and a lot of the engineers here have really strong roots in open source, and they all bring their own experience and perspectives that are very role-specific. In addition to that, several of our original moderators now work in the company. We brought them in-house; they work here full-time, and they manage, control, and create significant pieces of the business across a span of teams. We have people who came out of the community who now run parts of operations, parts of engineering, and parts of strategy, and you may know them by their handles: heunland, littleskunk, and stefanbenten. I just really want to be sure that you know that they are part of the company now, but they’re all still really active in the community. What came out of this dialogue was that we’re opening communications even more and strengthening ties even more, and that’s taking several forms. One of those is that we are taking our engineering conversations out into the open: we have moved our architecture discussions into the discussion area, and we’re openly and transparently having conversations that we’re inviting the community to be part of and to actively participate in. We’re also starting an AMA (ask me anything) series, which will be a monthly series that we’re trying out for a quarter, and we would really love to have you engage with our engineers. In addition to that, we have also changed some of the ways that we are running engineering.
So we have an on-call engineer whose specific duty is to be an available resource to the community. When situations come up that people would like assistance or information about, that means we always have a designated point person that we can escalate issues to and talk to, and they’re available to try to get you the best information as quickly as possible. I can’t guarantee that you’re always going to love every answer that you get, but we’re making the best efforts we can to get you the right information as quickly and efficiently as possible. One of the outcomes of that is that we are going to be sunsetting RocketChat over the summer. RocketChat is the area that most of us are familiar with, where conversations have been happening, and we are moving to forum.storj.io. It’s in preview mode, but we’re nurturing a core group of loyal and active community members. It has trust levels, spam control, and the ability to mark items as “solved.” We’re very excited about that, because it means that when people get a really good answer to a question, we can mark it as a solution, and you can search on those. So it will continue to become easier to get the best information all the time. You can go ahead and make an account now and start building rep; again, that URL is forum.storj.io. As for other ways to deepen your involvement with Storj and Tardigrade, there’s always something that you can do at whatever level you’re at. We have a mix of “help wanted” and “good first issues” with bounties available on our GitHub. We’re always looking for people to become more involved in the community. We actively sponsor meetups: if you’re out there and you have a meetup and you would like some swag, some sponsorship, or potentially a speaker to come, let us know and I’ll help you set it up. Beyond that, you know, just the usual! Follow us on social; we have Twitter, we have the forum, and we’d love to see you there. So, moving on to the Q&A section: before we dive into our deep well of questions, I would like to know if we can get a joke from Shawn Wilkinson, in the grand tradition of Storj Town Halls. (Shawn): Absolutely. Absolutely. So, I don’t know how we started this, but I’ve become a Storj stand-up comedian. We do some jokes in front of our all-hands, and it’s met with a lot of chuckles and a lot of laughs, so I was asked to add this to a town hall. I have a kind of storage-themed joke for you guys: why was the computer tired when it got home? (Jocelyn): I don’t know, Shawn, why was the computer tired when it got home? (Shawn): Because it had a hard drive! (Jocelyn): That terrible joke is how people know that we actually are broadcasting live; the wails of pain could not possibly be faked. Thank you, Shawn, that was great. Let’s move to the first question. We’ve been collecting these questions ahead of time, and we try to collect them into groups as best we can, but we also have questions that come in live.
If you have questions that are coming in, send them to ask@storj.io and we’ll be adding those in as we go. First we’re gonna handle the ones that we collected ahead of time. This first community question is: “I have a Linux-based mining rig and I’ve thought about running a v3 node. Is it possible?” (Shawn): Yes, it is possible. I’m actually running that as well on my mining rigs. ethOS is Linux-based, so you can install Docker and run our storage node software on it. So yes, it is possible. (Jocelyn): Thanks. Next question: in recent weeks there’s been a lot of talk about uptime requirements and disqualification. Currently there’s no way for someone running a node on a residential internet connection to tell if they’ve been, or if they’re at risk of being, disqualified. This person is wondering: is there a way of adding a visual indicator of someone’s standing in the network, in the dashboard or the logs? I’m going to kick this question to Brandon Iglesias, our product manager. (Brandon): Thanks, Jocelyn. I’m actually really happy that someone asked this question, because as part of our Pioneer 1 release, which is our next major development milestone, we’re going to be introducing the storage node operator dashboard into the storage node software. When you’re running your storage node, you’re actually going to see a local URL; you just click on that URL and you get a graphical user interface that gives you statistics about your node, such as your uptime, your audit failures, how much storage you’ve used, and how much egress you’ve used. That graphical user interface is only going to grow over time, and we’re going to be adding a lot more detail into it. So expect to see what we’re calling the SNOboard in the next few weeks, once we launch Pioneer 1. (Jocelyn): Great, thanks. The next question is: can you share some stats about the scale of the current network, and any usage stats or projections? I’m going to toss this one to Ben. Ben is our interim CEO and executive chairman. (Ben): So, to clarify, when we’re talking about our current network here, we’re talking about v3, and of course v3 is still in test mode; we’re growing it very deliberately, as opposed to the last time that we grew a network. Currently we are at over a thousand nodes and over 1.5 petabytes. We’re looking to roughly double that by the time we get to Pioneer 1, double that again when we get to Pioneer 2, and double that again when we get to Voyager. At that point we will have what we believe is sufficient capacity to handle the next three months of data after that. But we do see, from potential customers who’ve been waiting in the wings, several customers and prospects who have well over a petabyte each, so we are expecting that as we launch and grow, we should be seeing tens of petabytes in the near future. (Jocelyn): Great, I think that’s exactly the information that people would like to hear. The next question is: “I started a node on the 29th of last month. What surge pricing can I expect?” John Gleeson, VP of operations, would you like to take this one? (JG): Yeah, sure. The way surge pricing works is that anybody who started a node and got their first piece of data before the end of July will receive the 5x surge pricing, and anyone who starts after that date will get the 4x surge pricing, and we’re going to run those programs for at least three months. One thing we should probably mention is that our stalwart v2 node operators, the ones who are already registered, will get the 5x surge pricing as well, but we are not looking to get additional v2 storage node operators, so there is no surge pricing for new v2 signups, only for the existing ones. (Jocelyn): Okay, thanks John, and thanks for clarifying the difference between the versions as well. The next question is: how can I add a new drive to my existing storage node? This is for JT Olio, our vice president of engineering. (JT): Sure, that’s a great question. When people ask about adding a new drive to their existing storage node, there are a few things worth making clear first. The first is that probably the biggest capacity constraint in our network is bandwidth, so please take a look at our storage node estimator to see what you can earn with the bandwidth and the disk space you have available, because adding more hard drives won’t actually add you more earnings
unless you also have the bandwidth to support it. So please make sure you’re keeping track of your bandwidth usage before you just add more disk space. Secondly, there are trade-offs when you add more disks to the same node: if you have any sort of audit failure or disqualification, you may lose that entire node, so in some cases it’s better to target one node per hard drive, or one node per server. You don’t really need to do RAID, because we already have data durability built into our product. If you’re using RAID, you’re either increasing the risk of data loss, and potentially the risk of a disqualification and losing your held value, or you are increasing the amount of hard drive space you need for an equivalent amount of storage. So RAID doesn’t really gain you as much as you would gain by just running more nodes with the same hardware. We roughly recommend one node per hard drive, and we are at the point where we are inviting people, so if you would like to get more waitlist invites to add more nodes to our network, please sign up again for our waitlist; we are at this point allowing follow-on signups. At the end of the day, if you do still want to grow your existing node and add more hard drive space to it, probably the best approach is an LVM approach, though I wouldn’t recommend that; I would recommend signing up for another node for your new hard drive. (Jocelyn): Okay, great, thanks JT. Let’s move to the next question, and this is a question for John Gleeson: is the v2 storage network still active? (JG): Yeah, Jocelyn, the v2 network is still active. We still have a number of customers who are using the v2 network, and we do say thank you to the storage node operators who have continued to operate quality storage nodes to keep that experience going while we’ve worked on v3. As we get a little closer to production, we’ll announce the process for moving from v2 to v3, and we’re already actively engaged with a number of customers in the planning phases for that. But again, just remember: if you are operating a node on the v2 network, you will get the surge pricing if your node was active before July 29th. (Jocelyn): Okay, great. And since we’re talking about forward momentum, the next question is: where do you see Storj in the next two years or so, in terms of price and projects? (JG): Well, let me take that in two parts. The first part is on the pricing: our goal is to always be between a third cheaper and half cheaper than the other large cloud providers, the Amazons and Googles and such, and we’ll continue to work with the help of our dedicated community, and I’m sure we’ll be massively successful in offering a fantastic product at a great price. In terms of the projects, I think we’ll see initial success in archival and general object storage use cases, but as the platform matures and new features and capabilities are added, I think we’ll see a much wider range of use cases, everything from compliance-oriented storage all the way through content delivery networks, so offering a CDN service. The architecture is uniquely adapted to some of those things, and I think we’ll see a lot of success there. We’ve got over 10,000 developers on our wait list and a long list of partners who are just itching to get onto the platform, so we’re expecting fairly accelerated growth and broad usage. Decentralized infrastructure applications are really in their infancy, and honestly, as those capabilities and technology segments evolve, additional combinations and services that become relevant will probably be added as well. (Jocelyn): Great, thanks John. The next question, I’m going to paraphrase just because it was a little bit long, but this is a question that people have about how Storj would deal with a small number of users doing something bad, like storing illegal or copyrighted material on the network, and they’d like to know how we would plan to react to something like that. (Shawn): Okay, so this is Shawn here. This is a common question that we get, and it’s first worth noting that Storj operates as a zero-knowledge network, so we’re not going to know what the contents of the data you upload to the network are unless it’s publicly shared somewhere. So if you want
to upload your cat pictures, or your hundred-page manifesto on why every third Saturday should be an ice-cream-free day, then you can do that, and no one, including us, can view or remove that data. But as you mentioned, there are a small number of users that are gonna upload illegal or copyrighted content and publicly share it, and if that’s brought to our attention through a valid court order, or we’re notified by law enforcement, then on our Tardigrade satellites we’re gonna comply with the laws of the relevant jurisdictions and we’re gonna remove that material. Now, our goal is to be a decentralized, open network where no one entity, including us, can control the network and its content. So again, we are gonna comply with laws and regulations for our satellites, but there are obviously going to be very different policies between countries and between satellites, so we are gonna be releasing a framework to help both satellite operators and users navigate that environment, and we’re gonna be releasing that sometime in the future. (Jocelyn): Okay, thanks. Let’s go to the next question, and this one has to do with pricing models: does a pricing model already exist for your v3 network, and will it work similar to Siacoin? I think this is appropriate for John Gleeson. (JG): Sure. There is a pricing framework that exists for the v3 network, and we are getting ready to publish the pricing. We do charge differently than Sia or some of the other decentralized projects. Our goal is, of course, to be a developer-friendly platform for storage, so we charge fixed prices in US dollars and we provide predictable pricing to customers, which, frankly, consumers and developers using cloud products have come to expect. We provide the flexibility to use our token to pay for storage services, providing users with the benefit of a growing token ecosystem, but again, it’s a utility token designed specifically for transacting storage and bandwidth on the network. (Jocelyn): Okay, great, thank you. This is another question I’ll paraphrase, but the question is whether we’re going to have an update for Windows Home or not. Let’s give this question to Brandon; again, Brandon is our product manager here at Storj. (Brandon): Hey, thanks Jocelyn.

so this is a good question, and it's actually part of the work that we have slated for our Pioneer 2 beta release, which is the release we're going to be working on after we launch Pioneer 1. We're going to be adding automatic updates to the storage node software, at which point we'll be releasing official binaries for all operating systems. But there's also an effort within the Docker community to start supporting more Windows functionality, so we know that Docker is actively working on this, and if Docker beats us to it, then we might be able to support Windows a little bit sooner than our actual product roadmap. Either way, it will absolutely be within the next few months.

Great, that's really good news. The next question is around logging tools: are there going to be more advanced logging tools built into the Storj software itself? I'm going to give this one to JT, our vice president of engineering.

Sure, yes. As Brandon mentioned for a previous question, we're building the storage node operator dashboard, which is going to show a lot of detail to storage node operators about what's going on with their node. There is also an effort we're working on that I'm going to let Shawn talk about, around showing more network health.

Yeah, so one of the projects currently ongoing is a network stats dashboard. In addition to giving storage node operators the information they need to operate and be good nodes, we want to give our users the ability to peer into the network and see how it's doing and how it's growing. So that is a project in process; it's essentially a dashboard showing network stats for the entire Storj network.

Okay, great. Thank you Shawn and thank you JT. The next question is, when it comes to the financial side, will Storj buy back tokens from the market to sell to customers on Tardigrade.io? This is another somewhat common question, so John Gleeson, would you like to take it?

Sure. So the STORJ token is a utility token, and it's designed to facilitate the transaction of excess storage capacity and bandwidth between satellite operators and storage node operators. If our network is as successful as we expect it to be, with a lot of third parties operating satellites on the network, it's very likely satellite operators, including Storj, will have to buy tokens on the open market to compensate storage node operators, so we really look forward to achieving the kind of scale of the network and the complexity of the ecosystem where that's occurring. There are three main things we can do to acquire those tokens to power the network: we can buy tokens on the open market, as the question indicated; we can offer customers discounts for paying with STORJ tokens if we need to bring more STORJ tokens into the company; and we can access our operating reserves. And pretty much anyone who's operating a satellite can buy tokens on the open market or offer the discount as well.

Okay, thank you. Let's go to the next question. This is another longer one, so I'll paraphrase a little bit; for the sake of transparency we don't like to heavily edit people's questions, so we do like to let you see them, but we're also trying to move along. This question asks if someone can use the network in a way where they share storage capacity as a storage node operator and in turn receive 30 or 40 percent of the capacity back for their personal use. I'm going to give this one to JT.

Yeah, so one of the major benefits of our design of the Storj V3 network is that you can be a storage node operator only, you can be a Tardigrade customer only, or you can be both. I think probably the most useful thing to point out about this type of use case, where someone is both contributing storage space on their hard drives and earning STORJ, and also buying storage space by storing data, maybe using Storj to get off-site backup in addition to having excess storage space they want to trade to others, is that there is an expansion factor in our network. When you store one terabyte of data in the Tardigrade network, that translates to two and a half times that amount of data on storage node operators' hard drives. So for every byte that you store, there will be two and a half bytes required in the network, so it can't really be one for

one tit-for-tat-style storage: if you want to be both a customer and a supplier, you'll probably have to supply more than you use. But that said, it's absolutely the case that we encourage people in a lot of different markets to use Storj to help make their hard drive resources and needs more elastic.

Okay, great, thank you. Let's go ahead and see the next one. This question is around financial compensation: in the case that demand remains low and storage providers have higher costs than earnings in the beginning, is there any financial compensation in that case? I'm going to ask John Gleeson to take this one, please.

Sure. So the interesting supposition in that question is that demand remains low. We have over 10,000 developers on the waitlist, and we have a number of multi-petabyte partners who are just chomping at the bit for us to reach our production level of service, and frankly even just the beta level of service, so we are really excited about the amount of demand for our services. But as we're trying to grow the network, we're absolutely using every tool in the toolkit to build that supply and build the storage capacity to be able to support that demand. So you're seeing us take actions like implementing surge pricing for a few months to grow the number of quality storage nodes and to keep them interested and active as we become able to put that demand onto the network. Similarly, we're putting test data onto the network as well, both to test and verify the capabilities of the network and the amount of space we have available, and to provide a financial incentive to those early adopters who are joining the network in advance of when those demand partners can actually start putting data on the network.

Okay, thanks. Let's go ahead to the next question. I'm going to give this one to you also, and this one is whether the earnings in STORJ tokens that are held back for long-term incentivization keep the amounts as they were earned, or whether the amount gets adjusted according to the STORJ token price at the time the delayed payout occurs. So this has to do with the held amounts.

Sure. So the held amount functions very similarly to the way both pricing and payouts to storage node operators work: we track those amounts in US dollars, and that insulates the parties from the fluctuation of the token price as time goes on.

Okay, let's go ahead to another financial question: is there compensation for the bandwidth when the user uploads data to the node? I'll take that one also. There isn't compensation for storage node ingress bandwidth on the Storj network; there are a number of conventions in terms of what customers pay for, and what we try to do is align the customers' pricing with what the storage node operators are compensated for. So in our network, of course, we pay our storage node operators for the static data they store and also for the egress bandwidth associated with customer downloads. In addition, we also pay for the bandwidth associated with file repair: as storage nodes fail and we need to maintain the high durability of files, if we egress pieces off of storage nodes for repair, we'll compensate them for that as well. One last thing: our goal is to pass along about 60% of what we make on to the storage node operators and try to keep that balance.

I'm really glad that you mentioned that, because the next question has to do with passing along 60% of the revenue. Someone would like to know what the other 40% is used for; Siacoin charges a network fee of approximately 11%. Can you break some things down?

Sure. So that 40% goes to a number of different things. Of course there's running the Tardigrade network: we are operating a number of satellites, and we are engaged in activities to drive demand for the network, so for the cost it takes to run those servers, operate that service, and generate that business, part of the money goes to compensate Storj for that effort. In addition, we also have things like the Open Source Partner Program and technology partners, where we share a meaningful portion of our revenue with partners who bring demand to us. And of course, finally, there's just the operation of a functioning infrastructure-as-a-service company, so Storj Labs itself is funded out of that 40% as well.

Okay, great, thank you so much. Now let's go to some technical questions that we collected ahead of time. This one I'm

going to give to Brandon Iglesias, our product manager. Someone is asking: do I have to have a static public IP address, or can it change at any time? Either you or JT Olio could answer this, I think.

Okay, yeah, I'll answer. So as a storage node operator you actually do not need a static public IP address. The major thing that needs to happen with our current release, while we are in the alpha and beta, is that you set up dynamic DNS. There are a number of dynamic DNS services that will provide you a public hostname for your IP address as it changes, and many consumer routers have support for updating a dynamic DNS service, so that even if your dynamic IP changes, you still have a hostname that's reachable. You'll want to configure your storage node to use this dynamic DNS hostname so that the network can find it. This is kind of the same sort of experience you would have if you were setting up Tor or other services. In the longer-term view, after we've had a few more releases, we are going to upgrade things so that even this step is automatic, and you won't need to worry about dynamic DNS at all.

Okay, great, thank you. Next question: is there any personal data stored in the network that could be used to trace back providers and users by third parties? Shawn, I think you might be a good person to take this.

Yeah, I'll take a bit of this. So we aim to collect the minimal data needed to effectively run our service. We're really not looking at collecting a bunch of personal data on our users; that's not useful to us. We're just looking for technical information that can help the service run better and faster. You can actually take a look at some of our privacy policies listed at the bottom of our site, so if you're curious as to what specific information we collect, that is on our website for you to look at. Yeah, and you can also look at our Storj Share information policy. The Storj Share information policy lets you know that we collect information related to IP address, network and traffic information, the node ID, payment details, space information, things like that, which are required for the mechanics and the operation of the software. But we're not a data aggregator; we are not a service that offers you some sort of free ability to use our software and then aggregates data from multiple sources and markets to you. We are a storage infrastructure-as-a-service provider, and what we collect is just what's associated with the operation of the network.

Okay, great, thank you. And I have a question here I'm going to give to Jon Sanderson, our vice president of marketing. Someone out there is wondering: why don't you provide a Telegram channel to easily answer questions, collect FAQs from the community, and share them publicly, rather than using emails? I'm going to swing the mic over to you, go ahead.

Thank you very much. So we have debated a Telegram channel, and there have been a lot of spam accounts created in our name. What we've decided to do is focus our efforts in our new forum that Jocelyn mentioned at the beginning of this town hall, because we believe that's the best place to engage and interact with us and ask questions. We do have an integration between our forum and Telegram on the Storj Labs Tardigrade official account; that will be an announcement-only channel, so that way you know there's no one out there running any scams or claiming to be us and offering tokens in some type of contest, which is pretty rampant around Telegram.

Okay, thanks. So this brings us to the end of... whoops, sorry, one more question. This person appreciates that each satellite is separate and independent in its own right, but notes it's also a critical single point of failure for the customers using it. How can you assure potential customers that their chosen satellite will not have downtime? What technical measures will be implemented to guarantee this? JT?

Thanks, Jocelyn. So yeah, this is a great question. I think the most important point to make about our architecture is that a satellite is not a single server; a satellite is better understood as a trust domain, a collection of servers and services that operate together. Frankly, a satellite is really a constellation of smaller parts, in our nomenclature, that all work together to make sure that a user's metadata and account information is protected and reliable. Most importantly, if any one of those servers goes down, we use best-in-class

practices for running a distributed infrastructure for that satellite, so if one satellite app server goes down, that really shouldn't affect you any more than, say, one of Google's servers going down or one of Netflix's servers going down. In terms of our longer-term goal of being robust and resilient to any satellite going down entirely, maybe losing an entire region or something like that, the way we see our roadmap mirrors how the electric car has evolved, and I think we've talked about this in an earlier blog post. With early electric cars you had the excitement about the electric car, but you didn't really get a lot of range, so the next step to bring electrified cars to the market was the Toyota Prius, which is kind of a midway point; then the Toyota Prius became a plug-in, and now we have Teslas, which are kind of the best of all worlds. No one is at the best-of-all-worlds point yet, so we're getting there through that same hybrid, plug-in, fully-electric progression. Right now we're a hybrid in terms of good infrastructure for each satellite: anyone can run their own satellite, and our satellites are run with best-in-class practices for making sure they don't have downtime.

Okay, that's a really good analogy, thank you. So now let's answer some community questions. That brings us to the end of the ones we collected ahead of time, but we did have questions come in as we were speaking, so I'm going to jump into one that came in through YouTube. This person would like to know why there's no network reset for Tardigrade Pioneer 2.

Yeah, so actually our last network reset is going to be when we launch Pioneer 1. After we launch Pioneer 1 in the next few weeks, we will no longer have any network resets for the rest of the life of the network. So if that wasn't clear in the slide: after Pioneer 1 there will be no more network resets.

Okay, great, thank you. The next question is from a person who says we mentioned it's not a good idea to use multiple hard drives for one node. This person is currently using spare room on a RAID 6 array, and they're thinking that could lead to better performance and stability; is this kind of setup not advisable?

So that's a great question, and to go a little bit deeper: certainly for this particular user, we do prefer right now that people use excess storage on whatever they currently have. We're not asking people yet to go and buy new hardware, so if you already have something set up, it's totally fine to use what you have. In terms of what type of hardware you do want to set up, it's worth talking about the different options for RAID. There's obviously RAID 0 and RAID 1, the two extremes, where you have either striping or replication; in both cases it's not really a good fit for Storj. With striping you're increasing the risk of getting disqualified by any hard drive failing, and with replication you're not able to use all of the hard drive space to earn. The way that RAID 6 and other RAID levels work is that they use erasure codes, and erasure codes are already something the Storj network is doing for you. So RAID 6 may slow write performance somewhat, in terms of the extra work it has to do, and though it does balance across the hard drives locally, the Storj network is already doing that at a higher level, so you're duplicating the effort of the network. We probably still have some tuning to do to make it more incentive-aligned, but our long-term goal is to make it such that the best thing you can do as a storage node operator is just give hard drives directly to the Storj network, and the network will do its best to line up the incentives so that's the best thing for the storage node operator.

Okay, thanks.
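JT's point about RAID duplicating the network's own erasure coding can be made concrete with a little arithmetic. This is only an illustrative sketch: the parameters `k` and `n` below are hypothetical, chosen so the expansion factor matches the roughly 2.5x figure quoted earlier in this town hall, not the network's actual Reed-Solomon settings.

```python
# Illustrative sketch of network-level erasure coding (hypothetical parameters).
# A file is split into k data pieces and expanded to n total pieces; any k of
# the n pieces are enough to rebuild the file, so up to n - k pieces can be lost.

def erasure_expansion(k: int, n: int) -> float:
    """Raw storage used divided by logical file size."""
    return n / k

def raw_bytes_stored(logical_bytes: int, k: int, n: int) -> int:
    """Total bytes that land on storage node hard drives for one upload."""
    piece_size = logical_bytes // k   # each piece is 1/k of the file
    return piece_size * n             # all n pieces go to distinct nodes

k, n = 20, 50  # hypothetical: any 20 of 50 pieces rebuild the file

print(erasure_expansion(k, n))         # -> 2.5 (the expansion factor)
print(raw_bytes_stored(10**12, k, n))  # -> 2500000000000 (1 TB logical, 2.5 TB raw)
print(n - k)                           # -> 30 pieces can be lost before repair
```

Because the network can already survive the loss of n - k pieces spread across many machines, a local RAID 6 layer underneath a node protects the same pieces a second time; that's the duplicated effort described above, and why one drive per node is the suggested direction.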
Let's go to a question that came in by email. This person would like to talk about the business model; they perceive some issues they'd like to discuss that they think may point to a flaw. They say that if it's not profitable to run a storage node, they think the requirements are too high for a residential farmer to be online 99.5% of the time, and they're wondering how we weighed the options and made the decision to build a cloud service the way we did, and whether it may be feasible to focus more on backup storage at a low cost for customers, with fewer constraints on farmers. I'm going to

give this one to Ben Golub.

Sure, so it's a really good question. What I would say is that we've done quite a lot of research on the typical costs to run a residential storage node, and we're very confident that as the network gets to scale, as we start bringing all of this pent-up demand online, it will be profitable for the vast majority of residential and commercial storage node operators leveraging unused capacity. Now, everybody's economics are different; some people live in an area where their bandwidth is expensive or very limited, or their electricity is very expensive, so we've provided a calculator to help you understand what your earnings potential is. But we're confident that for the vast majority of residential operators, this will be a profitable way to use their unused capacity. With regard to backup versus other use cases, we certainly are providing backup, both consumer-level and enterprise-level, but as we move up the chain, generally speaking, the profitability both for storage node operators and for the network as a whole goes up with more bandwidth-intensive use cases, so as you move toward CDN it'll actually be more profitable for everybody.

Okay, all right, thank you. Let's go to a question that came in by Zoom. This person is wondering: what's the projection of earnings per terabyte for farmers? Do I need to clean and maintain my cloud server, or do I get penalized for this if it's once a week?

I can just continue on that, right? Yeah. So again, we've provided an estimator so you can estimate what you'll be earning at various levels of capacity and bandwidth. As far as taking your node offline on a temporary basis, if you need to do upgrades, that's actually fine; our network is fine with people taking their node offline in a planned way. What hurts your reputation, and hurts the network, is if you take it offline in an unplanned way. So we have capabilities for people who are operating nodes to do routine maintenance without impacting their reputation, as long as they use the planned-downtime mechanism that we've provided.

Okay, great, thanks for clarifying that. I think that's another one of those common questions where you have people who really want to make sure they're doing everything right, and part of that is maintenance, so the key point is just that you need to do it the right way: you're not being penalized for doing the right stuff, you just need to do it at the right time. So we have a question that came in by email. This person signed up on the waitlist after their node got disqualified; we stated earlier in the presentation that all invites have been sent out as of this week, but they say they didn't receive a token. Brandon, can you comment on that?

Yeah, absolutely. So the best thing to do is to contact our customer support line and just let them know that you didn't receive your token. Another thing you can proactively do is just sign up on the waitlist once more, and we'll send you a token right away. So sorry about not receiving your token, and we'll try to get you on as soon as possible.

Thanks, and for anybody wanting to contact support, it's just support@storj.io. The next question is one that came in by YouTube, and people are asking: when can we see any token price movement? I'm going to give this one to John Gleeson.

Sure. So the STORJ token, as we've talked about a number of times here, is a utility token intended to facilitate transactions for storage and bandwidth. Now, we've also designed the network so that there are multiple user classes, so you can actually separate out the functions of people who want to store data on the network and people who want to share excess capacity on the network. We understand that not everybody is going to use those tokens to purchase storage, and people who sell their storage and bandwidth may want to exchange those STORJ tokens for something else, so the STORJ token is listed on multiple exchanges to help facilitate those types of transactions. We're not really responsible for, or active in, any activities related to the value of the token, but on those exchanges you can check and see what the values are, if you are in the user class where that's relevant to you.

Okay, great. The next question I think you're going to like, Brandon: someone from YouTube is wondering if someone is going to address the node reputation model, and whether there will be some sort of visualization.

Yeah, absolutely. So as we mentioned earlier in the presentation, we're going to be releasing the storage node operator dashboard, which we call the SNOboard, and the SNOboard actually has a ton of data visualizations about your storage node. You'll be able to see your storage node's audit statistics, uptime checks, and how much egress and storage space you've used per satellite, so it's going to give you a lot more information, and the storage node operator dashboard is going to continue to grow with new functionality. The version that we're

releasing in the next couple of weeks is the base, and it has a ton of functionality already, so we're really excited about this, because it's going to give storage node operators a real microscope into how their nodes are performing.

Okay, great, thank you. When will graceful exit be enabled, and doesn't this help in giving more hard drive space to an existing storage node?

Yeah, absolutely. So graceful exit is actually a feature we have slated for our Pioneer 2 release, and really the purpose of graceful exit is to allow storage nodes to leave the network without losing their escrow amount. We have a great blog post on how escrows are collected and calculated, I believe by John Gleeson, so if you want to check that out, it's on our blog. But if you are a storage node operator and you want to leave the network for any particular reason, you're going to have this feature called graceful exit, and essentially what it's going to do is allow you to transfer the data you're storing for the network onto other nodes; the satellite will then update its metadata and keep track of where you sent all those different pieces, so the files' durability stays high on the network. As I mentioned, that feature is going to be implemented as part of our Pioneer 2 release, so it's up and coming, and we've actually been working on a design doc, which I'm going to post to our forum, so if you want to read more about it, you'll be able to check it out there.

Okay, awesome, thank you so much. Let's see the next question. This one came in by email, and someone would like to know: how do I know if I have a bottleneck on my network, or if there's just not enough demand? I think this goes back to what we were discussing a moment ago.

Yeah, so one of the things worth always having at your disposal as a storage node operator is a broader set of system administration tools. If you have a bottleneck on your network, there are certainly things we can do on our end to diagnose it, but we can't diagnose everything, so being able to inspect your router, inspect the software on your computer, and see if there are other things on your network causing issues is always valuable; there are lots of good tools that exist already. At some point in the documentation for storage node operators we may be able to provide some recommendations, if storage node operators find a certain suite of tools that tends to help them diagnose network issues. In terms of what we can detect on our side, we will be exposing everything we can about bottlenecks on the storage node dashboard; for the most part, the reputation information you get on the dashboard will be everything you need to find out whether the satellite thinks you have a different amount of bandwidth than you think you do.

Okay, great. I'd also like to mention that if people have feature requests or suggestions, or importantly, if you want to vote up things that are more important to you, the way we track that inside the product management cycle is... Brandon, do you want to give the link?

Yeah, so if you go to ideas.storj.io, you can see all of these really cool features that our community has asked for. Those features get voted on, and eventually they make it into our product roadmap and get implemented, so it's really a great way to see what the community wants built from us and what we're currently working on.

Okay, thanks. We have a question that came in by Zoom, and this person's wondering: could we host a satellite server? If so, when can we apply, and what will be required to run a satellite server? I see a couple of people who could take this, but I'm giving it to Ben.

All right, yeah. So there are really two ways to think about this. First of all, we are open source software, so anybody can run their own satellite and build up their own storage network. Storj Labs is running a branded network called Tardigrade, where there are satellites we run as Storj Labs, and trusted partners of ours will run satellites as well. We'll be releasing more details on what it will take to join the Tardigrade network, but we are absolutely looking to have people who can run high-quality satellites do so. As you might expect, there will be pretty stringent requirements to join the Tardigrade network, including around your level of expertise and the SLAs you can provide to the network in terms of running satellites.

Great, thanks. And I guess that all goes back to our pledge of being performant and resilient and secure; it means that Tardigrade

needs to be pretty stringent. So the next question is one that came in by email, and this person is asking: when nodes are grouped together by IP filtering because they share the same IP, do they get ungrouped when they move to different IPs? For example, a storage node has a failure and moves to another IP temporarily while its connection is restored, and this other IP already hosts two nodes; what happens in that case?

That's a great question. In the way our system is currently architected, nodes are marked with the node network they're on; we have a concept of a node network, and it's something we're going to continue to refine over time. Right now it's actually the first three octets of the IP address: we basically use a netmask to determine which addresses are related, and that just happens live. So as nodes update, if their IP addresses change (and obviously we support dynamic IP changes, as we talked about earlier), that's totally fine; the node just becomes part of a new netmask in terms of our database and gets selected on its own. It happens automatically, so you shouldn't need to worry about it. There's not so much a concept of actual grouping as there is a concept of, when we select nodes for an upload, trying to geographically distribute the nodes we select by randomly selecting networks as opposed to randomly selecting nodes.

Okay, thanks JT. This next question is another example of a common type of question that comes through, and we actually love the common questions, because if a lot of people are asking, then just by answering it here we can get information to a lot of people who need it, so thank you for sending it in. This question has to do with operation from home. This person says: a lot of users operate from home, myself included, and while there's an incentive to get some used enterprise hardware, we can't guarantee the power and internet connection utilities without large additional costs, and this might get nodes disqualified, in addition to possible hardware failures. In my opinion, if a node is online for two years with 99.3% uptime and then is offline for a couple of days, say two to three days, and is eventually disqualified, is this the right course of action? After two years it's very likely the node has suffered a failure and will return to the network. So again, this is one of those questions that comes up frequently; there are a couple of people here who could answer it, and I'm kind of looking around the room to decide who I'm going to kick it to. I'm thinking Ben is probably the best person to weigh in on this.

Okay, yeah. So right now we don't recommend purchasing hardware for the network; for most people, the best place to start is by leveraging their unused capacity. Enterprise hardware won't be helpful unless you have enterprise-level bandwidth; again, as JT said earlier, what we're seeing right now is that for most people, the constraint on how much they can earn is bandwidth rather than hardware capacity. As for the other question, which is about going offline: again, there are really two ways of going offline. There's going offline that we understand, if you're moving or doing routine maintenance, and we have a way for you to do that which doesn't impact your reputation, as long as you let us know in advance. We should also mention that nodes aren't immediately disqualified for unplanned downtime; they're placed into a grace period, and your reputation is impacted, but you can earn your reputation back again. If your ISP goes out in an unplanned way, of course you can't control that, but you'll be able to earn back the reputation.

Okay, thanks. Yeah, I think that's a really great real-world approach to a real-world issue. We have a question that came in by email, and this person says they have the option to expand both their bandwidth and their HDD space as Storj grows. Right now they're keeping things lower because there isn't a big push in alpha for high capacity, but as beta and production come, when would we recommend boosting, and is there a benefit or risk to making oneself bigger? John Gleeson, would you like to take this?

Sure, I'll take this one. So first and foremost, as Ben mentioned, leveraging what you've got right now is really the best bet; we're sort of taking the Airbnb approach to hard drive space by leveraging existing capacity. Now, what you'll see from us, following our initial growth period here, where we're doing things like surge pricing and pushing a lot of data and doing a lot of marketing around the need for more storage nodes, is that as we sort of

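Going back to JT's earlier answer about node networks: the /24 grouping and select-by-network behavior might be sketched roughly like this. The function names and selection logic here are illustrative only, not Storj's actual implementation.

```python
import random
from collections import defaultdict

def node_network(ip: str) -> str:
    """Group an IPv4 address by its first three octets -- a /24 netmask."""
    return ".".join(ip.split(".")[:3])

def select_nodes(nodes: dict, count: int) -> list:
    """Pick at most one node per /24 network, so an upload is spread
    across unrelated networks rather than across co-located nodes."""
    by_network = defaultdict(list)
    for node_id, ip in nodes.items():
        by_network[node_network(ip)].append(node_id)
    networks = random.sample(list(by_network), k=min(count, len(by_network)))
    return [random.choice(by_network[net]) for net in networks]

nodes = {
    "node-a": "203.0.113.10",
    "node-b": "203.0.113.55",  # same /24 as node-a: one shared "node network"
    "node-c": "198.51.100.7",
}
print(node_network("203.0.113.10"))  # 203.0.113
print(len(select_nodes(nodes, 2)))   # 2 -- never node-a and node-b together
```

Because the grouping is recomputed from whatever address the node currently advertises, a node that changes IPs simply falls into a new network on its next update, which is why no manual ungrouping is needed.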
As we level off and get to the capacity we need, and when we're ready for greater capacity, we'll communicate that pretty broadly and aggressively. We'll implement tactics like surge pricing, loading up data, or doing performance or load testing; those activities tend to reward nodes with better bandwidth, better capacity, or better performance. So as you see us doing those things, those are probably good times to make those sorts of changes. Again, we're pretty transparent, so we'll communicate when we're doing those things temporarily and why, and of course you can always participate in the town halls and ask questions like this. So now's not necessarily the best time to be doing that, but I think there will come a time in the very near future when it will be an appropriate thing to do.

Okay, great. Following on to that, I have a question from YouTube, similar to one asked earlier: if you just give a hard drive to a storage node and that drive fails and needs replacing, how is that going to work? It depends on the kind of failure. If the hard drive failure is complete, then the storage node is disqualified, because it lost data, and this is one of the major reasons we recommend more nodes instead of more hard drives per node: if a node has some sort of failure, you'd rather only lose a fifth of your reputation than all of it. It's definitely better to have compartmentalized reputation in terms of your hardware failures. If the hard drive failure is partial, the graceful exit process will allow you to transfer the remaining data off of that node onto a new node ID that will continue to have good reputation. So having compartmentalized reputation and compartmentalized nodes helps you limit the damage to your reputation if you have a complete hard drive failure.

Okay, thanks. And as long as I have your attention, I have a question from Zoom, which is: what does neighborhood size mean, and is that a geographical designation? Sure. In the storage node dashboard we have a field that says neighborhood size, and the neighborhood size on the dashboard is actually a measurement of what's going on in our Kademlia routing table. That's probably a little inscrutable for people who aren't familiar with how distributed hash tables or Kademlia work, but it's a measurement of which other nodes on the network are nearby, not geographically but virtually, using the Kademlia XOR distance metric. The main use for us as developers on the Storj project is confirming that the network is healthy and working.

Okay, thanks. Let's see, I'm going to give a question to Brandon, and then I'm going to give one to you, John Gleeson. Brandon: will the SNO board only display data for Storj-run satellites, or will it display for all satellites? No, and this is pretty exciting: the SNO board is actually going to connect to all of the satellites that the storage node is doing business with. So let's say there are a couple of Tardigrade satellites that you're connected to and storing data for, and then there's another satellite that someone is running on their own, and you're also connected to that satellite and storing data for it. The storage node dashboard is going to be able to show you metrics for your node for each of those satellites, which is pretty awesome. Another thing it's going to do is tell you how much combined storage space and egress you're using across all of those satellites; it breaks it down, and you're going to be able to drill into each of them.

Okay, that's really great, thank you. I agree, it's very exciting news. There's one question that came in from someone who is a bit of a latecomer. They're saying: oh sorry, I just tuned in now; was there any more information in regards to escrow tokens for people who have recently been disqualified? Let me repeat that, since this person came in late and didn't necessarily hear all of the answers we gave earlier. For people who are tuning in late, by the way, we are putting the recording on YouTube, so you'll be able to access it later and digest it at your own rate. We also try to put in time codes for the different sections, so if there's a particular question that you remember hearing that you'd like to listen to again or just review, give us a little bit of time to go through the material and you'll be able to open the description box on YouTube and go to the timestamp of that particular area.

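The Kademlia XOR distance metric JT mentioned can be illustrated with toy 4-bit node IDs (real node IDs are far longer; this sketch only shows how "virtual" closeness works, independent of geography):

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia distance between two node IDs: bitwise XOR read as an
    integer. Smaller means 'virtually' closer, regardless of geography."""
    return a ^ b

def nearest(target: int, node_ids: list, k: int = 2) -> list:
    """The k IDs closest to target in XOR space -- roughly what a
    routing-table neighborhood is built from."""
    return sorted(node_ids, key=lambda n: xor_distance(target, n))[:k]

ids = [0b0001, 0b0010, 0b1000, 0b1011]
print(xor_distance(0b1010, 0b1000))  # 2
print(nearest(0b1010, ids))          # [11, 8] -- the two closest by XOR
```

A dashboard "neighborhood size" is then just a count of how many known nodes fall within this kind of XOR-nearby region of the routing table.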
So, diving back into the question: it's whether we have any information in regards to escrow tokens for people who have recently been disqualified. Got it. I think when they say escrow tokens, they're talking about the held amount. If you're a storage node operator and you haven't already, please hit the blog; there's a great blog post that explains the held amount process, and there's a link there to the storage sharing terms and conditions that explain how all of that works. It's worth pointing out that if you don't actually know whether you're disqualified, you might not be, and we did un-disqualify some storage nodes, basically because of some technology issues that were unrelated to the operation of the storage node itself. We are continually tuning the system to minimize disqualifications where they probably shouldn't happen, and really focusing on cases where nodes are either poorly operated or data is lost; that's what disqualification is for. As we implement more technology like graceful exit, the sort of voluntary containment mode, and grace periods, those are all designed to make sure that the penalty of losing the held amount is there to protect the network, not to penalize storage node operators for events that aren't their fault. So as we're tuning this, the most important thing you can do is make sure you keep the data. If you delete data to free up more space, you're going to fail audits, and if you fail audits, you're going to get disqualified and lose your held amount. Again, it all goes back to: run a really good node and everything's going to be okay. One hard drive per storage node, one processor per storage node, don't run multiple nodes if you don't have the bandwidth to support it, keep the data, and keep it online all the time. That's what we're looking for, and the entire network incentive system is built around rewarding that behavior.

Great, thanks. We also have some blog posts that lay out exactly what you need to do to be the best storage node operator possible. Sure, there's a series of four of those from January, so if you go to our blog and search my name, you'll see a series of four posts for storage node operators, and oh my gosh, there's a ton of information in there; everything you want to know about how things work from a storage node operator perspective is in those blogs. Yeah, we try to reduce the mystery as much as possible and give you exactly the information you need, and it's all contained in the blogs.

We have a question here that's really similar to previous ones. I'm going to throw it out there; if anybody has anything to add beyond the similar questions we've already covered, this would be your time, otherwise I'll move to the question after that. This question is about the SLA requirements for nodes being too high: they're higher than the SLA promised by the ISP, and a short downtime of six hours because of internet loss or a power outage disqualifies a node. Is that going to change in the future? They're worried that residential nodes might be doomed to fail if this doesn't change. Are there any additional comments? Yeah, John's last answer gave a really good description of that, but I'll add one other thing. Among the many talented people who work at Storj, we have a great data science team, and they're working on giving us the ability to distinguish between a node going offline permanently versus somebody who's just suffering an ISP failure. As we get better and better at diagnosing that, we'll be able to keep people who suffer inadvertent ISP outages from being penalized.

Okay, great, thanks. Next question: is it possible to use two internet channels for redundancy? JT? Well, yes, actually. The major thing to keep track of is that the address your node advertises is reachable through whatever routes you want your internet connection to be available on. So as long as someone is able to reach the IP that you're advertising on the Kademlia network, with your dynamic DNS configuration, everything should be good to go.

Okay, thanks. Next question: based on your experience to date, do you have a suggested hardware and software setup? Not recommendations necessarily, just suggestions based on experience and on what you're seeing on the Storj network. How about you, John Gleeson? Sure. You know, I hate to say the same thing over and over again, but if you hit our GitHub and look at the operating systems we're supporting right now, and stick to that pattern, those will be your easiest choices from a software perspective. But again, what we're really looking for is people who can meet the minimum requirements we put in the storage sharing terms and conditions, and we've set those pretty low. The most important things are: turn it on and leave it on.

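As a practical aside to JT's point about the advertised address being reachable: a node operator could sanity-check connectivity with a generic TCP probe like the one below. The hostname is a placeholder, 28967 is just a commonly used storage node port, and this is not an official Storj tool.

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds -- a crude
    check that the address a node advertises is actually open."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder dynamic-DNS name and example port.
print(is_reachable("my-node.example.com", 28967))
```

Run the check from outside your own network (for example, from a phone hotspot or a remote machine), since a probe from inside your LAN won't exercise your router's port forwarding.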
Keep it on all the time with good bandwidth; that's going to be your key to success. Right now it doesn't have to be an aggressive rig like you'd see for mining-type operations; this is really just making the space available and having the bandwidth to support the movement of data to and from your storage node. So it really doesn't matter what it is, for the most part: use what you've got. Over time, as we learn more about the most effective combinations and what works best on the network, we'll of course be very transparent and publish that, and, as the earlier question asked, when the time is right to start adding net-new hardware and it makes financial sense, we'll be very quick to share that with you.

Okay, great, thanks. Next question: what's the projection of earnings per terabyte? I think we had that question earlier, but to reiterate: we've got a calculator on our website. Earnings are largely dependent on your bandwidth, but go ahead and check out the calculator; it lets you figure out, based on your conditions, what your earnings should be. And then we had another repeat question, from someone asking about cleaning and maintaining their cloud server once a week and whether they're going to be penalized. We answered this earlier, so I'd recommend going to the YouTube recording once we publish it and finding that particular question; we gave an extensive answer there.

Here's a question: when I run both a storage node and a satellite on my premises, how does the network make sure that my satellite does not use my storage node? This is a JT question. It's a great question. In the work we've done so far to run satellites, for the most part we haven't prioritized efforts to co-locate satellites and storage nodes on the exact same server or the exact same network; in general, we think it's a good idea to run storage nodes and satellites geographically separated and dispersed. In fact, satellites in general aren't necessarily even one server. So there isn't currently any specific logic to make sure that a storage node on the same network as a satellite won't be selected, just because we aren't configuring satellites and storage nodes in a way that even has that problem. If that's something you're configuring, I'd say that's a great point to add to the ideas portal, and we can prioritize adding explicit logic so that if you don't want your data stored on a storage node co-located with your satellite, we can have that restriction. And the other thing worth noting is that to a certain extent it probably doesn't matter, because it's not so much that the satellite stores a piece on its own related storage node; the other pieces are stored on unrelated storage nodes. One piece on your storage node is a much different risk than multiple pieces stored on related storage nodes, and the network is designed to put pieces on statistically uncorrelated nodes. So if one piece is on your node, your satellite is smart enough not to put anything else there that would be at risk. From a bigger-picture perspective, it's probably just not that big a deal. That's accurate.

Okay, thanks. Are there any metrics in the reputation system that will help select faster nodes for future uses like CDNs, and help migrate stale data out to slower nodes? Yeah, I can talk about this one briefly. We do have quite a few exciting features on our product roadmap that are going to get us to more of a CDN type of network, and even let you geographically select nodes within a specific region, such as US East. But in the short term, we don't have any specific features built into the network that will let you select faster storage nodes. The way the system is architected, you're going to download your files from a number of different storage nodes in parallel, so it's just inherently going to be fast.

Okay, thanks. Next question; I think this is a good one for Brandon. Brandon, can someone make a video on how to install and run this thing on Windows? Not all people are programmers. Yes, absolutely. As soon as we support Windows for our storage node software, we will absolutely make some videos. Actually, the storage node operator video that we have up on YouTube needs a refresher, so that's on the to-do list; expect to see a new one in the next couple of weeks.

Okay, great, thanks. This is again similar to an earlier question: any plans for Windows 10 support? Yeah, like we mentioned earlier, Docker has an effort to support more Windows operating systems, so they're working on that.

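The parallel download JT described a moment ago, where a client requests pieces from many nodes at once and stops as soon as it has enough, might be sketched like this. The piece fetch is simulated here; on the real network only the first arriving subset of erasure-coded pieces is needed to reconstruct a file.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_piece(node: str) -> bytes:
    """Stand-in for downloading one erasure-coded piece from a node."""
    return f"piece-from-{node}".encode()

def download(node_list: list, needed: int) -> list:
    """Request pieces from every node in parallel and stop as soon as
    `needed` pieces have arrived -- slow nodes never stall the download."""
    pieces = []
    with ThreadPoolExecutor(max_workers=len(node_list)) as pool:
        futures = [pool.submit(fetch_piece, n) for n in node_list]
        for fut in as_completed(futures):
            pieces.append(fut.result())
            if len(pieces) >= needed:
                break
    return pieces

print(len(download(["node-a", "node-b", "node-c", "node-d"], needed=2)))  # 2
```

Taking the first `needed` results from `as_completed` is what makes the fastest nodes win automatically, without the client ever ranking nodes by speed.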
And if Docker ends up supporting Windows before we do, you'll be able to run your storage node via our Docker container. Aside from that, one of the features in the Pioneer 2 milestone is automatic updates for storage nodes, and once automatic updates are added, we're going to release official binaries for all of the operating systems, including Windows. It's worth pointing out that the thing we're waiting on for Windows Home is either our own automatic update system, or a Docker release that supports mounting the outside file system directly. Docker runs on Windows Home, of course, but that's the feature we need for the storage node to work well, so if that feature beats our automatic updates, then great. That's also a question that comes up in the community, so thank you.

Let's see, the next question came in through Zoom: regarding the SNO board, will the web GUI be limited to the local subnet only, or could a node operator establish a port forward to have remote read-only access, should they wish to go that route? And I can repeat that if you want. No, that's great, okay. So of course, no matter what happens, even if we had our process limited to a local-subnet-only port, you can always do your own port forwarding with SSH or socat or any number of other tools. And in general, our typical system for registering services, and I think the SNO dashboard is no exception, is that in your configuration file you can specify which address the port listens on. So if you want to expose it publicly, you can, though the default will be private.

Okay, great. The next question: just to confirm, when a drive fails you lose reputation and lose money, right? So wouldn't it be better to store in an existing RAID 6 setup? This person says it seems like a catch-22, since RAID is frowned upon. JT? Yeah, so RAID isn't necessarily frowned upon; it's just additionally redundant. One of the things we want to do is make sure that the incentives and the tuning of the network, as we go forward, align with your amortized cost. The way we want amortization to work is essentially that your overall expected value out of the network is greatest when you dedicate hard drives directly to storage, as opposed to putting them in RAID. It's true that with RAID you can reduce the risk of some loss of reputation, but it comes at a cost, and the cost is that you have a hard drive or two dedicated to redundancy that you're not earning money from. So what we're trying to do is make sure the incentives are aligned such that the value of dedicating the hard drives to storage directly outweighs the amortized expected value of the risk.

Okay, thanks. I'm not sure if this was an email or a YouTube question, but the question is: can I already run more storage nodes today on my different HDDs, and how do I apply for that? Can you give me some email dates? I love the compartmentalized approach, and by the way, thanks for answering all these questions. Brandon? Yeah, so the best thing to do is just to sign up on our waitlist. Again, we can't really give you a specific date as to when we're going to send out more invites, because it really depends on how much more capacity we want to add to the network; we want to make sure we grow it in a sustainable and intelligent way, so that our storage node operators are still receiving large payouts. But we are looking to expand the network greatly, as Ben and a couple of other people have mentioned, so please sign up on the waitlist, and I'm pretty sure you'll get an auth token really soon. Yeah, and for people who are listening, I can definitely vouch for that. I report to the vice president of marketing, and when there is an email about to go out, we do not lag; even if it's nighttime on a weekend, we are on it, and we get everything out as fast as we can. So we're going to be getting emails out to people as quickly and efficiently as humanly possible.

Next question: if a node is disqualified, you lose your escrow balance and take a reputation hit, so it seems like some local redundancy practices like RAID are prudent for storage node stability. I feel like this is ground we've already covered, so unless anybody has anything to add to their previous comments, I'm going to move to the next question. All right, I'm not hearing anything, so I think that brings us to the end of our live questions. I'd like to take a quick pause and say thank you.

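JT's amortization argument above can be made concrete with some made-up numbers. The payout, failure rate, and held amount below are purely illustrative, not Storj pricing, and this simple model ignores many real-world factors.

```python
def expected_monthly_value(earning_drives: int, total_drives: int,
                           payout_per_drive: float, annual_failure_rate: float,
                           loss_per_failure: float) -> float:
    """Monthly earnings minus the amortized monthly cost of drive failures."""
    monthly_failures = total_drives * annual_failure_rate / 12
    return earning_drives * payout_per_drive - monthly_failures * loss_per_failure

# Hypothetical: four drives, $10/drive/month payout, 3% annual failure rate,
# $100 held amount lost whenever a drive that is its own node fails.
raid = expected_monthly_value(3, 4, 10.0, 0.03, loss_per_failure=0.0)
one_node_per_drive = expected_monthly_value(4, 4, 10.0, 0.03, loss_per_failure=100.0)
print(raid)                # 30.0 -- one drive is parity and earns nothing
print(one_node_per_drive)  # about 39.0 -- failures cost held amount, but all drives earn
```

Under these assumptions, the expected held-amount loss ($1/month) is far smaller than the income given up by dedicating a drive to parity ($10/month), which is the trade-off JT describes the incentives being tuned around.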
Thank you to everybody. I know these town halls are something we produce every quarter, and we are just so happy and grateful that you tuned in, that you engage with us, that you ask us these questions, and that you keep us transparent and engaged. It feels great to be able to sit here and talk with you, so thank you so much for being here. Our next town hall will be one quarter from now, and we'll send out the same emails, invites, and pre-registration, so we hope all of you, and more, tune in next time. If you'd like to get in contact with us before then, you can follow us on social; we're on Twitter under @storjproject and under @tardigrade_io. And again, please come to our forum; it's at forum.storj.io, that's forum, singular, F-O-R-U-M, dot storj dot io. We are actively looking to sponsor events, so if you have a meetup and you want to run a Storj-themed event, please email me directly; I would love to support you. And if you're interested in contributing to our GitHub, you can see on the screen we have a mix of "help wanted" and "good first issue" labels, and if you have something more advanced, if you have an idea, get in touch with us; maybe there's something we can collaborate on together. So thanks again, and we'll see you next time. Take care.