Anthos Migrate: On-Prem to Cloud-Native on GKE (Cloud Next '19)

[MUSIC PLAYING] LUCIEN AVRAMOV: My name is Lucien. I'm one of the PMs working on our Velostrata product, which is now called Anthos Migrate. And I'm joined today by two other gentlemen who will come on stage later on. We have our architect and technical lead for this product, Leonid, who is here with us. And we have David, who is the product manager for Service Mesh and Istio as well. So during the next 50 minutes, what we'd like to tell you about is Anthos Migrate: what it is, how it will enable you to migrate on-premises VM workloads to cloud-native, and how we'll take you through the GKE journey. That's the goal of what we want to achieve in the next 50 minutes together. I'll be leading the first part of our talk, and then we'll go over a deep dive. Before I start, what I'd like to clarify is naming and what things are, because we made some announcements two days ago at the keynote. What I want to make sure everyone understands is that the product now called Anthos Migrate is actually our Velostrata technology. That's a Google product, and it's the same thing. So if some of you are familiar with Velostrata (migration of VM to VM, lift and shift), we now have a new functionality where we also migrate to containers, to GKE. And both of these Velostrata functions are called Anthos Migrate. So it's the same thing, and I hope this clears it up. Here's the agenda for the next 48 minutes. I'll go over, first, some of the whys: why we want to do what we're doing, what the goals are, and what the reason for our technology is. Then I'll go on to a product overview. After that, I'll hand it off to David, who will be talking about service management. And finally, we'll have Leonid, who will go over the architecture. So, more technically, Leonid will explain what's happening during the migration process, and he will guide you through the architecture deep dive of the product. All right. So without further ado, why do we want to move VM workloads to run
as containers on GKE? Why is it compelling? First of all, migration is a difficult task. Migration is hard. Most enterprises trying to migrate realize how difficult it is; they face delays, budget overruns, and so on. And when we poll customers in different surveys, it's top of mind. Actually, I'll do a poll here in the room. Can I get a raise of hands on how many of you would like to move to the cloud? OK, about a third. How many of you want to move to GKE? Wow, OK. And GKE On-Prem? Some, OK. So for those of you who want to move, I hope this session makes sense. And for the others, I'd like to bring you on board so you understand our technology, so that when these topics become top of mind, you know what Google can offer to migrate you and take you on that journey. So, the challenges: it's hard. The scale: you have data centers and workloads. You have thousands of VMs that you may need to move, across multiple different locations, and you're wondering what to aggregate. Applications sometimes run in multiple places. Which app do you select? Which one can you move? And how do you then start doing it? It's complex. The applications themselves are complex, and you have to understand what's happening behind them. And then lastly, the risk. We often talk about migration, but we don't often talk about rolling back. What happens if you start to migrate and things don't go well? Well, you'd like to go back to your initial state without losing data, so you can roll back, figure it out, and try again later. With our technology, we can provide you that back and forth at any time. You can run on-premises, migrate, and if something happens, you can come back. And we have multiple ways of doing that migration. So why do we want to modernize our workloads to containers now? Well, there are two main things.

One is, we have customers (and about a third of this room knows this) who want to modernize legacy applications. The idea is: I'd like to move certain parts to the cloud because it's more cost efficient, it makes more sense, and it will be closer to a development environment. I'm developing some new app in the cloud, and I'd like to move some legacy apps over there as well, so they can interact with my applications that are running in the cloud. I'd like to modernize my application management lifecycle and have a consistent way to do maintenance, patching, and so forth. So that's one. The second one on this slide is GKE. GKE is our offering for Kubernetes, and a lot of our customers love GKE, so that's a compelling reason to move to Google, or to GCP in general. And we can take you through that framework, from a traditional enterprise VM environment to GKE. Once you're there, you'll see the benefits of GKE: management of your images and OS, integration with logging, and integration with service meshes and Istio. Those are the things we'll talk about today in terms of your benefits when you move to the cloud. Before I forget, we also have a Dory open today. So if any of you have a question at any time, feel free to raise your hand or type a question in the Dory; we have a colleague here monitoring the Dory, and we will get back to your questions. So we talked about why we're doing it. Now, why is it hard? Why is it hard to move from a VM environment to GKE specifically?
Well, when you have a legacy app, to move into a microservices environment you have to rewrite it in a modern way, and oftentimes go from an obviously stateful type of application to thinking about how you're going to make it stateless. That is actually a really difficult thing to do. You have to rebuild your application. Many of the old applications don't have that kind of application logic available and just cannot be made stateless. So it's hard, and oftentimes it's not possible. However, the new applications you're developing are modern, they interact, they're containerized. And so then comes the point: I'd like these new applications that I'm deploying to be able to interact with the older ones, and to have a consistent way to manage my infrastructure and my workloads by running on containers, on a container framework. And this is why. The second point is: why not just do a lift and shift, moving a VM to be a VM in the cloud? You can do that, and we offer that technology. We know how to do it at scale. We have customers moving thousands and thousands of VMs in lift and shift, where the source is a VM and the destination is a VM. But the new thing we're talking about here is that the source is a VM and the destination is a container. And finally, when you're there, it's basically us adapting to your journey and your environment: how can we help you move to a modern framework on containers, and at the same time enable you to move these legacy apps and still keep the lights on for them, while they're managed in the modern infrastructure? So with that, I'm going to go ahead and talk about our product overview: what is Anthos Migrate, and how does it work? And then we'll move on to the next topic. So first, I'd like to ask the room: how many of you know Kubernetes? Can I get a raise of hands?
Majority. OK, great. So we'll go quickly over this one. Kubernetes is a portable, open-source management platform that lets you scale, run, and basically orchestrate your containers, run them with policy and a declarative model, and run on various clusters and environments. Our internal systems obviously run this way, and now we offer managed Kubernetes, GKE, where we actually take care of managing the kernel for you, providing upgrades, security patches, and fixes. Our SRE teams operate the management of the kernel and OS for you when you are on GKE, versus running Kubernetes by yourself in an open-source model. So since we know GKE, I'll move forward. At this point, you have applications, hopefully, that are VMs in a data center; you know the value of GKE; and you want to modernize these apps, or you want these apps to be able to run in that new modern environment. So then, what do you do? That's where this product, Anthos Migrate, comes in. And here is, at a glance, what we do. The way to read the slide: we'll start on the left-hand side, and I'd like you to read it from the bottom up. OK? At the bottom, what you see is that we have different source environments. You can be on premises, you can be on a public cloud, and you have, as a starting point, a virtual machine. Your destination is at the top of this slide, which is the GCP platform. Where you end up is basically containers running on GKE, and then you have other services, like service mesh, that come and attach themselves to that. So the whole ecosystem of the GCP platform can then interact with your GKE cluster once you've done that move. OK. So we can do VMware, we can do physical bare metal, we can do other-cloud types of migration, and the source is a virtual machine. Here is how this works. For you to understand Anthos Migrate, there are two things you need to understand. One is GKE and GCP, and this can be new when you're in an enterprise VM environment and you're moving to the cloud. So one aspect to know is containers, GKE, and how GKE works. The second part is the Anthos Migrate Velostrata technology. We intersect these two, and the way we've designed this is to give you the journey and experience of Kubernetes. So how do we start the journey?
You're a user, and you're over here, and you want to move to containers. The way you do that is basically through a deployment policy. You have a YAML file, which is Kubernetes, and you tell GKE that you want to start a migration. So you actually start your migration by talking to GKE, rather than going to your on-premises environment. You push the YAML file, and then GKE, which integrates with our Anthos Migrate technology, starts the migration of the VM. So you have these VMs running in your data center; we've received the instructions to migrate from Kubernetes, from GKE; and then Velostrata Anthos Migrate starts the migration. You have these two blue VMs running in your data center, and we move them over here: you end up with two containers. I did a one-to-one mapping just to show you the visual example, where you have two VMs running on premises and the end result is two containers running on GKE, all orchestrated by GKE. That's a really important concept to know. Now, with that, I'll tell you a little bit more about how we enable this migration under the hood. So I'll just move to this slide. Our Velostrata Anthos Migrate technology resides in two places. One is on premises: we have a Velostrata appliance, which is a VM running on your vCenter, which is basically our back end, and we have an ESX plug-in, a plug-in for VMware, that basically gives you the manageability and the interaction with the virtual machines over there. So that's one part of the deployment; you install that in your on-premises environment. Number two, in the cloud, we have our Velostrata management software running. You install that through the Marketplace, as a VM, and you connect it to your on-prem environment. Between the two, we assume that you have IP connectivity between your on-premises environment and your GCP
cluster. That can be achieved with a VPN; that can be achieved with an interconnect.

But you have to have IP connectivity between your two environments. We then create a local cache in the cloud. And what we do is this (now, listen to this): we have a VM running, and we start the migration. In 5 to 10 minutes, depending on whether it's Linux or Windows, your new environment will be up and running in the cloud. The way we do it is, we start the new VM or the new container; I'm talking about two use cases, either VM to VM or VM to container. We fire up the new container, for example. The data initially still resides on premises and is streamed over your IP connection to the new environment, and that data gets cached locally. The cache grows, and you can start testing your application. You can start using it right away. You don't need to wait for all the data, the whole disk, to be migrated before you know whether your application will work in the cloud. We achieve that with our streaming Anthos Migrate technology, and you benefit from it whether you do VM-to-VM or VM-to-container migration. If things don't go well, you can roll back. We also have ways for you to do testing. If you're running production in your environment, we can actually make a copy and let you test a clone of that VM in the cloud, so it's less disruptive. And then we give you a way to migrate at [INAUDIBLE] later on, on production environments. If you want to migrate hundreds or thousands of VMs, you can do that on a schedule with the automation tools we provide. So that's how the whole migration works under the hood. Next, now that you're on containers, for example, where do you see your logs?
On your VM, initially, you have to go look at log files and understand error messages and so forth. We have integrated the logs from your VMs with Stackdriver, with the GKE dashboard and, more importantly (David will be telling you about this), with service management. So you can basically see your log output here in the Stackdriver output. You can interact with it through APIs, or with monitoring and management tools if you have them, but it's a modern, GCP-native display. The same goes for the GKE dashboard. You can see what your app is doing, how much CPU it's using, and so forth. Now, the next point is: what can you migrate to containers? What can you actually do if you're ready to migrate your VMs to containers? What we support today is basically Linux operating systems. We don't yet support Windows, because we don't want you to run Windows in containers. So it has to be Linux: Red Hat, CentOS, SUSE, and Ubuntu are the main operating systems. Tomcat, Apache, WebSphere, WebLogic, and JBoss types of middleware applications; databases. All of these legacy types of stateful workloads you can migrate to GKE. There are certain things we don't do yet. Because what we announced is the beta we're working on for this product, we don't want you to move production workloads, and we don't want you to move sensitive data, because we're in a private beta at the moment. The other thing we don't do is high-performance databases and big-data workloads, such as Hadoop. And here, for transparency purposes, we've outlined the things we do and don't support. Keep in mind that the product is evolving, with a roadmap and so forth. So if there are things that you'd like to get, come talk to us at the booths. Come meet us today, talk to your field representatives, and we can work with you to prioritize things. This is just to give you a framework so you can understand whether it's something that's applicable to you right away. With
that, I’d like to give you a little demo,

just so you can visualize the product. So let's go ahead and play this recording. While this loads, I just want to prepare you: it's a CRM application. It's basically a web app with a database and a customer base. And it's going to be two VMs. I have it running as two different virtual machines on VMware: one is the database, and one is the web and application server. We'll migrate these two VMs from on-premises to GKE. OK, here we're going to play this. And we can spend a lot more time with you if you stop by our booth; we can show you the demo live. Here, for reasons of time, I just want to play a recording so you see the concepts, and then we can spend more time on the live demo. As you see, we have two VMs: one running CentOS and the other running SUSE. So I have two different operating systems as a start, and that's my CRM app running on premises. Two VMs, two OSs. I can log in, and what you see here when I log in on the local IP is the look of the app. I have a GKE cluster called Demo, which is running a specific version of GKE. And I have a YAML file. Remember, I told you that we start the migration from YAML, from Kubernetes. So we have a file where we say: this is the VM ID; we want to use streaming with our Velostrata technology; and we want to run a test plan here, for example. We point to a persistent disk, and we define the app name and the way it will look once it's migrated to the destination environment. So we're going to go ahead and execute the YAML file. It's at the very bottom here: we do a kubectl apply. And basically what you see, when we refresh Services and Workloads, is the two nodes, app and database, and you can see the services. We have a load balancer with an IP on the cloud; we log in on that IP, and it's the same app. That app has moved and is now running as containers on GKE. Then we
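The exact manifest used in the recording isn't readable on screen, but based on the fields the speaker calls out (VM ID, streaming mode, a test run, a persistent disk, an app name), a migration manifest in this style might look roughly like the following sketch. The kind, apiVersion, and every field name here are illustrative assumptions, not the real Anthos Migrate API:

```yaml
# Hypothetical sketch of a Kubernetes-driven migration spec, in the
# spirit of the demo. Names and fields are assumptions, not the real CRD.
apiVersion: anthos-migrate.example.com/v1beta1
kind: MigratedWorkload
metadata:
  name: crm-db
spec:
  sourceVmId: vm-1042            # on-prem VM to migrate (hypothetical ID)
  migrationMode: streaming       # stream disk data while the workload runs
  runMode: test-clone            # test against a clone before full cutover
  storage:
    persistentDiskName: crm-db-data
  appName: crm-database          # name of the resulting workload on GKE
```

Applying it would follow the normal Kubernetes flow the talk describes: `kubectl apply -f crm-db.yaml`, then watching the new workload and services appear in the GKE console.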
go in and enable Istio as an add-on on our cluster. When we do that, we can go into Stackdriver and see data; we can see the graphs. And a lot more of what we can do is what David is going to talk to you about next: service mesh, the added functionality you get once you're on GKE and you've migrated your application. So with that, Dave, I'd like to call you up to take it away. And I'll put it back in presentation mode. DAVID MUNRO: First of all, thank you, Lucien. I actually want to recap, because one thing there is really amazing. I don't know whether you realized what you just witnessed, but you literally witnessed a VM application on premises migrating live, directly to GKE. I think that's actually pretty amazing, personally. It gives you an opportunity, and it's something you can do while your applications are running, so I think that's a pretty big thing. When Lucien asked before about the number of people who understand or know what GKE is, it was a very, very large number. The other thing we talk about a lot from a Google perspective is Istio and service mesh. Istio is an open-source project, founded primarily by IBM and Google. There is a large number of contributors now: Cisco, VMware, Red Hat. A large number of people are contributing to Istio, and we're getting a lot of community support. So this is something we're investing in quite heavily, and we think it's a natural extension for both Kubernetes and non-Kubernetes environments. It also gives you the ability to do things from a service-management perspective that you may not otherwise have: it reduces the need for you to instrument your code, and you can take advantage of those things effectively out of the box. So one of the things we suggest would be: do the migration you just saw, but do it into a service-
mesh-enabled GKE cluster. By doing that, you can take advantage of some of the service-management capabilities that you get from a service mesh from day one. So there are three main tenets of service mesh. If I look at the three key benefits a service mesh provides, they are effectively uniform observability; operational agility, which is the ability to migrate traffic and do circuit breaking and retries within your environment; and policy-driven security. This is from a microservices standpoint: to all your nice, freshly migrated VM workloads, now containers, you can start applying these tenets straight away, out of the box. So let's drill into these a little, and then I'll do a short demonstration of service management. One of the things a service mesh provides is uniform metrics and flow logs across your entire environment. When we deploy a container into Kubernetes, we use the Istio sidecar injector to automatically inject a sidecar co-located in every pod associated with that service. The advantage is that, in a language-agnostic way, I now get a very consistent set of metrics and flow logs through my entire environment. I can use them as golden signals. I can use them to understand the service dependencies I have within my environment. These service dependencies literally give you a way to understand how your services are connected, based on the traffic flows going through your mesh. The last thing: now that I have these really nice, uniform, consistent flow logs and metrics, I can use them as my golden signals to set up SLOs on those services. I can look at the service dependencies to understand my critical user journeys with respect to what's going to impact the SLA to my customers. The other key thing we look at within a service mesh is the ability to add agility. There are two parts here. One is the ability to shift or split traffic; a canary would be a good example. I can move, based on a weighting, a certain amount of traffic from one service to a newer version of that service [INAUDIBLE] There was a talk earlier this week where a customer using Istio has
increased the number of roll-outs from around one a day to a target of 15,000 this year. The main reason they're able to do that is they're now releasing incrementally, multiple times a day, in very small increments. That does two things: it reduces the risk, because every change is incremental and they have very fine-grained controls on how they canary those releases; and it shortens the time it takes to get their changes out to their customers, which is really important. The other key thing Istio provides is the ability to do things like circuit breaking and retries. I can build this into the infrastructure itself. So if the VM application I've just migrated to containers didn't have any of that capability built in, I can now pick up these capabilities from the service-mesh infrastructure. The last thing is security. With Istio, or a service mesh, I can effectively apply an identity-based security model between my services. So I can migrate my services and pick that up straight away. I can also start looking at the interconnections and getting security insights. So let's take two minutes and have a quick look at what that actually looks like. I have a microservice application running, and these are the services that are operational within the mesh. Let me just adjust that. Can everyone see that OK, or do I need to make it a little bigger? It's OK?
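The weighted traffic-splitting and circuit-breaking described above map onto standard Istio resources. A minimal sketch, assuming a hypothetical "crm-app" service with two versions (the service names, weights, and thresholds are made up for illustration; the API shown is Istio's `networking.istio.io/v1alpha3`, current around the time of this talk):

```yaml
# Canary: route 90% of traffic to v1 and 10% to the new v2,
# with outlier detection acting as a basic circuit breaker.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: crm-app
spec:
  hosts:
  - crm-app
  http:
  - route:
    - destination:
        host: crm-app
        subset: v1
      weight: 90
    - destination:
        host: crm-app
        subset: v2
      weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: crm-app
spec:
  host: crm-app
  subsets:
  - name: v1
    labels: {version: v1}
  - name: v2
    labels: {version: v2}
  trafficPolicy:
    outlierDetection:            # eject failing pods from the load-balancing pool
      consecutiveErrors: 5
      interval: 30s
      baseEjectionTime: 60s
```

Shifting the canary forward is then just a matter of editing the weights and re-applying, which is what makes the very frequent, fine-grained roll-outs described above practical.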
All right. So what I can do here is go through and quickly look at the topology of this particular environment, so I understand the service dependencies. I can highlight a specific service and see the actual traffic traversing the environment. If I click on that service, I can go to the service dashboard and look at that particular service. You'll notice that on this particular service I have not set any SLOs. One of the things you'll also notice is that I automatically have a security recommendation: on this particular service, I have not enabled mTLS.

The flow logs and metrics have given me that information, and it's now giving me a recommendation for how I can improve the security posture of my environment. These are the things the service mesh can provide for you straight out of the box, and they can now be applied directly to those VM workloads that have been migrated into GKE. I get these nice, consistent flow logs and metrics for the services I've just migrated, and I can start doing things like setting SLOs on them. Setting an SLO is very simple: pushing through, selecting, say, availability as an example SLO. Say, 95.4; actually, that would be pretty impressive. You can do this based on a rolling period of time or a calendar period, depending on whether you're setting something up more for internal use or to match a business objective. Windowing can also optionally be added, applying an alert over a smaller period of time. And it's literally as simple as applying that; now I have an SLO set for that particular service. So if you think about, again, what Lucien demonstrated: he migrated VM workloads straight to Kubernetes. If you migrate those VM-based workloads into a Kubernetes cluster that has service management, a service mesh, enabled, this all effectively just works out of the box. At this point, I'd like to pause and invite Leonid up to talk in a little more technical detail about Anthos Migrate. I will hand it over. Thank you. LEONID VASETSKY: Thank you, David. I'd like to give you a glimpse of the inner workings: how it works and where we're heading. This is a very young product; we are just starting, and it's a long journey. I want to show you what we're doing now and what we're doing next. But before that, I'd like to give you a little bit of history, a trip down memory lane. So let's think of the developer, the application
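The mTLS recommendation mentioned above can be acted on with a couple of Istio resources. A minimal sketch, assuming a hypothetical "crm" namespace and "crm-db" service; note that Istio releases from the era of this talk expressed this with `Policy`/`MeshPolicy` resources, while later releases use `PeerAuthentication` as shown here:

```yaml
# Require mutual TLS for workloads in a hypothetical "crm" namespace,
# and tell clients to use Istio mTLS when calling the database service.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: crm
spec:
  mtls:
    mode: STRICT                 # reject plaintext traffic to these pods
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: crm-db
  namespace: crm
spec:
  host: crm-db.crm.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL         # clients present Istio-issued certificates
```

Because the sidecars handle the certificates and handshakes, the migrated workload itself needs no code changes to pick up this identity-based security model.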
developer who worked on an application five years ago, maybe 10 years ago. Right? He was writing code, and he wanted to run it in an [INAUDIBLE] way, probably bundled with some contained runtime, like Java or maybe Python, and he wanted to put it somewhere. The best place to put it, in an [INAUDIBLE] way, was a VM. This was convenient; VMs powered by VMware were extremely popular, so every application that you wrote ended up with the stack on the left side. You only needed the application and maybe two or three services around it, but you were getting virtual hardware that you had to manage, and networking, and disks, and an operating system with a kernel and such, and all the infrastructure you had to accommodate: logs, security policies, users that can access it. All this was a good start. But then you start to get more of those. Specifically, the painful ones that [INAUDIBLE] mentioned, small or mid-sized stateful applications, maybe a simple database, web, or middle-tier apps, started accumulating. These can be apps written in-house at some enterprise, or maybe off-the-shelf applications that you manage. And this became very, very painful. Now imagine, today, you are a container developer. You can package your application in a Docker container and run it with Kubernetes. So today, if you develop a new application, you have a way of doing that. But what will you do with all these leftovers? That's a very, very painful rewrite. So what we wanted to build is a phased approach, a phased journey, for those legacy applications to get to the modern way, with a number of stops in the middle. These stops can help you benefit from some of the ecosystem, maybe not all of it,

and then you can benefit from more and more. The first stop, the one you actually saw in the demo: we didn't change anything in the application, but we peeled away all the low-level stuff that is not required for this application to function. It's required, but it doesn't have to be the same stuff that was in the VM. We took away the OS kernel and drivers; those are now provided by the container orchestration framework. We took away logging. We took away networking; networking is now handled by Kubernetes. We took away storage. And we leave the application and the services it may have, maybe some cron jobs it needs to run. So this is the first step. It gets you to the stage where you manage with Kubernetes and leverage all these good features, but it's still not the end of the journey. The next part is going to this stage, where we take the upper part of the VM and split it further. Maybe some applications will be rewritten; some will just be broken down into the application and the data; and maybe some applications will be left as is, because you don't find it cost effective to invest in them. So this is the journey that we follow. The beta will cover mostly this part, and then this part will come in preview pretty much after that. So that's one aspect. The second aspect, which is also very important, is how you do your journey. In all the demos and announcements, we focused on getting workloads from VMware to GKE, right?
But basically there are two dimensions that you need to handle. One is migrating things from one location to another: the target can be a cloud, or it can be VMware in an on-prem data center again. And maybe you don't need to migrate at all: if you're moving from VMware to GKE On-Prem, you don't need to migrate the data; the data is already there. So that's dimension number one. Dimension number two is moving from the VM to the container. We built the product, and the architecture, in a way that lets you do any of those transitions independently. We can take you, of course, the quick way, when you move from VMware to GKE in the cloud and get all the benefits. But you can go either way: you can first move to a GCE VM and then decide to containerize that VM, and so on. It's very important that both of those are handled by the product. They sit at different points on the roadmap, but this is flexibility that you want. And it works especially well with Kubernetes, where you have the storage abstractions: from the Kubernetes perspective, you're consuming storage, and that storage can be remote or local. If you're consuming storage and running your workload in a container, it doesn't matter; you use the Kubernetes abstractions to hide the storage location. This is just a small recap of what happened in the demo, to give you a replay. Lucien [INAUDIBLE] presented a corporate user starting with the VMs on-prem and connections to them. Then we moved those VMs to containers in GKE. We started to use Kubernetes networking, so at this point the application connects to the database by its DNS service name, which can resolve to any IP; you don't need to manage IPs anymore. You have a load balancer to give you outside access. And once those are containers, you can see the logs and get all the benefits from Istio and the other things David showed you. One thing
that Lucien mentioned as well is that we've had a technology, for a couple of years now, that can migrate storage from remote locations to the cloud. We wanted to leverage this technology when we move storage for containers and when we move VMs to containers. So we have the same infrastructure that can bring storage from remote locations to the workloads, no matter whether the workloads are VMs or containers. This is how the product is built. And again, you can use it to migrate first, VM as VM, and then containerize, or you can go directly to containers. We're talking a lot about storage. Why is it so important? This is where caching, latency optimizations, and deduplication, all the things the Velostrata storage layer gives you, come into play. It's the same set of optimizations that we use for containers and for VMs, and it's basically what gives you the ability to test or run something in a couple of minutes, because you cannot move terabytes of data in a couple of minutes. So let's go another level down into how this works on the Kubernetes side. A quick show of hands: who is familiar with CSI? That's fewer. So I won't go deep into CSI here; there are lots of talks about it. CSI is the Container Storage Interface, something that became GA in Kubernetes 1.13. Basically, it's one of the things that lets you abstract the storage implementation from the orchestration. Kubernetes will orchestrate the workload creation, the volume provisioning, and the volume attachment, and then you, as the driver, as the provider of the storage, provide the implementation for that. So what happens?
When you deploy the [INAUDIBLE] that Lucien showed you, you actually trigger the activity from Kubernetes, which says, OK, I need to schedule this pod on this node, and I need to make sure that this node has access to storage And you asked me for storage of this type, so I need provisioning So at this point, Kubernetes goes to the storage driver and asks, OK, look, I need to provision the storage This can be a native storage driver for some storage vendor, it can be the native storage driver for GCE persistent disk or, in our case of streaming, the Velostrata Control Plane, which basically says, look, when I need to provision a volume, I’m exposing the storage of this VM from on-premise or another [INAUDIBLE] to this cluster So Kubernetes is happy, the volume is provisioned, now it decides which node should be used to run the pod, and then comes the attachment So it connects this node to the storage backend and mounts the relevant volumes And at this point, the workload that runs consumes this storage, again, be it remote or local What’s important in that system is that it’s fully integrated with the Kubernetes abstractions It means that if you want to migrate just the storage, and you don’t want to migrate your VM– maybe you already have a MySQL container, a MySQL image that you want to use, and you only want to move the data– then you will do the same So you will attach the storage to the existing container, and it will benefit from the same streaming and compression technology And last, what you can also do with the storage– and this is something that we’re also providing– is what we call optimized data extraction, or selective data extraction So we provide a job in the Kubernetes cluster– again, fully managed by Kubernetes, fully abstracted by the Kubernetes abstractions– that can take your streaming storage and extract part of the files or part of the data that you want For example, you only want your MySQL database data So this will fetch this storage in a streaming manner
efficiently, with all the optimizations, and it will deploy it to the storage of your choosing
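A selective-extraction job of that kind could be sketched roughly as follows: a Kubernetes Job that mounts the streamed source volume read-only next to a destination volume and copies only the subset of data you want, here a MySQL data directory. All names, images, and paths in this sketch are hypothetical illustrations, not the product's actual manifests.

```python
# Hypothetical sketch of a selective data-extraction Job: mount the streamed
# source volume read-only, mount a destination volume, copy only the files
# you care about. Every name here is illustrative.

extraction_job = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "extract-mysql-data"},
    "spec": {
        "template": {
            "spec": {
                "restartPolicy": "Never",
                "containers": [{
                    "name": "extract",
                    "image": "busybox",
                    # Copy only the MySQL data directory from the streamed disk
                    "command": ["sh", "-c", "cp -a /src/var/lib/mysql /dst/"],
                    "volumeMounts": [
                        {"name": "streamed-vm-disk", "mountPath": "/src",
                         "readOnly": True},
                        {"name": "target-disk", "mountPath": "/dst"},
                    ],
                }],
                "volumes": [
                    # Source: the streamed VM disk exposed through the CSI driver
                    {"name": "streamed-vm-disk",
                     "persistentVolumeClaim": {"claimName": "app-data"}},
                    # Destination: a regular cloud-native volume
                    {"name": "target-disk",
                     "persistentVolumeClaim": {"claimName": "mysql-data"}},
                ],
            }
        }
    },
}
print(extraction_job["kind"])  # → Job
```

Because the source mount goes through the streaming layer, only the blocks backing the copied files are actually fetched from the remote location.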

It can be native cloud storage, it can be a storage vendor that runs in the cloud And this is also a foundation of what we call the next phase So this is the same technology that will then be used to extract the application So let’s say you have multiple services running on the same VM, and you want to extract some of them, so part of the data will go here, part of the data will go there And it will be the same component doing this job So with that, I’ll hand it back to Lucien for a quick wrap-up LUCIEN AVRAMOV: Thank you, Leonid [APPLAUSE] So what did we learn? Well, hopefully, I hope that you understand, first of all, that modernization no longer means you have to rewrite everything from scratch And it means that if you want to go there, we’re here to help you on this journey of moving those applications onto a modern environment Two, I hope you understand that stateful apps can actually run now in GKE without any need to rewrite them It’s kind of the same as the first point And then number three, this can be done directly from on-premise to the cloud, and you can be up and running in minutes Now, one thing to remember is we don’t have to migrate you directly to containers and to the cloud if you’re on premise We can do it in two steps We can move you to the cloud on a VM, get you comfortable, and then move that VM to a container OK So we can meet you in your journey where you are, and this comes back to some of the questions on the Dory today, where one question was, well, how about Windows? Can I use Anthos Migrate for Windows? Well, what we can do today with Anthos Migrate for Windows is VM-to-VM migration So we can get you to the cloud running as a VM, and when we have support for Windows on GKE– hopefully later this year– then we can migrate that VM into a container Number two, another question on the Dory was, well, is Velostrata a requirement of Anthos Migrate?
Anthos Migrate is Velostrata It’s the same thing You install Velostrata, and you can do VM-to-VM migration, and now you can do VM-to-GKE migration So hopefully that clears it up I’ve put some links on the deck So if you download this, you have the list of all our resources and public documentation We have a couple of other sessions– one more this afternoon that will be interesting I think the other ones have passed, but you can watch the recordings We’re at the booth all day Stop by the Infrastructure and Modernization booth to talk to us We can do live demos and talk further about your environment And finally, in the last 10 seconds, just to let you know– give us feedback on what you liked and what you didn’t like If you want us to come back again and talk more, or if you don’t want to see us again, let us know We really take it seriously So thank you very much, and have a good conference [APPLAUSE]