Containers in Production with Deis and NGINX

Thanks for joining us today. We're going to talk about containers in production with Deis and NGINX. Flipping over to my slides here, quickly, about me: my name is Gabriel Monroy, I'm the CTO at Engine Yard, and I was an early contributor to Docker. If you've ever used things in Docker like bind mounts (the -v flag) or host configuration, which is a sort of container-local configuration, those are all things that were added to Docker in the early days to make Deis possible. Deis is a project that we created; the technology was later acquired by Engine Yard and is today one of the leading Docker PaaSes.

So what is Deis, exactly? The best way to think about Deis is as a private Heroku running on top of CoreOS; that's the simplest way to describe it. Rather than being an orchestration system ourselves, we're actually a consumer of other orchestration APIs. That's not commonly known, but Deis is best thought of as a workflow layer that sits on top of something like Kubernetes. Today we are the leading Docker-based PaaS, and what I mean by that is that Deis has over a million downloads, we're seeing about 500 new clusters go up every day, and the project has, I believe, 145 open-source contributors at this point. It's used by many, many companies, including folks like Mozilla, ShopKeep, The RealReal, and Coinbase; the list goes on and on. There are probably hundreds of production deployments, if not thousands; it's hard to know exactly how many.

I want to spend a minute on why Deis exists in the world. How many of you are familiar with containers and are exploring the use of containers today? And how many of you feel like you have a good handle on everything that's out there in the space and which direction you should be going? One, two hands. Okay, and there's a reason for that.
There's a huge pile of stuff going on. On the one hand that's great, because there's a ton of innovation happening in the space, but it's really confusing for folks like yourselves in the audience who are just trying to figure out what you're supposed to do inside your organization to solve problems. That's what Deis has always been, from the outset: an open-source project that glues together best-of-breed open-source technology and turns it into something with a proven operational model that can scale production workloads in the wild.

Some of the specific benefits of Deis: one of the things we provide is developer self-service, and I want to be clear about what that means. It doesn't mean developers can just run wild in production and deploy workloads without any controls whatsoever. What it does mean is that developers should be free to innovate. There's a reason you have things like dev clusters and staging environments, where folks should be free to build new things and benefit from some of the agility containers provide, and Deis helps make that possible; we'll talk about that a little more.

Another is extreme reliability. The whole move from single-system, configuration-management-based approaches to a world of distributed systems brings a lot of benefits, and one that is really poignant for folks who have experienced it is the idea that a node failure never results in downtime for any of your services. The same is true for failures of availability zones or racks, or network partitions of any kind. That's something unique to building these kinds of distributed systems. Another side effect is horizontal scalability: if you're building apps in this model, you can quite literally throw hardware at the problem, scale out, and things become a lot easier to scale.
One of the biggest things about Deis, and I hit on this a moment ago, is wrapping all of this technology into a proven operational model that is backed by commercial support; that's fairly unique in the industry.

So let's get into some of the technical nitty-gritty. How does Deis actually work? Systems like this are pretty complex, so let's walk through it. This is a system diagram of Deis, and there are really three main components to the system. The first is the control plane, which is the brains of the platform. Inside the control plane there are about a dozen different microservices, themselves all powered by Docker containers. These include things like a containerized Ceph cluster, because at the end of the day there needs to be a storage subsystem that's highly available; we actually rolled our own containerized Ceph, which is reusable outside of the project. The control plane is where API interactions come in from developers who are looking to drive the platform, or from CI/CD systems like Jenkins. The data plane is where the work gets done; think of it as your worker cluster, where the containers actually get spun up. In between those two things sits a scheduler, an orchestration system. One of the interesting things about the Deis architecture is that I believe we're the only platform-as-a-service out there today with pluggable schedulers. We currently use CoreOS's fleet for scheduling; I'm going to be demoing Kubernetes, which we have as a technology preview; and we also support Docker Swarm as well as Mesos via the Marathon framework. The last component, and what we'll spend most of our time on today, is the router mesh. The router mesh is how the platform itself, as well as all the containers running on the platform, get exposed to consumers outside the cluster, be they on the internet or on an internal network.

Now, there are certain things we deliberately don't do on the platform that I want to call out. Backing services: as someone who helped work on Docker's volume subsystem, I'm not of the belief that we're ready to run production data stores inside Docker containers. The operational model is not yet fleshed out. I am not very comfortable with my data living under /var/lib/docker with vfs or some graph driver; the backup and restore procedures for some of this stuff are different, and you can get yourself into trouble. If I'm a DBA, I'm not one to take risks. So for backing services, right now we connect to them externally via environment variables. You can run them in containers, and people do this successfully; you just have to understand the risks. Part of our job with Deis is to put the bumpers on the bowling lane and make sure you don't hurt yourself, so running data stores inside containers is not something we support natively. Monitoring and logging: many of the folks who deploy Deis already have their own systems for monitoring and log aggregation. We do have some features, actually quite a bit, when it comes to logging; for monitoring, what we're interested in is exposing APIs that let you plug into whatever monitoring system you use. We see a lot of Datadog, a lot of Sysdig, a lot of New Relic in the wild.

This is a workflow diagram of what happens when you git push into a Deis cluster. The first stage is that you hit a builder component in the control plane, which creates a new build. That new build is added to configuration, which is basically a set of environment variables, and those two things are combined into something we call a release. That release is tied to a version number: version 23, for example. We model those releases as Docker images inside a cluster-local Docker registry that is distributed, highly available, and part of the cluster infrastructure. This is really, really important for making sure things are fast when you're looking to scale; and in the event of network partitions, you really need this stuff cached locally. So we store that in the local registry, and then we talk to the pluggable scheduler and say: the desired state has changed for this application.
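He describes releases being stored as images in a cluster-local registry so that scaling stays fast even during network partitions. A rough sketch of the equivalent manual steps (the registry address and image name here are hypothetical placeholders, not values from the talk):

```shell
# tag the app image with its release version and push it to the
# cluster-local registry so every node can pull it quickly
docker tag go-app:latest registry.local:5000/go-app:v23
docker push registry.local:5000/go-app:v23
```

In Deis itself this tagging and pushing happens automatically as part of creating a release; the point is that images are cached inside the cluster rather than pulled from a third-party registry on every deploy.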
Then we effect a rolling deploy: we update some of the containers, roll out new versions, have them gracefully come up and publish to etcd when they're ready. etcd then informs the router mesh that the new containers are ready, and traffic slowly starts shifting over to them. We do that with zero downtime, with features like optional health checks at the HTTP level. That's the general overview, but I want to walk you through a video, because it's always helpful to see this in the wild. I'm going to hit play and hope this video plays; let me reload.

We start with an example Go application, and I'm going to use the Deis command-line client, which is what you distribute to your developers. They say deis create, which creates a new application space inside the cluster for this Go application. The interface is then as simple as git push deis master; a git remote was created automatically when you created the app. What this is doing is hitting our builder component, which is actually proxied by NGINX. The builder component is a git server that runs a buildpack, and you can see the buildpack has already completed compiling a slug, which in this case is essentially a Go binary. Now it's building a Docker image from that slug. I didn't have to write a Dockerfile; I didn't have to do anything. I literally just packaged up my application code and git pushed it, and the cluster is building it into an image, packaging it up, and shipping it to the cluster-local registry; in a moment we'll call out to the orchestrator to actually run it across the cluster. If you take a look here, you can see we have a special custom slugrunner image that we inject the slug into at build time, and that gets pushed out to the private registry you're seeing right now. As you modify configuration, we talk to the Docker registry APIs to inject environment-variable configuration at the registry API level. At this point we're done: the git push succeeded, and if I curl the app's domain on this platform, you can see "powered by Deis", release v2, and the container name.

Now I'm running deis config:set. The build workflow is useful, of course, but you're obviously going to need to configure your applications at runtime and make them slightly different. This is a stand-in for how you might connect to an external database like Postgres, add a debug flag, or wire up a memcache: you update environment variables. This one happens to update the "powered by" section, so you can now see "powered by Docker".

So at this point we've built an application and updated configuration; now it's time to scale up. Scaling is as simple as deis scale with a process type and a number of containers, and you can actually scale on different dimensions. If you're familiar with Heroku, they have the idea of process types, so you can put a Procfile in the root of the repo and scale different commands independently. Here you can see we curled a couple of times after scaling and then ran deis logs, and all the logs are aggregated for the developer across all the containers, which is a really powerful feature and really helps folks with debugging.

We also maintain an append-only release ledger which tracks every single change every developer makes, whether that's pushing a build or a config change, as well as one-off commands run against the cluster for things like schema migrations or admin tasks. This is incredibly useful. Because we store everything as Docker images, we have a history of the application at every point in its lifecycle, so we can use the deis rollback command to roll back to any past point in time. You'd think that might take a lot of storage, but because of the way Docker uses layers, it actually takes a lot less than you might think. The rollback takes a moment here, but we're now back to the v2 behavior, printing "powered by Deis" again, and you can see we're actually on release v4 with a different container ID.

Another feature I touched on is the ability to collaborate with other users; you can have LDAP and Active Directory integration wired up to this. And there are a ton of other features I'm not showing you here: things like memory and CPU limits, and tags to pin workloads to specific host types, so that, for example, you can separate queue-processing jobs from web workers. You can leverage Deis for all of that.

Moving right along: how do we route traffic in Deis? Very interesting question, and I guess you probably know the answer here.
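The developer workflow from the demo, condensed into one sequence (command names follow the Deis v1 CLI as shown in the talk; the environment variable and the Procfile contents are hypothetical examples, not taken from the demo):

```shell
# create the app; this also wires up the "deis" git remote
deis create go

# build and deploy via the buildpack
git push deis master

# runtime configuration lives in environment variables
deis config:set POWERED_BY=Docker

# a Procfile at the repo root defines process types, e.g.:
#   web: ./server
#   worker: ./queue-consumer
deis scale web=4 worker=2

# aggregated logs across all containers
deis logs

# roll back to any previous entry in the release ledger
deis rollback v2
```

Treat the exact flags as illustrative; the point is the imperative, Heroku-style surface over the declarative machinery underneath.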
We use NGINX; that's the answer. It's actually a fairly specialized deployment of NGINX: we use etcd to store the configuration data for how the web server should be configured, confd watches etcd for changes and re-templates the NGINX config files, along with things like SSL certificates, and then reloads NGINX.

So why are we using NGINX? It's an interesting question, and we've evaluated lots of different solutions in the space. One of the things we found is that NGINX has a ton of features around HTTP application management that simply aren't there in the alternative open-source solutions. HAProxy, for example, is a really great TCP-based load balancer, but it's lacking some of the more advanced features that a lot of our users need to deploy HTTP-based microservice architectures; those of you who have done the evaluations yourselves probably know what I'm talking about. NGINX also has a very active community: things like WebSocket support were added recently, TCP load balancing was added recently, and the project seems to be moving at a good clip in a positive direction. So we're quite pleased with it.

Some of the things the Deis router exposes thanks to NGINX are on this list: custom CNAMEs, virtual hosts for applications, session affinity, and so on. I'm not going to go through all of these, but it's an incredibly powerful feature set. One of the things we like to do, though, is simplify the operational model around it. Rather than making folks edit NGINX configuration by hand, with the Deis router mesh, configuring the router looks something like this. You'd say deisctl, which is our operator tool, config router set enforceHTTPS=true, so anything over plain HTTP gets automatically redirected to HTTPS; we're enabling the web application firewall with the default ruleset that we ship; and we're also enabling HTTP Strict Transport Security. One command updates etcd, confd re-templates the configuration, and everything is live in NGINX. All of the tunables we have for the router work the same way, so it's pretty powerful and makes things really easy for operators.

Another thing we added is per-application SSL/TLS, and this is really important for platform-as-a-service solutions. What does this look like? The tool I showed you before, deisctl, is the operator tool; this is the deis CLI. Ideally the developer, the person in charge of the app, is the one who sets up SSL; that's the ideal model for a PaaS. The way you do it is to add a custom domain, a custom CNAME, to an app (example.com in this case), and then use the deis certs:add command to add a certificate, a key, and an optional validation chain to be distributed with it. That also updates etcd, which updates the router templates, which soft-reloads NGINX, and it's live on the platform in a matter of seconds, really. So it's a pretty powerful model for getting per-app SSL deployed.

With that, I want to show you a little of what this template actually looks like. How many of you are familiar with NGINX? Consider yourselves advanced NGINX config folks? A handful. All right, well, this might scare you. Let me get out of presenter mode. This is the Deis project, and we're going to jump into the router subfolder. This is the Deis router component, and it's all packaged up as a Docker image; all of the components in the system are packaged and run as Docker containers. One of the patterns we like is to have a rootfs folder so we can track basically the entire root filesystem for the container. What I want to show you is inside of confd. How many of you are familiar with confd? Okay, a few of you. confd is a lightweight templating system driven by etcd, a distributed key-value store; it lets you watch key-value pairs and use changes to them to drive configuration. I'm going to pull up the nginx.conf template: 578 lines of what is basically a Go template. For a simple example of how this works, take worker_processes, which is something configurable inside Deis; let me zoom in a little. What we do here is say: get the value for the router's worker-processes key, or if that value doesn't exist, fall back to "auto". That's a pretty good model for a lot of the default values, things like max worker connections.
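A minimal confd-style fragment in the spirit of what he shows on screen. The exact etcd key paths below are reconstructions, not copied from the real Deis template; confd's `getv` takes an optional default as its second argument, which is the fallback pattern he describes:

```text
# nginx.conf.tmpl (Go template, rendered by confd against etcd)
worker_processes {{ getv "/deis/router/workerProcesses" "auto" }};

events {
    worker_connections {{ getv "/deis/router/maxWorkerConnections" "768" }};
}
```

When an operator runs a `deisctl config router set ...` command, it writes one of these keys into etcd, confd notices the change, re-renders the template, and reloads NGINX.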
But then we get down into things like enabling the firewall, and templating this file gets pretty hairy. It gets even hairier with Strict Transport Security. Let's go down to the section for custom domains: we range over the custom domains and set up virtual-host mappings for each one. This is pretty tricky stuff, but one of the big advantages is that we have so many folks running this in production: about twenty people, over the many months this has been out there, have helped us tune this template, and most of them are much better at NGINX than I am, frankly. So this has been really battle-hardened, not just at a single customer site, but across hundreds and probably thousands of different deployments using the same NGINX config. It's actually a really useful place to start if you're looking to do something dynamic like this.

Now, this is obviously a rather verbose way of showing you how we do some of these tunables; we do document this stuff. If you go to docs.deis.io, under "Customizing Deis" and "Customizing the router", you can see the settings used by the router. We enumerate all the different etcd keys and how you can drive configuration: things like connect, read, and send timeouts for all the major components, gzip compression tunables, HSTS, SSL tunables, pretty much anything you can envision. We get lots of folks who wade into the project and wind up customizing this component; the router is far and away the most customized component in the system.

Now let's say what's in that template didn't meet your needs for whatever reason. I showed you before that the router is just a Docker image inside of Deis, and one of the things we see folks do quite often is say: well, I want custom error pages, or I want to use NGINX Plus, or I want to do something wildly different. As long as your router implements the same interface in terms of the tunables (and frankly, even if it doesn't), you can configure your own image by doing deisctl config router set image with a path to a Docker image, and then you simply use deisctl to restart the router mesh.
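Swapping in a custom router image, as described on stage (the image name is a hypothetical placeholder, and the exact deisctl unit-targeting syntax is an assumption based on how he describes the commands):

```shell
# point the router component at a custom image, for example one built
# on NGINX Plus or with custom error pages baked in
deisctl config router set image=registry.example.com/custom-router:latest

# restart the router mesh units so the new image takes effect
deisctl restart router@*
```

As long as the replacement image honors the same etcd-driven tunables, the rest of the platform doesn't need to know it changed.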
That lets you use your own custom Docker image for the router, including NGINX Plus, and we do have a number of folks running NGINX Plus with Deis; that dashboard is pretty hot. I want to talk for a minute about open-source NGINX. One of the things we rely on heavily is soft reloads, and some folks may wonder how that scales, specifically for putting containers into production. The answer is: extremely well. We've actually done some scale testing, and the honest answer is surprisingly well. So let me go into how this works. How many of you are familiar with the soft reload process in NGINX? A few of you; many of you, okay. I actually got a chance to look at the C code recently and to chat with some of the NGINX folks about the details last night. The general process is: you update the configuration and send a SIGHUP to the NGINX master process. The master does a syntax check on the new configuration, then preps the new workers: it starts by opening log files and listening sockets, and starts up the new worker processes. Once the new workers are serving traffic, the old workers close their listening sockets, complete any in-flight requests, and then get reaped. Pretty straightforward stuff, and it actually works quite reliably. In the initial state, with a simple three-worker NGINX process tree, while a soft reload is in progress you'd see the second process in the list, the one with an in-flight request, marked as shutting down, and eventually it goes away. Now, with a platform like Deis, where we're tracking the container topology, you see these shutting-down processes occurring pretty frequently, and there's a natural concern about whether NGINX can keep up. The truth is it keeps up quite well.
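The reload sequence he walks through, as an operator would drive it by hand (the paths are the common defaults and may differ on your system; inside Deis, confd performs these steps automatically):

```shell
# validate the newly templated configuration up front; on SIGHUP the
# master also tests the config and keeps the old workers running if it
# is broken, but checking first surfaces the error immediately
nginx -t -c /etc/nginx/nginx.conf

# signal the master process: it spawns fresh workers on the new config,
# while the old workers drain their in-flight requests and then exit
kill -HUP "$(cat /var/run/nginx.pid)"
```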
We're quite happy with that. Now, some production concerns, with Deis specifically and with NGINX in this kind of approach. The first is that you really want to scale out the router mesh to have as many instances of NGINX as you need. We default to three, but we see folks running with five, and some folks with ten; it really depends on how much bandwidth you need at the edge, but we see three working for most folks. Another thing is graceful upgrades of Deis itself. Upgrading the platform is always a problem with solutions like this. We now have a graceful upgrade command, and what it does is stop everything in the platform except the router mesh and any applications running in the data plane, to ensure you don't have any downtime; then there's an upgrade takeover command which upgrades the rest of the platform and completes a zero-downtime upgrade of the Deis platform components. Again, this is really important, and the operational hardening of how Deis actually works in the real world is something we take very seriously.

Now, some potential gotchas. We've noticed nf_conntrack can be a problem at high scale. nf_conntrack is essentially the Linux kernel doing network connection tracking for things like iptables; there is a limit to how many active connections can be tracked, and oftentimes you're going to need to bump that tunable. We bump it by default on the CoreOS boxes we ship, but you're going to want to watch out for it. Another is long-lived sessions. I had a conversation last night with some folks on the NGINX team: if you're not careful with your TCP sessions, if you leave TCP connections open, you can actually leak workers when using the NGINX soft reload feature, because old workers can't exit until their connections close. So if you're using soft reloads, you really need to make sure connections get closed every once in a while, or time out due to inactivity; otherwise it can be a big problem. And one of the biggest things that can bite you with confd and NGINX is inconsistent, or more specifically non-idempotent, template output. If every single run of confd results in a slightly different ordering of the config file, confd is going to SIGHUP the NGINX process when nothing actually changed; you're just iterating over a map that's sorted differently every time, and that results in SIGHUPing everything way too often. So if you're going to use this approach, make sure your template output is consistent every time, and watch those NGINX logs.

I've got a few minutes left here, and I want to talk a little about how Deis scales more broadly. Cluster schedulers are a thing; this is how distributed systems work now.
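The non-idempotent-output gotcha from a moment ago is easy to avoid: sort whatever you iterate over before templating. A toy illustration of the principle (this is not the actual Deis template; the backend addresses are made up):

```shell
# render an upstream server list from an unordered set of backends;
# sorting first makes repeated runs byte-identical, so a confd-style
# watcher never reloads NGINX when nothing has really changed
render() {
  printf '%s\n' "$@" | sort | while read -r backend; do
    echo "server ${backend};"
  done
}

render 10.0.0.2:8000 10.0.0.1:8000 > run1.conf
render 10.0.0.1:8000 10.0.0.2:8000 > run2.conf
cmp -s run1.conf run2.conf && echo "identical output"
```

Both runs emit the same two `server` lines in the same order, so the final comparison succeeds and prints "identical output".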
You can generally recognize a cluster scheduler by a few common patterns. They typically have declarative interfaces: "I want to run this many of these things." They have placement constraints: "I want these workloads to run on these hosts." They feature things like affinities and anti-affinities as well, and interfaces for service jobs and batch jobs; a batch job is a job that runs to completion and returns an exit code, while service jobs are long-running daemons. For Deis, it was really important that we choose a scheduler that is resource-aware and has host constraints, because we want to pin things to specific hosts: for example, NGINX should be on these three boxes that are connected to a special network. We also wanted co-scheduling, soft anti-affinity to spread out workloads for highly available placement, and global scheduling, which is the ability to schedule something and have it run on all boxes; that's a very useful pattern for management and agent-style architectures.

There are a few options out there that support scheduling Docker natively. Fleet, obviously, is what we use today; Docker Swarm is making some good strides; Mesos is obviously the battle-tested solution; and then there's Kubernetes. What we found is that Kubernetes strikes the right balance: it's lightweight while also providing a good, broad feature set that meets our selection criteria. So I want to talk for a moment about Kubernetes. How many in the audience are familiar with Kubernetes? How many of you have used it? Okay, a good amount of you. What is Kubernetes? It's an open-source system for managing containerized apps, with basic mechanisms for deployment, maintenance, and scaling; that's direct from their README. Kubernetes is a really useful building block for building higher-order orchestration systems, and that's what we're using it for. It features some core concepts, and I'm not going to go through these in great detail.
A cluster is a pool of nodes. Pods are a grouping of containers that get scheduled together across a distributed system. A replication controller says: here's a spec for a container, and I want a certain number of copies of it spread across the cluster. Services are what you expose inside the cluster for intra-cluster communication and service discovery. And labels are a way of loosely organizing metadata inside the cluster. Some very powerful concepts.

Now, some of what makes Kubernetes unique. Unlike Docker Swarm, for example, and also unlike Mesos, Kubernetes features shared namespaces: when you run containers inside Kubernetes, it creates a shared namespace, a pod, in which multiple containers run, and all of those containers share a network namespace, volumes, secrets, things like that. It's a very different model, and a very powerful one, for things like atomic scheduling. There's also the pod and service networking model. For those of you who may have come across Kubernetes in the past, one of the things that immediately turns folks off is: wait, I need to run an overlay network to make this thing work, or I need to assign a subnet to every host in the cluster. Well, there's a reason for that. The reason is that every pod that gets scheduled onto a Kubernetes host gets its own IP address, which makes it uniquely routable across the cluster. That's a really powerful construct, because you get to move away from port brokering: that world where you dynamically expose a port that gets mapped and late-bound, so your NGINX web server ends up bound to 49153 and you have to figure out, well, that's the port, but really what I wanted was port 80, and how do I publish this new weird port to everything that needs to know about it? If the container has its own namespace and binds its own address, it can listen on whatever port it wants. That dramatically simplifies service discovery inside the cluster, allows for things like DNS-based service discovery, and everything just kind of works. You do pay a complexity price on platforms that don't support an IP per container natively, but most cloud providers are getting around to this, and if you control your own gear it isn't a problem; there are also solutions like Weave, flannel, and Project Calico that address this. The biggest thing that makes Kubernetes unique, though, is the Borg and Omega team. It's very clear to someone like myself, who has done a good deal of research on cluster scheduling, that the folks who actually worked on Borg and Omega are the ones working on Kubernetes.
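The core concepts he lists map onto manifests like the following minimal sketch (the names, image, and ports are illustrative; this uses the v1 ReplicationController and Service shapes from the era of the talk, before Deployments existed):

```text
# rc.yaml: run 2 copies of a pod, matched by label
apiVersion: v1
kind: ReplicationController
metadata:
  name: go-app
spec:
  replicas: 2
  selector:
    app: go-app
  template:
    metadata:
      labels:
        app: go-app
    spec:
      containers:
      - name: web
        image: registry.local:5000/go-app:v23
        ports:
        - containerPort: 5000
---
# service.yaml: a stable cluster-internal address for those pods
apiVersion: v1
kind: Service
metadata:
  name: go-app
spec:
  selector:
    app: go-app
  ports:
  - port: 80
    targetPort: 5000
```

The label selector is the glue: the replication controller keeps two pods with `app: go-app` running, and the service routes traffic to whichever of them are alive.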
You see the reconciliation pattern, applying a desired state against current state; the whole idea of not using port brokering was a lesson they learned inside Google and applied to Kubernetes, which is probably best viewed as a rewrite of Borg, their internal cluster manager. So it's a very unique project, and there's a lot to like about it.

So why would you use Deis and Kubernetes together? That's a really interesting question, and I want to walk you through a quick example of deploying a Meteor app using raw Kubernetes. It kind of starts like this. Step one: you build a Docker image. This assumes your developers have Docker loaded on their laptops and the Docker engine running, that they know how to build a Docker image, and that you have Dockerfiles for everything; that's not a given. Step two: you've got to ship the image; you've got to move it to a registry. One of the things you quickly learn if you're operating these systems in production is that you don't actually want to go out to Docker Hub or Quay or some other third party to pull your images. You need a local registry, a cache of your images local to the cluster; otherwise everything is going to be too slow. So you've got to stand up a registry; it's kind of tricky to stand one up and maintain it, but it's doable. And the last thing: you've got to run the application. Build, ship, run. With raw Kubernetes, you create a replication controller, which is how many of these things you want stamped out, and you create a service, which is how you want to expose it. Those commands are pretty simple, but to run them you actually have to write out the definition files. I was in a talk Kelsey was giving earlier, and someone asked how you actually write these files, and I guess the answer was, more or less,
You've got to be extremely careful with this stuff, and it's not just the controller, it's the service JSON too. By the way, these are the JSON representations; they're easier to write in YAML, so you don't have to worry about the commas and braces, but there's still a lot of room for human error in writing this stuff. So raw Kubernetes can be a little frustrating if you're trying to bring it to a software team and say, "Hey, write this kind of arcane syntax." I expect that will get easier over time and be less of a problem, but right now Deis offers a pretty good answer for this: a Heroku-style CLI that actually translates all of this into those manifest files. You get all the benefits of Kubernetes' declarative orchestration, but an imperative workflow for developers who are more comfortable with that. So we feature a Heroku-style CLI experience; a build server that allows you to do git-push deploys, though those are optional, since you can also ship raw Docker images directly into the platform if you have a CI/CD process; cluster-local artifact storage backed by Ceph on the cluster, which is extremely important for real-world deployments; and an NGINX router mesh.

The router mesh features things like custom CNAMEs and TLS, which are extremely important for real-world deployments, as well as a whole complement of operations-ready features: log routing and aggregation, configurable syslog drains via one command, user management with LDAP and Active Directory integration, all the bells and whistles you need to actually put this thing into production. So steps one through three with Deis are: `deis create meteor`; `git push deis master`, which uses the buildpack to build everything and ship it to the local store; and then `deis scale web=2`, because we actually don't support scale-on-deploy, although I suppose we could. This is a lot more palatable to most folks. I was going to do a demo, but we're getting out of time here, so I'd actually like to talk just quickly about a couple of other things.

What's next for the Deis project? The biggest thing we're doing: right now, Deis actually sits outside of Kubernetes. Deis itself runs on Fleet on CoreOS and schedules out to an external Kubernetes cluster; that's also how we support things like Mesos and Docker Swarm. We hadn't, essentially, picked an orchestration system to replatform on. Well, guess what, we picked: we're going to be replatforming on Kubernetes, rebasing the entire system. There's a branch you can see on GitHub now with Deis itself running on top of Kubernetes and scheduling back to Kubernetes, which is a really nice way to run, for example, on GKE, or on CoreOS on AWS. Some of the things we're working on: an NGINX-Kubernetes load balancer integration. There's active work going on in the Kubernetes community to make load balancing a first-class citizen in the platform, and we are helping drive that effort inside the Kubernetes community to make sure that NGINX works extremely well as an edge load balancer in the Kubernetes world. Now, one of the things we're exploring, we haven't yet decided: in a perfect world, NGINX would be completely API-driven for configuration.
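For contrast, here is a rough sketch of what one of those tunables, HSTS, looks like today as static NGINX configuration rather than an API call (server name, certificate paths, and upstream address are hypothetical):

```nginx
upstream app_backend {
    server 10.2.1.5:5000;   # a backend container's address
}

server {
    listen 443 ssl;
    server_name app.example.com;
    ssl_certificate     /etc/nginx/ssl/app.crt;
    ssl_certificate_key /etc/nginx/ssl/app.key;

    # HSTS: tell browsers to use HTTPS for the next year
    add_header Strict-Transport-Security "max-age=31536000" always;

    location / {
        proxy_pass http://app_backend;
    }
}
```

Every change to a file like this means rewriting config and reloading NGINX, which is exactly the kind of churn an API-driven model would remove.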
You'd be able to create new upstreams, and all those crazy tunables I showed you, like turning on HSTS, would be an API call, right? But we're not in that perfect world, and I understand it takes a while to get there. So in the meantime, what we're looking at is whether there's a way we can do some Kubernetes-native backends in memory via some kind of third-party modules. If any of you are working on Kubernetes and are interested in this effort, please come reach out to me; we're looking to get a group of people together to help make NGINX plus Kubernetes an extremely good fit.

So, the last thing I'm going to talk about here: we have an open roadmap and planning process that is kind of unique, as far as I can tell. We have a meeting once a month where we basically share the roadmap with everyone and walk through what people are looking for. We typically have very high turnout on these calls, and it's a good chance for folks to voice their concerns about where the project is going, and push their pet pull requests, things like that. That's the first Thursday of every month. So with that, I want to thank you all for having me here today, thanks everyone, and I'll open it up for questions. Anyone have any questions?

Okay, the question is whether the shift to Kubernetes means dropping the Mesos support, and the answer is no. In fact, we're working very closely with the folks over at Mesosphere. The only difference is that in order for us to get a really solid Mesos integration, we would have to write our own framework; it turns out Marathon does too much, basically, for Deis. Well, guess what, there's a really great framework out there that does exactly what Deis needs: it's called the Kubernetes framework. So we're just going to build on top of that and help contribute to that effort going forward. We're also working with the folks over there on a DC/OS plugin, so getting Deis up and running on DC/OS can be
as simple as `dcos install deis`. Great. Any other questions? No? All right, thanks everyone.