Workflow on Fortnite | Unreal Fest Europe 2019 | Unreal Engine

>>Ben Marsh: Good morning, everyone. Thanks for coming. My name is Ben Marsh, and I am the Lead on the Developer Tools Team at Epic. We own things like Unreal Build Tool, Automation Tool, the packaging pipeline, and the internal build infrastructure at Epic: all the really glamorous stuff. Today, I am going to talk about a lot of the things which are an afterthought when you are starting out a project with UE4. You have a bunch of ideas, hopefully a team that is enthusiastic and ready to go, and then you have to figure out how to get everybody to sit down and work together: the logistics of actually getting them to work together. If you are really lucky, you get to iterate on that over a long period of time and figure out organically all the things that work well for you as a team. If you are unlucky, you pick something early on and then struggle to fix it later, because you do not have the time or resources to devote to it. There are lots of things that go into deciding which solution is best for you. There is the culture of your studio, whether you are scrappy or risk-averse, and whatever tribal knowledge people bring from the previous project they worked on, what worked and what did not. There is whether you are working on a boxed product with a one-off release or making a live game that you need to sustain over time. And there is whether you know what the game is from the start, with a hard content pipeline that you need to get up and running quickly, or whether you need to iterate to find the fun. I am going to talk about some of the core workflows that we have developed for Fortnite, though a lot of this has extended to other projects we have done at Epic. Hopefully, as subjective as it is, there might be some things which are useful for you as well. I am going to break it down into three sections. First of all, I am going to talk about how we distribute the Editor, how we get the
Editor to our artists and designers, and introduce a tool that we use called Unreal Game Sync. Then I am going to talk about our branching model and how we handle multiple releases being in flight at once on a live game. Finally, I am going to talk about a few things that we have developed as best practices for iterating on Fortnite.

First up, distributing the Editor. Back in the UE3 days, and in the early days of Fortnite, we had a process which we used to call the Editor Promotion Pipeline. We would have our build machine sync down the latest code, update a header file with the changelist number that it synced, compile the binaries, and then submit them to Perforce. Our QA team would sync those down and test them. If everything went well, they would apply a Perforce label to them, and our artists could sync that down using a little desktop tool that we made for them called Unreal Sync.

I just want to dwell a moment on that bit at the start, where we update the version number. That is not just a cosmetic thing; it is actually really important for avoiding data loss. Unreal's property serialization is really useful for rolling development, where you can modify class layouts whenever you want to and do not really have to worry too much about upgrade paths. If you have added a property to a class and an Asset was saved out without it, it will just be initialized to its default value on load. If you remove a property, the Engine will just skip over it in any data it loads. That can cause a problem if you load a new Asset with an old Editor: it won't understand some of the properties that were serialized out to it, so it is going to discard them. To avoid that, we compile the version, the changelist number, into the Editor, and we can tell the Editor to refuse to load anything that was created in a newer version than itself.

This is what the cycle would look like on that pipeline. We would have a build, we would test it, then the QA department
would bug it, and it would go into the triage queue in our bug-tracking software. Hopefully, the lead of the team it went to would triage it quickly; they wouldn't be in a meeting, they wouldn't be out to lunch. Then it would go to a developer on that team, hopefully the right person straightaway. They would try to reproduce the bug and then check in a fix for it. We really struggled to make this scale to Fortnite. As the size of the Engine and the number of people working on the project grew, the amount of surface area there was to test in the Engine kept increasing, and the likelihood of finding a bug got higher and higher. Once we completed one of those cycles, which at best would take half a day and at worst could take a day or two, we would have to run the cycle again. By the second time around, there was a good chance that somebody else had submitted new changes which had broken something.
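The version gate mentioned earlier, compiling the changelist number into the Editor so it refuses Assets saved by a newer build, can be sketched in plain C++. This is an illustrative sketch only; the type and function names here are made up and are not Unreal's actual serialization API.

```cpp
#include <cassert>

// Hypothetical asset header: the changelist number that was baked into
// the editor binary which saved this asset.
struct AssetHeader
{
    int SavedByChangelist;
};

// Sketch of the gate: an editor compiled at a given changelist refuses
// anything saved by a newer editor, so properties serialized by newer
// class layouts are never silently discarded on load.
class EditorVersionGate
{
public:
    explicit EditorVersionGate(int InCompiledChangelist)
        : CompiledChangelist(InCompiledChangelist) {}

    // Returns false when the asset was written by a newer editor than
    // this one; loading it would drop the properties we don't know about.
    bool CanLoad(const AssetHeader& Header) const
    {
        return Header.SavedByChangelist <= CompiledChangelist;
    }

private:
    int CompiledChangelist;
};
```

So an editor built at changelist 4000 would load an Asset saved at 3999 or 4000, but refuse one saved at 4001, which is why the build machine updates the version header before compiling rather than as a cosmetic step.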

We would just go round and round and round. We were getting to the point where it could be up to a week before we could get promoted binaries to our artists. Back in the early days of Fortnite, when we were dealing with this, we were still struggling to figure out exactly what the project was going to be, and that iterative loop between gameplay engineering and design was really important. Blueprints sort of enforce that mentality, that you have to work in lockstep together to figure things out, so this delay was really hurting us and slowing down our progress on the project.

That was not the only thing, though. Checking binaries for artists into Perforce was creating problems for our engineers, because they were syncing them down and getting mismatches between binaries they had built themselves and binaries they were syncing down from the build system. Sometimes they would try to mask that out with their client spec in Perforce; sometimes they got it right, sometimes they got it wrong. When you end up with a mismatch, if you are lucky, you get an unhelpful dialog box from Windows saying that it cannot find a symbol or something like that. If you are unlucky, everything looks like it works fine, and then you end up with some random memory corruption, because class layouts are different in different modules. What's more, it meant that engineers could not check in any content. If they needed to change content, they would have to check in their code changes, wait for an Editor to be promoted, and then make the content changes. If they tried to check in things they had made with their local build of the Editor, those would be un-versioned, and any artist syncing them down with the promoted version of the Editor could wipe out any properties they had added.

While we were trying to figure out how to solve the promotion problems for Fortnite, we had a small project starting up called Battle Breakers. It
was a team of 10 people, a mixture of artists, designers, and engineers, all sitting in the same room. They were really passionate about their project, they really wanted to make it succeed, and they did not care about process and things getting in their way. This whole Promotion Pipeline did not work very well for them at all. They decided to solve it by just turning everybody into an engineer. They gave everybody a copy of Visual Studio Express and taught them how to sync from Perforce, generate project files, load up the solution in the IDE, and build their own Editor. It worked for them; in fact, it worked really well for them. If they ended up with a problem, a compile error or a bug in the Editor or a crash or something like that, they just turned around to the engineer sitting next to them and asked them to have a look. That was not such a big deal for the engineer, because they already had Visual Studio installed and full symbols; they could debug it there and then and get the artist back on their way. It also had a really positive side effect that we didn't anticipate: by making communication between artists, designers, and engineers the norm and part of their daily workflow, they would also get a lot more feedback on things, which really contributed towards improving the product or just hardening the Editor in general. Any workflow features they wanted, they had an avenue to communicate.

We thought, could we scale something like this up to working on Fortnite? We went into this with a bunch of concerns about how to get it to work. The first one was stability. Obviously, stability is going to get worse, because nobody is checking the Editor anymore. But who is that really going to affect?
Artists are usually relying on core workflows that are part of the Engine, which gameplay engineers are not changing too much, so on a game project those are probably going to be pretty stable. For designers, there is definitely a tradeoff there, but it was one they were willing to make, sacrificing a bit of stability for productivity. There was also a case to be made that designers are testing the code paths most relevant to the game anyway. QA might be bugging things from a test plan that was decided six months ago; designers are always on the cutting edge, trying the things that are actually in the game right now. We were worried about data integrity as well. There was every possibility that somebody could check in a change that would just format somebody's hard drive or something like that. We had to look back at some of the issues we had seen before, where somebody had checked something in that caused Assets to be corrupted. We did not want anybody to lose work. We figured that that sort of content corruption was actually really rare, and it was usually an insidious kind of thing that people in QA did not see when they were running their test plans. There would be a lot more versions of the Editor in the wild if we did something like this. People could build their Editor at any arbitrary change, so that problem with versioning for property serialization was still something we were quite concerned about. Finally, compile times: nobody really wants to compile unless they have to, but we figured that it might be worth trying.

There were still the practical matters of getting all this stuff to work. We figured that we did not want our artists having to load up Visual Studio and be faced with all of the toolbars and menus and the ability to do the wrong thing; we wanted a one-click workflow that would work for them. We also knew that, because we did not have everybody sitting in the same room, we would need a way to surface information so that people could see and communicate issues across the team.

This is what we came up with: our prototype of a tool called Unreal Game Sync. It is a little bit of a noisy screenshot, but you can get the basic idea of how it works. At the top, you select a project on your hard drive; it detects the Perforce settings that go along with that project, whatever workspace you're using and so on, and then fetches a list of all the changes being checked in that affect that project. You can select anything from that list and double-click on it, and it will sync it down, compile it, and launch the Editor. If it compiles successfully, it sends a little event to a database, via a web service, and that shows up as a little green dot for you and for everybody else on the project. If it fails, it comes up as a red dot. If it does fail, or there is some problem with the Editor, such as a crash at startup or some blocking issue, you can come back to the tool later, right-click on one of the changes, and mark it as bad. You can also leave a comment explaining why you did so, to give somebody some idea of where to start looking for the problem. Engineers would
use the same tool as well to sync down the code. They do not have to compile through it if they do not want to; they can just compile through Visual Studio, but the tool takes care of updating all the versioning information for them. If they are investigating an issue that somebody else has flagged in the build, they can mark the build as being under investigation, and that shows up as a message in the column on the right, saying that that person is investigating the build, to discourage other people from syncing there.

This all felt like a really bold experiment. Fortnite was not a small team at the time, and getting it to work would require quite a culture shift. But we figured that we did not have much to lose: the process we had at the time was pretty slow and cumbersome, so we decided to give it a go. One weekend, IT installed Visual Studio Express on all of our artists' machines, and we came in on Monday morning and just started using it. Somewhat surprisingly, it was actually really successful. We got a lot of feedback on it, and we iterated on the tool quite a lot in the early days. One of the first features we added was the ability to set up a schedule, so that you could sync down Perforce overnight and make a build from it, and when you got into the office in the morning there would already be an Editor waiting for you. It very quickly became a staple of our workflow. We regarded it as a big win for Fortnite, and it really helped us get back to the rapid iteration cycle that we wanted.

That was four years ago, and we are still using Unreal Game Sync now. This is what it looks like today: pretty much the same general look. We polished it a bit and added a few extra features. Now you can have multiple tabs, so you can have different projects open at the same time. There is a big status panel at the top which shows you what you are
synced to, allows you to open some really useful tools, and also shows you things like which SDKs are required in that branch. It also allows you to switch between streams. Perforce's fast stream switching has a bit of a bad reputation because of how easy it is to do incorrectly, but exposing it through a tool like this allowed us to hide all the nuances and do it reliably. The main panel is pretty much the same, but the obvious difference is all of these badges on the right, these green badges. Fortunately, they are green, because it is a good day today. These are results from our build system. We realized that having a tool like this that everyone was running made it a great choice for surfacing information about the state of the build, notifications from the build system, and so on. Those badges are really easy to add and completely customizable. There is a little command-line tool called PostBadgeStatus that you can pass a bunch of arguments: the name that you want to appear on the label, which changelist it is for, whether it succeeded or failed or produced a warning, and a URL to open if the user clicks on it. We usually use that URL to take you to a build log or something like that, so you can investigate further. Because Unreal Game Sync is also polling in the background for changes checked into the branch, it knows when you have checked something in. If one of those badges turns from green to red, it can give you a little desktop notification telling you that you might have broken the build.

We put a lot of attention into what we build as part of the continuous build cycle, to get good feedback on those badges. This is what we build nowadays on Fortnite. Every 10 minutes, a build starts up that compiles the Editor incrementally on one machine. It is fast because it is only compiling the changes. It is built in non-unity mode, so any CPP change only has to build that one file; it does not have to rebuild a whole unity blob. That also means we catch things like missing headers. Once we have a compiled Editor, we run a little Commandlet which just loads up any content that has changed since the last time it was run. It does not do anything with it; it just loads it. We found that when we were making packaged builds of the game, the most common source of errors was artists who had forgotten to submit a piece of content. Just loading it up causes the Editor to output errors and warnings if something is missing, and we can surface that through a badge in Unreal Game Sync. The next thing we do is run a quick automated test that spawns the Editor twice, once with the -game argument and once with the -server argument. It checks that a client and server can talk to each other, checks that the login flow and matchmaking process work correctly, checks that you can make it into a match,
and then checks the four basic actions that you can do in Fortnite: that you can move, that you can shoot, that you can build, and that you can harvest. We have a second machine which incrementally compiles the game for all of our target platforms. Those are just two machines running all of those tests, and they are of massive, massive value to us: a really nice balance between getting quick feedback and getting broad coverage.

Paragon was the next project to use Unreal Game Sync. Paragon ramped up really quickly and had a really large art team, and a really vocal one as well. They did not have the history we had on Fortnite of iterating over a long period of time; they just wanted to get stuff done. They did not see why they had to compile stuff at their desks, which is a reasonable thing to complain about. They wanted to go back to a model where binaries for the Editor were built in some centralized location and distributed out from there. We did not want to go back to checking binaries into the source tree, with all the problems we had had with churn and engineers syncing binaries and getting mismatches locally. We settled on a system where, using the binaries we were already generating as part of continuous integration, we would just zip them up and submit them to a location outside the branch. Unreal Game Sync could then, if the user chose to enable them, fetch those binaries without having to create a workspace for them; you could just get that one file and use the description on that revision to match it up to the source changes it corresponded to. That was a really nice solution, because it allowed us to get all of our departments using exactly the same tool for syncing. For people using precompiled binaries, it would just show changes which did not have
matching binaries grayed out, and everybody else could sync and compile whatever they liked.

A quick tour of some of the other features we have in Unreal Game Sync. It was frustrating when we switched from old-style Perforce workspaces to streams that we could not customize what everybody was syncing. When people are working from home or have a poor connection, syncing down lots of stuff that they do not care about is a waste of bandwidth and makes syncs take a really long time. At first, we tried to solve that by creating virtual streams. That works okay if you just have one or two of them, but we were finding that the number of permutations that people would want was getting out of hand. The number of branches was increasing as well, so people would want their own personal customizations on every single branch. We decided to sidestep Perforce and do all of the filtering on the client side instead. Since everybody was syncing through one tool, we could build it into that. It also allowed us to create a much nicer user interface around it: we can set up some broad, high-level categories that people can choose from, like an a la carte menu of things they want to sync. They can exclude certain platforms they do not care about; maybe they do not want cinematics content, or they do not want localized Assets, and they can do all of it from inside this panel.

We also added this Clean Workspace tool, which serves two purposes. First, it cleans out any intermediate files that you have. The directory layout we have in UE4 means that things like Plugins and game projects can be scattered all over the source tree, and something like this lets us clean up those intermediate files. It also allows us to check that you have not deleted any files that are in Perforce, or even made them writable. It gives you a nice tree view where you can see everything it finds, and it will only select things which are known intermediates by default, so it is not accidentally going to delete any work that you have created and just forgotten to check in. Unlike the p4 clean command, this only checks whether the files exist and the attributes on them, whether they are read-only or not, so it can run really, really quickly: it takes about 10 seconds to come up and compare the whole workspace.

Our QA department uses Unreal Game Sync as well. We added this feature for them so that they can regress a bug. If you hold down the shift key, select a start changelist and an end changelist, and then right-click, you can go into this Bisect mode, which filters down the
view to just the changes you have selected, and then you can mark individual changes as good or bad. The sync button at the top changes to say "sync next," and it will sync to the middle of the range between the last good and bad change, letting you do a binary search for the change that introduced a bug.

Finally, we added a lot of little customizations that were useful for our own projects. This is what a recent Fortnite release branch looks like. You can see it has the Fortnite logo there, and we also color the status panel red so that people know they should not be working in there, because the branch is locked down. There is a little message-of-the-day feature underneath the status panel, which usually gives you some information about the dates we are going to branch. It also allows you to get a link to your list of bugs in our bug database, that kind of thing. I have grayed out all of the change descriptions, but next to them you can see these little buttons. We have a Perforce trigger that requires everybody to include a bug number in every change they submit. In Unreal Game Sync, we can parse that out and create these little badges for it. If you click on one, it will take you to that particular bug in our bug-tracking software. All of this stuff is configured through Config files. We do not have any Epic-internal special version of Unreal Game Sync.
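Returning to the Bisect mode for a moment, the "sync next" step is a standard binary search over an ordered list of changelists. Here is a minimal sketch, not UGS's actual implementation, where the hypothetical `IsGood` callback stands in for the human verdict after syncing and testing a build:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Given changelists in submission order, where the first is known good
// and the last is known bad, repeatedly "sync" to the midpoint of the
// remaining range until the change that introduced the bug is isolated.
int FindFirstBadChange(const std::vector<int>& Changelists,
                       const std::function<bool(int)>& IsGood)
{
    // Invariant: Changelists[Lo] is good, Changelists[Hi] is bad.
    std::size_t Lo = 0;
    std::size_t Hi = Changelists.size() - 1;
    while (Hi - Lo > 1)
    {
        std::size_t Mid = Lo + (Hi - Lo) / 2; // the "sync next" target
        if (IsGood(Changelists[Mid]))
            Lo = Mid;
        else
            Hi = Mid;
    }
    return Changelists[Hi]; // first change known to be bad
}
```

Each iteration halves the range, which is why regressing a bug across hundreds of changes only takes a handful of sync-and-test cycles.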
It is all driven by Config files that are checked into the branch, so any project can do whatever they like with it. The latest source code for Unreal Game Sync is in Perforce and on GitHub. If you are interested in giving it a try, I thoroughly recommend that you do so. It is not locked to anything in the Engine itself; in fact, we distribute it as a tool outside the Engine, so it is not synced down as part of the branch. If you want to try it, you can always go to the latest UE4 release and take a look at it. There is more information about setting it up on our Docs site.

Next up, branching. The Engine team was the first team at Epic to try out Perforce streams, and we adopted a lot of the best practices from the Perforce documentation. We have a branch hierarchy where more stable changes exist at the top and unstable changes and development work go on at the bottom, and changes merge down. Whenever we have done some work in a development stream and are ready to copy it up for a release, our QA department tests it and then we do a copy into main. We try to keep main pretty stable. Whenever we are coming up to a release and want to stabilize it, we create a new release branch by branching off main. This kind of model works really well for code, but it works absolutely abysmally for content, because you cannot merge Assets.

For Fortnite, when we switched to streams, we started off with a model like this. We would have three branches: the mainline branch where most people would work, a release-next branch which was used for stabilizing the next release, and a release-live branch which reflected what was currently live. If we wanted to do any hotfixes, we could do them in the release-live branch, and release-next gave us a bit of a window to stabilize before the next release. By reducing the number of branches, we reduced the number of places you can create conflicts. But this model was not without its problems. First of all, having only one release branch makes it very easy to get pipeline bubbles. If for some reason the build in release-next gets held up by a few days and cannot be released live, because we find a bug or something like that, then we also cannot vacate release-next and copy up from main. There is a big source of confusion there, because people never know quite what work is in which branch. They do not know where to do their own work; they would have to really track the release schedule, and any notices or emails that come through about delays, to understand where they were supposed to be working. There is also a weird artifact in that we never know quite which build we are going to release until it has been approved by QA. We are constantly testing builds made out of release-next, and when QA finally gives one the thumbs up, we copy it to release-live and release the build that was made in release-next. That means the first time we actually come to do a hotfix in release-live is the first time we have exercised all the code paths of making that build from that branch on our build system, and that is not the time when you want any kind of delays. After a while, we switched to this kind of model, and this is what we are
using nowadays. We create a dead-end branch for each release, numbered so that people know exactly what it is, and artists use fast stream switching to switch between them. We have a major release coming out every two weeks or so, and we figure it takes us about three weeks to stabilize each one; sometimes it is a little longer for the first release in a season. This model gives us lots of flexibility for creating as many out-of-band releases as we need, or special releases that require extra time to bring in to land.

We still have plenty of problems with conflicts being created in release branches, though. We used to have a situation where an engineer would get the unlucky job of merging down from a release branch at the end of the day, and there would be all sorts of conflicts they would have to figure out how to resolve. They would reach out to people and try to negotiate which way things should be resolved. But more often than not, that resolution would involve accepting one of the changes and then redoing some work, and until that was done, what was actually in the branch was broken. We made a tool which we call Robomerge that automatically merges things down from release branches into more unstable branches. It does it changelist by changelist, so if there is any conflict, it knows exactly who to blame: the person that submitted that change. It also does everything in order, so it does not cause conflicts later on because one change has been accepted over another.

Robomerge really helped us identify conflicts quickly, but it did not do much to stop us from creating conflicts in the first place. We added functionality to the Content Browser which allows us to check the state of Assets in multiple branches at the same time. In the screenshot on the left, you can see it showing an Asset that is checked
out by somebody in another branch. In the screenshot on the right, it is showing that somebody has already made changes to that Asset in a different branch. Because it knows that changes in a release branch are going to be merged down into the main branch, it can check whether somebody has already changed the Asset in main; if they have, that is a conflict waiting to happen. In both cases, it will give you a big warning dialog saying, you are going to screw somebody over; do not do this unless you really know what you are doing. Sometimes it is necessary, for fixes going into an existing release that you do not mind dead-ending: you do not care about it being merged down; you just need to fix it in live. But usually it is something you want to try to avoid. That functionality is in the base Engine, but it has to be set up by code. This is what the code to do that looks like. The key function there is ISourceControlProvider::RegisterStateBranches(). We have a Fortnite Editor Engine class, and we call it from the Init function of that; you just call RegisterStateBranches with a list of branch names and a relative path to the content folder within them. Doing it from code is really convenient, because it lets us determine the branch topology dynamically. In our case, because we have Robomerge set up, rather than hard-coding a list of branch names, we just read the Robomerge Config files. Whenever we create a new release branch, we only have to update it in one place.

After spending a while talking about how horrible binary Assets are for us, it would be remiss of me not to talk a little about text-based Assets. We have been working on this for a while, on and off, and we have made progress on it. There are traces of it in the Engine today, but it is still not quite ready for primetime. It is a pretty gnarly thing to retrofit, because there is so much serialization code already in the Engine. But we have a plan for it, and we think it is going to scale up and allow us to create a good upgrade path. The idea is that we create a new archive type, which we are calling FStructuredArchive, which allows serialization code to add additional annotations for structure and the naming of properties. Creating a separate archive type means that we can change the interface a bit, make it so the compiler can statically check things, and put some validation code inside the archive, before anything reaches the underlying archive, to check that, for example, if you are starting an Array, you are also finishing the Array and everything is scoped correctly. The goal is to have that archive be able to write out to three different backends. The first would be raw packages, exactly the same packages we have today; the intent is to make that completely compatible with the current binary format of packages, so that you can load existing Assets without any problem. The other
backend would obviously be a text file format. The third would be a kind of intermediate format to help the Editor start quickly: a binary version of the text-based Asset that retains all of the annotations, unlike the raw package format. That is useful for things like the Asset Registry, which needs to scan all Assets and pull out metadata about the import tables and so on; we can precompute those and cache them. The idea is to make all of this structured archive stuff completely optional, so that you can compile it out in your finished game and do not have to pay any of the overhead of tracking this metadata.

One of the things we have learned already, though, is that just because it is text does not necessarily mean it is mergeable. Obviously, things like Textures stored in a text-based format are not something you expect to merge. But some of the highest-traffic types of Assets we have right now are Blueprints and UMG Widgets, and those are really difficult to merge anyway. Because they are node networks, it is very easy to change the topology just by creating a new connection, and when you serialize that out, it results in everything moving around across the file. It is also really easy to select a bunch of nodes, move them right by 10 pixels, and change hundreds of places in the file. We do not expect text-based Assets to be a panacea for merging, but we do think that having visibility into what is in an Asset is going to be useful, and by improving our data structures over time, we are going to make a lot of that easier. It is going to require a lot of work and a lot of changes to existing systems, though. We have got to start somewhere.

The last thing I wanted to talk about is iteration, some of our best practices for iterating. This is a little more scattershot than the other two categories I talked about, but I think some of
it is valuable, and it covers things you might not otherwise find out about.

First, on Fortnite, we invest heavily in working in Play-In-Editor. Most of our artists and designers work in Play-In-Editor, and so do most of our engineers. For a multiplayer game, it is quite convenient.

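Those multiplayer Play-In-Editor options can also be set outside the UI. As a hedged sketch (the section and property names here are assumed from UE4's `ULevelEditorPlaySettings` class and should be verified against your engine version), the equivalent entries in a project's `EditorPerProjectUserSettings.ini` might look like:

```ini
; Hedged sketch: multiplayer PIE settings as they might appear in
; Saved/Config/<Platform>/EditorPerProjectUserSettings.ini.
; Property names assumed from UE4's ULevelEditorPlaySettings -- verify for your version.
[/Script/UnrealEd.LevelEditorPlaySettings]
PlayNumberOfClients=3      ; spawn three PIE client windows
PlayNetDedicated=True      ; also launch a dedicated server for the session
```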
In the dropdown menu next to the Play button, you can choose how many clients you want to spawn and whether or not you want a dedicated server. We end up with most of our development work happening in the full Map; we have a few test Maps for isolated things we want to test, but most of the work happens in the full Map.

We have also made an effort to support full integration with our backend services through Play-In-Editor. This is a little tricky, because it means we have two separate code paths, but doing so means we can test anything in Play-In-Editor. We use real accounts, we can log in to the game, we can test the whole login flow and matchmaking system, and we can design all of our UI and test it right there and then. Worth mentioning as well: being able to run the Editor with -game or -server to use uncooked content is a super useful way to iterate on code without having to launch the full Editor.

Next up, the cooker, everyone's favorite waste of CPU cycles. The main loop in the cooker just consists of loading up packages, inlining derived data, and saving them out again. Derived data is the platform-specific version of, say, a Texture or a shader, whatever needs to be precomputed to run on that particular platform. Creating derived data is usually really slow; things like computing PVRTC Textures are incredibly slow. But it is also completely deterministic and very well defined as a transform: you know exactly what the inputs are and exactly what the outputs are. Our hardware is designed to decompress it, so the format is not going to change; it is usually stable over a long period of time. What we do with derived data is cache it. By default, it is cached in a local folder on your machine under the Engine folder, called DerivedDataCache. That means the next time you run the cooker, or even when you run the Editor, because running the Editor is running on your host
platform and we have to create platform-specific versions of PNG Textures for that, it will fetch them from the derived data cache.

If you are working on a team on a project, setting up a shared Derived Data Cache is really valuable. A shared Derived Data Cache is just a network folder that everybody can access. If something is in your local derived data cache, you get it from there; if not, you go and get it from the shared derived data cache; and if it is not there either, you do all of your teammates a favor by building it locally and then pushing it to both levels of the cache.

The one time we find derived data to be really volatile is shaders, because you can create new Materials, and shader compilers are upgraded often with SDKs and so on. Recompiling all the shaders for your game can take a really long time. For that, we use IncrediBuild XGE, the Dev Tools package; we can run the shader compiler through it and parallelize it out over a large number of machines. That is the main way the cooker runs in parallel. For most builds, especially builds where everything is already in the cache, it does not run very parallel at all; it is bound by CPU performance, by being able to load up packages and save them back out again. Loading and saving are done on the main thread, so if you are looking to buy hardware that is going to be good for cooking, you want fewer, faster cores rather than a large number of slower cores.

In terms of optimizing that, the obvious way is to just load and save fewer packages. For our games, we have a few special modes we can run the cooker in, depending on what you want to test, which filter out the packages the cooker is going to consider. You can do that by registering a cook modification delegate. The other thing that causes the cooker to load more than it needs to is if it has
loaded that package before but has already purged it from memory. The cooker is eventually going to go through every single package in your cooked game, and as part of doing that, it has to load up each one and resolve any references to other packages. References in Unreal are stored as Object pointers, so it actually has to instantiate all those other packages as well. We do not expect that many games will be able to fit in memory all at one time, so periodically we have to run a garbage collection pass, which will discard things. But there is a good chance that some of the things that are discarded
will also be referenced by other things in the future. To limit how often we run a garbage collection pass, you can set the MaxMemoryAllowance Config value in DefaultEngine.ini to something higher. It defaults to 16 GB, so as soon as you hit a 16 GB working set, it runs a garbage collection and gets rid of everything. If you have more memory on your build machines or your development machines, you should increase that to something much higher.

I would like to be standing here able to tell you that there is some secret magic trick I can exclusively reveal today which is going to make working on device with UE4 a lot easier than it is, but we do not really have one. It is pretty rough; we know about it, and we are working to improve it. There is just a lot of legacy there, and it is taking a while. One of the things we are experimenting with right now is what we are calling shared cooked builds. The idea is that you copy a build from a build machine that is already cooked and packaged; then you take just your local changes, your diffs from that build, cook those locally, and add them on top. That makes cook times a lot quicker.

One of the things we are having a lot of success with is Gauntlet. Gauntlet is a command that is part of AutomationTool which we use for running our automated tests. It actually does more than that; you can use it locally for test scenarios and to set up complicated deployments. For a defined scenario, it can spawn a server and four clients on different devices, communicate with them all, get them all to run a certain command, and then harvest data from all those devices, bring it back to the host machine, and generate reports. We use Gauntlet a lot for performance testing and especially for regression testing. We use replays to reproduce exactly the same gameplay scenarios from a live match over and over again, and we
can get a sense of whether performance is improving or worsening over time. Gauntlet is pretty badly documented right now, but we are hoping to have better documentation for it in 4.23.

We have had Hot Reload in UE4 for a long time, and it is great for marketing materials, but on real-world projects it falls a bit short. It makes some pretty big assumptions: that all of your game state is going to be captured by the Unreal reflection mechanism, so that it can load a second copy of your game DLL and transfer all that data over to it. That is not usually true for a big AAA project. As soon as you introduce a global variable or a native singleton or something like that, Hot Reload is kind of dead to you, unless you jump through some hoops to make it work.

A while ago, engineers on Fortnite started trying out a commercial product called Live++. Live++ works in quite a different, more fundamental way. You compile a bunch of Object files, and it will actually link those into the running process. That means that whenever it resolves a symbol to a function, if it does not have a new version of that function, it can call the original one that was in the process; if it has a reference to a global variable, it can use the initial one and its current value. Similarly, for any functions that already exist in the process, it can patch those, do a long jump to the Object file, and use the replacement versions. It is really quite awesome when you try it, quite a transformative tool. For our workflows, like I said, we work a lot in Play-In-Editor, which means you can load up the Editor once and keep a session running for hours, constantly making changes. We liked Live++ so much that we licensed it, and it is now part of UE4: we integrated it as a feature called Live Coding, and it is in the 4.22 release. It does not support layout changes right now, so you cannot change Class layouts, but that is something we are working on. Finally,
iterating in prod. There are always things that will escape the testing net, things you will only discover when the game gets into the hands of players and you are running at scale. That can be bugs; it can be exploits or glitches.

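One concrete defense against such issues is gating features behind Config values that can be overridden from the backend. As a purely illustrative sketch (the section and property names below are hypothetical examples, not real Fortnite settings), a hotfixed ini payload might look like:

```ini
; Hedged sketch of a hotfix payload that could be served from a backend.
; Section and property names are hypothetical, for illustration only.
[/Script/MyGame.MyTrapItem]
bEnabled=False            ; kill-switch for an exploitable feature
DamagePerSecond=25.0      ; tunable balance value
```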
It is quite difficult to deal with those things; it is always a scramble. One of our mandates for any new feature we add is that everything can be disabled by a Config file value. Not only disabled, but everything can be tuned by Config file values as well. It means that once we have our build locked down, we can still iterate on the settings. That is done with a class called OnlineHotfixManager. It requires an implementation of the IOnlineTitleFile interface, which is what provides the files from your backend service platform. The idea is that, in our case, we can upload a bunch of Config files that we want to patch the game with to our servers, and then at the start of a match, the game syncs them down and applies them there and then. Through that, you can change a bunch of things: you can change properties, console variables, or, if you have code which queries a property value every time, that will also pick up the update. You also get a notification callback if you want to update things based on those properties changing.

Thank you very much [Applause] ♫ Unreal logo music ♫